The promise of AI is that it will help us become more efficient at what we do. In many use cases, when it comes to processing vast volumes of data quickly, AI is making a strong case for itself. The problem is that while AI is good, it’s not perfect.
We live in such a tech-entrenched world that the default always favors technological advancement. There’s an assumption that if it’s high tech, it’s better. In many cases that may be true, but how often do we as humans stop and question whether it really is better for us?
If we’re training AI to make decisions on its own, can we trust it to make good ones? Even if it’s just delivering data to help us make better decisions, how can we be sure that the data isn’t biased, inaccurate, or incomplete?
We can’t. Bias and inaccuracies in AI are already being flagged, and many people and organizations are working to identify and reduce bias. But given the speed at which AI is being rolled out into everyday life, will these remedies arrive too late to matter?
The devil’s in the details – are we focusing on the right ones?
A major area of focus in improving AI is the accuracy of data and processing. While this can help, will it really solve the problem? Specifically, will better data lead to better decisions? Human history tells a different story.
Each individual makes decisions in a unique way, based on a multitude of factors. Even when the data is the same, the factors influencing how a decision is made rarely are. In fact, decision making is so complex that it spans nine distinct areas of decision science: economics, psychology, design, UX research, philosophy, neuroeconomics, behavioral economics, decision analysis, and experimental game theory.
Each of these raises questions that influence the direction of a decision, and any combination of factors may determine which has the highest priority or the most influence. So when we talk about making better decisions, the starting point is not necessarily data, or even how AI processes it. It’s understanding more about the immensely complex subject of decision science. To do that, we need to take a closer look in the mirror.
The nasty hack habit
As humans, we like to think we make decisions that optimize things, but what we’re really looking for is a hack. It’s not necessarily about making something better; it’s about what makes life easier, gets things done faster with less effort, or at lower cost. Let’s face it: thinking and making decisions takes effort – and we avoid it at all costs.
Yes, we label this with fancy words like optimization, but really it’s just a quick fix. And as long as our brains work that way, the shortcut will always be what we focus on. It’s this approach that leads to bias, oversight, and excluding data that doesn’t align with the answers we’re looking for.
When we’re looking to blame AI for bias or for clouding our judgement, we need to consider just how complex and flawed decision making is. Is it really AI influencing us, or is our approach to reviewing data the real stumbling block?