How do you make decisions? 

Even if you are the kind of person who is paralyzed by indecision when faced with too many menu options (we’ve all been there), you make decisions all the time. Sometimes you regret the decisions you make; at other times you’re very happy with them.

But how good is your decision-making, really?

For decades, psychological research has been telling us that we make plenty of errors in our decision-making, or, more specifically, that our decision-making is prone to cognitive biases. While the word “bias” means many things to many people, cognitive biases are generally seen as suboptimal blind spots that mess up our decision-making.

Nobel Prize-winning psychologist Daniel Kahneman outlined a series of cognitive biases caused by mental shortcuts that often lead us astray. The “availability heuristic”, for example, may mistakenly convince us that car travel is safer than flying. Thanks to the “anchoring effect”, a $100 pair of jeans might seem like a bargain if reduced from $200, even if they cost $2 to make. And the “representativeness heuristic” can mislead gamblers into thinking they are due a win following a string of statistically unrelated losses.

If you’ve been told to trust your intuition, Kahneman’s research shows you should actually…think twice.

In his view, biases are the result of intuitive, often unconscious and unreliable thought processes, which should ideally be corrected by analytical, deliberative and conscious ones. Borrowing a theory popular in various corners of psychological research, he talked about the former as System 1 processes and the latter as System 2 ones.

Think of System 1 as your inner impulsive teenager and System 2 as your inner responsible adult. 

Now that you know what biases are and where they come from, you may be wondering: what on earth does this have to do with AI?

Artificial intelligence and the corrective approach

While evidence of biases is piling up, it is also apparent that artificial intelligence (AI) technology is doing more than ever. It’s being deployed in healthcare and warfare; it’s scrutinizing your resume and judging your creditworthiness. 

Evidence for the claim that algorithms are more accurate than humans appears to be mounting. Back in 1997, Deep Blue defeated the world chess champion Garry Kasparov. Today, chess engines running on ordinary hardware play far beyond the level of any human. IBM’s Watson won on the quiz show Jeopardy! against two of its best contestants. Google’s AlphaGo beat a world champion at Go.

IBM’s Watson plays Jeopardy! (Credit: Wikimedia Commons)

Notably, the successes of AI have practical implications and can contribute to important societal achievements. Looking for an example? Microsoft researchers have argued that web search queries can predict pancreatic adenocarcinoma. 

They analyzed queries by 6.4 million Bing users and identified those suggesting a recent diagnosis, such as “I was told I have pancreatic cancer, what to expect?”. Then the researchers looked for queries those users had entered months before, indicating symptoms or risk factors, such as blood clots or alcoholism. They concluded that their algorithm “can identify 5% to 15% of cases, while preserving extremely low false-positive rates (0.00001 to 0.0001)” and that “this screening capability could increase 5-year survival”. 
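To get a feel for what those rates mean in absolute terms, here is a back-of-the-envelope sketch in Python. The user count and false-positive rates are the figures quoted above; the calculation itself is purely illustrative and is not part of the study.

```python
# Back-of-the-envelope arithmetic for the reported screening trade-off.
# The user count and false-positive rates come from the study as quoted above;
# the calculation itself is purely illustrative.

users = 6_400_000  # Bing users analyzed in the study

for fp_rate in (0.00001, 0.0001):
    expected_false_alarms = users * fp_rate
    print(f"false-positive rate {fp_rate}: roughly "
          f"{expected_false_alarms:,.0f} users flagged in error")
```

In other words, at the quoted rates, somewhere between roughly 64 and 640 of the 6.4 million users would be flagged in error, which is what makes the reported trade-off noteworthy.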

The combination of mounting evidence of human biases and rising AI capabilities has led many to conclude that human cognition is fundamentally flawed and should be replaced by AI. Even back in 1951, Alan Turing observed that, “At some stage we should have to expect the machines to take control.”

That moment has not yet arrived, but we are increasingly hearing the argument that AI is not only badly needed, but also that it is ready to replace human cognitive capabilities for many tasks. We’ll call this the corrective approach towards human cognition.

In this view, human cognition is severely limited and fallible, and AI is meant to supplement or even replace it. Here, however, we would like to sketch a somewhat different picture, where the relationship between human cognition and AI is far more complex and bi-directional. 

Simply put, AI should not only correct, supplement or replace human cognition. It should also be informed, influenced and inspired by it.

A New View: Four points in defense of human cognition

Despite human error and the power of AI, there are reasons to doubt what we have labelled the corrective approach. As it turns out, human cognition is not just flawed and in need of correction; it can also be highly efficient, accurate and robust. Moreover, AI is not necessarily immune to error, as it can actually be biased, demanding and fragile.

These observations can be translated to four points in defense of human cognition.

1) Heuristic thinking has benefits.

First, while human cognition may suffer from a number of biases, the heuristics people use also lead them to accurate inferences in many contexts.

Cognitive psychologist Gerd Gigerenzer has shown over the past twenty years that a great deal of human decision-making is based on fast and frugal heuristics that might be simple but are also very smart. They’re fast in that they do not rely on heavy computation. They’re frugal in that they only need to use some of the available information to make accurate judgements and predictions.

The recognition heuristic is a simple but great example. This model predicts that if a person recognizes one object and not another, they will infer that the recognized object has greater value. In a number of experiments, it has been shown that people could successfully predict winners at Wimbledon and in elections by relying on this way of thinking. The value of heuristic thinking has also been documented in financial and even more complex scenarios.
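For the curious, here is a minimal sketch of the recognition heuristic expressed as a decision rule. The set of “recognized” players and the matchups are invented for illustration; only the rule itself follows the description above.

```python
# A minimal sketch of the recognition heuristic as a decision rule.
# The set of "recognized" players and the matchups are invented for illustration.

recognized = {"Federer", "Nadal", "Djokovic"}  # names this person happens to know

def pick_winner(player_a, player_b):
    """If exactly one player is recognized, predict that player wins.
    If both or neither are recognized, the heuristic does not apply."""
    a_known = player_a in recognized
    b_known = player_b in recognized
    if a_known and not b_known:
        return player_a
    if b_known and not a_known:
        return player_b
    return None  # heuristic is silent; other cues or guessing take over

print(pick_winner("Federer", "Obscure Qualifier"))  # -> Federer
print(pick_winner("Federer", "Nadal"))              # -> None (both recognized)
```

Note how frugal the rule is: it ignores rankings, form and every other cue, yet in domains where recognition correlates with success it performs surprisingly well.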

2) AI may perpetuate biases.

Second, although it is true that people’s heuristics can lead to biases in some contexts, society has only recently started to wrestle with just how easily human biases can make their way into AI systems.

Like the human brain, AI is thus subject to biases, which enter AI systems through data, algorithms and interaction. Showing that the heuristics people use can actually lead to successful outcomes was just the start of casting doubt on the corrective approach; it is equally important to note that AI systems are not perfect either. The fallibility of both human cognition and AI needs to be considered carefully.
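As a concrete, entirely made-up illustration of the “through data” route: if the historical records an algorithm learns from already reflect a skew, even a very simple model will faithfully reproduce it. The hiring data below is fictional; the point is only the mechanism.

```python
# Toy illustration of bias entering an AI system through its training data.
# The hiring records are invented; a model that simply learns "how often was
# this group hired in the past?" reproduces whatever skew the data contains.

from collections import defaultdict

# (group, hired?) records from a fictional, historically skewed process
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

# "Training": estimate the historical hire rate for each group
counts = defaultdict(lambda: [0, 0])   # group -> [times hired, total applicants]
for group, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

hire_rate = {group: hired / total for group, (hired, total) in counts.items()}

# "Prediction": a new candidate is scored by their group's historical hire rate,
# so the historical skew is learned and repeated verbatim
print(hire_rate)  # -> {'A': 0.8, 'B': 0.3}
```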

3) Humans have cognitive capabilities that AI doesn’t.

We’ve seen how cognitive biases are typically driven by System 1 processes and heuristics, but don’t forget that your brain has a second operating system working hard as well. System 2 processes are slower, more reflective and more accurate. They drive deliberative and deeply analytical thinking.

According to deep learning pioneer Yoshua Bengio, machine learning is still far from being able to perform System 2 functions.

Take the ability to learn causality, for example. It is considered a significant component of human-level intelligence and is expected to be a foundation of future AI, yet it remains a challenge for machine learning. ML simply cannot (currently) match the degree of understanding, or the ability to engage in hypothetical and causal reasoning, that humans have.

In particular, System 2 human reasoning is flexible in a way that many AI algorithms are not. For example, in 2008, Google launched Google Flu Trends to predict the spread of influenza-like illness.

Although the algorithm was built from 50 million search terms and their correlations with flu-related doctor visits, it failed to predict the 2009 swine flu outbreak. Even after Google’s engineers revised the algorithm, it remained inaccurate.

Why did Google Flu Trends fail? The algorithm had learned that flu levels are typically high in winter and low in summer, but the swine flu appeared out of season. The future did not look like the past, and the system could not adjust.
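To make that failure mode concrete, here is a hedged toy sketch. The numbers are invented (they are not Google’s or anyone’s real flu data): a predictor that only learns the seasonal average from past years simply cannot see an out-of-season outbreak coming.

```python
# Toy sketch of the failure mode: a model that has only learned the seasonal
# pattern from past years will miss an out-of-season outbreak.
# All numbers below are invented for illustration, not real flu data.

# Monthly flu activity (Jan..Dec) for two "normal" past years
past_years = [
    [90, 85, 60, 40, 20, 10, 8, 9, 15, 35, 60, 85],
    [95, 80, 65, 35, 22, 12, 9, 10, 18, 40, 65, 90],
]

# "Training": forecast each month as the average of that month in past years
seasonal_forecast = [sum(year[m] for year in past_years) / len(past_years)
                     for m in range(12)]

# A year with an out-of-season (late spring/summer) outbreak, like 2009
actual = [92, 82, 62, 38, 70, 110, 120, 95, 40, 38, 62, 88]

for month, (predicted, observed) in enumerate(zip(seasonal_forecast, actual), 1):
    flag = "  <-- badly missed" if abs(observed - predicted) > 50 else ""
    print(f"month {month:2d}: predicted {predicted:6.1f}, observed {observed:3d}{flag}")
```

The forecaster tracks the ordinary winter peak closely and then misses the summer spike by a wide margin, because nothing in its training data ever looked like that.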

4) AI is needy.

Finally, while there have been dramatic advances in AI capabilities, the best algorithms require huge amounts of training data and struggle with generalization. On the other hand, judgment and learning driven by human thought can be very cost-efficient. 

People are typically able to learn from very few data points, in fast and flexible ways that are not captured by today’s algorithms. You only need to see something a few times to make it stick. In the words of Lake and colleagues, “reverse engineering human intelligence can usefully inform AI and machine learning (and has already done so), especially for the types of domains and tasks that people excel at.” 

Therefore, as it turns out, “cognitive science and neuroscience could inspire the next big innovations in AI.”

A Lesson Learned: Never lose sight of the human side of AI

The upshot is rather clear: a prerequisite for advancing AI is developing a deep understanding of human judgment and decision-making, including both its limitations and strengths. AI is not a replacement for human cognition; rather, it stands to be enhanced by it.

From this conclusion comes an important lesson: never lose sight of the human side of AI. This holds when talking about this technology in general, but is also important to remember when bringing AI into your organization. 

It isn’t enough to adopt powerful technology if you don’t know how to use it. More than technological capabilities, you need humans with strong business logic to help guide implementation, maintenance, and continued success. 

Human cognition is indeed efficient and effective. When it is brought together with AI to inform the latter’s development, rather than simply being corrected by it, the combination gives rise to something greater than either on its own. Machines aren’t ready to take over the world…just yet.