
Does AI Have to Be Perfect to Be Good Enough?

The sources of algorithm aversion.

In 1954, clinical psychologist Paul Meehl published a controversial book titled Clinical versus Statistical Prediction. In it, Meehl argued that statistical methods for diagnosing patients and choosing treatments outperform human clinicians. Meehl went on to receive prizes for his work, but the practical impact of his ideas was close to nil. Although subsequent research has confirmed his conclusions, Meehl faced strong resistance from his colleagues. Other clinicians simply refused to believe that a statistical method could outperform them.

This is, perhaps, not unexpected. Clinicians may not like the suggestion that what they’d previously seen as the art of diagnosis or treatment can be boiled down to a set of rules. They may fear being displaced, or it may seem to them that their expertise is being questioned. One might have hoped that all of these reasons would pale in comparison with the concern for patients’ well-being, but given everything we know about human psychology, it is not exactly a surprise that this didn’t happen.

[Image: Woman wearing virtual reality glasses. Source: Moose Photos/Pexels]

More interestingly, recent studies suggest that non-experts distrust algorithms too, in contexts where no professional credentials are on the line. Why do laypeople prefer their own judgments to those of a superior algorithm?

One possible explanation is that we don’t like to hand control over to AI. Consistent with this hypothesis, research indicates that people are willing to embrace algorithms if given the choice to modify the algorithm's output. Interestingly, this works even when there are strict limits on how much modification is allowed. A little bit of control seems to be all it takes to appease us, suggesting, perhaps, that what we want is not so much control as the feeling of control.

No Forgiveness for AI

But the most striking finding—one much more difficult to explain than Meehl’s earlier ones—is that we seem to favor human judgment over algorithms even when the human is someone other than ourselves, and even when we stand to lose money by trusting a fallible human over an imperfect but superior algorithm.

This across-the-board distrust of algorithms has been labeled “algorithm aversion.” We favor human judgment and forecasts in a variety of settings, in the face of ample evidence that algorithms significantly outperform humans. What explains algorithm aversion?

Research indicates that we hold AI to much higher standards than we hold people. We can forgive human errors but not algorithmic ones. Algorithms have to be essentially perfect for us to embrace them. One mistake, and all bets are off: We are ready to abandon them in favor of a person who will make many more errors.

It is this tendency to be unforgiving when it comes to the errors of algorithms that interests me here.

Imperfection and Learning From Errors

Some study participants indicate that they favor humans because they believe humans can learn from their own mistakes while algorithms cannot. Consistent with this, there is some preliminary evidence suggesting that people can come to embrace an algorithm if shown that it too can learn from its errors and improve.

I suspect, however, that there is more to the explanation. I, for one, show algorithm aversion even though I know full well that AI can learn. For instance, I am happy to get into a vehicle with a human driver but would be reluctant to get into a self-driving car. So long as I know of a handful of accidents involving AI-operated vehicles, I will remain reluctant, even given ample evidence that the probability of an accident is much lower in a self-driving car.

My reaction, while not entirely rational, seems common. Perhaps we have a kind of “schema” of what an acceptable artificial intelligence agent must be like: perfect and unerring. That is, after all, what science fiction “promised” us. An imperfect AI does not match this schema. Even when we have evidence that it is better than the alternative—that is, better than humans—and the rational thing to do would be to embrace it, we are disinclined to.

I suspect that at the root of algorithm aversion is the intuition that algorithms are not, after all, more reliable than humans, whatever the evidence says. Perhaps we accept that they are only if they prove themselves infallible, but not otherwise. Even one error—in medical diagnosis, say, or in driving—may make us intuit that algorithms are, in the end, not to be trusted. Thus, if we see a self-driving car get into an accident, we may take this as a sign that self-driving cars are unsafe, more so than human drivers, despite evidence to the contrary. It is as though we think that human performance falls on a spectrum, while for AI there are only two options: perfect and not good enough. Fallible AI seems a bit like a fallible deity—no deity at all.

Relatable Humans

I wish to suggest here that this may be because we understand how humans make decisions but not how algorithms do. In the case of humans, we generally find mistakes intelligible. This matters for two reasons. First, an utterly unintelligible error is likely to make us feel that the person who made it is irrational and untrustworthy. Second, when we know why a human makes a given error, that knowledge helps persuade us that other humans will not make the same one. For instance, I know that a driver is likely to get into an accident if inebriated, so an accident caused by a drunk driver does not alarm me: I don’t expect a lot of drivers to suddenly start driving under the influence. By contrast, I don’t know why an algorithm makes the choices it does, so a single mistake on the part of a self-driving car may undermine my trust in that car altogether.

Last but not least, we don’t see different self-driving cars as individuals, so if doubt is cast on one of them, we may come to doubt them all. That, too, is unlike the human case, in which one bad driver or doctor does not make us distrust the entire profession.

The problem is, of course, that our intuitions may be a poor guide, here and elsewhere. We do not intuit that the Earth is rotating, but it is. What we need to do is look at the evidence. We have very good reasons to believe that in many, many cases, algorithms outperform people. One error does not show an algorithm to be more error-prone than a human is. That’s true even if the AI error is unintelligible to us while the human one is, well, all too human.


