Exploring Algorithmic Aversion on April 8th with The Extended Mind

March 23, 2021

It’s really hard to believe that it’s already the end of March, right? Time feels like it has a weird meaning these days. But that said, one of the things that I’ve been working on over the past several months is finding new opportunities to learn and grow outside of my 9-5 job, and I have to say: I’ve been so grateful for the opportunity that remote and online learning has presented. To be honest, it’s one of the things that I hope sticks around even after it’s safe to gather in person again. 

Today, I want to share a post written by Jessica Outlaw and Sara Lucille Carbonneau in advance of an upcoming workshop that I am extremely excited about. Jessica’s work has been influential in guiding my thoughts on community and communication within immersive spaces, and her latest workshop, on April 8th, will explore algorithmic aversion and how humans can be biased against algorithms.

There’s been a growing intersection between immersive technology and artificial intelligence. I’m going into this workshop with my own ideas and expectations about what that bias might entail (for example, I suspect that people generally don’t think much about the computer vision algorithms that handle object detection for their VR headset’s guardian system, but do feel bias toward human-like non-player characters), but I’m also excited to learn what I don’t know. The workshop is suited for product owners and creators who are interested in learning how to center humans in the process of creating algorithms, building user trust, and recovering from algorithmic mistakes in a product.

The (virtual, free) workshop, which will be run by Jessica and University of Chicago Professor Berkeley Dietvorst, is open to anyone; you can register here. Below, you can find a post that shares some of the research the workshop is grounded in, originally published on The Extended Mind Blog and shared here with permission.

Algorithm Aversion: Humans Prefer Human Forecasters over Algorithms, but That Preference Has a Cost

For the computer scientists who are developing new algorithms and deploying them into everyday life, it is useful to understand people’s starting attitudes about them.

What can engineers learn about how algorithms (and their subsequent errors) will be perceived by the people using them? It turns out that social scientists have already been documenting humans’ bias against algorithms. In this blog post, I’m going to cover a set of studies about forecasting models, the resulting algorithm aversion, and what helps overcome it.

Algorithm aversion is the tendency for humans to prefer human forecasters over algorithms, even when the human forecasters are less accurate and more costly. To understand this phenomenon more deeply, Berkeley Dietvorst, Joseph Simmons, and Cade Massey conducted studies documenting how much more harshly algorithmic errors are judged than fellow humans’ mistakes, and how quickly the algorithms are subsequently abandoned.

In this post, I’ll discuss some of the key findings on algorithm aversion and what the technology industry can learn about building trust between humans and algorithmic models.  

Humans have almost no tolerance for algorithmic error

People want algorithms to make perfect predictions. When the algorithms made mistakes, study participants were quick to abandon them for a human forecaster, even when the algorithms still outperformed the humans.

Participants’ confidence in human forecasters remained fairly consistent despite the humans also making errors. This finding has been replicated across studies and is documented in the book How Humans Judge Machines by César A. Hidalgo et al. In short, algorithm aversion is a bias against algorithms paired with a preference for humans.

Humans think that other humans can learn from mistakes, but that algorithms cannot

Study participants believed that humans were better able than algorithms to learn from their mistakes and to refine their skills with practice, which may explain their preference for humans over algorithms in the long run.

It also suggests that participants lacked knowledge about how algorithms are built, how they can be updated, and what machine learning is capable of.

Humans are more satisfied by algorithms if they can customize them (even just a little)

The researchers also found that people are more willing to trust and use algorithmic predictions when they are able to somehow modify the algorithm’s output. These ‘tweaks’ often made the forecasts less accurate, but empowering the human built trust in the model.

Further, it didn’t matter whether people could modify the model’s prediction by 2%, 5%, or 10%: in all cases, people were more satisfied with the algorithm, regardless of how large a change they were allowed to make.
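As a rough sketch of what that kind of constrained adjustment could look like in a product (this illustrates the general idea, not the procedure used in the studies; the function name and default bound here are hypothetical):

```python
def constrained_forecast(model_prediction: float,
                         requested_adjustment: float,
                         max_adjust_pct: float = 0.10) -> float:
    """Apply a user's adjustment to a model forecast, capped at a
    fixed percentage of the prediction (e.g. 2%, 5%, or 10%)."""
    bound = abs(model_prediction) * max_adjust_pct
    # Clamp the requested change into the allowed [-bound, +bound] window.
    adjustment = max(-bound, min(bound, requested_adjustment))
    return model_prediction + adjustment


# The model forecasts 200; the user asks to lower it by 30, but a 10%
# cap limits the change to -20, so the final forecast is 180.
print(constrained_forecast(200.0, -30.0, max_adjust_pct=0.10))  # 180.0
```

The design choice mirrors the finding above: the user gets a genuine sense of control, while the cap keeps their override from straying too far from the (typically more accurate) model output.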

Despite monetary incentives to use algorithms, aversion remained strong

What’s noteworthy about this set of studies is that humans were offered access to forecasting models that actually improved the accuracy of their predictions, and the people in the study were paid bonuses for making correct predictions. By rewarding accuracy, the researchers gave people an incentive to rely on the algorithms. Nevertheless, study participants remained biased against them. Meanwhile, there are other instances of algorithms being deployed in the workplace to track human performance (number of calls made, language used on calls, emails sent, websites visited, frequency of breaks). I would infer that algorithm aversion among the people being tracked would be even higher in those instances.

Takeaways

As algorithms become more integrated into our personal and professional lives, more research is needed on human-algorithm dynamics. If we can better understand what builds people’s confidence in algorithms, or the contexts in which they are desirable or not, we can build more people-centered products.  

Hopefully, research from social science can be leveraged for better customer education, improved product design, and accurate, useful algorithms that are well-adapted to the people they are intended to serve.

There’s also the opportunity to identify specific domains where algorithms may be particularly effective for people (e.g., optimizing driving directions), and also where humans are particularly averse to algorithmic intervention (e.g., moral decisions).

References:

Bloom, J. (2020, February 14). Computer says go: Taking orders from an AI boss. BBC.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114-126.

Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them. Management Science, 64(3), 1155-1170.

Frick, W. (2015). Here’s why people trust human judgment over algorithms. Harvard Business Review.

Hidalgo, C. A., Orghian, D., Albo-Canals, J., de Almeida, F., & Martin, N. (2021). How Humans Judge Machines. MIT Press.
