To What Extent Do People Follow Algorithms’ Advice More Than Human Advice?

Information Systems

Algorithms can enable faster, more effective decision making in domains ranging from medical diagnosis to the choice of a romantic partner. But for this potential to translate into useful practical choices, humans must trust and follow the advice algorithms provide. Researchers in Information Systems Cathy Liu Yang and Xitong Li of HEC Paris, together with Sangseok You of Sungkyunkwan University, have explored the factors that influence people's reliance on algorithmic decision aids.


Machine recommendations result in 80% of Netflix viewing decisions, while more than a third of purchase decisions on Amazon are influenced by algorithms. In other words, algorithms increasingly drive the daily decisions that people make in their lives.

It isn’t just consumer decision making that algorithms influence. As algorithms appear in more and more settings, people increasingly use them for more fundamental decisions. For example, recent field studies have shown that decision makers follow algorithmic advice when making business decisions, providing medical diagnoses, or deciding whether to release prisoners on parole.

Do people prefer human or machine advice-giving?

People regularly seek the advice of others to make decisions: we turn to experts when we are unsure, and their advice gives us greater confidence in our choices. AI now increasingly supports real-life decision making, and algorithms are ever more intertwined with our everyday lives. What we wanted to find out is the extent to which people actually follow the advice offered by AI.

To investigate, we conducted a series of experiments evaluating the extent to which people follow AI advice. Our study showed that people are more likely to follow algorithmic advice than identical advice offered by a human advisor, because they place greater trust in algorithms than in other humans. We call this phenomenon “algorithm appreciation”.

Higher Trust in AI… but don’t go overboard on information

We wanted to find out more: would people still follow AI advice even when the AI is not perfect? Our second series of experiments explored the conditions under which people are more or less likely to take advice from AI. We designed experiments that tested whether people would still place greater trust in algorithms even when they were aware of prediction errors in the underlying AI.

Surprisingly, when we informed participants in our study of the algorithm prediction errors, they still showed higher trust in the AI predictions than in the human ones. In short, people are generally more comfortable trusting AI than other humans to make decisions for them, regardless of known and understood imperfections in the process.

 


There was an exception to this rule. We found that when the information about the AI’s prediction performance became very complex, algorithm appreciation declined. We believe this is because providing too much information about the algorithm and its performance overwhelms people (cognitive load), which impedes advice taking: people may discount predictions when they are presented with so much underlying detail that they are unable or unwilling to internalize it. If we do not overwhelm people with information about the AI, however, they are more likely to rely on it.

What could possibly go wrong?

If algorithms can generally make better decisions than people, and people trust them, why not rely on them systematically? Our research raises potential issues of over-confidence in machine decision making. In some cases, the consequences of a bad decision recommended by an algorithm are minor: if a person chooses a boring film on Netflix, they can simply stop watching and try something else. However, for high-stakes decisions that an algorithm might get wrong, questions of accountability come into play for human decision makers. Consider the miscarriage of justice at the UK Post Office, where more than 700 Post Office workers were wrongfully convicted of theft, fraud and false accounting between 2000 and 2014 because of a fault in a computer system.

However, our research also has important implications for medical diagnosis. Algorithmic advice can help wherever there is patient data to examine. AI can predict, with a stated level of likelihood, whether a patient’s chance of having cancer is 60% or 80%, and the healthcare professional can include this information in decisions about treatment. This can help prevent a patient’s higher level of risk being overlooked by a human, leading to more effective treatment and the potential for a better prognosis.

In wider society, algorithms can help judges in the court system make decisions that support a safer society. Judges can be given algorithmic predictions of the likelihood that an offender will commit another crime, and use them to decide how long a sentence should be.

Methodology

To explore how and why transparency about prediction performance influences algorithm appreciation, we conducted five controlled behavioral experiments, each recruiting more than 400 participants via Amazon's Mechanical Turk. Across the five experiments, participants performed a prediction task in which they predicted a target student’s standardized math score from nine pieces of information about the student, first on their own and then again after being shown the algorithm’s predicted score as advice.
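
The article does not spell out how advice taking was measured, but a standard metric in judge-advisor designs like this one is the "weight of advice" (WOA): how far a participant moves from their initial estimate toward the advisor's estimate. The sketch below, using purely hypothetical numbers rather than the study's data, shows how one might compute and compare WOA for an algorithmic versus a human advisor condition.

# A minimal sketch (not the authors' code or data) of the "weight of advice" (WOA)
# measure commonly used in judge-advisor experiments: 0 means the advice was
# ignored, 1 means the participant fully adopted the advisor's estimate.

def weight_of_advice(initial, advice, final):
    """WOA = (final - initial) / (advice - initial); undefined when advice == initial."""
    if advice == initial:
        return None
    woa = (final - initial) / (advice - initial)
    # Clamp to [0, 1], a common convention for revisions that overshoot the advice.
    return max(0.0, min(1.0, woa))

# Hypothetical records: each participant predicts a student's standardized math
# score, sees the advisor's prediction, then gives a final estimate.
records = [
    {"advisor": "algorithm", "initial": 55, "advice": 70, "final": 66},
    {"advisor": "algorithm", "initial": 80, "advice": 72, "final": 75},
    {"advisor": "human",     "initial": 55, "advice": 70, "final": 60},
    {"advisor": "human",     "initial": 80, "advice": 72, "final": 78},
]

# Average WOA per advisor condition; a higher mean for "algorithm" would reflect
# the algorithm-appreciation pattern reported in the study.
by_condition = {}
for r in records:
    woa = weight_of_advice(r["initial"], r["advice"], r["final"])
    if woa is not None:
        by_condition.setdefault(r["advisor"], []).append(woa)

for advisor, values in by_condition.items():
    print(advisor, "mean WOA =", round(sum(values) / len(values), 2))

On these made-up numbers the algorithm condition yields a higher mean WOA (about 0.68 versus 0.29 for the human advisor), which is the kind of gap the experiments were designed to detect.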

Applications

Where firms need to make investment decisions, employees will trust AI to help inform those choices. With good data and solid, well-thought-out underlying algorithms, this has the potential to save businesses a lot of money.
Based on an interview with HEC Paris professors of Information Systems Cathy Liu Yang and Xitong Li on their paper “Algorithmic versus Human Advice: Does Presenting Prediction Performance Matter for Algorithm Appreciation,” co-written with Sangseok You, Assistant Professor of Information Systems at Sungkyunkwan University, South Korea, and published online in the Journal of Management Information Systems, 2022. This research work is partly supported by funding from the Hi! PARIS Fellowship and the French National Research Agency (ANR)'s Investissements d'Avenir LabEx Ecodec (Grant ANR-11-Labx-0047).
