
©Andrey Armyagov on Adobe Stock

How to Improve Decision Making

This in-depth issue features the latest cutting-edge research findings on decision making from HEC Paris professors. We hope that the tools presented will help you approach your decision making from new angles and develop appropriate strategies for a variety of situations, especially during these times of uncertainty. Find the full review here.

Structure

Part 1
Risking the future? How Delayed Consequences Can Bias the Perception of Risk
Most decisions have consequences that are uncertain and materialize in the future. Perception of uncertainty may be biased when it concerns the future. Indeed, recent research led by Emmanuel Kemel, Professor of Economics and Decision Sciences at HEC Paris and CNRS researcher, and Corina Paraschiv, Professor at LIRAES, has found that people are more likely to take risks if the consequences of their decision aren’t felt immediately.
Part 2
Taking The Help or Going Alone: Students Do Worse Using ChatGPT
How good are students at using tools like ChatGPT? In particular, can they properly evaluate and correct the responses provided by ChatGPT to enhance their performance? An experiment on HEC Paris students set out to find out. Its results contribute to the debate on the consequences of using ChatGPT in education, and in work more generally.
Part 3
How Much to Reveal to Persuade a Decision Maker?
How much information does a company need to provide to decision makers to respect transparency while keeping its competitive edge? In a new study using a mathematical probabilistic model, HEC Paris professors in Economics and Decision Sciences Frédéric Koessler, Marie Laclau and Tristan Tomala find the optimal equilibrium of information for companies to reveal in situations where competitors are present.
Part 4
Nudges and Artificial Intelligence: for Better or for Worse?
The latest developments in artificial intelligence are often allied to the prospect of a better world, with more powerful, more rational algorithms straightening out human flaws. The idea often floated is that public policy will be more effective because it will be better informed and more responsive. Likewise, it is said that medicine will deliver speedier, more accurate diagnoses. But where are we when it comes to the subject of consumption? Can algorithms be used to steer consumers towards better, more enlightened – and less impulsive – choices? Why not? Online assessments about a product or service, for instance, help information flow more smoothly. But we are also heading down a new path: myriad algorithms are now the power behind “nudges”, those fine details about the shopping environment that insidiously guide the choices made by consumers.
Part 5
Yes, You Can Be Trained To Make Better Decisions
Mental distortions known as cognitive biases often shift our judgement away from rational prescriptions. While such biases are normal – it's just the way our brains are wired – they can lead to poor choices, sometimes with disastrous consequences. But new evidence shows how simple training can help us identify these biases and tremendously improve decision making.
Part 6
Decision Making: Do You Need a Decision Theorist… or a Shrink?
Human beings are notoriously bad at making rational decisions. Even theoretical models designed to help you find the “right” answer are limited in their applications. A trio of researchers calls for a re-appraisal of decision theory, arguing that basic tools can improve decision making by challenging underlying assumptions and uncovering psychological biases.
Part 7
How to Deal with Severe Uncertainty?
Severe uncertainty, deep uncertainty, radical uncertainty, ambiguity… different actors in a range of fields – decision scientists, risk analysts, climate scientists, central bankers – use a variety of phrases to talk of some extreme, important yet too often ignored form of uncertainty. But what is it? And how should we deal with this particular species of uncertainty: how should we characterise it, communicate it, and decide in the face of it? In this interview, CNRS Research Director and HEC Paris Research Professor Brian Hill explains the concept and unveils applicable tools based on theoretical models for guiding decisions in situations of severe uncertainty.
Part 8
Uncertainty Across Disciplines
We, individuals and society, are faced today with many important decisions involving radical degrees of uncertainty. To better communicate on the current state of knowledge about uncertainty, and incorporate it into decisions, Brian Hill, CNRS and HEC Paris Professor of Economics and Decision Sciences, initiated the "Uncertainty Across Disciplines" project, interviewing 10 leading experts on the topic of uncertainty.
Part 9
The Impact of Overconfidence and Attitudes towards Ambiguity on Market Entry
For many people who start an entrepreneurial adventure, the biggest challenge is believing in themselves. Yet, for those who choose this path, confidence can also make the entrepreneur underestimate actual business risks, leading to fatal decisions. Researchers from HEC Paris Business School and Bocconi University offer a new explanation for why decision makers often appear too confident, and shed light on the consequences of this characteristic.
Part 10
Thinking About Time Flying? It Can Affect Your Decision Making
When the clock in our minds ticks loudly, it changes not only our perspective of the time remaining in our lives, but also how we process information. A trio of researchers investigated how thinking about the concept of time can affect our decision making. This unique piece of research could explain biases in hiring, voting, and many other contexts.
Part 11
A New Theory in Economics Helps Predict Future Events
When will the next financial crisis occur? Who is going to win the next US presidential election? How do we form beliefs about such events? By understanding how probabilistic beliefs form, economic theorists can now explain and predict phenomena that depend on rational beliefs. The latest research by Rossella Argenziano and Itzhak Gilboa equips economic modeling with a theory and a set of tools of belief formation, based on statistics and psychology. One immediate application is equilibrium selection in coordination games.
Part 12
Is It Rational to Stockpile in Times of Crisis?
The health crisis caused by COVID-19 has triggered an economic one. A significant portion of the population fears shortages of basic consumer goods, and marked stockpiling behavior has been observed. Because such behavior increases the risk of shortage, several stores have decided to ration some goods, and governments have had to make public announcements to reassure consumers that there would be no shortage. Avoiding consumer stockpiling is hence one of the key aspects of managing this crisis. But is it rational to stockpile in times of crisis? We review and discuss the rational and irrational aspects of such behavior.
Part 13
3 Objectives to Create Intelligence in the Face of Uncertainty
Uncertainty is an invisible trap, set to blind our capacity to avoid nonsense and create actual intelligence. Why invisible? Because uncertainty is powered by what we do not know, which is particularly difficult to become aware of. Anne-Sophie Chaxel, HEC Paris Associate Professor of Marketing and expert in cognitive biases, offers three objectives to keep in mind to embrace uncertainty, along with practical toolboxes for creating intelligence.
Part 14
Decision Making That Reflects Your Strategy
Business decisions are not always in line with company strategy. Researchers Olivier Sibony et al. explore what lies behind counterproductive business decisions and outline guidelines for designing better strategic decision processes.
Part 15
How Believing in Unsubstantiated Claims Leads to Polarization
The COVID-19 pandemic has fostered the sharing of conflicting and unsubstantiated claims by public figures. In early November, a deeply divided nation elected Joe Biden as President of the United States. Recent research by professors Anne-Sophie Chaxel of HEC Paris and Sandra Laporte of Toulouse School of Management reveals that individuals believe unsubstantiated claims when they are shared by their favorite public figures, which helps explain the polarization of opinions. In this article, Anne-Sophie Chaxel explains how rational people come to strongly believe in unchecked claims.
Part 16
How Do Governments And Individuals Make Decisions In A Time Of Crisis? The Case Of The Coronavirus
Why have different countries made very different decisions to fight the coronavirus? What are the potential consequences of such a crisis on the psychology of the population? In this interview, Anne-Sophie Chaxel, HEC Paris Associate Professor of Marketing specialized in consumer behavior and decision making, explains the different approaches of governments toward their responsibility, and the biases behind non-optimal behaviors and decisions. She also shares her recommendations regarding decision-making processes.
Part 17
Black Swans and Other Challenges to Rational Decision Making
When trying to figure out the outcome of a given situation, or the fallout of a sudden event, is it better to reason by analogies and resort to past experience or to think ahead and apply probabilistic reasoning? Researchers present a new mathematical model on making decisions in uncertain circumstances, which takes into account both modes of reasoning.
Part 1

Risking the future? How Delayed Consequences Can Bias the Perception of Risk

Decision Sciences

Most decisions have consequences that are uncertain and materialize in the future. Perception of uncertainty may be biased when it concerns the future. Indeed, recent research led by Emmanuel Kemel, Professor of Economics and Decision Sciences at HEC Paris and CNRS researcher, and Corina Paraschiv, Professor at LIRAES, has found that people are more likely to take risks if the consequences of their decision aren’t felt immediately.


Copyright: maurus

Most research on risk attitudes covers situations with immediate consequences, where the decision-maker receives the result directly after making their choice. This applies in most gambling scenarios, where results and payouts occur immediately after playing.

 


 

In reality, the vast majority of consequences of decisions occur after a period of time, for example, finding out the result of an election or receiving a fine for exceeding a speed limit. Risk and time are intertwined in real-life decisions, so we decided to explore whether risk attitudes change when the moment of a decision is separated from the outcome of that choice.

To do this, we asked participants to decide whether they wanted to receive a fixed sum of money or enter a lottery with a chance of either getting nothing or winning a higher amount. Two different treatments were considered. In one treatment, the outcomes were paid immediately, whereas in the other treatment, the outcomes were paid one year after the decision was made.
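
To make the design concrete, here is a minimal sketch (our own illustration; the amounts and switch points below are hypothetical, not the study's data) of how a certainty equivalent can be read off such a choice list and compared across the immediate and delayed treatments:

```python
# Sketch of a risk-attitude choice list: a participant repeatedly chooses
# between a sure amount and a lottery; the sure amount at which they switch
# from lottery to sure payment approximates their certainty equivalent (CE).

def certainty_equivalent(choices):
    """choices: (sure_amount, chose_lottery) pairs, sorted by increasing
    sure amount. The CE is approximated by the midpoint between the last
    sure amount where the lottery was chosen and the first where it wasn't."""
    last_lottery = first_sure = None
    for sure, chose_lottery in choices:
        if chose_lottery:
            last_lottery = sure
        elif first_sure is None:
            first_sure = sure
    if last_lottery is None or first_sure is None:
        return None  # participant never switched
    return (last_lottery + first_sure) / 2

# Hypothetical lottery: win 20 euros with probability 0.5, else nothing.
# Its expected value is 10, so a CE below 10 indicates risk aversion.
immediate = [(4, True), (6, True), (8, False), (10, False), (12, False)]
delayed = [(4, True), (6, True), (8, True), (10, False), (12, False)]

print(certainty_equivalent(immediate), certainty_equivalent(delayed))
# 7.0 vs 9.0: a higher CE under delayed payment means more risk taking
```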

Risk tolerance rises when consequence is delayed

Most economists assume that people are rational and will act in predictable ways. This simplification allows economists to model human decision making and anticipate behavior. This normative model of decision making states that people should be equally likely to take a risk regardless of when the consequence takes place. However, our experimental observations do not agree; they reveal that actual behavior is quite different.

We found that when the lottery payment was postponed, people chose to take more risk, even though the result itself was revealed immediately after playing. Although previous research had already reported a higher risk tolerance for lotteries with delayed payment, our experiment disentangles the delay of payment from the delay in learning the result of the lottery: the result of the lottery was always known immediately, whereas the outcome was paid either immediately or after one year.

Our results show that delaying only the payment, while sharing results of the lottery immediately, was enough to increase risk tolerance – a surprising result if we expect fully rational behavior.

Optimism bias

As an explanation for this increase in risk tolerance, our findings suggest a higher level of optimism regarding the chances of success when consequences are delayed.

Generally, people can be more or less optimistic or pessimistic; when looking at a mixed weather forecast, an optimist might expect sunny skies, whereas a less positively inclined person may take the more pessimistic view that it will rain all day. When considering outcomes in the future, people tend to suffer from optimism bias – something that causes us to believe that outcomes will be positive and that things will work out, despite the fact that, rationally speaking, projects inevitably run into problems.

It has been suggested that our attitude towards climate change is also influenced by optimism bias. In light of our new research, we can also assume that because the major consequences of climate change are delayed, people are more likely to take risks with regard to the environment.

 


 

Another example of optimism bias can be seen with adolescent smokers, who are two and a half times more likely than non-smokers to doubt that they would ever die from smoking even if they smoked for three or four decades. Considering the issues of smoking or climate change, this optimism bias could have deadly consequences.

Future research

Climate change is far from winning the lottery – it introduces new dangers and risks the likes of which we have never seen. Our research only analyzed situations with non-negative outcomes, which naturally raises the question of whether the same increase in risk tolerance and optimism would be observed when participants face a potential loss. In reality, risky situations are usually associated with potential losses, so analyzing them can help us better understand biases in real-life situations like investment and insurance decisions.

Our research has captured and recorded biases in the way people make decisions, but future studies could be key to discovering how these biases surface and how we can manage them appropriately to ensure we make more rational decisions. This could also allow economists to build better models of human behavior.

 


 

*Learn more about the impact of other biases on investors in this Knowledge@HEC article by HEC Paris Professor Thomas Astebro.

Applications

In economics, people, especially investors*, can suffer from optimism bias. They must be aware of both optimism bias and the increase in risk tolerance that comes with delayed consequences, and think critically about financial decisions, or any strategic decision, to avoid unexpected future consequences. Inconsistency in choice and preferences can lead to tumultuous risk attitudes, which can make us more likely to choose things that will have unfavorable consequences in the long run. Before making decisions, we should consider whether we would make the same choice were the consequences to occur immediately rather than after a delay. Being too happy-go-lucky about future outcomes can favor high-risk decisions.

Methodology

We collected data from 70 undergraduate students from the University of Paris. They all participated in both parts of the experiment, one with present consequences and one with delayed consequences. The real-incentives procedure developed in experimental economics was implemented: subjects made multiple choices, and one of their choices was randomly selected, played, and paid for real. This procedure aims at ensuring realistic, careful, and truthful answers.
Based on an interview with Professor Kemel about his paper “Risking the future? Measuring risk attitudes towards delayed consequences”, co-written with Professor Corina Paraschiv, published in the Journal of Economic Behavior & Organization, April 2023.
Part 2

Taking The Help or Going Alone: Students Do Worse Using ChatGPT

Decision Sciences

How good are students at using tools like ChatGPT? In particular, can they properly evaluate and correct the responses provided by ChatGPT to enhance their performance? An experiment on HEC Paris students set out to find out. Its results contribute to the debate on the consequences of using ChatGPT in education, and in work more generally.

If, as many suggest, ChatGPT-like tools will be central to many work practices in the future, then we need to think about how to design course elements that help today’s students and tomorrow’s professionals learn how to use these tools properly. A correct use will not involve humans copying the output of these tools blindly, but rather using it as a means to enhance their own performance. Hence the simple question: can students properly evaluate and, where necessary, correct the responses provided by ChatGPT to improve their grade in an assignment, for instance? Motivated by such considerations, I designed the following assignment in a first-year Master's-level course at HEC Paris.

Answering vs. correcting

Students were randomly assigned two cases, and were asked the same question about each. For the first case, students just had to provide the answer, in the traditional way, ‘from scratch’. For the second case, they were provided with an answer to the question: they were asked whether the answer was fully correct, and told to correct or add as required to make it ‘perfect’. They were told that each provided answer had been either produced by ChatGPT or by another student. In reality, in over 60% of cases, the answer had come from ChatGPT. 

Whilst the former “answer” task is arguably closer to current work practices, the latter “correct” task may correspond more closely to many jobs in the future, if AI tools become as ubiquitous as many predict.

However, the two tasks asked for the same thing – a full reply to the question concerning the case – and the same grading scheme was used for both. The marks for both tasks counted in equal amounts for the course grade, so students were motivated to make the same amount of effort on both. 



On this assignment, students do better without the help of ChatGPT

Nevertheless, the students, on average, got a 28% lower grade on the correct task than on the answer task. For a given case, a student correcting an answer provided by ChatGPT got, on average, 28 marks out of 100 less than a student answering the question by themselves. Students, it turns out, did considerably worse when they were given a ChatGPT aid and asked to correct it than if they were asked to provide an answer from scratch. 


A behavioral bias?

Perhaps these results can be explained by postulating high student trust in ChatGPT’s answers. However, students were explicitly primed to be wary of the responses provided: they had been informed that ChatGPT had been tested on a previous, similar assignment and had done pretty badly. And previous research suggests that such information typically undermines trust in algorithms. Moreover, no significant difference was found between grades on the correct task when students thought they were correcting ChatGPT and when they thought they were correcting another student.

 


 

A perhaps more promising explanation is in terms of confirmation bias – the tendency to insufficiently collect and interpret information contradicting a given belief or position. Inspection of answers shows a clear tendency among many students to make only small modifications to the provided responses, even where larger corrections were in order. Moreover, there is evidence that this bias tends to persist even when people are warned that the base belief has little claim to being correct (1, 2). Could the tendency to display insufficient criticism of certain positions – a bias that business schools worldwide, and HEC in particular, teach students to guard against – be behind potential misuses of ChatGPT and its alternatives?

Chatbots have been touted as having a future role in aiding humans in a range of areas; but this assumes that humans will be capable of using them properly. One important task for humans in such interactions will be to evaluate, and where necessary correct, the output of their chatbots. 

Our classroom experiment suggests that the professionals of tomorrow may do a considerably worse job when aided than when working alone – perhaps due to behavioral biases that have been long understood, perhaps due to some that remain to be further explored. 




If anything, this argues for more, rather than fewer, chatbots in the classroom. One of the skills of the future, which we will need to learn to teach today, is how to ensure that they actually help.


References:
1. Kahneman, D. Thinking, fast and slow. (Macmillan, 2011). 
2. Nickerson, R. S. Confirmation Bias: A Ubiquitous Phenomenon in Many Guises. Review of General Psychology 2, 175–220 (1998).
 

Article by Brian Hill, based on his paper, “Taking the help or going alone: ChatGPT and class assignments”.
Part 3

How Much to Reveal to Persuade a Decision Maker?

Economics

How much information does a company need to provide to decision makers to respect transparency while keeping its competitive edge? In a new study using a mathematical probabilistic model, HEC Paris professors in Economics and Decision Sciences Frédéric Koessler, Marie Laclau and Tristan Tomala find the optimal equilibrium of information for companies to reveal in situations where competitors are present.

In a context where a company is obliged to be truthful with a decision maker responsible for its fate, and has competitors in the market, how much information is it relevant to reveal? And does it need to share all the information for the sake of transparency?

Suppose a government body must decide whether to authorize putting a drug on the market. In this situation of uncertainty, where there are competitors in the marketplace, the decision will be based on the level of information shared by the pharmaceutical company, which doesn’t know how much to reveal.

Indeed, in some cases, it is not optimal to reveal everything because of the risk of revealing defects — but firms cannot lie. So how much information should a theoretical pharmaceutical company share with governmental authorities to convince them?

How much to reveal to inform a choice: a probabilistic model

Our study is based on a probabilistic model of persuasion (or “information design”) in an uncertain situation. It offers several options: revealing no information, partial information, or an abundance of information. As our model also includes competitors, the company must also find the right balance of information that persuades the decision maker that its drug is the best choice.

In other words, each firm (or “information designer”) seeks to convince, say, the U.S. Food and Drug Administration (the decision maker or “agent”) that its product is effective and would prefer that competitor firms’ products not be approved.
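
To see the logic in miniature, here is a hedged sketch of the classic single-designer persuasion problem (a simplification for illustration only; the study itself analyzes multiple competing designers and decision makers). A firm chooses how informative its clinical trial is, and a regulator, updating by Bayes' rule, approves only if the posterior probability that the drug is effective reaches a threshold. All numbers are hypothetical.

```python
# Single-designer persuasion sketch: the firm designs the trial's
# informativeness, the regulator approves iff the posterior >= 1/2.
from fractions import Fraction

prior = Fraction(3, 10)      # prior probability the drug is effective
threshold = Fraction(1, 2)   # regulator approves iff posterior >= 1/2

def approval_probability(q_good, q_bad):
    """Probability of approval when the trial reports "pass" with
    probability q_good in the good state and q_bad in the bad state,
    and the regulator updates by Bayes' rule."""
    p_pass = prior * q_good + (1 - prior) * q_bad
    if p_pass == 0:
        return Fraction(0)
    posterior = prior * q_good / p_pass
    return p_pass if posterior >= threshold else Fraction(0)

# Full disclosure: approval only when the drug truly works.
print(approval_probability(Fraction(1), Fraction(0)))   # 3/10
# Optimal partial disclosure: fail just often enough in the bad state
# that a "pass" exactly reaches the approval threshold.
q_bad_star = prior / (1 - prior)                         # 3/7
print(approval_probability(Fraction(1), q_bad_star))     # 3/5
```

The sketch reproduces the standard insight that revealing everything is not optimal: partial disclosure doubles the approval probability here, while never lying.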

In other examples, departments within an organization or university might try to persuade the head of the organization to allocate a position to their department.

In all these examples, interested parties try to design information to influence the behavior of decision makers. Our study provides a general theoretical framework to analyze such situations.

Case 1: Informing with both public and private messages 

We looked at situations where there were multiple information makers and decision makers. In the general case, information makers were able to send both public and private messages. With each message or piece of information, decision makers modify and update their beliefs and judgment depending on the selection of information they receive. So, the information maker hopes to induce beliefs that are beneficial to him or her. In this general model, we were able to find an equilibrium between the amount of information revealed and the benefits to the information makers.

A key challenge is to capture the equilibrium interplay between information-disclosure strategies and the choices of decision makers. Often, a decision maker might act in favor of one or another information maker. This is particularly salient when the decision maker is indifferent about whom to favor. In such a case, an additional tiny bit of evidence from one or the other might be decisive. We find equilibrium decision making whereby decision makers arbitrate between the interests of the several information makers.

Case 2: Informing with numerous public messages

In another model, each information maker sends only public messages to all agents. In this case, we discuss the number of messages (or clinical trials, in our example) to send to the decision maker. According to this model, we found a robust equilibrium – i.e., a right amount of information to share – when the informer sends a limited number of messages to the decision maker (or conducts a limited number of clinical trials). In other words, though theoretically sending an infinite number of messages (or clinical trials) might perfectly reveal the situation, there is no benefit to doing so.

This finding offers a huge simplification of the problem of searching for the equilibrium amount of information. The model does not put a priori bounds on the number of messages, so it might seem theoretically possible that adding more and more messages would be increasingly beneficial to the information maker. Our result proves that this is not the case and that the total amount of information is a priori bounded. This is key to designing computerized equilibrium calculations from market data.

Case 3: Informing with finite sets of messages

The final class is called “rectangular corporation problems”, where each designer controls a group of agents in one “corporation.” In this case, robust pure-strategy equilibria exist with finite sets of messages, and we can characterize the equilibria, i.e., the right amount of information to reveal.

The appeal of this specification of the model is to capture information within competing organizations and to be “simple” to solve. Each corporation can be seen as an organization or firm, composed of many collaborators who take individual decisions and have their own incentives. The information maker of a corporation can be thought of as the manager who releases information to his/her collaborators, considering both their own interests and the competition with other firms. The equilibria can be easily calculated using the traditional methods of game theory.

Conclusion

Our paper offers a methodological contribution to designing what information to disclose when there is competition between information providers for influencing decision makers. Disclosing information can come in various ways and is potentially very complex. 

We provide a unified model that can be used no matter how many information makers and decision makers and no matter the nature of information. We show that equilibrium information disclosure can be found in the most general case, though that task can be daunting. We find important cases where the task can be simplified, and equilibria can be directly calculated from market data.

 

*Long information design (with Frédéric Koessler, Jérôme Renault and Tristan Tomala), Theoretical Economics, 2022, 17, 883-927.

Practical Applications

We demonstrate the existence of an optimal strategy (or an “equilibrium”) in a context where a company is obliged to be truthful and has competitors in the market. This is a significant contribution for any information designer, and one of interest to applied economists who advise organizations. The authors have also documented ways to calculate the optimal strategies for different cases, which are explained in another paper*.

Methodology

This is a theoretical paper that uses mathematical models to provide a rational framework to analyze the decision-making process. Previous work in the field generally considered only one information designer. Our new mathematical model is applicable to situations where companies are in competition. We examine the trade-off between transparency and competitiveness for the information designer when there are multiple designers in competition and multiple decision makers.
Based on an interview with Marie Laclau and her article “Interactive Information Design” (Mathematics of Operations Research, February 2022), co-written with Frédéric Koessler and Tristan Tomala.
Marie Laclau, Associate Professor (Education Track)
Frédéric Koessler, Professor (Education Track)
Part 4

Nudges and Artificial Intelligence: for Better or for Worse?

Decision Sciences

The latest developments in artificial intelligence are often allied to the prospect of a better world, with more powerful, more rational algorithms straightening out human flaws. The idea often floated is that public policy will be more effective because it will be better informed and more responsive. Likewise, it is said that medicine will deliver speedier, more accurate diagnoses. But where are we when it comes to the subject of consumption? Can algorithms be used to steer consumers towards better, more enlightened – and less impulsive – choices? Why not? Online assessments about a product or service, for instance, help information flow more smoothly. But we are also heading down a new path: myriad algorithms are now the power behind “nudges”, those fine details about the shopping environment that insidiously guide the choices made by consumers.


Nudge theory was developed by Richard Thaler (Nobel Prize in Economics 2017) and Cass Sunstein in 2008. The two authors suggested that cognitive biases (faulty reasoning or perception that distorts decision-making) might serve as an instrument of public policy. These biases could be used to nudge individuals towards decisions that are deemed good for themselves or the wider community – but which they lack the perspicacity or motivation to pursue. Subtle changes in the decision-making environment can steer behaviors in a virtuous direction. Let's say you are staying in a hotel, and you know that most of the previous guests in your bedroom have re-used the same bath towel from day to day; conformity bias will then prompt you to follow suit. This same bias may prod you to cut back on your energy consumption if you find out that it is higher than your neighbor's. Automatically registering voters on the electoral roll – like pre-filled tax returns – is another instance that draws on the virtuous simplification of the target behavior. Nudging is an insidious way of inducing people to change behaviors while safeguarding their freedom of choice. It is an alternative to the conventional tools of state action such as, for example, bans or taxes.

The same methods of influencing people are also employed in marketing based on the following idea: if reason fails to persuade consumers about the utility of a purchase, you can coax them insidiously. Let’s say you want to reserve a hotel online. The site warns you that there are not many bedrooms left in your chosen category, and that other internet users are currently looking at them… all of which nudges you to book your room at full speed so you do not miss out on what you see as a rare opportunity. Websites that display a default purchase option prominently or default acceptance of specific terms and conditions are too numerous to mention. Of course, you are free to disregard these defaults provided you have time on your hands and enjoy a good search. A free trial that you end up paying for because you forgot to cancel it is another example of a nudge used for marketing. And then there are those discount vouchers with conditions so limited they never get used, or targeted ads that make well-timed offers. This approach distorts the very nature of nudging since it does not aim to improve the well-being of the consumer or society, which is why it is sometimes known as “bad nudging” or “sludging”.

The prediction made by Yuval Noah Harari in 2018 has, worryingly, already come true in part: “As biotechnology and machine learning improve, it will become easier to manipulate people's deepest emotions and desires [...] could you still tell the difference between yourself and [the] marketing experts?” It goes without saying that influence strategies are not exclusive to artificial intelligence. Door-to-door sales reps have known and used more-or-less ethical sales techniques for many years, and it follows that nudges can be employed by humans. Just think of the barista who asks if you would like a pastry – or even a set menu – when all you do is ask for a coffee. And then there is the sales assistant who kicks off negotiations with an inflated offer before pretending to give you a generous discount. Artificial intelligence, however, has the power to swell the use of influence methods by rolling them out systematically on a grand scale. The behavioral biases underpinning standard nudges were derived from experimental research. But big data can automatically detect the tiniest weak point in the decision-making process, which can then be leveraged to influence consumers. Once a new behavioral lever has been identified, algorithms can apply it extensively.

What are the consequences of these “bad nudges”? Consumers may feel deceived because they purchase items or services that do not match their real needs, or because attempts to hold out against the influences generate a fatigue that degrades the shopping experience. Accordingly, using nudges in the field of marketing serves to lower consumer well-being.

In more general terms, the wholesale deployment of nudges often systematizes mistakes that were formerly occasional: irrationality, in other words, becomes the norm.

 


 

In this respect, the growing use of nudges is upsetting the foundations on which liberal economics is built. In this model, it is the pressure exerted by consumers making informed choices that encourages producers to offer products that best match consumer needs at the best price. Nudges upend this process since producers can use them to influence consumer preferences. This means that consumers who have been influenced by nudges no longer exert their counter-power on producers. The possibility that consumer behavior may be swayed by nudging challenges some of the virtues of the market economy. Likewise, the idea that the public might cast their votes under influence undermines the basis of the democratic model. 

How can we stave off these damaging effects? Is regulation the answer? It would be problematic to legislate in this area, since the distinction between information and influence is so slight. At the very least, it could become a requirement that the information provided during the purchase process (such as the quantities available) be true, although this would still be difficult to enforce. Change could also be driven by consumers, who could turn their backs on platforms that employ these techniques. This is no easy task: the influences are not always conscious, and some platforms operate a quasi-monopoly. Sellers themselves could also reverse the trend by certifying that they do not use influence techniques as a way of guaranteeing quality and respect for their customers. This approach could be supported by artificial intelligence: algorithms could be used to automatically test online sales sites to detect nudges, and a certification label could be created. 
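
As a toy illustration of that last suggestion, the sketch below scans page text for a few common nudge patterns. The patterns and phrasings are our assumptions for illustration; a real audit or certification tool would need far more than regular expressions.

```python
# Toy detector for common marketing nudges (scarcity, social proof, urgency).
import re

NUDGE_PATTERNS = {
    "scarcity": re.compile(r"only \d+[\w ]* (left|remaining)|selling fast", re.I),
    "social proof": re.compile(r"\d+ (people|users) (are )?(viewing|looking)", re.I),
    "urgency": re.compile(r"(offer|deal) ends (soon|today|in \d+)", re.I),
}

def detect_nudges(page_text):
    """Return the names of the nudge patterns found in the page text."""
    return [name for name, pattern in NUDGE_PATTERNS.items()
            if pattern.search(page_text)]

print(detect_nudges("Hurry: only 2 rooms left! 14 people are viewing this hotel."))
# ['scarcity', 'social proof']
```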

Do we need “good algorithms” for fighting “bad ones”? Although this idea is simplistic, it does remind us that machines only do what we have designed them to do (apart from mistakes in programming). This means that it is up to consumers (or their representatives or advocates) to make use of the possibilities afforded by artificial intelligence to defend their interests.

Part 5

Yes, You Can Be Trained To Make Better Decisions

Decision Sciences

Mental distortions known as cognitive biases often shift our judgement away from rational prescriptions. While such biases are normal – it's just the way our brains are wired – they can lead to poor choices, sometimes with disastrous consequences. But new evidence shows how simple training can help us identify these biases and tremendously improve decision making.


Despite its incredible abilities, our brain is often fooled into making seemingly irrational decisions because of certain biases in the way it processes information. Decision making is complex, so we take mental shortcuts based on our emotions, experience or just the way information is framed. We tend to see patterns where there aren't any (clustering illusion), be overly optimistic about our own abilities (overconfidence bias), follow the judgement of others (bandwagon effect) and so on. Scientists regularly remind us of the many ways cognitive biases interfere with the choices we make. 

How does cognitive bias affect decision making?

It can cloud our judgement and lead to disastrous choices. Cognitive bias has practical ramifications beyond private life, extending to professional domains including business, military operations, political policy, and medicine. Some of the clearest examples of the effects of bias on consequential decisions feature the influence of confirmation bias on military operations. Confirmation bias – that is, the tendency to conduct a biased search for and interpretation of evidence in support of our hypotheses and beliefs – has contributed to the downing of Iran Air Flight 655 in 1988 and the decision to invade Iraq in 2003.

So are we doomed to make terrible decisions? 

Daniel Kahneman, 2002 Nobel Memorial Prize in Economic Sciences

Ever since Daniel Kahneman and Amos Tversky formalized the concept of cognitive bias in 1972, most empirical evidence has given credence to the claim that we are incapable of improving our own decision-making abilities. However, our latest field study, published in Psychological Science in September 2019, suggests that a one-shot debiasing training can significantly reduce the deleterious influence of cognitive bias on decision making. We conducted our experiment in a field setting involving 290 graduate business students at HEC Paris. In our experiment, a single training intervention reduced biased decision making by almost a third.

How much does (or could) this improve decision making? 

The results of our paper – led by Professors Anne Laure Sellier (HEC Paris), Irene Scopelliti (City University of London) and Carey K. Morewedge (Boston University) – establish a clear link between cognitive bias reduction training and improved judgment and decision-making abilities in a high-risk managerial context. Our results could have far-reaching consequences for everyday choices, but also for crucial and high-stakes decisions. At a military level, it could help avoid some of the deadly errors the US Armed Forces committed in the past. As American educator Ben Yagoda pointed out in his compelling article in The Atlantic last year, without confirmation bias, the US may not have believed Iraq possessed weapons of mass destruction and decided to invade Iraq in 2003. As the official 2005 report to George W. Bush put it: “The disciplined use of alternative hypotheses could have helped counter the natural cognitive tendency to force new information into existing paradigms.”


Which particular biases can be attenuated and how?

Our research focuses on one particular training intervention, which had produced large and long-lasting reductions of confirmation bias, correspondence bias, and the bias blind spot in the laboratory. Our intervention was originally created for the Office of the Director of National Intelligence and was designed to reduce bias in US government intelligence analysts.

The intervention involved playing a serious game that gives players personalized feedback and coaching on their susceptibility to cognitive biases. The training elicited biases from players during game play, and then defined each bias. It gave examples of how each bias influenced decision making in professional contexts (e.g., intelligence and medicine), explained to participants how their choices may have been influenced by the biases, and provided participants with strategies to avoid bias and practice opportunities to apply their learning to new problems.

How exactly did you train the participants in your study?

Before or after they played the serious game, students from three Master’s programs at HEC Paris were asked to crack Carter Racing, a complex business case modelled on the fatal decision to launch the Space Shuttle Challenger, which disintegrated shortly after lift-off in 1986. Each participant acted as the lead of an automotive racing team making a high-stakes, go/no-go decision: remain in a race or withdraw from it. The case is designed so that its surface features suggest the team should race, but careful analysis of the case evidence reveals that racing will have catastrophic consequences for the team. We measured the effects of cognitive bias reduction training to see if the intervention improved decision making in the case. Would trained participants decide to race, or not? Crucially, trainees were not aware that their decision making would be examined for bias.

Can such training truly improve judgement?

The results were promising. Participants trained before completing the case were 29% less likely to choose the inferior hypothesis-confirming solution (i.e., to race) than participants trained after completing the case. This result held when we controlled for individual differences including gender, work experience, GMAT scores, GPA, and even participants’ propensity for cognitive reflection (i.e., their tendency to override an incorrect “gut” response and engage in further reflection leading to a correct answer). Our analyses of participants’ justifications for their decisions suggest that their improved decision making was driven by a reduction in confirmatory hypothesis testing. Trained participants generated fewer arguments in support of racing – the inferior case solution – than did untrained participants.
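
For readers curious about the statistics behind such a claim, here is a sketch of the kind of regression it rests on. This is our illustration with hypothetical variable and file names, not the authors' code or data:

```python
# Logistic regression: does training before the case (trained_first) reduce
# the odds of choosing the bias-confirming "race" decision, controlling for
# individual differences? Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("carter_racing_choices.csv")  # hypothetical dataset

model = smf.logit(
    "chose_to_race ~ trained_first + gender + work_experience"
    " + gmat + gpa + cognitive_reflection",
    data=df,
).fit()
print(model.summary())  # a negative trained_first coefficient = debiasing
```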

Our results provide encouraging evidence that training can improve decision making in the field, generalizing to consequential decisions in professional and personal life. Trained participants were more likely to choose the optimal case solution, so training improved rather than impaired decision making.

How applicable are your (lab-tested) results in the wider world?

Of course, our findings are limited to a single field experiment. More research is needed to replicate the effect in other domains and to explain why this game-based training intervention transferred more effectively than other forms of training tested in past research. Games may be more engaging than lectures or written summaries of research findings. Another possibility is that the game provided intensive practice and personalized feedback. A third possibility is the way the intervention taught players about biases. Training may be more effective when it describes cognitive biases and how to mitigate them at an abstract level, and then gives trainees immediate practice testing out their new knowledge on different problems and contexts.


People have been debating how to overcome the many ways in which we deviate from rationality since well before the concept of cognitive bias was first coined over six decades ago. The general conclusion has been that decision making cannot be improved within persons, and that the only way to reduce bias is through changes to the environment, like nudges. In September 2018, Nobel laureate Daniel Kahneman said, “You can’t improve intuition. Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning… Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument, the rules go out the window.”

We believe our results show, fortunately, that this conclusion may be premature. Training appears to be a scalable and effective intervention that can improve decisions in professional and personal life.

Article based on an interview with Anne Laure Sellier of HEC Paris and on her paper, “Debiasing Training Improves Decision Making in the Field”, co-authored by Irene Scopelliti, of City University of London and Carey K. Morewedge of Boston University.
Part 6

Decision Making: Do You Need a Decision Theorist… or a Shrink?

Decision Sciences

Human beings are notoriously bad at making rational decisions. Even theoretical models designed to help you find the “right” answer are limited in their applications. A trio of researchers calls for a re-appraisal of decision theory, arguing that basic tools can improve decision making by challenging underlying assumptions and uncovering psychological biases.


©rudall30 on Adobe Stock

Is it worth insuring my house against hurricane damage? Which route will help me beat traffic? Should I invest in this stock? Latte, black, cappuccino, mocha or vanilla? Every day, we are faced with hundreds of decisions, some big, some small, some tough, some easy. Sometimes we follow our instinct, sometimes our intellect, sometimes we just go with habit. But more important than how we choose between various options, is the question, how should we choose? 

Decision theory offers a formal approach, often seen as a rational way to handle managerial decisions. While this theoretical framework has not lived up to early expectations, failing to provide the “right” answer in every case, a trio of researchers says not to throw the baby out with the bathwater just yet.

Decision making has been formalized, and usefully so, but…

Decision making has been formalized since the age of Enlightenment, a famous early example being Blaise Pascal's wager about the existence of God. Decision theory and its key concepts (utility, or desirability of an outcome, states of the world, or possible scenarios, etc.) culminated in the mid-20th century with the invention of game theory and the development of mathematical tools of analysis. 

“In the 1950's there was the idea that mathematical models could automate decisions,” says Itzhak Gilboa, professor of decision science at HEC Paris. “There has been a measure of success, with applications to logistics, or, for example, to route optimization with Google Maps.” 

And yet, today, decision theory is all but dismissed, including in business circles. Olivier Sibony, who worked as a management consultant for 25 years before joining HEC Paris to teach strategy, says he literally never encountered decision theory in those 25 years, either in words or practice, the exception being within a minority of financial institutions. “It's shocking,” he muses, “because it is taught in business schools as a sensible way to make decisions.” 

…Decision theory has its limits 

The textbook model of decision theory, however enticing and elegant it may be, has a number of limitations that prevent it from being widely used by managers.

The theoretical model raises some very practical challenges. Probability is often hard to calculate due to lack of data about the same past problems. Similarly, the desirability of an outcome, such as career choice for example, is hard to quantify because of the wealth of criteria by which it is judged: income, prestige, work-life balance... 

What’s more, behavioral psychology has shown that human beings, far from being the rational agents assumed by economic theory, are hopelessly irrational. Confirmation bias makes us prone to disregard negative data about the option we are considering; overconfidence makes us consistently overestimate our chances of success; mental accounting makes us value equivalent outcomes differently depending on the way they are framed; and on and on.

The list of psychological biases we suffer from is so long, it's a miracle that we haven't blundered ourselves into extinction, as a race. “But we are teetering on the brink of just that!” counters Olivier Sibony. 

 


 

And just because the world functions relatively well doesn't mean we have been good at making decisions, including in business, where success often boils down to sheer luck. “Even a billionaire like Warren Buffet acknowledges the role of luck in his success,” adds Sibony. “We do observe a lot of failures; after all, millions of years of evolution have prepared us to recognize rotten food, but not rotten counterparties,” joke both HEC professors. 

For a rehabilitation of the basic tools of decision theory

Recognizing all the limitations of decision theory, the specialists nonetheless believe that certain tools can be helpful. 

The axioms of rational decision making are especially important in the context of strategic decisions made by managers and executives, who might need to present and justify decisions to their superiors or boards. 

 


 

Decision theory is not a magic wand for a final answer. It should be used as a conceptual framework, or tool, rather than as a theory that is directly applicable. The researchers outline three types of decisions and how decision theory can potentially serve in each of those cases:

1. In the first type of decision, outcomes and probabilities are clear and all relevant inputs are known or knowable, which means that finding the best solution is simply a matter of using mathematical analysis based on classical decision theory. Simple computing power can find the single best solution (optimize a route or, in the case developed in the research article, allocate sales reps to territories according to travel costs; see the sketch after this list). The decision maker need not even know the details of the algorithm that the software uses. 

2. In the second type of decision, the desired outcome is clear but not all of the relevant inputs are known or knowable. In this case, decision theory cannot provide a single best answer but can test the consistency of the reasoning by formulating the decision maker’s goals, constraints, and so on, to check whether the reasoning makes sense. 

3. In the third type of decision, either because data is missing or because the logic of the proposed decision cannot be articulated, even the desired outcome is unclear. In such a case, the problem cannot be described in the language of decision theory. But, while theory cannot provide a “correct” answer, it can still serve to test the intuition and logic of the decision maker.
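
To make the first type concrete, here is a minimal sketch (our illustration, with made-up costs, not an example from the research article) of such a fully specified optimization: assigning sales reps to territories to minimize total travel cost, solved exactly as an assignment problem.

```python
# Type 1 decision: all inputs known, so software can simply optimize.
import numpy as np
from scipy.optimize import linear_sum_assignment

# travel_cost[i][j] = cost of sending rep i to territory j (hypothetical)
travel_cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

reps, territories = linear_sum_assignment(travel_cost)
for i, j in zip(reps, territories):
    print(f"rep {i} -> territory {j} (cost {travel_cost[i, j]})")
print("total cost:", travel_cost[reps, territories].sum())  # 5
```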

There may be no objective way to assign precise probabilities to different scenarios, or even to identify all the possibilities, but the theory can still potentially challenge underlying assumptions or processes. 

“If you want to be in a certain market just for your ego, fine, but it's my job to uncover it!” says Itzhak Gilboa, comparing the process to “sitting down with a shrink before you press the button”. The idea is simply to understand one's own motivations for a decision and to be comfortable enough with them to explain the rationale to one's own boss. The researcher likes to think of the approach as a “humanistic project”, improving decisions in a way that will ultimately be useful to society – “even if business decisions are rarely life and death matters!”

Applications

The researchers say the most important idea to retain is that of challenging decision making. When it comes to the second and third type of decisions, where an algorithm cannot simply identify the best solution for you, the researchers recommend collaborating with someone who has a firm grasp of decision theory – someone who knows, for example, what a utility function is, or desirability of outcome, and so on – to challenge your decision-making process. "The best thing you can do to improve the quality of a decision is to ask an outsider to challenge not the decision itself but the process and its logic,” says Olivier Sibony. “There are very practical ways of getting theory and practice to dialogue, by setting up routines and methods."

Methodology

The paper first reviews the main principles and concepts of decision theory and explores its limitations to explain why it is not currently used in business decision making. The researchers then make a case for decision theory as a conceptual framework whose tools can be used to support and refine intuition, and give examples of applications through three imaginary dialogues with executives faced with three different business cases.
Based on an interview with Itzhak Gilboa and Olivier Sibony on their research paper “Decision theory made relevant: Between the software and the shrink,” co-authored with former HEC PhD student Maria Rouziou (Research in Economics, 2018). To find out more about how to use decision theory to challenge your decision making, read the full paper here.
Part 7

How to Deal with Severe Uncertainty?

Decision Sciences

Severe uncertainty, deep uncertainty, radical uncertainty, ambiguity… different actors in a range of fields – decision scientists, risk analysts, climate scientists, central bankers – use a variety of phrases to talk of some extreme, important yet too often ignored form of uncertainty. But what is it? And how should we deal with this particular species of uncertainty: how should we characterise it, communicate it, and decide in the face of it? In this interview, CNRS Research Director and HEC Paris Research Professor Brian Hill explains the concept and unveils applicable tools based on theoretical models for guiding decisions in situations of severe uncertainty.

(Photo: hourglass on the grass ©icedmocha on Adobe Stock)


What is severe uncertainty?

A central characteristic of severe uncertainty is the lack of justified probabilities. When tossing a coin, we know precisely the probability of heads. Economists standardly assume that all uncertainties are glorified coin tosses: we can come up with a precise probability for whatever might happen (even if we might not always be right about it). But clearly many real-life situations are just not like that. There are many cases where we don’t know something for sure, and, though that doesn’t necessarily mean that we know nothing at all, what we do know is not enough to justify a solid, precise probability.

 

A central characteristic of severe uncertainty is the lack of justified probabilities.

 

What’s the coronavirus mortality rate? We know that it’s worse than the flu, and below 15%, but beyond that? Can we give a number we are 90% sure about? How fast will the global economy recover to turn-of-the-year GDP levels, or the Dow Jones to its pre-Covid-19 levels? They will almost surely not be there by September, but beyond that? Can we put precise probabilities? What will happen to sea level in, say, New York over the next 30 years? Given our understanding of climate change, we know it will rise, and almost certainly by less than 4m, but beyond that?  

Why is severe uncertainty relevant now?

Severe uncertainty is especially relevant now because we increasingly face situations involving it. Examples abound, including climate mitigation policy, Coronavirus reaction, economic policy, and of course business decisions. I should also add that this is being increasingly recognized, with the ex-governor of the Bank of England, Lord King, having just published a book on Radical Uncertainty with John Kay.

 

These decisions don’t allow us the time to do that: we have to respond to the Coronavirus before fully understanding it.

 

What do all these examples have in common? Urgency. Since the problem is lack of knowledge, one instinctual response would be to go out and do (more) research. But these decisions don’t allow us the time to do that: we have to respond to the Coronavirus before fully understanding it; by the time we know the sea level in New York in 2050 it might be too late to save it from flooding; and so on.

"Can we put precise probabilities? What will happen to sea level in, say, New York over the next 30 years?" (Photo: South of Manhattan, New York City ©DiegoAransay on AdobeStock)

 

Why do most people in economics, finance and risk analysis continue to discount severe uncertainty, by assuming that all uncertainty can be fully captured by probabilities?

There are basically two reasons: one pragmatic and the other principled. First, it’s easier to work with precise probabilities, and the mathematical methods are familiar. Second, a bunch of philosophical, “axiom-based” arguments purport to show that, if you stray from precise probabilities, your decision making will violate some seemingly “rational” dynamic principles. These arguments have persuaded many over the years. If they were right, then these rationality principles would justify pretending that we always had precise probabilities (despite the egregiousness of the pretence).

 

In my research, I show that you can satisfy the rationality principles, even if you do not stick to precise probabilities.

 

In my research (1), I show that these arguments rest on a mistake: you can satisfy (properly formalised versions of) the rationality principles even if you do not stick to precise probabilities. This removes the main hurdle to building an account of rational or sensible decision making that doesn’t need to assume precise probabilities. Beyond these arguments, the only barrier to a more refined, richer approach to uncertainty is inertia.

How should we decide in the face of severe uncertainty, then? 

As I see it, severe uncertainty poses a double challenge. The first is to work out what we do know and how solid that knowledge is, avoiding two pitfalls: nihilism – assuming that because we can’t put probabilities, we don’t know anything at all – and self-deception – pretending or assuming that we know more or have more precise knowledge than we in fact do. The second is to work out how to harness what we know – and more importantly recognize what we don’t – in decision making. Good, responsible, and informed but not self-deceptive decision making.

In my research, "Confidence in Beliefs and Rational Decision Making" (2), I have developed an approach to decision under uncertainty that meets each of these challenges. It combines two ingredients:

1. Confidence

Forget pretending that you can always give a probability and:

a. Ask for your best guess. Then ask how confident you are of it. That might not be very confident at all (if so: don’t rely on it!)
b. Then ask: if you had to give a probability range that you were very confident in, what would it be? (For difficult cases, this range could be very large: that’s what makes the case difficult!)
c. Repeat, asking for ranges that you are more or less confident in, or sure of.
(Note that ranges are well-known ways of not having to give precise values. To take a topical example, often in discussions of Covid-19 (e.g. here), epidemiologists report ranges. Under the proposal, you don’t even need to settle on a single range, but just ask how confident you are in a given range – on the basis of what you know).

 

2. Confidence-based caution
a. For more important decisions, demand more confidence in the judgements on which you rely to take the decision. If you have lots of confidence in a judgement or an assessment, by all means base your decision on it. If not, perhaps you should fall back on the (weaker, more imprecise) judgements of which you are more sure – especially if the decision is very important.
b. Now these judgements may be so weak as not to support any option as best: you don’t know enough to categorically justify a single course of action. In such cases, acknowledging this is a crucial first step. In the face of it, it’s best to show caution and take an alternative that won’t lead to too bad a result, no matter which of the values in the range (of which you are sufficiently confident) turns out to be right.

 

Basically, this advice amounts to applying precaution when you are not confident enough for the importance of the decision, and choosing boldly when you are.
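As a rough illustration of how the two ingredients fit together, here is a minimal sketch in Python (invented numbers and a deliberately simplified choice rule, not Professor Hill's formal model): probability ranges widen as more confidence is demanded, and more important decisions are evaluated on the wider ranges.

# Hypothetical elicited ranges for an uncertain success probability:
# the more confidence we demand, the wider the range we must accept.
confidence_ranges = {
    0.50: (0.25, 0.40),   # best-guess range, held with modest confidence
    0.90: (0.15, 0.55),   # a range we are 90% sure of
    0.99: (0.05, 0.70),   # a very wide range, held with near-certainty
}

def range_for(importance):
    # Confidence-based caution: more important decisions demand
    # judgements held with more confidence, hence wider ranges.
    level = min(c for c in confidence_ranges if c >= importance)
    return confidence_ranges[level]

def cautious_value(act, importance):
    # Worst-case expected payoff of an act over the relevant range.
    # act = (payoff if success, payoff otherwise); expected payoff is
    # linear in p, so the minimum sits at an endpoint of the range.
    lo, hi = range_for(importance)
    good, bad = act
    return min(p * good + (1 - p) * bad for p in (lo, hi))

venture, safe = (100.0, -20.0), (8.0, 8.0)
print(cautious_value(venture, 0.5), cautious_value(safe, 0.5))    # 10.0 8.0
print(cautious_value(venture, 0.99), cautious_value(safe, 0.99))  # -14.0 8.0

With these made-up numbers, the bold venture is chosen when the stakes are routine, but the safe option wins once the decision is important enough to require near-certainty.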

This approach is not just common sense: in my research (2), I have shown that it can be defended by the sort of principled, “rationality” arguments used by some to defend the reducibility of all uncertainty to probabilities.

What about models? 

Criticism of, say, economic models often has a tendency, when attacking the use of probabilities to represent uncertainty, to throw the baby out with the bathwater. This is a case of what I previously called the pitfall of nihilism. By contrast, climate scientists have a relatively sophisticated use of models, which can serve as an example.

They realise that models are the input to an assessment or judgement about the question of interest (e.g. temperature in 2050, etc.), but no model – nor even all models – provide the whole picture.

 

In my research on climate uncertainty, uncertainty is reported as a form of confidence judgements on the probability assessments that come out, or could have come out, of the models.

 

Climate scientists (e.g. in IPCC reports) have to make a judgement, drawing on models, but also on other evidence, their experience and common sense. And these judgements do not generally come in the form of precise probabilities, although that’s what models produce. Rather, as I have discussed in my research with co-authors on climate uncertainty ((3) and (4)), they rightly report uncertainty in the form of confidence judgements on the probability assessments that come out, or could have come out, of the models. In other words, they adopt as reporting practice the approach I set out above.

 
1. Dynamic consistency and ambiguity: A reappraisal, Games and Economic Behavior, 120: 289-310, 2020.
2. Confidence in Beliefs and Rational Decision Making, Economics and Philosophy, 35(2): 223-258, 2019.
3. Climate Change Assessments: Confidence, Probability and Decision, Philosophy of Science, 84(3): 500-522, 2017 (with R. Bradley and C. Helgeson).
4. Combining probability with qualitative degree-of-certainty metrics in assessment, Climatic Change, 149(3-4): 517-525, 2018 (with R. Bradley and C. Helgeson).

 

Learn more on Brian Hill’s “Decision Making under Severe Uncertainty” website, including filmed interviews of experts on the “Uncertainty Across Disciplines” project.  
Part 8

Uncertainty Across Disciplines

Decision Sciences

We, individuals and society, are faced today with many important decisions involving radical degrees of uncertainty. To better communicate on the current state of knowledge about uncertainty, and incorporate it into decisions, Brian Hill, CNRS and HEC Paris Professor of Economics and Decision Sciences, initiated the "Uncertainty Across Disciplines" project, interviewing 10 leading experts on the topic of uncertainty.

(Photo: Itzhak Gilboa, Professor of Economics and Decision Sciences at HEC Paris)

How should governments decide in the face of the sorts of uncertainties involved in climate change, energy policy, genetically modified organisms or nanotechnologies, to take a few examples? What role should scientists’ current state of knowledge and uncertainty play, and how can this uncertainty best be represented, communicated and incorporated into decisions?

The Uncertainty Across Disciplines project aims to paint a portrait of the current state of the art across a wide range of scientific disciplines and professions regarding the study of (severe) uncertainties, and decisions in the face of them.

Through a series of 10 interviews of leading experts and actors, the project presents the perspectives, results and positions in these fields, as well as individual viewpoints on the current and future state of research and practice. 

Find the 10 interviews on this page, the video playlist on YouTube here, as well as the podcast playlist and a special podcast on the COVID-19 case.

 

(Photo: Prof. Brian Hill, on the left, interviewing Prof. Massimo Marinacci, on the right)

 

These interviews will hopefully allow comparison, stimulate discussion, and foster communication and collaboration among these various actors. 

- Brian Hill, CNRS Research Professor in the Economics and Decision Sciences Department at HEC Paris

 

 

Filmed interviews and podcast playlist

Why is the Coronavirus pandemic a particularly challenging case for decision-makers today? In a dedicated podcast, Brian Hill provides tools for appropriate and rational decision-making through the notion of confidence in judgments.

Part 9

The Impact of Overconfidence and Attitudes towards Ambiguity on Market Entry

Decision Sciences

For many people who start an entrepreneurial adventure, the biggest challenge is to believe in themselves. Yet, for those who choose this path, confidence can also make the entrepreneur underestimate actual business risks, leading to fatal decisions. Researchers of HEC Paris Business School and Bocconi University offer a new explanation for why decision makers often appear too confident, and shed light on the consequences of this characteristic.

(Cover photo: ©lassedesignen on Adobe Stock)

 

Many of the key strategic decisions made in businesses may result in wasteful allocation of resources or excess market entry. For example, close to 75% of those who choose careers in entrepreneurship would have been better off as wage workers, and almost 80% of angel investors never recoup their money, both indicating that too many (unskilled) people enter into these activities. Similarly, an average corporate acquisition is more likely to destroy value than to add value.

Many of the key strategic decisions made in businesses may result in wasteful allocation of resources or excess market entry.

Why does this happen? One possible answer that we study may lie in systematic biases that decision makers exhibit when making business entry decisions. We focus on the behavioral drivers of market entry in strategic business contexts with two characteristics that are virtually omnipresent. First, these settings are inherently ambiguous. That is, we know what might happen, but we don’t know how likely each outcome is. Ambiguous situations can be contrasted with risky ones, where we know the chances of what will happen – for example, when playing roulette. Second, the ambiguity in such settings, and the associated payoff, is likely to be perceived by decision makers as related to their own skills, often in comparison to rivals.

(Photo: ©Robert Kneschke on AdobeStock)

These characteristics imply that at least two distinct behavioral mechanisms could explain entry into the ambiguous, skill-based markets on which we focus in this study: overconfidence – believing that one’s chances of success are higher than they really are – and a positive attitude toward ambiguity.
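To fix ideas, here is a minimal numerical sketch in Python (invented payoffs, not the authors' experimental design) of how ambiguity attitude alone can tip an entry decision when only a range of success chances is known:

def expected_payoff(p_success, prize=100.0, entry_cost=20.0):
    # Enter a competition: win `prize` with chance p_success, pay the
    # entry cost regardless of the outcome (all numbers illustrative).
    return p_success * prize - entry_cost

# Risk: a chance-based game with a known 25% success probability.
print(expected_payoff(0.25))    # 5.0: entry has positive expected value

# Ambiguity: a skill-based contest whose success chance is only known
# to lie somewhere between 10% and 40%.
p_low, p_high = 0.10, 0.40
print(expected_payoff(p_low))   # -10.0: an ambiguity-averse entrant stays out
print(expected_payoff(p_high))  # 20.0: an ambiguity-seeking entrant goes in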

Like many before us, we use a laboratory setting to make more precise claims about causality. We rely on a novel experimental treatment in which we change the level of confidence that individuals have in their own skills, and the level of ambiguity.

Decision makers are ambiguity seeking when the result of the competition depends on their own and others’ skills.

We find that decision makers are ambiguity seeking when the result of the competition depends on their own and others’ skills. That is, decision makers are more willing to gamble with their money on competitions where the distribution of outcomes is shrouded by a lack of knowledge about what will happen, rather than when they have precise data on the chances of success. When outcomes of competitions are more unknown, having the opportunity to believe that your own ability affects results appears to make them more attractive. 

Similarly, we also show that overconfidence only affects entry in skill-based competitions and does not appear in games that are chance-based. 

Both overconfidence and ambiguity seeking can therefore explain why individuals enter into entrepreneurship taking huge risks with their savings, or why mergers and acquisitions often do not pay off.

Article by Thomas Astebro, L’Oreal Professor of Entrepreneurship at the Economics and Decision Sciences Department at HEC Paris, based on the research paper, The Impact of Overconfidence and Ambiguity Attitude on Market Entry, co-authored by Cédric Gutierrez of Bocconi University and Tomasz Obloj of HEC Paris. Published in Organization Science (2020).
Part 10

Thinking About Time Flying? It Can Affect Your Decision Making

Decision Sciences

When the clock in our minds ticks loudly, it changes not only our perspective of the time remaining in our lives, but also how we process information. A trio of researchers investigated how thinking about the concept of time can affect our decision making. This unique piece of research could explain biases in hiring, voting, and many other contexts.

(Cover photo: ©kirill_makarov on AdobeStock)

What happens in our minds when time seems to pass by quickly?

Do you ever get the feeling that your time is running out? Perhaps you’ve been dwelling on the fact that we’ve reached the end of another decade and you’ve still not got life quite figured out. Maybe you’re questioning your life choices after seeing that your friends are all getting married, having children and buying houses, and you’re still stuck in the same job you had five years ago. We all get the feeling that the clock is ticking every now and then, but does this feeling change the way that we interpret new information? This is what we set out to investigate: specifically, we wanted to see how this feeling that the clock is ticking impacts a phenomenon known as “information distortion”.

Information distortion is the idea that people tend to be biased towards their pre-existing beliefs when they hear new facts.

Information distortion is the idea that people tend to be biased towards their pre-existing beliefs when they hear new facts. For example, imagine you are a hiring manager at an accountancy firm and you must choose between two job applicants, Adam and Mark. You hear a series of pieces of information about them in sequence. The first piece of information you look at just so happens to be education. Adam has a first-class university degree but Mark only received a second-class degree. Next you learn that Adam has already received some experience working in another similar firm while Mark is fresh out of university. Information distortion occurs if you were to evaluate this second piece of information, the job experience Adam received, as favouring him more than you would have done if you hadn’t already seen that he received a first-class degree. This phenomenon has been shown to occur everywhere from legal decisions to medical diagnoses.
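A stylized sketch in Python (our illustration, not the task used in the experiments) shows how this bias compounds over a sequence of evidence: each new fact is read as slightly more favourable to whichever candidate currently leads.

def distorted_reading(fact, leaning, distortion=0.3):
    # Positive values favour Adam, negative favour Mark; the same fact
    # is evaluated as stronger for whoever is currently ahead.
    tilt = distortion if leaning > 0 else -distortion if leaning < 0 else 0.0
    return fact + tilt

facts = [1.0, 0.2, -0.8, -0.5]   # an objectively mixed record
leaning = 0.0
for fact in facts:
    leaning += distorted_reading(fact, leaning)

print(round(sum(facts), 2), round(leaning, 2))  # -0.1 unbiased vs 0.8 distorted

An unbiased reading of this record ends marginally against Adam; once early evidence colours later evaluations, he ends up comfortably ahead.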

(Photo: ©weedezign on AdobeStock)

Manipulating people's time perspective

In order to test experimentally whether the feeling that time is running out, known as “limited time perspective”, impacts information distortion, we asked participants to describe a milestone in their life which they felt they had limited time left to achieve. They were given examples such as getting married or achieving their dream career. Participants in the control group were instead asked to report how long they spent each week completing surveys on Amazon Mechanical Turk, the platform where they were recruited. Next, we asked them how likely they would be to invest in a new business venture producing a new type of material for making furniture. We then presented four attributes of the material in sequence. After each feature of the material was presented, we asked the participants to rate whether the new information made them more likely to invest in the product.

Our finding could help us understand why in the world today facts seem to be becoming more and more distorted and political polarisation appears to be increasing.

As we predicted, we found that leading participants to have a limited time perspective made them more likely to distort information. In other words, thinking about the limited time left in their lives made them more likely to hold on to the beliefs they had before receiving new information.

Our finding could help us understand why, in the world today, facts seem to be becoming more and more distorted and political polarisation appears to be increasing. As facts become distorted, such as in the case of “fake news” websites that spread ideology-fuelled misinformation, people become more polarised, ebbing towards opposing ends of the political spectrum and rejecting evidence that doesn’t confirm their beliefs. Our results suggest that this could be linked to the fact that we live in a society where we often feel we don’t have enough time; it is possible that this feeling is increasing political polarisation.

“Our results suggest that rejecting evidence that doesn’t confirm one’s beliefs could be linked to the fact that we live in a society where we often feel we don’t have enough time.” (Photo by Reno Laithienne on Unsplash)

When age increases bias

Another aspect of the recent phenomenon of increasing political polarisation that is touched upon by our research is ageing. It’s well known that the elderly tend to vote differently from young people and in recent times there has been much speculation that the gap is widening. In recent years, this gap in voting behaviour has been blamed on everything from Brexit to the election of Donald Trump.

Thinking about the limited time left in their lives made them more likely to hold on to the beliefs they had before having new information.

In order to assess whether age has an impact on information distortion, we repeated our study, but instead of artificially leading participants to have a limited time perspective, we looked at age. To make our participants think about their age, we asked them to categorise themselves as 18-29, 30-50, or over 50 years of age. We then conducted our study as before and compared the results across the age groupings. As we expected, we found that ageing had the same impact as having a limited time perspective: our older participants were more likely to show bias towards their own pre-existing beliefs. 

Ultimately, our work shows that the age-old phenomenon of age impacting information distortion can be artificially induced very easily in people of all ages by making them think about the time they have left in their lives. Our research provides the first evidence of such a phenomenon, so it should be treated with a healthy level of scepticism until it is supported by further studies; however, it may provide a fruitful avenue for further research.

Methodology

We used Amazon’s Mechanical Turk to recruit participants, then we manipulated them to have a limited time perspective. After that we had them complete a decision task involving imagining investing in a new business venture, in order to assess the impact of limited time perspective on information distortion.

Applications

Our research has implications for political scientists studying the causes of information distortion. It may also prove valuable for marketers as our work could have implications for subjects such as brand loyalty and consumer confidence. It could also benefit human resources professionals due to the implications our work has for understanding the decision process of older managers.
Based on an interview with Anne-Sophie Chaxel and on her article “The impact of a limited time perspective on information distortion”, co-written with Catherine Wiggins of Cornell University and Jieru Xie of Virginia Polytechnic Institute and State University, Organizational Behavior and Human Decision Processes, 149 (2018).
Part 11

A New Theory in Economics Helps Predict Future Events

Economics

When will the next financial crisis occur? Who is going to win the next US presidential election? How do we create beliefs about such events? By understanding how probabilistic beliefs form, economic theorists can now explain and predict phenomena that depend on rational beliefs. Latest research by Rossella Argenziano and Itzhak Gilboa equips economic modeling with a theory and a set of tools of belief formation, based on statistics and psychology. Immediate applications include equilibrium selection in coordination games.

(Photo: ©Tatiana on Adobe Stock)

How can people predict future events? They create beliefs and probabilities based on the observation of similarities between past events and an ongoing event. Let’s understand this through three cases: the Obama election, the fall of the Soviet bloc, and the curbing of inflation.

1 - The Obama election

The election of Barack Obama triggered excitement and enthusiasm because a non-white candidate became President of the United States for the first time. Presidential elections are rare events, and no two are exactly alike. This makes the use of statistics tricky: which past events should be included in one’s sample? How do we describe present and past events? In particular, is "race" an important feature? We claim that the precedent of Obama’s election didn’t only change the statistics – with one non-white president as opposed to zero – but also changed the way we do statistics: it showed that "race" was not an important variable in judging the similarity between events.

 

People, especially economists, can predict future events by creating beliefs and probabilities based on the observation of similarities between past events and an ongoing event.

 

2 - The fall of the Soviet bloc

The Soviet bloc started collapsing with Poland, which was the first country in the Warsaw Pact to break free from the rule of the USSR. Once this was allowed by the USSR, practically all its satellites in Eastern Europe underwent democratic revolutions, culminating in the fall of the Berlin Wall in 1989. The single precedent of Poland generated a "domino effect." This paper suggests a belief formation process that explains how a single precedent can have such a dramatic effect even in the absence of informational spillovers and strategic dependency among games.

 

(Photo: Fall of the Berlin Wall, November 1989 ©Raphaël Thiémard)

 

Revolution attempts are typically modeled as coordination games*: the expected utility derived from taking part in an uprising increases in the probability of its success, which in turn increases in the number of participants. For a citizen trying to decide whether to join such an attempt, it is crucial to predict the outcome of the uprising. A natural piece of information to use for such a prediction is the outcome of past revolutions in similar contexts. We suggest that the importance of the successful revolution in Poland didn't lie only in changing the relative frequency of successful revolutions, but also in changing the notion of which past revolution attempts were similar to current ones, hence relevant to predict their outcomes.

Specifically, the case of Poland was the first revolution attempt after the "Glasnost" policy was declared and implemented by the USSR. Pre-Glasnost attempts in Hungary in 1956 and in Czechoslovakia in 1968 had failed. In 1989, one might well wonder, has Glasnost made a difference? Is it a new era, where older cases of revolution attempts are no longer relevant to predict the outcome of a new one, or is it "Business as usual", and Glasnost doesn't change much more than does, say, a leader's proper name, leaving pre-Glasnost cases relevant for prediction?

So how can we learn that the revolution in Poland could help in attempting new, successful revolutions?

If the revolution attempt in Poland were to fail as did previous ones, it would seem that the variable "post-Glasnost" does not matter for prediction: with or without it, revolution attempts fail. As a result, when a person wonders what is the "right" way of judging similarity between past cases, she would likely be led to the conclusion that the variable "post-Glasnost" should be ignored, and that, consequently, the statistics are zero successes out of three revolution attempts. By contrast, because the revolution attempt in Poland succeeded, it had a double effect on the statistics. First, it increased the frequency of successful revolutions from 0/2 to 1/3. While 1/3 is larger than 0, it still leads to pessimistic predictions about successes of future attempts. However, if people also learn how to judge similarity, the single case of Poland leads them to the conclusion that "post-Glasnost" is an important variable. 

How can we learn to judge whether a past event is similar to a current one?

The theory presented in our latest research paper, "Similarity-Nash Equilibria* in Statistical Games", suggests people learn from past events not only what are the frequencies, but also what is the relevant database.

 

Our theory suggests people learn from past events not only what are the frequencies, but also what is the relevant database.

 

Indeed, if we use the Polish revolution as an example, the frequency of successes post-Glasnost, 1/1, differs dramatically from the pre-Glasnost frequency, 0/2. Once this is taken into account, pre-Glasnost events are not as relevant for prediction as they used to be. If we consider the somewhat extreme view that post-Glasnost attempts constitute a class apart, the relevant empirical frequency of success becomes 1/1 rather than 1/3. Correspondingly, other countries in the Soviet Bloc could be encouraged by this single precedent, and soon it wasn't single any more.

How can we find the relevant variable among many others to judge whether a past situation is similar to a current one?
In a previous paper (1), we show that the “empirically optimal similarity function” can be identified under certain conditions. In essence, many observations for few variables make learning easier.
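As a rough illustration (a simplified reading of the idea, not the paper's formal model), predictions can be written in Python as similarity-weighted frequencies of past outcomes. Whether "post-Glasnost" enters the similarity judgement is exactly what moves the forecast from 1/3 to 1/1:

def predict(current, cases, similarity):
    # cases: list of (features, outcome) pairs, where outcome 1 = success.
    # Prediction = past outcomes weighted by similarity to the current case.
    weights = [similarity(features, current) for features, _ in cases]
    return sum(w * y for w, (_, y) in zip(weights, cases)) / sum(weights)

cases = [({"post_glasnost": 0}, 0),   # Hungary 1956: failed
         ({"post_glasnost": 0}, 0),   # Czechoslovakia 1968: failed
         ({"post_glasnost": 1}, 1)]   # Poland 1989: succeeded
now = {"post_glasnost": 1}

# If the variable is judged irrelevant, all past cases count equally.
ignore = lambda past, cur: 1.0
# If it is judged crucial, only post-Glasnost cases are deemed similar.
crucial = lambda past, cur: 1.0 if past["post_glasnost"] == cur["post_glasnost"] else 0.0

print(predict(now, cases, ignore))    # 0.333...: pessimistic forecast
print(predict(now, cases, crucial))   # 1.0: the single precedent dominates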

3 - Curbing inflation

As another example, consider a central bank which redenominates* its currency in an attempt to restrain inflation. Inflation is an equilibrium phenomenon: an economic agent (or individual) who expects others to raise prices of goods and services would be wise to do so herself. Thus, one can think of the inflation game as a price-setting game with multiple equilibria, and redenomination as an attempt to switch from a hyperinflation equilibrium to a low-inflation equilibrium (2). Will economic agents – consumers and firms, bankers and investors – use the new variable in their belief formation? Will firms assume that prices will no longer increase when pricing their own goods? Or will they dismiss the redenomination as a "cosmetic change" and believe that inflation will continue to run high? Our analysis suggests that the answer depends on the periods immediately following the redenomination: if in these periods inflation is low, the variable “new currency” will be used for prediction and a new, low-inflation equilibrium can be reached. In other words, if something happens today, the coming periods will be crucial in judging whether it marks a genuine break with the past.

 

(Photo: ©Andrey Popov on AdobeStock)

 

By contrast, if in the first periods the inflation rate continues to be high, agents will realize that it’s “business as usual”, and the variable will be judged irrelevant: with and without the change, things look the same. As a result, the entire history will be used for prediction, making it very difficult to convince economic agents that the future will differ from the past. Israel switched from the Lira to the Shekel (worth 10 Liras) in 1980 and then to the New Shekel (worth 1,000 Shekels) in 1985. In 1980, the change was not accompanied by fiscal policy changes – the government didn’t cut expenses and kept financing the deficit by “printing money” – so inflation spiraled into hyperinflation. According to our account, people realized that, Shekel or Lira, inflation runs high, and then, of course, it did.

By contrast, the change in 1985 was accompanied by budget cuts, and inflation was curbed in the following years. We argue that the real change in fiscal policy gave meaning to the nominal change* of redenomination: the New Shekel, which was perceptibly different from its predecessor the Shekel, suddenly seemed to actually behave differently. Hence, rational, economic persons who ask themselves, “which are the periods from the past that are relevant to construct beliefs to predict future events?” found that the older periods were not so relevant. This gave a chance to believe in a low-inflation equilibrium.

A word to the experts

For standard economic theory, with its perfectly rational individuals, it is hard to explain currency redenomination: it is a purely nominal exercise that all agents should view as irrelevant. Psychological accounts, on the other hand, can explain why people react differently to different nominal sums, but may be challenged in explaining the difference between successful and unsuccessful redenominations. Our account takes a middle ground: our agents may be perfectly rational, but, realizing that they are playing a coordination game with others, they do take into account perceptions that may be used to select an equilibrium, even if, in and of themselves, they are economically irrelevant. Thus, erasing three zeroes from all monetary sums is a noticeable change. It will have economic meaning only if most agents think it has economic meaning. And here, we claim, comes the learning of the similarity function: if the perceptual change is accompanied by real policy changes, a new equilibrium may be selected.

 

Our account takes a middle ground: agents may be perfectly rational, but, realizing that they are playing a coordination game with others, they do take into account perceptions that may be used to select an equilibrium, even if they are economically irrelevant.

 

 

*Keywords:

Equilibrium: In economics, an equilibrium is a situation in which agents’ optimal actions and prices are such that supply and demand are equal. 

Nash equilibrium: In game theory in economics, the Nash equilibrium is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. (Source: Osborne, Martin J.; Rubinstein, Ariel (12 Jul 1994). A Course in Game Theory. Cambridge, MA: MIT. p. 14)

Coordination games: In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies. 

Nominal change: In economics, the nominal value, rate, or level of something is the one expressed in terms of current prices or figures, without taking into account general changes in prices that take place over time (Source: Reverso). A “nominal” change would be one where we say “from now on, one (new) euro is worth what 100 old euros used to be worth”. Economists call this “nominal” because there is no real change in the economy – it’s just a change of name. If I used to get 100,000 euros a month and spend 60,000 at the supermarket, and now I get 1,000 euros and pay only 600, nothing “real” has changed. A “real” change would happen if, for instance, the government buys less on the market, or employs fewer workers, etc. (Itzhak Gilboa) 

Redenomination: The process of exchanging old currency for new currency, or changing the face value of existing notes in circulation. 

 

(1) Argenziano, R. and I. Gilboa, "Second-Order Induction in Prediction Problems", PNAS, 116 (2019). Find the filmed interview of Itzhak Gilboa here.

(2) See Mosley (2005): "(...) redenominations often occur after economic crises, as governments attempt to convince citizens and markets that hyperinflation is a thing of the past. In some cases, the timing is correct, in that redenomination caps off high levels of inflation. In other cases, governments are not able to reign in inflation immediately after redenomination, and they may make multiple efforts (...)."

Article by HEC Paris Professor Itzhak Gilboa, based on the latest research publication, “Similarity-Nash Equilibria in Statistical Games” (full paper), by Rossella Argenziano of the University of Essex and Itzhak Gilboa. This research work has benefited from the support of the HEC Foundation through the "F Project".
Part 12

Is It Rational to Stockpile in Times of Crisis?

Decision Sciences

The health crisis caused by COVID-19 has triggered an economic one. A significant portion of the population fears shortages of primary consumption goods, and marked stockpiling behavior can be observed. Because such behavior increases the risk of shortage, several stores have decided to ration some goods, and governments have had to make public announcements to reassure consumers that there would be no shortage. Avoiding consumer stockpiling is hence one of the key aspects of the management of this crisis. But is it rational to stockpile in times of crisis? We review and discuss the rational and irrational aspects of such behavior.

(Photo: ©BillionPhotos.com on Adobe Stock)


 

Although over-purchasing in times of crisis might be considered irrational, scholars in economics, operations research and marketing have proposed theoretical models explaining when and how individuals rationally decide to stockpile. Besides rational motives, many behavioral aspects can also drive over-purchase decisions. 

Stockpiling as a rational decision involving risk and time

Decisions to purchase and store quantities in anticipation of future hazards are not infrequent, and may concern not only individuals, but also states and companies. At the state level, decisions to stockpile goods such as oil, weapons, medical masks and drugs are highly strategic. It can also be in the interest of companies and consumers to stockpile primary or consumption goods, as insurance against variations of future prices (as in the case of shortage risks).

 

It can be in the interest of companies and consumers to stockpile primary or consumption goods, as insurance against variations of future prices.

 

In all these contexts, the decision can be analyzed using the same framework. Stockpiling is a safe but costly option: the costs relate to purchasing additional quantity at the present time rather than smoothing the expense across time, as well as to storage costs (e.g. warehouse space and guarding). Not stockpiling is a risky option that exposes the decision maker to future price variations. 

The optimal amount of stockpiling is therefore a decision involving risk and time, and as such involves many factors: the perceived risk of price variations, the attitude towards time (how the decision maker values future consequences) and the risk attitudes (how the decision maker values risky consequences). In rational decision making, these factors are combined using a model called “discounted expected utility”. For example, under this model, a consequence x received at a future time period t with a perceived probability p is valued p · exp(-rt) · u(x), where r is the discount rate that captures attitudes towards the future, and u is a utility function that characterizes risk attitudes. Assuming that the decision maker has well-defined risk perception, discount rate and risk attitudes, the model makes recommendations about how much to stockpile.

 

The decision to stockpile depends on the perceived probability of shortage, risk aversion, the discount rate and storage costs.

 

As one could expect, recommended stockpiling will increase with the perceived probability of shortage and risk aversion; it will decrease with the discount rate and storage costs. 
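These comparative statics can be seen in a minimal numerical sketch in Python (made-up numbers, with disutility of expenditure standing in for the full utility model described above):

import math

r = 0.05                      # discount rate: how much the future is devalued
loss = lambda x: x ** 1.5     # convex disutility of spending: risk aversion

def cost_stockpile(price_now=100.0, storage=10.0):
    # Safe option: the whole expense is paid today, so no discounting.
    return loss(price_now + storage)

def cost_wait(p_shortage=0.2, t=1.0, price_normal=100.0, price_spike=300.0):
    # Risky option: an uncertain expense at time t, valued, as in the
    # model above, by p * exp(-r t) * (dis)utility of each consequence.
    return math.exp(-r * t) * (p_shortage * loss(price_spike)
                               + (1 - p_shortage) * loss(price_normal))

# Stockpile when it carries the lower discounted expected disutility.
# Raising p_shortage or the curvature of `loss` favors stockpiling;
# raising r or storage costs favors waiting, as stated above.
print("stockpile" if cost_stockpile() < cost_wait() else "wait")  # stockpile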

The discounted expected utility model can be used to study many other decisions involving risk and time in various domains such as strategy, finance, marketing and industrial organization.

Deviations from the rational decision-making model

Beyond its normative appeal, the model underlying such recommendations cannot satisfactorily describe observed behavior: see Machina (1987) for violations of this model in the context of risk, and Loewenstein and Prelec (1992) for the context of time. We have investigated several of these anomalies in a recent laboratory experiment where subjects had to make decisions involving both risk and time, with real gains at stake. We observed systematic deviations from the predictions of the rational model. As previously observed, subjects did not exhibit stable risk attitudes. They took more risks in decisions involving small probabilities than in decisions involving medium or large probabilities. Another result regards the impact of time. Here again, time preferences were not constant.

 

“Observing that other people stockpile creates a social pressure” (Photo: ©zephy p on AdobeStock)

 

More impatience was observed towards the near future than towards periods further away in time. This pattern is responsible for several anomalies in decisions involving time, such as reversal of preferences over time or procrastination. Though this pattern is well documented in the literature, several scholars have hypothesized that it would disappear in decisions involving both risk and time. Our results, recently published in Games and Economic Behavior (1), show that the pattern holds even in these more general contexts. 

Another source of irrational decisions regards the way people perceive risks when probabilities are not available (e.g. Tversky and Kahneman 1974). For example, in their evaluation of the likelihoods of uncertain future events, people generally tend to overestimate rare events and underestimate frequent ones. In another recently published paper (2), we propose a method for measuring people's beliefs about uncertain events from simple choices. The method makes it possible to put beliefs into numbers and to test whether people’s perception is accurate.

Another important research question in the decision sciences relates to how people formulate and update their beliefs in the light of available evidence. In the context of stockpiling, decision makers can also be influenced by the behavior of their peers. 

The social dimension of stockpiling: an analogy with bank runs

Stockpiling is an individual decision that can have dire social consequences. Indeed, in the context of shortage risk, individuals deciding to overpurchase effectively contribute to the risk. This kind of situation is called a “self-fulfilling prophecy”.

 

Like bank runs, stockpiling decisions show two equilibria: one where decision makers stay calm, one where they panic, leading to a catastrophic situation.

 

When considered as a game involving many players, the decision to stockpile can be studied with game theory and is analogous to bank-run games. These games have two equilibria: one where decision makers stay calm and do not overpurchase; another where decision makers panic and decide to overpurchase, leading to a catastrophic situation of real shortage. The first equilibrium is obviously better than the second one. Nevertheless, in terms of individual rationality, both are “Nash equilibria”: when one sees other people start to stockpile, individual rationality recommends stockpiling too! In a social context, stockpiling can therefore be considered a rational but selfish decision. 
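A minimal sketch in Python (invented payoffs) of the two equilibria described above:

def payoff(my_action, share_stockpiling):
    # my_action: "stockpile" or "wait"; share_stockpiling: fraction of
    # others who stockpile. Stockpiling by others raises shortage risk,
    # which is the self-fulfilling prophecy.
    shortage_prob = share_stockpiling
    if my_action == "stockpile":
        return 8.0                        # goods secured, net of storage costs
    return 10.0 * (1 - shortage_prob)     # fine without shortage, bad with one

# Everyone calm: waiting pays 10 > 8, so nobody deviates (calm equilibrium).
print(payoff("wait", 0.0), payoff("stockpile", 0.0))
# Everyone panicking: waiting pays 0 < 8, so stockpiling is the best
# response (panic equilibrium), rational for each but catastrophic for all.
print(payoff("wait", 1.0), payoff("stockpile", 1.0))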

The role of herding behavior

Considering stockpiling as a social game introduces the fact that the beliefs and actions of each decision maker can be influenced by the actions of the other decision makers. Updating beliefs after observing the behavior of others can be rational: such situations are called information cascades. But behavioral studies reveal that people are sensitive to the behavior of others even when it is uninformative or misleading! In particular, people tend to conform to the dominant behavior, even in the absence of rational reasons to do so. In the present case of COVID-19, we can speculate that the sudden but notable stockpiling of toilet paper was due to herding. 

People can probably easily convince themselves that, even if there were a major economic collapse, toilet paper is not the good that needs to be given the highest priority. However, observing that other people stockpile creates a social pressure: “it is not possible that so many people behave so irrationally; there must be a good reason for them to do so”.

 

Decision science suggests that stockpiling can be rational from an individual perspective. But in practice, people do not stockpile optimally because of individual irrationality and group pressure.

 

Overall, decision science focusing on both individual decision making and game theory suggests that stockpiling can be rational from an individual perspective. However, in practice, there are many reasons to think that people do not stockpile optimally because they violate the rules of individual decision rationality or are irrationally influenced by the behavior of others. 

 

References 

(1) Abdellaoui, M., Kemel, E., Panin, A., & Vieider, F. M. (2019). Measuring time and risk preferences in an integrated framework. Games and Economic Behavior, 115, 459-469.
(2) Abdellaoui, M., Bleichrodt, H., Kemel, E., & L’Haridon, O. (2017). Measuring beliefs under ambiguity. Operations Research, in press.
Loewenstein, G., & Prelec, D. (1992). Anomalies in intertemporal choice: Evidence and an interpretation. The Quarterly Journal of Economics, 107(2), 573-597.
Machina, M. J. (1987). Decision-making in the presence of risk. Science, 236(4801), 537-543.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124-1131.

 

Part 13

3 Objectives to Create Intelligence in the Face of Uncertainty

Decision Sciences

Uncertainty is an invisible trap, set to blind our capacity to avoid nonsense and create actual intelligence. Why invisible? Because uncertainty is powered by what we do not know, which is particularly difficult to become aware of. Anne-Sophie Chaxel, HEC Paris Associate Professor of Marketing and expert in cognitive biases, gives three objectives to keep in mind to embrace uncertainty, along with best practice tool boxes to create intelligence.

(Photo: ©gorynvd on AdobeStock)


There has been a lag between the unfolding of the virus and the general public’s awareness that the crisis is real. We first behaved as if normalcy was to be expected in a few weeks. We then hoped for a few months. Most people now realize that we will be living in an uncertain world for… an unknown period of time. Waiting for normalcy to come back is therefore no longer a viable option. Decision-makers need to embrace uncertainty and grieve their hopes of overcoming it. Instead, they must learn to manage it, and ride with it. To do so, I outline three objectives to keep in mind.

 

You shouldn’t be waiting for the return to normal: you should be making decisions right now.

 

Objective 1 - Sizing up uncertainty

First, I say, let’s not fear uncertainty. Instead, let’s face it. We need to make the trap visible. To do so, let’s consider where uncertainty hides, and let’s forcefully look for it. The first source of uncertainty is the data themselves. When gathering data in an uncertain context, we tend to pay too much attention to the most readily available information. So at the beginning of the crisis, especially in Western countries, we heard “virus” and we recalled information that we had stored in our memories about the flu. Asian countries, by contrast, had learned from MERS and SARS that viruses can be dangerous, and therefore reacted faster. The problem with such selective retrieval of information is that it impacts what type of information we seek next. That’s the famous confirmation bias: once we labelled the coronavirus a “flu”, we tended to look for information confirming that the virus was not dangerous. Hence, the data we retrieve are uncertain because we do not know much about the situation we are facing, and existing information is misleading. 

 

Not only is the data uncertain, but we also tend to ascribe too much certainty to the data we get, as if we were doubling down on the uncertainty trap.

 

The second place to look for uncertainty is therefore yourself. Situations of uncertainty require a good dose of humility, and a recognition that one’s estimations may be off. In the absence of this humility, we reach conclusions without even noticing that we do not have all the knowledge necessary to make a sound decision. 

The third source of uncertainty is others. In the face of uncertainty, we tend to prefer any knowledge over no knowledge at all. But information abounds, and not all of it is reliable. Unfortunately, we often tend to trust the most confident person in the room, yet this person may not necessarily be the most knowledgeable. The most knowledgeable person may appear the least confident, because they know how unpredictable the situation is.

 

Best practice tool box to embrace uncertainty

In your data

  • Question the representativeness of your data: are you using it because it is easily available, or because this is the data you need?
  • Use multiple sources of estimates, not only one 
In yourself
  • Stay humble: recognize that your estimations may be off
In others
  • Do not follow blindly the know-it-all person in the room
  • Consider the source
  • Seek disconfirming points of view

 

 

The most knowledgeable person may appear the least confident, because they know how unpredictable the situation is.

 

 

Objective 2 - Remember the goal is to expand what you know

If there is one thing we know for sure about uncertainty, it is that people really hate it, and would do anything to regain a sense of control over their lives. To regain a sense of control, most people tend to “freeze” their beliefs. In other words, we want to be able to say to ourselves: “At least I know what I know”. And to protect our worldview, we tend to bias our evaluation of incoming information to make it fit with what we already believe. This phenomenon, called information distortion, leads us to develop stronger opinions when we face uncertainty, decreases creativity, and leads to more stereotyping and more societal polarization. 

 

(Photo: ©vectorfusionart on AdobeStock)

 

In addition, to regain a sense of control, we also tend to look for cues that the unpredictable was in fact predictable. One way is to look for scapegoats, and to ascribe responsibility for the crisis to somebody. This is the famous “hindsight bias”, by which we tend to think that low-probability events could have been better dealt with. Another way is to come up with conspiracy theories. 

 

To avoid thoughts that the world is random, we tend to come up with explanations that involve large causes - it’s a conspiracy.

 

Why are conspiracy theories blooming? A world that is predictable is a world in which a small cause has a small consequence, and a large cause has a large consequence (think of fitting a regression). The virus has large consequences. Therefore, to avoid the thought that the world is random, we tend to come up with explanations that involve large causes (it’s a conspiracy). In other words, we tend to want to regain control in areas over which we cannot, actually, regain control.

To avoid freezing in a context of uncertainty, I suggest regaining as much control as possible over the controllable elements of your life. Think of the outcome of any decision as the product of three elements: chance (or close-to-unpredictable factors), your decision-making skills (how you made the decision), and your implementation plan. Grieve your loss of control over the uncontrollable elements of your decision-making process. Instead, focus on controlling what you can control in your life. Outside of management, a good example of how people have tried to regain control is the new excitement around cooking. Nothing is more predictable and rewarding than following a recipe: a clear action guide, a clear implementation, and the desired outcome. In the management domain, regain control of the elements you consider in your decision-making process and your implementation planning. Once we successfully distinguish what is controllable from what is not, and regain control in controllable contexts, we will feel more inclined to stay flexible and relinquish control over uncontrollable elements. 

 

Best practice tool box to remember to expand what you know 

Challenge the status quo

Regain control in controllable domains

  • Do not seek control over uncontrollable events 
  • Control the controllable aspects of your decision making: your thinking process and your implementation
  • Control other controllable aspects of your life
Expect to be wrong
  • In uncertain environments, the expected outcome will never occur
  • Grieve your expected outcome
  • Change how you measure success: validate key criteria to make a solid, process-based decision instead of valuing the outcome 

 

Objective 3 - Build certain knowledge, and fast

How can we learn from experience when we know very little? First, we need to be able to build certain knowledge, fast, so that we can reduce uncertainty. Doing so requires thinking of learning in much shorter feedback loops than usual. In other words, yes – there needs to be a plan. But expect the plan to go wrong, and adapt it quicker than usual. Think small steps first. 

One challenge is that our capacity to distinguish between chance and skill is fogged in the face of uncertainty. Therefore, I suggest assessing the validity of a decision at the time it is made (i.e., how the decision was made) instead of simply relying on the outcome. In addition, using pre-mortems instead of post-mortem analyses can be useful to make sure all important information has been taken into account.

 

Best practice tool box to build certain knowledge

Learning

  • Distinguish between experience and learning
  • Distinguish between skill and luck
Adopt short feedback loops
  • Assess the validity of a decision at the time it is made
  • Plan, but regularly adapt the plan 
Think forward
  • Do not expect your plan to go as expected
  • Use pre-mortems instead of post-mortems
  • Create an enthusiasm for learning, not for outcomes 

 

The coronavirus situation has taken a physical toll, but it has also impacted our capacity to think clearly. The good news is that there exists a lot of certain knowledge about how we can avoid nonsense in the face of uncertainty. At the very least, the current situation has tested our ability to react with intelligence when we know very little. 

 

HEC Paris Executive Education is organizing a series of free webinars on how best to deal with the crisis caused by Covid-19. Professors share their analysis of the situation and give insights on how to recover from it. This article is an outcome of the webinar led by Prof. Chaxel, "Creating Intelligence in the Face of Uncertainty". In French: find the series of webinars preparing for the post-coronavirus recovery.
Part 14

Decision Making That Reflects Your Strategy

Decision Sciences

Business decisions are not always in line with company strategy. Researchers Olivier Sibony et al. explore what lies behind counterproductive business decisions and outline guidelines for designing better strategic decision processes.

(Photo: ©Rawpixel.com)

Strategic decision-making is an integral part of running a business. And yet, often, a company’s decisions do not reflect the strategy laid out by those in charge. In some cases, a firm that wants to take risks, and be highly entrepreneurial and innovative, will note that its managers nevertheless make restrictive, conservative decisions. On the other hand, a firm that has not explicitly decided to place big bets may make risky choices, such as a large capital investment or the launch of a new line of products. Professor Olivier Sibony asks, “Why is there often a disconnect between what a company wants to achieve, and the decisions it makes to achieve it?” 

 

Why is there often a disconnect between what a company wants to achieve, and the decisions it makes to achieve it? 

 

To answer this question and provide solutions for business executives, Sibony et al. explored how behavioural strategy can help ensure the right business decisions are made. Noting that cognitive and behavioural biases often play a part in decisions that go against company strategy, the researchers define guidelines for designing decision-making processes that promote greater alignment with an organization’s overall business strategy.

Daily decisions drive the strategic direction of companies 

Some everyday decisions have, in aggregate, a big impact on how a business functions. Such decisions include, for example, a consumer goods firm deciding which products to launch or a pharmaceutical firm managing its drug development pipeline. Sibony et al. note that these decisions are not always considered as part of a company’s overall strategic plan, but viewed, instead, as mere functional routines.

However, these decisions shape the future of the company. “These processes make up the core strategic decision architecture of a firm,” he says. “Some processes are common to most companies, such as budget or investment processes. Others are specific. Decisions made during these processes can have a knock-on effect to other strategic processes and drive company strategy in a particular direction.”

Designing core strategic decision-making processes to reduce bias

As such, it is the decisions made during these core strategic processes that affect a company’s ability to achieve its goals. Sibony argues that if a company’s core strategic processes are identified and designed more intentionally, the firm can minimize the risk of behavioural and cognitive biases. “Decision routines are not set in stone. Companies can and should design them actively to minimize bias and produce the outcomes they hope for,” he explains.

3 types of decision processes to redesign: investment, resource allocation and blue sky 

Sibony et al. identify 3 types of strategic decision-making processes and the biases that tend to emerge in each unless you design against them: 

1. Investment: In general, investment processes tend to result in too much risk-taking on big decisions and not enough on small ones. Effective design of investment processes addresses these two contradictory biases, applying more conservatism where it is needed and less where it is not.

2. Resource allocation: When it comes to the allocation of resources, such as budget or personnel, the natural tendency is to replicate past allocations – for instance, by continuing to over-resource declining businesses. Rather than making marginal adjustments to existing allocations, processes should be designed to start as much as possible from a blank slate.

3. Blue sky: To achieve breakthroughs, companies must foster creativity. But not all companies pursue radical innovation. Depending on the degree of innovation they want, the decision processes to achieve it will look very different. 


7 design levers

Sibony et al. outline 7 levers that can be considered in designing strategic decision-making processes: formality; layering; information; participation; incentives; debate; and closure. “All 7 levers can be used to fine-tune a given decision process and achieve a company’s required level of risk taking, agility and innovation.”


 

Applications

Ensuring alignment with overall company strategy requires intentional design of your core strategic decision-making processes. “A manager needs to think about how to apply the 7 levers to ensure high quality decision making outputs,” Sibony explains. “There are very practical steps to improve decision processes.” For example, in a strategy debate, a manager can ask: Who should be involved? Does everyone have an equal voice? How can a productive confrontation be orchestrated to get the most ideas? How can unproductive conflict be avoided?

At the executive level, Sibony encourages company leaders to think through the strategic decision architecture of their company. “Executives must design a decision architecture that will help them achieve their desired strategy.” To do this, he stresses the importance of thinking beyond a firm’s overall strategic plan. Instead, executives need to be aware of the core strategic decision-making processes on which the execution of that strategic plan depends.

They must ask themselves whether they are happy with the output of these processes: Is the firm taking the right amount of risk? Are resources allocated effectively? Is the firm as agile and innovative as it should be? If not, they need to rethink the design of these processes and use the 7 levers as tools to achieve the desired outcome.
Based on an interview with Olivier Sibony, on his paper “Behavioural strategy and the strategic decision architecture of the firm” (California Management Review, 2017), co-authored with Dan Lovallo and Thomas C. Powell. To learn more about what behavioural strategy can tell us about strategic decision making and how to design processes to make the right decisions, read the full paper here. Find here the latest book published by Olivier Sibony and his colleagues Bernard Garrette (HEC Paris) and Corey Phelps (McGill University) on problem-solving: Cracked It! How to Solve Big Problems and Sell Solutions like Top Strategy Consultants.
Part 15

How Believing in Unsubstantiated Claims Leads to Polarization

Decision Sciences

The COVID-19 pandemic has fostered the sharing of conflicting and unsubstantiated claims by public figures. In early November, a deeply divided nation elected Joe Biden as President of the United States. Recent research published by professors Anne-Sophie Chaxel of HEC Paris and Sandra Laporte of Toulouse School of Management reveals that individuals believe unsubstantiated claims when they are shared by favorite public figures, which helps explain polarization of opinions. In this article, Anne-Sophie Chaxel explains how rational people come to strongly believe in unchecked claims.

Donald Trump and doctors. Source: PICRYL


 

Knowing what to believe in the context of COVID-19 is challenging. Conflicting narratives from an array of prominent sources make distinguishing what is true and false difficult. This research highlights a new phenomenon, which we label “truth distortion” and which is a major source of polarization of opinions in uncertain environments.

 

The COVID-19 pandemic has fostered conflicting narratives where so-called facts are shared without substantive evidence by various public figures.

 

The goal: Understanding how people come to believe unsubstantiated claims

The COVID-19 pandemic has fostered conflicting narratives where so-called facts are shared without substantive evidence by various public figures. For instance, during the French lockdown, a number of personalities defended or rejected the idea that hydroxychloroquine was a cure for the virus. The resulting controversy triggered a number of heated debates on this topic. 

How do people come to strongly defend or reject this type of controversial claim? “Controversial” in this context is meant as a synonym of “unsubstantiated”: the statement, or fact, under consideration is not yet fully established. In other words, the “truth” is actually unknown.

 

Because so much information during the COVID-19 pandemic was shared across media without proper vetting, we decided to investigate how a preference for a source of information influences the way we judge unchecked statements to be true.

 

We started with this initial insight: judgments of truth are more often than not constructed, meaning that they are not binary and they are sensitive to context. Said differently, hearing that hydroxychloroquine could be a cure for COVID-19 does not trigger an immediate labelling as “true” or “false”. Instead, people ascribe to such uncertain statements a likelihood of being true, based on their prior experience and knowledge.

Based on this insight, we made the hypothesis that truth judgments may be distorted by context, such as a participant’s prior knowledge about the source of information. Because so much information during the COVID-19 pandemic was openly and repeatedly shared across media without proper vetting, we decided to investigate precisely the process by which a preference for a source of information influences the way we judge unchecked statements about COVID-19 to be true.

The Method: Tracing the distortion of truth judgments related to COVID-19

To reach our objective, we ran two studies. In the first study, we gave participants some preliminary information about a judge in the United States, currently under review by a Senate committee for appointment to the US Court of Appeals. While they read this background information, the participants were asked several times whether they would support his nomination. Because most of the information provided to the participants was positive, a very large majority supported his nomination. Once this preliminary information had been reviewed, participants sequentially read three opinion statements by this same judge on topics related to COVID-19, such as whether the virus is man-made. After each of these three opinion statements, they were asked to indicate their support for the judge, and the extent to which they agreed with the controversial statements related to COVID-19.

 

In our experiment, only about 11% of the sample changed their voting decision following repeated unsubstantiated claims from a politician they liked. It means 89% of the sample actually stuck to their preferred candidate, even when his claims were unsubstantiated, such as claiming a specific drug could help cure COVID-19.

 

We then compared those responses to the responses of a control group, who indicated their agreement with the same statements, without any knowledge about the judge or his nomination. 

By comparing them, we could compute a “truth distortion” score for each participant and each statement, measuring the extent to which participants switched their truth judgments in the same direction as their preference for the public figure that was the source of information. The second study replicated the first, with statements unrelated to COVID-19.
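To make the measure concrete, here is a minimal sketch, in Python, of how such a per-participant score could be computed. The rating scale, variable names and signing rule are illustrative assumptions, not the paper's exact specification.

import statistics

def truth_distortion(agreement, control_agreements, supports_source):
    # Baseline: mean agreement of the control group, which rated the same
    # statement without any information about the judge.
    baseline = statistics.mean(control_agreements)
    shift = agreement - baseline
    # Positive score: the judgment moved in the same direction as the
    # participant's preference for the source.
    return shift if supports_source else -shift

# Example: a supporter rates a statement 6 on a 7-point scale while the
# control group averages 3.8, giving a distortion score of +2.2.
print(truth_distortion(6, [4, 3, 5, 3, 4], supports_source=True))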

The result: Truth distortion increases polarization

We found that an early positive or negative evaluation of a public figure causes people to distort their truth judgments in the same direction as their preference. We found that early support for a public figure translated into endorsements of the statements made by that figure, regardless of the validity of those claims. In other words, people would believe false or unsubstantiated statements if made by someone they liked and supported. 

 

November 3, 2020: People waiting for election results in Washington D.C., USA. (Photo source: Myanmore.)

 

In addition, the research also revealed that people would grow more supportive of other unconnected claims made by the public figure, and would become ever more convinced when claims were repeated over time.

For instance, imagine the public figure supported the idea that COVID-19 is man-made, and that participants in turn tended to believe it more, i.e., “distorted” their truth judgment in the same direction as the source of information. If this same public figure then states that hydroxychloroquine is a cure for COVID-19, results show that participants will support this second statement even more than they supported the first. In other words, support, or “distortion of truth”, fuels even stronger support. The consequence of this process is that only a minor proportion of people reversed their early preference for the source, despite the highly controversial nature of the statements.

Equally, we found that for people who did not like or support the source making unsubstantiated statements, disapproval of the source’s claims grew over time. Indeed, the minority of participants who did switch preference during the choice task, i.e., decided not to support the judge’s nomination, maintained their rejection.

Imagine you do not support the judge’s nomination. Then hearing him voice that COVID-19 is man-made will make you even less likely to believe this statement than the control group.

Interestingly, within this minority of people turning against the judge, rejection was actually nearly twice as strong as the support shown by participants who backed the judge. Insistent support or rejection by participants therefore triggers disagreement across groups about what is or is not true. In other words, support or rejection of a public figure is a psychological mechanism by which polarization may occur.

In a nutshell

The “truth distortion” phenomenon, i.e., the tendency to support or reject the same person ever more strongly over time, highlighted in these two studies, demonstrates how uncertainty in information can become a major source of societal polarization on a major public health issue such as COVID-19.

 

The tendency to support or reject the same person ever more strongly over time demonstrates how uncertainty in information can become a major source of societal polarization on a major public health issue such as COVID-19.

 

Both positions are crucial in understanding the process of polarization. Indeed, possible consequences include individuals’ willingness to comply with preventive measures, growing disparities in public opinion, and heated disagreement on what is true or not true, even in the absence of actual scientific evidence. A concluding remark concerns ways to fight the distortion phenomenon, which is the topic of our ongoing research.
 

Article based on “Truth Distortion: A Process to Explain Polarization over Unsubstantiated Claims Related to COVID-19”, by Anne-Sophie Chaxel of HEC Paris and Sandra Laporte of Toulouse School of Management, published in the Journal of the Association for Consumer Research in October 2020.
Part 16

How Do Governments And Individuals Make Decisions In A Time Of Crisis? The Case Of The Coronavirus

Decision Sciences

Why have different countries made very different decisions to fight the coronavirus? What are the potential consequences of such a crisis on the psychology of the population? In this interview, Anne-Sophie Chaxel, HEC Paris Associate Professor of Marketing specializing in consumer behavior and decision-making, explains the different approaches of governments toward their responsibility, and the biases behind non-optimal behaviors and decisions. She also shares her recommendations regarding decision-making processes.

©Alexander Ozerov on Adobe Stock


 

How do you explain that different governments have made very different decisions to fight the coronavirus? 

There are three major steps in making such impactful decisions: situation framing, information gathering, and coming to conclusions. Each step can lead to adopting a different approach to the same problem. The first step is what we call situation framing, which is to determine the different aspects by which a decision can be analyzed. In the coronavirus crisis, there is not one single aspect that is important, but a number of dimensions that can impact the decisions that will need to be made: the human, economic, financial, and individual freedom dimensions are for instance all key aspects of the decision-making process. Each government needs to prioritize which dimensions it considers the most important, and which the least important.

 

This choice is highly dependent on how each country defines the role that the government must play in society.

 

This choice is highly dependent on how each country defines the role that the government must play in society. You can already see stark differences in this framing phase in the approaches adopted by, let’s say, the US, focusing on protecting individual freedom and wishing to keep the economy going, and Denmark, focusing first and foremost on helping people who cannot work during the crisis.

Once the framing is defined, a phase of information gathering follows. In the coronavirus crisis, this is a delicate phase, as more information is learned every single day about how the virus works and spreads. When key information is missing, governments must make assumptions to enable decision-making in the face of uncertainty. Each government can select different information to base its decisions on, make different assumptions and inferences based on the information that it does have, and therefore again end up with different decisions.

 

Different groups of experts in different countries can process and interpret the same information differently, and thereby, reach different decisions.

 

And finally, once the information is gathered, governments must interpret the information they have and come to conclusions. Yet interpreting the information requires a solid process in order to lead to an accurate decision. While adopting a solid process can be far more efficient than unorganized thinking, the group of people who interpret the information plays a tremendous role in shaping the decisions that are made. Different groups of experts in different countries can, based on the same data, process and interpret information differently, and thereby reach different decisions.

Are there specific biases that decision-makers have to be particularly careful to avoid when making decisions?

I will cite three biases that seem to me particularly relevant to the current crisis. The first one is overconfidence. We have seen overconfidence operating at the beginning of the crisis, when we were (over)confident that the virus would not be a major disruption – and that even if it were, we would be ready to confront it. To avoid overconfidence, decision-makers need not only good primary knowledge of the situation, but most importantly good metaknowledge, which is a fair assessment of what we know and what we do not know. In the case of the coronavirus, awareness that our metaknowledge about the virus was very limited might have led us to make more drastic decisions earlier in the crisis.

 

Only selecting information that would tend to prove that the virus would not come to Europe is a good example of selective exposure to information.

 

Another bias that may be at play is actually a family of biases, which we call confirmatory biases. A confirmatory bias happens when we select information that fits what we want to believe (that’s what we call selective exposure to information), or when we process information in a way that confirms what we’d like to be true (that’s what we call information distortion). For instance, at the beginning of the crisis, only selecting information that would tend to prove that the virus would not come to Europe (e.g., warmer weather, or recalling previous instances of viruses that were active in Asia but not in Europe) is a good example of selective exposure to information, i.e., picking information that fits what you’d hope for. Even more pervasive is information distortion, which happens completely without awareness and is therefore particularly dangerous. It occurs when we interpret objective information in a way that confirms what we want to believe. For instance, looking at information about how to fight the virus (say, drinking water), and interpreting the results in a way that confirms what we would hope for (say, that it helps fight the coronavirus). This is one of the major reasons for the spread of fake news.

Finally, a last bias I want to mention is groupthink, by which groups tend to avoid conflict and usually want to reach consensual decisions, sometimes leading groups to make more biased decisions than single individuals. Most governments have appointed groups of medical experts to gather and interpret information related to the coronavirus – this is in no way a guarantee of unbiased decision-making, as experts have been shown to be themselves subject to groupthink. If the expertise is well represented in such groups (which is a required first step for accurate decision-making), there are ways to avoid consensus being reached too fast, such as appointing devil’s advocates, who systematically defend a different set of actions and thereby challenge the consensus, to make sure of the validity of the chosen course of action.

What are the potential consequences of such a crisis on the psychology of the population?

This is difficult to predict because the current situation brings up very different emotions in different people, such as anxiety for some, depression for others, or detachment for others. That being said, some of my work has demonstrated that limited time perspective (or thinking of time as scarce) increases confirmatory biases, such as processing information in a way that confirms our beliefs.

 

I would expect during the crisis to see people process information in a way that makes them see what they want to see.

 

Thinking about one’s own mortality is definitely a good instance of framing the time in our life as limited. So, to a large extent, I would expect to see people during the crisis defend their beliefs even more than usual, and process information in a way that makes them see what they want to see. Such a process can only lead to more polarization.

Is there any recommendation regarding decision-making processes that you’d like to offer?

Besides attempting to avoid the biases I have mentioned, I think one important aspect not to forget is learning from experience. New data are gathered and interpreted every day, and the validity of each action must be reassessed regularly. It is only through this systematic reassessment that our decision-makers can improve their course of action in the weeks and months to come. I also hope that such learning from experience will take place at the end of the crisis, to reassess our readiness to approach other types of unexpected crises.

 

Part 11

A New Theory in Economics Helps Predict Future Events

Economics

When will the next financial crisis be? Who is going to win the next US presidential election? How do we create beliefs about such events? By understanding how probabilistic beliefs form, economic theorists can now explain and predict phenomena that depend on rational beliefs. Latest research by Rossella Argenziano and Itzhak Gilboa equips economic modeling with a theory and a set of tools of belief formation, based on statistics and psychology. One immediate application is equilibrium selection in coordination games.

©Tatiana on Adobe Stock

How can people predict future events? They create beliefs and probabilities based on the observation of similarities between past events and an ongoing event. Let’s understand this through three cases: the Obama election, the fall of the Soviet bloc, and the curbing of inflation.

1 - The Obama election

The election of Barack Obama triggered excitement and enthusiasm because a non-white candidate became President of the United States for the first time. Presidential elections are rare events, and no two are exactly alike. This makes the use of statistics tricky: which past events should be included in one’s sample? How do we describe present and past events? In particular, is "race" an important feature? We claim that the precedent of Obama’s election didn’t only change the statistics – with one non-white president as opposed to zero – but also changed the way we do statistics: it showed that "race" was not an important variable in judging the similarity between events.

 

People, especially economists, can predict future events by creating beliefs and probabilities based on the observation of similarities between past events and an ongoing event.

 

2 - The fall of the Soviet bloc

The Soviet bloc started collapsing with Poland, which was the first country in the Warsaw Pact to break free from the rule of the USSR. Once this was allowed by the USSR, practically all its satellites in Eastern Europe underwent democratic revolutions, culminating in the fall of the Berlin Wall in 1989. The single precedent of Poland generated a "domino effect." This paper suggests a belief formation process that explains how a single precedent can have such a dramatic effect even in the absence of informational spillovers and strategic dependency among games.

 

Fall of the Berlin Wall, November 1989. Author: Raphaël Thiémard

 

Revolution attempts are typically modeled as coordination games*: the expected utility derived from taking part in an uprising increases in the probability of its success, which in turn increases in the number of participants. For a citizen trying to decide whether to join such an attempt, it is crucial to predict the outcome of the uprising. A natural piece of information to use for such a prediction is the outcome of past revolutions in similar contexts. We suggest that the importance of the successful revolution in Poland didn't lie only in changing the relative frequency of successful revolutions, but also in changing the notion of which past revolution attempts were similar to current ones, hence relevant to predict their outcomes.
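As a rough illustration of this coordination structure, here is a minimal Python sketch; the payoff numbers and the success function are invented for exposition and are not taken from the paper.

def p_success(participants, threshold=1000):
    # The probability of a successful uprising rises with participation.
    return min(1.0, participants / threshold)

def expected_utility_of_joining(expected_others, reward=10.0, penalty=-5.0):
    # Expected utility of joining, given how many others one expects to join.
    p = p_success(expected_others + 1)
    return p * reward + (1 - p) * penalty

# With pessimistic expectations, staying out (utility 0) dominates; with
# optimistic expectations, joining dominates: two self-fulfilling equilibria.
print(expected_utility_of_joining(50))   # about -4.2: stay out
print(expected_utility_of_joining(900))  # about +8.5: join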

Specifically, the case of Poland was the first revolution attempt after the "Glasnost" policy was declared and implemented by the USSR. Pre-Glasnost attempts in Hungary in 1956 and in Czechoslovakia in 1968 had failed. In 1989, one might well wonder, has Glasnost made a difference? Is it a new era, where older cases of revolution attempts are no longer relevant to predict the outcome of a new one, or is it "Business as usual", and Glasnost doesn't change much more than does, say, a leader's proper name, leaving pre-Glasnost cases relevant for prediction?

So how could people learn from the revolution in Poland that new revolution attempts could succeed?

If the revolution attempt in Poland were to fail as did previous ones, it would seem that the variable "post-Glasnost" does not matter for prediction: with or without it, revolution attempts fail. As a result, when a person wonders what is the "right" way of judging similarity between past cases, she would likely be led to the conclusion that the variable "post-Glasnost" should be ignored, and that, consequently, the statistics are zero successes out of three revolution attempts. By contrast, because the revolution attempt in Poland succeeded, it had a double effect on the statistics. First, it increased the frequency of successful revolutions from 0/2 to 1/3. While 1/3 is larger than 0, it still leads to pessimistic predictions about successes of future attempts. However, if people also learn how to judge similarity, the single case of Poland leads them to the conclusion that "post-Glasnost" is an important variable. 
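The arithmetic above can be captured in a few lines of Python. In this sketch the 0/1 similarity weights are a deliberately extreme assumption, made only to show how learning the similarity function changes the predicted frequency.

# Past revolution attempts: (post_glasnost, succeeded)
cases = [(False, False),  # Hungary 1956
         (False, False),  # Czechoslovakia 1968
         (True, True)]    # Poland 1989

def predicted_success(post_glasnost_matters):
    # Weight 1 for past cases judged similar to a new post-Glasnost
    # attempt, weight 0 for cases judged irrelevant.
    weights = [1.0 if (is_post or not post_glasnost_matters) else 0.0
               for (is_post, _) in cases]
    successes = sum(w for w, (_, ok) in zip(weights, cases) if ok)
    return successes / sum(weights)

print(predicted_success(False))  # 1/3: all past attempts deemed relevant
print(predicted_success(True))   # 1/1: only the post-Glasnost case counts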

How can we learn to judge whether a past event is similar to a current one?

The theory presented in our latest research paper, "Similarity-Nash Equilibria* in Statistical Games", suggests that people learn from past events not only the frequencies of outcomes, but also what the relevant database is.

 

Our theory suggests that people learn from past events not only the frequencies of outcomes, but also what the relevant database is.

 

Indeed, if we use the Polish revolution as an example, the frequency of successes post-Glasnost, 1/1, differs dramatically from the pre-Glasnost frequency, 0/2. Once this is taken into account, pre-Glasnost events are not as relevant for prediction as they used to be. If we consider the somewhat extreme view that post-Glasnost attempts constitute a class apart, the relevant empirical frequency of success becomes 1/1 rather than 1/3. Correspondingly, other countries in the Soviet bloc could be encouraged by this single precedent, and soon it was no longer a single one.

How can we find, among many candidates, the relevant variable for judging whether a past situation is similar to a current one?
In a previous paper1, we show that the “empirically optimal similarity function” can be identified under certain conditions. In essence, many observations for few variables make learning easier.

3 - Curbing inflation

As another example, consider a central bank, which redenominates* its currency in an attempt to restrain inflation. Inflation is an equilibrium phenomenon: an economic agent (or individual) who expects others to raise prices of goods and services would be wise to do so herself. Thus, one can think of the inflation game as a price-setting game with multiple equilibria, and redenomination as an attempt to switch from a hyperinflation equilibrium to a low-inflation equilibrium2. Will economic agents – consumers and firms, bankers and investors – use the new variable in their belief formation? Will firms assume that prices will no longer increase, when pricing their own goods? Or will they dismiss the redenomination as a "cosmetic change" and believe that inflation will continue to run high? Our analysis suggests that the answer depends on the periods immediately following the redenomination: if in these periods inflation is low, the variable “new currency” will be used for prediction and a new, low-inflation equilibrium can be reached. This means that when such a change happens, the periods that immediately follow are crucial in judging whether the change is relevant for predicting similar future events.

 

©Andrey Popov on Adobe Stock

 

By contrast, if in the first periods the inflation rate continues to be high, agents will realize that it’s “business as usual”, so the variable will be judged irrelevant: people will see that, with or without the change, things look the same. As a result, the entire history will be used for prediction, making it very difficult to convince economic agents that the future will differ from the past. Israel switched from the Lira to the Shekel (worth 10 Liras) in 1980 and then to the New Shekel (worth 1,000 Shekels) in 1985. In 1980 the change was not accompanied by fiscal policy changes, meaning that the government didn’t cut expenses and instead tried to finance the deficit by “printing money”, so inflation spiraled into hyperinflation. According to our account, people realized that, Shekel or Lira, inflation runs high, and then, of course, it did.

By contrast, the change in 1985 was accompanied by budget cuts, and inflation was curbed in the following years. We argue that the real change in fiscal policy gave meaning to the nominal change* of redenomination: the New Shekel, which was perceptibly different from its predecessor the Shekel, suddenly seemed to actually behave differently. Hence, rational, economic persons who ask themselves, “which are the periods from the past that are relevant to construct beliefs to predict future events?” found that the older periods were not so relevant. This gave a chance to believe in a low-inflation equilibrium.

A word to the experts

For standard economic theory, with its perfectly rational individuals, currency redenomination is hard to explain: it is a purely nominal exercise that all agents should view as irrelevant. Psychological accounts, on the other hand, can explain why people react differently to different nominal sums, but may be challenged in explaining the difference between successful and unsuccessful redenominations. Our account takes a middle ground: our agents may be perfectly rational, but, realizing that they are playing a coordination game with others, they do take into account perceptions that may be used to select an equilibrium, even if, in and of themselves, they are economically irrelevant. Thus, erasing three zeroes from all monetary sums is a noticeable change. It will have economic meaning only if most agents think it has economic meaning. And here, we claim, comes the learning of the similarity function: if the perceptual change is accompanied by real policy changes, a new equilibrium may be selected.

 

Our account takes a middle ground: agents may be perfectly rational, but, realizing that they are playing a coordination game with others, they do take into account perceptions that may be used to select an equilibrium, even if they are economically irrelevant.

 

 

*Keywords:

Equilibrium: In economics, an equilibrium is a situation in which agents’ optimal actions and prices are such that supply and demand are equal.

Nash equilibrium: In game theory in economics, the Nash equilibrium is a proposed solution of a non-cooperative game involving two or more players in which each player is assumed to know the equilibrium strategies of the other players, and no player has anything to gain by changing only their own strategy. (Source: Osborne, Martin J.; Rubinstein, Ariel (12 Jul 1994). A Course in Game Theory. Cambridge, MA: MIT. p. 14)

Coordination games: In game theory, coordination games are a class of games with multiple pure strategy Nash equilibria in which players choose the same or corresponding strategies. 

Nominal change: In economics, the nominal value, rate, or level of something is the one expressed in terms of current prices or figures, without taking into account general changes in prices that take place over time (Source: Reverso). A “nominal” change would be one where we say “from now on, one (new) euro is worth what 100 old euros used to be worth”. Economists call this “nominal” because there is no real change in the economy – it’s just a change of name. If I used to get 100,000 euros a month and spend 60,000 at the supermarket, and now I get 1,000 euros and pay only 600, nothing “real” has changed. A “real” change would happen if, for instance, the government buys less on the market, or employs fewer workers, etc. (Itzhak Gilboa)

Redenomination: The process of exchanging old currency for new currency, or changing the face value of existing notes in circulation. 

 

1Argenziano, R. and I. Gilboa, "Second-Order Induction in Prediction Problems", PNAS, 116 (2019). Find the filmed interview of Itzhak Gilboa here.

2See Mosley (2005): "(...) redenominations often occur after economic crises, as governments attempt to convince citizens and markets that hyperinflation is a thing of the past. In some cases, the timing is correct, in that redenomination caps off high levels of inflation. In other cases, governments are not able to reign in inflation immediately after redenomination, and they may make multiple efforts (...)."

Article by HEC Paris Professor Itzhak Gilboa, based on latest research publication, “Similarity-Nash Equilibria in Statistical Games” (full paper) by Rossella Argenziano of the University of Essex and Itzhak Gilboa. This research work has benefited from the support of the HEC Foundation through the "F Project".
Part 18

Black Swans and Other Challenges to Rational Decision Making

Decision Sciences

When trying to figure out the outcome of a given situation, or the fallout of a sudden event, is it better to reason by analogies and resort to past experience or to think ahead and apply probabilistic reasoning? Researchers present a new mathematical model on making decisions in uncertain circumstances, which takes into account both modes of reasoning.

Photo Credits: Fergregory / Adobe Stock

People like to try to predict the future, for both personal and professional reasons. But the number of unforeseen events with extreme consequences — popularly known as black swans — in the 20th century alone shows just how unsuccessful we are at foreseeing future events.

Many major events, from the Great Depression to the fall of the Soviet Union to the attack on the Twin Towers and collapse of Lehman Brothers up to today’s COVID-19 pandemic and Russia-Ukraine war, were all surprises to a majority of people. The failure to predict these events should have made us humbler about our foresight, but it did not.

An ideal decision maker

We consider a decision maker who is so rational that she has the meta-knowledge to know that her knowledge is incomplete. She plans for the future and does her best at describing scenarios and at assigning probabilities to these scenarios. But she recalls that, even when she consulted the greatest experts in a given field, about every decade she found a major event that hadn’t been discussed at all, or had been dismissed as improbable. So, our rational decision maker says to herself, "Well, it's time I learn from my failures to predict.” When she plans for the future, she is not going to insist on figuring out everything that can happen and assigning probabilities to all possible scenarios, because she admits she doesn’t have the knowledge to do that. Rather, she also includes in her model a simpler, humbler way of reasoning, which simply looks at past events and follows the maxim of the great philosopher David Hume, who said, “from causes which appear similar, we expect similar effects”. We refer to this approach as reasoning by analogies (or "case-based reasoning").

 

We develop a model of decision making that combines theory-based and case-based reasoning.

 

We develop a model of decision making that combines theory-based and case-based reasoning. When evaluating the probability of an outcome, our decision maker looks at the probabilities of this outcome according to each theory she entertains, but also looks at similar past cases in which this outcome has occurred.

The relative weight placed on theory-based compared to case-based reasoning depends on several factors, including the past success of the theories the agent entertains and the similarity of past cases to the present case. Also, the balance between the two modes of reasoning may be a personal trait, depending on cognitive style and education. 

The venture capitalist’s case

In a hypothetical example, a team of entrepreneurs seeks funding from a venture capital firm for a new cancer treatment. When considering the proposal, the potential investors do so from different perspectives. John examines the efficacy of the treatment, possible competitors, possible delays and costs of clinical trials, the amount insurers will pay for the treatment and expected profits. He finds the potential investment quite promising.

Sarah, who is more experienced, is skeptical. Though she admits that John’s analysis has merit, she has seen few projects like this succeed in the past. Rachel, who has little experience, is also skeptical of the project, noting that even the most careful calculations might not have captured all of the relevant possibilities and weighted them appropriately; she urges caution with “fantastic new technologies.”

In this case, Sarah has a larger database of past cases in her memory than John. They may agree on the probability of success of the enterprise and may have the same cognitive style, but Sarah’s experience has made her more skeptical. John and Rachel both have little information about past cases, yet have different cognitive styles, with Rachel being more cautious about trusting theories. 

Different people may analyze the same situation by placing different emphases on case-based and theory-based reasoning, and arrive at different conclusions.
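A minimal Python sketch of such a blended evaluation, applied to the investors above; the linear blend, the weights and all the numbers are our illustrative assumptions, not the axiomatized representation in the paper.

def evaluate(theory_probs, theory_weights, similarities, outcomes, theory_reliance):
    # Theory part: probability of the outcome averaged over the theories
    # the agent entertains, weighted by her credence in each theory.
    theory_part = sum(w * p for w, p in zip(theory_weights, theory_probs))
    # Case part: similarity-weighted frequency of the outcome in past cases.
    case_part = (sum(s * o for s, o in zip(similarities, outcomes))
                 / sum(similarities))
    # theory_reliance in [0, 1] reflects cognitive style and the theories'
    # past track record.
    return theory_reliance * theory_part + (1 - theory_reliance) * case_part

# John trusts his theory-based analysis; Sarah recalls ten similar past
# projects, of which only one succeeded, and she leans on those cases.
john = evaluate([0.7, 0.5], [0.6, 0.4], [1.0], [1], theory_reliance=0.9)
sarah = evaluate([0.7, 0.5], [0.6, 0.4], [1.0] * 10,
                 [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], theory_reliance=0.3)
print(round(john, 2), round(sarah, 2))  # 0.66 vs 0.26: same theories, different conclusions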

“All eyes on the market”

In times of great uncertainty, case-based reasoning is particularly relevant. Currently, for example, people are making analogies between the Russia-Ukraine war and other wars. The COVID-19 pandemic spurred an interest in the 1918 flu epidemic. 

 

In times of great uncertainty, case-based reasoning is particularly relevant.

 

There is evidence that this approach is relevant and sometimes successful: during the financial crisis of 2007-2008, governments learned from the past and acted so that it did not become another Great Depression. Similarly, in the aftermath of 9/11, financial experts learned from past crises to predict market behavior.

“All eyes will be on the market Monday morning,” a Barron’s columnist wrote in the first issue of the publication after the 2001 terrorist attack. In the same issue, another columnist was reassuring, noting that share prices were resilient following events such as the fall of France in 1940, Pearl Harbor, the Kennedy assassination and the Gulf War.

Not business as usual

We argue that theory-based reasoning is an excellent way of making decisions when there is a bounty of data: when it’s business as usual. Yet we find that this mode of reasoning is insufficient, even useless, in the face of a black swan — and the world is filled with black swans. In times of surprise, decision makers may naturally put less weight on their probabilistic reasoning and rely more on past analogies.

Methodology

By employing the axiomatic approach, we translate the abstract mathematical model to observable behavior in concrete situations, in the hope of better understanding what the model implies. 

Applications

We believe that scientists in economics, finance, political science and related fields can benefit from using such a model, and we hope that our axiomatic results would convince them that this is a reasonable model to use when trying to understand human behavior, as well as when making recommendations for individuals and organizations.
Based on an interview with Stefania Minardi and her article “Theories and cases in decisions under uncertainty” (Games and Economic Behavior, September 2020), co-written with Itzhak Gilboa (HEC Paris) and Larry Samuelson (Yale University).
