Nudge theory was developed by Richard Thaler (winner of the Nobel Prize in Economics in 2017) and Cass Sunstein in 2008. The two authors suggested that cognitive biases (the faulty reasoning or perception that distorts decision-making) might serve as an instrument of public policy. These biases could be used to nudge individuals towards decisions that are deemed good for themselves or the wider community – but which they lack the perspicacity or motivation to pursue. Subtle changes in the decision-making environment can steer behavior in a virtuous direction. Let's say you are staying in a hotel and learn that most of the previous guests in your room re-used the same bath towel from day to day; conformity bias will then prompt you to follow suit. The same bias may prod you to cut back on your energy consumption if you find out that it is higher than your neighbor's. Automatic voter registration – like pre-filled tax returns – is another example, one that works by simplifying the desired behavior. Nudging is thus a discreet way of inducing people to change their behavior while preserving their freedom of choice, and an alternative to the conventional tools of state action such as bans or taxes.
The same methods of influence are also employed in marketing, based on the following idea: if reason fails to persuade consumers of the utility of a purchase, you can coax them insidiously. Let’s say you want to book a hotel online. The site warns you that there are not many rooms left in your chosen category, and that other internet users are currently looking at them… all of which nudges you to book your room quickly so as not to miss out on what you see as a rare opportunity. Websites that prominently display a default purchase option or default acceptance of specific terms and conditions are too numerous to mention. Of course, you are free to disregard these defaults, provided you have time on your hands and enjoy a good search. A free trial that you end up paying for because you forgot to cancel it is another example of a nudge used for marketing. And then there are those discount vouchers with conditions so restrictive they never get used, or targeted ads that make well-timed offers. This approach distorts the very nature of nudging since it does not aim to improve the well-being of the consumer or of society, which is why it is sometimes known as “bad nudging” or “sludge”.
The prediction made by Yuval Noah Harari in 2018 has, worryingly, already come true in part: “As biotechnology and machine learning improve, it will become easier to manipulate people's deepest emotions and desires [...] could you still tell the difference between yourself and [the] marketing experts?” It goes without saying that influence strategies are not exclusive to artificial intelligence. Door-to-door sales reps have long relied on more-or-less ethical sales techniques, and nudges can just as well be deployed by humans. Just think of the barista who asks whether you would like a pastry – or even a set menu – when all you asked for was a coffee. Or the sales assistant who opens negotiations with an inflated offer before pretending to give you a generous discount. Artificial intelligence, however, has the power to amplify the use of influence methods by rolling them out systematically and on a grand scale. The behavioral biases underpinning standard nudges were identified through experimental research; big data, by contrast, can automatically detect the tiniest weak point in the decision-making process and exploit it to influence consumers. Once a new behavioral lever has been identified, algorithms can apply it at scale.
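To make this last point concrete, here is a minimal sketch, in Python, of how a platform might discover and then systematically exploit its most effective influence tactic. The tactic names, conversion rates and the epsilon-greedy bandit itself are illustrative assumptions, not a description of any real system: in production the feedback would come from live visitor behavior rather than a simulation.

```python
import random

# Hypothetical "influence tactics" a platform might test; the names and the
# conversion rates below are made up purely for illustration.
TACTICS = ["scarcity_banner", "social_proof_popup", "countdown_timer", "plain_page"]
BASE_RATES = {"scarcity_banner": 0.12, "social_proof_popup": 0.10,
              "countdown_timer": 0.14, "plain_page": 0.08}

def simulated_conversion(tactic: str) -> bool:
    """Stand-in for real user behavior: True means the visitor 'bought'.
    In practice this signal would come from live traffic, not a simulation."""
    return random.random() < BASE_RATES[tactic]

def epsilon_greedy(n_visitors: int = 10_000, epsilon: float = 0.1) -> dict:
    """Classic epsilon-greedy bandit: mostly show the tactic with the best
    observed conversion rate, occasionally explore the others."""
    shown = {t: 0 for t in TACTICS}
    converted = {t: 0 for t in TACTICS}
    for _ in range(n_visitors):
        if random.random() < epsilon or not any(shown.values()):
            tactic = random.choice(TACTICS)  # explore
        else:
            # exploit: pick the tactic with the highest observed rate so far
            tactic = max(TACTICS, key=lambda t: converted[t] / shown[t] if shown[t] else 0.0)
        shown[tactic] += 1
        converted[tactic] += simulated_conversion(tactic)
    return {t: (shown[t], converted[t]) for t in TACTICS}

if __name__ == "__main__":
    print(epsilon_greedy())  # the best-converting tactic ends up shown to most visitors
```

The point of the sketch is simply that once one variant starts converting better, the algorithm shows it almost exclusively, turning an occasional experimental finding into a systematic policy.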
What are the consequences of these “bad nudges”? Consumers may feel deceived because they end up buying goods or services that do not match their real needs, or because resisting these influences generates a fatigue that degrades the shopping experience. Using nudges in marketing therefore tends to lower consumer well-being.
In more general terms, the wholesale deployment of nudges often systematizes mistakes that were formerly occasional: irrationality, in other words, becomes the norm.
In this respect, the growing use of nudges is unsettling the foundations on which liberal economics is built. In that model, it is the pressure exerted by consumers making informed choices that encourages producers to offer the products that best match consumer needs at the best price. Nudges upend this process, since producers can use them to shape consumer preferences: consumers who have been influenced in this way no longer exert their countervailing power over producers. The possibility that consumer behavior may be swayed by nudging thus challenges some of the virtues of the market economy, just as the idea that voters might cast their ballots under influence undermines the basis of the democratic model.
How can we stave off these damaging effects? Is regulation the answer? Legislating in this area would be difficult, since the line between information and influence is so fine. At the very least, the information provided during the purchase process (such as the quantities actually available) could be required to be truthful, although even that would be hard to enforce. Change could also be driven by consumers, who could turn their backs on platforms that use these techniques. This is no easy task: the influences are not always conscious, and some platforms hold a quasi-monopoly. Sellers themselves could reverse the trend by certifying that they do not use influence techniques, as a guarantee of quality and respect for their customers. This approach could be supported by artificial intelligence: algorithms could automatically probe online sales sites to detect nudges, and a certification label could be created.
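As a rough indication of what such automated testing could look like, the sketch below (again in Python) scans a page's HTML for a few tell-tale scarcity and social-pressure cues. The pattern list and the detect_nudges helper are invented for illustration; a genuine certification tool would need a much richer catalogue of cues, multilingual support and checks on dynamic page behavior.

```python
import re

# Illustrative patterns only; names and regexes are assumptions for this sketch.
NUDGE_PATTERNS = {
    "false_scarcity": re.compile(r"only \d+ (rooms?|items?|left)", re.IGNORECASE),
    "social_pressure": re.compile(r"\d+ (people|users) (are )?(viewing|looking at)", re.IGNORECASE),
    "urgency_timer": re.compile(r"(offer|deal) (ends|expires) in", re.IGNORECASE),
    "preticked_consent": re.compile(r'checked="checked"|checked\s*/?>', re.IGNORECASE),
}

def detect_nudges(page_html: str) -> list[str]:
    """Return the names of the suspicious patterns found in a page's HTML or text."""
    return [name for name, pattern in NUDGE_PATTERNS.items() if pattern.search(page_html)]

if __name__ == "__main__":
    sample = 'Only 2 rooms left! 14 people are viewing this hotel. <input type="checkbox" checked>'
    print(detect_nudges(sample))  # ['false_scarcity', 'social_pressure', 'preticked_consent']
```

A certification label could then be granted to sites where repeated automated crawls of this kind come back clean.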
Do we need “good algorithms” to fight the “bad” ones? The idea is simplistic, but it does remind us that machines only do what we have designed them to do (programming errors aside). It is therefore up to consumers, or to their representatives and advocates, to make use of the possibilities afforded by artificial intelligence to defend their interests.