Analyzing decisions is human
Decision making is, indeed, associated with animals more than with plants, and with higher life forms more than with lower ones. To be sure, plants may exhibit phenomena that can be described as “deciding” to do something. A plant that turns its branches to face the sun can be viewed as making a decision, and even as choosing the branches’ angles optimally. Yet this description would be more of a metaphor, a personification of the plant, than an actual account of decision making. A detailed process of encountering a problem, pondering it, and deciding what to do is more typical of higher life forms with relatively large brains than of lower ones. A tiger who decides if and when to start chasing its prey, a crow who decides how to break a nut’s shell, an alpha-male ape who decides whom to grant privileges to in its clan – all seem to go through stages of thought, starting with noticing the decision problem, going through deliberation, all the way to making a decision. At times, animals can even have emotional reactions not only to the decision’s outcome but also to having made it, as in the case of a dog who seems ashamed, guilty, or embarrassed about a decision it should not have made.
Thus, the human race is clearly not the only species that makes decisions. By contrast, analyzing decisions does seem to be much more typical of humans than of animals.[1] Decision analysis has many manifestations, occurring prior to, during, and following a decision. In particular, a decision that we think of as conscious requires that one imagine the different states of affairs that can be brought about by different possible choices. The ability to depict these should not be taken for granted. Notably, it requires that one be able to suspend some knowledge one might have about oneself. Even if a person’s choice is easily predictable from choices she has made in similar past cases, rational decision making still requires that this prediction be put on hold, to allow her to contemplate her habitual choices and the possibility of changing them. This mental exercise of suspending self-knowledge is probably essential to the psychological phenomenon of free will: we would probably not have the internal experience of free will if we could not entertain several images of the future concurrently. Similarly, learning from past decisions requires that one be able to reason about counterfactuals, imagining what would have transpired had one made different choices. The psychological phenomenon of thinking in causal terms is thus tightly related to our ability to analyze our decisions.
Decision theory was developed with an emphasis on the human
If decision analysis is a hallmark of humanity, we’d expect it to be present in any account of humanity, including the most ancient ones. Indeed, recorded history is replete with the analysis of decisions: recommendations for optimal decisions, reasoning that leads to decisions, contemplation of alternative decisions, and so forth. Moreover, some of the insights modern theory can provide into decision making have a rather respectable history. Ask an investment analyst for advice, and they may suggest diversification, namely, splitting the investment among several assets so as to minimize risk. The preference for lower risk and this idea of diversification already appear in Genesis, where Jacob is about to face his brother Esau, and, fearing the encounter, splits his camp in two, so that one half might be saved should Esau smite the other. Similarly, dealing with self-discipline problems, we might recommend various ways of self-commitment. But in so doing we will be following Ulysses, who had himself tied to the mast of his ship so as to enjoy the Sirens’ singing without being tempted to swim to the dangerous rocks.
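In modern terms, Jacob’s move is a simple exercise in probability. As an illustration (the independence assumption here is mine, not the text’s), suppose that a camp standing alone would be destroyed with probability $q$, and that the fates of two separate camps are independent. Then

$$
\Pr(\text{everything is lost}) =
\begin{cases}
q & \text{one undivided camp,}\\
q^{2} & \text{two separate camps.}
\end{cases}
$$

The expected loss is the same in both cases, $q$ times the total wealth, but the split makes total ruin strictly less likely. Accepting the same expected loss in exchange for a less likely worst case is exactly what a risk-averse decision maker would do.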
And yet, decision theory as a field of inquiry wasn’t developed until the 17th century. It wasn’t enough to be human to develop such a theory; it was necessary that humanity be celebrated. This process, which in western culture (re-)started with the Renaissance and was advanced by the Enlightenment, put the human at center stage. This had many manifestations, some of which resurrected the Classics, and some of which were truly novel. It included dramatic changes in music, architecture, and the arts; the development of modern science; and also the invention of some aspects of social science. It took awarding humans a central status to make their decisions an appropriate object of study.
If we had to select one point at which decision theory started, it would probably be Pascal’s wager. It is perhaps not too surprising that one of the people most closely associated with the invention of probability theory and mathematical expectation is also the one who first thought about decision making in an orderly way. The wager is considered a very modern, post-Renaissance type of question: Pascal doesn’t attempt to prove that God exists; rather, he argues that one would be wise to become a believer (at least in due course). The locus of analysis isn’t the universe, in which God is, or is not. Rather, it is the human mind, wherein a belief in God may reside. Moreover, while Pascal is careful not to assume that one can simply choose one’s beliefs, he does frame the question as one of choice.
In his famous analysis of the wager, Pascal lays the foundations for several important ideas of decision theory. He informally describes the decision matrix, drawing a distinction between acts, over which one has control, and states or events, over which one doesn’t. He describes the notion of a dominant strategy. He then proceeds to refer to subjective probabilities – introducing the idea that the machinery of probability theory, developed to deal with objectively given probabilities as in chance games, can be used to organize one’s beliefs and decision making. Pascal then introduces his main argument, which is basically that of expected utility maximization, and concludes with the adaptation of his argument to the case of unknown probabilities.
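In modern notation, the wager is often reconstructed as a small decision matrix; the table below is an illustrative reconstruction rather than Pascal’s own notation, with $c$, $a$, and $d$ denoting finite payoffs:

$$
\begin{array}{l|cc}
 & \text{God exists } (p) & \text{God does not exist } (1-p)\\
\hline
\text{believe} & +\infty & -c\\
\text{do not believe} & a & d
\end{array}
$$

On this reading, the expected utility of believing is $p \cdot (+\infty) + (1-p) \cdot (-c) = +\infty$ for every $p > 0$, while the expected utility of not believing remains finite. Believing is therefore optimal under any positive subjective probability, which is how the argument extends to the case of unknown probabilities: one need not know $p$, only judge it to be positive.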
It is no coincidence that decision theory wasn’t developed in medieval times. There was little room for such a theory when center stage was occupied by deities, and the main ways to deal with an uncertain future were prayer and penitence. Only when humans turned the spotlight on themselves did decision theory come into being.
Theory cannot replace human decisions; it should complement them
With the rise of mathematical economics, operations research, and related fields, decision theory and game theory were developed in the mid-20th century. The foundations laid by luminaries such as John von Neumann and Oskar Morgenstern; Frank Ramsey, Bruno de Finetti, and Leonard Savage; John Nash, Kenneth Arrow, and Gerard Debreu; Lloyd Shapley and Robert Aumann gave much hope for the ability to understand decision making in a mathematical way, providing both predictions and recommendations. However, this promise was not fully realized. Despite huge advances in fields such as operations research, which are closer to computer science and engineering, much seemed to be lacking when it came to everyday life problems, ranging from macroeconomic and political systems to individual decisions involving personal values, unique events, and so forth. At times, it appeared that the problem was one of information: notions such as utility and probability are hard to assess, whether for predicting people’s choices or for making recommendations to decision makers. At other times, it seemed that more was missing. Indeed, following the pioneering works of Daniel Kahneman and Amos Tversky, psychologists showed that practically all the assumptions of rational choice theory can be refuted in carefully designed experiments. These studies clearly showed that classical decision theory was lacking as a description of human behavior.
Worse still, these findings also raised doubts about the theory’s appropriateness as a normative standard. In many of these cases, not only did the theory fail to describe how people actually make decisions; its recommendations were also judged impractical or unconvincing.
The disenchantment with decision theory comes at a time when modernism is on the defensive. For various reasons, authority is being questioned in general, and scientific authority is no exception. In such an atmosphere there is a risk that decision theory will be perceived as a total failure, to be replaced by sheer intuition. There are people who believe that intuition leads us astray, and that this cannot be fixed; and there are people who believe that intuitive decision making is very effective when things matter and when people are in their natural environment. Common to both is the conclusion that there is no room for the development or study of the theory, as it is useless anyway.
I hold that this would be a serious mistake. True, only in certain domains can decision theory fully replace intuition and provide the “best” recommendation. But in most if not all situations it can still be useful as a set of guidelines, as an aid to decision making. If we adopt a perhaps more realistic, more mature view of the theory, we may find that it can help us make better decisions – in our own eyes – even when it cannot model the problem with mathematical precision. In some cases, as in finding the shortest path between two points on Google Maps, decision theory can be trusted to replace intuition, as the sketch below illustrates. But in many other cases – as in deciding what career to choose, where to invest one’s money, and so on – it can help us avoid mistakes without necessarily “computing” the correct answer.
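To make the contrast concrete, here is a minimal sketch of the kind of problem the theory can be trusted to solve outright. It uses Dijkstra’s algorithm, the textbook method for shortest-path routing; the road map and distances are invented for the example:

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra's algorithm over a graph given as
    {node: [(neighbor, distance), ...]}."""
    # Priority queue of (cost so far, node, path taken), cheapest first.
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == target:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, dist in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + dist, neighbor, path + [neighbor]))
    return float("inf"), []  # target unreachable

# A toy road map (distances are invented for the example).
roads = {
    "home":   [("bridge", 4), ("tunnel", 2)],
    "tunnel": [("bridge", 1), ("office", 7)],
    "bridge": [("office", 5)],
}
print(shortest_path(roads, "home", "office"))
# -> (8, ['home', 'tunnel', 'bridge', 'office'])
```

Here there is no role left for intuition: the problem is fully specified, and the optimal decision can simply be computed. Career and investment choices are not like this, which is why the theory serves there as a guide rather than as an oracle.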
Analyzing our decisions is an essential part of our humanity. It would be a pity to lose it. The analysis should take the human into account, which means that rational thought should be in a dialog with intuition and emotions. It is perhaps this dialog which is a quintessential ingredient of being human.
1 - This may partly be an artifact of our language: for all we know, it is possible that apes and dolphins analyze decisions as well but lack the means to convey this analysis to us. For the sake of the present discussion, we will view decision analysis as a defining feature of humankind, and allow for the possibility that this is another way in which apes can be human.