I often complain that behavioural economics (or behavioural science) appears to be no more than a loosely connected set of heuristics and biases, crying out for theoretical unification. Evolutionary biology is the most likely source of that unification.
Over the last few years, I’ve spotted the occasional attempt to analyse a bias through an evolutionary lens. But late last year, I came across Owen D Jones, a professor of law and professor of biological sciences at Vanderbilt University. At the time, I posted on his forthcoming book chapter Why Behavioral Economics Isn’t Better, and How it Could Be, but since then have been working through his impressive back catalogue (his SSRN page is here). For around 15 years Jones has published on the link between behavioural economics (or in his case, behavioural law and economics) and evolutionary biology, but this work has barely carried across from the law to the economics literature.
I plan to post on a few of his papers, and I’ll start with a 2000 article Time-Shifted Rationality and the Law of Law’s Leverage: Behavioral Economics Meets Behavioral Biology. As in the chapter I linked above, Jones starts by critiquing the lack of theoretical background in behavioural economics, a claim that is still fair today:
BLE [behavioural law and economics] scholars stand accused, for example, of merely organizing anecdotes, and of confusing counterstories for theories. This should not, of course, be construed as automatically damning. After all, unexpected empirical facts can, in sufficient number, warrant changes in legal strategies for pursuing existing goals, even absent convincing explanations for their patterned occurrence. And a number of BLE scholars have succeeded in making convincing cases for legal reform, based on empirical data about irrationalities alone, irrespective of causes.
Nevertheless, in the absence of buttressing theory such efforts represent isolated successes, rather than promisingly synergistic ones that would signal a broad, systematic approach. For it is quite clear in the end that BLE shows neither a present and satisfactory account of the origins and patterns of identified irrationalities, nor signs of making quick progress toward developing one. Constructing the theoretical foundation of these phenomena will ultimately be necessary if BLE is to achieve its potential and be as useful, persuasive, and important to law as its proponents now hope.
Jones argues that an evolutionary analysis can provide that theoretical foundation, primarily through distinguishing proximate from ultimate causes. Proximate causes relate to the internal mechanisms or physical processes that underlie behaviour. Ultimate causes are the evolutionary processes by which a behaviour came to be commonly observable in a species. Jones argues that there is a general failure to analyse the biases through the lens of ultimate causation, which would allow us to understand the patterns of biases and why some biases are so widespread.
I am tempted to go further: often there is not even an analysis of the proximate causes of biases. Gerd Gigerenzer tends to operate in this territory, seeking to understand what decision rules are being exercised in particular environments, which allows you to assess the ecological rationality of the decision. A lot of behavioural economics research simply finds a deviation from what the researchers consider a rational decision and moves on, with no thought as to how the decision-making process led to that decision. Prospect theory, for instance, bears practically no resemblance to the mechanisms or processes by which people actually make decisions.
Back to Jones: he argues that, viewed through the lens of ultimate causation, many biases turn out to be features, not bugs:
[S]ome behaviors currently ascribed to cognitive limitations reflect not defect, but rather finely tuned features of brain design. If so, we may gain important insights into the patterns of human irrationality by combining our proximate causation analysis with our ultimate causation analysis to yield a comprehensive evolutionary analysis.
A biologically informed view of the brain makes clear that substantive irrationalities are probably not just about physical, temporal, and informational limits. They are also, in some circumstances, likely to be about specific, narrowly tailored, efficiently operating features of brain design. My argument here is that the traditional approach to bounded rationality and decision-making is, in many cases, both descriptively wrong and materially misleading. It is descriptively wrong in the same way that it would be wrong to say that a Porsche Boxster is “defective” when it fails to climb logs and ford streams off road, or that a moth’s brain is “defective” when the moth flies into an artificial light source. It is materially misleading because to the extent that irrationalities are considered to be the result of defects, rather than design features, their specific content is assumed to be, though patterned ex post, unpredictable, unsystematized, and random ex ante—rather than predictable, interrelated, and content-specific. Put another way, turning old cognitive tools to entirely new uses introduces changed circumstances, not defects. And the inappropriateness of old tools to new uses does not mean those tools lack specialized design and function. Understanding what the tools were designed to do provides significant purchase on explaining and predicting how they will function when applied in novel contexts.
Today, we tend to put old cognitive tools to new uses in environments that don’t reflect those of our evolutionary past. Jones calls this “time-shifted rationality” (I think I prefer to just call it mismatch), which relates to the use of a once-successful tool in new, possibly inappropriate circumstances.
[T]here will be times when a perfectly functioning brain—functioning precisely as it was designed to function— will incline us toward behavior that, viewed only in the present tense and measured only by outcomes in current environments, will appear to be substantively irrational. This is simply because the brain was designed to process information in ways tending to yield behaviors that were substantively rational in different environments than the ones in which we now find ourselves.
Specifically, time-shifted rationality describes any trait resulting from the operation of evolutionary processes on brains that, while increasing the probability of behavior that was adaptive in the relevant environment of evolutionary adaptation in the ancestral past, leads to substantively irrational or maladaptive behavior in the present environment. In other words, poor behavior choices sometimes derive not from brain defects, per se, but rather from the brain’s deployment of old, once-successful techniques in the face of new problems. So before judging the brain’s abilities, we need to consider the effects of its choices in the environments for which the brain is principally adapted.
Here’s one example of this analysis at work (although I don’t agree with the point about increases in life expectancy as an explanation):
Researchers have noted not only that people often prefer to receive a smaller good now over a disproportionately greater good later, but also that people reverse this preference as the delay for receiving either good increases in equal amounts. This seems irrational. For example, the fact that a majority of adults would rather receive $50 now than $100 in two years—at the same time that virtually no one prefers $50 in four years to $100 in six years—is seen as clear evidence of “anomalies in the utilitarian reasoning of the normal human adult.” ...
It is likely a mistake to conclude that seemingly irrationally discounted futures are necessarily the function of calculating errors. Evolutionary analysis suggests an ultimate cause explanation. Hyperbolic discounting may reflect another time-shifted rationality. How might modern environmental features differ from features of the environment of evolutionary adaptation in ways that render once-adaptive predispositions maladaptive? First, average life expectancy has skyrocketed. And high discount rates make sense when life expectancy is short. Second, for nearly all of the roughly seventy million years of primate evolution, there was no such thing as a reliable future, let alone a reliable future payoff. Even under the most generous definition of investment, investment horizons were short. Third, a “right” to receive something in the future is a trivially recent invention of modern humanity.
Since long lives, reliable futures, and reliable rights to future payoffs were not part of the environment in which the modern brain was slowly built, it is not particularly surprising that the modern brain tends to steeply discount the value of a future benefit compared to an immediate one, and is not particularly well equipped to reach the outcome currently deemed most rational. Rather than assume that people will be rational discounters, we should, logically, expect and assume the opposite: most often people will be hyperbolic discounters. In the EEA, the environment of evolutionary adaptation, the kind of hyperbolic discounting that humans now so regularly exhibit often would have led to more substantively rational results than the alternative.
Put another way, at almost no time in human evolutionary history could there have been a selection pressure that regularly favored the kind of coolly calculated and deferred gratification now deemed to be so reasonable.
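The preference reversal Jones describes falls straight out of the maths of hyperbolic discounting. A minimal sketch: with a hyperbolic discount function V = A/(1 + kD) (a standard form from the discounting literature, with an illustrative k = 1 per year that I've chosen, not a value from the article), the $50-now-versus-$100-in-two-years choice reverses when both payoffs are pushed four years into the future, while an exponential discounter's choice never does.

```python
def hyperbolic(amount, delay, k=1.0):
    """Hyperbolic discounting: value declines as 1 / (1 + k * delay)."""
    return amount / (1 + k * delay)

def exponential(amount, delay, delta=0.6):
    """Exponential discounting: constant per-period discount factor delta."""
    return amount * delta ** delay

# Near choice: $50 now vs $100 in two years.
# Far choice: the same pair shifted four years into the future.
for name, value in [("hyperbolic", hyperbolic), ("exponential", exponential)]:
    prefers_small_near = value(50, 0) > value(100, 2)
    prefers_small_far = value(50, 4) > value(100, 6)
    print(f"{name:11s} prefers $50 near-term: {prefers_small_near}, "
          f"prefers $50 far-term: {prefers_small_far}")
```

With these illustrative parameters the hyperbolic discounter takes $50 now over $100 in two years, yet takes $100 in six years over $50 in four: the "anomaly" in the quote. The exponential discounter applies the same relative discount to any two-year gap, so no delay shift can reverse its choice.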
The other major area that Jones covers in the article is what he calls the law of law’s leverage, which deserves a future post of its own.