Behavioral Scientist put out the call to share hopes, fears, predictions and warnings about the next decade of behavioral science. Here’s my contribution: As behavioral scientists, we’re not exactly a diverse bunch. We’re university educated. We live in major cities. We work in academia, tech, consulting, banking and finance. And dare I say it, we’re rather liberal. Read the Twitter streams or other public outputs of the major behavioral science institutions, publications and personalities, and the topics of interest don’t stray too far from what a Democratic politician (substitute your own nation’s centre-left party) would discuss in a stump speech.
Consider the following claim: We don’t need loss aversion to explain a person’s decision to reject a 50:50 bet to win $110 or lose $100. That is just simple risk aversion as in expected utility theory. Risk aversion is the concept that we prefer certainty to a gamble with the same expected value. For example, a risk averse person would prefer $100 for certain over a 50-50 gamble between $0 and $200, which has an expected value of $100.
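To make the claim concrete, here is a minimal sketch (not from the article) of how a concave utility function generates both behaviors. The logarithmic utility function and the $1,000 wealth level are illustrative assumptions, chosen only to show that standard expected utility theory, with no loss aversion, can rationalize rejecting the bet:

```python
import math

# Illustrative assumption: an agent with $1,000 wealth and log utility.
# Log is concave, which is all risk aversion requires.
wealth = 1000

def utility(x):
    return math.log(x)

# The 50:50 bet to win $110 or lose $100, evaluated over final wealth.
eu_bet = 0.5 * utility(wealth + 110) + 0.5 * utility(wealth - 100)
eu_stay = utility(wealth)
print(eu_bet < eu_stay)  # True: the bet is rejected despite positive expected value

# The certainty example: $100 for sure vs a 50-50 gamble between $0 and $200,
# again evaluated as final wealth.
eu_certain = utility(wealth + 100)
eu_gamble = 0.5 * utility(wealth) + 0.5 * utility(wealth + 200)
print(eu_certain > eu_gamble)  # True: the sure $100 is preferred
```

The diminishing marginal utility of wealth means the $100 loss costs more utility than the $110 gain adds, so the gamble is declined without invoking any special treatment of losses.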
I am somewhat slow in posting this - the article has been up more than a week - but my latest article is up at Behavioral Scientist. The article is basically an argument that the scrutiny we are applying to algorithmic decision making should also be applied to human decision making systems. Our objective should be good decisions, whatever the source of the decision. The introduction to the article is below.
Loss aversion is the idea that losses loom larger than gains. It is one of the foundational concepts in the judgment and decision making literature. In Thinking, Fast and Slow, Daniel Kahneman wrote “The concept of loss aversion is certainly the most significant contribution of psychology to behavioral economics.” Yet, over the last couple of years several critiques have emerged that question the foundations of loss aversion and whether loss aversion is a phenomenon at all.
From a new(ish) book by David Leiser and Yhonatan Shemesh, How We Misunderstand Economics and Why it Matters: The Psychology of Bias, Distortion and Conspiracy: Working memory is a cognitive buffer, responsible for the transient holding, processing, and manipulation of information. This buffer is a mental store distinct from that required to merely hold in mind a number of items and its capacity is severely limited. The complexity of reasoning that can be handled mentally by a person is bounded by the number of items that can be kept active in working memory and the number of interrelationships between elements that can be kept active in reasoning.
Nick Chater’s The Mind is Flat: The Illusion of Mental Depth and the Improvised Mind is a great book. Chater’s basic argument is that there are no ‘hidden depths’ to our minds. The idea that we have an inner mental world with beliefs, motives and fears is just a work of imagination. As Chater puts it: no one, at any point in human history, has ever been guided by inner beliefs or desires, any more than any human being has been possessed by evil spirits or watched over by a guardian angel.
From Eliezer Yudkowsky on Less Wrong (a few years old, but worth revisiting in the light of my recent Gigerenzer v Kahneman and Tversky post): When a single experiment seems to show that subjects are guilty of some horrifying sinful bias - such as thinking that the proposition “Bill is an accountant who plays jazz” has a higher probability than “Bill is an accountant” - people may try to dismiss (not defy) the experimental data.
From Gerd Gigerenzer’s The bounded rationality of probabilistic mental models (PDF) (one of the papers mentioned in my recent post on the Kahneman and Tversky and Gigerenzer debate): Defenders and detractors of human rationality alike have tended to focus on the issue of algorithms. Only their answers differ. Here are some prototypical arguments in the current debate. Statistical algorithms Cohen assumes that statistical algorithms … are in the mind, but distinguishes between not having a statistical rule and not applying such a rule, that is, between competence and performance.
Through the late 1980s and early 1990s, Gerd Gigerenzer and friends wrote a series of articles critiquing Daniel Kahneman and Amos Tversky’s work on heuristic and biases. They hit hard. As Michael Lewis wrote in The Undoing Project: Gigerenzer had taken the same angle of attack as most of their other critics. But in Danny and Amos’s view he’d ignored the usual rules of intellectual warfare, distorting their work to make them sound even more fatalistic about their fellow man than they were.
I typically find the argument that increased choice in the modern world is “tyrannising” us to be less than compelling. On this blog, I have approvingly quoted Jim Manzi’s warning against extrapolating the results of an experiment on two Saturdays in a particular store - the famous jam experiment - into “grandiose claims about the benefits of choice to society.” I recently excerpted a section from Bob Sugden’s excellent The Community of Advantage: A Behavioural Economist’s Defence of the Market on the idea that choice restriction “appeals to culturally conservative or snobbish attitudes of condescension towards some of the preferences to which markets cater.”