I am somewhat slow in posting this - the article has been up more than a week - but my latest article is up at Behavioral Scientist. The article is basically an argument that the scrutiny we are applying to algorithmic decision making should also be applied to human decision making systems. Our objective should be good decisions, whatever the source of the decision. The introduction to the article is below.
Loss aversion is the idea that losses loom larger than gains. It is one of the foundational concepts in the judgment and decision making literature. In Thinking, Fast and Slow, Daniel Kahneman wrote “The concept of loss aversion is certainly the most significant contribution of psychology to behavioral economics.” Yet over the last couple of years, several critiques have emerged that question the foundations of loss aversion and whether loss aversion is a phenomenon at all.
From a new(ish) book by David Leiser and Yhonatan Shemesh, How We Misunderstand Economics and Why it Matters: The Psychology of Bias, Distortion and Conspiracy: Working memory is a cognitive buffer, responsible for the transient holding, processing, and manipulation of information. This buffer is a mental store distinct from that required to merely hold in mind a number of items and its capacity is severely limited. The complexity of reasoning that can be handled mentally by a person is bounded by the number of items that can be kept active in working memory and the number of interrelationships between elements that can be kept active in reasoning.
Nick Chater’s The Mind is Flat: The Illusion of Mental Depth and the Improvised Mind is a great book. Chater’s basic argument is that there are no ‘hidden depths’ to our minds. The idea that we have an inner mental world with beliefs, motives and fears is just a work of imagination. As Chater puts it: no one, at any point in human history, has ever been guided by inner beliefs or desires, any more than any human being has been possessed by evil spirits or watched over by a guardian angel.
From Eliezer Yudkowsky on Less Wrong (a few years old, but worth revisiting in the light of my recent Gigerenzer v Kahneman and Tversky post): When a single experiment seems to show that subjects are guilty of some horrifying sinful bias - such as thinking that the proposition “Bill is an accountant who plays jazz” has a higher probability than “Bill is an accountant” - people may try to dismiss (not defy) the experimental data.
From Gerd Gigerenzer’s The bounded rationality of probabilistic mental models (PDF) (one of the papers mentioned in my recent post on the Kahneman and Tversky and Gigerenzer debate): Defenders and detractors of human rationality alike have tended to focus on the issue of algorithms. Only their answers differ. Here are some prototypical arguments in the current debate. Statistical algorithms Cohen assumes that statistical algorithms … are in the mind, but distinguishes between not having a statistical rule and not applying such a rule, that is, between competence and performance.
Through the late 1980s and early 1990s, Gerd Gigerenzer and friends wrote a series of articles critiquing Daniel Kahneman and Amos Tversky’s work on heuristics and biases. They hit hard. As Michael Lewis wrote in The Undoing Project: Gigerenzer had taken the same angle of attack as most of their other critics. But in Danny and Amos’s view he’d ignored the usual rules of intellectual warfare, distorting their work to make them sound even more fatalistic about their fellow man than they were.
I typically find the argument that increased choice in the modern world is “tyrannising” us to be less than compelling. On this blog, I have approvingly quoted Jim Manzi’s warning against extrapolating the results of an experiment on two Saturdays in a particular store - the famous jam experiment - into “grandiose claims about the benefits of choice to society.” I recently excerpted a section from Bob Sugden’s excellent The Community of Advantage: A Behavioural Economist’s Defence of the Market on the idea that choice restriction “appeals to culturally conservative or snobbish attitudes of condescension towards some of the preferences to which markets cater.”
Summary: An important book describing how many experts make decisions, but with a lingering question mark about how good these decisions actually are. Gary Klein’s Sources of Power: How People Make Decisions is something of a classic, with the version I read being a 20th anniversary edition issued by MIT Press. Klein’s work on expert decision making has reached a broad audience through Malcolm Gladwell’s Blink, and Klein’s adversarial collaboration with Daniel Kahneman (PDF) has given his work additional academic credibility.
As a record largely for myself, below are some notes in review of 2018 and a few thoughts about 2019. Writing: I started 2018 intending to post to this blog at least once a week, which I did. I set this objective as I had several long stretches in 2017 where I dropped the writing habit. I write posts in batches and schedule in advance, so the weekly target did not require a weekly focus.