In Sources of Power: How People Make Decisions, Gary Klein writes:
Kahneman, Slovic, and Tversky (1982) present a range of studies showing that decision makers use a variety of heuristics, simple procedures that usually produce an answer but are not foolproof. … The research strategy was not to demonstrate how poorly we make judgments but to use these findings to uncover the cognitive processes underlying judgments of likelihood.
Lola Lopes (1991) has shown that the original studies did not demonstrate biases, in the common use of the term. For example, Kahneman and Tversky (1973) used questions such as this: “Consider the letter R. Is R more likely to appear in the first position of a word or the third position of a word?” The example taps into our heuristic of availability. We have an easier time recalling words that begin with R than words with R in the third position. Most people answer that R is more likely to occur in the first position. This is incorrect. It shows how we rely on availability.
Lopes points out that examples such as the one using the letter R were carefully chosen. Of the twenty possible consonants, twelve are more common in the first position. Kahneman and Tversky (1973) used the eight that are more common in the third position. They used stimuli only where the availability heuristic would result in a wrong answer. … [I have posted some extracts of Lopes’s article here.]
There is an irony here. One of the primary “biases” is confirmation bias—the search for information that confirms your hypothesis even though you would learn more by searching for evidence that might disconfirm it. The confirmation bias has been shown in many laboratory studies (and has not been found in a number of studies conducted in natural settings). Yet one of the most common strategies of scientific research is to derive a prediction from a favorite theory and test it to show that it is accurate, thereby strengthening the reputation of that theory. Scientists search for confirmation all the time, even though philosophers of science, such as Karl Popper (1959), have urged scientists to try instead to disconfirm their favorite theories. Researchers working in the heuristics and biases paradigm condemn this sort of bias in their subjects, even as those same researchers perform more laboratory studies confirming their theories.
On explaining everything
On 3 July 1988 a missile fired from the USS Vincennes destroyed a commercial Iran Air flight taking off over the Persian Gulf, killing all on board. The crew of the Vincennes had incorrectly identified the aircraft as an attacking F-14.
The Fogarty report, the official U.S. Navy analysis of the incident, concluded that “stress, task fixation, and an unconscious distortion of data may have played a major role in this incident. [Crew members] became convinced that track 4131 was an Iranian F-14 after receiving the … report of a momentary Mode II. After this report of the Mode II, [a crew member] appear[ed] to have distorted data flow in an unconscious attempt to make available evidence fit a preconceived scenario (‘Scenario fulfillment’).” This explanation seems to fit in with the idea that mental simulation can lead you down a garden path to where you try to explain away inconvenient data. Nevertheless, trained crew members are not supposed to distort unambiguous data. According to the Fogarty report, the crew members were not trying to explain away the data, as in a de minimis explanation. They were flat out distorting the numbers. This conclusion does not feel right.
The conclusion of the Fogarty report was echoed by some members of a five-person panel of leading decision researchers, who were invited to review the evidence and report to a congressional subcommittee. Two members of the panel specifically attributed the mistake to faulty decision making. One described how the mistake seemed to be a clear case of expectancy bias, in which a person sees what he is expecting to see, even when it departs from the actual stimulus. He cited a study by Bruner and Postman (1949) in which subjects were shown brief flashes of playing cards and asked to identify each. When a card such as the Jack of Diamonds was printed in black, subjects would still identify it as the Jack of Diamonds without noticing the distortion. The researcher concluded that the mistake about altitude seemed to match these data; subjects cannot be trusted to make accurate identifications because their expectancies get in the way.
I have talked with this decision researcher, who explained how the whole Vincennes incident showed a Combat Information Center riddled with decision biases. That is not how I understand the incident. My reading of the Fogarty report shows a team of men struggling with an unexpected battle, trying to guess whether an F-14 is coming over to blow them out of the water, waiting until the very last moment for fear of making a mistake, hoping the pilot will heed the radio warnings, accepting the risk to their lives in order to buy some more time.
To consider this alleged expectancy bias more carefully, imagine what would have happened if the Vincennes had not fired and in fact had been attacked by an F-14. The Fogarty report stated that in the Persian Gulf, from June 2, 1988, to July 2, 1988, the U.S. Middle East Forces had issued 150 challenges to aircraft. Of these, it was determined that 83 percent were issued to Iranian military aircraft and only 1.3 percent to aircraft that turned out to be commercial. So we can infer that if a challenge is issued in the gulf, the odds are that the airplane is Iranian military. If we continue with our scenario, that the Vincennes had not fired and had been attacked by an F-14, the decision researchers would have still claimed that it was a clear case of bias, except this time the bias would have been to ignore the base rates, to ignore the expectancies. No one can win. If you act on expectancies and you are wrong, you are guilty of expectancy bias. If you ignore expectancies and are wrong, you are guilty of ignoring base rates and expectancies. This means that the decision bias approach explains too much (Klein, 1989). If an appeal to decision bias can explain everything after the fact, no matter what has happened, then there is no credible explanation.
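The base-rate argument can be made concrete with the report's own figures. A minimal back-of-the-envelope sketch (the counts below are derived from the quoted percentages, not taken directly from the report, so treat them as illustrative):

```python
# Figures from the Fogarty report as quoted above: 150 challenges
# issued in the Persian Gulf, June 2 - July 2, 1988.
challenges = 150
p_military = 0.83      # fraction issued to Iranian military aircraft
p_commercial = 0.013   # fraction issued to commercial aircraft

# Approximate counts implied by those percentages (illustrative only).
n_military = challenges * p_military      # about 124 challenges
n_commercial = challenges * p_commercial  # about 2 challenges

# Given only these base rates, the odds that a challenged aircraft
# is Iranian military rather than commercial:
odds = p_military / p_commercial
print(f"roughly {odds:.0f} to 1")  # roughly 64 to 1
```

On these numbers alone, a crew that assumed "challenged aircraft = military" would be right the overwhelming majority of the time, which is what makes the after-the-fact charge of expectancy bias so slippery.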
I’m not sure the right base rate is the proportion of aircraft challenged, but it is still an interesting point.