Defenders and detractors of human rationality alike have tended to focus on the issue of algorithms. Only their answers differ. Here are some prototypical arguments in the current debate.
Cohen assumes that statistical algorithms … are in the mind, but distinguishes between not having a statistical rule and not applying such a rule, that is, between competence and performance. Cohen’s interpretation of cognitive illusions parallels J.J. Gibson’s interpretation of visual illusions: illusions are attributed to non-realistic experimenters acting as conjurors, and to other factors that mask the subjects’ competence: ‘unless their judgment is clouded at the time by wishful thinking, forgetfulness, inattentiveness, low intelligence, immaturity, senility, or some other competence-inhibiting factor, all subjects reason correctly about probability: none are programmed to commit fallacies or indulge in illusions’ … Cohen does not claim, I think, that people carry around the collected works of Kolmogoroff, Fisher, and Neyman in their heads, and merely need to have their memories jogged, like the slave in Plato’s Meno. But his claim implies that people do have at least those statistical algorithms in their competence that are sufficient to solve all reasoning problems studied in the heuristics and biases literature, including the Linda problem.
Non-statistical algorithms: heuristics
Proponents of the heuristics-and-biases programme seem to assume that the mind is not built to work by the rules of probability:
In making predictions and judgments under uncertainty, people do not appear to follow the calculus of chance or the statistical theory of prediction. Instead they rely on a limited number of heuristics which sometimes yield reasonable judgments and sometimes lead to severe and systematic errors.
(Kahneman and Tversky, 1973:237)
Cognitive illusions are explained by non-statistical algorithms, known as cognitive heuristics.
Statistical and non-statistical heuristics
Proponents of a third position do not want to be forced to choose between statistical and non-statistical algorithms, but want to have them both. Fong and Nisbett … argue that people possess both rudimentary but abstract intuitive versions of statistical principles, such as the law of large numbers, and non-statistical heuristics, such as representativeness. The basis for these conclusions is the results of training studies. For instance, the experimenters first teach the subjects the law of large numbers or some other statistical principle, and subsequently explain how to apply this principle to a real-world domain such as sports problems. Subjects are then tested on similar problems from the same or other domains. The typical result is that more subjects reason statistically, but transfer to domains not trained in is often low.
However, Gigerenzer argues that we need to consider more than just the mental algorithms.
Information needs representation. In order to communicate information, it has to be represented in some symbol system. Take numerical information. This information can be represented by the Arabic numeral system, by the binary system, by Roman numerals, or by other systems. These different representations can be mapped onto one another in a one-to-one way, and are in this sense equivalent representations. But they are not necessarily equivalent for an algorithm. Pocket calculators, for instance, generally work on the Arabic base-10 system, whereas general-purpose computers work on the base-2 system. The numerals 100000 and 32 are representations of the number thirty-two in the binary and Arabic systems, respectively. The algorithms of my pocket calculator will perform badly on the former kind of representation but work well on the latter.
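The equivalence of the two numerals can be checked directly. The following sketch simply uses Python's built-in base conversion to confirm that the binary and Arabic numerals denote the same number (the variable names are illustrative, not from the source):

```python
# One number, two representations: the binary numeral 100000 and the
# Arabic (base-10) numeral 32 both denote the number thirty-two.
n_from_binary = int("100000", 2)   # parse as a base-2 numeral
n_from_arabic = int("32", 10)      # parse as a base-10 numeral

print(n_from_binary)                     # 32
print(n_from_binary == n_from_arabic)    # True: equivalent representations
```

The representations are interchangeable for *us* as observers, yet an algorithm built for one base will stumble on the other, which is the point of the calculator example above.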
The human mind finds itself in an analogous situation. The algorithms most Western people have stored in their minds - such as how to add, subtract and multiply - work well on Arabic numerals. But contemplate for a moment division in Roman numerals, without transforming them first into Arabic numerals.
There is more to the distinction between an algorithm and a representation of information. Not only are algorithms tuned to particular representations, but different representations make explicit different features of the same information. For instance, one can quickly see whether a number is a power of 10 in an Arabic numeral representation, whereas to see whether that number is a power of 2 is more difficult. The converse holds with binary numbers. Finally, algorithms are tailored to given representations. Some representations allow for simpler and faster algorithms than others. Binary representation, for instance, is better suited to electronic techniques than Arabic representation. Arabic numerals, on the other hand, are better suited to multiplication and elaborate mathematical algorithms than Roman numerals …
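The power-of-10 versus power-of-2 contrast can be made concrete. In any base, a power of the base is written as a '1' followed only by '0's, so one trivial string test works for both questions, but only on the matching representation. The function name below is my own illustration, not from the source:

```python
def leading_one_then_zeros(numeral: str) -> bool:
    # A numeral of the form '1' followed only by '0's denotes a power
    # of whatever base the numeral is written in.
    return numeral[0] == "1" and all(d == "0" for d in numeral[1:])

# Power of ten: easy to read off the Arabic (base-10) numeral.
print(leading_one_then_zeros("10000"))        # 10^4 in base 10 -> True

# Power of two: opaque in base 10, trivial once converted to base 2.
print(leading_one_then_zeros("4096"))         # 2^12 as a base-10 numeral -> False
print(leading_one_then_zeros(bin(4096)[2:]))  # the same number in binary -> True
```

The algorithm is identical in both cases; only the representation changes, and with it the feature of the number that the representation makes explicit.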