The benefits of cognitive limits

Jason Collins

January 19, 2015

Cleaning up some notes recently, I was reminded of another interesting piece from Gerd Gigerenzer’s Rationality for Mortals:

Is perfect memory desirable, without error? The answer seems to be no. The “sins” of our memory seem to be good errors, that is, by-products (“spandrels”) of a system adapted to the demands of our environments. In this view, forgetting prevents the sheer mass of details stored in an unlimited memory from critically slowing down and inhibiting the retrieval of the few important experiences. Too much memory would impair the mind’s ability to abstract, to infer, and to learn. Moreover, the nature of memory is not simply storing and retrieving. Memory actively “makes up” memories—that is, it makes inferences and reconstructs the past from the present. This is in contrast to perception, which also makes uncertain inferences but reconstructs the present from the past. Memory needs to be functional, not veridical. To build a system that does not forget will not result in human intelligence.

Cognitive limitations both constrain and enable adaptive behavior. There is a point where more information and more cognitive processing can actually do harm, as illustrated in the case of perfect memory. Built-in limitations can in fact be beneficial, enabling new functions that would be absent without them (Hertwig & Todd, 2003). …

Newport (1990) argued that the very constraints of the developing brain of small children enable them to learn their first language fluently. Late language learners, in contrast, tend to experience difficulties when attempting to learn the full range of semantic mappings with their mature mental capacities. In a test of this argument, Elman (1993) tried to get a large neural network with extensive memory to learn the grammatical relationships in a set of several thousand sentences, yet the network faltered. Instead of taking the obvious step of adding more memory to solve the problem, Elman restricted its memory, making the network forget after every three or four words—to mimic the memory restrictions of young children who learn their first language. The network with the restricted memory could not possibly make sense of the long complicated sentences, but its restrictions forced it to focus on the short simple sentences, which it did learn correctly, mastering the small set of grammatical relationships in this subset. Elman then increased the network’s effective memory to five or six words, and so on. By starting small, the network ultimately learned the entire corpus of sentences, which the full network with full memory had never been able to do alone.
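The "starting small" procedure can be sketched in a few lines. This toy count-based next-word predictor and miniature corpus are my own illustration, not Elman's simple recurrent network, but the curriculum is the same: the model's context window starts small and grows over successive training passes.

```python
from collections import defaultdict

class WindowedPredictor:
    """Toy next-word model whose 'memory' is a fixed context window."""

    def __init__(self):
        # counts[context_tuple][next_word] -> frequency
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, sentences, window):
        # Only the last `window` words are visible as context,
        # mimicking the restricted memory of Elman's network.
        for sent in sentences:
            words = sent.split()
            for i, nxt in enumerate(words[1:], start=1):
                ctx = tuple(words[max(0, i - window):i])
                self.counts[ctx][nxt] += 1

    def predict(self, context, window):
        ctx = tuple(context[-window:])
        # Back off to shorter contexts if the full window was never seen.
        while ctx and ctx not in self.counts:
            ctx = ctx[1:]
        if not ctx:
            return None
        options = self.counts[ctx]
        return max(options, key=options.get)

corpus = [
    "the cat sees the dog",
    "the dog sees the cat",
    "the cat that the dog sees runs",  # longer, centre-embedded
]

model = WindowedPredictor()
# "Starting small": grow the effective memory over successive passes,
# so short-range regularities are learned before long-range ones.
for window in (2, 3, 4, 5):
    model.train(corpus, window)

print(model.predict(["the", "cat", "sees", "the"], window=5))  # prints "dog"
```

A count model learns either way on a corpus this small; Elman's point was about gradient-trained networks, where early exposure to long sentences traps the learner in poor solutions, and the growing window avoids that.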

Gigerenzer also makes the case that most visual illusions are "good errors", necessary in an intelligent animal. The assumptions that give rise to the illusions, such as "light tends to come from above", shape what we "see".

Perceptual illusions are good errors, a necessary consequence of a highly intelligent “betting” machine (Gregory, 1974). Therefore, a perceptual system that does not make any errors would not be an intelligent system. It would report only what the eye can “see.” That would be both too little and too much. Too little because perception must go beyond the information given, since it has to abstract and generalize. Too much because a “veridical” system would overwhelm the mind with a vast amount of irrelevant details. Perceptual errors, therefore, are a necessary part, or by-product, of an intelligent system. They exemplify a second source of good errors: Visual illusions result from “bets” that are virtually incorrigible, whereas the “bets” in trial-and-error learning are made in order to be corrected eventually. Both kinds of gambles are indispensable and complementary tools of an intelligent mind.

The case of visual illusions illustrates the general proposition that every intelligent system makes good errors; otherwise it would not be intelligent. The reason is that the outside world is uncertain, and the system has to make intelligent inferences based on assumed ecological structures. Going beyond the information given by making inferences will produce systematic errors. Not risking errors would destroy intelligence.
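The "betting" logic can be sketched as a toy Bayesian inference (the numbers are my own illustration, not Gigerenzer's): the shading of a dent lit from below is identical to that of a bump lit from above, so the image alone cannot decide, and the light-from-above prior settles the bet.

```python
def posterior_convex(p_light_above=0.9):
    """Posterior probability that a 'bright on top' patch is a convex bump.

    The image is ambiguous: (bump, light above) and (dent, light below)
    produce the same shading. With a uniform prior over shapes, the
    'bet' comes entirely from the illustrative lighting prior.
    """
    p_convex = 0.5  # uniform prior over bump vs dent
    joint_bump = p_convex * p_light_above          # bump lit from above
    joint_dent = (1 - p_convex) * (1 - p_light_above)  # dent lit from below
    return joint_bump / (joint_bump + joint_dent)

print(round(posterior_convex(), 2))   # strong bet on "bump"
print(round(posterior_convex(0.5), 2))  # no lighting prior: no bet either way
```

When the scene really is a dent lit from below, the system still bets "bump": the same prior that makes perception work in ordinary scenes produces a systematic, virtually incorrigible error in the rigged one.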

In other parts of his work, Gigerenzer builds the case that many of the “biases” identified by Kahneman and friends fall into the “good errors” camp.