Both the heuristics-and-biases program and the recently emerging work on ecologically rational heuristics have been linked to Herbert Simon's notion of bounded rationality. This concept can be understood by contrasting it with the traditional decision-making approach embodied in unbounded rationality, illustrated by the following example. Imagine being faced with the decision of whether or not to marry. How can this decision be made in a rational way? Assume that you attempted to resolve this question by maximizing your subjective expected utility. To compute your personal expected utility for marrying, you would have to determine all the possible consequences that marriage could bring (e.g., children, companionship, and countless further consequences), attach quantitative probabilities to each of these consequences, estimate the subjective utility of each, multiply each utility by its associated probability, and finally add all these numbers. The same procedure would have to be repeated for the alternative "not marry." Finally, you would choose the alternative with the higher total expected utility.
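In computational terms, the procedure just described can be sketched as follows. Every consequence, probability, and utility in this sketch is a hypothetical illustration, not a value from the text:

```python
# Sketch of subjective-expected-utility maximization for the marriage
# example. All consequences, probabilities, and utilities below are
# hypothetical illustrations.

def expected_utility(consequences):
    """Sum of probability-weighted subjective utilities."""
    return sum(p * u for p, u in consequences)

# (probability, subjective utility) pairs for some imagined consequences
marry = [(0.6, 80), (0.3, 50), (0.1, -40)]   # children, companionship, ...
not_marry = [(0.7, 40), (0.3, 20)]

options = {"marry": expected_utility(marry),
           "not marry": expected_utility(not_marry)}
best = max(options, key=options.get)         # pick the higher total
```

Even this toy version makes the criticism below concrete: the hard part is not the final summation but enumerating all consequences and assigning each a probability and a utility in the first place.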
Maximization of expected utility in this way is probably the best-known realization of the prominent vision of unbounded rationality. Models of unbounded rationality have been criticized for having little or no regard for the constraints of time, knowledge, and computational capacity that real humans face. For instance, while you are deliberating about whether marrying is the right choice, considering each of the myriad conceivable consequences and assigning probabilities to each, any potential partner will probably have married someone else. Proponents of unbounded rationality generally concede that their models assume unrealistic mental abilities, but they nevertheless defend them by arguing that humans act as if they were unboundedly rational. In this interpretation, the models of unbounded rationality do not describe the process but merely the outcome of reasoning.
If the lofty ideals of human reasoning do not capture the processes of how real people make decisions in the real world, what then are those processes? In other words, what models take into account the challenging conditions under which people have to solve problems, including limited time, knowledge, and computational capacity? Herbert Simon proposed that these constraints force humans to use "approximate methods" (heuristics) to handle most tasks. These approximate methods form the basis of bounded rationality.
Simon's vision of bounded rationality has two interlocking components that act like the blades of a pair of scissors to shape human rational behavior. The two blades in this metaphor are the computational capabilities of the actor and the structure of task environments. The first blade, computational capability, implies that models of human judgment and decision making should be built on what we actually know about the mind's limitations rather than on the fictitious competencies assumed in models of unbounded rationality. Two key limitations are central to bounded rationality. The first is that, contrary to models of unbounded rationality, humans cannot search for information indefinitely. In computationally realistic models, search must be limited because real decision makers have only a finite amount of time, knowledge, attention, or money to apply to a particular decision. Limited search requires rules to specify what information to seek and in what order (i.e., an information search rule) and a way to decide when to stop looking for information (i.e., a stopping rule).
The second key limitation of the human mind is that the pieces of information uncovered by the search process are not likely to be processed in an overly complex way. By contrast, most traditional models of inference, from linear multiple regression models to Bayesian models to neural networks, try to find some optimal integration of all information available: Every bit of information is taken into account, weighted, and combined in some more or less computationally expensive way. Models of bounded rationality instead rely on processing steps that are computationally bounded. For instance, a bounded decision or inference can be based on only one or a few pieces of information, whatever the total amount of information found during search. The simple decision rule used to process this limited knowledge need not weigh or combine pieces of information, thus eliminating the need to convert different types of information into a single common currency (e.g., utilities). Note that decision rules and information search and stopping rules are connected. For instance, when a heuristic searches for only one (discriminating) cue, this largely constrains the possible decision rules to those that do not integrate information. On the other hand, if search extends to many cues, the decision rule will be less constrained. The cues may then be weighted and integrated, or only the best of them may determine the decision.
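The interplay of the three rules described above can be sketched as a one-reason heuristic in the spirit of such models. The city data, cue names, and cue ordering here are hypothetical illustrations:

```python
# Sketch of a one-reason heuristic combining the three rules:
# a search rule (look up cues in order of validity), a stopping rule
# (stop at the first cue that discriminates between the options), and
# a decision rule (decide on that single cue, with no weighting or
# integration). The data below are hypothetical illustrations.

def one_reason_choice(cues, option_a, option_b):
    for cue in cues:                              # search rule: best cue first
        a, b = cue(option_a), cue(option_b)
        if None not in (a, b) and a != b:         # stopping rule: discriminates
            return option_a if a > b else option_b  # decision rule: one cue
    return None                                   # no cue discriminates: guess

# Hypothetical cue values for inferring which of two cities is larger
city_data = {
    "CityA": {"capital": 1, "airport": 1},
    "CityB": {"capital": 0, "airport": 1},
}
cues = [lambda city: city_data[city]["capital"],
        lambda city: city_data[city]["airport"]]
larger = one_reason_choice(cues, "CityA", "CityB")
```

In this sketch the first cue already discriminates, so search stops there and the second cue is never consulted, which is precisely why no common currency for weighting and integrating cues is needed.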
These two key limitations, limited information search and limited processing of information, can be instantiated in models of heuristics. The limitations help explain how heuristics achieve one of their most important advantages, namely, speed. In fact, for much decision making in the real world—the stockbroker who decides within seconds to keep or sell a stock, or the captain of a firefighting squad who within a few moments must predict how a fire will progress and whether or not to pull out the squad—speed is often the crucial objective.
The second blade in Simon's scissors metaphor, operating in tandem with computational capability, is environmental structure. This blade is of crucial importance in shaping bounded rationality because it can explain when and why heuristics perform well, namely, when the structure of the heuristic is adapted to the structure of the environment (i.e., when the heuristic is ecologically, rather than logically, rational). Simon's classic example concerns foraging organisms that have a single need—food. An organism living in an environment in which little heaps of food are randomly distributed can survive with a simple heuristic: Run around randomly until a heap of food is found. For this, the organism needs some capacity for movement, but it does not need a capacity for inference or learning. For an organism in an environment in which food is distributed not randomly but in patches whose locations can be inferred from cues, more sophisticated strategies are possible. For instance, it could learn the association between cues and food and store this information in memory. The general point is that in order to understand which heuristic an organism employs, and when and why the heuristic works well, one needs to examine the structure of the information in the environment.
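The random-search heuristic in Simon's foraging example can be sketched as a random walk on a grid. The grid world below is a hypothetical illustration; food is placed on every cell adjacent to the start, so this particular walk succeeds on its first step:

```python
import random

# Sketch of Simon's foraging example: an organism with no capacity for
# inference or learning survives by moving at random until it reaches a
# heap of food. The grid world is a hypothetical illustration; food on
# every neighboring cell guarantees success after one step.

def random_forager(food_sites, start=(0, 0), max_steps=10_000):
    """Random walk on a grid; stop when a food site is reached."""
    x, y = start
    for step in range(max_steps):
        if (x, y) in food_sites:
            return (x, y), step                      # found food
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
    return None, max_steps                           # gave up

food = {(1, 0), (-1, 0), (0, 1), (0, -1)}            # heaps around the start
site, steps = random_forager(food)
```

The sparser the food, the longer such blind search takes, which is why an environment with cue-predictable patches rewards the more sophisticated, memory-based strategies mentioned above.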