Dialectical bootstrapping: A new paradigm to improve individual judgment

Ref. 10503

General description

Period

-

Geographical area

Additional geographical information

-

Short description

Executive summary
We proposed "dialectical bootstrapping", that is, simulating the "wisdom of crowds" within a single mind, as a technique to improve individual judgment (Herzog & Hertwig, 2009). This project tests the robustness of dialectical bootstrapping and whether people can be taught to use it.

Background
Different lines of research, including psychology, management science, computer science, statistics, and medicine, have addressed how best to make quantitative predictions. One time-tested way to improve judgment is the following: when there are several different plausible predictions (stemming from different experts and/or different statistical procedures) and no reliable track record of their past performance is available, average those predictions. Averages of predictions outperform the typical prediction in the set and can even outperform the best single prediction. Can a single person benefit from averaging without actually consulting other people? We proposed a novel approach to improving individual judgment called "dialectical bootstrapping", which enables different opinions to be generated and combined by the same person, thus simulating the "wisdom of crowds" within a single mind (Herzog & Hertwig, 2009).

Goals
We have two main goals. First, we test the robustness and boundary conditions of dialectical bootstrapping by investigating its effectiveness in different domains and using different procedures. Second, we examine several psychological aspects of dialectical bootstrapping, namely whether people spontaneously use it, whether the tool can be taught, and whether people are prepared to combine conflicting estimates at all when the conflict's source is their own mind.

Relevance
Dialectical bootstrapping promises to be a practical tool for improving quantitative judgments. In many settings, for example in financial, medical, and managerial decision making, successful decision aids are available that can be applied in routine decision-making situations. However, whenever new situations emerge, decision makers often lack the time, resources, or data to construct appropriate statistical models or to seek advice from other people. Instead, they could try to tap into the wisdom of the crowd in their own mind by applying dialectical bootstrapping.

Extended summary
Experts and laypeople alike cannot help but make judgments under uncertainty in a wide range of situations. Different lines of research, including psychology, management science, computer science, statistics, and medicine, have addressed how best to make quantitative predictions. One time-tested way to improve judgment is the following: when there are several different plausible predictions (stemming from different experts and/or different statistical procedures) and no reliable track record of their past performance is available, mechanically average those predictions. Averages of predictions outperform the typical prediction in the set and can even outperform the best single prediction. Averaging estimates increases accuracy in two ways: it cancels out random error, and it can reduce systematic error. The key insight is that averaging succeeds in decreasing error to the extent that the individual errors are non-redundant. Models of forecast combination demonstrate that averaging is a good default strategy and should not be abandoned without good reason.
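The error-cancellation logic behind this claim can be spelled out with a simple algebraic identity. The following is an illustrative sketch not contained in the original summary: for a true value t and two estimates t + e_1 and t + e_2 with errors e_1 and e_2, the squared error of their average obeys

\[
\left(\frac{e_1 + e_2}{2}\right)^{2}
= \frac{e_1^{2} + e_2^{2}}{2} - \frac{(e_1 - e_2)^{2}}{4}
\]

Because the last term is non-negative, the squared error of the average never exceeds the mean of the two individual squared errors, and it is strictly smaller whenever e_1 differs from e_2, that is, exactly when the errors are non-redundant.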
In stark contrast to the usefulness of averaging, most people are reluctant to harness its power to improve their judgment, mostly because they fail to appreciate the averaging principle: they do not consider that errors of opposing sign cancel each other out. Instead, they tend to assume that the accuracy of the average equals the average accuracy of the sources considered. As a consequence, people often try, for instance, to pick the best advisor from a set of advisors, with typically only modest success, thereby forgoing the gain they could have achieved had they averaged rather than selected what they thought was the best advice.

Aggregation, however, requires more than one prediction. What can a person do if he or she is unable to consult a crowd of people or models? Can a single person still benefit from the wisdom of the crowd without actually consulting other people? That is, can cognitive diversity within one person give rise to the "wisdom of crowds" in a single mind?

This research project investigates a novel approach to improving individual judgment that was first proposed by Herzog and Hertwig (2009): a mental tool called "dialectical bootstrapping", which enables different opinions to be generated and combined by the same person, thus simulating the wisdom of crowds within a single mind. According to this framework, an individual can reduce her overall error by averaging her first estimate with a second, plausible estimate (termed the "dialectical estimate"), which is likely to have a different error from the first. Herzog and Hertwig provided an existence proof for dialectical bootstrapping.

This research proposal aims to extend our research on dialectical bootstrapping in two ways. First, we test the robustness and boundary conditions of dialectical bootstrapping by investigating its effectiveness in different domains, using different elicitation procedures, and assessing the marginal benefits of additional dialectical estimates. Second, we examine several psychological aspects of dialectical bootstrapping, namely whether people spontaneously use it, whether the tool can be taught, and whether people are prepared to combine conflicting estimates at all when the conflict's source is their own mind.
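The expected benefit of combining a first estimate with a dialectical estimate can be illustrated with a small simulation. The Python sketch below is an illustrative addition: it assumes normally distributed errors, and all parameter values (the true value, biases, and noise levels) are made-up assumptions rather than results from Herzog and Hertwig (2009).

import numpy as np

rng = np.random.default_rng(seed=42)
n_judgments = 100_000   # simulated judgment occasions
truth = 100.0           # true value of the quantity being estimated

# First estimate: systematically too low, plus random error
# (bias of -10 and noise SD of 15 are illustrative assumptions).
first = truth + rng.normal(loc=-10.0, scale=15.0, size=n_judgments)

# Dialectical estimate: generated under different assumptions,
# so its systematic error points the other way (here: too high).
dialectical = truth + rng.normal(loc=6.0, scale=15.0, size=n_judgments)

# Dialectical bootstrapping: average the two estimates.
average = (first + dialectical) / 2.0

def mse(estimates: np.ndarray) -> float:
    """Mean squared error relative to the true value."""
    return float(np.mean((estimates - truth) ** 2))

print(f"MSE, first estimate:       {mse(first):7.1f}")
print(f"MSE, dialectical estimate: {mse(dialectical):7.1f}")
print(f"MSE, average of the two:   {mse(average):7.1f}")

Because the two simulated estimates have partially opposing systematic errors and independent random errors, their average has a markedly lower mean squared error than either estimate alone. If the dialectical estimate simply reproduced the first estimate's error, averaging would yield no gain, mirroring the non-redundancy condition stated above.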

Results

-