As the sixth-century BC poet and philosopher Lao Tzu observed, “Those who have knowledge don’t predict. Those who predict don’t have knowledge.”
Auditors provide a good example of this bias. One hundred thirty-nine professional auditors were given five different auditing cases to examine. The cases concerned a variety of controversial aspects of accounting. For instance, one covered the recognition of intangibles, one covered revenue recognition, and one concerned capitalization versus expensing of expenditures. The auditors were told the cases were independent of each other.
The auditors were randomly assigned to either work for the company or work for an outside investor who was considering investing in the company in question. The auditors who were told they were working for the company were 31 percent more likely to accept the various dubious accounting moves than those who were told they worked for the outside investor. So much for an impartial outsider—and this was in the post-Enron age!
Philip Tetlock has conducted one of the most comprehensive studies of forecasters, their accuracy, and their excuses. Studying experts' views on a wide range of world political events over a decade, he found that, across the vast array of predictions, experts who reported 80 percent or more confidence in their forecasts were actually correct only around 45 percent of the time. Across all predictions, the experts were little better than coin tossers.
They asked forty iPod owners how influenced they were by the trendiness of the product relative to their peers. The scale ran from one (much less than average) to nine (much more than average), with five as average, so the neutral answer was clearly five. Yet the average response from participants was 3.3.
Another frightening example comes from the realm of medicine. This time participants were given information on the effectiveness of treatments, expressed as the percentage of patients cured overall (ranging from 90 percent down to 30 percent). This is known as base rate information. They were also given a story, which could be positive, negative, or ambiguous.
For instance, the positive story read as follows: Pat’s decision to undergo Tamoxol resulted in a positive outcome. The entire worm was destroyed. Doctors were confident the disease would not resume its course. At one month post-treatment, Pat’s recovery was certain.
The negative story read: Pat’s decision to undergo Tamoxol resulted in a poor outcome. The worm was not completely destroyed. The disease resumed its course. At one month post-treatment, Pat was blind and had lost the ability to walk.
Subjects were then asked whether they would undergo the treatment if they were diagnosed with the disease. Of course, people should have relied on the base rate information about the treatment’s effectiveness, since it represented a far larger sample of experience than a single anecdote. But did this actually happen?
Of course not. Instead, the base rate information was essentially ignored in favor of the anecdotal story. For instance, when participants were given a positive story and were told the treatment was 90 percent effective, 88 percent of people thought they would go with the treatment. However, when the participants were given a negative story and again told the treatment was 90 percent effective, only 39 percent of people opted to pursue this line of treatment.
Conversely, when told the treatment was only 30 percent effective and given a negative story, only 7 percent said they would follow this treatment. However, when low effectiveness was combined with a good story, 78 percent of people said they would take the drug. As you can see, the evidence on effectiveness of the treatments was completely ignored in favor of the power of the story.
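The arithmetic behind that conclusion can be checked directly. Using only the four choice rates reported above, a minimal sketch (the grouping by story valence and by base rate is my own illustration, not part of the study) compares how much the story moved people versus how much the evidence did:

```python
# Reported choice rates from the study (percent opting for the treatment),
# keyed by (base-rate effectiveness, story valence).
responses = {
    (90, "positive"): 88,
    (90, "negative"): 39,
    (30, "positive"): 78,
    (30, "negative"): 7,
}

def avg(values):
    values = list(values)
    return sum(values) / len(values)

# If subjects weighed only the base rate, choice rates would move with
# effectiveness and the story would have no effect. Measure both swings.
story_effect = (
    avg(v for (eff, s), v in responses.items() if s == "positive")
    - avg(v for (eff, s), v in responses.items() if s == "negative")
)
base_rate_effect = (
    avg(v for (eff, s), v in responses.items() if eff == 90)
    - avg(v for (eff, s), v in responses.items() if eff == 30)
)

print(f"Story effect:     {story_effect:+.0f} points")      # +60
print(f"Base-rate effect: {base_rate_effect:+.0f} points")  # +21
```

Swapping the story's valence shifts choices by about 60 percentage points, while tripling the cure rate shifts them by only about 21 — the anecdote dominates the evidence.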