In a well-run insurance company, if you randomly selected two qualified underwriters or claims adjusters, how different would you expect their estimates for the same case to be? Specifically, what would be the difference between the two estimates, as a percentage of their average?
We asked numerous executives in the company for their answers, and in subsequent years, we have obtained estimates from a wide variety of people in different professions. Surprisingly, one answer is clearly more popular than all others. Most executives of the insurance company guessed 10% or less. When we asked 828 CEOs and senior executives from a variety of industries how much variation they expected to find in similar expert judgments, 10% was also the median answer and the most frequent one (the second most popular was 15%). A 10% difference would mean, for instance, that one of the two underwriters set a premium of $9,500 while the other quoted $10,500. Not a negligible difference, but one that an organization can be expected to tolerate.
Our noise audit found much greater differences. By our measure, the median difference in underwriting was 55%, about five times as large as most people, including the company’s executives, had expected.
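The metric in question, the difference between two estimates expressed as a percentage of their average, can be made concrete with a small sketch. The premium figures are the ones from the example above; the function name is mine:

```python
def relative_difference(a: float, b: float) -> float:
    """Absolute difference between two estimates as a fraction of their mean."""
    mean = (a + b) / 2
    return abs(a - b) / mean

# The two underwriters from the example: $9,500 vs. $10,500.
# Difference is $1,000 on an average of $10,000, i.e. 10%.
gap = relative_difference(9_500, 10_500)
print(f"{gap:.0%}")  # 10%
```

At the audit's observed median of 55%, the same $10,000 average premium would correspond to quotes roughly $2,750 above and below it, e.g. about $7,250 versus $12,750.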
Many other studies have produced similar results. Kahneman and Tversky divided 245 undergraduates at the University of British Columbia in half and asked one group to estimate the probability of a massive flood somewhere in North America in 1983, in which more than 1,000 people drown. The second group was asked about an earthquake in California sometime in 1983, causing a flood in which more than 1,000 people drown. The second scenario logically has to be less likely than the first, yet people rated it one-third more likely. Nothing says ‘California’ quite like ‘earthquake’.
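The logic Gardner appeals to is the conjunction rule of probability: a joint event can never be more probable than either of its components on its own. A minimal sketch, with probability numbers invented purely for illustration:

```python
# Conjunction rule: P(A and B) <= P(A), whatever the numbers are.
# Illustrative (invented) probabilities:
p_flood = 0.01              # a massive North American flood in the year
p_quake_given_flood = 0.2   # hypothetical share of such floods caused by a California quake

# "California earthquake causing such a flood" is a special case of "such a flood":
p_quake_and_flood = p_flood * p_quake_given_flood

# The joint event cannot exceed the broader one (here 0.002 <= 0.01),
# yet subjects rated the more specific, more vivid scenario as likelier.
print(p_quake_and_flood <= p_flood)  # True
```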
Excerpt from: Risk: The Science and Politics of Fear by Dan Gardner
I’ve been to several scientific conferences at which Professor Kahneman has spoken; and, when Daniel Kahneman talks, people listen. I am invariably among them. So I took special notice of his answer to a fascinating challenge put to him not long ago by an online discussion site. He was asked to specify the one scientific concept that, if appreciated properly, would most improve everyone’s understanding of the world. Although in response he provided a full five-hundred-word essay describing what he called “the focusing illusion,” his answer is neatly summarized in the essay’s title: “Nothing in life is as important as you think it is while you are thinking about it.”
Daniel Kahneman sets them straight in Thinking, Fast and Slow: ‘If you care about being thought credible and intelligent, do not use complex language where simpler language will do. My Princeton colleague Danny Oppenheimer refuted a myth prevalent among undergraduates about the vocabulary that professors find most impressive. In an article titled “Consequences of Erudite Vernacular Utilized Irrespective of Necessity: Problems with Using Long Words Needlessly”, he showed that couching familiar ideas in pretentious language is taken as a sign of poor intelligence and low credibility.’
“To withdraw now is to accept a sure loss,” he writes about digging oneself deeper into a political hole, “and that option is deeply unattractive.” When you combine this with the force of commitment, “the option of hanging on will therefore be relatively attractive, even if the chances of success are small and the cost of delaying failure is high.”
The availability heuristic was first demonstrated by behavioural psychologists Daniel Kahneman and Amos Tversky in 1973. In their classic experiment, they asked people to listen to a list of names and then recall whether there were more men or women on the list. Some people in the experiment were read a list of famous men and less famous women, while others were read the opposite. Afterwards, when quizzed by the researchers, individuals were more likely to say that there were more of the gender represented by the more famous names. Later researchers have linked this effect to how easily people can retrieve information: we tend to over-rely on what we can remember easily when coming to decisions or judgements.
Why does this matter? There’s solid evidence that experiencing such losses—noticing that our portfolio is losing money—leads to poor choices. In one lab experiment by Richard Thaler, Amos Tversky, Daniel Kahneman, and Alan Schwartz, subjects were far more likely to invest in a bond fund when feedback was given more frequently. Unfortunately, these low-risk bonds also generate lower returns over the long haul. As the scientists noted, “Providing such investors with frequent feedback about their outcomes is likely to encourage their worst tendencies…. More is not always better. The subjects with the most data did the worst in terms of money earned.” Such is the vicious circle of loss aversion, as our strong dislike of losses causes us to lose even more.