With all due respect to the lessons of experience, I prefer the rigor of evidence. When a trio of psychologists conducted a comprehensive review of thirty-three studies, they found that in every one, the majority of answer revisions were from wrong to right. This phenomenon is known as the first-instinct fallacy.
In one demonstration, psychologists counted eraser marks on the exams of more than 1,500 students in Illinois. Only a quarter of the changes were from right to wrong, while half were from wrong to right. I’ve seen it in my own classroom year after year: my students’ final exams have surprisingly few eraser marks, but those who do rethink their first answers rather than staying anchored to them end up improving their scores.
In another life-and-death situation, in 1989 Bengal tigers killed about 60 villagers from India’s Ganges delta. No weapons seemed to work against them, including lacing dummies with live wires to shock the tigers away from human populations.
Then a student at the Science Club of Calcutta noticed that tigers only attacked when they thought they were unseen, and recalled that the patterns decorating some species of butterflies, beetles, and caterpillars look like big eyes, ostensibly to trick predators into thinking their prey was also watching them. The result: a human face mask, worn on the back of the head. Remarkably, no one wearing a mask was attacked by a tiger for the next three years; anyone killed by tigers during that time had either refused to wear the mask or had taken it off while working.
In 1963, the UC Santa Barbara ecologist and economist Garrett Hardin proposed his First Law of Ecology: “You can never merely do one thing.” We operate in a world of multiple, overlapping connections, like a web, with many significant, yet obscure and unpredictable, relationships. He developed second-order thinking into a tool, showing that if you don’t consider “the effects of the effects,” you can’t really claim to be doing any thinking at all.
When it comes to the overuse of antibiotics in meat, the first-order consequence is that the animals gain more weight per pound of food consumed, and thus there is profit for the farmer. Animals are sold by weight, so the less food you have to use to bulk them up, the more money you will make when you go to sell them.
The second-order effects, however, have many serious, negative consequences. The bacteria that survive this continued antibiotic exposure are antibiotic resistant. That means that the agricultural industry, when using these antibiotics as bulking agents, is allowing mass numbers of drug-resistant
isolation is powerful but misleading. For a start, while humans have accumulated a vast store of collective knowledge, each of us alone knows surprisingly little, certainly less than we imagine. In 2002, the psychologists Frank Keil and Leonid Rozenblit asked people to rate their own understanding of how zips work. The respondents answered confidently — after all, they used zips all the time. But when asked to explain how a zip works, they failed dismally. Similar results were found when people were asked to describe climate change and the economy. We know a lot less than we think we do about the world around us. Cognitive scientists call this ‘the illusion of explanatory depth’, or just ‘the knowledge illusion’.
In a famous speech in the 1990s, Charlie Munger summed up this approach to practical wisdom: “Well, the first rule is that you can’t really know anything if you just remember isolated facts and try and bang ‘em back. If the facts don’t hang together on a latticework of theory, you don’t have them in a usable form. You’ve got to have models in your head. And you’ve got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You’ve got to hang experience on a latticework of models in your head.”
The core of Bayesian thinking (or Bayesian updating, as it can be called) is this: given that we have limited but useful information about the world, and are constantly encountering new information, we should probably take into account what we already know when we learn something new. As much of it as possible. Bayesian thinking allows us to use all relevant prior information in making decisions. Statisticians might call it a base rate, taking in outside information about past situations like the one you’re in.
Consider the headline “Violent Stabbings on the Rise.” Without Bayesian thinking, you might become genuinely afraid because your chances of being a victim of assault or murder are higher than they were a few months ago. But a Bayesian approach will have you putting this information into the context of what you already know about violent crime. You know that violent crime has been declining to its lowest rates in decades. Your city is safer now than it has been since this measurement started. Let’s say your chance of being a victim of a stabbing last year was one in 10,000, or 0.01%. The article states, with accuracy, that violent crime has doubled. It is now two in 10,000, or 0.02%. Is that worth being terribly worried about? The prior information here is key. When we factor it in, we realize that our safety has not really been compromised.
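The arithmetic in the passage can be sketched in a few lines. This is a minimal illustration using the book’s own illustrative figures (one in 10,000, then doubled), not real crime statistics:

```python
# Putting the scary headline in context: the relative change ("doubled!")
# versus the absolute change in risk.
prior_risk = 1 / 10_000        # last year's chance of being a victim: 0.01%
new_risk = 2 * prior_risk      # "violent crime has doubled": 0.02%

absolute_increase = new_risk - prior_risk
print(f"New risk: {new_risk:.4%}")                    # 0.0200%
print(f"Absolute increase: {absolute_increase:.4%}")  # 0.0100%
# Roughly one extra victim per 10,000 people: dramatic as a relative
# change, negligible as an absolute one.
```

The point of factoring in the prior is exactly this distinction: the headline reports the relative change, while your actual safety depends on the absolute level.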
The first flaw is perspective. We have a hard time seeing any system that we are in. Galileo had a great analogy to describe the limits of our default perspective. Imagine you are on a ship that has reached constant velocity (meaning without a change in speed or direction). You are below decks and there are no portholes. You drop a ball from your raised hand to the floor. To you, it looks as if the ball is dropping straight down, thereby confirming gravity is at work.
Now imagine you are a fish (with special x-ray vision) and you are watching this ship go past. You see the scientist inside, dropping a ball. You register the vertical change in the position of the ball. But you are also able to see a horizontal change. As the ball was pulled down by gravity it also shifted its position east by about 20 feet. The ship moved through the water and therefore so did the ball. The scientist on board, with no external point of reference, was not able to perceive this horizontal shift.
This analogy shows us the limits of our perception. We must be open to other perspectives if we truly want to understand the results of our actions. Despite feeling that we’ve got all the information, if we’re on the ship, the fish in the ocean has more he can share.
However, serious academic consideration of public opinion about fictitious issues did not start until the ’80s, when George Bishop and colleagues at the University of Cincinnati found that a third of Americans either favoured or opposed the fictitious Public Affairs Act. Bishop found that this figure dropped substantially when respondents were offered an explicit ‘don’t know’ option. However, 10 per cent of respondents still selected a substantive answer, even when given a clear opportunity to express their lack of familiarity. Similar findings were reported in the US at around the same time by Howard Schuman and Stanley Presser, who also found that a third of respondents to their survey expressed positions on issues which, though real, were so obscure that few ordinary citizens would ever have heard of them.
It’s much more challenging when emotional reactions are involved, as we’ve seen with smokers and cancer statistics. Psychologist Ziva Kunda found the same effect in the lab when she showed experimental subjects an article laying out the evidence that coffee or other sources of caffeine could increase the risk to women of developing breast cysts. Most people found the article pretty convincing. Women who drank a lot of coffee did not.
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws.
The more extreme the emotional reaction, the harder it is to think straight.
Research routinely shows that people who are aware of communication from brand X are more likely to buy that brand. This is sometimes used as evidence that communication drives sales, but in fact causality usually runs the other way: buying brand X makes you more likely to notice its communications. This phenomenon (the so-called ‘Rosser Reeves effect’ – named after the famous 1950s adman) has been known for decades, yet is still routinely used to ‘prove’ communication effectiveness (most recently to justify social media use).
Marketing and advertising people can talk a load of nonsense at the best of times. But if you want to hear them at their worst, ask them to talk about social trends. The average social trends presentation is a guaranteed mix of the obvious, irrelevant and false.
Recently, we were listening to a conference speech about ‘changing lifestyles’. Life nowadays is faster than ever, said the speaker. We work longer hours. We have less free time. Families are fragmenting. Food is eaten on the run.
We’ve been listening to this bullshit for 30 years. And it’s no more true now than it was then. The inconvenient, less headline-worthy truth is that people have more free time than ever. Economic cycles wax and wane, but the long-term trend in all developed economies is toward shorter, more flexible working hours. And longer holidays. People start work later in life and spend much longer in retirement. Work takes up a smaller percentage of our life than it used to.
Related myths about pressures on family time are equally false. Contrary to popular belief, parents in developed economies spend more time with their children these days, not less. Research shows the amount of time families spend eating together has stayed remarkably constant over the years, as has the amount of time they spend together watching TV.
Let’s put this in perspective. Abraham Lincoln inspired generations in a speech that lasted two minutes. John F. Kennedy took 15 minutes to shoot for the moon. Martin Luther King Jr. articulated his dream of racial unity in 17 minutes. Steve Jobs gave one of the most famous college commencement speeches of our time at Stanford University in 15 minutes. If you can’t sell your idea or your dream in 10 to 15 minutes, keep editing until you can.
Ideas don’t sell themselves. Be selective about the words you use. If they don’t advance the story, remove them. Condense, simplify, and speak as briefly as possible. Have the courage to speak in grade-school language. Far from weakening your argument, these tips will elevate your ideas, making it more likely you’ll be heard.
“There are always three speeches for every one you actually gave: the one you practiced, the one you gave, and the one you wish you gave.”
Graham and I thought it was rather a good sketch. It was therefore terribly embarrassing when I found I’d lost it. I knew Graham was going to be cross, so when I’d given up looking for it, I sat down and rewrote the whole thing from memory. It actually turned out to be easier than I’d expected.
Then I found the original sketch and, out of curiosity, checked to see how well I’d recalled it when rewriting. Weirdly, I discovered that the remembered version was actually an improvement on the one that Graham and I had written. This puzzled the hell out of me.
Again I was forced to the conclusion that my mind must have continued to think about the sketch after Graham and I had finished it. And that my mind had been improving what we’d written, without my making any conscious attempt to do so. So when I remembered it, it was already better.
Chewing this over, I realised it was like the tip-of-the-tongue phenomenon: when you can’t remember a name, and you chase after it in your mind
Excerpt from: Creativity: A Short and Cheerful Guide by John Cleese
Another renowned venture capitalist, Kleiner Perkins’s Randy Komisar, takes this idea one step further. He dissuades members of the investment committee from expressing firm opinions by stating right away that they are for or against an investment idea. Instead, Komisar asks participants for a “balance sheet” of points for and against the investment: “Tell me what is good about this opportunity; tell me what is bad about it. Do not tell me your judgment yet. I don’t want to know.” Conventional wisdom dictates that everyone should have an opinion and make it clear. Instead, Komisar asks his colleagues to flip-flop!
The models whose success we admire are, by definition, those who have succeeded. But out of all the people who were “crazy enough to think they can change the world,” the vast majority did not manage to do it. For this very reason, we’ve never heard of them. We forget this when we focus only on the winners. We look only at the survivors, not at all those who took the same risks, adopted the same behaviors, and failed. This logical error is survivorship bias. We shouldn’t draw any conclusions from a sample that is composed only of survivors. Yet we do, because they are the only ones we see.
Our quest for models may inspire us, but it can also lead us astray. We would benefit from restraining our aspirations and learning from people who are similar to us, from decision makers whose success is less flashy, instead of a few idols
Such misleading stories, however, may still be influential and durable. In Human, All Too Human, philosopher Friedrich Nietzsche argues that “partial knowledge is more often victorious than full knowledge: it conceives things as simpler than they are and therefore makes its opinion easier to grasp and more persuasive.”
The ‘winter detector’ problem is common in big data analysis. A literal example, via computer scientist Sameer Singh, is the pattern-recognising algorithm that was shown many photos of wolves in the wild, and many photos of pet husky dogs. The algorithm seemed to be really good at distinguishing the two rather similar canines; it turned out that it was simply labelling any picture with snow as containing a wolf. An example with more serious implications was described by Janelle Shane in her book You Look Like a Thing and I Love You: an algorithm that was shown pictures of healthy skin and of skin cancer. The algorithm figured out the pattern: if there was a ruler in the photograph, it was cancer. If we don’t know why the algorithm is doing what it’s doing, we’re trusting our lives to a ruler detector.
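The wolf-versus-husky failure can be illustrated with a toy model. The sketch below is not the actual study’s algorithm; it is a hypothetical single-feature “decision stump” trained on made-up binary features, where snow happens to correlate perfectly with the wolf label:

```python
# A toy spurious-correlation demo: each "photo" is a dict of binary
# features. In the training set, snow appears in every wolf photo and
# in no husky photo, so snow is the most predictive single feature.
def best_stump(photos, labels):
    """Pick the single feature whose value best matches the label."""
    features = photos[0].keys()
    def accuracy(f):
        return sum(p[f] == y for p, y in zip(photos, labels)) / len(photos)
    return max(features, key=accuracy)

train = [
    {"pointy_ears": 1, "snow": 1},  # wolf
    {"pointy_ears": 1, "snow": 1},  # wolf
    {"pointy_ears": 1, "snow": 0},  # husky
    {"pointy_ears": 1, "snow": 0},  # husky
]
labels = [1, 1, 0, 0]  # 1 = wolf, 0 = husky

chosen = best_stump(train, labels)
print(chosen)  # "snow": pointy ears carry no signal, every animal has them

# A husky photographed in the snow is now confidently called a wolf.
husky_in_snow = {"pointy_ears": 1, "snow": 1}
print("wolf" if husky_in_snow[chosen] else "husky")  # "wolf"
```

The stump is behaving perfectly rationally given its training data; the problem is that the pattern it found is about backgrounds, not animals, which is exactly why interpretability matters before trusting such a system.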
The advertising industry – whose only important asset is ideas – has learned nothing from this. We keep heading in the wrong direction. We keep bulking up everything in our arsenal except our creative resources. Then we take the people who are supposed to be our idea people and give them till 3 o’clock to do a banner.
Sure, we need people who are tech-savvy and analytical. But more than anything, we need some brains-in-a-bottle who have no responsibility other than to sit in a corner and feed us crazy ideas. We keep looking to “transform” our industry but ignore the one transformation that would kill.
When pushed, people push back. So rather than telling people what to do, or trying to persuade, catalysts allow for agency and encourage people to convince themselves.
People are attached to the status quo. To ease endowment, catalysts surface the costs of inaction and help people realize that doing nothing isn’t as costless as it seems.
Too far from their backyard, people tend to disregard. Perspectives that are too far away fall in the region of rejection and get discounted, so catalysts shrink distance, asking for less and switching the field.
Seeds of doubt slow the winds of change. To get people to un-pause, catalysts alleviate uncertainty. Easier to try means more likely to buy.
Some things need more proof. Catalysts find corroborating evidence, using multiple sources to help overcome the translation problem.
Excerpt from: Catalyst by Jonah Berger
1. Myopia: a tendency to focus on overly short future time horizons when appraising immediate costs and the potential benefits of protective investments;
2. Amnesia: a tendency to forget too quickly the lessons of past disasters;
3. Optimism: a tendency to underestimate the likelihood that losses will occur from future hazards;
4. Inertia: a tendency to maintain the status quo or adopt a default option when there is uncertainty about the potential benefits of investing in alternative protective measures;
5. Simplification: a tendency to selectively attend to a subset of the relevant factors to consider when making choices involving risk; and
6. Herding: a tendency to base choices on the observed actions of others.
1. Establish the scope.
2. Break the challenge into addressable parts.
3. Identify the target outcome.
4. Map the relevant behaviors.
5. Identify the factors that affect each behavior.
6. Choose the priority behaviors to address.
7. Create evidence-led intervention(s).
8. Implement the intervention(s).
9. Assess the effects.
10. Take further action based on the results.
Excerpt from: Behavioural Insights by Michael Hallsworth
Research shows there are many psychological processes at work which together limit the effectiveness of brainstorming. ‘Social loafing’ – a group situation encourages and allows individuals to slack off. ‘Evaluation apprehension’ – we’re nervous of being judged by colleagues or looking stupid. ‘Production blocking’ – because only one person can speak at a time in a group, others can forget or reject their ideas while they wait. We’re also learning more about the power of our ‘herd’ tendencies. As humans, we have innate desires to conform to others with only the slightest encouragement. When we’re asked to think creatively, these implicit norms are invisible but powerful shackles on our ability to think differently.
No wonder so few ideas emerge.
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than try to understand it. Economists tend to cite their colleague Charles Goodhart, who wrote in 1975: ‘Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.’ (Or, more pithily: ‘When a measure becomes a target, it ceases to be a good measure.’) Psychologists turn to Donald T. Campbell, who around the same time explained: ‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.’
Goodhart and Campbell were on to the same basic problem: a statistical metric may be a pretty decent proxy for something that really matters, but it is almost always a proxy rather than the real thing.
It is to be found in the exceptional human capacity to synthesize our experiences, influences, knowledge and feelings into one, unified, original entity. To have such an inbuilt facility that enables us to make seemingly random connections across a broad range of them. It has to be the single most important creative faculty we have, as Einstein observed when he said, ‘Combinatory play seems to be the essential feature in productive thought.’
The process our conscious and unconscious selves go through when editing, connecting and combining all that we know and feel into an original coherent thought happens over a period of time. It cannot be forced. It happens when we are awake and when we are asleep. It happens when we are thinking about something else entirely, or playing a game of tennis. It happens because a stimulus in our immediate surroundings – usually without our knowing
The explanation for Kahan’s results? Ideology. Irrespective of the actual figures, Democrats who identified as liberal, normally in favour of gun control, tended to find that stricter laws brought crime down. For the conservative Republican participants, the reverse was the case. They found that stricter gun control legislation did not work.
These answers are no longer to do with the truth, Kahan argued. They are about protecting your identity or belonging to your tribe! And the people who were good at maths, Kahan also found, were all the better at this. Often completely subconsciously, by the way. It was their psyche that played tricks on them.
In evidence that stacking works, consider dental floss. Many of us clean our teeth regularly but fail to floss. To test whether stacking increases flossing, researchers gave fifty British participants, who flossed on average only 1.5 times per month, information encouraging them to do it more regularly.
Half of the participants were told to floss before they brushed at night, and half after they brushed. Note that only half of the participants were really stacking: using an existing automated response (brushing their teeth) as a cue for a new behavior (flossing). The other half, who first flossed and then brushed, had to remember, oh, yes, first I need to floss, before I brush. No automated cue.
Each day for four weeks, participants reported by text whether they flossed the night before. At the end of the month of reminders, they all flossed about twenty-four days on average. Most interesting is what they were all doing eight months later. Those who stacked, and flossed after they brushed, were still doing it about eleven days a month. For them, the new behavior was maintained by the existing habit. The group originally instructed to floss before they brushed ended up doing it only about once a week.
The paradox of progress, and the paradox of choice: There is a familiar story of a New York banker vacationing in Greece, who, from talking to a fisherman and scrutinizing the fisherman’s business, comes up with a scheme to help the fisherman turn it into a big business. The fisherman asked him what the benefits were; the banker answered that he could make a pile of money in New York and come back to vacation in Greece; something that seemed ludicrous to the fisherman, who was already there doing the kind of things bankers do when they go on vacation in Greece.
The story was well known in antiquity, under a more elegant form, as retold by Montaigne (my translation): When King Pyrrhus tried to cross into Italy, Cynéas, his wise adviser, tried to make him feel the vanity of such action. “To what end are you going into such enterprise?” he asked. And Pyrrhus answered, “To make myself the master of Italy.” Cynéas: “Then?” Pyrrhus: “To conquer Africa, then … come rest at ease.” Cynéas: “But you are already there; why take more risks?”
The premortem – when an organisation has almost come to an important decision but hasn’t formally committed itself, the decision makers gather for a brief session.
They are asked to imagine that it is one year later and that the idea has been a complete disaster.
They then have to write a short history of what happened.
The premortem can prevent many a disaster.
By contrast, a postmortem is always too late.
It’s all about timing.
Initially, Briggs had designed her questionnaire to identify solid marriage partners, but after the Second World War, her daughter repositioned it to place people in the right jobs. The MBTI does what all profiling systems do: asks batteries of questions and organises the answers into types which are supposed to define your personality. And yet the test has no basis in clinical psychology, though it is deployed by most Fortune 500 companies, many universities, schools, churches, consulting companies, the CIA, the army and the navy.
The MBTI test-retest validity lies below statistical significance, meaning that if you test someone more than once, you are likely to get different results. More worrying is that the questionnaire poses binary questions, asking, for example, whether you value sentiment more than logic or vice versa. The question assumes that there is a simple answer to this question, absent of context. Yet in real life, preference is highly contextual: I value logic when purchasing car insurance; I may value sentiment more when choosing to play with my son. Binaries always simplify, often to the point of absurdity, and they polarise what are often complements.
Excerpt from: Uncharted: How to Map the Future by Margaret Heffernan