It’s much more challenging when emotional reactions are involved, as we’ve seen with smokers and cancer statistics. Psychologist Ziva Kunda found the same effect in the lab when she showed experimental subjects an article laying out the evidence that coffee or other sources of caffeine could increase the risk to women of developing breast cysts. Most people found the article pretty convincing. Women who drank a lot of coffee did not.
We often find ways to dismiss evidence that we don’t like. And the opposite is true, too: when evidence seems to support our preconceptions, we are less likely to look too closely for flaws.
The more extreme the emotional reaction, the harder it is to think straight.
The ‘winter detector’ problem is common in big data analysis. A literal example, via computer scientist Sameer Singh, is the pattern-recognising algorithm that was shown many photos of wolves in the wild, and many photos of pet husky dogs. The algorithm seemed to be really good at distinguishing the two rather similar canines; it turned out that it was simply labelling any picture with snow as containing a wolf. An example with more serious implications was described by Janelle Shane in her book You Look Like a Thing and I Love You: an algorithm that was shown pictures of healthy skin and of skin cancer. The algorithm figured out the pattern: if there was a ruler in the photograph, it was cancer. If we don’t know why the algorithm is doing what it’s doing, we’re trusting our lives to a ruler detector.
I can think of nothing an audience won’t understand. The only problem is to interest them; once they are interested they understand anything in the world.
Social scientists have long understood that statistical metrics are at their most pernicious when they are being used to control the world, rather than to understand it. Economists tend to cite their colleague Charles Goodhart, who wrote in 1975: ‘Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.’ (Or, more pithily: ‘When a measure becomes a target, it ceases to be a good measure.’) Psychologists turn to Donald T. Campbell, who around the same time explained: ‘The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.’
Goodhart and Campbell were on to the same basic problem: a statistical metric may be a pretty decent proxy for something that really matters, but it is almost always a proxy rather than the real thing.
Perhaps we should have seen this acceleration coming. In the 1930s an American aeronautical engineer named T. P. Wright carefully observed aeroplane factories at work. He published research demonstrating that the more often a particular type of aeroplane was assembled, the quicker and cheaper the next unit became. Workers would gain experience, specialised tools would be developed, and ways to save time and material would be discovered. Wright reckoned that every time accumulated production doubled, unit costs would fall by 15 per cent. He called this phenomenon ‘the learning curve’.
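Wright’s rule of thumb can be written as a simple power law: if each doubling of cumulative production cuts unit cost by 15 per cent, the cost of the Nth unit is the cost of the first unit multiplied by N raised to the power log₂(0.85). A minimal sketch, assuming an illustrative function name and a first-unit cost of 100 in arbitrary units (neither is from the book):

```python
import math

def unit_cost(first_unit_cost, cumulative_units, progress_ratio=0.85):
    """Wright's learning curve: each doubling of cumulative production
    multiplies unit cost by the progress ratio (0.85 = a 15 per cent
    fall per doubling, as Wright reckoned for aeroplanes)."""
    exponent = math.log(progress_ratio, 2)  # roughly -0.234 for 0.85
    return first_unit_cost * cumulative_units ** exponent

# If the first unit costs 100: the 2nd costs 85,
# the 4th 72.25, and the 8th about 61.41.
```

Steeper or shallower curves, as mentioned below for different products, simply correspond to different progress ratios.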
Three decades later, management consultants at Boston Consulting Group, or BCG, rediscovered Wright’s rule of thumb in the case of semiconductors, and then other products too. Recently, a group of economists and mathematicians at Oxford University found convincing evidence of learning curve effects across more than 50 different products from transistors to beer – including photovoltaic cells. Sometimes the learning curve is shallow and sometimes steep, but it always seems to be there.
The learning curve may be a dependable fact about technology, but paradoxically, it creates a feedback loop that makes it harder to predict technological change. Popular products become cheap; cheaper products become popular.
‘It should be remembered, that in few departments have important reforms been effected by those trained up in practical familiarity with their details. The men to detect blemishes and defects are among those who have not, by long familiarity, been made insensible to them.’
And it inspired competitors – notably Sears Roebuck, which soon became the market leader. (The story goes that the Sears Roebuck catalogue had slightly smaller pages than Montgomery Ward’s – with the intention that a tidy-minded housewife would naturally stack the two with the Sears catalogue on top.)
By the century’s end, mail-order companies were bringing in $30 million a year – a billion-dollar business in today’s terms; in the next twenty years, that figure grew almost twenty-fold. The popularity of mail order helped fuel demands to improve the postal service in the countryside – if you lived in a city, you’d get letters delivered to your door, but rural dwellers had to schlep to their nearest post office.
What Palchinsky realised was that most real-world problems are more complex than we think. They have a human dimension, a local dimension, and are likely to change as circumstances change. His method for dealing with this could be summarised as three ‘Palchinsky principles’: first, seek out new ideas and try new things; second, when trying something new, do it on a scale where failure is survivable; third, seek out feedback and learn from your mistakes as you go along. The first principle could simply be expressed as ‘variation’; the third as ‘selection’.
A second, ironic, problem is that companies fear that if they produce a truly vital technology, governments will lean on them to relinquish their patent rights or slash prices. This was the fate of Bayer, the manufacturer of the anthrax treatment Cipro, when an unknown terrorist began mailing anthrax spores in late 2001, killing five people. Four years later, as anxiety grew about an epidemic of bird flu in humans, the owner of the patent on Tamiflu, Roche, agreed to license production of the drug after very similar pressure from governments across the world. It is quite obvious why governments have scant respect for patents in true emergencies. Still, if everybody knows that governments will ignore patents when innovations are most vital, it is not clear why anyone expects the patent system to encourage vital innovations.
What is more, the effect was large: groups that had to accommodate an outsider were substantially more likely to reach the correct conclusion — they did so 75 per cent of the time, versus 54 per cent for a homogeneous group and 44 per cent for an individual.
It’s fun to speculate about what those inventions might be, but history cautions against placing much faith in futurology. Fifty years ago, Herman Kahn and Anthony J. Wiener published The Year 2000: A Framework For Speculation. Their crystal-ball gazing got a lot right about information and communication technology. They predicted colour photocopying, multiple uses for lasers, ‘two-way pocket phones’ and automated real-time banking. That’s impressive. But Kahn and Wiener also predicted undersea colonies, silent helicopter-taxis and cities lit by artificial moons. Nothing looks more dated than yesterday’s technology shows and yesterday’s science fiction.
Excerpt from: Fifty Things that Made the Modern Economy by Tim Harford