💎 Why it becomes harder to predict technological change (as technology develops)

Perhaps we should have seen this acceleration coming. In the 1930s an American aeronautical engineer named T. P. Wright carefully observed aeroplane factories at work. He published research demonstrating that the more often a particular type of aeroplane was assembled, the quicker and cheaper the next unit became. Workers would gain experience, specialised tools would be developed, and ways to save time and material would be discovered. Wright reckoned that every time accumulated production doubled, unit costs would fall by 15 per cent. He called this phenomenon ‘the learning curve’.
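Wright’s rule has a neat mathematical form: if each doubling of cumulative output multiplies unit cost by 0.85, then the cost of the n-th unit is proportional to n raised to the power log2(0.85), roughly -0.23. Here is a minimal Python sketch of that arithmetic (my own illustration, not from Harford’s text; the figures are made up):

```python
import math

def wright_unit_cost(first_unit_cost, cumulative_units, progress_ratio=0.85):
    """Wright's law: every doubling of cumulative production
    multiplies unit cost by the progress ratio (0.85 here,
    i.e. a 15 per cent saving per doubling)."""
    return first_unit_cost * cumulative_units ** math.log2(progress_ratio)

print(wright_unit_cost(100.0, 1))     # 100.0  (the first unit)
print(wright_unit_cost(100.0, 2))     # 85.0   (one doubling: 15% cheaper)
print(wright_unit_cost(100.0, 1024))  # ~19.7  (ten doublings: 0.85 ** 10)
```

On log-log axes this relationship traces a straight line, which is part of why it shows up so clearly in historical cost data.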

Three decades later, management consultants at Boston Consulting Group, or BCG, rediscovered Wright’s rule of thumb, first in semiconductors and then in other products too. Recently, a group of economists and mathematicians at Oxford University found convincing evidence of learning-curve effects across more than 50 different products, from transistors to beer, including photovoltaic cells. Sometimes the learning curve is shallow and sometimes steep, but it always seems to be there.

The learning curve may be a dependable fact about technology, but paradoxically, it creates a feedback loop that makes technological change harder to predict. Popular products become cheap; cheaper products become popular. Because demand and cost reinforce each other, a small early difference in adoption can compound into a dramatically different trajectory.
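To see why that loop frustrates forecasting, here is a toy simulation, entirely my own construction rather than anything in Harford’s text: it assumes a constant-elasticity demand curve with invented figures, feeds each year’s sales back into cumulative production, and lets Wright’s law set the unit cost.

```python
import math

def simulate(first_unit_cost=100.0, base_demand=1000.0,
             elasticity=1.5, years=8, progress_ratio=0.85):
    """Toy feedback loop: popular products become cheap
    (Wright's law), and cheaper products become popular
    (an assumed constant-elasticity demand curve)."""
    exponent = math.log2(progress_ratio)
    cumulative = 1.0  # units produced so far
    for year in range(1, years + 1):
        cost = first_unit_cost * cumulative ** exponent
        demand = base_demand * (first_unit_cost / cost) ** elasticity
        cumulative += demand
        print(f"year {year}: unit cost {cost:6.2f}, sales {demand:12,.0f}")

simulate()
```

Nudge the starting demand or the assumed elasticity and the numbers a few years out look very different; the loop amplifies small differences in its inputs, which is exactly what makes long-range forecasts of technology so fragile.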

Excerpt from: The Next Fifty Things That Made the Modern Economy by Tim Harford

HT: @rshotton
