In her study ‘Brilliant but Cruel’, Teresa Amabile, a professor at Harvard Business School, asked people to evaluate the intelligence of book reviewers using reviews taken from the New York Times. Professor Amabile changed the reviews slightly, creating two different versions: one positive and one negative. She made only small changes in terms of the actual words, for example changing ‘inspired’ to ‘uninspired’ and ‘capable’ to ‘incapable’.
A positive review might read, ‘In 128 inspired pages, Alvin Harter, with his first work of fiction, shows himself to be an extremely capable young American author. A Longer Dawn is a novella – a prose poem, if you will – of tremendous impact. It deals with elemental things – life, love and death, and does so with such great intensity that it achieves new heights of superior writing on every page.’
While a negative review might read, ‘In 128 uninspired pages, Alvin Harter, with his first work of fiction, shows himself to be an extremely incapable young American author. A Longer Dawn is a novella – a prose poem, if you will – of negligible impact. It deals with elemental things – life, love and death, and does so with such little intensity that it achieves new depths of inferior writing on every page.’
Half the people in the study read the first review, the other half read the second, and both groups rated the intelligence and expertise of the reviewer. Even though the reviews were almost identical – the only difference being whether they were positive or negative – people rated the writers of the negative versions as 14 per cent more intelligent and as having 16 per cent more expertise in literature. As Professor Amabile writes, the ‘prophets of doom and gloom appear wise and insightful’. Anyone can say something nice – but it takes an expert to critique it.
As another example, a Harvard Business School study of the freelancer contracting site oDesk (now renamed Upwork) found that surprise incentives resulted in greater employee effort than higher pay. The researchers posted a data-entry job on oDesk that would take four hours. One of the postings offered $3 per hour for the job; the other offered $4 per hour. People with past data-entry experience were hired at either the $3 or $4 rate. But some of those who were initially told they’d be paid $3 were later told that the hiring company had a bigger budget than expected: “Therefore, we will pay you $4 per hour instead of $3 per hour.” The group initially hired at $4 an hour worked no harder than those hired at $3. But those who received the surprise raise worked substantially harder than the other two groups, and among those with experience, their extra effort more than made up for the cost of the extra pay.
In January 2014, researchers from Harvard Business School released a controversial working paper on a study they had conducted. The study revealed that non-black Airbnb hosts could charge approximately 12 per cent more, on average, than black hosts – roughly $144 per night versus $107. In September 2016, looking across 6,000 listings, the same researchers found that requests from guests with distinctively African-American-sounding names (like Tanisha Jackson) were 16 per cent less likely to be accepted by Airbnb hosts than requests from guests with Caucasian-sounding names (like Allison Sullivan). Particularly troubling was that, in some instances, Airbnb hosts would rather allow their property to remain vacant than rent to a black-identified person.