75% of marketing decisions based on psychological experiments may be flawed

Image: brain in glass sphere

An article published in the journal Science this month casts doubt on the findings of nearly two-thirds of psychology research, meaning many marketing decisions may be based on incorrect assumptions about human behaviour.

A team of 270 scientists, led by Professor Brian Nosek of the University of Virginia, repeated 100 experiments that first appeared in psychology journals in 2008. They found they were unable to replicate the results of 50% of cognitive psychology experiments and 75% of social psychology experiments. Their findings are presented in a paper entitled "Estimating the reproducibility of psychological science".
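The "nearly two-thirds" figure in the opening paragraph and the 50%/75% failure rates quoted above can be reconciled with a little arithmetic. The sketch below is an illustration only: the split between cognitive and social psychology studies is an assumption made here, not a figure taken from the Science paper.

# Rough illustration: how an overall "nearly two-thirds" failure rate
# could follow from the per-field rates quoted above.
# The 40/60 split between cognitive and social studies is an ASSUMPTION
# for illustration, not a figure from the paper.

cognitive_studies = 40          # assumed count out of the 100 repeated experiments
social_studies = 60             # assumed count out of the 100 repeated experiments

cognitive_failure_rate = 0.50   # 50% of cognitive results could not be replicated
social_failure_rate = 0.75      # 75% of social results could not be replicated

failed = (cognitive_studies * cognitive_failure_rate
          + social_studies * social_failure_rate)

overall_failure_rate = failed / (cognitive_studies + social_studies)
print(f"Overall failure rate: {overall_failure_rate:.0%}")  # -> 65%, roughly two-thirds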

Even in the experiments where results could be replicated, the effect was roughly half that originally reported, suggesting either that the original scientists excluded data that failed to support their hypotheses, or (as Guardian Science editor Ian Sample has suggested) that scientific journals only publish papers that appear to make the strongest claims.

Neuromarketers watch out

Marketers who base their decisions on the findings of such experiments, particularly those specialising in neuromarketing, should be concerned by this report. If these results cannot reliably be replicated in the lab, it is highly unlikely they can be depended upon in real life.

Yet many marketers continue to accept experimental findings and statistics at face value, using them to inform their decisions and how they spend their six-figure marketing budgets.

Any approach that brings scientific method to marketing is to be applauded. But as Professor Nosek says, “Scepticism is a core part of science and we need to embrace it. If the evidence is tentative you should be sceptical of your evidence”.

This report follows another paper, published in 2013 by Professors John Ioannidis (Stanford University) and Marcus Munafo (Bristol University), that found significant weaknesses in many neuroscience studies.

Why should so many studies be, let’s be charitable, subject to exaggeration? Professor Munafo suggests it may be due to pressure on scientists to write large numbers of papers to secure funding. As a result, each paper is based on rushed experiments with small sample sizes, which Munafo describes as ‘[not] the way to get one really robust right answer’.

Dorothy Bishop, writing in The Guardian, makes a strong case for this paper as a sign of neuroscience maturing and self-regulating (see further reading below).

Nevertheless, it’s worth checking your sources before basing expensive marketing decisions on potentially flawed data.

Further reading – The Guardian, Psychology research: hopeless case or pioneering field?


Image (c) Shutterstock / Andresr