
Thursday, 14 February 2013

fMRI Studies Indicate that up to 50% of Researchers are Bad at Statistics





fMRI, along with other imaging techniques brought to bear on research into the brain/mind relationship, can often seem like the only reliable way to get hard data out of a very messy area. In principle it seems neat, clean and rigorous: expose a subject to some stimuli, or ask them to think particular thoughts or perform a particular action, then check what brain activity correlates with this stimulus. Having done this, and observed the fMRI readings, Bob's your uncle: the areas of the brain showing most activation are involved in generating the particular feeling, action or thought in question.
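To make that correlational logic concrete, here is a minimal sketch in Python (numpy), using entirely simulated data. The array sizes, effect strength and the 0.3 threshold are arbitrary assumptions, and this is an illustration of the idea rather than a real fMRI analysis pipeline:

# Minimal sketch of the basic logic (not a real fMRI pipeline): correlate a
# stimulus time course with each voxel's signal and flag the strongest voxels.
# All data here are simulated; sizes and thresholds are purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 1000

# Boxcar stimulus regressor: alternating 20-timepoint "off"/"on" blocks.
stimulus = np.tile(np.repeat([0.0, 1.0], 20), n_timepoints // 40)

# Simulated voxel signals: mostly noise, plus a handful of truly responsive voxels.
voxels = rng.normal(size=(n_timepoints, n_voxels))
voxels[:, :10] += 0.8 * stimulus[:, None]

# Pearson correlation of every voxel with the stimulus time course.
z_stim = (stimulus - stimulus.mean()) / stimulus.std()
z_vox = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
r = z_vox.T @ z_stim / n_timepoints

# "Activated" voxels = those whose correlation exceeds some threshold.
active = np.flatnonzero(np.abs(r) > 0.3)
print(f"{active.size} voxels flagged as stimulus-related")

On this toy data, only the handful of voxels that were built to respond to the stimulus survive the threshold, which is exactly the kind of tidy result that makes the approach look neat, clean and rigorous.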

However, while brain scanning can no doubt teach us an awful lot, the most valuable thing that many papers based on such readings can tell us is that their authors are rather poor at statistics.



Inevitably, researchers make certain assumptions about fMRI.

First, it must be assumed that the alterations in blood flow that fMRI measures genuinely indicate simultaneous brain activity, of the kind that generates feelings or thoughts, in the regions showing increased blood flow. This is of course by no means certain.

Second, there is the related assumption that correlation implies causation. The putative validity of fMRI as a research tool depends upon these assumptions, and while they do present problems, we can acknowledge them and get on with things regardless: brain scan studies have, beyond doubt, driven significant advances in our understanding of minds and brains.

However, having accepted the validity of the technique and that correlation/causation inference based on fMRI is warranted, there remains a more fundamental problem. Shoddy use of statistics and misuse of numbers, detailed by Rebecca Goldin and Cindy Merrick here, threaten to undermine the validity of many inferences, and perhaps to corrode faith in studies using fMRI altogether.

One of the principal errors highlighted in the article is the "nonindependence error", or "double dipping", whereby a researcher makes use of nonindependent data sets.
The Texan gunman, for instance, who fills the wall of his barn with buckshot, observes the damage and then draws a target around the best-looking cluster of holes to demonstrate his marksmanship, is guilty of double dipping. He creates his data set, chooses his target based on that data set, then uses the same data set to prove his hypothesis that he is a good marksman.

In the context of fMRI, a researcher who uses fMRI data to decide what area of the brain should be focused upon, and then makes use of the SAME data set to compute correlations and calculate his results, is guilty of using nonindependent data. In using observed correlations to decide where to look for correlations and then using the very same data to calculate the extent of these correlations, the researcher is guaranteed a positive result.
In the worst cases, double-dipping can cause correlations to be observed where none exist, and at the very least it will cause exaggeration of any correlations that do exist.
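The inflation is easy to reproduce in a toy simulation. The sketch below (again Python/numpy, with all numbers invented purely for illustration) uses nothing but noise, so the true correlation between any voxel and "behaviour" is zero; picking the best voxel and reporting its correlation in the same data produces an impressive-looking effect, while an independent data set shows there is nothing there:

# Toy simulation of the nonindependence / double-dipping problem, using pure
# noise so that the true correlation with behaviour is zero. Selecting voxels
# with the same data used to report the effect yields a spurious correlation;
# an independent data set does not.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 5000

behaviour = rng.normal(size=n_subjects)            # e.g. a questionnaire score
scans_a = rng.normal(size=(n_subjects, n_voxels))  # data used to pick the "region"
scans_b = rng.normal(size=(n_subjects, n_voxels))  # independent replication data

def corr_with_behaviour(scans):
    zb = (behaviour - behaviour.mean()) / behaviour.std()
    zs = (scans - scans.mean(axis=0)) / scans.std(axis=0)
    return zs.T @ zb / n_subjects

# Step 1: pick the voxel that correlates best with behaviour in data set A.
r_a = corr_with_behaviour(scans_a)
best = np.argmax(np.abs(r_a))

# Step 2 (double dipping): report that voxel's correlation in the SAME data.
print("double-dipped r:", round(r_a[best], 2))

# Step 2' (independent test): report the same voxel's correlation in data set B.
r_b = corr_with_behaviour(scans_b)
print("independent r:  ", round(r_b[best], 2))

With 20 subjects and 5,000 noise voxels, the double-dipped correlation typically comes out somewhere around 0.8, while the independent estimate hovers around zero: an apparently strong effect conjured out of nothing, which is exactly what the nonindependence error produces.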

Such statistical errors are by no means rare. In one literature survey of 134 fMRI papers, Kriegeskorte et al. suggested that 42% of the papers were guilty of at least one nonindependence error, and that a further 14% did not provide sufficient information to judge the quality of their statistical work.

These numbers are shocking: taken together (42% + 14% = 56%), they suggest that the results of more than half of the fMRI papers surveyed should be doubted, or at the very least cannot be taken on trust.
If nothing else, this article underlines the importance of reviewing the "methods" section of any fMRI study claiming to show significant results.





4 comments:

  1. The title made me laugh; I think it's quite ironic that they calculated statistics to talk about problems with statistics.

  2. I feel like a lot of this is caused by the "politics" of research. While the ultimate goal is finding the truth, everyone wants their hypotheses to be correct. If researchers can fudge the numbers and use vague terminology, they can make it seem like data supports a hypothesis when in reality it does not... in fact, maybe this is actually a testament to how good these researchers are at statistics (they can make the numbers say anything they want). Nevertheless, because of these things, I think the reader has a responsibility to be able to interpret data independently and draw his own conclusions.

  3. For those interested in bad statistics and its impact on the reliability of neuroscience, check out this article on statistical power, which was published last week in Nature Reviews Neuroscience:

    http://www.nature.com/nrn/journal/v14/n5/full/nrn3475.html

    You might also be interested in this article, aptly named 'Voodoo correlations in social Neuroscience':

    http://pdc.stanford.edu/newlm/images/4/47/VulEtAl.pdf

  4. For those interested, I found a really interesting blog by neuroscientist Brad Buchsbaum, called Flowbrain, that deals with statistics, functional neuroimaging and cognitive neuroscience:

    http://flowbrain.blogspot.ca/
