Page 1 of 2
A debate on the internet over findings released before their formal publication date raises important questions for science.
Few endeavors have been affected more by the tools and evolution of the internet than science publishing. Thousands of journals are available online, and an increasing number of science bloggers are acting as translators, often using lay language to convey complex findings previously read only by fellow experts within a discipline. Now, in the wake of a new paper challenging the methodology of a young field, there is a case study for how the internet is changing the way science itself is conducted.
That area of research is the burgeoning subfield of social neuroscience, which seeks to understand the neurobiological basis of social behavior. Using neuroimaging techniques such as fMRI, researchers correlate neural activity with social and behavioral measures in order to pinpoint areas of the brain associated with social decision making or emotional reactivity.
Late last year, Ed Vul, a graduate student at MIT working with neuroscientist Nancy Kanwisher and UCSD psychologist Hal Pashler, prereleased “Voodoo Correlations in Social Neuroscience” on his website. The journal Perspectives on Psychological Science accepted the paper but will not formally publish it until May.
The paper argues that the data-analysis practices of many social neuroimaging researchers are so deeply flawed that they call much of the subfield’s methodology into question. Specifically, Vul and his coauthors claim that many, if not most, social neuroscientists commit a nonindependence error: the final measure (say, a correlation between behavior and brain activity in a certain region) is not independent of the selection criteria (how the researchers chose which brain region to study), allowing noise to inflate their correlation estimates. Further, the authors found that the methods sections clearing peer review were woefully inadequate, often lacking the basic information about how the data were analyzed that others would need to evaluate them. (Read Vul et al.’s entire in-press paper here.)
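The nonindependence error can be seen in a toy simulation. The sketch below (illustrative only, not an analysis from the paper; all numbers are hypothetical) generates voxel activity and a behavioral score that are pure noise, selects the voxels whose correlation with behavior clears a threshold, and then reports the correlation of exactly those voxels — which comes out strongly positive even though the true correlation is zero.

```python
# Illustrative simulation of the nonindependence error: selecting voxels by
# their correlation with behavior, then reporting that same correlation.
# Hypothetical numbers; not drawn from Vul et al.'s paper.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 10_000

# Behavior and voxel activity are independent noise: true correlation is zero.
behavior = rng.standard_normal(n_subjects)
activity = rng.standard_normal((n_subjects, n_voxels))

# Pearson correlation of each voxel's activity with behavior across subjects.
b = (behavior - behavior.mean()) / behavior.std()
a = (activity - activity.mean(axis=0)) / activity.std(axis=0)
r = (a * b[:, None]).mean(axis=0)

# Nonindependent analysis: keep only voxels whose correlation exceeds a
# threshold, then report the average correlation of those same voxels.
selected = np.abs(r) > 0.5
print(f"voxels selected: {selected.sum()}")
print(f"mean |r| among selected voxels: {np.abs(r[selected]).mean():.2f}")
print(f"mean r across all voxels: {r.mean():.3f}")
```

With 20 subjects, sampling noise alone pushes hundreds of the 10,000 null voxels past the 0.5 threshold, so the "reported" correlation lands well above 0.5 while the average over all voxels stays near zero — the inflation Vul and his coauthors describe.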
A number of online science writers and bloggers, including the widely read Sharon Begley of Newsweek, immediately wrote about the paper. The vast majority of the online responses to the paper were extremely positive, with Begley suggesting that “like so many researchers in the social sciences, psychologists have physics envy, and think that the illusory precision and solidity of neuroimaging can give their field some rigor.” Vaughan Bell of Mind Hacks predicted that the paper “has the potential to really shake up the world of social cognitive neuroscience.”
In the paper, Vul and his coauthors cite specific studies, many of which were published in leading journals such as Nature and Science, going so far as to call some of the studies “entirely spurious.” Suddenly, a number of researchers found themselves under attack. The paper began filling neuroscientists’ inboxes. Two groups of neuroimaging scientists, shocked by the speed with which this paper was being publicly disseminated, wrote rebuttals and posted them in the comments section of several blogs, including Begley’s. Vul followed up in kind, linking to a rebuttal of the rebuttals in the comment sections of several blogs. This kind of scientific discourse — which typically takes place in the front matter of scholarly journals or over the course of several conferences — developed at a breakneck pace, months before the findings were officially published, and among the usual chaos of blog comments: inane banter, tangents, and valid opinions from the greater public.
Tor Wager, a Columbia University cognitive neuroscientist whose work was not mentioned in Vul’s paper but who helped prepare one of the rebuttals, says it was important to respond both publicly and swiftly. “The public and the news media operate on sound bites, and the real scientific issues are quite complex.” His complaints focus not only on the content of Vul’s paper but also on the authors’ diction: specifically, the title and its use of “voodoo.”
“When the conversation gets complex — and with statistics it always is — many blog readers will form opinions based on very simple things,” says Wager. “Like words such as ‘voodoo correlations.’ There’s no reason to use such loaded words when making a statistical argument. The argument should be able to stand on its own.”