I write this column (and started ResearchBlogging.org) primarily because I believe that it’s important to consider peer-reviewed research when discussing science. In an age when anyone can have a blog, it’s easier than ever to disseminate pseudoscience, so it’s important to identify and report on unbiased scientific research.
But what if the scientific publication process itself is biased? Certainly we all get much more excited when a novel result is found: A new treatment for depression, a new way to lose weight or prevent heart disease or treat cancer. What about all the studies that don’t succeed? Are peer reviewers less likely to approve publication of a finding saying that a treatment doesn’t work? Might researchers be tempted to sweep uninteresting results under the rug?
Unfortunately, in many fields, the answer to these questions appears to be “yes.” It’s a documented problem known as publication bias. In the case of medical research, publication bias could have serious consequences. What if a drug company sponsored several clinical trials of a drug, some of which worked and some of which did not? The company could profit handsomely if the results favoring their drug were the only ones published. The potential for this sort of abuse is large enough that the US government now requires that the data from all drug trials be stored and shared on its site ClinicalTrials.gov.
But posting to a government database isn’t the same as publication in a scientific journal, where work—especially trials of promising new drugs—is likely to reach a much larger audience through the mainstream media and blogs. In these cases, publication bias can still have a big impact. Now, since ClinicalTrials.gov has been active for over 10 years, it’s possible to put some hard numbers to publication bias. The UK neuroscientist who blogs as “Neuroskeptic” discussed the most recent such study earlier this month. Florence Bourgeois led an analysis of the results of 546 trials in five major categories of drugs. The work was published in Annals of Internal Medicine.
The researchers found that only 66 percent of the trials conducted between 2000 and 2006 were actually published. Industry-funded trials were significantly less likely to be published than government- or organization-funded trials. Industry-funded trials were also more likely to report positive results. That suggests there’s a strong potential that industry is stacking the deck by submitting only positive results for publication. Since drug companies don’t profit from unsuccessful drugs, they also have a clear motive for suppressing studies that don’t favor their interests.
But publication bias doesn’t just occur in drug research. Even when there’s no profit motive, scientists still want to be seen as producing interesting, positive results. I’ve frequently heard researchers say an experiment “didn’t work,” meaning not that they made a procedural error, but that the results weren’t interesting or surprising. In March, UK medical writer Helen Jaques blogged about a 2010 study on publication bias in research on psychotherapy for depression. While studies about psychotherapy typically don’t get drug-industry backing, researchers are still interested in demonstrating that therapy is effective. In a raw analysis of 117 studies, they found, on average, a moderate benefit of therapy. But when the results were adjusted for publication bias, the estimated effect size shrank, and the benefit of therapy could only be described as small. The research was published in the British Journal of Psychiatry.
Publication bias has been found in other domains, too, from animal studies to homeopathic medicine. Because positive results are almost by definition more interesting than null or negative results, reviewers prefer to approve them, and researchers prefer to submit them for publication.
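The inflation this produces is easy to see in a quick simulation (a sketch only; the numbers below are invented for illustration and come from none of the studies discussed). Suppose many small trials test a treatment with a modest real benefit, but only the trials that reach statistical significance get "published." The average published effect then looks far larger than the true one:

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.2    # modest real benefit (standardized mean difference)
n_per_arm = 50       # participants per trial arm
n_trials = 500       # hypothetical trials

# Approximate standard error of a standardized mean difference
# with two arms of n_per_arm participants each.
se = np.sqrt(2 / n_per_arm)

# Each trial's estimate scatters around the true effect.
estimates = rng.normal(true_effect, se, n_trials)

# The file drawer: only trials significant at p < .05
# (two-sided, z > 1.96) make it into print.
published = estimates[estimates / se > 1.96]

print(f"true effect:       {true_effect:.2f}")
print(f"mean of all trials:  {estimates.mean():.2f}")
print(f"mean of 'published': {published.mean():.2f}")
```

With these made-up parameters, the average across all simulated trials stays close to the true effect of 0.2, while the average of the "published" subset is more than twice as large. A reader who sees only the published trials would badly overestimate the treatment.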
But if publication bias distorts the truth, what can be done about it? Aside from statistically adjusting the results, as in the study cited by Jaques (which can only be done after a number of different studies have been published), the key may be to change how research is published. Neuroskeptic suggests that it might be possible to force all researchers conducting clinical trials to publish.
I heard a different suggestion at a forum I attended last March. The open-access advocacy group Public Library of Science (PLoS) hosted the event, which brought together scientists and science communicators to discuss the future of science publishing. What if peer review were only conducted after publication? PLoS co-founder Michael Eisen suggested that rather than screening out works prior to publishing, maybe all research produced by credible scientists should be published online as soon as it is written up. Then researchers could get credit for negative or null results that wouldn’t ordinarily have been published. Reviews would subsequently be commissioned and published alongside the research, so readers could see what reservations other experts had about the work. In cases where serious flaws were found, research could be amended or retracted.
Some forum attendees recommended going even further, arguing that researchers should publish all their data along with their formal write-up of the research, so that others could verify their findings or easily combine different datasets in massive meta-analyses.
However, these approaches also have pitfalls. While most publishing today is supported by subscriptions, a new “publish first” model would have to be supported in a different way. PLoS currently charges authors a publication fee after their manuscript is accepted. Would scientists be willing to pay such a fee even if their work might be panned by reviewers? How would the mainstream media know if a newly published study merits their attention? And how would the system prevent crackpots from gumming up the works with pseudoscience? Eisen believes that publishers can come up with a system that addresses these issues while speeding the publication process and reducing publication bias.
I’m inclined to agree with Eisen. The current system of research publishing, though it has served us well for generations, is in need of an overhaul. Publishing the original research report alongside expert commentary, and perhaps the raw data as well, would give readers a better sense of the context of the research. Like the process of science itself, research publishing won’t ever be perfect, but it can, and should, be much better than it is.
Dave Munger is editor of ResearchBlogging.org, where you can find thousands of blog posts on this and myriad other topics. Each week, he writes about recent posts on peer-reviewed research from across the blogosphere. See previous Research Blogging columns »
Originally published August 18, 2010