In 1998 a team of researchers led by Andrew Wakefield published a now-infamous article in the journal The Lancet. The paper purported to show that a small group of children began showing symptoms of autism shortly after receiving an MMR (measles, mumps, rubella) vaccination. The paper had problems from the start. With a sample size of just 12 children, there was certainly not enough evidence for a large-scale change in medical practice. At most, the work suggested that the link, if any, between MMR and autism should be investigated further. Yet fanned by credulous media reports, public sentiment began to sway against the MMR vaccine and against vaccination in general.
Even as study after study failed to confirm Wakefield’s results, the proportion of English children receiving the vaccine declined dramatically, from over 90 percent in 1995 to just 80 percent in 2003-04.
Soon reports began to question the integrity of the data in the Wakefield study. In six of the 12 cases presented in the study, symptoms of autism were present before the children received the vaccine that was supposed to have caused the disorder. At the time the study was published, Wakefield was a paid consultant for law firms that were attempting to sue the vaccine manufacturers for “vaccine injury”—a clear conflict of interest. Finally, in February 2010, The Lancet formally retracted Wakefield’s report.
Despite this, the global anti-vaccination movement continues to have millions of followers. Even though the evidence suggests that Wakefield falsified at least a portion of his data, the damage continues to this day, with hundreds of cases of measles in the U.S. and at least two reported deaths over the past decade.
In Wakefield’s case, the research community wasn’t as credulous as the general public, but it’s unclear whether this is typically the case; few systematic studies of scientific fraud have been undertaken. How does the research community respond to a retraction? Janet Stemwedel, an ethicist at San Jose State University, discusses one such study at her blog, Adventures in Ethics and Science. A team led by Anne Victoria Neale examined 102 cases in which published research articles involved fraud or misconduct. Their study was published in the journal Science and Engineering Ethics in 2007. While nearly every article was either retracted or corrected, Neale’s team wanted to know whether the articles had influenced other research. They found that an astonishing 5,393 articles cited those reports! Stemwedel points out that Neale and her fellow researchers didn’t analyze those articles for context: it could be that citations of the fraudulent or unethical work were made in order to show that the research couldn’t be replicated.
Several commenters to Stemwedel’s post suggested that the fraudulent research was probably due to stress or institutional pressure, so she examined another study that attempted to uncover the causes of unethical behavior in scientists. Mark S. Davis, Michelle Riske-Morris, and Sebastian R. Diaz undertook a sophisticated analysis of misconduct cases from the same period as Neale’s study. Rather than start with a preset range of categories for the situations leading to misconduct, they used the reports themselves to generate their list of categories. In the end, they found no single dominant reason for the misconduct, instead identifying an array of complaints including personal factors, organizational factors, job insecurities, rationalizations, personal inhibitions, and personality.
Again Stemwedel’s commenters pointed out a flaw in this analysis: without comparing these cases to work conducted ethically, it’s difficult to know whether anything abnormal was going on in the workplaces where the misconduct occurred. Still, the study suggests there are no easy solutions to the problem of scientific misconduct.
British science and technology writer Jacob Aron points out that a meta-analysis conducted last year found that around 2 percent of all researchers surveyed, across a number of different studies, admit to having fabricated or falsified data at least once, with 14 percent admitting to other questionable research practices, and as many as 72 percent saying they had seen others engaging in questionable research practices. While the study acknowledges that many survey respondents could be pointing to the same few bad apples, it does suggest that many cases of questionable research practices are never caught.
In an earlier career as a high-school science teacher, I witnessed the problem of scientific misconduct first-hand. After I graded our weekly quiz and told the class that they had all gotten the same two questions wrong, the entire class turned and groaned at the one student from whom they had all copied their answers. My students were unabashed in acknowledging they had cheated; they saw this as normal behavior. With global warming “skepticism” and negative attitudes about scientists on the rise, we may be nearing the point when all scientists must pay for the misdeeds of a few. If the belief that “everyone” cheats becomes pervasive, then why should anyone believe what a scientist tells them? For more discussion on issues of scientific ethics, visit ResearchBlogging.org.
Front page image courtesy of Alan Cleaver.
Originally published April 14, 2010