While I was working on last week’s column, I posted on Twitter that my least-favorite part of the writing process was coming up with headlines. Why? Because I’m aware that while a catchy headline can sell your writing, it can also mislead readers—and many more readers see a headline than actually read the article. What’s more, even if they do read the article, they might believe the simplified version of the work presented in the headline. A reader might see a headline like “Does Red Meat Cause Cancer?” and assume that every hamburger is a death sentence.
The problem with headlines is actually part of a much larger family of phenomena found in nearly every type of research involving human participants: subject-expectancy effects. The placebo effect is probably the best known of these, and it's indeed a powerful factor in medical and pharmaceutical research.
Let’s say a new drug appears to be effective in combating a condition like chronic anxiety and is the subject of popular news stories. When the drug enters clinical trials, patients who take the drug report significantly less anxiety. But so do patients who were given sugar pills. Because FDA regulations require that any proposed drug perform significantly better than a placebo, the drug isn’t approved, and the pharmaceutical company developing the drug must swallow millions in research expenses. The regulations make some sense: Why approve a new drug with potential side effects when a placebo works just as well?
This is the primary misconception about placebos: that the placebo itself is somehow “working” to treat a medical condition. You can see it even in the headline for an otherwise well-crafted article that appeared in Wired last August: “Placebos Are Getting More Effective. Drugmakers Are Desperate to Know Why.” As internist and medical professor Peter Lipson noted on the Science-Based Medicine blog, placebos by definition have no medical effect. The “placebo effect” is due to the subject’s (and sometimes, the experimenter’s) expectation that a treatment will work. And, of course, a patient sometimes recovers simply due to chance or because his or her immune response handled the problem. Researchers observe an improvement, and this gets attributed to the placebo. In the case of the Wired article, the misconception in the headline is cleared up by the text of the report: The placebo effect may be getting stronger for reasons that are unclear to researchers. Placebos themselves, as ever, remain ineffective.
The anonymous blogger and UK-based neuroscientist Neuroskeptic also addresses the Wired report in a post entitled “Deconstructing the Placebo.” Neuroskeptic points out that many of the issues we have with placebos are more properly directed at the medical conditions a placebo could supposedly address. If a placebo is just as effective at reducing anxiety as a drug designed for that purpose, what does that tell us about the nature of anxiety? Is participation in a research study motivating people to do other ostensibly anxiety-reducing activities? How exactly are these additional activities helping the problem? Even if placebos aren’t cures, we should be able to learn more about real medical conditions by investigating how people respond to a fictional “treatment.”
In a separate post, Neuroskeptic brings up another problem with placebos: Just as drugs have side effects, people can also report side effects when given a placebo. In an experimental trial, participants are given the same set of instructions—including warnings about side effects—whether they are given a placebo or the treatment being tested. This is an important part of the testing process because in order to test for the placebo effect, both the experimenter and the patient must be unaware of whether a placebo is being administered. After being warned of side effects, people receiving a placebo in trials for an antidepressant said they experienced side effects ranging from dry mouth to impotence. But in a test for a migraine drug, people receiving the same placebo reported different side effects: sleepiness and dizziness.
If the placebo group reports these side effects, it could be that the actual side effects of a drug are exaggerated: After all, people may experience the side effects only because it’s suggested they might, whether they’re taking a placebo or the real drug. On the other hand, it would be unethical not to warn drug users of potential side effects. So even if the placebo effect is responsible for over-reporting of side effects, it’s unclear how to avoid these reports of effects that aren’t directly caused by the drug.
Of course, it’s also possible to exaggerate the impact of the placebo effect. For a placebo to work, it has to be plausible that the treatment will be effective. No placebo can heal a broken limb or cure cancer. Peter Lipson says the plausibility problem doesn’t just affect placebos: Sometimes implausible treatments manage to enter clinical trials as well, with disastrous results. The “Gonzalez regimen” for pancreatic cancer (a malady known to have an extremely low survival rate) treated the toxins that may have caused the cancer, rather than the cancer itself. It was like testing a safety fence for people who had already fallen off the cliff. The patients treated with the Gonzalez regimen survived an average of just 4.7 months while experiencing more pain and discomfort than patients receiving traditional chemotherapy, who survived an average of 14 months. Lipson calls it “one of the least ethical clinical studies ever conducted.”
Plausibility is important in other settings as well. If you read a headline stating simply “Researchers Cure Cancer,” you’d probably be more skeptical than you would of a headline like “Proposed Breast Cancer Treatment Shows Promise.” A misleading but plausible headline has the potential to do great damage, just as a realistically administered placebo can result in a statistically measurable response. How can we work to minimize these problems? One place to start is by getting involved in the discussion of placebos and related effects at ResearchBlogging.org.
Originally published October 28, 2009