Interviewee: ... Before, in the Venue you could leave your drinks and go for a dance. But not so much now. I only leave [my drink] with friends, but even then… [hesitates].
Interviewer: You wouldn’t leave it with a friend?
Interviewee: No, I do trust my friends, I don’t know why I said that…
This brief excerpt from a 2009 study led by Adam Burgess on Rohypnol—the so-called “date-rape drug”—illustrates the level of concern many women now have about their drinks being spiked so that men can sexually assault them.
There’s no denying that “date rape” is a vast problem—over 160,000 women were victims of rape in the US in 2008 alone, and 70 percent of those rapes were committed by non-strangers. But is Rohypnol really to blame for a large portion of these crimes?
Martin Robbins, a UK-based researcher and science writer, examined the Burgess study (Part 1, Part 2), which was published in the British Journal of Criminology. The researchers found that the fear about having one’s drinks laced with Rohypnol was disproportionate to the real risk—75 percent of the 200 survey respondents listed drink-spiking as a risk factor. This was significantly greater than the respondents’ perceptions of the risks of drinking or taking drugs.
Several studies suggest that the real-world use of Rohypnol or other sedatives to facilitate rape is vanishingly small. One study found evidence of sedatives in only 2 percent of rape victims, and another found none. By contrast, an Irish study found that a third of all rape victims had drunk so much that they could have experienced amnesia or been unconscious. Yet the perception of danger is just the opposite: women are concerned about date-rape drugs but feel they can control their own drinking. In reality, many women are raped while they and/or their assailants are so drunk that they may have no recollection of the experience. Robbins emphasizes that his post doesn't downplay the significance of date rape or absolve rapists of blame, but rather shows how troubling the problem of binge drinking is.
Alcohol is so embedded in most cultures that perceptions and reality intermix in surprising ways. Last week psychologist Polly Palumbo discussed a 2008 study about mothers' beliefs about their own kids' drinking. You might think that if mothers were concerned about their young children becoming drinkers in high school, they might be more successful in preventing some of those kids from actually engaging in underage drinking. In fact, the study, led by Stephanie Madon and published in the Journal of Personality and Social Psychology, found the opposite: mothers who worried their children might become drinkers had kids who were significantly more likely to drink.
The researchers are careful to point out that the study shows only a correlation; we can't say that the mothers' belief about drinking is what caused their kids to drink. But because the study was administered over several years, it's better than many correlational studies: we know the belief preceded the drinking, so it's pretty much impossible that the kids' drinking behavior itself led to the belief. Madon's team argues that this is an example of a self-fulfilling prophecy: the kids grow up in a family that expects they will drink, and so they do. What this study cannot show us is where these parents went wrong. What, specifically, do parents do to lead their children to drinking? It's a difficult problem, and as Robbins's posts demonstrate, one with very harsh consequences.
Since we know drinking can have a devastating impact, whether through alcoholism, date rape, or drunk driving, it's clearly important to measure the influence of alcohol, right? "Neuroskeptic," an anonymous blogger and UK-based neuroscientist, spotted a 2009 study that calls this line of reasoning into question. A genetic variation commonly found in East Asia renders drinkers slower to process the alcohol they consume, but faster to process the acetaldehyde created as a biochemical byproduct of drinking.
When researchers compared drinkers who possessed the genetic variation to other drinkers who processed acetaldehyde at a more typical rate, they found that both groups had similar blood alcohol levels, but the drinkers with the variation had much higher acetaldehyde levels. The team then tested the psychomotor skills of both groups, and found that blood acetaldehyde, not blood alcohol, was responsible for all the psychomotor impairment associated with alcohol consumption. In other words, some people who can pass a blood-alcohol test may still be impaired as a result of drinking alcohol.
The study, led by Sung-Wan Kim, was published in Biological Psychiatry. Neuroskeptic points out that the psychomotor impairment may be more a result of uncomfortable side effects of drinking associated with this genetic variation: symptoms of the "flush reaction" include nausea, headache, and elevated pulse, which themselves could affect performance on many tasks. Either way, the study demonstrates that our response to alcohol is not as predictable or clear-cut as it might seem.
Drinking is so embedded in many cultures that people are often unable to recognize or predict its effects. Researchers are now beginning to unravel some of the more pernicious of those effects. You can learn more about their findings on ResearchBlogging.org.
Front page image courtesy of ezioman.
Originally published March 3, 2010