Three weeks ago I wrote a column about possible causes of suicide. I thought I was being careful to explain the results of several studies, showing that suicide is a difficult problem with many potential contributing factors and confounding variables, including mental illness, depression, and the seemingly contradictory influences of intelligence. Yet on social-networking sites, many readers latched on to one finding: That countries with higher average IQ tend to have higher suicide rates. Commenters implicitly agreed or disagreed with that finding, apparently without even reading the rest of the column.
Admittedly, my editor and I did decide to emphasize the IQ research in the dek, the short description preceding the article. We were hoping to harness the power of the common stereotype of “tormented genius” to attract readers, then show them that this view of suicide was an oversimplification. Instead, we may have merely reinforced the stereotype.
I’ve long believed that most readers are capable of understanding very sophisticated scientific reasoning as long as it’s explained clearly and without unnecessary jargon. But this depends on one key factor: That readers are giving the story their full attention. In a world full of mobile devices, constant interruptions from sites like Twitter and Facebook, and a nearly infinite supply of information, attention is a precious and fleeting commodity.
So if many readers aren’t paying enough attention to understand the nuances of complex science, is there any point in attempting to report science to non-experts? Obviously, since I’m still writing this column, directed at experts and non-experts alike, I think there is—and I think both journalists and scientists have a role in communicating science. I even see a role for people who don’t consider themselves to be journalists or scientists.
For scientists, however, the stakes may be higher than for anyone else. Not only do scientists often rely on the public to fund their research, but those who work at public universities and other public research institutions depend on public support for the very existence of their jobs. In an April case study published in PNAS, scientists explained the communication efforts behind establishing a marine reserve off the California coast. Ars Technica blogger Matt Ford discussed the study last week. Ford says the researchers identified four major components of successful communication: Understanding the audience, determining the message, deciding on strategies, and measuring success. While these may seem relatively obvious, I’d submit that most science communicators ignore the first and last points. Without knowing your audience, you’re going to have a hard time choosing an effective strategy for conveying your message. And while they may be concerned with the impact factor of the journals they publish in, many scientists don’t take the time to assess the influence of their papers after publication.
Ford, for his part, then makes an effort to assess how Ars Technica’s Nobel Intent blog addresses those issues, pointing out how some articles have been both criticized and praised—highlighting the difficulty of satisfying all readers. The first commenter on Ford’s post pointed out another flaw in his writing: There was no citation or link to the source article in his post (this turned out to be an oversight; Ford later added the citation).
How can bloggers—whether journalists, scientists, or interested laypeople—use the information in the PNAS study? Often their goals aren’t as well-defined as the case study in the article. Certainly it’s important to consider your audience, but as Ford points out, often bloggers don’t have much control over their audience. A blog post can be linked by a few friends on Twitter, skyrocket to popularity on Digg or Reddit, or be read only by a few specialists, one of whom could be on the editorial board of a journal you’d like to publish in.
The answer is that you can’t write a blog post to satisfy all possible audiences. All you can do is write for your intended primary audience—experts, laypeople, or somewhere in between—and offer enough extra information for others to fill in the gaps. For example, this column is written for an audience with a good general education. If experts want more details, they can follow the links I provide to blog posts and the original peer-reviewed research. A blog pitched at experts might offer links to definitions of key terms so that interested laypeople can follow along.
Blogs can also serve as media watchdogs. Biologists Carin Bondar and Zen Faulkes have recently started a regular feature in which they analyze media coverage of a journal article, debating whether the coverage distorted the research results.
But as blogger/journalist Ed Yong observed at the January ScienceOnline2010 conference, bloggers and journalists need not be adversaries. Their roles often overlap, and as more of the mainstream media moves online, the distinction between “blogger” and “journalist” becomes increasingly blurry; there are good and bad examples of both.
To my mind, one of the best ways to ensure quality both among bloggers and in the mainstream media is for bloggers and journalists to be critical of each other—and themselves.
For my part, let’s take another look at my suicide column. Why did some readers misconstrue it as saying only that suicide is correlated with IQ? I think the reason isn’t just the dek, which simply asks whether the stereotype of the “tormented genius” is true. The problem is that this line of reasoning continues throughout the first paragraph and isn’t debunked until the third sentence of the third paragraph. Compare that to last week’s column, which contrasts two viewpoints in the dek and the first paragraph, then resolves them over the course of the article. I think I did a better job last week, and judging by the response to the article on social media, few commenters misinterpreted it.
In the hectic, overloaded modern world of communication, those who want their message to be accurately heard need to understand the many different ways an audience will encounter their work. I don’t think this means that research needs to be “dumbed down,” but it might mean asking whether portions of the message, taken in isolation, have the potential to present a distorted version of the truth.
Dave Munger is editor of ResearchBlogging.org, where you can find thousands of blog posts on this and myriad other topics. Each week, he writes about recent posts on peer-reviewed research from across the blogosphere. See previous Research Blogging columns »
Originally published June 30, 2010