Illustration: Mike Pick
Malcolm Gladwell is a rare figure: a science journalist who is loved by everyone except scientists and journalists. It’s easy to see where the love comes from. The prolific New Yorker contributor and surefire bestselling author has made a cottage industry out of translating the latest sociology, psychology, and neuroscience research into easily graspable nuggets of folk wisdom, all in a clear, effortless style. Is that so bad, really?
The resentment bubbling just below the surface among both scientists and journalists seems to have recently overflowed, mostly in response to Gladwell’s greatest-hits compilation of New Yorker stories, What the Dog Saw. Faced with such concentrated success, the backlash began at Vanity Fair with a parody, and continued at The Nation, The New Republic and elsewhere. If only there were a glib term to concisely describe this phenomenon.
For the journalists, it would be easy to describe this resentment as simple jealousy. An early example of the Gladwell Tipping Point from The Daily Beast captures that envy; the idea of a magazine writer not named Hunter S. Thompson having honest-to-goodness groupies is just too much, especially when that journalist is rocking the Sideshow Bob look.
For scientists, jealousy may also be a factor. There’s the perception that Gladwell’s writing is simply much easier to read (and write, for that matter) than that of his scientific counterparts, and any one example will almost certainly garner the former more fame and fortune than any journal article published by the latter. But wrapped up in that question of difficulty is one of rigor: Gladwell can hold forth on all branches of social science without having to run a single regression analysis, or even understanding what one is.
That was at the core of evolutionary psychologist Steven Pinker’s argument in his Sunday New York Times review of Gladwell’s new book. There are a number of deeply satisfying zingers in there, but none more so than what Pinker calls “The Igon Value Problem.” This refers to Gladwell’s faulty transcription of “eigenvalue” in an article about Nassim Taleb. According to Pinker, the slip outs Gladwell as the height of dilettantism and intellectual phoniness: an ostensible expert who is ignorant of his own ignorance of the subject in question.
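For anyone whose eigenvalues are as rusty as Gladwell’s spelling, a quick sketch: an eigenvalue is the factor by which a matrix stretches certain special vectors. For a 2×2 matrix they fall out of a simple quadratic. (This is a minimal, hypothetical illustration, not anything from Pinker’s review or Gladwell’s article.)

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of the matrix [[a, b], [c, d]], found by solving
    its characteristic polynomial:
        lambda^2 - (a + d)*lambda + (a*d - b*c) = 0
    Assumes the eigenvalues are real (true for symmetric matrices)."""
    trace = a + d
    det = a * d - b * c
    disc = math.sqrt(trace ** 2 - 4 * det)
    return (trace + disc) / 2, (trace - disc) / 2

# The symmetric matrix [[2, 1], [1, 2]] stretches the vector (1, 1)
# by a factor of 3 and the vector (1, -1) by a factor of 1.
print(eigenvalues_2x2(2, 1, 1, 2))  # -> (3.0, 1.0)
```

The quadratic-formula shortcut only works for 2×2 matrices; larger ones require iterative numerical methods, which is roughly where the regression-analysis gap between Gladwell and his sources opens up.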
Gladwell’s response attempts to deflect that charge as a matter of misspelling rather than a lack of familiarity with the concept. He also posts a screengrab of a correct version of the offending paragraph from the New Yorker, though the incorrect version remains on his own site. It seems that the New Yorker’s crack fact-checking team had caught the error in Gladwell’s manuscript but his book publishers did not. How this speaks well for Gladwell is unclear.
That’s not to say his response didn’t land some punches. Gladwell calls out Pinker’s residence on “the lonely ice floe of IQ fundamentalism” and his reliance on a dubious, Bell-Curve-quoting source. But after that, Gladwell and his commenters descend into the weeds of his argument on predicting the value of NFL quarterbacks from their draft positions, losing the clarity and simplicity that makes his writing attractive in the first place.
All this does is show that, where statistical rigor is actually applied, it takes the discussion to a level of abstraction that is not useful to the average reader. And even for the NFL managers who could potentially parse this information, the list of caveats is long enough to throw any actionable knowledge gleaned from the exercise into question.
To try my hand at Gladwell’s technique: Conventional wisdom suggests that if getting Gladwell’s level of popular traction means sacrificing aspects of both science and journalism, it might be better to have no Malcolm Gladwells at all.
But I don’t think that’s the case. Both scientists and journalists want it all, but it may be that the subjects Gladwell loves to cover have just too many moving parts and externalities to condense into a popular article. Perhaps we’d all be happier if he would just admit that.
Making it personal
Another battle over rigor and accuracy was waged this week; the battleground of this dust-up was the much-lauded but still tenuous world of personal genomics.
In one corner was genome-sequencing pioneer Craig Venter and a cohort of coauthors, who published a Nature editorial in October that crunched the numbers of two direct-to-consumer genetic testing companies: 23andMe and Navigenics. They found that while the companies were accurate in reporting what genetic variants customers had, the utility of the genotype-phenotype correlations would be fundamentally unclear to users. To further confuse the issue, the two companies weight the predictive value of certain markers differently, skewing their results relative to one another.
In the other corner are representatives of those companies, firing back on 23andMe’s blog (Nature wouldn’t publish their response, citing “space concerns”). Their argument: even if the predictive value of their results is small in comparison to other factors, it can be extremely useful in some cases, such as with certain mutations related to breast cancer.
This debate comes at an interesting time for both 23andMe and Navigenics, as the only other direct competitor in their nascent field, deCODE, has just filed for bankruptcy. (A fourth company, Knome, has George Church on board but is only for high-rollers willing to spend $99,000 for their entire genome on a blinged-out flash drive.)
This isn’t necessarily a reason to pop the champagne. deCODE will likely continue to exist as assets are shifted around to American investors, as will deCODEme, the personal genomics service that competes with 23andMe and Navigenics. And deCODE’s DTC genetic testing arm was only a sliver of its core business: finding drug targets from association studies based on Iceland’s highly homogeneous population. If that game plan isn’t profitable in the long term under ideal conditions, it doesn’t exactly augur well for others trying to make a buck in the personal genomics industry.
In the end, this isn’t so different from the Igon Value Problem. The level of rigor necessary to make predictions about genes that are both accurate and useful is quite high, and there are still honest disagreements amongst experts about what to factor into those predictions. This means that while most users will understand enough to create a narrative for themselves—my gene X puts me at higher risk for outcome Y—they might not understand why that narrative makes sense.
Originally published November 20, 2009