By Steven Shapin | Posted November 20, 2008
Accounting for 14 percent of the value of all 2007 IPOs, energy represents the largest sector of the new science ventures. Though oil and natural gas exploration remains a major part of the energy landscape, a number of big photovoltaic cell manufacturers made their debut last year, and the billion-dollar IPO of Brazilian sugar giant Cosan Ltd was based largely on its ethanol division. Metric source: Hoovers.com.
From a historical point of view, neither "science" nor "technology" is a self-evident category. In the 16th and 17th centuries, natural philosophers (we might now call them physicists) and mathematicians (whom we might now call engineers) were thought to have fundamentally different aims; incidentally, much more prestige was then attached to the philosopher seeking pure understanding than to the mathematician seeking to manipulate the world.
World War II fundamentally changed that state of affairs. At Los Alamos, physicist Robert Oppenheimer presided over the transformation of theoretical knowledge into practical application, of chalkboard equations into explosive reality. Hiroshima and Nagasaki profoundly altered public perceptions both of what scientific knowledge was and of what scientists were capable of delivering, and in the immediate postwar period, Oppenheimer and his colleagues spoke about a new, widespread belief that apparently useless knowledge could unpredictably become extremely useful. Statesmen of science involved in the founding of the National Science Foundation urged that basic research deserved large-scale federal support for just that reason. Big German and American electrical, chemical, and photographic companies had accepted a limited version of that argument from very early in the 20th century, but it took the atomic bomb and its political fallout to extend the sensibility to almost all of science and to make it politically and culturally consequential. With the advent of the Bomb, almost all scientists — not just nuclear physicists — began to appear as sources of power, and the extent to which American science fed off that legacy during the Cold War decades cannot be overemphasized.
World War II did much to allay long-held American skepticism of scientific inquiry seemingly disengaged from material outcomes and a deep-rooted disapproval of government support for research not promising immediate contributions to national welfare or security. But it did so by enshrining what has been called the Linear Argument — pure science produces applied science, which produces technology and economic growth — at the heart of the new contract between science and the state. In the mid-1970s and '80s, the late Sen. William Proxmire used to give out monthly "Golden Fleece" awards to individuals and institutions wasting public money in the most "frivolous" and "egregious" ways. One famous award targeted $2,000 toilet seats for Air Force planes, but others went to projects funded by the National Science Foundation and National Institutes of Health for research whose point and payoff the good senator did not understand: research on the sex lives of quail and on alcoholism in rats, to name but two studies. And in the early 1980s, when a sample of the American public was asked whether they wanted their tax dollars spent in pursuit of pure research, only 9 percent said they did. The American public still tends to approve of science in proportion to the contribution it thinks science can make to a range of valued goals — better health, an improved environment, reduced crime, and so on. In the first presidential debate, John McCain angled for applause by once again citing research on the DNA of bears: "I don't know if it was a criminal issue or a paternal issue," he quipped, "but the fact is that it was $3 million of our taxpayers' money. And it has got to be brought under control." Even Rita Colwell, who as director of the National Science Foundation was wholly committed to "curiosity-driven research," worked strenuously to convince the George W. Bush administration "how much of the wealth and health of the country...comes from our investment in science," thus paradoxically, but typically, defending intellectual curiosity on the grounds of material national interest. So the modern American scientist is held in some esteem, valued as a useful sort of person, but there is little understanding of what it might be to engage in scientific inquiry for its own sake and little evident approval of such a thing.
If early modern intellectual culture considered the "philosophers" to be the most prestigious sort of practitioner, the culture heroes of contemporary America are increasingly the "engineers" and the enterprising scientists whose discoveries can be turned to cures, power, and, of course, profit. Although only a fraction of present-day scientists and engineers fit this conspicuous mold, and an even smaller fraction become wealthy through entrepreneurial activities, the figure of the scientist-as-businessperson is fast becoming iconic. Our culture now admires commercializing entrepreneurs; it encourages and rewards them; and more and more it identifies the life of science with that of business and private enterprise.
In one sense, the enfolding of science in structures that produce wealth and project power is just a sign of its practitioners' immense success over the course of the past century. Even in the early 1900s, it was common for the public to think of scientists as useless, funny, and rather eccentric. Now, while the caricature is still, if just barely, recognizable, and while Beauty and the Geek trades on the supposed social ineptness of the technically knowledgeable, governments and industry around the world compete for scientists' services. The global "knowledge economy" calls out for universities to supply more scientists and engineers and, since the proportion of science and engineering degrees awarded by US universities to US citizens is declining precipitously, for government to supply many more H-1B visas for the large numbers who must be imported.
Science is now widely understood as an engine of economic growth, so it is remarkable that there are still many who associate the scientific life with institutions of higher education conceived on the model of the Ivory Tower. This was not the case in the early part of the previous century, nor is it the case now. Today almost two-thirds of all American science and engineering degree-holders work in the for-profit sector or are self-employed; only 9 percent work for colleges or universities. Even pure science has long had a significant presence outside academia. At the origins of corporate research in the early 20th century, big companies such as General Electric, AT&T, Eastman Kodak, and DuPont were the dominant sponsors of industrial science, and although the great majority of their money went to applied research and development, government and academia then supplied so little funding for basic research that most of that too was done in industry. It is now widely said that the research laboratories of big industrial firms are on their way out: The decline and fall of Bell Labs and the so-called "crisis in innovation" in global Big Pharma have both made recent headlines. Yet, if anything, the place of science in the for-profit sector has become more secure due to the past four decades of growth by small, entrepreneurial high-tech and biotech firms, where the boundary between making things and making knowledge is increasingly unclear and even irrelevant, and by the burgeoning commitment to all sorts of scientific research by such companies as Microsoft, Intel, and, most visibly, Google. The commercial sector now does about 70 percent of all American R&D in dollar terms. And while the overwhelming majority of corporate R&D remains biased toward development and applied research, about a fifth of US basic research is still done in industry.
The Scientist in 2008
Originally appeared in Seed 19