In April 1965, a young researcher named Gordon Moore wrote a short article for the now-defunct Electronics Magazine pointing out that each year, the number of transistors that could be economically crammed onto an integrated circuit roughly doubled. Moore predicted that this trend of cost-effective miniaturization would continue for quite some time.
Three years later Moore co-founded Intel Corporation with Robert Noyce. Today, Intel is the largest producer of semiconductor computer chips in the world, and Moore is a multi-billionaire. All this can be traced back to the semiconductor industry's vigorous effort to realize Moore's prediction, which is now known as "Moore's Law."
There are several variations of Moore’s Law—for instance, some formulations measure hard disk storage, while others concern power consumption or the size and density of components on a computer chip. Yet whatever their metric, nearly all versions still chart exponential growth, which translates into a doubling in computer performance every 18 to 24 months. This runaway profusion of powerful, cheap computation has transformed every sector of modern society—and has sparked utopian speculations about futures where our growing technological prowess creates intelligent machines, conquers death, and bestows near-omniscient awareness. Thus, efforts to understand the limitations of this accelerating phenomenon outline not only the boundaries of computational progress, but also the prospects for some of humanity’s timeless dreams.
Chip manufacturers are already struggling with the power consumption and heat dissipation of state-of-the-art computer processors. Meanwhile, the size of individual features on computer chips is approaching the atomic scale: once a chip component consists of a single atom, it's unclear how further miniaturization could occur. The exponential acceleration of computer performance, Moore's Law, would end.
However, there is hope that "quantum computers" could be built at such small scales. These computers would take advantage of quirks in the microscopic behavior of matter and energy to perform certain calculations far faster than classical computers.
But even quantum computers can't offer endlessly increasing returns, as two professors of electrical and computer engineering at Boston University, Lev Levitin and Tommaso Toffoli, recently demonstrated. In a paper published in an October issue of Physical Review Letters, Levitin and Toffoli delineated a universal limit on computational performance: a point beyond which no computer, classical or quantum, can feasibly increase its speed.
In the 1970s, Levitin defined the most fundamental elementary operation that a quantum computer can perform—a transition roughly equivalent to the flipping of a bit from “0” to “1” in a classical computer. This latest work calculates the maximum rate at which this elementary operation can take place, based on the amount of energy it would require and the fluctuating spread of energy within an idealized quantum system.
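The flavor of this energy–speed tradeoff can be sketched with the earlier Margolus–Levitin theorem, which caps the rate of elementary operations (transitions to orthogonal quantum states) at 2E/(πℏ) per second for a system with average energy E above its ground state. The following back-of-the-envelope calculation is illustrative only; the specific form 2E/(πℏ) and the one-joule figure are assumptions for this sketch, not numbers taken from Levitin and Toffoli's paper:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, in joule-seconds

def max_ops_per_second(energy_joules: float) -> float:
    """Margolus-Levitin cap on elementary operations per second
    for a system with average energy E above its ground state."""
    return 2.0 * energy_joules / (math.pi * HBAR)

# One joule of available energy caps computation at roughly 6e33
# elementary operations per second.
print(f"{max_ops_per_second(1.0):.2e}")
```

The striking feature is that the ceiling depends only on energy and a fundamental constant, not on any detail of the machine's construction, which is what makes the bound universal.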
On one level the finding is intuitive, says Toffoli. “If I want to send a message tied to a stone through a window, the harder I throw the stone the faster the message will travel,” he says. “Computation is just a very special kind of message, a message that changes as it travels—the more energy you invest, the faster it occurs. This work sets a bound in respect to the energy that you can provide.”
The boundary is as fundamental as the speed of light. Just as a stone can never travel at light speed (Einstein's theory of special relativity dictates that accelerating it to that speed would require an infinite amount of energy), Levitin and Toffoli's intrinsic linkage between computation and energy implies a hard ceiling on computational speed.
“This paper is giving a tradeoff between computation and energy,” says Scott Aaronson, an assistant professor at MIT and an expert in the science of quantum computing. “When you combine that with knowledge of how much energy you’re able to have in a given region of space-time before that region collapses into a black hole, you can get an upper bound on the amount of computation possible in any region. The creation of a black hole is one of nature’s funny ways of telling you that you can’t do something.”
Fortunately Levitin and Toffoli’s limit still leaves plenty of room for ambitious engineers to increase computing performance: The researchers estimate that per unit of energy their theoretical system could perform quadrillions more operations per second than the fastest processors available today.
A naive extrapolation of Moore's Law of exponentially accelerating computing power would bring us to Levitin and Toffoli's limit in well under a century, but according to Aaronson, sustaining that pace is exceedingly unlikely: second-order technological hurdles, or economic and social constraints, may short-circuit Moore's Law long before the limit is within reach.
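The sub-century timeline follows from simple doubling arithmetic. Assuming, purely for illustration, the quadrillion-fold (10^15) performance gap suggested above and one doubling every 18 months, the extrapolation works out as:

```python
import math

gap = 1e15            # illustrative gap between today's chips and the limit
doubling_years = 1.5  # one doubling per ~18 months (Moore's Law)

doublings = math.log2(gap)          # doublings needed to close the gap
years = doublings * doubling_years  # time at Moore's Law pace
print(f"{doublings:.0f} doublings, about {years:.0f} years")
```

With those assumed inputs the answer is about 50 doublings, or roughly 75 years; a larger gap or a slower doubling rate pushes the date out correspondingly.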
“Moore’s Law is something that’s been happening in our civilization for the last 40 or 50 years, but when anything grows exponentially, whether it’s processing speed or a nuclear chain reaction or human population, it’s a pretty good bet that it can’t continue forever,” Aaronson says. “Human civilization might never ever get within even 20 orders of magnitude of saturating the limits we’re discussing.”
Toffoli agrees. “Unless people overhaul quantum mechanics the way they overhauled Newtonian classical mechanics, [our limit] is as far as we’d be able to go, even if we had complete control of everything,” he says. “But we’ll never attain this limit because to do so we’d have to exist in an absolutely perfect world, free of noise from other people, other creatures, stars seething and sending out photons, and so on. This is only to have an idea where we stand—not to say we can actually reach it.”
Originally published December 15, 2009