A little more than a decade ago, while on a visit to now long-since-failed supercomputer maker Kendall Square Research (KSR), it became clear to me that the world of computing had been turned completely upside down.
KSR was one of a handful of computer companies emblematic of the “big iron” era of computing and the last big push of the military-industrial complex: big computers built out of high-priced custom circuits. A single computer could consume as much power as a small city and cost as much as $30 million.
Computing technology in that era trickled down from large military and corporate computing projects until–years later–it finally appeared in desktop personal computers.
One look at the heart of the KSR machine, however, revealed a very different reality then coming into existence. The custom microprocessor at the heart of each of its dozens of computing modules was manufactured in Japan–on the same factory production line that made the chips inside each $300 Sharp Wizard calculator and organizer.
All of a sudden, technology was trickling up.
The world of big computing had started to become dependent on the infrastructure developed for inexpensive consumer electronics.
Today, this inversion is the norm. Rather than trickling down from the top, new technologies and the most powerful computers almost always appear first at the very bottom of industry. Not long ago, for example, researchers at NASA Ames Research Center–a scientific laboratory that still relies heavily on Macintosh computers–received their first order of Power Mac G4 desktop computers.
They quickly ported a number of scientific programs to the new machines to benchmark performance. They wanted to find out whether the 128-bit-wide registers provided by the G4 chip’s AltiVec extensions would be useful for scientific calculations. What they found was that the $4,000 G4 easily matched the raw computing performance of a $30 million 1985-vintage Cray-2. In a decade and a half, the cost of processing had fallen almost four orders of magnitude.
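The “almost four orders of magnitude” claim checks out as simple arithmetic. Here is a back-of-the-envelope sketch using only the prices quoted above:

```python
import math

# Illustrative arithmetic only, using the column's figures: a $30 million
# 1985-vintage Cray-2 versus a $4,000 Power Mac G4 of roughly equal speed.
cray_price = 30_000_000
g4_price = 4_000

ratio = cray_price / g4_price            # 7,500x cheaper per unit of performance
orders_of_magnitude = math.log10(ratio)  # ~3.9 -- "almost four orders of magnitude"

print(f"price ratio: {ratio:,.0f}x")
print(f"orders of magnitude: {orders_of_magnitude:.2f}")
```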
Of course, it doesn’t stop there. With the advent later this year of the Sony PlayStation 2–the $300 gaming console that will have more power than today’s Intel-based PCs–it will indeed be possible to make two unequivocal statements about how the world has been turned upside down. First, in the future the cheapest computers will be the fastest. And second, the companies that make the fastest computers will be the ones that make things to go under Christmas trees.
The computing inversion is the logical consequence of Moore’s Law. By now, many of us are acquainted with the observation made by Intel cofounder Gordon Moore in the mid-1960s: Every 18 months the number of transistors that can be etched on a given area of silicon doubles. That has meant that, with brutal efficiency, the cost of computing has continued to fall while power has increased. What most people haven’t grasped is that while time passes in a linear fashion, Moore’s Law makes the growth in computing power exponential. So brace yourself.
In the short space of the next year, the processing power of a PowerPC G4 will more than double–an increase that will match the progress made in the last 20 years of computing. And the year after that, it will double again! My first computer, purchased in 1981, was an 8MHz IBM PC; almost two decades later I’m writing this on a PowerBook G3 that has roughly 40 times its processing power. Next year the Mac I write with will almost certainly have twice as much speed, consume less power, cost less, and look better.
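The compounding behind those claims can be sketched in a few lines. This assumes the 18-month doubling period the column cites:

```python
# Back-of-the-envelope sketch of Moore's Law as the column states it:
# computing power doubles every 18 months.
MONTHS_PER_DOUBLING = 18

def doublings(years: float) -> float:
    """Number of doublings that fit into a span of years."""
    return years * 12 / MONTHS_PER_DOUBLING

# Fifteen years (the 1985 Cray-2 to the 2000 G4) holds ten doublings,
# a roughly thousandfold gain in raw power.
print(f"growth over 15 years: {2 ** doublings(15):,.0f}x")   # 1,024x

# Why one more doubling "matches all prior progress": the newest doubling
# adds as much capacity as every earlier doubling combined.
n = int(doublings(15))
assert 2 ** n - 2 ** (n - 1) == 2 ** (n - 1)
```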
What does the fact that computer processing power is heading into the stratosphere while costs dwindle imply? Historically it has meant that with remarkable regularity over the past three decades Silicon Valley has spun off entire new industries with each stair step in processing power: digital watches, video games, personal computers, the Internet, mobile phones, and PDAs have all grown into global consumer industries. The next leap in the power-to-price ratio might have similarly world-altering potential.
Not that it’s predictable. What is perhaps most delicious about Moore’s Law is the regularity with which the pundits and executive “visionaries” stumble in their predictions about what the next big thing will be. In the early 1990s Silicon Valley bet big and guessed wrong on interactive television. Several years later, in The Road Ahead, William Gates largely missed the rise of the Internet.
Now the next big bet is on wireless handheld computing. But no one is certain how and when that world will emerge. Despite pronouncements about networks that will give urban computer users high-speed wireless connections matching DSL and cable speeds, the United States is confronted with a bewildering array of competing wireless standards.
If wireless does take off, it will happen because a wave of systems that let people reach the Internet by voice from their cell phones will expand the power of the Internet far beyond the 25 percent of Americans who are connected. Already, services such as Tellme (www.tellme.com) in Silicon Valley are opening up the Internet to cell phone users without browsers. Users can get news, receive driving directions, make movie and restaurant reservations, and buy things online, all by speaking to a remote computer system.
Try to guess what’s next, though, and you’re playing a fool’s game. For, as remarkable as Moore’s Law is, processing power is now actually the laggard compared with the accelerating power of both magnetic data storage and optical fiber communications. Just one example: in 1981, a 10MB hard drive cost approximately $1,000. Now you can buy a 10GB drive for about $100. These technologies are improving at a rate that makes Moore’s Law seem pedestrian. Put it all together, and the world as we know it is certain to be turned upside down again.
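The drive numbers alone make the point. A rough price-per-megabyte comparison, using only the figures above:

```python
# Rough price-per-megabyte comparison using the column's figures.
price_1981, capacity_1981_mb = 1_000, 10      # $1,000 for 10MB in 1981
price_now, capacity_now_mb = 100, 10_000      # $100 for 10GB (10,000MB) today

per_mb_1981 = price_1981 / capacity_1981_mb   # $100 per megabyte
per_mb_now = price_now / capacity_now_mb      # one cent per megabyte

print(f"storage cost improvement: {per_mb_1981 / per_mb_now:,.0f}x")   # 10,000x
```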
JOHN MARKOFF is a senior writer for the New York Times in San Francisco. He is a coauthor of Takedown (Warner Books, 1996).