For all the attention being heaped on Moore's Law this week, there's another, more important law that chip makers must contend with as they push the limits of semiconductor technology ever further: the law of diminishing marginal returns.
Tuesday marked the 40th anniversary of an article written by Gordon Moore, cofounder and chairman emeritus of Intel Corp., and published in the April 19, 1965, issue of Electronics magazine. In that article, which was titled “Cramming more components onto integrated circuits,” Moore observed that the number of components on a silicon chip had doubled at regular intervals and he predicted this trend would continue into the future.
Over the last four decades, chip makers have basically done as Moore predicted they would, cramming more and more transistors onto silicon chips at an exponential rate. Because increased density — the number of transistors on a chip — has for many years been closely tied to greater performance, this achievement made possible rapid increases in the computing power offered by a single chip.
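A back-of-the-envelope calculation shows how quickly that compounds. Using the two-year doubling period Moore later settled on (his 1965 article projected a faster pace, which he revised in 1975), the component count N after t years of doubling is:

\[ N(t) = N_0 \cdot 2^{t/2}, \qquad N(40) = N_0 \cdot 2^{20} \approx 10^6 \, N_0 \]

Forty years of steady doubling multiplies the starting count by roughly a million, which is why a trend that began with chips carrying tens of components now describes processors carrying hundreds of millions of transistors.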
“Moore’s Law has definitely helped drive the industry forward because it sort of sets a target,” said Nathan Brookwood, an analyst at market research firm Insight64.
But the importance of Moore's Law, which does not address chip performance, is limited. The incremental performance gains now achieved by regularly doubling the number of transistors on a chip, such as a desktop microprocessor, aren't as significant as they used to be. This is the law of diminishing marginal returns at work: the economic principle that the marginal return on each additional unit of input shrinks as more input is added.
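A widely cited industry rule of thumb, attributed to Intel's Fred Pollack and known as Pollack's rule, puts a rough number on this effect for microprocessors: single-core performance tends to grow only as the square root of the core's complexity,

\[ \text{performance} \propto \sqrt{N_{\text{transistors}}} \]

so doubling a core's transistor budget buys only about a 40 percent performance gain (\(\sqrt{2} \approx 1.41\)), while area and power grow roughly in proportion to the transistor count.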
The problem of diminishing returns is compounded by rising costs. The law of diminishing marginal returns assumes that the cost of each unit of input remains constant, but each new generation of semiconductor technology and manufacturing know-how has been more expensive for chip makers than the last. This has sharply raised the cost of keeping pace with Moore's Law and its call for exponential increases in density.
“Moore’s Law is interesting but it’s not relevant to the problems we face,” said Bernie Meyerson, chief technologist at IBM Corp.’s Systems and Technology Group.
One of those problems is scalability, the question of how to make transistors smaller without affecting their ability to function. Classical notions of scalability are rooted in research conducted during the early 1970s by a group of IBM researchers, including Bob Dennard, the inventor of the single-transistor DRAM (dynamic RAM) cell.
The classic scaling theory put forth in 1972 by Dennard and his colleagues outlined how a transistor's physical dimensions, operating voltage and doping levels could be scaled down in concert to produce a transistor with a channel length of 1 micron, or one-millionth of a meter. The channel is one of the parts, or features, that make up a transistor; it conducts or blocks the flow of electric current as the transistor is switched on or off.
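In simplified form (a sketch of the team's constant-field scaling argument, not a quotation from the paper), shrinking a transistor's dimensions and supply voltage by the same factor \(\kappa > 1\) gives:

\[ L,\; W,\; t_{ox},\; V \;\to\; \tfrac{1}{\kappa}, \qquad \text{switching delay} \to \tfrac{1}{\kappa}, \qquad \text{power per transistor} \to \tfrac{1}{\kappa^2}, \qquad \text{power density} \to \text{constant} \]

The last relation is the crucial one: as long as voltage scaled down along with dimensions, power per unit of chip area stayed flat no matter how many transistors were packed on, making each density doubling thermally sustainable.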
That theory guided semiconductor design for 30 years, well past the 1-micron mark, until chip makers moved to the 130-nanometer production process and classic scaling ran into a brick wall, Meyerson said. At that point, chip makers could no longer scale some transistor features, and power consumption rose sharply, he said. The size attached to a chip-making process refers to the average feature size on chips built with that process; one nanometer is one-billionth of a meter.
Increased power consumption resulted in chips that generated too much heat for some applications, Meyerson said. “By following Moore’s Law in the absence of correctly scaling the device, you get a dense chip you can’t correctly use,” he said.
The relationship between increased performance and greater density basically ended somewhere around 130 nanometers, leaving chip makers to seek out other ways of improving performance, Meyerson said. One solution is to put multiple processor cores, each using a lower clock frequency, on a single chip, he said.
That’s exactly what Intel has done with the latest addition to its desktop processor line. Faced with surging heat levels, the company last year abandoned plans to offer a 4GHz Pentium 4 processor, capping the clock speed for that product line at 3.8GHz. On Monday, the company introduced its latest desktop chip, the dual-core Pentium Extreme Edition 840, which has two processor cores running at 3.2GHz.
Even though industry roadmaps promise more semiconductor advances to come in the years ahead, Moore’s Law will eventually come to an end, as Moore himself noted in a recent conference call with reporters.
Transistor size is ultimately constrained by the limits of the physical world and a time will come when it is no longer possible, or economical, to fit more transistors on a chip. “There will be a point where we’re going to have to give up,” said Hans Stork, senior vice president of silicon technology development and chief technology officer at Texas Instruments Inc.
And that’s not necessarily a bad thing. “There’s something exciting about the fact that there’s an end to Moore’s Law, because now there’s the discovery and imagination of what else is going to be there,” Stork said.
(James Niccolai, in Paris, contributed to this report.)