Intel’s upcoming Xeon server chips incorporate significant advancements that analysts say could form the basis for future processors capable of handling high-performance computing tasks.
The new Xeons are based on Intel’s latest Nehalem microarchitecture, which improves system speed by cutting bottlenecks that plagued Intel’s earlier chips.
The advancements are a step by Intel toward server chips that run applications faster. Chip makers are steadily etching more complex features onto processors so they can handle a wider range of applications, which could reduce the need for extra components in servers.
Intel is due to launch the Xeon chips on March 30, and initial offerings will be targeted at workstations and servers. Apple and Lenovo have already announced workstations with dual- and quad-core Xeon chips. The Mac Pro models introduced by Apple earlier this month feature Xeon 3500 and 5500 processors, both part of the Nehalem family.
Server announcements from other vendors are expected during the launch. Later this year, Intel could release Nehalem-based chips with six cores and eight cores, according to the company’s road map.
“Nehalem is a significant architecture that overcomes certain limitations Intel faced in the past,” said Jim McGregor, chief technology strategist at In-Stat. “This is a much bigger jump than they have had in quite a long time.”
Perhaps the most significant change is that the Xeon chips integrate a memory controller on the CPU, which gives the chip a faster path to communicate with the memory, said Dan Olds, principal analyst with Gabriel Consulting Group. It removes the memory latency that affected earlier Intel processors, which should translate to better server performance.
Data-intensive applications like video processing frequently require the processor to fetch information from memory, and Intel’s earlier chips had to do so over the front-side bus (FSB). After years of criticism, Intel removed the FSB and integrated the memory controller into the CPU with the Nehalem chips.
Intel’s rival Advanced Micro Devices has integrated memory controllers on its CPUs for years, which gave it a long-standing performance advantage, said Roger Kay, founder and president of Endpoint Technologies Associates. Intel relied heavily on cache for improved performance, but integrating the memory controller puts the two chip makers on a par technologically.
The improvement should also make it easier for Intel to take advantage of faster memory technologies like DDR3, McGregor said. Compared to existing DDR2-capable processors, the new DDR3-capable Xeon chips will be able to communicate with memory more quickly, improving overall system performance.
Another significant technology advancement is the addition of the QuickPath Interconnect (QPI) technology, a point-to-point interconnect that provides the CPU with a faster pipe to communicate with other processors and system components, the analysts said.
QPI removes some old bottlenecks of Intel’s earlier chip microarchitectures to improve system speed and performance per watt, Gabriel’s Olds said. As cores are added, QPI will help servers execute more tasks in parallel and tackle larger workloads and more data-intensive applications that need a lot of bandwidth and processing power.
The benefits go beyond raw speed: the chips will be able to execute more tasks than earlier Xeon processors while drawing less power, which could help cut energy costs. A larger number of cores per chip should also help consolidate servers into a smaller footprint in data centers.
Despite the new features, the chip may struggle to find buyers immediately as organizations cut back on IT spending during the recession.
“In the current economic environment, people will not go out and upgrade because it’s a cool new thing,” Olds said.
However, the new chips will generate a good margin for Intel and perhaps push the company into new markets like high-performance computing, analysts said.
The eight-core chip, code-named Nehalem-EX, will be able to run data-intensive applications like scientific and technical workloads, In-Stat’s McGregor said. The chips have enough bandwidth and processing power to execute a larger number of tasks in parallel than typical quad-core Xeon chips, which are designed to tackle less-demanding applications like databases.
“As you’re getting into more high-performance applications, you need the core count and the applications designed to use them,” McGregor said.
Going into six or eight cores for Nehalem chips is a double-edged sword, though, Olds said. Effective use of the cores depends on whether the software is designed to execute the tasks simultaneously across a larger number of cores. Applications need to be broken down into little pieces for distribution over multiple cores for parallel execution, which can be difficult.
But that’s not a huge concern, and the addition of cores should spur independent software vendors to write software for parallel execution, Olds said.
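The parallelization Olds describes can be illustrated with a minimal sketch (a hypothetical example, not from the article): a workload is broken into chunks, and the chunks are farmed out to worker processes so they can run on separate cores at the same time.

```python
# Minimal sketch of splitting work across CPU cores for parallel execution.
# The function names and the sum-of-squares workload are illustrative stand-ins
# for a real data-intensive task such as a slice of a video-processing job.
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for real per-chunk work.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, workers=4):
    # Break the data into roughly equal pieces, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Each chunk runs in its own process, so the OS can schedule
        # the pieces onto different cores simultaneously.
        partials = pool.map(process_chunk, chunks)
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))
```

The hard part, as the analysts note, is not the scaffolding but restructuring real applications so their work divides into independent pieces like this in the first place.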
Intel started talking about Nehalem chips in 2007. It launched the first Nehalem-based chip, the Core i7, for high-end desktops in November.