Analysis: Intel developer forum introduces your next Mac
First Apple drops “Computer” from its name to expand its focus to music, phones, and set-top boxes, then Intel devotes six keynote-speech hours at last week’s Intel Developer Forum to consumer electronics, Internet TV, futuristic human-machine interaction, and 3-D movies. There’s a paradigm shift underway, and Apple and its primary chip supplier are shifting right along with it.
But make no mistake. Even though Intel’s message to the 6,000-plus international übergeeks that filled San Francisco’s Moscone Center West was clearly that it planned to move into consumer electronics in a big way, the company still had plenty to say about the nuts and bolts that hold together the entire world of computing. And much of it figures to influence the future of your Mac.
Of course, nobody would say as much at last week’s conference. Apple is, as ever, tight-lipped about future product plans, and Intel’s not about to prematurely spill any beans. Still, that doesn’t mean we can’t take a look at what Intel talked about at its developer forum and consider what may or may not find its way into your next desktop or laptop.
Your next Mac’s microprocessor
Not only is the future going to be centered on both consumer electronics and the Internet, it’ll also be parallel-processed, thanks to the introduction later this year of a new multi-core, multi-threading microprocessor architecture that Intel’s marketing department now calls Core i7, but which the geeks at IDF still referred to by its codename, Nehalem. Interestingly, all the printed schedules, hand-outs, and plasma-screen announcements at IDF identified this breakthrough architecture as Nehalem and not Core i7—perhaps there’s a broken e-mail link between Intel marketing and engineering. Being a wannabe geek myself, I’ll use the term Nehalem.
Nehalem’s general outline has been known for some time. Rumors began to surface late last year, then Intel released a white paper covering its main attributes this April. With the proverbial cat already out of the proverbial bag, Intel used this year’s IDF to elaborate on a few of Nehalem’s capabilities and update its delivery timeline: late 2008 for high-end desktops and servers and by the third quarter of 2009 for mainstream desktop and mobile platforms.
Before I dig into Nehalem’s goodness, a bit of clarification: The term Nehalem refers to a new microprocessor architecture—the chip’s inner workings—and not to a specific microprocessor itself. Intel will release many differently configured Nehalem microprocessors during the architecture’s lifetime; the configuration discussed at IDF was designed for high-end desktops (HEDTs) and servers. Each member of the upcoming Nehalem line will have identical cores (where the actual number- and data-crunching gets done), making it far easier for software developers to standardize their development efforts. The differences will be in the number of cores, cache sizes, graphics capabilities, and the like.
Nehalem microprocessors will be built using the same chip-making technology, identified by its process, as the microprocessors in today’s Macs. Processes are defined by their transistor-to-transistor distances; the current process is 45 nanometers (nm). That may sound insanely small—Oprah’s hair is about 3,000 times thicker, and Paris Hilton’s (unless she dyes it) about 1,000—but ever since processes shrank below 130nm, microprocessors’ transistors have leaked power like a sieve. Power leakage is bad: it wastes energy and generates heat. Fortunately, Nehalem uses the same leakage-busting silicon technology, dubbed Power Gate, as its 45nm older brother, Penryn. (Penryn chips currently power all Mac laptops except the MacBook Air, but expect Apple’s lightweight laptop to jump on the bandwagon soon, since Intel announced a mobile form-factor Penryn processor at IDF.)
Nehalem, however, introduces new on-chip power-management circuitry called the Power Control Unit (PCU), which watches the processing areas of the chip in fine detail, switching power to sections of it on and off as needed. Crunching data? You get power. Lollygagging? No soup for you! The extent of the PCU’s capabilities can be deduced from the fact that it contains a full one million transistors—compare that to the 29,000 transistors in the Intel 8086 introduced 30 years ago. What’s more, the power levels of Nehalem’s processing cores and its per-core data and instruction caches are decoupled, since caches need higher power levels to keep their contents error-free. And when the cores heat up above spec, the PCU can step each core’s power down in smaller increments than Penryn chips can, making power corrections less drastic.
Another marquee Nehalem feature is Turbo Mode. Complex in execution, Turbo Mode is simple in concept: Say that a Nehalem processor has four processing cores, as the first ones to be introduced later this year will. And say that the application it’s running is making use of only two of those cores. Not only are the other two cores standing idle, but the power they would otherwise consume is going to waste. With the help of the aforementioned PCU, Nehalem’s Turbo Mode senses that there’s power headroom available and boosts the clock rates of the cores that are hard at work. Those two cores speed up and work faster, but the total power the microprocessor consumes—and the heat it generates—remains the same as if all four cores were active. Turbo Mode will therefore improve the performance of apps that haven’t been efficiently optimized for multicore processors—and, sadly, there are far too many of those littering the Mac ecosystem.
Turbo Mode can adjust to any number of cores being active or inactive at any given time, and it switches core speeds without wasting even a single clock cycle to pull off that feat. In the first Nehalem processors, the uptick in clock rate will be 133MHz per inactive core. Oh, and speaking of clock rates, hobbyists will be happy to learn that overclocking protection has been eliminated in Nehalem—may a thousand case mods bloom!
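As a back-of-the-envelope sketch of that math: the 133MHz step is the only figure Intel gave, so the 3,200MHz base clock below is a hypothetical number used purely for illustration.

```python
# Rough sketch of Nehalem's Turbo Mode: each idle core frees power
# headroom that lets the busy cores step their clocks up by 133MHz.
# The 3200MHz base clock is a hypothetical figure, not an Intel spec.
TURBO_STEP_MHZ = 133

def turbo_clock(base_mhz, total_cores, active_cores):
    """Return the boosted clock (MHz) for each active core."""
    idle_cores = total_cores - active_cores
    return base_mhz + idle_cores * TURBO_STEP_MHZ

# A four-core chip running a two-threaded app: the two idle cores
# donate their headroom to the two busy ones.
print(turbo_clock(3200, 4, 2))  # 3466
```

With all four cores busy there is no headroom to donate, and the chip simply runs at its base clock.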
Another radical new feature of the Nehalem architecture is that its memory controller—the circuitry that moves data in and out of RAM—is now included on the microprocessor itself, and not in a separate memory controller chip. While you’re to be forgiven for thinking “Who cares?”, remember that in previous designs, data flying back and forth to and from memory had to share space on the front side bus (FSB) with traffic from hard drives, graphics cards, USB devices, and so on. Now it has its own private channel—and it’s a fast one: over 33GB per second. Your data will get to each core faster, so your Mac will process it faster.
Some future Nehalem processors will also have a graphics controller on-chip. Don’t expect blazing performance from those controllers, though. Instead, their advantage will be in lower chip counts and lower power requirements. Think cheaper laptop systems with longer battery life. Future Nehalem-based systems will also let you choose either the on-chip graphics controller, useful when your laptop is running on battery power, or a discrete, third-party graphics controller, useful when you need top graphics performance.
Nehalem will eschew the FB-DIMMs in current Macs—it appears that FB-DIMMs are following floppies, LocalTalk, and Zip drives to obsolescenceville. Nehalem will support DDR-3 DIMMs, with three channels of DDR-3 per socket and up to three DIMMs per channel. DDR-3 uses less power than DDR-2 does, and it requires none of the buffer power that FB-DIMMs do. There seems to be a theme here: faster, with less power required. Nehalem microprocessors will perform best when their DIMMs are installed in matched pairs, although unmatched DIMMs will still work; the HEDT version discussed at IDF will support up to 24GB of memory.
Now that memory is no longer flowing over the FSB, Intel decided to dump it and replace it with what it calls the QuickPath Interconnect (QPI). I could talk for an hour about the glories of this fine technology—Intel’s Bob Maddox, who presented the QPI session, certainly had no difficulty in doing so—but I’ll simply sum up by saying that QPI is a new way for processors to talk with each other and with the rest of the computer. And QPI is fast. Very fast: around 25.6GB per second—that’s more than twice the speed of the Mac Pro’s 1600MHz FSB. Bottom line: You guessed it—a Nehalem-based Mac will be fast. Very fast.
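The arithmetic behind that “more than twice” comparison is simple: an FSB transfer is 64 bits (8 bytes) wide, so a 1600MHz bus peaks at 12.8GB per second. A quick sketch:

```python
# Comparing the Mac Pro's 1600MHz front-side bus to Nehalem's QPI.
# An FSB transfer moves 64 bits (8 bytes) per cycle.
fsb_gb_per_sec = 1600e6 * 8 / 1e9   # peak FSB bandwidth in GB/sec
qpi_gb_per_sec = 25.6               # Intel's quoted QPI figure

print(fsb_gb_per_sec)                    # 12.8
print(qpi_gb_per_sec / fsb_gb_per_sec)   # 2.0
```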
Your next Mac’s hard drive
Well, that intro may be a bit misleading: Your Mac’s next hard drive may not be a hard drive at all, but instead a chunk of silicon called a solid-state drive (SSD). At IDF, Intel announced its second generation of SSDs, and this time it appears that it’s poised to move these rugged, power-miserly, low-heat, silent, highly reliable, and fast storage devices into the mainstream—so much so, in fact, that the company named one of its two new lines of SSDs “Mainstream.”
The new Intel SSDs are all SATA-based, so incorporating them into existing Macs would be a simple matter of plugging them in and watching them go. Current operating systems such as Mac OS X and that other one from Microsoft won’t require any new commands to use SSDs, although both operating systems would benefit from optimization to remove some commands (such as ones instructing the system to wait for a hard drive to spin up) that would unnecessarily cramp SSD performance.
Intel’s new SSDs come in two flavors: Mainstream and Extreme. The former are designed for people like you and me: users of laptops and desktops; the latter are designed for high-end, hard-working servers in data centers. There are two Mainstream form factors, 1.8-inch (X18-M) and 2.5-inch (X25-M), while the Extreme line is limited to a single 2.5-inch model (X25-E). The SSDs will start small, with 80GB Mainstream models shipping in the next 30 days and 160GB units appearing in the first three months of 2009. The Extreme SSDs will be even smaller, starting at 32GB in 90 days and doubling to 64GB in early 2009.
The specs for the Mainstream SSDs are impressive: up to 250MB/sec read performance and 70MB/sec write performance, a 1.2 million hour mean time between failures (MTBF), and a minuscule 150mW power draw under a typical workload. The Extreme SSDs are even more impressive, with up to 250MB/sec read and 170MB/sec write performance, and a 2 million hour MTBF.
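One caveat worth remembering: MTBF is a statistical measure of failure rates across a large population of drives, not a lifespan promise for any single unit. Still, converting the quoted hours to years shows just how rosy Intel’s numbers are:

```python
# Convert Intel's quoted MTBF figures from hours to years.
HOURS_PER_YEAR = 24 * 365

mainstream_mtbf_years = 1_200_000 / HOURS_PER_YEAR
extreme_mtbf_years = 2_000_000 / HOURS_PER_YEAR

print(round(mainstream_mtbf_years))  # 137
print(round(extreme_mtbf_years))     # 228
```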
Performance appears to be equally impressive. One Intel demo showed a laptop with a Mainstream SSD running a suite of straightforward tasks between four and five times as fast as an identically configured laptop with a standard 5,400 rpm hard drive. Intel also said that although final tuning has not yet been completed, in its labs the battery life of an SSD-equipped laptop is over a half-hour longer than that of an identical hard-drive-equipped sibling.
Mainstream SSDs are produced using a 34nm multi-level cell (MLC) technology, while Extreme SSDs use 34nm single-level cells (SLC). SLC not only has faster write rates than MLC, it’s also more robust: SLC SSDs are projected to last 10 times as long as the MLCs—which are no slouches themselves, with a claimed ability to absorb 100GB of writes per day for five years. Do you transfer that much data every day? I don’t—for most of us, these SSDs should outlast even that projection.
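That 100GB-a-day claim translates into a sizable lifetime write budget. A quick sketch of the math (the 10GB-per-day “light use” figure below is my own assumption, not Intel’s):

```python
# Intel's endurance claim for the Mainstream (MLC) SSDs:
# 100GB of writes per day, every day, for five years.
total_writes_tb = 100 * 365 * 5 / 1000
print(total_writes_tb)  # 182.5  (TB of lifetime writes)

# At a lighter workload, the same budget stretches much further.
light_gb_per_day = 10  # hypothetical light-use figure
years_at_light_use = total_writes_tb * 1000 / (light_gb_per_day * 365)
print(years_at_light_use)  # 50.0
```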
The question, of course, remains: How much will SSDs cost? In answer to that question, an Intel rep told us to wait until the first SSDs ship in 30 days. One tiny bit of pricing information did slip, however, during a demonstration of a Mainstream SSD beating the pants off a high-speed RAID 0 setup. The RAID contained two 300GB Western Digital Velociraptor drives spinning at 10,000 rpm; the capacity of the SSD was not given, but considering that the 160GB Mainstream SSDs aren’t even scheduled for sampling until the end of this year, 80GB might be a reasonable guess. During the demo, an Intel rep mentioned that the two storage systems were “about equal” in pricing—and considering that 300GB Velociraptors retail for $300 each, it doesn’t appear that SSDs are going to be cheap when first released.
But Intel is deeply committed to SSDs, and it projects that by 2010 the market will be filled with “billions of gigabytes” of SSD storage. With that much product appearing in desktops and servers worldwide, prices are certain to drop dramatically. For example, remember that Apple’s first LCD display, the 15-inch Studio Display, cost a cool $2,000 when it was released in 1998. Today you can’t even find 15-inch LCD displays, and 19-inch models start at under $150. Technology marches on; prices march downward.