Apple has been using nVidia GPUs for years in its laptops, while iMacs use GPU chips from AMD. But that could change, depending on which CPU Apple chooses.
That’s because Intel’s next-generation CPU, Sandy Bridge, builds the graphics core into the CPU chip itself—a first for Intel. Part of that new core is an insanely fast and capable video decoder and encoder that excels at displaying and transcoding high-definition video. Plus, the entire Sandy Bridge package—CPU and on-die GPU—is extremely light on power consumption.
AMD, too, is building capable graphics into its line of CPUs. Apple has so far been averse to using those CPUs, but there’s no reason it couldn’t.
Both AMD and nVidia are expected to move to a 28-nanometer chip-manufacturing process in 2012, and could move to 22nm after that. (In chip-making, the nanometer scale represents the smallest feature that process can create.) Both vendors will be able to pack more power into smaller packages that use less energy.
That means that even compact desktops like the iMac may sport graphics engines that rival powerful discrete chips like nVidia’s GTX 480. That, in turn, means users of apps like iMovie and iPhoto could see much faster response times: Instead of taking several hours to convert a movie from one format to another, it could take minutes.
With the Mac Pro, the sky’s the limit. nVidia recently began offering its very high-end Quadro line of performance graphics cards for the Mac Pro, making the tower that much more attractive to professional content creators.
There is one gating factor on graphics performance: heat. The faster that graphics (and other) chips go, the more heat they throw off. In the confined space of a laptop or an all-in-one like the iMac, that heat has nowhere to go. So Apple can’t drop the latest, bleeding-edge graphics chips into them. An nVidia GTX 480 chip—which currently represents the cutting edge—draws more power by itself than an entire iMac does.
AMD and nVidia are well aware of these issues. Both companies are starting to build discrete graphics chips that use less power while delivering more performance. Already, nVidia has shown a road map for its next-generation GPU architectures that offer twice the performance per watt of its current line; three years from now, the company promises to deliver six times the performance per watt of the current line.
Meanwhile, the trend toward using GPUs for more than graphics will continue to gain momentum. Software developers are eyeing that power for use in video transcoding and photo editing. Adobe Premiere Pro CS5’s Mercury playback engine, for example, uses the GPU for real-time previews at full resolution, without the long rendering times typical in complex video editing. At the same time, Firefox and Chrome will support hardware acceleration for video and 3D, through HTML5 and WebGL.
There are competing standards for general-purpose computing on the GPU—but Apple seems to be trying to play both sides. Apple was the original author of the OpenCL compute interface, which offloads video editing, photo editing, and other content-creation tasks to the GPU; OS X has supported it since Snow Leopard. Meanwhile, nVidia GPUs support the company’s own CUDA platform—but as mentioned earlier, OS X supports that, too. Individual applications can decide which framework they’ll use.
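To give a sense of what “offloading to the GPU” looks like in practice, here is a minimal OpenCL C kernel sketch for a photo-editing-style task: brightening every pixel of an image in parallel, one GPU work-item per pixel. The kernel name and the gain parameter are illustrative, not taken from any shipping application; real host code to compile and launch it is omitted.

```c
/* Illustrative OpenCL C kernel: brighten an RGBA image on the GPU.
 * Each work-item handles one pixel, so thousands run in parallel. */
__kernel void brighten(__global const uchar4 *src,  /* input pixels  */
                       __global uchar4 *dst,        /* output pixels */
                       const float gain)            /* e.g. 1.2f     */
{
    size_t i = get_global_id(0);            /* this work-item's pixel */
    float4 p = convert_float4(src[i]) * gain;
    dst[i] = convert_uchar4_sat(p);         /* clamp back to 0-255   */
}
```

An application would compile this kernel at runtime through the OpenCL API (or write an equivalent one in CUDA on nVidia hardware) and enqueue it across the whole image—which is exactly the kind of work that keeps the CPU free while the GPU churns through pixels.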
The upshot: The new chips and the updated software that will take advantage of them will not only help gamers, they’ll help other Mac users, too.
Loyd Case is a contributing writer at Maximum PC.