NVIDIA Corp.’s new GeForce4 line introduces a whole new vocabulary to potential Power Mac G4 customers interested in getting the most out of their machines, so here’s a look at what the technology offers.
GeForce4 Titanium (Ti) is the high-end chip in the new architecture. The graphics processing unit offers a variety of new features including Lightspeed Memory Architecture II (LMA II), Accuview Antialiasing, and the programmable nfiniteFX II Engine.
LMA II
The new memory architecture of the GeForce4 line sports four independent memory controllers and support for 128-bit Double Data Rate (DDR) memory. The architecture delivers double the effective memory bandwidth and up to three times the overall performance of GeForce3, according to NVIDIA. This will result in a major performance boost right off the bat for existing applications, according to an Apple spokesperson at last night’s rollout of the new technology.
The speed of the GeForce4 GPU exposes an interesting bottleneck, according to NVIDIA senior director of GPU business Tony Tamasi: today’s processors are evolving faster than memory performance. So with this design, NVIDIA has tuned whatever it can to move data through memory as quickly as possible. The GeForce4 line sports multiple memory caches on a single chip, what NVIDIA calls QuadCache: four individually dedicated and optimized memory caches that improve pipeline access through the chip.
4:1 lossless Z-compression technology helps reduce the amount of data that actually has to be pushed. The GeForce4 line also sports a new and improved generation of Z-occlusion culling technology, which spares the GeForce4 from having to render pixels that you can’t actually see. To demonstrate, Tamasi showed off an animation of bug-like creatures wandering over hilly terrain. With technology that doesn’t support Z-occlusion culling, the creatures hidden from the user’s perspective would still have to be rendered; with Z-occlusion culling, that’s no longer the case.
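In rough terms, occlusion culling amounts to a depth test performed before any of the expensive shading work happens. The C sketch below is our own minimal illustration of the idea, not NVIDIA’s hardware pipeline; the buffer dimensions and the shade_pixel routine are hypothetical stand-ins.

/* Minimal sketch of Z-occlusion culling: skip shading any fragment that is
   already hidden behind something closer to the camera. shade_pixel() and the
   buffer dimensions are hypothetical stand-ins, not NVIDIA's actual hardware. */
#include <float.h>

#define WIDTH  640
#define HEIGHT 480

static float zbuffer[HEIGHT][WIDTH];   /* closest depth seen so far at each pixel */

static void shade_pixel(int x, int y)  /* stands in for costly texturing and lighting */
{
    (void)x;
    (void)y;
}

void clear_zbuffer(void)
{
    for (int y = 0; y < HEIGHT; y++)
        for (int x = 0; x < WIDTH; x++)
            zbuffer[y][x] = FLT_MAX;   /* nothing drawn yet, so every pixel is "far" */
}

void draw_fragment(int x, int y, float depth)
{
    if (depth >= zbuffer[y][x])
        return;                        /* occluded: culled before any shading happens */
    zbuffer[y][x] = depth;
    shade_pixel(x, y);                 /* only visible fragments pay the shading cost */
}

The savings come from that early return: fragments hidden behind nearer geometry never reach the costly shading step at all.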
According to Tamasi, LMA II delivers an architecture about 50 percent more efficient than GeForce3’s and about 100 percent more efficient than GeForce2’s.
Accuview Antialiasing
Antialiasing has long been available in image editing programs. The technique reduces the appearance of “jaggies”: the edges of sharply defined objects are smoothed so they look more natural. 3D graphics cards have offered antialiasing for a while as well, to improve the look of rendered objects, but usually only at a performance penalty.
Not so with GeForce4, according to NVIDIA. The Accuview Antialiasing engine implements high-resolution multisampling techniques, including 2x, 4x, Quincunx and a new 4XS mode. The company says GeForce4 offers two to three times the performance of other high-end graphics products, which makes it practical to keep antialiasing on as the preferred display mode.
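The basic idea behind any of these modes is easy to illustrate: take several samples for each final pixel and average them, so hard edges blend into their surroundings. The C sketch below shows plain 2x2 supersampling over a hypothetical grayscale buffer; Accuview’s actual multisample patterns are considerably more sophisticated.

/* Sketch of antialiasing by 2x2 supersampling: average four high-resolution
   samples into each final pixel so jagged edges blend with their neighbors.
   The grayscale buffers are hypothetical; GeForce4's Accuview modes use
   smarter sample patterns than this simple box filter. */
#include <stddef.h>

void downsample_2x2(const unsigned char *src, size_t src_w, size_t src_h,
                    unsigned char *dst)
{
    size_t dst_w = src_w / 2;
    size_t dst_h = src_h / 2;

    for (size_t y = 0; y < dst_h; y++) {
        for (size_t x = 0; x < dst_w; x++) {
            unsigned sum = src[(2 * y) * src_w + (2 * x)]
                         + src[(2 * y) * src_w + (2 * x + 1)]
                         + src[(2 * y + 1) * src_w + (2 * x)]
                         + src[(2 * y + 1) * src_w + (2 * x + 1)];
            dst[y * dst_w + x] = (unsigned char)(sum / 4);   /* box-filter average */
        }
    }
}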
Best of all, both the LMA II and Accuview technologies are standard issue for the GeForce4 MX and Ti products, so new Power Mac G4 users will gain some benefits regardless of which NVIDIA-based graphics card they choose for their new system.
nfiniteFX II Engine
For users looking for the greatest realism in their games and 3D software, the GeForce4 Ti specifically offers a new vertex and pixel shading technology called the nfiniteFX II Engine. The new engine delivers up to three times the performance of the GeForce3 by using dual vertex shaders (GeForce3 had only a single vertex shader).
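A vertex shader is essentially a small program the GPU runs on every vertex of a model. The C sketch below is our own illustration of the kind of per-vertex work involved; the types and data are hypothetical, and real GeForce4 vertex programs are written for the GPU, not in C. A second vertex unit simply lets the chip do this sort of work on two vertices at once.

/* Conceptual sketch of per-vertex shading work: transform a position by a
   4x4 matrix and compute a simple diffuse lighting term. Types and data are
   hypothetical; real vertex shaders run as GPU programs, not as C code. */
typedef struct { float x, y, z, w; } Vec4;

static float dot3(Vec4 a, Vec4 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

/* m is a 4x4 row-major matrix; returns m * v. */
static Vec4 transform(const float m[16], Vec4 v)
{
    Vec4 r;
    r.x = m[0]  * v.x + m[1]  * v.y + m[2]  * v.z + m[3]  * v.w;
    r.y = m[4]  * v.x + m[5]  * v.y + m[6]  * v.z + m[7]  * v.w;
    r.z = m[8]  * v.x + m[9]  * v.y + m[10] * v.z + m[11] * v.w;
    r.w = m[12] * v.x + m[13] * v.y + m[14] * v.z + m[15] * v.w;
    return r;
}

/* One vertex's worth of work: transformed position plus a diffuse lighting term. */
void shade_vertex(const float mvp[16], Vec4 position, Vec4 normal,
                  Vec4 light_dir, Vec4 *out_position, float *out_diffuse)
{
    float d;

    *out_position = transform(mvp, position);
    d = dot3(normal, light_dir);
    *out_diffuse = d > 0.0f ? d : 0.0f;   /* clamp lighting that faces away to zero */
}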
The GeForce4 Ti also features new advanced pixel shaders with a new form of Z-correct bump mapping. This enables models to be created with surface detail that hasn’t been seen in the consumer graphics card market before.
NVIDIA emphasized the technology in a series of demonstrations that showed off lifelike surface detail in living objects, water, skin and fabric. Fluid dynamics, fur, realistic skin texture and motion, and other painstaking details were all being calculated and rendered on the card itself, freeing up the CPU to perform AI subroutines and other work that helps bring realism to games.
Portability
NVIDIA also introduced the GeForce4 Go line, the latest implementation of the company’s mobile graphics technology. NVIDIA claims that the GeForce4 Go is the world’s fastest mobile GPU. The chip emphasizes power management features like voltage and frequency scaling to help maximize battery life — all wrapped up in a technology NVIDIA calls PowerMizer.
Like the other GeForce4 products, the GeForce4 Go features the Video Processing Engine (VPE) for DVD playback and MPEG2 decoding. It also sports the Lightspeed Memory Architecture II. NVIDIA also claims that the GeForce4 Go’s support of Accuview Antialiasing helps improve display quality on notebook LCD panels. 3D rendering is handled by the NVIDIA Shading Rasterizer (NSR), which can process up to 11 texture and lighting operations per clock and supports single-pass multitexturing and cubic environment mapping.
The technology is derived from the same core architecture used in the GeForce4 desktop chips, so the GeForce4 Go could ultimately be adopted by Apple for future laptop designs. Presently, Apple uses graphics technology developed by ATI in its PowerBook and iBook lines.
Heavy iron
The GeForce4 Ti has been crafted using 0.15-micron process technology, and the chip contains more than 63 million transistors. It sports 650MHz DDR memory, the fastest implementation on the planet, according to Tamasi, along with a 300MHz core clock, and it can calculate 1 trillion operations per second.
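For a little back-of-the-envelope context (our arithmetic, not a figure NVIDIA quoted at the event): a 128-bit memory interface moves 16 bytes per transfer, and 16 bytes times 650 million transfers per second works out to roughly 10.4GB of peak memory bandwidth every second.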
To help put that in some sort of perspective, Tamasi offered some whimsical performance benchmarks. The GeForce4 Ti can render about 100 dinosaurs with the same level of complexity as those seen in the motion picture Jurassic Park at 30 frames per second, he claimed, and eight of the new chips have more geometry power than all of the 3dfx Voodoo1-based graphics cards ever shipped. What’s more, a single GeForce4 chip sports more floating-point calculation ability than existed on the entire planet in 1985.
For more about the NVIDIA GeForce4 rollout, please see our related story.