Parallel processing is one of the hot topics in computing these days, so I’m always interested when a major tools vendor introduces a new technology designed to make parallelization and concurrency simpler for the lay programmer. The latest example came this week, when Apple finalized the APIs for Grand Central to coincide with a new development build of its forthcoming Snow Leopard edition of Mac OS X.
Grand Central is a new set of technologies that bakes concurrency into the heart of Mac OS X. It’s designed to make it easier for programmers to divide their applications into separate, atomic processing tasks, each of which can then be handed off to the OS for efficient distribution across multiple CPU cores. With Grand Central, the OS itself handles much of the low-level grunt work of supervising and routing independent tasks, freeing programmers to concentrate on user-facing issues.
This kind of assistive technology will be essential if developers hope to take full advantage of the next generation of high-performance CPUs. Cores have replaced clock speed as the new metric for processor power. If you think it’s challenging to write code that runs efficiently on today’s four- and eight-core systems, just wait until the average desktop PC contains 16 or 32 cores, or more.
Still, I can’t help but wonder whether the industry as a whole might be running in the wrong direction. Despite years of research into grid processing and HPC (high-performance computing), efficient parallelization remains a tough nut to crack. The systems that do it well are mainly purpose-built environments that are poorly suited to the needs of your average PC user. So why not just have those systems do what they do best and leave our PCs to handle user experience and interactivity? In other words, why are we trying to re-create Google-style parallelism on our desktops when we could just have Google handle the heavy lifting for us?
Apple’s concurrency tools won’t change the world
Disclaimer: I’m not a Premier or Select member of Apple Developer Connection, so I don’t have direct access to developer documentation on Grand Central. Even if I did, I would be bound by a nondisclosure agreement not to talk about unreleased Apple technologies. Everything I know about Grand Central comes from leaked reports and other online sources.
From what I understand, however, Grand Central defines a number of APIs that allow programmers to define tasks, “dispatch” them to the OS, and monitor interprocess communication via event queues. Furthermore, to facilitate Grand Central, Apple has extended its Objective-C language to include support for closures, which makes it easier to implement control structures that support concurrency. It’s a one-two approach that builds concurrency into the OS internals and the accompanying language and developer tools alike.
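To make that concrete, here is a rough C sketch, using the block (closure) syntax those reports describe, of what handing a couple of tasks off to the OS might look like. The specific names (dispatch_group_async, dispatch_get_global_queue, and so on) are my own guesses at the eventual API, offered purely for illustration; nothing here comes from Apple's documentation.

#include <dispatch/dispatch.h>
#include <stdio.h>

int main(void) {
    /* A group lets us find out when a set of dispatched tasks has finished. */
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t workers =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    /* Hand two independent tasks (written as blocks, i.e. closures) to the
       OS; it decides which cores run them and when. */
    dispatch_group_async(group, workers, ^{
        printf("task A running on a worker thread\n");
    });
    dispatch_group_async(group, workers, ^{
        printf("task B running on a worker thread\n");
    });

    /* Wait until both tasks have completed, then clean up. */
    dispatch_group_wait(group, DISPATCH_TIME_FOREVER);
    dispatch_release(group);
    return 0;
}

The point is less the particular calls than the shape of the model: the application describes small, self-contained units of work, and the OS owns the threads that run them.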
That’s all well and good, but it still implies a sea change away from programming practices that have served the developer community well for the last 20 years or more. Grand Central will make it easier to write parallelizable software, but it’s not any kind of fairy dust that will allow existing software to take advantage of multiple cores when it couldn’t before. Developers will still need to change the way they think to write good concurrent software, including picking up practices from the world of functional programming (such as closures). As the saying goes, you can lead an old dog to water, but you can’t make it ride a horse.
Apple isn’t the only company working on developer tools to support concurrency. Also this week, Microsoft officially debuted Axum, a language for developing parallel applications on the .NET platform. And Sun Microsystems is developing Fortress, a language built from the ground up to support concurrency. But the problem with such solutions is that they’re built specifically with concurrency in mind, not as general-purpose programming languages. Merely using them implies that the developer is making concurrency a top priority, and sometimes that just isn’t the case.
Concurrent code: Is the tail wagging the dog?
In the ivory tower of enterprise-grade server applications, scalability and concurrency are accepted best practices. Not so on the desktop, however, where powerful, high-end software must share the limelight with a motley assortment of utilities, gadgets, and amusements. It makes sense for a database server to be highly concurrent, but what about a Facebook client?
The simple fact is that most client-facing applications spend most of their time idle, waiting for user input. There’s just no incentive to break their functions down into highly parallelizable units.
Furthermore, many commonplace tasks resist concurrency. Multimedia encoding, for example, is one of the more processor-intensive activities a typical user engages in, yet many popular MP3 encoders are linear, single-threaded processes that can’t take advantage of multiple cores. The best way to speed up a batch of encodes isn’t to parallelize the encoder; it’s to launch two separate encoders at the same time and let each one work on a different file on its own core.
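That trick requires no new technology at all. Here’s a bare-bones C sketch that spawns two single-threaded encoder processes side by side and waits for both; the encoder command ("lame", standing in for whatever encoder you use) and the file names are placeholders I’ve made up for illustration.

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Spawn one single-threaded encoder process and return its pid. */
static pid_t encode(const char *in, const char *out) {
    pid_t pid = fork();
    if (pid == 0) {
        /* Child process: replace ourselves with the encoder. */
        execlp("lame", "lame", in, out, (char *)NULL);
        perror("execlp");  /* only reached if the encoder couldn't start */
        _exit(1);
    }
    return pid;  /* parent gets the child's pid (or -1 on error) */
}

int main(void) {
    /* Two independent encodes; the OS is free to put each on its own core. */
    pid_t a = encode("track01.wav", "track01.mp3");
    pid_t b = encode("track02.wav", "track02.mp3");

    int status;
    waitpid(a, &status, 0);
    waitpid(b, &status, 0);
    return 0;
}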
That’s why a solution like Apple’s Grand Central is probably the best choice for a client-facing system like Mac OS X. Putting the OS in charge of concurrency allows single-threaded applications to coexist peacefully with more processor-intensive, highly parallel ones. But as long as developers need to learn exotic new concepts to master concurrency, I expect single-threaded applications to vastly outnumber concurrent ones.
Where will the highly concurrent, multithreaded applications be found? On the Web, of course—where programmers are free to write purpose-built concurrent applications that don’t need to worry about being interrupted by a screen saver. Nor do they need to coexist on the same hardware with other developers’ Web-based apps.
Adding cores and trumpeting greater CPU power through concurrency is great marketing for hardware vendors like Intel—and Apple—but the truth is that today’s high-end PCs have long since exceeded the performance needs of casual users. It’s little wonder that netbooks are the hot hardware trend. Forget multiple cores; customer demand is even driving clock speeds downward. Complex processing is moving outward, into the cloud.
That’s why, when I hear that every developer will soon need to retrain to code for concurrency, I can’t help but be skeptical. From an academic standpoint, tools such as Axum, Fortress, and Grand Central are fascinating. But from the standpoint of the everyday developer, all this hubbub about concurrency may ultimately prove to be nothing more than a tempest in a teapot.