Apple’s strategy for an embedded music architecture in Mac OS X is starting to emerge in the wake of the Worldwide Developer Conference (WWDC) held in May.
“OS X Audio Features Documentation,” a document distributed at WWDC, outlines Apple’s two key goals for audio capabilities in the new OS: to deliver a superior listening experience for Mac users, and to provide developers with an OS-level “Core Audio” system that will free them from having to rely on third-party audio and MIDI protocols in their applications.
This Core Audio system presents a multi-tiered set of application programming interface (API) services that range from low-level access to particular audio devices to sequencing and software synthesis.
For the technically minded readers out there, note that these services are delivered as frameworks: bundles that package a dynamically shared library together with the resources the library requires, including header files. The frameworks are implemented in C and C++ and present a C-based function API; a Java API to the same audio system services is also available. What it boils down to is that developers can choose the language that best suits their needs.
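To give a concrete feel for what a framework-based C API means in practice, here is a minimal sketch of an application asking the audio system for its default output device. The header, function, and constant names are assumptions modeled on the Core Audio interfaces Apple has described, not final shipping code.

/* Hypothetical sketch: querying the default output device through the
   Core Audio framework's C API. Identifiers are assumptions, not a
   guarantee of the final headers. Build with: cc demo.c -framework CoreAudio */
#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void)
{
    AudioDeviceID device = 0;
    UInt32 size = sizeof(device);

    /* Ask the hardware layer which device the system routes output to. */
    OSStatus err = AudioHardwareGetProperty(kAudioHardwarePropertyDefaultOutputDevice,
                                            &size, &device);
    if (err != kAudioHardwareNoError) {
        fprintf(stderr, "Could not get default output device (error %d)\n", (int)err);
        return 1;
    }

    printf("Default output device ID: %u\n", (unsigned int)device);
    return 0;
}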
Apple says that the key features of the Core Audio architecture include a flexible audio format; multi-channel audio I/O (input/output); support for both PCM (Pulse Code Modulation, the standard method of representing analog audio digitally) and non-PCM formats, with 32-bit floating point (Float32) as the generic format; fully specifiable sample rates; the ability for multiple applications to use an audio device at once; application-determined latency; and timing information available throughout the system.
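To make the “flexible format” idea concrete, the sketch below fills in a stream-description structure for two-channel Float32 audio at a caller-chosen sample rate. The structure and constant names are assumptions based on Apple’s Core Audio type definitions and may differ in the final headers.

/* Hypothetical sketch: describing a 2-channel Float32 stream at an
   arbitrary sample rate. Field and constant names are assumptions based
   on Apple's Core Audio type definitions. */
#include <CoreAudio/CoreAudioTypes.h>

static AudioStreamBasicDescription make_float32_stereo(Float64 sampleRate)
{
    AudioStreamBasicDescription fmt;

    fmt.mSampleRate       = sampleRate;               /* e.g. 44100.0 or 96000.0      */
    fmt.mFormatID         = kAudioFormatLinearPCM;    /* PCM; Float32 is the generic  */
    fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 2;                        /* stereo                       */
    fmt.mBitsPerChannel   = 32;                       /* 32-bit float samples         */
    fmt.mBytesPerFrame    = 2 * sizeof(Float32);      /* interleaved left/right       */
    fmt.mFramesPerPacket  = 1;                        /* linear PCM: one frame/packet */
    fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
    fmt.mReserved         = 0;

    return fmt;
}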
The new audio architecture will address two timing-related problems: one endemic to the process of recording, the other a by-product of Apple’s new “industrial strength” OS.
A significant challenge of music recording and production is ensuring that tracks being recorded can be synchronized with previously recorded tracks, and that the results can be written back to disk in correct time alignment. (The delay between reading the previous tracks from the hard disk and routing them to the musician is called latency.)
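The size of the buffer used to shuttle audio between disk, application, and hardware sets a floor on that latency, and the arithmetic is simple: latency equals buffer length divided by sample rate. A quick illustration (the figures are examples, not values quoted by Apple):

/* Illustrative arithmetic only: how buffer size translates into latency.
   The figures below are examples, not values quoted by Apple. */
#include <stdio.h>

int main(void)
{
    double sample_rate = 44100.0;                   /* samples per second, per channel */
    unsigned buffer_frames[] = { 128, 512, 2048 };

    for (int i = 0; i < 3; i++) {
        double latency_ms = (buffer_frames[i] / sample_rate) * 1000.0;
        /* e.g. a 512-frame buffer at 44.1 kHz adds roughly 11.6 ms of delay */
        printf("%4u-frame buffer -> %.1f ms\n", buffer_frames[i], latency_ms);
    }
    return 0;
}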
Prior to OS X, companies developed their own technologies, buffering schemes among them, to combat latency and to route audio tracks and MIDI channels to the artist in unison, enabling the musician to play “in sync.” Digidesign’s TDM (time-division multiplexing) and Steinberg’s VST (Virtual Studio Technology) are two examples of this type of audio technology. (The inclusion of folders bearing these acronyms in OS X’s libraries indicates that both will be accommodated in the new OS.)
The preemptive multitasking and virtual memory capabilities of OS X’s BSD core, under which the CPU must divide processing time among a variety of applications and processes, would have proven too great a challenge for third-party developers to work around on their own. (Under previous versions of the Mac OS, most sequencer developers have required that virtual memory be turned off.) Moving these services inside the OS frees developers to concentrate on enhancing their interfaces and adding post-production capabilities.
And MIDI (Musical Instrument Digital Interface) timing and device control in OS X will finally move beyond the venerable OMS (Open Music System) and Mark of the Unicorn’s FreeMIDI applications. The presence of Doug Wyatt (the developer of OMS) on Apple’s music team suggests that the company will build upon, and presumably surpass, the world standard in this area. Wyatt, incidentally, has long played music with Jef Raskin, one of the originators of the Macintosh concept. The document distributed at WWDC claims that OS X will handle MIDI with a latency of one millisecond, which is very nearly real-time performance.
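One plausible way to expose that kind of timing to developers is to let an application stamp each MIDI event with the host time at which it should be delivered and leave the actual scheduling to the OS. The sketch below shows that pattern; the framework, function, and type names are assumptions modeled on the MIDI services Apple has discussed, not confirmed API.

/* Hypothetical sketch: sending a timestamped note-on through OS-level
   MIDI services. The identifiers here are assumptions about the eventual
   API, not shipping documentation. */
#include <CoreMIDI/CoreMIDI.h>

static void send_note_on(MIDIPortRef outPort, MIDIEndpointRef dest)
{
    Byte noteOn[3] = { 0x90, 60, 100 };   /* channel 1, middle C, velocity 100 */

    Byte buffer[256];
    MIDIPacketList *list = (MIDIPacketList *)buffer;
    MIDIPacket *packet = MIDIPacketListInit(list);

    /* A timestamp of 0 means "send now"; a future host-time value would let
       the system schedule delivery with the claimed millisecond accuracy. */
    packet = MIDIPacketListAdd(list, sizeof(buffer), packet, 0,
                               sizeof(noteOn), noteOn);

    MIDISend(outPort, dest, list);
}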
So what’s next?
When Mac OS X’s audio component will officially be released remains unclear; an announcement at the upcoming Macworld Expo would certainly be a boon for musicians and studios.
But the software may be waiting for standardization among hardware developers. Yamaha announced at the International Music Products Association (NAMM) trade show in January that its mLAN technology (which provides a FireWire-based structure for inter-instrument communication) would be an integral part of OS X’s music architecture. Two non-mLAN-enabled, FireWire-based audio I/O systems (from MOTU and Metric Halo Labs) were released this spring, but negotiations between audio-device developers and Yamaha regarding mLAN are reportedly continuing.
Macworld Expo and NAMM will fill the days from July 17-22, with one day of overlap. Hopefully, by the close of business on Sunday, music-makers will know what lies ahead.