As was the case with past highly anticipated Apple products (such as the iPhone and the Apple Watch), reports and rumors circling an Apple AR/VR headset have intensified, from sources both reputable and otherwise. There have been renders and mockups, specs and chips, and capability theories aplenty. At this point, it would be more surprising if Apple didn’t announce a mixed reality headset next year.
But setting aside all the information garnered from the likes of supply chains and analysts, it’s always worth taking a step back and looking at what Apple is doing right now that might provide clues as to where it’s headed. The company is meticulous about laying groundwork for new products in its existing line-up, and if it’s rolling out something as major as a brand new platform sometime in the next six to nine months, you can bet that today’s Apple devices hold some indications.
Communication is especially intriguing. We’ve seen visions of mixed reality communication from companies like Meta and Microsoft, most of which rely on the idea of some sort of virtual avatar. That makes perfect sense, because when you’re wearing a headset, you can’t exactly use a live video feed of you wearing a headset inside a virtual meeting space. That, I think we can all agree, would be weird.
Apple, of course, has dabbled in the virtual avatar realm with Animoji and Memoji, but the current implementations of those technologies aren’t likely to work with a headset for a simple reason: they rely on the TrueDepth camera systems of recent iPhones and iPads. A headset is something you wear, so any cameras it has probably won’t be pointed at you, much less able to capture your facial expressions.
That said, it seems almost certain that Apple will still leverage the look and feel of its Memoji when it comes to providing avatars, probably animating them with input from other movement-tracking sensors, such as accelerometers. And if, as has been speculated, the headset takes advantage of advanced hand tracking, it makes sense that Apple recently added torsos and arms to Memoji in iOS 15.
The SharePlay’s the thing
When Apple launches a major new technology across all of its platforms, it’s worth sitting up and taking notice. One of this year’s ambitious additions to Apple’s devices was SharePlay, which debuted this fall on iOS but is still in the process of rolling out to the Mac. It may not be a feature in wide use right now, but that may be partly because its killer opportunity hasn’t come along yet.
Remember that one of the big areas Apple is focusing on for mixed reality is “media consumption” and SharePlay is a framework specifically designed for media consumption…in conjunction with communication.
Because while it’s easy enough to imagine watching a movie on a giant screen in virtual reality, it’s hardly a social experience. You and your family probably aren’t all going to sit on the couch in your own individual headsets, watching the same movie (though Apple would surely love it if you’d buy a headset for every family member). Nor does that really seem in line with the kind of personal experiences that Apple has traditionally pushed in its marketing.
That said, meeting up with far-flung friends in a virtual space where you can share a movie, TV show, or album as though you were in the same place? Now that seems like something Apple can tell a story about. And the underpinnings of SharePlay are perfectly positioned to enable that experience—just replace the FaceTime call with your Memoji avatars.
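For a sense of how little glue that scenario actually requires, here’s a minimal sketch of a SharePlay activity using Apple’s GroupActivities framework. The activity name, identifier, and titles are hypothetical; a shipping video app would also attach the shared session to an AVPlayer’s playbackCoordinator so everyone’s playback stays in sync.

```swift
import GroupActivities

// A hypothetical shared-viewing activity. Conforming to GroupActivity is
// the core of what SharePlay needs to offer the experience to everyone
// on an active FaceTime call.
struct MovieNightActivity: GroupActivity {
    static let activityIdentifier = "com.example.movie-night" // hypothetical

    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Movie Night"
        meta.type = .watchTogether // tells the system this is shared video
        return meta
    }
}

// Kick off the activity; if a FaceTime call is active, participants
// are invited to join the shared session.
func startMovieNight() async throws {
    let activity = MovieNightActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        _ = try await activity.activate()
    case .activationDisabled, .cancelled:
        break // no call, or the user declined: just play locally
    @unknown default:
        break
    }
}
```

Notably, nothing in that code cares whether the participants are looking at phone screens or wearing headsets, which is what makes the framework such plausible groundwork.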
Apple has spent a lot of time pushing its implementation of spatial audio over the last year, for both video and music. In iOS 15, you can not only hear audio specifically encoded with a surround sound technology like Dolby Atmos, but also take advantage of an option that has the device convert stereo audio into spatial audio and respond to your head movements.
This leads to a weird experience where you could be walking along listening to music from the iPhone in your pocket, turn your head to one side or the other, and have the music pan as though it were emanating from a source in front of you. As it stands, this doesn’t make a lot of sense.
But when you think about a shared media experience like, say, watching a virtual concert, it starts to come together. Spatial audio could be a key component of a mixed reality device, mimicking the way we actually process audio to provide an experience that’s closer to the real world. That way, when someone in a virtual space is talking on your left, you’ll actually hear them on your left. It adds a sense of presence to audio that makes far more sense in a virtual reality context than in a real-world one, and the more senses these experiences can engage, the more immersive the final product.
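The head-tracking behavior described above comes down to simple geometry. As a back-of-envelope sketch (not Apple’s actual renderer, which does full HRTF processing), you can anchor a sound source at a fixed direction in the room and re-derive the stereo balance from the listener’s current head yaw:

```swift
import Foundation

// Angles in degrees; 0 = straight ahead, positive = to the listener's right.

// Where the source appears to be once the head has turned.
func relativeAzimuth(sourceAzimuth: Double, headYaw: Double) -> Double {
    var angle = (sourceAzimuth - headYaw).truncatingRemainder(dividingBy: 360)
    if angle > 180 { angle -= 360 }
    if angle < -180 { angle += 360 }
    return angle
}

// Map the relative angle to a simple stereo pan in [-1, 1]
// (-1 = hard left, 0 = center, 1 = hard right).
func stereoPan(relativeAzimuth: Double) -> Double {
    sin(relativeAzimuth * .pi / 180)
}

// Source fixed dead ahead; the listener turns 90 degrees to the right,
// so the sound should now sit hard on the left.
let angle = relativeAzimuth(sourceAzimuth: 0, headYaw: 90) // -90
let pan = stereoPan(relativeAzimuth: angle)                // -1.0
```

That’s exactly why music from your pocket pans as you turn your head on iOS 15 today, and why the same math feels natural rather than weird once the "room" is a virtual one.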
Dan has been writing about all things Apple since 2006, when he first started contributing to the MacUser blog. He's a prolific podcaster and the author of the Galactic Cold War series, including his latest, The Nova Incident.