I’ve been loving my AirPods Pro since I got them a few weeks ago. I was originally an AirPods skeptic—I never liked the way EarPods fit or sounded—but their wire-free convenience made me a believer. The AirPods Pro took it further with the introduction of noise-cancelling technology that allows me to use them when I’m vacuuming, mowing the lawn, or flying, and their clever Transparency mode lets me stay in tune with my surroundings when I need to.
But this is just the beginning. I agree with my colleague Dan Moren that the features of AirPods Pro hint at Apple’s future in augmented reality tech. As Apple increases the amount of processing power that it can fit into AirPods, Transparency mode is a huge hint of the audio-processing possibilities to come.
Software that works like magic
I produce a lot of podcasts, many of which feature speakers who are recording in challenging audio conditions. They’ve often got bad (or no) microphones, they’re in echoey rooms, and frequently there’s a heater, air conditioner, or fan running in the background. (Nothing marks the passing of seasons for a podcast editor more than hearing the recordings move from the hum of AC to the buzz of heating!)
What I’ve learned in the last few years, as I’ve become more savvy about audio software, is that for a few hundred dollars you can buy software that will process audio in ways that once seemed impossible. I own plug-ins that automatically remove electrical hum and broadband hiss from the background of an audio file, and do it quickly. That person who recorded next to a blasting air conditioner in the middle of summer? My software can make it so you wouldn’t even know the AC unit was there.
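To give a flavor of how simple the core of one of these tricks can be, here’s a minimal sketch of hum removal using a notch filter centered on the mains frequency. This is illustrative only; commercial plug-ins are far more sophisticated (they track the hum’s harmonics and adapt over time), and every name and parameter here is mine, not any particular product’s.

```python
# A minimal hum-removal sketch: a notch filter at the mains frequency.
# Illustrative only; real plug-ins also track harmonics and adapt.
import numpy as np
from scipy.signal import iirnotch, filtfilt

SAMPLE_RATE = 48_000   # Hz; a typical recording rate
HUM_FREQ = 60.0        # North American mains hum (50 Hz elsewhere)
QUALITY = 30.0         # higher Q = narrower notch, so speech is untouched

# Fake one second of "speech" (a 220 Hz tone) buried under hum.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
speech = 0.5 * np.sin(2 * np.pi * 220 * t)
recording = speech + 0.3 * np.sin(2 * np.pi * HUM_FREQ * t)

# Design the notch, then filter forward and backward (zero phase shift).
b, a = iirnotch(HUM_FREQ, QUALITY, fs=SAMPLE_RATE)
cleaned = filtfilt(b, a, recording)

print(f"residual hum energy before: {np.mean((recording - speech) ** 2):.4f}")
print(f"residual hum energy after:  {np.mean((cleaned - speech) ** 2):.4f}")
```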
Then there’s the de-echoing software, which can take someone who’s recording in a room full of hard surfaces and sounds like they’re at the bottom of a well, and clean up the audio to the point that they sound almost as good as someone in a sound booth.
Blowing wind? There’s a plug-in for that. Extraneous breathing? There’s a plug-in to wipe that out. The list goes on. Professional audio software is really good. Way better than I ever expected.
Which brings me back to AirPods Pro.
Process my world
Right now, AirPods Pro has three audio modes. In the first, there’s no processing at all—the outside world is filtered out only because you’ve stuck little earbuds in your ear holes, which naturally block some of the sound. In the second, each AirPod uses two microphones to monitor the noise in your surrounding environment and then generates an inverted waveform to cancel out that noise—that’s how active noise cancellation works.

The black patch on the AirPods Pro is a microphone used for noise cancellation.
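The principle behind that second mode is surprisingly simple to state in code, even if the engineering is anything but. Here’s a toy sketch of my own (real ANC has to model the acoustic path from microphone to eardrum and react within microseconds):

```python
# Toy sketch of active noise cancellation: add an inverted copy of
# the measured noise so the two waveforms sum to (near) silence.
# Real ANC must also model the acoustic path from mic to eardrum.
import numpy as np

SAMPLE_RATE = 48_000
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE

noise = 0.4 * np.sin(2 * np.pi * 120 * t)    # what the outer mic hears
anti_noise = -noise                          # the inverted waveform
music = 0.6 * np.sin(2 * np.pi * 440 * t)    # what you asked to hear

at_eardrum = music + noise + anti_noise      # the noise cancels itself

print(np.allclose(at_eardrum, music))        # True: only the music remains
```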
The third mode, Transparency, is the most interesting. It relays sound from an external microphone and layers it over whatever you’re listening to, so you are artificially hearing the outside world. It’s a dramatically different sound and I’ve heard a lot of people say they appreciate being able to listen to audio while also having the sounds of the real world accessible.
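Conceptually, Transparency is a mix: the microphone feed layered under whatever you’re playing. A minimal sketch, with a gain value I’ve invented since Apple’s actual tuning isn’t public:

```python
# Sketch of Transparency mode as a simple mix: the outside-mic feed
# layered under your audio. The gain is invented; Apple's tuning isn't public.
import numpy as np

def transparency_mix(playback: np.ndarray,
                     outside_mic: np.ndarray,
                     mic_gain: float = 0.8) -> np.ndarray:
    """Blend outside sound into the playback stream."""
    mixed = playback + mic_gain * outside_mic
    return np.clip(mixed, -1.0, 1.0)  # keep samples in the valid range
```

The telling part is what isn’t in that sketch: any cleanup of the microphone feed itself.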
What strikes me about Transparency, though, is that Apple seems to be adjusting the sound from the outside world very little, if at all. When I use Transparency, I don’t just hear people talking or the sound of a car coming down the street—I hear a background hum from traffic on a nearby freeway.
Now, imagine a future version of AirPods Pro with a little more processing power. In addition to Transparency mode, perhaps there’s a Smart Transparency mode that takes a cue from all the audio-processing software out there to do things like remove unchanging background noise and even room echo, so that what you hear is clearer than it would be if you heard it unfiltered. The algorithms exist today, measuring the reflectivity of the room on the fly and cancelling echoes; it’s just a matter of building hardware powerful enough to process all that data in real time.
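The “remove unchanging background noise” half of that is well-trodden ground. Spectral subtraction is the textbook technique: estimate the steady noise’s fingerprint from a noise-only stretch, then subtract it from every frame. A rough sketch, greatly simplified and certainly not Apple’s algorithm:

```python
# Rough sketch of spectral subtraction, the textbook way to strip a
# steady background noise (AC drone, fan hiss). Greatly simplified;
# not Apple's algorithm, just the classic technique it might build on.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtract(audio: np.ndarray,
                      noise_sample: np.ndarray,
                      fs: int = 48_000) -> np.ndarray:
    # Estimate the noise's average magnitude per frequency bin
    # from a stretch of noise-only audio.
    _, _, noise_spec = stft(noise_sample, fs=fs)
    noise_mag = np.abs(noise_spec).mean(axis=1, keepdims=True)

    # Subtract that estimate from every frame of the real signal,
    # never letting a magnitude go negative.
    _, _, spec = stft(audio, fs=fs)
    mag = np.maximum(np.abs(spec) - noise_mag, 0.0)
    phase = np.exp(1j * np.angle(spec))

    _, cleaned = istft(mag * phase, fs=fs)
    return cleaned
```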
I recently read a story about the quest for “smart” hearing aids that suggested algorithms can already do a pretty good job of filtering out background conversations, and might even be able to emphasize the voices of specific speakers based on who a person is looking at. The challenge, once again, is processing power—and it’s hard to imagine that Apple won’t keep increasing the processing power of the AirPods Pro.
Accessibility for all
AirPods aren’t hearing aids. And I’m not entirely sure Apple wants to enter the hearing-aid market, though given the company’s constant discussion of the importance of medical and health initiatives and device accessibility, I wouldn’t put it past them. With the advent of hearing-aid deregulation in the United States, it’s not impossible that Apple could apply the lessons it’s learned about audio processing and integration with other Apple devices to improve hearing for people with mild to moderate hearing loss.
But let’s leave formal hearing aids aside for a moment. One thing I’ve learned in the last few years is that accessibility features almost always have unexpected benefits. In that spirit, I think Apple has an opportunity to augment the hearing of AirPods users, whether they consider themselves hearing impaired or not. There are probably a lot of us who would welcome a dialogue-enhancement mode for noisy parties that would try to filter out everything except the human voices in the foreground. I’d love a filter that would let me hear public-address announcements but eliminate background chatter.
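As a crude illustration of where such a dialogue mode might start: most speech energy sits in a fairly narrow band, so even a dumb band-pass filter nudges a mix in that direction. Real voice isolation would rely on trained models rather than anything this simple; the sketch below is entirely my own.

```python
# Crude sketch of dialogue emphasis: keep the band where most speech
# energy lives and attenuate the rest. Real systems would use trained
# voice-isolation models, not a fixed band-pass filter.
import numpy as np
from scipy.signal import butter, sosfilt

def emphasize_voices(audio: np.ndarray, fs: int = 48_000) -> np.ndarray:
    # Speech intelligibility is carried mostly between roughly 300 Hz
    # and 3.4 kHz (the old telephone band), so pass that and cut the rest.
    sos = butter(4, [300, 3400], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, audio)
```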
This isn’t easy stuff, and it requires a lot of technology to achieve it, but the AirPods Pro make me feel that Apple is already moving down this path, and fast. Right now I view my AirPods Pro as a great set of noise-cancelling earbuds to use while listening to music or podcasts, but they’re also augmented-reality devices for my ears. In some ways, they’re Apple’s first dedicated AR product. Transparency mode’s goals are modest, but its future potential comes through loud and clear.