Everyone wants to talk about AI. Most of them don’t really know what artificial intelligence is, but they still want to talk about it. Who’s got it and who doesn’t. What industries it’s ripe to disrupt. And the one company that’s not in the middle of that conversation is Apple.
Viral images generated by Stable Diffusion and pathological chatbots from OpenAI, Microsoft, and Google are the story of the day. Apple, meanwhile, has nothing. Or perhaps, considering the current state of Siri, less than nothing.
Apple’s nowhere when it comes to AI. That’s the narrative. The thing is, it’s not true. At least, not yet.
Presenting the Neural Engine
It’s hard to portray Apple as a company that’s been asleep at the AI switch. It built its first machine-learning-focused custom silicon, the Neural Engine, into the A11 chip that shipped in iPhones five years ago, and it has continued to upgrade it almost every year since.
If you hadn’t noticed, Apple is pretty serious when it comes to its chip designs. And for the last five years (and who knows how many years of development before that), Apple has believed so much in empowering its devices to run machine-learning algorithms that it has designed and included processor cores dedicated specifically to the task. Apple clearly understood the power of this technology back before most of us had even heard of it.
And clearly, Apple’s been paying attention to the trends that have gotten the world’s attention. Last December, Apple released its own Apple Silicon optimizations for the Stable Diffusion image-generation engine, pointing out ways its hardware can be better used to take advantage of the technology everyone’s talking about.
Tim Cook has said that Apple will “continue weaving [AI] in our products on a very thoughtful basis,” and he’s absolutely right. Apple has been dropping purpose-built machine-learning algorithms into its software since at least 2016, when it added object, face, and scene identification to the Photos app. Since then, in addition to expanding its Photos algorithms, Apple has added machine-learning techniques to biometric security identification, the ECG feature on the Apple Watch, fall and crash detection on both the iPhone and Apple Watch, and many aspects of creating the perfect iPhone camera image.
Unfortunately for Cook and Apple, none of those examples have ended up being the ones people are talking about lately. But there’s no denying that Apple has been, carefully and tactically, weaving AI into its products for years.
Your chatbot lied to me
But thoughtful weaving is probably not going to set the world on fire. Chatbots and image algorithms, on the other hand, are flashy and tend to blow the minds of people who can’t believe computers are capable of such things. Apple is like a magician, one who prefers to keep their methodology a secret.
Apple’s care and conservatism when releasing products have both helped and hurt it here. Maybe there’s no such thing as bad publicity, but high-profile chatbot launches are immediately followed by days or weeks of coverage of bad chatbot behavior. The image algorithms immediately came under fire over questions of copyright infringement.
Can you imagine the firestorm that would’ve happened if Apple launched a “beta” version of Siri powered by an AI chatbot that couldn’t get facts right, tried to get a reporter to leave his wife for it, and accused a professor of sexual harassment? It would be Sirigate! Apple would be rushing out crisis-management PR and promising to fix the problem.
This is why Apple is careful and thoughtful. It holds itself to a higher standard, and it knows the world does, too.
So now what?
All that is well and good, but the fact remains that Siri isn’t very good. And when you talk to an AI chatbot, you start to get the sense that we’re very close to a world where semi-intelligent agents will be able to carry on contextual conversations and perform basic tasks.
Just the other day, a friend needed some help parsing a giant data file. I realized I could probably write a quick Python script to get him the output he wanted. But rather than do that, I asked ChatGPT to do it. It came up with a working script in less than a minute.
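The column doesn’t say what the data file contained, but the kind of quick, disposable script a chatbot produces in under a minute looks something like this. As a purely hypothetical illustration, assume a comma-separated file where you want to total one numeric column grouped by another:

```python
import csv
from collections import defaultdict

def summarize(path, group_col, value_col):
    """Total a numeric column grouped by another column of a CSV file.

    Hypothetical example: summarize("sales.csv", "region", "sales")
    returns each region's total sales.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row[group_col]] += float(row[value_col])
    return dict(totals)
```

Five minutes of work for someone who knows Python, and under a minute for a chatbot, which is exactly the point.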
The way I see it, Apple’s greatest risk in AI is not failing to capture the attention of the world. It’s the company’s commitment to being careful and thoughtful, because if taken to an extreme, that commitment might cause Apple to turn its back on promising areas of research. Sure, a chatbot mishap would be very embarrassing for Apple, but so would sticking with Siri while the world fills with far more capable intelligent assistants.
Or, to put it another way: Apple’s probably announcing a mixed-reality headset next month. By most accounts, the device will be very expensive and ship in extremely low quantities, but it’s a risk Apple’s willing to take because it’s playing the long game. Shouldn’t Apple be willing to do the same and take more risks with AI-powered technology if the rewards are potentially so great?