Apple’s out-of-the-blue announcement last week that it was adding a bunch of features to iOS involving child sexual abuse materials (CSAM) generated an entirely predictable reaction. Or, more accurately, reactions. Those on the law-enforcement side of the spectrum praised Apple for its work, and those on the civil-liberties side accused Apple of turning iPhones into surveillance devices.
It’s not surprising at all that Apple’s announcement would be met with scrutiny. If anything is surprising about this whole story, it’s that Apple doesn’t seem to have anticipated all the pushback its announcement received. The company had to post a Frequently Asked Questions document in response. If Q’s are being FA’d in the wake of your announcement, you probably botched your announcement.
Such an announcement deserves scrutiny. The problem for those seeking to drop their hot takes about this issue is that it’s extremely complicated and there are no easy answers. That doesn’t mean that Apple’s approach is fundamentally right or wrong, but it does mean that Apple has made some choices that are worth exploring and debating.
I’m not sure quite why Apple chose this moment to roll out this technology. Apple’s Head of Privacy implies that it’s because it was ready, but that’s a bit of a dodge—Apple has to choose what technologies to prioritize, and it prioritized this one. Apple may be anticipating legal requirements for it to scan for CSAM. It’s possible that Apple is working on increased iCloud security features that necessitate this approach. It’s also possible that Apple just decided it needed to do more to stop the distribution of CSAM.
The biggest clue about Apple’s motivations is the very specific way this feature has been implemented. I’ll spare you the long explanation, but in short: Apple is comparing hashes of images against a database of hashes of known illegal images compiled by the National Center for Missing and Exploited Children. It’s scanning new images that are going to be synced with iCloud Photos. It’s not scanning all the photos on your device, and Apple isn’t scanning all the photos it’s storing on its iCloud servers.
In short, Apple has built a CSAM detector that sits at the doorway between your device and iCloud. If you don’t sync photos with iCloud, the detector never runs.
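The shape of that doorway can be sketched in a few lines of code. To be clear, this is an illustrative simplification, not Apple’s implementation: the real system uses a perceptual hash (“NeuralHash”), a blinded database, and encrypted safety vouchers, and the hash function, database, and threshold value below are all stand-ins.

```python
# Illustrative sketch of on-device matching at the iCloud sync boundary.
# Everything here is a stand-in for Apple's actual system.

import hashlib

MATCH_THRESHOLD = 30  # assumed value; nothing is reviewable below it


def image_hash(image_bytes: bytes) -> str:
    # Stand-in hash. The real perceptual hash is designed so that minor
    # edits (resizing, recompression) still match the original image;
    # a cryptographic hash like this one would not.
    return hashlib.sha256(image_bytes).hexdigest()


def check_before_upload(image_bytes: bytes, known_hashes: set,
                        match_count: int):
    """Runs only when a photo is about to sync to iCloud Photos.

    Photos that never sync are never checked. Returns the updated match
    count and whether the threshold for human review has been reached.
    """
    if image_hash(image_bytes) in known_hashes:
        match_count += 1
    return match_count, match_count >= MATCH_THRESHOLD
```

The sketch captures the two design choices that matter: the check runs only at the moment of upload, and no single match is reportable on its own; a human at Apple sees nothing until the match count crosses the threshold.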
This all leads me to believe that there’s another shoe to drop here, one that will allow Apple to make its cloud services more secure and private. If this scanning system is essentially the trade-off that allows Apple to provide more privacy for its users while not abdicating its moral duty to prevent the spread of CSAM, great. But there’s no way to know until Apple makes such an announcement. In the meantime, all those potential privacy gains are theoretical.
Where is the spy?
In recent years, Apple has made it clear that it considers the analysis of user data that occurs on our devices to be fundamentally more private than the analysis that runs in the cloud. In the cloud, your data must be decrypted to be analyzed, opening it up to pretty much any form of analysis. Any employee with the right level of access could also just flip through your data. But if all that analysis happens on your device—this is why Apple’s modern chips have a powerful Neural Engine component to do the job—that data never leaves home.
Apple’s approach here calls all of that into question, and I suspect that’s the source of some of the greatest criticism of this announcement. Apple is making decisions that it thinks will enhance privacy. Nobody at Apple is scanning your photos, and nobody at Apple can even look at the potential CSAM images until a threshold has passed that reduces the chance of false positives. Only your device sees your data. Which is great, because our devices are sacred and they belong to us.
Except… that there’s now going to be an algorithm running on our devices that’s designed to observe our data, and if it finds something that it doesn’t like, it will then connect to the internet and report that data back to Apple. While today it has been purpose-built for CSAM, and it can be deactivated simply by shutting off iCloud Photo Library syncing, it still feels like a line has been crossed. Our devices won’t just be working for us, but will also be watching us for signs of illegal activity and alerting the authorities.
The risk for Apple here is huge. It has invested an awful lot of time in equating on-device actions with privacy, and it risks poisoning all of that work with the perception that our phones are no longer our castles.
It’s not the tool, but how it’s used
In many ways, this is yet another facet of the greatest challenge the technology industry faces in this era. Technology has become so important and powerful that every new development has enormous, society-wide implications.
With its on-device CSAM scanner, Apple has built a tool carefully calibrated to protect user privacy. If building this tool enabled Apple to finally offer broader encryption of iCloud data, it might even be a net increase in user privacy.
But tools are neither good nor evil. Apple has built this tool for a good purpose, but every time a new tool is built, all of us need to imagine how it might be misused. Apple seems to have very carefully designed this feature to make it more difficult to subvert, but that’s not always enough.
Imagine a case where a law enforcement agency in a foreign country comes to Apple and says that it has compiled a database of illegal images and wants it added to Apple’s scanner. Apple has said, bluntly, that it will refuse all such requests. That’s encouraging, and I have little doubt that Apple would abandon most countries if they tried to pull that maneuver.
But would it be able to say no to China? Would it be able to say no to the U.S. government if the images in question would implicate members of terrorist organizations? And in a decade or two, will policies like this be so commonplace that when a government asks Apple or its equivalents to begin scanning for illegal or subversive material, nobody will notice? The first implementation of this technology is to stop CSAM, and nobody will argue against trying to stop the exploitation of children. But will there be a second implementation? A third?
Apple has tried its best to find a compromise between violating user privacy and stopping the distribution of CSAM. The very specific way this feature is implemented proves that. (Anyone who tries to sell you a simplified story about how Apple just wants to spy on you is, quite frankly, someone who is not worth listening to.)
But just because Apple has done its due diligence and made some careful choices in order to implement a tool to stop the spread of heinous material doesn’t mean that it’s off the hook. By making our phones run an algorithm that isn’t meant to serve us, but surveils us, it has crossed a line. Perhaps it was inevitable that the line would be crossed. Perhaps it’s inevitable that technology is leading us to a world where everything we say, do and see is being scanned by a machine-learning algorithm that will be as benevolent or malevolent as the society that implemented it.
Even if Apple’s heart is in the right place, my confidence that its philosophy will be able to withstand the future desires of law enforcement agencies and authoritarian governments is not as high as I want it to be. We can all be against CSAM and admire the clever way Apple has tried to balance these two conflicting needs, while still being worried about what it means for the future.