Aug 13, 2021

Should we Celebrate or Condemn Apple’s New Child Protection Measures?

By Hany Farid

Last week, Apple announced that it would deploy hashing technologies to protect children from sexual abuse and exploitation. In response, child-rights advocates cheered and privacy-rights advocates jeered. Both, however, are putting too much stock in Apple's announcement, which is cause for neither celebration nor denunciation.

Each year, hundreds of millions of images and videos of child sexual abuse circulate online. The majority of children in these materials are prepubescent, and many of them are infants and toddlers. In addition, every day children are exposed to unsolicited sexual advances and sexual content online. We must do more to protect our children, both online and offline.

For the past two decades, the technology industry as a whole has been lethargic, even negligent, in responding to the threats posed by the global trade of child sexual abuse material (CSAM), live-streaming of child sexual abuse, predatory grooming and sexual extortion. At the same time, the industry has made every effort to make sure its products and services get into—and remain in—the hands of children.

Since 2010, after years of cajoling, many social media platforms, email services and cloud-storage services have deployed perceptual-hashing technology to prevent redistribution of previously identified CSAM. Previously unseen material and other online risks for children, however, remain a threat. Child-rights activists consider hashing technology a bare-minimum protection that any responsible online provider should deploy, not a gold standard.

To its credit, Apple has used hashing technology in its email services for years. Its most recent announcement simply extends the reach of this technology to operate on any image stored in iCloud. This technology, as with previous hashing technologies, is extremely accurate. Apple expects an error rate of 1 in 1 trillion; to further ensure reporting accuracy, any matched image is manually reviewed by Apple and the National Center for Missing and Exploited Children (NCMEC) before the company alerts law enforcement. In keeping with the company's longstanding focus on privacy, Apple's approach is designed not to expose any information about non-matching images. Apple cannot access any image information until a threshold of matches for an account is reached.
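
To make the threshold design concrete, the following minimal Python sketch illustrates the reporting logic described above. It is purely illustrative and is not Apple's actual protocol, which uses cryptographic techniques rather than a plain lookup to keep information about non-matching images hidden even from Apple; the hash values, database and threshold are hypothetical stand-ins.

    # Illustrative sketch only -- not Apple's protocol. Shows the threshold idea:
    # an account is escalated to manual review only after the number of images
    # whose hashes match a vetted list crosses a preset threshold; below that
    # threshold, nothing about the account's images is acted on.

    KNOWN_HASHES = {0x9F3A6C01, 0x41D2B7E8}   # hypothetical stand-in for a vetted hash database
    MATCH_THRESHOLD = 30                      # hypothetical per-account threshold

    def count_matches(image_hashes: list[int]) -> int:
        """Count how many of an account's image hashes appear on the known list."""
        return sum(1 for h in image_hashes if h in KNOWN_HASHES)

    def should_escalate(image_hashes: list[int]) -> bool:
        """Escalate to human review only once the match threshold is reached."""
        return count_matches(image_hashes) >= MATCH_THRESHOLD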

Child-rights advocates are understandably supportive of Apple's announcement. Apple, however, should not be celebrated for this modest and long overdue step of extending the reach of a decade-old technology, which applies only to images. Videos constitute nearly half of the more than 65 million pieces of content reported to NCMEC last year. Sadly, Apple's technology will be blind to this content. Apple has come late to the game and is tackling, at best, half of the problem. This is hardly a reason to celebrate.

On the other hand, many privacy-rights advocates have denounced Apple's announcement as a setback for user privacy. These voices, however, fail to acknowledge that disrupting the global spread of CSAM does a great deal to respect privacy—of the child victims.

For example, Will Cathcart, head of Facebook's WhatsApp messaging app, had this to say about Apple's announcement: "I read the information Apple put out yesterday and I'm concerned. I think this is the wrong approach and a setback for people's privacy all over the world. People have asked if we'll adopt this system for WhatsApp. The answer is no." He continued with: "Apple has built software that can scan all the private photos on your phone—even photos you haven't shared with anyone. That's not privacy."

These statements are misinformed, fear-mongering and hypocritical.

While Apple's technology operates on a user's device, it does so only for images that will be stored off the device, in Apple's iCloud storage. Furthermore, perceptual hashing, and Apple's implementation in particular, matches images against known CSAM and nothing else; no one—including Apple—can glean information about non-matching images. This type of technology can only reveal the presence of CSAM, while otherwise respecting user privacy.
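
For readers unfamiliar with how such a match is computed, here is a brief sketch using the open-source imagehash library's generic pHash, not Apple's NeuralHash; the distance tolerance is a hypothetical placeholder. Perceptual hashes of visually similar images differ in only a few bits, so a "match" means an image's hash sits within a small Hamming distance of a hash already on a vetted list, and non-matching images reveal nothing beyond the fact that they did not match.

    # Illustrative sketch only, using the third-party `imagehash` library (pHash),
    # not Apple's NeuralHash or its matching protocol.
    import imagehash
    from PIL import Image

    MAX_HAMMING_DISTANCE = 4  # hypothetical tolerance for "visually the same image"

    def is_known_image(path: str, known_hashes: list[imagehash.ImageHash]) -> bool:
        """Return True if the image's perceptual hash is within a few bits
        of any hash on the known list; otherwise nothing is learned about it."""
        h = imagehash.phash(Image.open(path))
        return any(h - known <= MAX_HAMMING_DISTANCE for known in known_hashes)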

Cathcart's position is also hypocritical—his own WhatsApp performs similar scanning of text messages to protect users against spam and malware. According to WhatsApp, the company "automatically performs checks to determine if a link is suspicious. To protect your privacy, these checks take place entirely on your device." Cathcart seems to be comfortable protecting users against malware and spam, but uncomfortable using similar technologies to protect children from sexual abuse.

Cries of potential abuse of Apple's technology by bad actors or repressive governments are equally unfounded. Perceptual hashing has been in use for more than a decade without the types of doomsday abuses predicted by critics. Technologies to limit spam, malware and viruses—including WhatsApp's suspicious-link technology—have similarly been in use for decades without widespread abuse. Every day, billions of text messages, emails and files are scanned to protect against all types of online cyber threats. Apple's recent efforts fall squarely in line with these well-accepted and well-understood technologies.

Apple has taken the modest but long overdue step of extending a decade-old technology to protect children online, with appropriate protections for user privacy. It should not be criticized for these efforts, nor should it be lauded. Instead, Apple and the rest of the technology sector should put significantly more effort into protecting children from online harms—at least as much effort as they put into getting their devices and services into the hands of younger and more vulnerable children.


Originally published as "Should we Celebrate or Condemn Apple's New Child Protection Measures? | Opinion" by Newsweek on August 13, 2021. Reprinted with the author’s permission.

Hany Farid is Associate Dean and Head of School for the University of California, Berkeley School of Information, and Senior Faculty Advisor for the Center for Long-Term Cybersecurity.

Last updated: August 18, 2021