Apple’s New Child Safety Technology Might Harm More Kids Than It Helps

Features designed to guard against sexual abuse carry the potential for unintended consequences


Recently, Apple released three new features designed to keep children safe. One of them, labeled “Communication safety in Messages,” will scan the iMessages of people under 13 to identify and blur sexually explicit images, and alert parents if their child opens or sends a message containing such an image. At first, this might sound like a good way to mitigate the risk of young people being exploited by adult predators. But it may cause more harm than good.

While we would like to believe that all parents want to keep their children safe, this is not the reality for many children. LGBTQ+ youth, in particular, are at high risk of parental violence and abuse, are twice as likely as their peers to experience homelessness, and make up 30 percent of youth in the foster care system. They are also more likely to send explicit images like those Apple seeks to detect and report, in part because of the lack of comprehensive sexuality education. Reporting children’s texting behavior to their parents can reveal their sexual orientation, which can result in violence or even homelessness.

These harms are magnified by the fact that the technology underlying this feature is unlikely to be particularly accurate in detecting harmful explicit imagery. Apple will, it says, use “on-device machine learning to analyze image attachments and determine if a photo is sexually explicit.” All photos sent or received by an Apple account held by someone under 13 will be scanned, and parental notifications will be sent if this account is linked to a designated parent account.

It is not clear how well this algorithm will work or what precisely it will detect. Some sexually explicit content detection algorithms flag content based simply on the percentage of skin showing. Such an algorithm might flag a photo of a mother and daughter in bathing suits at the beach, and if two young people send each other a picture of a scantily clad celebrity, their parents might be notified.
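As a toy illustration of why such heuristics misfire, here is a minimal sketch of a skin-percentage filter, assuming a crude rule-of-thumb RGB skin classifier. This is purely hypothetical and is not Apple’s algorithm; it only shows the general approach the paragraph above describes:

```python
# Hypothetical sketch of a naive skin-percentage content filter.
# Not Apple's algorithm -- an illustration of why pixel-counting
# heuristics cannot distinguish a beach photo from explicit imagery.

def is_skin_tone(r, g, b):
    # Crude RGB rule: red channel dominates and the red-green
    # spread is noticeable. Real systems are far more complex,
    # but rules of roughly this shape appear in older literature.
    return r > 95 and g > 40 and b > 20 and r > g and r > b and (r - g) > 15

def skin_fraction(pixels):
    """pixels: iterable of (r, g, b) tuples; returns the fraction
    of pixels the crude rule classifies as skin."""
    pixels = list(pixels)
    if not pixels:
        return 0.0
    skin = sum(1 for p in pixels if is_skin_tone(*p))
    return skin / len(pixels)

def flag_image(pixels, threshold=0.4):
    # Flags any image whose "skin" fraction exceeds the threshold --
    # which is exactly why an innocent swimsuit photo can trip it.
    return skin_fraction(pixels) > threshold
```

A photo that is mostly skin-toned pixels, explicit or not, crosses the same threshold; the heuristic has no notion of context, age, or intent, which is the core accuracy problem discussed here.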

Computer vision is a notoriously difficult problem, and existing algorithms—for example, those used for face detection—have known biases, including the fact that they frequently fail to detect nonwhite faces. The risk of inaccuracy in Apple’s system is especially high because most academically published nudity-detection algorithms are trained on images of adults. Apple has provided no transparency about the algorithm it is using, so we have no idea how well it will work, especially for detecting images young people take of themselves—presumably the images of greatest concern.

These issues of algorithmic accuracy are concerning because they risk misaligning young people’s expectations. When we are overzealous in declaring behavior “bad” or “dangerous”—even the sharing of swimsuit photos between teens—we blunt young people’s ability to recognize when something actually harmful is happening to them.

In fact, merely by having this feature, we are teaching young people that they do not have a right to privacy. Removing young people’s privacy and right to give consent is exactly the opposite of what UNICEF’s evidence-based guidelines for preventing online and offline child sexual exploitation and abuse suggest. Further, this feature not only risks causing harm; it also opens the door for wider intrusions into our private conversations, including intrusions by governments.

We need to do better when it comes to designing technology to keep the young safe online. This starts with involving the potential victims themselves in the design of safety systems. As a growing movement around design justice suggests, involving the people most impacted by a technology is an effective way to prevent harm and design more effective solutions. So far, youth haven’t been part of the conversations that technology companies or researchers are having. They need to be.

We must also remember that technology cannot single-handedly solve societal problems. It is important to focus resources and effort on preventing harmful situations in the first place. Following UNICEF’s guidelines and research-based recommendations to expand comprehensive, consent-based sexuality education programs, for example, can help youth learn about and develop their sexuality safely.

This is an opinion and analysis article; the views expressed by the author or authors are not necessarily those of Scientific American.



Elissa Redmiles is a faculty member and Research Group Leader at the Max Planck Institute for Software Systems and CEO of Human Computing Associates.

