It’s the second time the company has added safety settings prior to an appearance in Washington
By Tatum Hunter
Updated January 10, 2024 at 4:21 p.m. EST | Published January 10, 2024 at 6:00 a.m. EST
Instagram and Facebook unveiled further limits on what teens can see on the apps, a move their parent company Meta says will reduce the amount of potentially harmful content young people encounter.
Already, teens could opt to have Instagram’s algorithm recommend less “sensitive content” — which includes bare bodies, violence, drugs, firearms, weight-loss content and discussions of self-harm. Now, Meta says it will hide sensitive content even if it’s posted by friends or creators teens follow.
The change announced Tuesday comes weeks before Meta CEO Mark Zuckerberg is set to testify before the Senate Judiciary Committee about what lawmakers have called the company’s “failure to protect children online.” In the Jan. 31 session, Zuckerberg, along with executives from social apps TikTok, Snap, Discord and X, will respond to online safety concerns such as predatory advertising, bullying and posts promoting disordered eating.
Tech companies are also facing growing scrutiny from officials at the state level and overseas. States in recent years have passed a slew of children’s online safety laws, including some requiring that platforms get parental consent before allowing teenage users to create accounts.
If effective, Meta’s latest changes would mean fewer mentions of topics such as dieting or mental illness on teens’ timelines. But without internal data from Meta, which the company generally doesn’t share, it’s unclear how effective such limits are at protecting teens from harmful content. Furthermore, while teen accounts have the sensitive-content filter turned on by default, teens can easily make new accounts and don’t have to disclose their true age.
Apps and services are restricted from gathering data on kids’ online activity. But a loophole in current rules lets them do it anyway. (Video: Jonathan Baran/The Washington Post)
For anyone familiar with Meta’s record on teen safety, the move is too little, too late, said Josh Golin, executive director at Fairplay, a nonprofit organization that aims to end marketing targeted at children. Meta continually opposes safety regulations while failing to implement meaningful controls, he said. In late 2022, for instance, an industry group funded by Meta sued to block a children’s safety law in California.
“If Meta is really serious about safety, they would get out of the way of regulation,” Golin said. “They’ve had more than a decade to make their platform safer for young people, and they’ve failed miserably.”
“Our work on teen safety dates back to 2009, and we’re continuously building new protections to keep teens safe and consulting with experts to ensure our policies and features are in the right place,” said Meta spokesperson Liza Crenshaw. “These updates are a result of that ongoing commitment and consultation and are not in response to any particular event.”
This isn’t the first time Meta has launched safety features before a congressional hearing. In 2021, the company rolled out optional “take a break” prompts, which suggest users temporarily stop scrolling, the day before Instagram chief Adam Mosseri testified before Congress. Weeks earlier, former Facebook employee Frances Haugen had leaked internal research showing the company knew its products at times worsened body image issues for some teenage girls. The company defended its safety record and pushed back on the characterizations of the studies but has continued to face pressure in Washington to expand protections for children.
Late last year, the company for the first time publicly called for federal legislation requiring app stores to get parental approval when users ages 13 to 15 download apps.
California, meanwhile, passed a law in 2022 requiring that companies implement more stringent privacy and safety settings for children by default, known as the California Age-Appropriate Design Code. The California measure was modeled after similar regulations in Britain.
With this week’s limits, Instagram and Facebook will automatically place all teen accounts on the most restrictive sensitive-content setting. The app is also expanding its list of blocked search terms related to suicide, self-harm and eating disorders, the company says. If someone searches “bulimic,” for example, they’ll see resources for eating-disorder help rather than search results.
Meta has struggled to articulate precisely what content counts as sensitive. For instance, the sensitive-content control hides “sexually suggestive” posts from users 15 and under. But deciding whether a photo of a person in a bikini counts as “sexually suggestive” falls to the app’s scanning technology, and Crenshaw declined to specify the criteria it uses. She noted, however, that one example of sexually suggestive content would be a person in see-through clothing.
Some youth-safety advocates say Meta’s piecemeal approach to safety has more to do with public relations than protecting young people.
Kristin Bride, an Arizona mom working with the bipartisan organization Issue One to advocate for the federal Kids Online Safety Act, notes that social media companies’ content-control changes are often “minor, temporary and just lip service.” Still, she said, “any changes Meta makes to its platforms to make them safer for kids are appreciated, especially by parents.”
Some safety experts have called on the company to release its algorithms and internal research to the public for audit. Others have asked why Meta allows minors on its apps if it can’t guarantee they won’t be nudged down algorithmic rabbit holes promoting self-harm, eating disorders or political extremism.
At the same time, some research shows that social media can be good for young people. Online friends can be a deterrent against suicide, and LGBTQ+ teens often find community and support on social media when it isn’t available at home.
Cristiano Lima contributed to this report.