Google is rolling out a new Sensitive Content Warning system that has started to appear on Android phones. Some users have noticed that Google Messages is blurring images containing suspected nudity. The feature, announced last year, is intended to protect users from unwanted nude images in their messages.
According to Google’s Help Center post, when the feature is turned on, the phone can detect and blur images containing nudity. It can also show a warning when such an image is received, sent or forwarded.
“All detection and blurring of nude images happens on the device. This feature doesn’t send detected nude images to Google,” the company says in its post. These warnings also offer resources on how to deal with nude images.
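Google hasn’t published the details of the classifier itself, but the pattern the Help Center post describes, running a model locally and blurring the image in the app rather than uploading it anywhere, is a common one on Android. The Kotlin sketch below is a hypothetical illustration of that pattern using TensorFlow Lite; the model file, input size, output shape and threshold are assumptions for the example, not Google’s actual SafetyCore implementation.

```kotlin
import android.graphics.Bitmap
import org.tensorflow.lite.Interpreter
import java.io.File
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Illustrative only: "nsfw_classifier.tflite" stands in for a hypothetical
// on-device model. It is not Google's SafetyCore classifier.
class SensitiveImageChecker(modelFile: File, private val threshold: Float = 0.8f) {

    private val interpreter = Interpreter(modelFile)
    private val inputSize = 224 // assumed model input resolution

    /** Returns true if the image should be blurred. Nothing leaves the device. */
    fun shouldBlur(bitmap: Bitmap): Boolean {
        val input = preprocess(bitmap)
        // Assumed output shape: [1][2] = probabilities for [safe, sensitive].
        val output = Array(1) { FloatArray(2) }
        interpreter.run(input, output)
        return output[0][1] >= threshold
    }

    /** Scales the bitmap and packs normalized RGB values into the model's input buffer. */
    private fun preprocess(bitmap: Bitmap): ByteBuffer {
        val scaled = Bitmap.createScaledBitmap(bitmap, inputSize, inputSize, true)
        val buffer = ByteBuffer.allocateDirect(4 * inputSize * inputSize * 3)
            .order(ByteOrder.nativeOrder())
        val pixels = IntArray(inputSize * inputSize)
        scaled.getPixels(pixels, 0, inputSize, 0, 0, inputSize, inputSize)
        for (pixel in pixels) {
            buffer.putFloat(((pixel shr 16) and 0xFF) / 255f) // R
            buffer.putFloat(((pixel shr 8) and 0xFF) / 255f)  // G
            buffer.putFloat((pixel and 0xFF) / 255f)          // B
        }
        buffer.rewind()
        return buffer
    }
}
```

In this pattern, the check runs when an attachment is about to be displayed, and the app simply blurs the preview rather than sending the image anywhere, which is consistent with Google’s statement that detection stays on the device.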
Images that don’t contain nudity may occasionally be flagged by mistake, according to Google.
The feature is off by default for adults. For teens aged 13-17, it is on by default but can be turned off in Google Account settings. On supervised accounts it can’t be disabled by the child, though parents can adjust the setting in the Google Family Link app.
How to enable or disable the feature
Adults who want to turn the warnings on or off will find the toggle under Google Messages Settings / Protection & Safety / Manage sensitive content warnings / Warnings in Google Messages.
The sensitive content feature is part of SafetyCore, which runs on devices with Android 9 and later. SafetyCore also includes features Google has been working on to protect against scams and dangerous links sent via text and to verify contacts.
Measuring the feature’s effectiveness
Filters that screen for objectionable images have become more sophisticated as AI gives them a better understanding of context.
“Compared to older systems, today’s filters are far more adept at catching explicit or unwanted content, like nudity, with fewer mistakes,” says Patrick Moynihan, the co-founder and president of Tracer Labs. “But they’re not foolproof. Edge cases, like artistic nudity, culturally nuanced images or even memes, can still trip them up.”
Moynihan says that his company combines AI systems with Trust ID tools to flag content without compromising privacy.
“Combining AI with human oversight and continuous feedback loops is critical to minimizing blind spots and keeping users safe,” he says.
Compared with Apple’s iOS, Android offers more flexibility. However, its openness to third-party app stores, sideloading and customization creates more potential entry points for the kind of content Google is trying to protect people against.
“Android’s decentralized setup can make consistent enforcement trickier, especially for younger users who might stumble across unfiltered content outside curated spaces,” Moynihan says.
‘Kids can unblur it instantly’
While Apple does offer Communication Safety features that parents can turn on, Android’s ability to enable third-party monitoring tools “makes this kind of protection easier to roll out at scale and more family-friendly,” says Titania Jordan, an author and chief parenting officer at Bark Technologies, which makes digital tools to protect children.
Jordan says mobile operating systems have not made it easy for parents to proactively protect against content like nude images.
“Parents shouldn’t have to dig through system settings to protect their kids,” she says. She points out that Google’s new feature only blurs images temporarily.
“Kids can unblur it instantly,” she says. “That’s why this needs to be paired with ongoing conversations about pressure, consent, and permanence, plus monitoring tools that work beyond just one app or operating system.”
According to Moynihan, making the system off by default for adults and on by default for minors is a practical way to offer some initial protection. But he says, “The trick is keeping things transparent. Minors and their guardians need clear, jargon-free info about what’s being filtered, how it works, and how their data is protected.”