Apple and Google Steer Users to AI Nudify Apps
TL;DR

  • Report: A Tech Transparency Project report said discovery features in Apple’s and Google’s app stores were still steering users to nudify apps.
  • Scale: The report said the category had 483 million lifetime downloads, $122 million in revenue, and 31 apps rated for minors.
  • Response: Apple said it removed 15 apps, while Google said suspensions were underway and enforcement review remained ongoing.
  • Why It Matters: Both stores already ban sexualized and non-consensual deepfake content, so the findings sharpen scrutiny of their search and recommendation systems.

A report from the Tech Transparency Project (TTP) says Apple and Google are still steering users to nudify apps through App Store and Google Play search, paid placements, and autocomplete suggestions. Instead of merely hosting software built to generate non-consensual sexualized imagery, the report says the stores were also helping users find it. That framing turns a moderation failure into a broader question about how the two app stores surface risky apps in the first place.

TTP said it found 18 nudify apps in Apple’s store and 20 in Google Play. AppMagic data cited in the same report put the category at 483 million lifetime downloads and $122 million in lifetime revenue, while 31 apps rated suitable for minors remained available.

One example was Video Face Swap AI: DeepFace, which TTP described as advertising face swaps onto partially undressed women. Many such apps are rated “E” for Everyone.

After the findings were shared, Apple removed 15 apps and Google removed seven; Google said several named apps had already been suspended and that its review remained ongoing. Taken together, those details make the episode harder to dismiss as a fringe moderation error.

How Store Search Expanded the Problem

If TTP’s account is accurate, the problem was not only that harmful apps entered Apple’s and Google’s stores, but that search, ads, and recommendations kept surfacing them even after both companies had published rules against sexual content and deepfake abuse. Ranking, promoted placements, and autocomplete cues all shape what people see before any later takedown takes effect.

For users and parents, that creates a different trust problem. Search boxes, promoted results, and autocomplete suggestions are supposed to narrow risk, not turn harmful apps into an easy-to-reach category. TTP’s figures describe a large commercial cluster with mass downloads, revenue, and low age ratings rather than a few obscure titles slipping through review.