TL;DR
- Report: A Tech Transparency Project report said App Store and Google Play discovery features were still steering users to nudify apps.
- Scale: The report said the category had 483 million lifetime downloads, $122 million in revenue, and 31 apps rated for minors.
- Response: Apple said it removed 15 apps, while Google said suspensions were underway and enforcement review remained ongoing.
- Why It Matters: Apple and Google already ban sexualized and non-consensual deepfake content, which sharpens scrutiny of their store search and recommendation systems.
A report from the Tech Transparency Project (TTP) says Apple and Google are still steering users to nudify apps through App Store and Google Play search, paid placements, and autocomplete suggestions. Instead of merely hosting software built to generate non-consensual sexualized imagery, the report says the stores were also helping users find it. That framing turns a moderation failure into a broader question about how the two app stores surface risky apps in the first place.
TTP said it found 18 nudify apps in Apple’s store and 20 in Google Play. AppMagic data cited in the same report put the category at 483 million lifetime downloads and $122 million in lifetime revenue, while 31 apps rated suitable for minors remained available.
One example was Video Face Swap AI: DeepFace, which TTP described as advertising face swaps onto partially undressed women. Many such apps are rated “E” for Everyone.
Google said its review was ongoing and that several of the named apps had already been suspended. After the findings were shared, Apple removed 15 apps and Google removed seven. Taken together, those details make the episode harder to dismiss as a fringe moderation error.
How Store Search Expanded the Problem
Those details push the story beyond a familiar moderation lapse. If TTP’s account is accurate, the problem was not only that harmful apps entered Apple’s and Google’s stores, but that search, ads, and recommendations kept helping users find them after the platforms had already published rules against sexual content and deepfake abuse. Ranking, promoted placements, and autocomplete cues all shape what people see before any later takedown takes effect.
For users and parents, that creates a different trust problem. Search boxes, promoted results, and autocomplete suggestions are supposed to narrow risk, not turn harmful apps into an easy-to-reach category. TTP’s figures describe a large commercial cluster with mass downloads, revenue, and low age ratings rather than a few obscure titles slipping through review.
That is the point Katie Paul pressed when discussing the findings.
“It’s not just that the companies are failing to actually appropriately review these apps and continue to approve them and profit from them. They are actually directing users to the apps themselves.”
Katie Paul, Tech Transparency Project director (via Bloomberg)
Paul’s quote captures TTP’s central allegation: store search and merchandising systems may be amplifying harmful apps long before any cleanup. Apple and Google therefore face questions about how they rank, suggest, and monetize apps, not only how quickly they remove them. Microsoft’s effort to curb revenge porn in Bing search results shows how another platform has tried to address a similar harm.
What Apple and Google Said, and What Their Rules Already Forbid
Apple’s own guidelines say sexual or pornographic material is not allowed in App Store apps. Separate Apple rules also say ads must be age-appropriate, a notable standard in a story centered partly on promoted placements.
Google Play lists apps that claim to undress people as policy violations. Separate Google rules bar non-consensual deepfake sexual content, making the contradiction hard to miss if TTP’s examples are accurate. Google said its policies prohibit sexual content, that multiple apps named in the report had been suspended, and that enforcement remained under review.
Why App Store Accountability Pressure Keeps Growing
Pressure on the category is no longer limited to store moderation alone. Minnesota and the U.K. are considering bans on AI nudification apps. That wider backdrop raises the stakes for Apple and Google if outside researchers keep finding policy violations through ordinary discovery features.
Nudify and deepfake apps have already pushed some governments to propose laws. Apple and Google now face a more durable question than whether they can remove apps after exposure: can the features people rely on to find software stop steering users toward harmful apps before another outside report forces a cleanup?