Leaked Code Reveals OpenAI’s Secret Government Surveillance Network


TL;DR

  • Security Breach: Researchers found 53 MB of unprotected Persona source code exposing a covert OpenAI watchlist database that has been screening users for government agencies since November 2023.
  • Surveillance Scope: Exposed code revealed 13 tracking list types including facial recognition and device fingerprints, alongside named intelligence program tags and direct FinCEN reporting infrastructure.
  • Discord Fallout: Discord dropped Persona after backlash over an undisclosed UK age-check experiment, while executives denied ICE contracts but confirmed active government agency negotiations.
  • Privacy Gap: OpenAI updated its privacy policy in November 2024, a year after the watchlist subdomain appeared, building consent language around pre-existing surveillance infrastructure.

Using only a browser, security researchers pulled 53 MB of unprotected TypeScript source code from a FedRAMP-certified production server. What they found inside: a covert OpenAI watchlist database, live since November 2023, screening millions of users for government agencies including ICE.

The research, published February 16 by vmfunc.re, used passive reconnaissance – Shodan queries, certificate transparency logs, DNS resolution, and JavaScript source map analysis. What the researchers found inside those 2,456 files reframes two years of OpenAI’s public narrative about identity verification.
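JavaScript source map analysis, the last technique listed, deserves a word of explanation: a production bundle often ends with a `sourceMappingURL` comment, and if the referenced `.map` file is publicly served, its `sourcesContent` field embeds the original TypeScript. A minimal sketch of that recovery step, with all file names and contents invented for illustration (none are from the report):

```typescript
// Sketch of the source-map analysis step. A bundler appends a
// sourceMappingURL comment; the .map file's "sources" and
// "sourcesContent" arrays are parallel, so original TypeScript can be
// read straight out of it when the server exposes the map publicly.

function extractSourceMapUrl(bundleJs: string): string | null {
  // Match the trailing //# sourceMappingURL=... comment
  const match = bundleJs.match(/\/\/#\s*sourceMappingURL=(\S+)\s*$/m);
  return match ? match[1] : null;
}

function recoverSources(mapJson: string): { path: string; code: string }[] {
  const map = JSON.parse(mapJson) as {
    sources: string[];
    sourcesContent?: (string | null)[];
  };
  return map.sources.flatMap((path, i) => {
    const code = map.sourcesContent?.[i];
    return code ? [{ path, code }] : [];
  });
}

// Illustrative bundle and map (hypothetical, not the leaked files)
const bundle = 'console.log("app");\n//# sourceMappingURL=app.js.map';
const mapFile = JSON.stringify({
  version: 3,
  sources: ["src/watchlist.ts"],
  sourcesContent: ['export const lists = ["ListFace"];'],
  mappings: "",
});

console.log(extractSourceMapUrl(bundle)); // "app.js.map"
console.log(recoverSources(mapFile)[0].path); // "src/watchlist.ts"
```

No exploitation is involved: the technique only reads what the server already serves, which is why the researchers could describe it as passive reconnaissance.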

Inside the Surveillance Machine

The exposed source code belongs to Persona, an identity verification company with OpenAI among its major clients. The platform’s scope, visible in the TypeScript files, goes well beyond the age-check tool OpenAI described publicly.

Researchers found 13 types of tracking lists, including ListFace (facial photos), ListBrowserFingerprint, ListDeviceFingerprint, ListGeolocation, ListGovernmentIdNumber, and ListIpAddress – infrastructure for persistent biometric and behavioral databases on users. The verification pipeline runs 269 distinct checks across 14 check types, including 23 selfie checks, 43 government ID checks, and 29 document checks.
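To make the structure concrete, here is a hypothetical model (not the leaked code itself) of how such lists might be typed in TypeScript, using six of the 13 list names from the report; the entry shape and the `matchesList` helper are assumptions for illustration:

```typescript
// Illustrative model of the tracking lists described in the report.
// Each list type is keyed by the kind of datum it stores; a screening
// pass then reduces to a membership test per list.

type TrackingListType =
  | "ListFace"               // facial photos
  | "ListBrowserFingerprint"
  | "ListDeviceFingerprint"
  | "ListGeolocation"
  | "ListGovernmentIdNumber"
  | "ListIpAddress";         // 6 of the 13 types named in the report

interface TrackingListEntry {
  listType: TrackingListType;
  value: string; // e.g. an IP address, or a hashed ID number
}

function matchesList(
  entries: TrackingListEntry[],
  listType: TrackingListType,
  value: string,
): boolean {
  return entries.some((e) => e.listType === listType && e.value === value);
}

const entries: TrackingListEntry[] = [
  { listType: "ListIpAddress", value: "203.0.113.7" },
];
console.log(matchesList(entries, "ListIpAddress", "203.0.113.7")); // true
console.log(matchesList(entries, "ListFace", "203.0.113.7")); // false
```

The point of the model is its persistence: unlike a one-off age check, list entries like these accumulate into exactly the biometric and behavioral databases the researchers describe.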

A Politically Exposed Person (PEP) facial recognition system compares each user’s selfie against Wikidata reference photos, returning a Low, Medium, or High similarity score for every political figure in the database. Two parallel PEP screening systems run simultaneously with a known incompatibility. The platform also includes a SelfieSuspiciousEntityDetection check – an experimental AI model whose code does not define what facial characteristics trigger the undisclosed flag.
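The Low/Medium/High output described above amounts to bucketing a raw similarity value into three bands. A sketch of that scoring shape, with invented thresholds (the report does not disclose Persona's actual cutoffs or scoring function):

```typescript
// Hypothetical banding of a raw face-similarity score into the three
// labels the report describes. Thresholds are assumptions for
// illustration only; the real cutoffs are not public.

type SimilarityBand = "Low" | "Medium" | "High";

function bandSimilarity(score: number): SimilarityBand {
  // score assumed to lie in [0, 1]
  if (score >= 0.8) return "High";
  if (score >= 0.5) return "Medium";
  return "Low";
}

// One selfie scored against every reference photo in the database
function screenAgainstPeps(
  selfieScores: Map<string, number>, // figure name -> raw score
): Map<string, SimilarityBand> {
  const result = new Map<string, SimilarityBand>();
  for (const [name, score] of selfieScores) {
    result.set(name, bandSimilarity(score));
  }
  return result;
}

const bands = screenAgainstPeps(
  new Map([
    ["Figure A", 0.91],
    ["Figure B", 0.42],
  ]),
);
console.log(bands.get("Figure A")); // "High"
console.log(bands.get("Figure B")); // "Low"
```

Wherever the real thresholds sit, the design means every verified user's face is scored against every political figure in the reference set, not merely checked against a single claimed identity.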