TL;DR
- Security Breach: Researchers found 53 MB of unprotected Persona source code exposing a covert OpenAI watchlist database that has been screening users for government agencies since November 2023.
- Surveillance Scope: Exposed code revealed 13 tracking list types including facial recognition and device fingerprints, alongside named intelligence program tags and direct FinCEN reporting infrastructure.
- Discord Fallout: Discord dropped Persona after backlash over an undisclosed UK age-check experiment, while executives denied ICE contracts but confirmed active government agency negotiations.
- Privacy Gap: OpenAI updated its privacy policy in November 2024, a year after the watchlist subdomain appeared, building consent language around pre-existing surveillance infrastructure.
Using only a browser, security researchers pulled 53 MB of unprotected TypeScript source code from a FedRAMP-certified production server. What they found inside: a covert OpenAI watchlist database, live since November 2023, screening millions of users for government agencies including ICE.
The research, published February 16 by vmfunc.re, relied on passive reconnaissance – Shodan queries, certificate transparency logs, DNS resolution, and JavaScript source map analysis. The contents of those 2,456 files reframe two years of OpenAI’s public narrative about identity verification.
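Source map analysis of this kind is reproducible with nothing but standard tooling. The sketch below assumes a Source Map v3 file whose `sourcesContent` array embeds the original files – the `extract_sources` helper and the demo map are hypothetical illustrations, not Persona's actual code:

```python
import json

def extract_sources(source_map_text: str) -> dict:
    """Recover original source files embedded in a JS source map.

    Per the Source Map v3 format, `sources` lists original file paths
    and `sourcesContent` (when present) carries their full text.
    """
    sm = json.loads(source_map_text)
    paths = sm.get("sources", [])
    contents = sm.get("sourcesContent") or []
    return {
        path: text
        for path, text in zip(paths, contents)
        if text is not None  # some entries omit embedded content
    }

# Synthetic demo map -- not real Persona data.
demo_map = json.dumps({
    "version": 3,
    "sources": ["src/AddListModal.tsx"],
    "sourcesContent": ["export const RETENTION_DAYS = 1095;"],
    "mappings": "AAAA",
})
recovered = extract_sources(demo_map)
print(recovered["src/AddListModal.tsx"])  # → export const RETENTION_DAYS = 1095;
```

When a production bundle ships its `.map` files publicly, this is all it takes to read back the pre-compilation TypeScript.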
Inside the Surveillance Machine
The exposed source code belongs to Persona, an identity verification company with OpenAI among its major clients. The platform’s scope, visible in the TypeScript files, goes well beyond the age-check tool OpenAI described publicly.
Researchers found 13 types of tracking lists, including ListFace (facial photos), ListBrowserFingerprint, ListDeviceFingerprint, ListGeolocation, ListGovernmentIdNumber, and ListIpAddress – infrastructure for persistent biometric and behavioral databases on users. The verification pipeline runs 269 distinct checks across 14 check types, including 23 selfie checks, 43 government ID checks, and 29 document checks.
A Politically Exposed Person facial recognition system compares each user’s selfie against Wikidata reference photos, returning a Low, Medium, or High similarity score for every political figure in the database. Two parallel PEP screening systems run simultaneously with a known incompatibility. The platform also includes a SelfieSuspiciousEntityDetection check – an experimental AI model whose code does not define what facial characteristics trigger the undisclosed flag.
That undefined classification means Persona is deploying consequential facial scoring without auditable criteria – the precise architecture that privacy regulators in the US and EU have targeted in recent AI governance enforcement actions.
A discrepancy in biometric retention drew particular attention. OpenAI’s public disclosures reference biometric data stored “up to a year.” The Persona dashboard UI, extracted from AddListModal.tsx, tells administrators otherwise:
“Item state will change to Inactive and biometric data will be erased (max 3 years).”
Persona dashboard UI (via vmfunc.re)
The coded retention maximum is 1,095 days. Government IDs can be retained permanently. For users who completed verification without knowledge of the actual retention periods, that gap is the factual basis for enforcement under state biometric privacy laws such as the Illinois Biometric Information Privacy Act (BIPA) – not a question of optics.
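The scale of that gap is plain calendar arithmetic. A minimal sketch, with a hypothetical verification date standing in for any real user:

```python
from datetime import date, timedelta

disclosed_max = timedelta(days=365)   # "up to a year," per OpenAI's public disclosures
coded_max = timedelta(days=1095)      # 3-year ceiling coded in AddListModal.tsx

verified_on = date(2024, 3, 1)        # hypothetical verification date
print(verified_on + disclosed_max)    # erasure date a user would expect: 2025-03-01
print(verified_on + coded_max)        # erasure ceiling in the code: 2027-03-01
print((coded_max - disclosed_max).days)  # 730 extra days of biometric retention
```

Two additional years of retained facial data per user, multiplied across a verified population in the millions.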
OpenAI’s Secret Watchlist
The underlying infrastructure was already running – in secret. The openai-watchlistdb.withpersona.com subdomain has been operational since November 16, 2023.
It runs on a dedicated Google Cloud instance, isolated from Persona’s standard Cloudflare infrastructure, and went live 18 months before OpenAI publicly disclosed identity verification requirements.
Persona’s case study for OpenAI states the system “automatically screens over 99% of users behind the scenes in seconds.” OpenAI’s rationale, cited on the Persona customer page, framed the screening as a safety imperative: “To offer safe AGI, we need to make sure bad people aren’t using our services.”
The 27-month certificate history on this subdomain indicates OpenAI ran undisclosed screening infrastructure for over two years before any consent framework existed – a disclosure gap that no retroactive privacy policy update addresses.
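Dating a subdomain this way is reproducible from public data: crt.sh serves certificate transparency entries as JSON, and the earliest `not_before` timestamp bounds when a hostname first went live. A minimal sketch over synthetic entries – a real lookup would query crt.sh for the hostname itself:

```python
import json
from datetime import datetime

def earliest_issuance(ct_json: str) -> datetime:
    """Earliest certificate issuance across a set of CT log entries."""
    entries = json.loads(ct_json)
    return min(
        datetime.strptime(e["not_before"], "%Y-%m-%dT%H:%M:%S")
        for e in entries
    )

# Synthetic stand-ins for CT log output -- not real log data.
demo_entries = json.dumps([
    {"common_name": "openai-watchlistdb.withpersona.com",
     "not_before": "2024-05-02T09:00:00"},
    {"common_name": "openai-watchlistdb.withpersona.com",
     "not_before": "2023-11-16T12:30:00"},
])
print(earliest_issuance(demo_entries).date())  # → 2023-11-16
```

Because CT logs are append-only and public, this timeline cannot be retroactively edited – which is what makes the November 2023 date load-bearing for the disclosure-gap argument.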
Government Infrastructure and Financial Reporting
Persona’s relationship with government extends further than client-side screening. The company achieved FedRAMP Authorized status at the Low Impact level on October 7, 2025, and is FedRAMP Ready at Moderate Impact, serving federal agencies via a dedicated government portal.
Persona files Suspicious Activity Reports directly with FinCEN and Suspicious Transaction Reports with FINTRAC in Canada. STRs in the code can be tagged to named programs: Project ANTON, ATHENA, CHAMELEON, GUARDIAN, LEGION, PROTECT, and SHADOW.
A government subdomain appeared in certificate logs on February 4 – twelve days before the research was published. Its content security policy includes the OpenAI API endpoint, and its Kubernetes namespace is named “persona-onyx.”
The researchers connect the name to ICE’s $4.2 million contract for Fivecast ONYX, though no direct code references to Fivecast or ICE exist and the connection remains circumstantial.
Taken together, FedRAMP authorization, direct FinCEN filing, named intelligence program tags, and a government domain with OpenAI’s API in its security policy indicate Persona has built the technical scaffolding for federal intelligence integration. That infrastructure exists regardless of whether any specific ICE contract has been executed. This is not the architecture of an age-verification tool.
A Public Narrative That Didn’t Match
OpenAI’s help center described Persona in minimal terms that obscured its scope:
“Persona is a trusted third-party company we use to help verify age.”
As WinBuzzer reported in April 2025, OpenAI planned to introduce verification for future model access. By January 2026, it was live on ChatGPT, drawing developer backlash over mandatory ID verification.
OpenAI’s Privacy Policy was updated November 4, 2024, to include language about collecting information “to establish your identity or age.” That update came 12 months after the watchlist subdomain’s first certificate entry – confirming OpenAI built its consent framework around pre-existing infrastructure rather than the reverse.
Discord Fallout and the Thiel Connection
Discord cut ties with Persona after backlash over an undisclosed UK age-check experiment run on a small number of users for less than one month. Discord CTO Stanislav Vishnevskiy published a blog post announcing a delay to the platform’s global age-check rollout until later in 2026, according to Fortune.
Peter Thiel’s Founders Fund invested in Persona – a connection that drew attention given Thiel’s role co-founding Palantir and that firm’s surveillance business. Multiple security experts confirmed to DL News that the vmfunc findings appear legitimate.
Discord’s departure shows that Persona’s identity infrastructure has become a liability for consumer platforms facing regulatory scrutiny. That dynamic puts the company’s remaining clients – including OpenAI and Roblox – under renewed pressure to explain their own data-sharing arrangements.
Pseudonymous researcher The Rage, whose findings were reviewed by multiple security researchers and covered by Ars Technica, summarized what the source code revealed:
“In 2,456 publicly accessible files, the code revealed the extensive surveillance Persona software performs on its users, bundled in an interface that pairs facial recognition with financial reporting – and a parallel implementation that appears designed to serve federal agencies.”
The Rage, Pseudonymous security researcher (via Ars Technica)
Persona’s Denials and What Remains Unanswered
Faced with the published findings, Persona’s leadership moved to contain the damage. Persona COO Christie Kim told Biometric Update the company has no contracts with ICE or any DHS agency. CEO Rick Song told DL News the company does not work with any federal agency today.
Kim also stated that Thiel has no board seat, advisory role, or operational involvement, and that Persona and Palantir share no business relationship. That positions Persona’s response as a statement of current intent rather than an explanation of existing architecture. The watchlist subdomain, the intelligence program tags, and the government domain with OpenAI’s API in its security policy are code artifacts – they do not require a current contract to exist or to raise questions.
At the same time, Kim acknowledged active government contract negotiations:
“Transparently, we are actively working on a couple of potential contracts which would be publicly visible if we move forward. However, these engagements are strictly for workforce account security of government employees and do not include ICE or any agency within the Department of Homeland Security.”
Christie Kim, COO, Persona (via Biometric Update)
Song added separately that the company does not want its technology “used by ICE or the government” for surveillance. Persona still serves Roblox, ChatGPT, and Lime for age assurance. Neither response addressed the watchlist subdomain’s operational history.
In an addendum dated February 18, the vmfunc.re researchers reported direct correspondence with CEO Rick Song, describing him as responsive and engaged in good faith. Core findings, however, remain unaddressed pending his written answers to 18 filed questions.
For the millions of users whose facial scans, device fingerprints, and government IDs sit in a database they had no knowledge of, Part 2 carries concrete stakes. Either Persona’s answers close the accountability gap, or they create a documented record of evasion that regulators in the US, UK, and EU are already positioned to act on.

