TL;DR
- EU Talks: The European Commission is reportedly pressing OpenAI and Anthropic for direct access to advanced AI models.
- Access Split: OpenAI is offering GPT-5.5-Cyber access, while Anthropic’s talks have not yet reached system-access negotiations.
- Model Scope: OpenAI’s access offer reportedly extends to EU institutions, governments, businesses, and cyber authorities.
- Risk Pressure: Security concerns around Claude Mythos and an August 2026 enforcement deadline are raising the stakes for Brussels.
European Commission officials have opened talks with OpenAI and Anthropic to secure direct access to advanced AI models. Brussels is moving beyond broad rule-writing and toward direct inspection of systems that officials say can create new security risks.
OpenAI has already offered one route into that process. Anthropic remains earlier in its talks with the Commission, leaving the EU to manage one company that is prepared to open a model for review and another that is still limiting what officials can see.
Commission spokesman Thomas Regnier described the Anthropic outreach as active: “We’re reaching out to the platform, to Anthropic. We have received certain information.” He said the Commission wants access that lets officials test safety claims against real deployments rather than company summaries alone.
Why Brussels Wants Direct Model Access
OpenAI’s offer is tied to a named system. The company is offering the Commission preview access to GPT-5.5-Cyber, with availability also extending to European businesses, governments, cyber authorities, and EU institutions including the AI Office.
Brussels is asking for more than policy assurances or safety summaries. Officials want visibility into a specific frontier system, a live deployment path, and the conditions under which the model is being rolled out to public and private users across Europe.
Regnier also said further discussions were planned so officials could follow deployment and address security concerns as OpenAI rolls the model out. Commission officials have welcomed the company’s cooperation, suggesting voluntary access could still shape frontier-model oversight before formal enforcement powers arrive.
Anthropic is on a different track. Officials have held several meetings with the company, but the talks have not yet advanced to negotiating access to its systems. OpenAI is already discussing model visibility, while Anthropic remains at the stage of contact, risk discussion, and follow-up meetings rather than hands-on inspection.
Brussels lacks a direct comparison point in the Anthropic lane because the company has not granted the EU the kind of preview access to Mythos that OpenAI is offering for its cyber model. That gap helps explain why Anthropic has become the more urgent oversight problem.
One earlier contact point is already on the record. A first meeting with Anthropic took place Wednesday, and more discussions were expected to follow.
Regnier summarized the Commission’s rationale plainly after that status update.
“We have a new AI model that is being released. It comes with a certain number of risks. We need information when it comes to these risks.”
Thomas Regnier, European Commission spokesman
His explanation turns the dispute into a practical oversight question: whether regulators can inspect powerful systems before access becomes a formal demand.
Anthropic’s Risk Debate Raises the Stakes
Anthropic’s lane carries extra pressure because Claude Mythos has already been presented as a cybersecurity concern. In April 2026, Anthropic restricted Mythos access to 40 major tech players so vulnerabilities could be fixed before attackers exploited them.
That restricted release gives the Commission a concrete reason to press for model access rather than rely on second-hand descriptions. A system that is being held back over misuse risks creates a different oversight problem from one that is already being opened for supervised review.
European Parliament officials will discuss the dangers of the Mythos model with Commission officials and the bloc’s cyber agency ENISA on Wednesday. Parliament’s hearing broadens the pressure from a company-regulator exchange into a wider institutional debate over how much visibility public authorities should have into high-risk systems.
What Comes Next for the EU AI Office
The current access push also extends a longer EU rulemaking project. In 2023, EU institutions fought over how to regulate generative AI during the AI Act negotiations. In 2024, the bloc published the first draft of its general-purpose AI code to spell out how those obligations could apply to large model providers.
August 2026 is the next hard checkpoint. Once the AI Office’s enforcement powers take effect, officials could decide whether these talks remain voluntary or become a formal demand for model access.