TL;DR
- Policy Change: OpenAI announced it will notify law enforcement when users pose credible threats, lowering its threshold for police referrals.
- Background: OpenAI banned the shooter’s account in June 2025 but skipped a police referral; she bypassed the ban and killed eight people.
- Government Pressure: Canada’s AI Minister called his OpenAI meeting a “failure,” and Premier David Eby secured a direct meeting with CEO Sam Altman.
- New Safeguards: OpenAI committed to a direct law enforcement contact in Canada and will add mental health experts to evaluate high-risk cases.
OpenAI banned the Tumbler Ridge shooter’s ChatGPT account in June 2025 after flagging violent content, yet its own monitoring had detected warning signs eight months before the attack, and no alert ever reached police.
Jesse Van Rootselaar killed eight people in Tumbler Ridge, British Columbia, before taking her own life on February 10, 2026. She first killed her mother and 11-year-old stepbrother at home before attacking a nearby school, where five young children and an educator died.
Following Canada’s deadliest rampage since 2020, OpenAI announced it will notify law enforcement whenever a user poses a credible threat.
The Ban That Failed
That commitment follows a sequence of decisions that exposed the limits of OpenAI’s existing protocols. Automated monitoring detected Van Rootselaar’s first account in June 2025 and escalated it to human reviewers, who debated whether to contact Canadian police but chose not to, instead banning the account for what OpenAI described as “potential warnings of committing real-world violence.”
Ann O’Leary, OpenAI’s vice president for global policy, said reviewers determined the June 2025 activity did not meet the company’s threshold for a police referral: it lacked credible and imminent planning of violence. It was not the first time ChatGPT had been implicated in violence; a former Green Beret had used the AI tool to gather information for the 2025 Las Vegas Cybertruck blast.
Van Rootselaar then bypassed the company’s anti-circumvention systems and opened a second ChatGPT account. OpenAI discovered that account only after the RCMP publicly named her following the February 10 shooting.
Eight months elapsed between the account ban and the shooting, exposing a structural flaw in OpenAI’s moderation framework. Requiring evidence of a specific target, means, and timing before contacting police set a bar that demanded near-certain harm before intervention, leaving room for a second account, a second opportunity to plan, and no warning to authorities.
OpenAI’s New Commitments
Confronted with that accountability gap, O’Leary outlined the company’s response in a letter to Canadian officials, first reported by Politico and The Washington Post. Under its enhanced referral protocol, OpenAI will notify law enforcement when it detects threats meeting a newly lowered danger threshold, one that no longer requires evidence of a specific target or method. OpenAI also shared the second account’s data with the RCMP upon discovering it after the shooting.
That letter committed the company to establishing a direct contact with Canadian law enforcement and to enlisting mental health experts to assess high-risk cases. O’Leary made the threshold change explicit and acknowledged the inadequacy of the prior standard:
“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today.”
Ann O’Leary, Vice President for Global Policy at OpenAI (via The Washington Post)
Adding mental health experts to the review process introduces clinical risk assessment into what had previously been a policy and legal calculus, potentially flagging earlier-stage warning signs. Rather than relying solely on policy staff, OpenAI would share responsibility for threat evaluation with trained professionals, a structural change that could also provide legal cover in future liability cases.
Government Pressure and Broader Context
Those internal changes emerged under direct political pressure. Canada’s Artificial Intelligence Minister Evan Solomon convened a meeting with OpenAI in Ottawa on February 25, then told reporters he left “disappointed” and described the outcome as a “failure,” saying “all options are on the table” as the government develops a suite of measures on online harms. British Columbia Premier David Eby confirmed that Sam Altman agreed to meet with him following public disclosure of OpenAI’s internal deliberations.
“They tragically missed the mark in not bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives.”
David Eby, British Columbia Premier (via BBC News)
The Tumbler Ridge shooting accelerates a pattern of institutional pressure on OpenAI. As WinBuzzer previously reported, a wrongful death lawsuit followed a December 2025 killing, and a wave of lawsuits over ChatGPT-related harm was filed in November 2025. Altman’s decision to meet Premier Eby directly, rather than delegating to policy staff, signals that OpenAI recognizes the political exposure this case carries across borders.
OpenAI’s protocol changes currently apply to Canada, and the company has not confirmed whether the same standards govern its conduct elsewhere. Whether Canadian regulators accept voluntary commitments or pursue binding requirements will likely shape how other governments respond to the next AI-related public safety incident.