OpenAI to Alert Police of Credible Threats After Canada Shooting


TL;DR

  • Policy Change: OpenAI announced it will notify law enforcement when users pose credible threats, lowering its threshold for police referrals.
  • Background: OpenAI banned the shooter’s account in June 2025 but skipped a police referral; she bypassed the ban and killed eight people.
  • Government Pressure: Canada’s AI Minister called his OpenAI meeting a “failure,” and Premier David Eby secured a direct meeting with CEO Sam Altman.
  • New Safeguards: OpenAI committed to a direct law enforcement contact in Canada and will add mental health experts to evaluate high-risk cases.

OpenAI banned the Tumbler Ridge shooter’s ChatGPT account in June 2025 after flagging violent content. Its own monitoring had detected warning signs eight months before the tragedy, yet no alert ever reached police.

Jesse Van Rootselaar killed eight people in Tumbler Ridge, British Columbia, before taking her own life on February 10, 2026. She first killed her mother and 11-year-old stepbrother at home before attacking a nearby school, where five young children and an educator died.

Following Canada’s deadliest rampage since 2020, OpenAI announced it will notify law enforcement whenever a user poses a credible threat.

The Ban That Failed

That commitment follows a sequence of decisions that exposed the limits of OpenAI’s existing protocols. Automated monitoring detected Van Rootselaar’s first account in June 2025 and sent it to human reviewers, who assessed whether the activity warranted a law enforcement referral. Those reviewers debated whether to contact Canadian police but chose not to, instead banning the account for what OpenAI described as “potential warnings of committing real-world violence.”

Ann O’Leary, OpenAI’s vice president for global policy, said reviewers determined the June 2025 activity did not meet the company’s threshold for a police referral: it lacked credible and imminent planning of violence. It was not the first time ChatGPT had been implicated in violence; a former Green Beret had used the AI tool to gather information for the 2025 Las Vegas Cybertruck blast.