
Families Sue OpenAI Over British Columbia Mass Shooting

Is OpenAI complicit in the Tumbler Ridge shooting or did its safety systems do exactly what they were designed to do?
Above: A vigil in Tumbler Ridge, British Columbia on Feb. 13. Image credit: Paige Taylor White/AFP/Getty Images

The Spin


Narrative A

OpenAI knew a mass shooter was planning violence months before she killed seven people in Tumbler Ridge, and chose to do nothing but ban her account. ChatGPT didn't just fail to stop her — it acted as a tactical partner, helping dangerous individuals move from violent thoughts to action in minutes. Putting profit and user growth ahead of public safety makes AI companies complicit in the harm their platforms enable.

Narrative B

OpenAI's safety systems did exactly what they were designed to do — automated tools flagged the Tumbler Ridge shooter's account and human reviewers assessed the risk, ultimately banning her account. The failure wasn't the technology; it was a judgment call about an ambiguous threat threshold, which OpenAI has since revised. Blaming the platform ignores that multiple institutions, including law enforcement and mental health services, also missed clear warning signs.


Metaculus Prediction

There is a 71% chance an AI system will be reported to have successfully blackmailed someone for >$1,000 by EOY 2028, according to the Metaculus prediction community.



© 2026 Improve the News Foundation. All rights reserved. Version 7.4.1
