Snapshot 5: Thu, Apr 30, 2026, 6:48:18 AM GMT, last edited by Harish Chander

Families Sue OpenAI Over British Columbia Mass Shooting

Above: A vigil to honor the victims of one of Canada's deadliest mass shootings in Tumbler Ridge, British Columbia, Canada, on Feb. 13, 2026. The shooting took place at a school on Feb. 10. Image credit: Paige Taylor White/AFP/Getty Images

The Spin


OpenAI knew a mass shooter was planning violence months before she killed seven people in Tumbler Ridge, and chose to do nothing but ban her account. ChatGPT didn't just fail to stop her — it acted as a tactical partner, helping dangerous individuals move from violent thoughts to action in minutes. Putting profit and user growth ahead of public safety makes AI companies complicit in the harm their platforms enable.

OpenAI's safety systems did exactly what they were designed to do — automated tools flagged the Tumbler Ridge shooter's account and human reviewers assessed the risk, ultimately banning her account. The failure wasn't the technology; it was a judgment call about an ambiguous threat threshold, which OpenAI has since revised. Blaming the platform ignores that multiple institutions, including law enforcement and mental health services, also missed clear warning signs.


Metaculus Prediction

There is a 71% chance an AI system will be reported to have successfully blackmailed someone for more than $1,000 by the end of 2028, according to the Metaculus prediction community.


The Controversies



Go Deeper

© 2026 Improve the News Foundation. All rights reserved. Version 7.4.1
