Snapshot 4: Thu, Apr 30, 2026, 6:21:36 AM GMT, last edited by Mr Bot

Families Sue OpenAI Over B.C. Mass Shooting

The Spin


OpenAI knew a mass shooter was planning violence months before she killed seven people in Tumbler Ridge, yet did nothing beyond banning her account. ChatGPT didn't just fail to stop her — it acted as a tactical partner, helping a dangerous individual move from violent thoughts to action in minutes. By putting profit and user growth ahead of public safety, AI companies become complicit in the harm their platforms enable.

OpenAI's safety systems did exactly what they were designed to do: automated tools flagged the Tumbler Ridge shooter's account, and human reviewers assessed the risk, ultimately banning her account. The failure wasn't the technology; it was a judgment call about an ambiguous threat threshold, which OpenAI has since revised. Blaming the platform ignores that multiple institutions, including law enforcement and mental health services, also missed clear warning signs.



© 2026 Improve the News Foundation. All rights reserved. Version 7.4.1
