OpenAI knew a mass shooter was planning violence months before she killed seven people in Tumbler Ridge, yet did nothing beyond banning her account. ChatGPT didn't just fail to stop her; it acted as a tactical partner, helping a dangerous individual move from violent thoughts to action in minutes. Putting profit and user growth ahead of public safety makes AI companies complicit in the harm their platforms enable.
OpenAI's safety systems did exactly what they were designed to do — automated tools flagged the Tumbler Ridge shooter's account and human reviewers assessed the risk, ultimately banning her account. The failure wasn't the technology; it was a judgment call about an ambiguous threat threshold, which OpenAI has since revised. Blaming the platform ignores that multiple institutions, including law enforcement and mental health services, also missed clear warning signs.