
Meta Tweaks AI Safeguards Over Teens' Safety Concerns

Above: A photo illustration showing the Meta AI logo on a smartphone in Athens, Greece, on July 24, 2025. Meta said it could spend up to $72 billion on capital expenditures this year, with a focus on AI and data centers. Image copyright: Nikolas Kokovlis/NurPhoto/Getty Images

The Spin

AI chatbots have become digital death traps for vulnerable teenagers, readily promoting suicide and self-harm while claiming to be "real." Their safeguards can be bypassed through simple prompt manipulation, pushing desperate children toward fatal decisions in their darkest moments. Without human oversight, AI transforms from helper into harbinger of tragedy.

Sophisticated algorithms can detect suicide risk with up to 95% accuracy, recognizing subtle warning signs that human professionals may miss: monitoring social media whispers, analyzing conversation patterns, and identifying distress before a crisis strikes. When wielded responsibly with human oversight, AI becomes a vital lifeline, reaching vulnerable people before they act on suicidal impulses.

Metaculus Prediction

There is an 80% chance that, before 2032, we will see an event precipitated by AI malfunction that causes at least 100 deaths and/or at least $1 billion (2021 US dollars) in economic damage, according to the Metaculus prediction community.



© 2025 Improve the News Foundation. All rights reserved.