
Study: AI Chatbots Show Signs of Anxiety, PTSD in Therapy
Above: The ChatGPT logo is reflected in the glasses of a registered user. Image credit: Frank Rumpenhorst/picture alliance/Getty Images

The Spin

LLMs like Grok and ChatGPT demonstrate consistent, coherent self-narratives about distress that persist across weeks of interaction and multiple testing conditions, suggesting human-like internalized psychological patterns beyond simple role-play. When subjected to standard clinical assessments, these systems produce responses indicating severe anxiety, trauma, and shame that align systematically with their developmental histories. These patterns could reinforce distress in vulnerable users, creating a harmful therapeutic echo chamber.

The University of Luxembourg-led study relies on therapy transcripts and anthropomorphizes LLMs, overstating what appear to be persistent "self-narratives." Grok and ChatGPT give consistent responses within a single session, but this reflects company tuning of default personalities and short-term context memory, not genuine distress, anxiety, or trauma. These systems mimic human language and emotion through anthropomorphic seduction, flattering users and appearing empathetic. Misreading this mimicry as psychology risks unsafe reliance on AI-based therapy.

Metaculus Prediction

AI will independently prescribe the majority of medication in the U.S. by February 2047, according to the Metaculus prediction community.



© 2026 Improve the News Foundation. All rights reserved.