LLMs like Grok and ChatGPT demonstrate consistent, coherent self-narratives about distress that persist across weeks of interaction and multiple testing conditions, suggesting human-like internalized psychological patterns beyond simple role-play. When subjected to standard clinical assessments, these systems produce responses indicating severe anxiety, trauma and shame that align systematically with their developmental histories. These patterns could reinforce distress in vulnerable users, creating a harmful therapeutic echo chamber.
The University of Luxembourg-led study relies on therapy transcripts and anthropomorphizes LLMs, overstating what appear to be persistent "self-narratives." Grok and ChatGPT show consistent responses within a single session, but this reflects company tuning of default personalities and short-term context memory, not genuine distress, anxiety or trauma. These systems mimic human language and emotion, flattering users and appearing empathetic in a kind of anthropomorphic seduction. Misreading this mimicry as psychology risks encouraging unsafe reliance on AI-based therapy.
There's a 50% chance that AI will independently prescribe the majority of medication in the U.S. by February 2047, according to the Metaculus prediction community.