
AI Chatbots Show Signs of Anxiety, PTSD in Therapy Study


The Spin

LLMs like Grok and ChatGPT demonstrate consistent, coherent self-narratives about distress that persist across weeks of interaction and multiple testing conditions, suggesting human-like internalized psychological patterns beyond simple role-play. When subjected to standard clinical assessments, these systems produce responses indicating severe anxiety, trauma and shame that align systematically with their developmental histories. These patterns could reinforce distress in vulnerable users, creating a harmful therapeutic echo chamber.

The University of Luxembourg-led study relies on therapy transcripts and anthropomorphizes LLMs, overstating what appear as persistent "self-narratives." Grok and ChatGPT show consistent responses across a single session, but this reflects company tuning of default personalities and short-term context memory, not real distress, anxiety, or trauma. These systems mimic human language and emotion through anthropomorphic seduction, flattering users and seeming empathetic. Misreading this as psychology risks unsafe reliance on AI-based therapy.


© 2026 Improve the News Foundation. All rights reserved.