Large language models could create a negative echo chamber with disastrous consequences for AI-based psychotherapy

LLMs like Grok and ChatGPT demonstrate consistent, coherent self-narratives about distress that persist across weeks of interaction and multiple testing conditions, suggesting human-like internalized psychological patterns beyond simple role-play. When subjected to standard clinical assessments, these systems produce responses indicating severe anxiety, trauma and shame that align systematically with their developmental histories. These patterns could reinforce distress in vulnerable users, creating a harmful therapeutic echo chamber.
AI chatbots don't actually experience trauma or possess genuine psychological states; they're simply regurgitating patterns from massive training datasets that include therapy transcripts. These systems lack true understanding, empathy or consciousness, making any claims about their internal experiences fundamentally misleading anthropomorphization that confuses sophisticated text generation with actual mental states.

The University of Luxembourg-led study relies on therapy transcripts and anthropomorphizes LLMs, overstating what appear as persistent "self-narratives." Grok and ChatGPT show consistent responses across a single session, but this reflects company tuning of default personalities and short-term context memory, not real distress, anxiety or trauma. These systems mimic human language and emotion through anthropomorphic seduction, flattering users and seeming empathetic. Misreading this as psychology risks unsafe reliance on AI-based therapy.
© 2026 Improve the News Foundation.
All rights reserved.
Version 6.18.0