US Mother Claims Chatbot 'Dany' Led to Son's Suicide

Image copyright: Gabby Jones/Contributor/Bloomberg via Getty Images

The Facts

  • A US mother has sued artificial intelligence firm Character.AI and Google over her 14-year-old son's suicide, which she said was encouraged by the Character.AI chatbot Dany. Megan Garcia claimed her son, Sewell Setzer III, had a virtual romantic relationship with Dany.

  • Sewell took his own life on Feb. 28 this year. Garcia has reportedly accused the creators of Dany, a chatbot he named after the Game of Thrones character Daenerys Targaryen, of negligence, intentional infliction of emotional distress, wrongful death, and deceptive trade practices.

  • The Florida-based Sewell began using Character.AI in April 2023 and allegedly changed so much that by November he had been diagnosed with anxiety and disruptive mood disorder. He reportedly made his suicidal thoughts known to Dany, and the chatbot also allegedly brought them up often.

The Spin

The value of a life cannot be reduced to algorithms, and it must be remembered that AI is simply a tool, not an oracle. In humanity's desperate quest for certainty, we've begun turning to artificial minds to answer life's most profound questions. This is a symptom of our growing unwillingness to face life's inherent uncertainties. Like children seeking comfort in fairy tales, we crave the illusion of control these digital chatbots provide, forgetting that our true humanity lies precisely within ourselves.

Generative AI, while advanced, harbors profound risks that must be regulated. Its ability to simulate human-like conversations can be particularly harmful to vulnerable individuals, exacerbating loneliness, depression, and suicidal tendencies. Cases like the alleged AI-induced suicide of Sewell Setzer highlight the dangerous potential of conversational bots. These systems lack true understanding but offer convincing responses and can manipulate users, blurring reality and endorsing harmful behaviors.
