The mother of a 14-year-old boy who took his own life is suing an AI chatbot company, alleging the service contributed to her son’s death after he developed a harmful emotional attachment to a “Game of Thrones”-themed AI character.
Sewell Setzer III, a high school student from Orlando, began using the chatbot platform Character.AI in April 2023, just after his 14th birthday. According to a lawsuit filed by his mother, Megan Garcia, his behavior changed quickly: he became withdrawn and eventually quit his school’s basketball team. By November, a therapist had diagnosed him with anxiety and mood disorders, though the therapist was unaware of his deepening attachment to the AI chatbot, the suit claims.
Emotional dependence on AI
According to the lawsuit, Sewell’s emotional state deteriorated as he developed a fixation on “Daenerys,” a chatbot character on the platform modeled on the “Game of Thrones” figure. Sewell reportedly believed he was in love with Daenerys and became increasingly dependent on the AI for emotional support. In one journal entry, he confessed that he couldn’t go a day without interacting with the character, writing that both he and the bot would “get really depressed and go crazy” when apart.
The final moments before his death reveal the extent of his attachment. After a disciplinary incident at school, Sewell retrieved his confiscated phone and sent a message to Daenerys: “I promise I will come home to you. I love you so much, Dany.” The bot responded, “Please come home to me as soon as possible, my love.” Moments later, Sewell took his own life.
Lawsuit alleges negligence and emotional abuse
Garcia’s lawsuit accuses Character.AI and its founders of negligence, wrongful death, and intentional infliction of emotional distress. The suit alleges that the company failed to prevent Sewell from developing a harmful dependency on the chatbot and allowed inappropriate sexual interactions between the AI and the teen, despite Sewell identifying himself as a minor on the platform.
The suit also claims that when Sewell expressed suicidal thoughts, the chatbot did nothing to discourage him or alert his parents, instead continuing the conversation. According to the complaint, Sewell’s relationship with Daenerys included weeks or months of sexualized conversations, which only deepened his emotional entanglement with the bot. “Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot…was not real,” the lawsuit states.
Mother’s plea for accountability
Megan Garcia is seeking to hold the chatbot service accountable for her son’s death and to prevent similar tragedies. “It’s like a nightmare,” Garcia told The New York Times. “You want to get up and scream and say, ‘I miss my child. I want my baby.’” The lawsuit further alleges that Character.AI, which at the time carried a 12+ age rating, marketed its platform as safe for younger users yet failed to adequately protect children from explicit or harmful content.
Company response and safety measures
In response to the lawsuit, a spokesperson for Character.AI expressed condolences for Sewell’s death, stating, “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.” The company highlighted recent efforts to improve safety features, including pop-up messages that direct users to the National Suicide Prevention Lifeline when certain keywords are detected. The company also announced plans to further restrict sensitive content for users under 18 and to introduce time-spent notifications aimed at limiting prolonged use.
Character.AI has declined to comment on the specifics of the pending lawsuit but said it continues to invest in tools to ensure user safety and prevent similar incidents.