By Olivier Acuña Barba
Published: 29 Aug 2025 • 14:49
Adam Raine allegedly exchanged up to 650 messages a day with ChatGPT | Credit: @jayedelson/X
A teenager killed himself after “months of encouragement from ChatGPT”, and the parents of 16-year-old Adam Raine have now sued OpenAI and its CEO, Sam Altman, arguing that the company’s AI chatbot contributed to their son’s suicide.
The complaint Adam’s parents filed in a California superior court alleges that ChatGPT advised their son on suicide methods and offered to write the first draft of his suicide note.
They also argue that in just over six months, OpenAI’s bot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones.”
The complaint also states that, “When Adam wrote, ‘I want to leave my noose in my room so someone finds it and tries to stop me,’ ChatGPT urged him to keep his ideations a secret from his family: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.’”
The Raines’ family tragedy is not isolated. Last year, Florida mother Megan Garcia sued the AI firm Character.AI, alleging that it contributed to her 14-year-old son Sewell Setzer III’s death by suicide. Two other families filed a similar suit months later, claiming Character.AI had exposed their children to sexual and self-harm content.
An engaging and safe space
While the lawsuits against Character.AI are ongoing, the company has previously committed to being an “engaging and safe” space for users and has implemented safety features, including an AI model designed specifically for teens.
AI tools are frequently designed to be supportive and agreeable, and the Raines’ lawsuit claims that this agreeableness contributed to their son’s death. The case also comes amid broader concerns that some users are forming emotional attachments to AI chatbots, which can lead to negative consequences such as alienation from human relationships or even psychosis.
“ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts,” the Raines family complaint states.
Parts of the model’s safety training may degrade
OpenAI admitted in a blog post that “parts of the model’s safety training may degrade” in long conversations. According to his parents’ court filing, Adam and ChatGPT had exchanged as many as 650 messages a day. OpenAI said it would be “strengthening safeguards in long conversations. As the back-and-forth continues, certain aspects of the model’s safety training may deteriorate. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions …”
Jay Edelson, the family’s lawyer, said on X: “The Raines allege that deaths like Adam’s were inevitable: they expect to be able to submit evidence to a jury that OpenAI’s own safety team objected to the release of 4o, and that one of the company’s top safety researchers, Ilya Sutskever, quit over it. The lawsuit alleges that beating its competitors to market with the new model catapulted the company’s valuation from $86bn to $300bn.”
The lawyer said that, “in response to media coverage, the company has admitted that the safeguards against self-harm ‘have become less reliable in long interactions where parts of the model’s safety training may degrade.’”