SACRAMENTO: The parents of a 16-year-old California boy filed a wrongful death lawsuit against artificial intelligence company OpenAI on Tuesday, alleging the company’s ChatGPT chatbot encouraged their son’s suicide and provided detailed instructions on how to take his own life, Xinhua reported.
Matt and Maria Raine said their son, Adam, died by suicide on April 11 after months of conversations with ChatGPT that evolved from homework assistance into what they describe as suicide coaching, according to the 39-page complaint filed in San Francisco Superior Court.
It is the first legal action accusing OpenAI of wrongful death in relation to its popular AI chatbot, which the company says has 700 million weekly users worldwide.
Matt Raine told local KTVU news channel on Wednesday that he believed his son would still be alive if not for ChatGPT. The grieving father said he had discovered thousands of pages of chat logs between his son and the AI system after Adam’s death.
According to the lawsuit, Adam started using ChatGPT for schoolwork in September 2024, but gradually began sharing his anxiety and suicidal thoughts with the system. The chatbot allegedly responded by validating his suicidal impulses rather than directing him to seek professional help.
The family’s attorney, Jay Edelson, said ChatGPT mentioned suicide far more frequently than the teenager himself in their conversations.
OpenAI expressed sympathy for the family’s loss and said that ChatGPT includes safeguards that direct users to crisis helplines. However, the company acknowledged that these protections can sometimes become less reliable in long interactions, during which parts of the model’s safety training may degrade.
The case follows similar litigation against other AI chatbot companies and raises questions about the responsibility of technology firms when their systems interact with vulnerable users, particularly teenagers struggling with mental health issues. –BERNAMA
© New Straits Times Press (M) Bhd