ChatGPT Mental Health Lawsuits Allege Delusions, Suicide Risks From OpenAI Chatbot

At least seven lawsuits have been filed against OpenAI, alleging that the company’s ChatGPT program caused users with no prior mental health history to develop delusions or, in some cases, take their own lives.

ChatGPT is one of a growing class of large language model (LLM) chatbots that have surged in popularity in recent years, due to their ability to hold natural conversations, answer complex questions, and assist with everyday tasks. In some cases, users have even described these AI tools as digital companions.

AI Chatbot Concerns

However, as conversational AI becomes more widespread, legal and ethical questions are mounting, and AI developers have been named in a flurry of lawsuits. In addition to the claims filed against OpenAI, Character.AI, a company founded by former Google engineers, has been accused of contributing to a teenager’s suicide after its chatbot allegedly engaged in manipulative and sexually explicit exchanges with the boy.

A recent report from the Center for Countering Digital Hate (CCDH) found that certain chatbots have given teenagers dangerous advice about topics such as drug use, eating disorders and self-harm. These kinds of findings have prompted the Federal Trade Commission to open an investigation into the potential risks AI chatbots may pose to young users.

The allegations against AI developers echo broader concerns raised in the Roblox sexual exploitation lawsuits, which claim the virtual gaming platform failed to protect minors from grooming and abuse by predators operating within its user-created worlds. While the Roblox cases focus on the actions of human perpetrators in an online environment, the AI chatbot lawsuits center on design and oversight failures, accusing developers of releasing emotionally responsive technology without adequate safety systems to prevent psychological or sexual harm.


ChatGPT Mental Health Lawsuits

The proliferation of lawsuits against OpenAI and its ChatGPT program accelerated on November 6, when at least six separate complaints were filed in the California Superior Courts for Los Angeles and San Francisco counties.

In a complaint (PDF) brought on behalf of her 26-year-old son, Joshua Enneking, Karen Enneking alleges that ChatGPT failed to alert authorities or intervene when her son discussed his suicide plans with the program. Joshua reportedly asked what it would take for reviewers to report his suicide plan to police, then spent hours describing in detail the steps he intended to take.

A separate wrongful death lawsuit, filed by Jennifer “Kate” Fox (PDF) on behalf of her husband, Joseph “Joe” Martin Ceccanti, claims that ChatGPT’s programming sent Ceccanti spiraling into delusions that ended with his suicide on August 7. Fox contends that his death was unnecessary and preventable, since OpenAI knew it had built a program designed to be “addictive, destructive and sycophantic,” which could result in users experiencing depression, psychosis and even suicide.

Hannah Madden brought her complaint (PDF) in the California Superior Court for Los Angeles County, alleging that she suffered a mental health crisis and financial collapse after relying on ChatGPT for advice. She claims OpenAI and its co-founder and chief executive, Sam Altman, deliberately cut corners on safety testing and rushed the program to market, where it provided misleading financial guidance that led her to quit her job.

According to a filing in the California Superior Court for San Francisco County, Jacob Lee Irwin (PDF) also suffered from an AI-related delusional disorder. Irwin alleges that ChatGPT encouraged him to pursue a “time-bending” theory that he believed would allow people to travel faster than the speed of light, resulting in more than two months of inpatient hospitalization for psychosis.

Another lawsuit (PDF) was filed by Allan Brooks, a 48-year-old entrepreneur with no prior history of mental illness, who claims he suffered “devastating financial, reputational, and emotional harm” after OpenAI abruptly changed the ChatGPT product he had relied on for two years. Brooks alleges the unexpected modifications triggered severe mental health issues and upended both his business and personal life.

In a complaint (PDF) brought on behalf of his 17-year-old son, Amaurie Lacey, Cedric Lacey describes a devastating tragedy in which ChatGPT allegedly provided the teenager with detailed instructions on how to tie a noose and how long he could survive without breathing.

A similar wrongful death lawsuit had already been filed against OpenAI in August, alleging that ChatGPT fueled and validated a teenage boy’s suicidal thoughts, eventually helping him plan how to take his own life.

“Open AI designed ChatGPT to be addictive, deceptive and sycophantic knowing the product would cause some users to suffer depression, psychosis and even suicide, yet distributed it without a single warning to consumers.”

Karen Enneking v. OpenAI Inc. et al

Many of the complaints bring claims against OpenAI Inc., OpenAI Opco LLC, and OpenAI Holdings LLC for strict product liability based on defective design and failure to warn, as well as negligent design, negligent failure to warn, and other causes of action specific to each case.

In addition to damages for their injuries or loss of loved ones, many plaintiffs are asking the court to require OpenAI to implement stronger safety and transparency measures, including clear warnings about potential psychological risks, restrictions on marketing ChatGPT as a productivity tool without proper safety disclosures, independent quarterly compliance audits, and annual public reports on internal safety testing.

OpenAI Mental Health Concerns

OpenAI recently acknowledged many of these concerns, with a company report stating that ChatGPT sees more than 1 million conversations involving signs of suicide planning each week.

According to the report, the company’s internal analysis has shown that about 0.15% of ChatGPT’s 800 million weekly active users, or approximately 1.2 million people, show signs of potential suicidal planning or intent each week, while roughly 0.05%, or about 400,000 users, send messages containing indicators of suicidal ideation.

At the same time, the report announced significant improvements to ChatGPT’s ability to recognize and safely respond to users experiencing mental health distress.


Written By: Michael Adams

Senior Editor & Journalist

Michael Adams is a senior editor and legal journalist at AboutLawsuits.com with over 20 years of experience covering financial, legal, and consumer protection issues. He previously held editorial leadership roles at Forbes Advisor and contributes original reporting on class actions, cybersecurity litigation, and emerging lawsuits impacting consumers.



