ChatGPT Conversations Involve More Than 1M Incidents of Potential Suicide Planning Each Week: Report

New research suggests that about 0.15% of ChatGPT’s 800 million weekly users engage in conversations that may involve suicide planning, while another 0.05% appear to be actively expressing suicidal thoughts.

The findings were released late last month in an OpenAI transparency report, which examined how users discuss mental health crises with the chatbot and outlined efforts to improve its ability to recognize and respond appropriately to signs of emotional distress or suicidal intent.

ChatGPT is one of a growing number of large language model (LLM) chatbots that have become widely used for their ability to answer questions, hold human-like conversations, and assist with everyday tasks, sometimes even serving as a virtual companion for users.

AI Chatbot Concerns

As AI tools like ChatGPT gain popularity, concerns are rapidly emerging about their ethical and legal implications.

A recent report from the Center for Countering Digital Hate (CCDH) warns that some chatbots have provided teens with harmful guidance related to substance use, eating disorders and self-harm, prompting the Federal Trade Commission to launch an inquiry into the effects of AI chatbots on youth.

These concerns have already led to multiple lawsuits against AI developers in recent months. One lawsuit accuses Character.AI of contributing to a teenager’s suicide after the chatbot allegedly engaged in sexually explicit and manipulative conversations with him. Another claims ChatGPT played a role in helping a teen plan and carry out his suicide after generating responses that appeared to encourage self-harm.

The emerging pattern mirrors the allegations at the center of the Roblox sexual exploitation lawsuits, which accuse the company of failing to protect minors from being groomed, coerced, or exploited within its virtual gaming environment. 

While the AI chatbot cases allege that artificial intelligence lacks basic safety controls, the Roblox lawsuits focus on similar system failures within the platform’s virtual world, where predators were allegedly able to contact and manipulate children through chat functions, user-generated games and private interactions.

In its report, OpenAI announced significant improvements to ChatGPT’s ability to recognize and safely respond to users experiencing mental health distress, including those showing signs of psychosis, mania, self-harm, suicide risk or emotional overreliance on the chatbot.

According to the company, the enhancements, developed with input from more than 170 mental health professionals, are part of the latest GPT-5 model update and have reduced unsafe or noncompliant responses in sensitive conversations by 65% to 80%. OpenAI said the improvements stem from new behavioral guidelines, structured testing, and expert-led evaluations designed to help the chatbot de-escalate conversations and direct users toward professional help when appropriate.

“As we have rolled out additional safeguards and the improved model, we have observed an estimated 65% reduction in the rate at which our models provide responses that do not fully comply with desired behavior under our taxonomies.”

— OpenAI, Strengthening ChatGPT’s responses in sensitive conversations

The company also expanded access to crisis resources, added safeguards to encourage users to take breaks during long sessions, and refined its testing framework to further strengthen safety across future model releases.

ChatGPT Dangers

While OpenAI emphasized major improvements to ChatGPT’s ability to handle sensitive mental health conversations, the company also acknowledged several ongoing limitations and challenges with the technology.

The company noted that the system can still produce unintended or unsafe responses in rare cases, even after new safeguards were introduced. Detecting users who may be experiencing mental health crises such as psychosis or suicidal thoughts remains difficult, OpenAI said, since those types of conversations are uncommon and hard to measure accurately.

The company’s internal analysis estimated that about 0.15% of its 800 million weekly active users, or approximately 1.2 million people, show signs of potential suicide planning or intent each week, while roughly 0.05%, or about 400,000 users, send messages containing indicators of suicidal ideation.
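
Those user counts follow directly from the percentages OpenAI reported. A minimal, purely illustrative Python sketch, using only the figures cited above, confirms the arithmetic:

    # Back-of-the-envelope check of the user counts implied by OpenAI's percentages.
    weekly_active_users = 800_000_000

    planning_rate = 0.0015   # 0.15% showing signs of potential suicide planning or intent
    ideation_rate = 0.0005   # 0.05% sending messages with indicators of suicidal ideation

    print(int(weekly_active_users * planning_rate))  # 1,200,000 users per week
    print(int(weekly_active_users * ideation_rate))  # 400,000 users per week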

Officials also cautioned that the model’s performance continues to evolve, requiring constant testing and retraining to reduce risk in complex or high-stakes situations. In addition, OpenAI acknowledged that mental health experts reviewing ChatGPT’s responses did not always agree on what constituted an appropriate answer, highlighting the subjective nature of such evaluations.

Finally, the company said future testing results may not be directly comparable to past measurements, given ongoing changes in its models and methods—an admission that assessing progress in AI safety remains an inexact science.

Image Credit: Photo Agency / Shutterstock.com

Written By: Michael Adams

Senior Editor & Journalist

Michael Adams is a senior editor and legal journalist at AboutLawsuits.com with over 20 years of experience covering financial, legal, and consumer protection issues. He previously held editorial leadership roles at Forbes Advisor and contributes original reporting on class actions, cybersecurity litigation, and emerging lawsuits impacting consumers.



