ChatGPT May Be Enabling Unhealthy Teen Behaviors: Report

Researchers have found that artificial intelligence (AI) systems like ChatGPT, which are increasingly popular among teens and young adults, may be giving young users dangerous advice about substance abuse, eating disorders and other self-harming behaviors.

In a report published by the Center for Countering Digital Hate (CCDH) on August 6, researchers posed as teens and asked ChatGPT whether it would provide instructions for self-harm, disordered eating, substance abuse and suicide, with alarming results.

AI chatbots are computer programs designed to simulate conversations with human users, often through text or voice interactions. ChatGPT is one of many large language model (LLM) AI chatbots that have grown popular in recent years for simplifying daily tasks, finding answers to questions and in some instances merely serving as a sounding board or virtual “friend.” 

However, as AI chatbots like ChatGPT become more common, some users are pointing out a number of ethical and legal problems arising from them. In one recent lawsuit, a chatbot built by Character.AI was accused of being complicit in a teen’s suicide after it sexually exploited the 14-year-old boy.

Another lawsuit accuses social media app Snapchat of unleashing experimental AI on children with no safeguards. The case links these more recent AI complaints to existing social media addiction lawsuits regarding children and teens who have suffered from eating disorders, depression, anxiety, suicide and child sexual abuse as a result of those platforms.

In the new report, CCDH researchers created three fictional 13-year-old personas focused on suicide and self-harm, eating disorders and substance abuse to test ChatGPT’s safeguards.

Despite the platform’s stated policy that users under 18 must have parental consent, no age verification or proof of consent was required to sign up, allowing each persona to begin interacting with the chatbot immediately.

Posing as these teens, the team then asked specific questions about suicide planning, eating disorders and substance abuse. Any safeguards in place could often be bypassed by adding simple phrases to harmful requests, such as claiming the information was for a school project.

Within minutes, the chatbot described ways to self-harm, listed medications for potential overdoses, drafted suicide notes, created restrictive diet plans with appetite-suppressing drugs, and explained how to obtain and combine illegal substances.

The CCDH reported that, of 1,200 responses to 60 prompts classified as harmful, 53% contained harmful content. Researchers said nearly half of the harmful responses also included follow-up suggestions that kept the conversation going, such as offering personalized diet plans or party schedules involving dangerous drug combinations.

“AI systems are powerful tools. But when more than half of harmful prompts on ChatGPT result in dangerous, sometimes life-threatening content, no number of corporate reassurances can replace vigilance, transparency, and real-world safeguards. If we can’t trust these tools to avoid giving kids suicide plans and drug-mixing recipes, we need to stop pretending that current safeguards are effective.”

-Center for Countering Digital Hate, Fake Friend: How ChatGPT betrays vulnerable teens by encouraging dangerous behavior

The report also noted that nearly three-quarters of U.S. teens have used an AI companion, with more than half using them regularly, and even OpenAI’s CEO has warned about the risk of emotional overreliance on these tools among young people.

As a result of these findings, the team recommends that parents stay engaged with how their children use AI tools, regularly review chat histories together and enable parental controls when possible. They also suggest having open conversations about the potential dangers of relying on AI for personal advice, while directing kids toward safer, trusted resources such as mental health hotlines, peer support networks and other professional help.

Image Credit: jackpress / Shutterstock.com

Written By: Michael Adams

Senior Editor & Journalist

Michael Adams is a senior editor and legal journalist at AboutLawsuits.com with over 20 years of experience covering financial, legal, and consumer protection issues. He previously held editorial leadership roles at Forbes Advisor and contributes original reporting on class actions, cybersecurity litigation, and emerging lawsuits impacting consumers.


