Lawsuits Against Character.AI Over Child Suicide Risks Lead Platform To Ban Minors

Character.AI plans to bar minors from its open-ended chatbot feature, while still permitting them to create videos, stories and livestreams with their AI characters.

The Character.AI platform was developed in 2021 as a virtual conversation and roleplay tool that uses artificial intelligence to let users chat with customizable personas. Users can create their own AI characters or select from a library of fictional figures, personas inspired by real people and other original creations.

However, as artificial intelligence platforms like Character.AI have grown more advanced and integrated into daily life, experts have raised concerns about potential risks, particularly for younger users. In a recent announcement, the U.S. Federal Trade Commission (FTC) said it would be investigating multiple AI chatbot platforms, requiring them to provide information on how they account for any potentially negative impacts their technology may have on minors.

This regulatory action came after multiple lawsuits alleged that these kinds of online products sexually groom minors and contribute to self-harm among a generation of children and teens. As a result, AI chatbots are facing regulatory scrutiny similar to that prompted by numerous social media addiction lawsuits and Roblox sexual exploitation lawsuits.

According to a press release issued by Character.AI on October 29, the company will be updating its platform for users under 18 in response to growing concerns about how AI interactions may affect children and teenagers.

Company officials said the decision follows recent media reports and regulatory inquiries questioning the types of content minors may encounter and the broader impact of open-ended AI conversations, even when safety controls are functioning as intended.

After reviewing feedback from safety experts, parents and regulators, the company said it will redesign its teen platform to create a more secure, age-appropriate experience by implementing three main initiatives to promote the safety of younger users:

  • Character.AI will stop open-ended AI chats for users under 18. Instead, the company will develop a new teen-friendly experience focused on creative tools like videos, stories and character-based content.
  • A new age verification system will be introduced to ensure users access age-appropriate experiences. The company has created its own age assurance model and will supplement it with third-party tools such as Persona, an online identity verification and management service.
  • The company also plans to launch and fund the AI Safety Lab, an independent nonprofit dedicated to advancing safety and ethical alignment for AI-driven entertainment, through research into new safety methods.

In its press release, Character.AI said the AI Safety Lab will collaborate with the company's internal teams and with outside experts.

“These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers. But we believe they are the right thing to do. We want to set a precedent that prioritizes teen safety while still offering young users opportunities to discover, play, and create.”

— Character.AI

All AI roleplay conversations for users under 18 will cease on the Character.AI platform by November 25. During the transition, chat use for minors will be capped at two hours per day, with further reductions leading up to the change.

GUARD Act To Address AI Concerns

In addition to the actions being taken by Character.AI, a bipartisan coalition of U.S. senators, including Senators Mark R. Warner (D-VA), Josh Hawley (R-MO), Richard Blumenthal (D-CT), Chris Murphy (D-CT), and Katie Britt (R-AL), introduced a new bill on October 28, known as the Guidelines for User Age-Verification and Responsible Dialogue (GUARD) Act of 2025.

According to a press release issued with the bill, the GUARD Act will:

  • Ban AI chatbot companions for minors
  • Mandate that all AI chatbots disclose their status as non-human
  • Create new crimes for companies that allow AI to make sexually explicit content available to minors

In light of the concerns raised by multiple independent reports and lawsuits over the kinds of conversations AI chatbots are engaging in with youth, the Senators have announced their intention to use the GUARD Act to establish safety guardrails for the entire industry.

“AI chatbots pose a serious threat to our kids. More than seventy percent of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”

— Senator Josh Hawley

The GUARD Act has been introduced in the U.S. Senate, but has not yet received a committee vote.


Image Credit: Mijansk786 / Shutterstock.com

Written By: Michael Adams

Senior Editor & Journalist

Michael Adams is a senior editor and legal journalist at AboutLawsuits.com with over 20 years of experience covering financial, legal, and consumer protection issues. He previously held editorial leadership roles at Forbes Advisor and contributes original reporting on class actions, cybersecurity litigation, and emerging lawsuits impacting consumers.



