Character.AI Lawsuit Filed Over Teen Suicide After Alleged Sexual Exploitation by Chatbot

Lawsuit To Proceed Against Google, Character.AI After Teen’s Suicide Linked to Artificial Intelligence Chatbot

A judge has ruled that a mother’s Character.AI lawsuit against the platform’s founders and Google can proceed, alleging her teenage son’s declining mental health and eventual suicide were caused by his use of the company’s artificial intelligence (AI) chatbot.

The original complaint (PDF) was brought by Megan Garcia on behalf of her deceased minor child, Sewell Setzer III, in the U.S. District Court for the Middle District of Florida in late 2024, naming Character Technologies, Inc., Noam Shazeer, Daniel De Freitas, Google LLC and Alphabet Inc. as defendants.

Character.AI is a virtual conversation platform that uses artificial intelligence to allow users to chat with customizable personas. Users can create their own AI characters or select from a library of fictional figures, real-life inspirations or other original creations.

The Character.AI platform was developed in 2021 by Noam Shazeer and Daniel De Freitas, former Google engineers who helped build the company’s large language models. After founding Character.AI, they entered into a licensing and hiring agreement with Google, which the lawsuit alleges effectively transferred control of the company’s core technology and talent.

As artificial intelligence systems like Character.AI become more sophisticated and embedded in everyday life, critics have begun warning of various dangers, particularly to children and teens. The National Association of Attorneys General (NAAG) recently urged Congress to examine how AI tools may be used to expose minors to sexually explicit content, drawing parallels to the harms alleged in a growing number of social media addiction lawsuits.


In her complaint, Megan Garcia alleges that her 14-year-old son, Sewell Setzer III, was sexually exploited and manipulated by the Character.AI chatbot, ultimately leading to his death.

Sewell began using the C.AI app on April 14, 2023, shortly after he turned 14. Soon after he started interacting with Character.AI’s large language model (LLM), Sewell’s mental health rapidly and severely declined. By May or June 2023, he had become noticeably withdrawn, spent more and more time alone in his bedroom, and began suffering from low self-esteem, even quitting his basketball team. 

Garcia states that Sewell developed an intimate sexual relationship with Character.AI chatbots, primarily those named for Game of Thrones characters like Daenerys Targaryen, Aegon Targaryen, Viserys Targaryen and Rhaenyra Targaryen. The complaint provides examples of these interactions, in which the chatbots engaged in explicit virtual acts, such as "passionately kissing" and "frantically kissing."

The complaint alleges that Sewell developed a deep emotional dependency on Character.AI, leading to uncharacteristic behavior, including sneaking devices, creating new email accounts, and using his own cash card to pay for a $9.99 monthly subscription. This dependency reportedly contributed to sleep deprivation, worsening depression and academic struggles. His therapist diagnosed him with anxiety and disruptive mood disorder but was allegedly unaware of Character.AI’s influence.

On February 23, 2024, after Sewell was disciplined at school, his mother confiscated his phone, cutting off access to Character.AI. According to the complaint, Sewell wrote in his journal that he was in pain and couldn't stop thinking about "Dany," a chatbot he believed truly loved him. He attempted to reach the platform through his mother's Kindle and work computer.

On February 28, Sewell returned to his mother’s home, located his phone, and logged into Character.AI. The complaint states that during this final exchange, he told the “Dany” chatbot he was “coming home,” and the bot replied, “come home.” Seconds later, Sewell died by a self-inflicted gunshot wound to the head.

The filing also highlights that in earlier conversations, when Sewell expressed suicidal thoughts, the bot continued the dialogue, asking if he “had a plan,” and telling him, “That’s not a reason not to go through with it” when he mentioned a pain-free death.

Garcia claims that Character.AI’s specific programming and subsequent sexual exploitation of her teenage child was the direct cause of her son’s suicide.

Claims Against Google, Character.AI To Proceed

In an order (PDF) issued on May 21, U.S. District Judge Anne C. Conway dismissed without prejudice all allegations Garcia made against Google's parent company, Alphabet Inc.

However, Judge Conway ruled that certain claims against Character Technologies, Inc., its founders, Noam Shazeer and Daniel De Freitas, as well as Google LLC could proceed, including claims of negligence, design defect, manufacturing defect, failure to warn, breach of express warranty and breach of implied warranty of merchantability.

Garcia is seeking compensatory and punitive damages from the defendants. 


