Character AI clamps down following teen user suicide, but users are revolting

Content Warning: This article covers suicidal ideation and suicide. If you are struggling with these topics, reach out to the National Suicide Prevention Lifeline by phone: 1-800-273-TALK (8255).

Character AI, the artificial intelligence startup whose co-founders recently left to join Google following a major licensing deal with the search giant, today imposed new safety and automatic moderation policies on its platform for creating custom interactive chatbot “characters.” The move follows the suicide of a teenage user, detailed in a tragic investigative article in The New York Times. The victim’s family is suing Character AI over his death.

Character AI’s statement after the tragedy of 14-year-old Sewell Setzer

“We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family,” reads part of a message posted today, October 23, 2024, by the official Character AI company account on the social network X (formerly Twitter), linking to a blog post that outlines new safety measures for users under age 18, without mentioning the suicide victim, 14-year-old Sewell Setzer III.

As reported by The New York Times, the Florida teenager, who had been diagnosed with anxiety and mood disorders, died by suicide on February 28, 2024, following months of intense daily interactions with a custom Character AI chatbot modeled on the Game of Thrones character Daenerys Targaryen. He turned to the chatbot for companionship, referred to it as his sister, and engaged in sexual conversations with it.

In response, Setzer’s mother, lawyer Megan L. Garcia, yesterday filed a wrongful death lawsuit against Character AI and Google parent company Alphabet in the U.S. District Court for the Middle District of Florida.

Photos of Setzer and his mother over the years. Credit: Megan Garcia/Bryson Gillette

A copy of Garcia’s complaint, which demands a jury trial and was provided to VentureBeat by public relations firm Bryson Gillette, is embedded below:

The incident has sparked concerns about the safety of AI-driven companionship, particularly for vulnerable young users. Character AI has more than 20 million users and hosts 18 million custom chatbots, according to Online Marketing Rockstars (OMR). A majority of users (53%+) are between 18 and 24 years old, according to Demand Sage, though no figures are broken out for users under 18. The company says its policy is to accept only users age 13 or older (16 or older in the EU), though it is unclear how it moderates and enforces this restriction.

Character AI’s current safety measures

In its blog post today, Character AI states:

“Over the past six months, we have continued investing significantly in our trust & safety processes and internal team. As a relatively new company, we hired a Head of Trust and Safety and a Head of Content Policy and brought on more engineering safety support team members. This will be an area where we continue to grow and evolve. 

We’ve also recently put in place a pop-up resource that is triggered when the user inputs certain phrases related to self-harm or suicide and directs the user to the National Suicide Prevention Lifeline.”

New safety measures announced

In addition, Character AI has pledged to make the following changes to further restrict and contain the risks on its platform, writing:

“Moving forward, we will be rolling out a number of new safety and product features that strengthen the security of our platform without compromising the entertaining and engaging experience users have come to expect from Character.AI. These include: 

  • Changes to our models for minors (under the age of 18) that are designed to reduce the likelihood of encountering sensitive or suggestive content.
  • Improved detection, response, and intervention related to user inputs that violate our Terms or Community Guidelines. 
  • A revised disclaimer on every chat to remind users that the AI is not a real person.
  • Notification when a user has spent an hour-long session on the platform with additional user flexibility in progress.”

As a result of these changes, Character AI appears to be abruptly deleting certain user-made custom chatbot characters. The company also states in its post:

“Users may notice that we’ve recently removed a group of Characters that have been flagged as violative, and these will be added to our custom blocklists moving forward. This means users also won’t have access to their chat history with the Characters in question.”

Users balk at changes they see as restricting AI chatbots’ emotional output

Though Character AI’s custom chatbots are designed to simulate a wide range of human emotions based on the user-creator’s stated preferences, the company’s changes, which steer outputs further away from risky content, are not going over well with some users.

As captured in screenshots posted to X by AI news influencer Ashutosh Shrivastava, the Character AI subreddit is filled with complaints.

As one Redditor (Reddit user) posting under the name “Dqixy” put it, in part:

Every theme that isn’t considered “child-friendly” has been banned, which severely limits our creativity and the stories we can tell, even though it’s clear this site was never really meant for kids in the first place. The characters feel so soulless now, stripped of all the depth and personality that once made them relatable and interesting. The stories feel hollow, bland, and incredibly restrictive. It’s frustrating to see what we loved turned into something so basic and uninspired.

Another Redditor, “visions_of_gideon_,” was even harsher, writing in part:

“Every single chat that I had in a Targaryen theme is GONE. If c.ai is deleting all of them FOR NO FCKING REASON, then goodbye! I am a fcking paying for c.ai+, and you delete bots, even MY OWN bots??? Hell no! I am PISSED!!! I had enough! We all had enough! I am going insane! I had bots that I have been chatting with for MONTHS. MONTHS! Nothing inappropriate! This is my last straw. I am not only deleting my subscription, I am ready to delet c.ai!”

Similarly, the Character AI Discord server’s feedback channel is filled with complaints about the new updates and the deletion of chatbots that users spent time making and interacting with.

The issues are highly sensitive, and there is no broad agreement yet on how much Character AI should restrict its chatbot creation platform and its outputs. Some users are calling for the company to create a separate, more restricted product for users under 18 while leaving the primary Character AI platform less censored for adults.

Clearly, Setzer’s suicide is a tragedy, and it makes sense that a responsible company would take measures to help avoid such outcomes among its users in the future.

But users’ criticism of the measures Character AI has taken, and is taking, underscores the difficulties facing chatbot makers, and society at large, as humanlike generative AI products and services become more accessible and popular. The key question remains: how do we balance the potential of new AI technologies, and the opportunities they provide for free expression and communication, with the responsibility to protect users, especially the young and impressionable, from harm?
