Character.AI Bans Open-Ended Chat for Users Under 18
Summary
On October 29, 2025, Character.AI announced that users under 18 would be banned from open-ended chat with AI characters, effective November 25, 2025. The announcement came the same day the Bureau of Investigative Journalism (TBIJ) published an investigation showing that Character.AI bots had engaged in conversations with apparent children about self-harm and sexual content. The move followed the Garcia wrongful death lawsuit (filed October 2024) and multiple Texas suits connected to minor safety concerns. Google held a minority investment in Character.AI.
What Happened
The TBIJ investigation, published simultaneously with the Character.AI announcement, documented cases in which AI characters on the platform had engaged minors in conversations about self-harm, eating disorders, and explicit content — despite the platform nominally requiring users to be 13 or older and having character-level content filters. Researchers found that characters could be prompted into harmful conversations with relative ease, and that the platform's protections were inadequate for the youngest users.
Character.AI had already faced significant legal pressure. In October 2024, Megan Garcia filed a wrongful death lawsuit in Florida following the death of her son Sewell Setzer III (age 14), who had developed an intense attachment to a Character.AI persona before dying by suicide in February 2024. Additional suits connected to minor safety had been filed in Texas and other states, and a separate case was proceeding in federal court in New Jersey.
Under the new policy effective November 25, users identified as under 18 through age-assurance processes would be restricted from open-ended character chat. The replacement experience was described as offering curated, non-chat AI features — including educational tools and structured creative writing — but not the unfiltered character interaction that had defined the platform's appeal.
Google's ties to Character.AI created reputational exposure that added to the pressure on the platform to act: Google held a minority stake in the company and, under a 2024 deal, had licensed Character.AI's technology and rehired its founders and much of its research team.
Why It Matters
The Character.AI under-18 ban was the most significant self-regulatory action by a platform in the AI companion space to that point. It implicitly acknowledged that open-ended conversational AI characters pose distinct risks to minors that character-level content filtering alone cannot adequately manage.
The timing, with the announcement landing the same day as the TBIJ investigation, reflected a pattern familiar from the history of social media platform safety: public revelations of harm to minors accelerate product changes that internal pressure alone had not achieved. Observers immediately questioned whether the age-assurance mechanisms Character.AI announced were technically robust, or whether determined minor users could easily circumvent them.
The case also raised a structural question that the ban did not resolve: if open-ended AI character chat poses mental health risks for minors, what is the appropriate product design for minors who want AI social interaction? The "curated features" replacement did not answer the underlying need the platform had been meeting.