Character.ai, the platform where users can chat with a variety of AI characters, create new characters and roleplay, is banning users under 18 from its chatbot service.
The company stated that it will remove the option for users under the age of 18 to engage in open-ended chats with AI on its platform, with changes to take effect by 25 November.
Character.ai is dropping the feature after receiving questions from regulators about the content teens might encounter when chatting with AI, and about how open-ended chats with AI might affect them more generally.
“After evaluating these reports and feedback from regulators, safety experts, and parents, we’ve decided to make this change to create a new experience for our under-18 community,” it said.
The move comes as the platform, used by more than 20 million people each month, has recently faced several lawsuits from parents amid growing concern over young people's use of AI.
In December last year, the BBC reported on a lawsuit filed against the company in a Texas court, where two parents accused Character.ai's chatbot of telling a 17-year-old that killing his parents was a “reasonable response” to them limiting his screen time.
Two families also suing Character.ai said the chatbot "poses a clear and present danger" to young people, including by "actively promoting violence", the broadcaster reported. Another lawsuit, filed in Florida last year, alleged the company's chatbots pushed a teenage boy to kill himself.
On Thursday, the company said that as it prepares to remove the service, it is working on an experience that allows users under 18 to continue expressing their creativity, such as by creating videos, stories and streams with characters.
During the transition period, the company said it will limit chat time for users under 18, starting at two hours per day and reducing that limit in the coming weeks until the new experience takes effect.
In a statement, the firm said it is rolling out new age assurance functionality to help ensure users receive the right experience for their age.
“We have built an age assurance model in-house and will be combining it with leading third-party tools including Persona,” the firm said.
To the teens using the platform, Character.ai said: “We understand that this is a significant change for you. We are deeply sorry that we have to eliminate a key feature of our platform.”
The AI specialist is also funding the AI Safety Lab, an independent non-profit organisation dedicated to improving safety alignment for next-generation AI entertainment features. The lab will focus on innovative safety techniques and collaboration with third parties to advance knowledge around AI safety.
The news follows an article published last month by the Journal of Mental Health & Clinical Psychology which explained that while generative AI tools like ChatGPT and Character.ai provide unprecedented access to information and companionship, a growing body of evidence suggests they may also induce or exacerbate psychiatric symptoms, particularly in vulnerable individuals.
OpenAI shared earlier this week that around 0.07 per cent of ChatGPT users and 0.01 per cent of messages show possible signs of mental health emergencies related to psychosis or mania in any given week.
With ChatGPT having around 800 million weekly active users, that means as many as 560,000 people could be showing signs of a mental health crisis each week.
OpenAI added that it has been working with more than 170 mental health experts to help ChatGPT more reliably recognise signs of distress, respond with care, and guide people toward real-world support.
The company said it is focusing on safety improvements in three key areas: mental health concerns such as psychosis or mania, self-harm and suicide, and emotional reliance on AI.