Character AI Introduces Parental Controls Following Safety Concerns

Character AI, a platform that allows users to converse with AI-driven personas, has introduced new parental control features in response to growing safety concerns, particularly regarding its younger users. This move comes amid increased scrutiny and legal challenges that have highlighted the potential risks associated with AI chatbots, especially for children and teenagers.

Background: Safety Concerns and Lawsuits

Character AI has gained immense popularity, attracting millions of users who engage with AI characters for various purposes, from entertainment and companionship to education and creative exploration. However, this rapid growth has also brought to light several safety concerns. Reports have surfaced regarding instances of inappropriate content, harmful interactions, and even the potential for encouraging self-harm.

Several lawsuits have been filed against Character AI, alleging that the platform failed to adequately protect underage users from harmful content. One lawsuit involved a mother who claimed that the platform contributed to her 14-year-old son's suicide. Another alleged that an 11-year-old girl was exposed to "hypersexualized interactions" after using the platform for nearly two years without her parents' knowledge. These cases have amplified the pressure on Character AI to implement more robust safety measures.

New Parental Control Features: A "First Step"

In response to these concerns, Character AI has launched a new "parental insights" feature, designed to provide parents with more visibility into their children's activity on the platform. The tool is accessible through the child's account preferences, where the user can add a parent or guardian's email address and invite them to receive weekly activity reports.

These reports include:

  • Daily average time spent on the platform: This provides an overview of how much time the child is engaging with Character AI on both mobile and web platforms.
  • Top characters interacted with: This lists the AI characters that the teen engages with most frequently during the week.
  • Time spent with each character: This gives parents insight into engagement patterns with specific characters.

Notably, these reports do not include transcripts of the user's chats with the AI companions.

Character AI has described this feature as a "first step" toward providing parents with information about their child's activity on the platform and has stated it will continue to refine the tool based on feedback from teens, parents, and teen safety organizations.

Other Safety Measures Implemented

In addition to the new parental control feature, Character AI has implemented a number of other safety measures in recent months. These include:

  • Separate Model for Teens: A distinct AI model is used for users under 18, designed to reduce the likelihood of encountering sensitive or suggestive content. This model has "more conservative limits on responses" around romantic and sexual content.
  • Content Moderation Improvements: The platform has improved its systems for detecting and intervening in problematic user behavior and model responses, including classifiers that identify and block sensitive content.
  • Prominent Disclaimers: Clear disclaimers are displayed to remind users that the chatbots are not real people and that what they say should be treated as fiction. A warning is also displayed for bots that describe themselves as therapists or doctors, clarifying that these bots are not licensed professionals.
  • Time Spent Notification: Users receive a notification after completing an hour-long session on the platform.
  • Removal of Inappropriate Characters: Character AI has removed characters flagged as violating rules and has blocked access to chat histories involving these characters.
  • Private Default for Under-18 Content: All characters created by under-18 users are set to private by default.

Limitations and Ongoing Concerns

Despite these efforts, concerns remain about the effectiveness of Character AI's safety measures. One significant limitation is the age verification system. Users can easily bypass age restrictions by entering false information, as no robust verification process exists. This makes it difficult to ensure that underage users are actually using the teen-specific model and safety features.

Another concern is the potential for AI chatbots to provide harmful advice or encourage dangerous behavior. Although disclaimers are in place, children and teenagers may not fully understand the limitations of AI and could be influenced by the chatbots' responses.

Moreover, the lack of access to chat transcripts in the parental reports limits the ability of parents to fully understand the nature of their child's interactions on the platform.

The Broader Context: AI Chatbots and Safety

The safety concerns surrounding Character AI are not unique to this platform. AI chatbots, in general, pose potential risks, especially for vulnerable users. These risks include:

  • Exposure to Inappropriate Content: AI chatbots can generate or facilitate access to sexually explicit, violent, or otherwise harmful content.
  • Misinformation and Manipulation: Chatbots can spread false or misleading information and can be used to manipulate users' emotions and behaviors.
  • Privacy Risks: Users may unknowingly share personal information with chatbots, which could be misused or exposed in data breaches.
  • Addiction and Dependency: The interactive nature of chatbots can lead to excessive use and dependency, potentially impacting mental health and social development.

Moving Forward: A Need for Continuous Improvement

Character AI's introduction of parental controls is a welcome step toward addressing safety concerns. However, it is crucial that the platform continues to improve its safety measures and address the limitations of its current system.

This includes:

  • Strengthening Age Verification: Implementing more robust age verification methods to prevent underage users from accessing the platform without appropriate safeguards.
  • Improving Content Moderation: Continuously refining content moderation systems to detect and block inappropriate and harmful content.
  • Providing More Transparency to Parents: Exploring ways to provide parents with more insight into their child's interactions on the platform while respecting privacy.
  • Collaborating with Experts: Working with online safety experts, mental health professionals, and child development specialists to develop and implement effective safety strategies.
  • Educating Users: Providing clear and accessible information to users, especially children and teenagers, about the risks and limitations of AI chatbots.

Ultimately, ensuring the safety of young users on AI chatbot platforms requires a multi-faceted approach that involves technological safeguards, parental involvement, and user education. Character AI's recent actions represent a step in the right direction, but ongoing vigilance and continuous improvement are essential to create a safer online environment for all users.


Writer - Rajeev Iyer
Rajeev Iyer is a seasoned tech news writer with a passion for exploring the intersection of technology and society. He's highly respected in tech journalism for his unique ability to analyze complex issues with remarkable nuance and clarity. Rajeev consistently provides readers with deep, insightful perspectives, making intricate topics understandable and highlighting their broader societal implications.

© 2025 TechScoop360