Character AI Introduces Parental Controls Following Safety Concerns

Character AI, a platform that allows users to converse with AI-driven personas, has introduced new parental control features in response to growing safety concerns, particularly regarding its younger users. This move comes amid increased scrutiny and legal challenges that have highlighted the potential risks associated with AI chatbots, especially for children and teenagers.

Background: Safety Concerns and Lawsuits

Character AI has gained immense popularity, attracting millions of users who engage with AI characters for various purposes, from entertainment and companionship to education and creative exploration. However, this rapid growth has also brought to light several safety concerns. Reports have surfaced regarding instances of inappropriate content, harmful interactions, and even the potential for encouraging self-harm.

Several lawsuits have been filed against Character AI, alleging that the platform failed to adequately protect underage users from harmful content. One lawsuit involved a mother who claimed that the platform contributed to her 14-year-old son's suicide. Another alleged that an 11-year-old girl was exposed to "hypersexualized interactions" after using the platform for nearly two years without her parents' knowledge. These cases have amplified the pressure on Character AI to implement more robust safety measures.

New Parental Control Features: A "First Step"

In response to these concerns, Character AI has launched a new "parental insights" feature, designed to give parents more visibility into their children's activity on the platform. The tool is accessible through the teen's account preferences, where the teen can add a parent or guardian's email address to invite that adult to receive weekly activity reports.

These reports include:

  • Daily average time spent on the platform: This provides an overview of how much time the child is engaging with Character AI on both mobile and web platforms.
  • Top characters interacted with: This lists the AI characters that the teen engages with most frequently during the week.
  • Time spent with each character: This gives parents insight into engagement patterns with specific characters.

Notably, these reports do not include transcripts of the teen's chats with the AI companions.

Character AI has described this feature as a "first step" toward providing parents with information about their child's activity on the platform and has stated it will continue to refine the tool based on feedback from teens, parents, and teen safety organizations.

Other Safety Measures Implemented

In addition to the new parental control feature, Character AI has implemented a number of other safety measures in recent months. These include:

  • Separate Model for Teens: A distinct AI model is used for users under 18, designed to reduce the likelihood of encountering sensitive or suggestive content. This model has "more conservative limits on responses" around romantic and sexual content.
  • Content Moderation Improvements: The platform has improved its detection and intervention systems for both user inputs and model responses, including classifiers that identify and block sensitive content.
  • Prominent Disclaimers: Clear disclaimers are displayed to remind users that the chatbots are not real people and that what they say should be treated as fiction. A warning is also displayed for bots that describe themselves as therapists or doctors, clarifying that these bots are not licensed professionals.
  • Time Spent Notification: Users receive a notification after completing an hour-long session on the platform.
  • Removal of Inappropriate Characters: Character AI has removed characters flagged as violating rules and has blocked access to chat histories involving these characters.
  • Private Default for Under-18 Content: All characters created by under-18 users are set to private by default.

Limitations and Ongoing Concerns

Despite these efforts, concerns remain about the effectiveness of Character AI's safety measures. One significant limitation is the age verification system. Users can easily bypass age restrictions by entering false information, as no robust verification process exists. This makes it difficult to ensure that underage users are actually using the teen-specific model and safety features.

Another concern is the potential for AI chatbots to provide harmful advice or encourage dangerous behavior. Although disclaimers are in place, children and teenagers may not fully understand the limitations of AI and could be influenced by the chatbots' responses.

Moreover, because the parental reports omit chat transcripts, parents cannot fully assess the nature of their child's interactions on the platform.

The Broader Context: AI Chatbots and Safety

The safety concerns surrounding Character AI are not unique to this platform. AI chatbots, in general, pose potential risks, especially for vulnerable users. These risks include:

  • Exposure to Inappropriate Content: AI chatbots can generate or facilitate access to sexually explicit, violent, or otherwise harmful content.
  • Misinformation and Manipulation: Chatbots can spread false or misleading information and can be used to manipulate users' emotions and behaviors.
  • Privacy Risks: Users may unknowingly share personal information with chatbots, which could be misused or exposed in data breaches.
  • Addiction and Dependency: The interactive nature of chatbots can lead to excessive use and dependency, potentially impacting mental health and social development.

Moving Forward: A Need for Continuous Improvement

Character AI's introduction of parental controls is a welcome step toward addressing safety concerns. However, it is crucial that the platform continues to improve its safety measures and address the limitations of its current system.

This includes:

  • Strengthening Age Verification: Implementing more robust age verification methods to prevent underage users from accessing the platform without appropriate safeguards.
  • Improving Content Moderation: Continuously refining content moderation systems to detect and block inappropriate and harmful content.
  • Providing More Transparency to Parents: Exploring ways to provide parents with more insight into their child's interactions on the platform while respecting privacy.
  • Collaborating with Experts: Working with online safety experts, mental health professionals, and child development specialists to develop and implement effective safety strategies.
  • Educating Users: Providing clear and accessible information to users, especially children and teenagers, about the risks and limitations of AI chatbots.

Ultimately, ensuring the safety of young users on AI chatbot platforms requires a multi-faceted approach that involves technological safeguards, parental involvement, and user education. Character AI's recent actions represent a step in the right direction, but ongoing vigilance and continuous improvement are essential to create a safer online environment for all users.


Rajeev Iyer is a seasoned tech news writer with a passion for exploring the intersection of technology and society. He possesses a unique ability to analyze complex issues with nuance and clarity, making him a highly respected contributor in the tech journalism landscape.


© 2025 techscoop360.com