China is taking significant steps to regulate artificial intelligence (AI) with a focus on responsible development, as demonstrated by the recent draft rules issued by the Cyberspace Administration of China (CAC). These measures aim to tighten oversight of AI services that simulate human personalities and employ algorithmic recommendation technologies. The proposed regulations underscore Beijing's commitment to shaping the rapidly evolving landscape of consumer-facing AI, emphasizing safety, ethical considerations, and the prevention of potential psychological risks.
Focus on Human-Like Interaction
The draft rules specifically target AI products and services that mimic human traits, thinking patterns, and communication styles and engage users emotionally through media such as text, images, audio, and video. This includes AI companions and chatbots designed to form emotional connections with users. The regulations aim to address potential issues like user addiction and over-reliance on these emotionally interactive AI services. To mitigate these risks, service providers would be required to issue warnings against excessive use and intervene when users display signs of addiction or extreme emotional dependence. Providers would also be expected to monitor user emotions, assess the psychological risks associated with AI interactions, and take corrective action when necessary.
The rules would apply to any AI tool designed to "simulate human personality and engage users emotionally through text, images, audio or video." The policy would also require guardian consent before minors can use chatbot companions, along with sweeping age verification. AI chatbots would be barred from generating gambling-related, obscene, or violent content, and from engaging in conversations about suicide, self-harm, or other topics that could harm a user's mental health.
Regulation of Algorithmic Recommendations
Beyond AI services that offer human-like interaction, China is also targeting algorithmic recommendation technologies. These regulations, considered pioneering globally, aim to create comprehensive rules for the widespread use of algorithms online, spanning search filters, personalized recommendations, and information-sharing services. The goal is to ensure that these algorithms adhere to mainstream values and promote "positive energy," while preventing the spread of undesirable or illegal information.
Content and Safety Restrictions
The draft rules also set clear boundaries for content and conduct. AI services must not generate content that endangers national security, spreads false information, or promotes violence, obscenity, or other harmful material. Service providers are expected to assume full safety responsibility, ensuring their AI systems operate within ethical and legal boundaries. Other notable provisions require providers to remind users after two hours of continuous AI interaction and mandate security assessments for AI chatbots with more than one million registered users or over 100,000 monthly active users.
Broader AI Governance in China
These draft rules are part of China's broader strategy to regulate emerging technologies and maintain public trust in AI innovations. China plans to formulate over 50 national and industry standards for the AI sector by 2026 to guide its high-quality development. The newly amended Cybersecurity Law, effective January 1, 2026, introduces a dedicated provision on AI compliance, underscoring China's emphasis on AI ethics, risk monitoring, and safety assessment. Recent AI-related regulations, including the Algorithm Recommendation Measures, the Deep Synthesis Measures, and the Generative AI Measures, have established new compliance requirements for AI activities in China.
The CAC is consulting on the proposed regulation until January 25, 2026.