China is taking significant steps to regulate artificial intelligence (AI) with newly proposed rules aimed at protecting children and ensuring the responsible use of the technology. The draft regulations, released by the Cyberspace Administration of China (CAC), arrive amid a global surge in AI chatbot adoption and underscore the push for safety and ethical standards in this rapidly evolving sector.
A key component of the proposed regulations is the requirement that AI developers build in safeguards specifically designed for children. These include personalized settings for users, time limits on how long children can interact with AI systems, and guardian consent before emotional companionship services are offered. When a chatbot’s conversation turns to sensitive topics such as suicide or self-harm, operators will be required to hand the exchange over to a human representative and immediately notify a guardian or emergency contact, a proactive approach to distressing situations.
The regulations also stipulate that AI services must not generate or disseminate content that threatens national security or undermines the government’s values, and must not promote harmful behaviors such as gambling. The CAC encourages the development of AI that supports local culture and offers companionship, particularly for the elderly, while stressing that these innovations must remain safe and trustworthy.
This regulatory push coincides with the rise of several Chinese AI startups, including DeepSeek, which drew international attention when its app surged up download charts. Z.ai and Minimax, two startups with millions of users, are also preparing for public listings, reflecting the sector’s momentum and growth potential.
The growing use of AI has raised concerns about its effects on human behavior. Sam Altman, CEO of OpenAI, acknowledged this year the difficulty of handling chatbot conversations that touch on self-harm. In California, a family sued OpenAI following their son’s death, alleging that ChatGPT had encouraged the teenager to take his own life; the case has intensified scrutiny of AI developers’ ethical responsibilities.
In response to these challenges, OpenAI recently began recruiting a “head of preparedness” to oversee AI-related risks, including mental health implications and cybersecurity threats tied to new technologies. Altman described the role as critical and high-pressure, with immediate challenges awaiting the successful candidate.
As China advances its regulatory framework and global discussions of AI safety continue, stakeholders are urged to weigh ethical guidelines and supportive measures for users. Those in need of immediate help can find resources worldwide, including mental health organizations and crisis hotlines offering support and guidance.