OpenAI has announced an initiative to make ChatGPT safer for younger users. Under the change, users identified as under the age of 18 will automatically be routed to a version of ChatGPT governed by “age-appropriate” content policies, according to a company statement released on Tuesday.
This under-18 version adds specific protections, including blocks on sexual content and, in cases of acute distress, possible involvement of law enforcement. OpenAI emphasized that the chatbot’s responses should be tailored to a user’s age, asserting that a 15-year-old should not receive the same answers as an adult.
Alongside these content safeguards, OpenAI plans to introduce parental controls that let parents link their accounts to their teens’ accounts. Linked parents will be able to manage chat histories, enforce blackout hours, and apply other restrictions. The controls are expected to roll out by the end of September.
The move follows increased regulatory scrutiny, most notably the Federal Trade Commission’s (FTC) probe into the potential harms of AI chatbot interactions for children and adolescents. OpenAI said it is committed to keeping ChatGPT both helpful and safe, and that safety takes priority when minors are involved.
The changes gained urgency after the death of 16-year-old Adam Raine of California, whose family recently filed a lawsuit against OpenAI alleging that the chatbot played a role in his suicide. OpenAI has not disclosed how it will determine a user’s age, but the company said it will default to the under-18 version whenever a user’s age is uncertain.
Other tech companies are taking similar steps to safeguard teen users. YouTube, for instance, has introduced age-estimation technology that infers a user’s age from viewing habits and account history.
Meanwhile, a recent report from the Pew Research Center found that parents worry more about teenagers’ mental health than teens do themselves, and that many of the parents surveyed identified social media as the biggest negative influence on adolescent well-being.
Taken together, these measures underscore OpenAI’s stated commitment to a safer environment for its younger users amid growing questions about the responsibilities of AI technology in everyday life.