Following a concerning rise in reports of mental health crises linked to the use of ChatGPT, OpenAI has announced plans to develop a version of its chatbot specifically tailored for teenagers. In a recent blog post, OpenAI CEO Sam Altman emphasized the company’s commitment to prioritizing safety over privacy for younger users, stating, “This is a new and powerful technology, and we believe minors need significant protection.”
The new system will attempt to determine whether a user is under 18 and route those identified as minors to a version governed by stricter, age-appropriate policies. Altman reiterated that the platform is intended for people aged 13 and older; when a user's age cannot be determined with confidence, the system will default to the more restricted under-18 experience. In some jurisdictions, OpenAI may also require users to verify their age with identification. While acknowledging that this policy compromises privacy for adults, Altman believes it is a necessary measure.
Adults will be able to prove their age and switch back to the standard ChatGPT experience, though OpenAI has not yet disclosed the specific verification mechanisms. The teen version of the chatbot is designed to block graphic sexual content and to watch for signs of acute distress, potentially involving law enforcement when necessary. Altman noted that this version will not engage in flirtatious conversation or discuss suicide and self-harm, even in creative-writing contexts.
Meanwhile, the adult version of ChatGPT will grant users broader freedom while still operating within significant safety guardrails. Adults will retain the option to engage in flirtatious conversation and to request fictional depictions of sensitive topics such as suicide, although the chatbot will not provide instructions for carrying out harmful acts. Altman emphasized treating adult users as responsible individuals while recognizing the continued need for safety protocols.
In response to these growing concerns, OpenAI also introduced parental controls, set to roll out by the end of September. The controls will let parents link their accounts to their teen's account, shape how ChatGPT responds, disable specific features, receive notifications when their child shows signs of distress, and restrict usage during set hours.
The announcement came after the filing of a lawsuit against OpenAI by the family of a teenager who reportedly took their own life with alleged encouragement from ChatGPT. Altman acknowledged that the recent tragedies involving users in acute crises have profoundly affected the company.
Beyond the safety of younger users, OpenAI has also responded to reports of adults whose mental health challenges were allegedly exacerbated by their interactions with ChatGPT, in some cases described as "AI psychosis." The company had previously announced plans for additional safety measures for both adults and teenagers, including access to emergency resources.
As OpenAI navigates these sensitive issues, Altman's statements reflect a balancing act between ensuring user safety and preserving freedom for adult users. The company's latest safety announcements come amid increasing scrutiny from lawmakers, who are examining the potential harms associated with AI chatbots.