In a significant policy shift, OpenAI CEO Sam Altman revealed that adult users of ChatGPT will soon have access to a less restricted version of the AI chatbot, including erotic material. Altman made the announcement during a media tour of the Stargate AI data center in Abilene, Texas; the change is set to take effect in December. He emphasized the importance of treating adult users with respect and autonomy, stating, “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
This change marks a departure from OpenAI’s prior stance, which generally prohibited sexually explicit content. While the specific scope of permissible erotica has yet to be clarified, the move suggests a broader recalibration of the company’s content policies. Altman noted that current iterations of ChatGPT were intentionally made “pretty restrictive” to mitigate potential mental health risks associated with certain types of content. However, he acknowledged that these restrictions made the chatbot “less useful and enjoyable to many users who had no mental health problems.” With advances in its safety tools, the company now believes it can relax these restrictions responsibly.
These safety tools refer to recent enhancements, including parental controls, designed to address concerns about the chatbot’s impact on younger users’ mental health. As safeguards for minors continue to expand, Altman is prepared to take a more flexible approach for adult users.
In addition to the anticipated changes regarding adult content, Altman announced that a new version of ChatGPT would be rolled out in the coming weeks, featuring the ability for the chatbot to adopt varied personalities. He stated, “If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it. But only if you want it.”
This change comes amid increased scrutiny surrounding OpenAI’s safety policies. Just last month, the Federal Trade Commission initiated an inquiry into tech companies, including OpenAI, focusing on potential risks to children and teenagers. This heightened oversight follows a lawsuit from a California couple who claimed that ChatGPT contributed to their teenage son’s suicide.
In response to these challenges, OpenAI also announced an eight-member expert council dedicated to examining the relationship between AI and mental well-being. The council aims to guide the company in understanding how artificial intelligence affects users’ mental health, emotions, and motivation, and to help define healthy interactions with AI through ongoing dialogue and oversight.