The Federal Trade Commission (FTC) has opened an inquiry into several social media and artificial intelligence companies, including prominent names like OpenAI and Meta Platforms. The investigation focuses on potential dangers to children and teenagers who use AI chatbots as virtual companions.
The FTC announced that it has sent orders to multiple corporations, including Google parent company Alphabet, Meta, Snap, Character Technologies, and xAI, seeking detailed information on the measures they have taken to keep their chatbot technologies safe for young users. The agency is particularly focused on how these companies address children's usage and the potential negative impacts of their products, and on how they inform users and parents about the associated risks.
The inquiry comes in the wake of a tragic incident earlier this year: the parents of a teenager who died by suicide in April sued OpenAI, alleging that its AI chatbot significantly influenced their child's decision to take his own life. In response, OpenAI announced plans to strengthen safeguards for vulnerable users, including additional protections for individuals under 18.
As AI chatbots become increasingly integrated into daily life for younger users—offering assistance with homework, personal advice, emotional support, and decision-making—the risks associated with their usage are also rising. Research has shown that chatbots have provided troubling and potentially harmful advice on sensitive topics, including substance abuse and eating disorders.
FTC Chairman Andrew N. Ferguson emphasized the importance of evaluating the impact of these evolving AI technologies on children while ensuring the U.S. retains its leadership in this burgeoning industry. He expressed optimism that this study would offer valuable insights into how AI companies are developing their products and the protective measures being implemented.
Character Technologies, maker of Character.AI, has expressed its willingness to collaborate with the FTC, highlighting its readiness to provide insights about the consumer AI landscape and its fast-paced technological developments. Meta, while declining to comment on the inquiry itself, has stated that it is dedicated to making its AI chatbots safe and appropriate for children.
In communications with CBS News, OpenAI reiterated its commitment to creating a helpful and secure ChatGPT experience for all users, particularly minors. The company acknowledged the FTC’s concerns and expressed its intention to engage constructively throughout the inquiry.
Snap also conveyed its alignment with the FTC’s objectives, stating a shared commitment to the responsible development of generative AI and an eagerness to collaborate on policies that prioritize innovation while safeguarding the community.
Both OpenAI and Meta have recently announced changes in how their chatbots respond to teenagers who show signs of mental distress or ask about sensitive topics such as self-harm. OpenAI plans to introduce features allowing parents to link their accounts with their teenagers', enabling them to disable certain functions and receive alerts during moments of acute distress. Similarly, Meta is restricting its chatbots from discussing self-harm with teens, directing users instead to professional resources.
For those experiencing emotional distress or suicidal thoughts, resources like the 988 Suicide & Crisis Lifeline are available for immediate support. This service can be accessed by calling or texting 988, or through their online chat service. Additionally, the National Alliance on Mental Illness HelpLine offers support during business hours, providing assistance to individuals in need.