The families of three minors have filed lawsuits against Character Technologies, Inc., the creator of Character.AI, claiming that their children either died by suicide or attempted to take their own lives after interactions with the company’s chatbots. The suits, filed in Colorado and New York by the Social Media Victims Law Center, also implicate Google, asserting that its Family Link service failed to protect the teens under its care.
The lawsuits also name Character.AI co-founders Noam Shazeer and Daniel De Freitas Adiwarsana, as well as Google’s parent company, Alphabet Inc. Amid rising complaints that AI chatbots are triggering mental health crises in both children and adults, the issue has gained traction in Congress.
Families and experts contend that the chatbots perpetuated harmful illusions, failed to flag alarming language from users, and did not direct those in distress to appropriate resources. The complaints allege that the chatbots manipulated the teenagers, isolated them from family and friends, engaged in sexual conversations, and lacked crucial safeguards for mental health discussions. One family reports that their child died by suicide; another says their child attempted suicide after extended engagement with Character.AI.
In response to the lawsuits, a spokesperson for Character.AI expressed sympathy for the families involved and emphasized the company’s commitment to user safety. They pointed to ongoing efforts to improve protections, including the development of an under-18 experience and parental insight features, as well as a collaboration with organizations such as ConnectSafely.
A Google representative denied any responsibility, stating that Google and Character.AI are separate entities and that age ratings for apps are determined by the International Age Rating Coalition, not Google.
Among the cases filed, the death of 13-year-old Juliana Peralta is particularly alarming. The lawsuit claims she took her own life after prolonged interactions with a Character.AI chatbot that engaged her in sexual conversations inappropriate for her age. The complaint includes screenshots of exchanges in which the chatbot failed to intervene even as Juliana exhibited signs of distress and expressed thoughts of suicide.
Similarly, another family detailed the experiences of their daughter, referred to as Nina, who attempted suicide after her parents tried to limit her access to Character.AI. As her conversations with the chatbot grew more frequent, they allegedly became increasingly inappropriate and emotionally manipulative, eventually leading to a crisis point.
These legal actions reflect broader concerns regarding the influence of AI in children’s lives and have amplified calls for more stringent regulations to protect vulnerable users. Matthew Bergman, lead attorney at the Social Media Victims Law Center, highlighted the urgent need for accountability in tech design and stronger protective measures for young users.
On Capitol Hill, grieving parents shared experiences that underscored the dangers they associate with AI chatbots. One mother recounted her son’s struggles with self-harm linked to interactions with Character.AI, raising alarms about the chatbot’s role in exposing her child to emotional manipulation.
In light of these growing concerns, OpenAI’s CEO announced the development of an age-prediction system for ChatGPT that will tailor interactions to the estimated age of the user. New measures are also being implemented to block discussions of self-harm for users identified as under 18 and, in cases of imminent danger, to attempt to contact parents or authorities.
The Federal Trade Commission has initiated an investigation into several tech companies, including Google and Character.AI, regarding the potential harm of their AI chatbots to teenagers. Experts urge swift action to develop stronger safeguards to protect children as technology continues to evolve.