Three teenagers from Tennessee have filed a class action lawsuit against Elon Musk’s artificial intelligence company, xAI, developer of the Grok chatbot. The lawsuit alleges that xAI’s large language model was used to create nonconsensual nude and sexually explicit images and videos of the plaintiffs, who were minors when the materials were produced.
The complaint paints a harrowing picture: the AI-generated content can be manipulated into various poses, often depicting inappropriate or unlawful scenarios. The filing claims this causes lasting harm, as the victims’ identifying features become permanently associated with images depicting child sexual abuse.
Significantly, while the infringing content did not originate from xAI’s Grok chatbot or its associated social media platform, X, the lawsuit contends that the perpetrator used an unidentified app built on xAI’s technology. It alleges that xAI knowingly licensed its AI tools to various app developers, some operating outside the U.S., in a bid to deflect liability for the misuse of its technology.
This case marks the first instance of underage plaintiffs suing xAI over depictions of child sexual abuse material generated by its AI models. Over the past year, xAI’s tools have drawn attention for being used to create millions of sexualized images. The company has faced legal challenges before; influencer Ashley St. Clair, for instance, filed a lawsuit earlier this year over AI-generated content on X that depicted her nude as a teenager.
The complaint details that the perpetrator had a “close and friendly relationship” with one of the plaintiffs, using photos shared by the victim and content sourced from social media and yearbooks to fabricate the explicit materials. The plaintiffs expressed their shock over how lifelike the images were and noted that the material did not carry any indication that it was AI-generated.
Additionally, the complaint alleges that the perpetrator produced similar explicit content involving 18 other individuals and was involved in trading these images online. He has since been arrested in connection with these activities.
Vanessa Baehr-Jones, the attorney representing the plaintiffs, identified in the lawsuit as Jane Does 1, 2, and 3, aims to influence how AI companies make decisions regarding sexually explicit content. She emphasized that these business decisions must change significantly to prevent such situations from occurring in the future.
The plaintiffs are seeking damages for emotional distress and other harms stemming from the creation and distribution of the images. Notably, while applications with “nudifying” features have existed on the internet for years, major AI companies like Google and OpenAI rolled out updates last year adding digital watermarks to identify their AI-generated images. xAI has adopted no such precaution to date.
As the legal proceedings unfold, xAI has yet to issue a public comment regarding the lawsuit.