Anthropic has acknowledged that it is investigating a potential breach involving its proprietary AI model, Claude Mythos. The company says it received a report of unauthorized access allegedly obtained through one of its third-party vendors. An Anthropic spokesperson told Bloomberg, “We’re investigating a report claiming unauthorized access to Claude Mythos Preview.”
Bloomberg reports that it has verified the breach, having received live demos and screenshots from an individual claiming to be part of the group behind the access. The anonymous source identified themselves as an employee of an Anthropic contractor and said they used common internet sleuthing tools, the kind favored by cybersecurity researchers, to find access points to the model.
The situation raises concerns about the security of Claude Mythos, a model Anthropic has described as too dangerous for public release. The anonymous source, however, insists the group’s intentions are benign rather than malicious, telling Bloomberg that its interest lies in experimenting with new AI models, not exploiting them for harmful purposes.
The timeline of events points to a coordinated effort by the group. Members first set up a Discord channel where bots scoured GitHub for information on unreleased AI models. A recent data breach at the AI training startup Mercor reportedly provided additional context, which, combined with what the source knew from their role at an Anthropic contractor, allowed the group to hypothesize where Claude Mythos was hosted online.
The group claims to have been actively working with Claude Mythos since April 7, the day Anthropic announced Project Glasswing. The scenario paints a complex picture: Anthropic says it controls one of the most advanced AI systems, while an anonymous group claims to have accessed it, straining the trust placed in the company by the broader community.
As the investigation continues, the implications of unauthorized access to such a powerful AI system remain a serious concern. Whatever the group’s stated intentions, the potential for misuse underscores the broader challenges of AI governance and security at a time when groundbreaking technology can end up in the wrong hands.


