Anthropic, the developer of the AI model Claude, has begun testing a new artificial intelligence model that reportedly exceeds the capabilities of all its previous versions. The model, which the company describes as a significant advance in performance, is in a trial phase with a select group of early-access customers as Anthropic evaluates its capabilities and associated risks.
In the wake of this news, several companies in the cybersecurity sector, including Palo Alto Networks (PANW), CrowdStrike (CRWD), and Fortinet (FTNT), saw their stock prices drop sharply, with declines ranging from 4% to 6%. The iShares Expanded Tech-Software Sector ETF (IGV) also took a hit, falling 2.5%.
News of the new model appears to stem from a recent incident in which internal Anthropic documents were inadvertently made accessible online. Approximately 3,000 assets linked to the company's blog were exposed, including unpublished draft announcements and various internal communications. Among the files was a draft blog post referring to the upcoming model as "Claude Mythos." The document cautioned that the new AI system could raise significant cybersecurity concerns, noting its potential ability to detect and exploit software vulnerabilities effectively.
Anthropic currently offers three tiers of AI models, Opus, Sonnet, and Haiku, differentiated by size, cost, and performance. The leaks hint at an additional tier in development, tentatively named "Capybara," projected to be larger and more advanced than Opus, currently the company's most capable model. These developments underscore the rapidly evolving landscape of artificial intelligence and its implications for various industries, particularly cybersecurity, as organizations grapple with the potential risks posed by more sophisticated AI systems.