Canadian computer scientist Yoshua Bengio has raised concerns about the restricted release of Anthropic’s latest AI model, Claude Mythos. Renowned for his contributions to deep learning, Bengio has highlighted the risks of concentrating decision-making power over critical cybersecurity infrastructure in the hands of private companies. He argues that limiting access to such an influential system lets one organization dictate which countries and companies can fortify themselves against emerging cyber threats.
In an interview with Fortune, Bengio expressed concern about the implications of a single entity controlling who benefits from vital cybersecurity measures. “It doesn’t make sense that private individuals are deciding the fate of infrastructure for everyone else. What about all the companies and all the countries that didn’t get access?” he questioned. His comments coincide with Anthropic’s decision to share Mythos, which can identify thousands of previously unknown vulnerabilities, primarily with a small group of US-based entities.
Anthropic has defended its limited rollout, citing the dual-use nature of Mythos: while it can bolster cybersecurity, it could also be weaponized for cyberattacks against critical infrastructure. As a precaution, the company is distributing access to select American technology firms and briefing US government bodies in preparation for broader access.
This approach has sparked a significant debate regarding governance and fairness in AI. Reports indicate that numerous governments and institutions are eager for access to assess possible vulnerabilities within their systems. Notably, the Bank of England has indicated that Anthropic has provided assurances regarding near-term access for UK banks. At recent meetings of the IMF and World Bank, concerns about the model’s capacity to uncover weaknesses in global financial systems were prevalent, especially among regulators and companies that have yet to evaluate its capabilities.
Bengio advocates for greater international involvement in AI regulation, suggesting the establishment of an international authority to oversee the production and application of sophisticated AI technologies. He firmly believes that stringent regulations are necessary to prevent advanced AI tools from being misused against the infrastructure of nations worldwide. “There needs to be an agency really in charge of overseeing these kinds of decisions,” Bengio stated, emphasizing the importance of international collaboration as AI continues to evolve.
This debate is further intensifying a push for “AI sovereignty,” as countries aim to limit reliance on foreign tech providers amid rising geopolitical tensions. The U.S. government is also taking steps to secure its own access to Mythos, with a recent memo from the White House Office of Management and Budget indicating that various federal departments, including Defense, Treasury, and Homeland Security, are set to begin utilizing a version of the model.
Beyond proprietary platforms like Mythos, Bengio also warned about open-source AI models, which, while beneficial for collaboration and security, have advanced to the point where they can scan open-source software for vulnerabilities. He underscored the necessity of including China in any global AI governance framework, noting the technological rivalry between the U.S. and China. Although he perceives Chinese AI development as lagging, he cautioned that this gap does not diminish the associated risks.
Bengio’s criticisms tap into a more profound issue concerning the implications of AI decision-making. As AI systems grow increasingly capable, the choices surrounding their deployment and access have far-reaching consequences for people worldwide. Allowing a single organization to dictate access to such vital systems could leave significant parts of the globe vulnerable and centralize too much power over essential infrastructure in the hands of a few individuals.


