At a recent event in New Delhi, Sam Altman, CEO of OpenAI, announced a significant development in the company’s work with the U.S. Department of Defense. In a post on X, he revealed that OpenAI reached an agreement to deploy its artificial intelligence models within the Department of War’s classified network. This announcement came shortly after President Donald Trump publicly directed federal agencies to cease all collaborations with AI competitor Anthropic.
Altman’s post praised the Department of War’s commitment to safety and its collaborative approach to reaching good outcomes with OpenAI’s technology. The agreement marks a pivotal moment in the political fight over AI, especially given the controversies that have emerged in recent weeks.
Earlier on the same day, Defense Secretary Pete Hegseth designated Anthropic as a “Supply-Chain Risk to National Security.” This classification, ordinarily reserved for foreign adversaries, compels Department of Defense contractors to verify they are not using Anthropic models. The decision follows a failed negotiation between Anthropic and the Defense Department regarding the use of its AI models, which had previously been integrated into the classified network. Anthropic sought assurances that its technology would not be employed for autonomous weapons or domestic surveillance, while the DoD insisted that the models be available for all lawful military purposes.
In contrast, Altman indicated that OpenAI presented similar safety parameters, which the DoD accepted. In a memo to employees, he emphasized that OpenAI’s principles include a prohibition on domestic mass surveillance and a commitment to ensure human oversight over the application of force, even in the case of autonomous systems.
While the reasons behind the Department of Defense’s preference for OpenAI over Anthropic remain unclear, government officials have criticized Anthropic for what they perceive as excessive caution regarding AI safety. Altman said OpenAI would implement robust technical safeguards and that personnel would be deployed to ensure the models function properly and safely.
He also urged the Department of War to extend similar terms to other AI companies. “We hope to move beyond legal disputes and towards constructive agreements,” Altman wrote in his post.
In response, Anthropic conveyed its dismay at the Pentagon’s designation, pledging to legally contest the decision. The unfolding controversy highlights the complex interplay of AI technology, national security, and the ethical considerations surrounding autonomous systems as the industry continues to evolve.