In a significant shake-up at OpenAI, Caitlin Kalinowski has announced her resignation as head of the robotics team, following the company’s controversial recent agreement with the Department of Defense. In a social media post, Kalinowski said her departure was driven by principle, specifically the ethical implications of artificial intelligence in national security, rather than by personal differences.
Kalinowski, who joined OpenAI in November 2024 after leading the augmented reality glasses team at Meta, stated, “This wasn’t an easy call. AI has an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” Her remarks highlight a growing apprehension over the rapid integration of AI technologies within military applications, particularly regarding civil liberties and governance.
Following her initial announcement, Kalinowski took to X to clarify her stance, indicating that the decision was chiefly about governance and the need for clearly defined parameters in agreements of this magnitude. “My issue is that the announcement was rushed without the guardrails defined,” she wrote, signaling her belief that such decisions warrant extensive discussion and careful consideration.
An OpenAI spokesperson confirmed Kalinowski’s resignation, asserting that the agreement with the Pentagon aims to create a responsible framework for the use of AI in national security contexts. The spokesperson stated, “We believe our agreement with the Pentagon creates a workable path for responsible national security uses of AI while making clear our red lines: no domestic surveillance and no autonomous weapons.” The company acknowledged the diverse viewpoints on the matter and expressed a commitment to engage in ongoing discussions with various stakeholders, including employees and civil society.
The agreement in question was formally announced just over a week ago, after negotiations stalled between the Pentagon and another AI company, Anthropic, which had reportedly sought assurances that its technology would not be used for mass surveillance or autonomous weaponry. The Pentagon subsequently designated Anthropic a supply-chain risk, a move the company intends to contest in court. Despite the turmoil, major tech companies including Microsoft, Google, and Amazon continue to support Anthropic’s Claude for non-defense customers.
Moving quickly after Anthropic’s talks collapsed, OpenAI struck its own agreement, which it described as a “more expansive, multi-layered approach” that addresses governance concerns through both contractual obligations and technical safeguards. The fallout has nonetheless dented OpenAI’s standing with consumers: uninstalls of ChatGPT reportedly surged 295%, while Claude climbed to the top of the App Store charts. As of the latest updates, Claude and ChatGPT hold the number one and number two spots, respectively, among free apps in the U.S. App Store, reflecting a notable shift in consumer sentiment amid the ongoing debate over AI ethics in defense applications.