Caitlin Kalinowski, OpenAI's head of robotics and consumer hardware, resigned on March 7, 2026, over concerns about the company's recently announced Pentagon deal. She stated that policy guardrails around surveillance and autonomous weapons were not sufficiently defined before OpenAI signed the agreement.
What Happened
Kalinowski posted on social media that she resigned "on principle" after OpenAI announced plans to deploy its AI systems inside secure Defense Department computing environments. She wrote that "surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got."
The resignation follows a sequence of events that began in late February. Anthropic refused the Pentagon's request that contracted AI companies agree to "any lawful use," including mass domestic surveillance and fully autonomous weapons. The Department of War then blacklisted Anthropic as a supply-chain risk. OpenAI moved quickly to fill the gap, signing its own agreement on February 27.
OpenAI CEO Sam Altman later acknowledged the deal "looked opportunistic and sloppy" and said the company would amend the contract with explicit limits on surveillance and autonomous weapons. An OpenAI spokesperson confirmed Kalinowski's departure and reiterated the company's stated red lines: no domestic surveillance and no autonomous weapons.
Why It Matters for Creators
This story highlights the tension AI companies face between scaling revenue through government contracts and maintaining the ethical commitments their users and employees expect. For creators who build on OpenAI's APIs and tools, it raises questions about the company's decision-making priorities. The speed at which OpenAI moved to fill Anthropic's spot, without first establishing clear guardrails, rattled employees and outside observers alike.
The broader implication is that AI companies are under increasing pressure from both government and commercial customers. As a creator using these platforms, understanding how each company weighs those pressures helps inform your tool choices and platform dependencies.
What to Do Next
Read OpenAI's public statement about the Pentagon agreement and the specific limitations they claim to have negotiated. If you are concerned about the ethical trajectory of your AI provider, review Anthropic's position on refusing the original deal. For creators evaluating platform risk, diversifying across multiple AI providers remains the safest approach.
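In practice, diversification can be as simple as routing requests through a thin abstraction that falls back to a second provider when the first fails. The sketch below is illustrative only: the provider functions are hypothetical stand-ins, not any vendor's real SDK calls.

```python
# Minimal provider-fallback pattern: try each provider in priority
# order and return the first successful response. The provider
# functions are illustrative stand-ins, not real vendor APIs.

def call_provider_a(prompt: str) -> str:
    # Stand-in for a real API call; raises to simulate an outage
    # or a policy-driven service cutoff.
    raise RuntimeError("provider A unavailable")

def call_provider_b(prompt: str) -> str:
    # Stand-in for a second provider's API call.
    return f"response from provider B: {prompt}"

def complete(prompt: str, providers=None) -> str:
    """Try each provider in order, falling back on any failure."""
    providers = providers or [call_provider_a, call_provider_b]
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:
            last_error = err  # record the failure, try the next one
    raise RuntimeError("all providers failed") from last_error

print(complete("hello"))  # falls back to provider B
```

Keeping prompts and output handling behind one interface like this means a contract dispute, outage, or policy change at a single vendor does not take your whole workflow down with it.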
This story was covered by Creative AI News.
Subscribe for free to get the weekly digest every Tuesday.