While Anthropic found itself politically exiled after standing by its AI ethics policy, OpenAI has managed to secure a Pentagon deal by threading what may prove to be an impossibly fine needle: professing the same ethical principles while still satisfying the government's demands. Whether that balance holds will define the company's relationship with both the government and its own workforce.
The crisis began when the Trump administration grew frustrated with Anthropic’s refusal to allow its Claude AI to be used for mass surveillance or autonomous weapons. Pentagon officials escalated the dispute until the president himself intervened, ordering all government agencies to immediately cut ties with the company.
In the hours that followed, Sam Altman positioned OpenAI as a willing partner, announcing a Pentagon deal on social media and issuing an internal memo to reassure employees. He stated clearly that OpenAI shares Anthropic’s positions on mass surveillance and autonomous weapons, and claimed those limits are built into the new contract.
The challenge for Altman is that nearly 500 of his own employees had already signed a public letter backing Anthropic, warning that the government was trying to divide the industry by offering favorable contracts to compliant companies. That warning was a direct challenge to Altman's decision.
Anthropic, standing alone after the political firestorm, maintained its defiant tone. The company insisted that no intimidation would change its stance and noted pointedly that its two restrictions have never blocked any lawful government operation, raising the question of why the Pentagon chose to make this a confrontation in the first place.