In the biggest story in the AI world right now, Pete Hegseth's Department of War has been pressuring AI companies to give the United States government and military unfettered access to their AI models for security purposes. AI superpower Anthropic has resisted Hegseth's pressure, while Sam Altman's OpenAI signed on the dotted line without hesitation. Hegseth is using a variety of strategies to force the hand of these tech companies, and one by one, they are folding. Now, Donald Trump has ordered the US government to stop using all Anthropic products and is ‘encouraging' a shift to OpenAI software.
Threatening tech giants
Pete Hegseth and the US government are using a variety of strategies to strong-arm these AI giants, not least the threat of banning them from operating in the United States. Currently, Anthropic has ironclad restrictions that prohibit its AI models from being used for mass surveillance or incorporated into lethal autonomous weapons, systems that can decide to attack without human intervention. These are the restrictions the US wants removed. Hegseth began by threatening to cancel Anthropic's $200 million contract with the Department of War, and then took things up a notch.

Hegseth threatened that if Anthropic did not remove the restrictions by February 28 (a deadline that has since passed), he would label the company a ‘supply-chain risk'. If that label were applied, no company doing business with the Department of War would be allowed to use Anthropic's software. The threat would effectively end the meteoric growth Anthropic has enjoyed over the last 18 months. The company, which owns the Claude AI model, is valued at just under $400 billion.
What is Anthropic AI?
Anthropic was founded in 2021 by a group of ex-OpenAI employees. With a founding team of just seven, Anthropic built its AI model Claude and released Claude 1 in March 2023. After an initial $1 billion investment from Google in 2021, Anthropic has built unimaginable wealth in less than five years, riding the AI boom of 2025. In 2024, Databricks announced that Claude would be integrated into its software, a serious milestone for Anthropic. Two years later, Anthropic is valued at nearly three times Databricks, a company nearly 10 years its senior.
OpenAI folds to pressure, Altman comments
OpenAI, the creator of ChatGPT, faced the same conditions as Anthropic but chose to cave to government pressure. Sam Altman described being ‘rushed' into the deal on February 28 and penned an explanation on X. Altman, who is worth nearly $4 billion, claimed that he ‘shouldn't have rushed' into signing the DoW's contract and that the episode was ‘a learning experience' for the billionaire CEO. According to Altman, he was trying to avoid a ‘much worse outcome', and when his attempts to de-escalate discussions failed, he chose to sign a dangerous contract rather than risk financial consequences for his stockholders.

Altman called the situation ‘super complicated' and outlined the details of his contract with Hegseth's DoW. According to Altman, it prohibits the AI system from being intentionally used for domestic surveillance of US persons and nationals. Altman also claimed that the agreement restricts the deliberate tracking or monitoring of individuals, including through commercially acquired personal information, and confirmed that the software will not be used by DoW intelligence agencies. Whether the DoW will honor the contract is unknown, but it at least includes some restrictions.
Altman received immediate backlash for folding to government pressure, but defended himself in another statement. He claimed that ‘unelected officials' should not decide how the government uses technology, and said he does not want OpenAI deciding what to do in the event of an emergency in the US, such as a nuclear attack. In Altman's view, AI experts are not equipped to make such decisions about their own software, and the US government should be trusted instead. The government made the same argument to Anthropic, but to no avail.
If Hegseth follows through and labels Anthropic a supply-chain risk, it would be the first time an American company has received the designation, setting a dangerous precedent of US government overreach into the private sector.

Created by humans, assisted by AI.