
The Pentagon may sever its relationship with AI firm Anthropic due to a dispute over restrictions on military applications of its technology.
WASHINGTON: The Pentagon is considering ending its relationship with artificial intelligence company Anthropic. This potential move stems from a dispute over the company’s insistence on maintaining certain restrictions on how the US military can use its AI models.
According to an Axios report citing an administration official, the Pentagon is pushing four major AI companies to allow military use of their tools for “all lawful purposes”. These purposes reportedly include weapons development, intelligence collection, and battlefield operations.
After months of negotiations, Anthropic has not agreed to these broad terms. The other companies involved in the discussions are OpenAI, Google, and xAI.
An Anthropic spokesperson clarified that conversations with the US government have focused on specific usage policy questions, including establishing hard limits around fully autonomous weapons and mass domestic surveillance. These questions, the spokesperson said, are not related to any current operations.
The spokesperson added that the company had not discussed the use of its AI model Claude for specific operations with the Pentagon. This statement follows a Wall Street Journal report that Claude was used in the US military’s operation to capture former Venezuelan President Nicolas Maduro.
Claude was reportedly deployed via Anthropic’s partnership with data firm Palantir. The Pentagon did not immediately respond to a request for comment from Reuters.
The development follows an earlier Reuters report that the Pentagon was pushing top AI companies to make their tools available on classified networks with fewer of their standard restrictions. The ongoing dispute highlights the tension between commercial AI ethics policies and national defence objectives.
The Sun Malaysia

