Anthropic Challenges U.S. Government Over Military AI Use Restrictions
Anthropic, a prominent artificial intelligence company, has initiated legal proceedings to prevent the enforcement of what it describes as retaliatory measures by the Trump administration. The company’s lawsuit seeks to reverse a Pentagon decision that labeled it a “supply chain risk,” and to challenge an executive order by President Donald Trump that restricts federal use of its AI tool, Claude.
This legal action marks a significant escalation in the debate over the deployment of AI in military and surveillance capacities, highlighting tensions between Anthropic and its industry peers, including OpenAI. OpenAI, maker of ChatGPT, entered into a Pentagon partnership shortly after Anthropic faced government sanctions.
On Monday, Anthropic filed two lawsuits, one in federal court in California and another in the federal appeals court in Washington, D.C., each contesting different aspects of the government’s actions. The filings argue, “These actions are unprecedented and unlawful,” emphasizing that the government cannot punish a company for its protected speech without congressional authorization.
The Defense Department declined to comment, adhering to its policy of not discussing ongoing litigation.
Anthropic seeks to prevent its AI technology from being used for mass surveillance and autonomous weaponry. Defense Secretary Pete Hegseth and other officials have publicly insisted that Anthropic accept “all lawful” uses of Claude, threatening repercussions if the company does not comply.
The supply chain risk label, a designation usually reserved for foreign threats to national security, effectively halts Anthropic’s defense projects. In a letter dated March 4, Hegseth stated that the designation was essential for national security.
President Trump has directed federal agencies to phase out the use of Claude within six months, although it is already integrated into various classified military systems, notably those involved in the Iran conflict. Other federal departments, including Treasury and State, are also named in Anthropic’s lawsuit due to their directives against using Claude.
Michael Pastor, a professor at New York Law School, underscored how unusual the case is, stating, “I’ve never seen a case like this.” He described the federal government’s approach as threatening the company’s existence.
Despite the legal battle, Anthropic is working to assure businesses and other government entities that the supply chain risk label applies only to military contractors using Claude for Department of Defense tasks. This clarification is vital for Anthropic, which anticipates $14 billion in revenue this year, primarily from non-military clients.
Anthropic reaffirmed its commitment to AI safety and national security, stating that legal action is a necessary step to protect its business and stakeholders.
Founded in 2021 by Dario Amodei and former OpenAI colleagues, Anthropic has always opposed the use of its technology in lethal autonomous warfare or mass surveillance, citing safety concerns and lack of testing in these areas.
Previously, Anthropic was the sole provider authorized to supply AI models to classified military projects. However, the Pentagon is now considering alternatives such as Google’s Gemini and OpenAI’s ChatGPT.
The lawsuit alleges that the administration’s actions have damaged Anthropic’s reputation, threatened significant business contracts, and undermined the economic value of one of the fastest-growing private companies globally.
This conflict has paradoxically enhanced Anthropic’s standing among some customers and tech professionals who support its stance against government pressure. OpenAI CEO Sam Altman’s attempt to replace Claude with ChatGPT was criticized as opportunistic.
Anthropic’s user base has grown, with downloads surpassing those of ChatGPT and Gemini. The ongoing debate over ethical AI use has also affected talent retention across the industry, as evidenced by the resignation of OpenAI’s head of robotics, Caitlin Kalinowski, over the company’s Pentagon collaboration.
More than 30 AI experts from OpenAI and Google, including Google’s chief scientist Jeff Dean, have filed a legal brief in support of Anthropic, warning against reckless designations and the suppression of AI safety discourse.