Anthropic Takes the Pentagon to Court Over “Supply Chain Risk” Blacklisting


Anthropic, the San Francisco-based artificial intelligence company behind the chatbot Claude, has filed two federal lawsuits against the Trump administration in a bid to reverse the Pentagon’s decision to designate it a “supply chain risk.” The label, typically reserved for companies tied to foreign adversaries, effectively bars defense contractors from using Anthropic’s technology in Pentagon-related work and represents the first known instance of the designation being applied to an American company.

The lawsuits, filed on Monday in the U.S. District Court for the Northern District of California and the federal appeals court in Washington, D.C., allege that the government’s actions are “unprecedented and unlawful” and violate Anthropic’s First Amendment and due process rights. The company has asked a judge to block the designation and prevent federal agencies from enforcing it.

The dispute centers on two red lines Anthropic drew in contract negotiations with the Department of Defense: that its AI tool Claude would not be used for mass surveillance of U.S. citizens, and that it would not power fully autonomous weapons without human oversight. The Pentagon insisted on access to Claude for “all lawful purposes,” arguing that a private company should not be able to dictate how the military operates, particularly during a national security emergency.

After talks collapsed on February 27, Defense Secretary Pete Hegseth announced the supply chain risk designation and said the military would phase out Claude over six months. The same day, President Trump posted on Truth Social that Anthropic had made a “disastrous mistake” and ordered all federal agencies to cease using the company’s technology. Just hours later, OpenAI struck its own deal with the Pentagon, drawing criticism from observers who questioned whether its contract offered meaningfully different protections from those Anthropic had sought. OpenAI later acknowledged that the announcement appeared “sloppy and opportunistic.”

The financial stakes are significant. Anthropic’s chief financial officer stated in a court filing that the government’s actions could reduce the company’s 2026 revenue by “multiple billions of dollars.” Most of Anthropic’s projected $14 billion in annual revenue comes from businesses and government agencies using Claude for coding, data processing, and other non-military tasks.

Despite the legal battle, Claude has reportedly continued to be used in active military operations, including intelligence processing and target identification in the ongoing U.S.-Israeli conflict with Iran. Anthropic was the first frontier AI company cleared for use on classified military networks, and the Pentagon has given itself six months to transition away from the tool to avoid operational disruption.

The case has drawn notable support from across the AI industry. More than 30 scientists and researchers from OpenAI and Google DeepMind filed an amicus brief in their personal capacities backing Anthropic, arguing that the designation could undermine U.S. competitiveness and stifle public debate on AI safety. OpenAI’s head of robotics, Caitlin Kalinowski, resigned over her company’s Pentagon deal, writing that “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation.”

The White House has pushed back firmly. Spokeswoman Liz Huston said the president “will never allow a radical left, woke company to jeopardise our national security by dictating how the greatest and most powerful military in the world operates.”

Anthropic CEO Dario Amodei has maintained that the designation has a narrow scope and that businesses can continue using Claude for non-Pentagon work. But the broader implications of the case extend well beyond Anthropic’s bottom line. The outcome will likely set precedent for how AI companies negotiate with governments over the use of their technology in warfare and surveillance, and whether safety restrictions can be treated as grounds for punitive federal action.

The Pentagon has declined to comment on the pending litigation.