The Department of War’s (DoW) legal fight with Anthropic lurched forward Tuesday in a San Francisco federal courtroom. At issue is whether the DoW can compel the AI company to allow its software to be used to surveil Americans and to operate as an autonomous weapon in war.
The stakes are tremendous.
On March 3, the Pentagon listed the San Francisco company, now valued at $380 billion, as a “supply chain risk” after refusing to accept the guardrails Anthropic insisted on for its AI model, Claude, widely regarded as the most versatile and powerful AI model in existence. The guardrails prevent Claude’s use in lethal autonomous weaponry or in mass surveillance of Americans.
The Pentagon’s designation effectively shuts Anthropic out of all government-related contracts and constrains other defense contractors’ use of Claude.
Nevertheless, the presiding judge in the case, U.S. District Judge Rita Lin, pressed the DoW’s lawyer Tuesday about whether the Pentagon’s move is legally binding. The DoW had neither explored less drastic contractual options nor notified Congress of its decision, as required by law. The DoW’s lawyer conceded that the designation was not a legally binding action, just a statement of the department’s intentions.
Documents in the case (Anthropic PBC v. U.S. Department of War) provide a frightening window into the broad scope of potential uses of mass surveillance technology. The DoW’s insistence on unfettered use of AI in war and at home is meant to guarantee that it can use all the data weaponry it deploys abroad in tracking, targeting, and detaining U.S. residents, immigrant and citizen alike.
Critics fear that could include protesters and political opponents, now often referred to by members of the administration as “domestic terrorists” and “radical left lunatics.”
Even as the suit plays out in court, Claude continues to be used in the U.S. war in Iran and by the software developer Palantir, which holds multibillion-dollar contracts with the DoW and the Department of Homeland Security (DHS), among other agencies.
Surprisingly, a central argument in Anthropic’s initial complaint seeking a preliminary injunction focuses on how its own technology could be misused by the government.
Anthropic currently does not have confidence, for example, that Claude would function reliably or safely if used to support lethal autonomous warfare. These usage restrictions are therefore rooted in Anthropic’s unique understanding of Claude’s risks and limitations—including Claude’s capacity to make mistakes and its unprecedented ability to accelerate and automate analysis of massive amounts of data, including data about American citizens.
Anthropic says that, without the injunction, the company stands to lose billions.
President Trump responded to the complaint by characterizing Anthropic as a “radical left, woke” company. Secretary of War Pete Hegseth criticized what he called Anthropic’s “defective altruism,” soon after proclaiming the company a national security risk.
In its defense, Anthropic argues it is being punished for exercising its First Amendment rights, a line Judge Lin picked up on, questioning whether the DoW was simply punishing a “stubborn” contractor. In response, the DoW’s attorney insisted that Anthropic’s statements were not “expressive” speech but rather conduct intended to unacceptably influence DoW decision-making.
He concluded that, in any case, the court should give deference to the president’s statements.
Defendants in the case include 16 federal departments or agencies, in addition to the DoW, all of which might seek access to the military version of Claude (designed to have fewer constraints than its commercial sibling).
A number of amicus briefs have been submitted in support of Anthropic, including from Microsoft; AI experts from Google, Google DeepMind, and OpenAI; a group of former national security officials; and leading public-interest organizations focused on protecting individual privacy (the Electronic Frontier Foundation, the Cato Institute, and the First Amendment Lawyers Association).
The amicus brief from experts at Google and OpenAI, a direct competitor of Anthropic, sums up the inherent risks to the public involved in the case. It points to the proliferation of data-gathering devices and the power of AI to synthesize what are now disparate data streams.
Social media platforms log not just what people post, but what they read, how long they browse, and what they posted before deleting it… What does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus. Today, these streams are siloed, inconsistent, and require significant human effort to connect. From our vantage point at frontier AI labs, we understand that an AI system used for mass surveillance could dissolve those silos, correlating face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.
A central question hovering over the case is to what degree lawmakers are prepared to understand the full technological impacts of AI on contemporary society. In his response to Anthropic, Trump derided the company’s position, insisting the administration would only use Claude for “lawful purposes.”
While some courts have continued to apply the “presumption of regularity” (that is, faith that the federal government reliably acts to follow the law), the more than 700 pending lawsuits accusing the Trump Administration of unlawful behavior raise serious questions about who will determine what counts as lawful.
And while the case is ostensibly about the future of the AI industry’s commercial relationships with the federal government, it also implicates the Trump Administration’s mass deportation campaign, which is harnessing AI tools to more effectively track and target individuals and entire communities.
A ruling from Judge Lin could come as early as this week. Whatever the outcome, it is reasonable to assume the case will be appealed and eventually land at the Supreme Court.
Nonetheless, the current proceedings are a key development that will shape debates over fundamental constitutional rights and the federal government’s ability to harness emerging technologies to protect those rights or to violate them.













