A US appeals court ruling has left Anthropic in “supply-chain risk” limbo, creating uncertainty about whether—and how—the US military can use the company’s Claude model. On Wednesday, the Washington, DC appeals court ruled that Anthropic had not met the requirements to temporarily undo a Pentagon-imposed designation, contradicting a separate lower-court decision issued last month by a San Francisco judge that ordered the label removed.
Two supply-chain laws, two courts, and conflicting outcomes
At the center of the dispute is a designation imposed by the Department of Defense under two different supply-chain laws with similar effects. The government sanctioned Anthropic under both laws, but each court is addressing only one of them: the San Francisco case concerns one statute, and the Washington, DC appeals case concerns the other.
This structure creates a procedural complexity where a company can face overlapping compliance and access constraints even while one court orders relief under a different statutory basis. It was not immediately clear how the conflicting preliminary judgments would be resolved.
Anthropic has said it is the first US company to be designated under both laws, which are typically used to punish foreign businesses that pose a risk to national security. If that claim is accurate, the case could establish a reference point for how supply-chain risk frameworks are interpreted when the target is a domestic AI provider rather than the foreign entities these statutes traditionally address.
What the appeals court decided—and why
In the Washington, DC ruling on Wednesday, a three-judge appellate panel said Anthropic “has not satisfied the stringent requirements” for temporarily lifting the supply-chain-risk designation imposed by the Pentagon. The panel acknowledged the potential financial harm to Anthropic but focused on avoiding operational disruption.
The panel wrote that granting a stay would force the US military to continue dealing with an “unwanted vendor of critical AI services” during “a significant ongoing military conflict.” The court stated it did not want to risk “a substantial judicial imposition on military operations” or “lightly override” the military’s national security judgments.
From a technology access standpoint, the court’s standard for temporary relief appears to be tightly constrained. Even if a lower court identifies issues in the underlying designation process, the appeals court’s position suggests that—at least at the preliminary stage—military operations and national security determinations may receive deference that limits how quickly AI tool access can change.
Lower-court order in San Francisco—and the Pentagon’s response
In contrast, the San Francisco judge found that the Department of Defense likely acted in “bad faith” toward Anthropic. The lower-court judge viewed the government’s actions as driven by frustration over Anthropic’s proposed limits on how its technology could be used and the company’s public criticism of those restrictions.
Based on that finding, the San Francisco judge ordered the supply-chain risk label removed last month. The Trump administration complied by restoring access to Anthropic AI tools inside the Pentagon and throughout the rest of the federal government.
This sequence illustrates how AI deployment can depend on legal status as much as on model performance. For organizations relying on AI systems, access is an operational variable controlled by policy and procurement rules. Court decisions therefore directly affect whether systems like Claude can be used in government contexts.
Anthropic’s position and what to watch next
Following the appeals ruling, Anthropic spokesperson Danielle Cohen said the company is grateful the Washington, DC court “recognized these issues need to be resolved quickly” and remains confident “the courts will ultimately agree that these supply chain designations were unlawful.”
Because the appeals court ruling addresses one supply-chain law while the San Francisco decision concerns the other, the immediate question becomes how the government will treat access while the designation at issue in the Washington, DC case remains in force.
In practical terms, the question is whether the government’s use of Claude-like capabilities will be constrained by the continuing designation tied to the Washington, DC case, or whether the access restored after the San Francisco ruling will remain in place, in whole or in part. The outcome could influence Anthropic’s near-term deployments and how future AI vendors interpret supply-chain risk designations—especially if, as Anthropic claims, a domestic AI company is being treated under frameworks previously aimed at foreign entities.
More broadly, this case highlights the intersection of AI product access, national security procurement, and legal process. Even when a lower court orders relief and agencies comply, an appeals court can reinstate uncertainty by denying temporary stays under a stringent standard. For the AI industry, this suggests a deployment risk model where regulatory and judicial timing can be as consequential as technical readiness.
Source: WIRED