Pentagon Designates Anthropic as Supply-Chain Risk Over AI Policy Dispute


The Defense Department has formally labeled Anthropic a “supply-chain risk” following disputes over the AI company’s acceptable use policies. According to The Wall Street Journal, defense contractors will be prohibited from collaborating with the government if they incorporate Claude, Anthropic’s AI program, into their products. The designation is typically applied to foreign entities linked to adversarial governments; this is the first instance of an American firm receiving it.

The conflict revolves around Anthropic’s refusal to permit the Pentagon to use Claude for autonomous lethal weapons and mass surveillance without human oversight. The Pentagon contends that ceding such control to a private entity would concentrate excessive power in its hands, while Anthropic raised concerns about whether the government would respect its stated boundaries. Negotiations soured after the Pentagon threatened to impose the supply-chain risk label if Anthropic did not comply. Once Anthropic announced it would not, the Pentagon implemented the designation.

The scope of the Pentagon’s enforcement strategy remains uncertain. Defense Secretary Pete Hegseth announced that any company engaged in “any commercial activity” with Anthropic, even outside Pentagon projects, would risk contract cancellations. Anthropic countered that such a broad application would be illegal.

Source: The Verge