The US Justice Department has issued a court filing countering Anthropic’s claims that the government unlawfully penalized the AI developer for restricting the use of its Claude AI models in military applications. According to the filing, the government argued that Anthropic’s attempt to impose limitations on government use is not protected by the First Amendment, emphasizing that a contractor cannot unilaterally dictate terms to the government.
The response, submitted in federal court in San Francisco, addresses Anthropic’s legal challenge to the Pentagon’s decision to categorize the company as a supply-chain risk, a designation that could hinder its participation in defense contracts on security grounds. Anthropic, which faces the risk of significant revenue loss, is asking to resume regular operations while the litigation proceeds; Judge Rita Lin has scheduled a hearing to review that request.
The Justice Department, representing the Department of Defense and other relevant agencies, dismissed Anthropic’s fears of financial harm as legally insufficient and urged the court to reject the company’s plea for relief. The government’s stance centers on preventing potential misuse of government technology systems by Anthropic, with concerns raised about the company’s future conduct if granted continued access.
This legal dispute underscores the complex intersection of technology, government regulations, and national security, highlighting the evolving challenges faced by AI developers in navigating defense-related applications and contractual obligations.
Source: WIRED