The Pentagon and leading AI companies are at odds over the use of AI technology for military applications. According to a report by Axios, Anthropic is resisting the Pentagon’s request to permit use of its technology for ‘all lawful purposes,’ a standard that could open Claude to mass domestic surveillance and autonomous weapons.
The government is reportedly pressing Anthropic, along with OpenAI, Google, and xAI, to accept these terms. One company has reportedly agreed; the others, including Anthropic, have resisted. The standoff has put Anthropic’s $200 million contract with the Pentagon at risk.
Recent reports indicate a significant disagreement between Anthropic and Defense Department officials over the permissible applications of its Claude models. The Wall Street Journal highlighted Claude’s involvement in a military operation targeting the former Venezuelan President Nicolás Maduro.
In response, Anthropic pointed to its usage policies, particularly the restrictions on fully autonomous weapons and mass domestic surveillance. The company clarified that its discussions with the Department of War have not involved operational specifics.
As the debate continues, the clash underscores growing concern over the ethical implications of deploying AI in military contexts, and the unresolved question of where the boundaries between AI ethics and military technology should lie.
Source: TechCrunch