The Pentagon is pressing artificial intelligence firms to allow the U.S. military to use their technologies for “all lawful purposes,” but Anthropic is resisting the demand, according to a recent Axios report. The government has reportedly made similar requests of other AI leaders, including OpenAI, Google, and xAI. An anonymous Trump administration official said one of those companies has already agreed, while the others are open to negotiating.
Anthropic, however, is holding firm. The Pentagon has reportedly warned that it may terminate its $200 million contract with the company over the standoff. An earlier Wall Street Journal article described significant disagreements between Anthropic and Defense Department officials over how its AI model, Claude, may be used. Notably, that report indicated that Claude had been used in military operations to apprehend former Venezuelan President Nicolás Maduro.
An Anthropic spokesperson told Axios that the company has not discussed Claude’s use in specific operations with the Department of Defense. Instead, the disagreement centers on particular policy questions, notably prohibitions on fully autonomous weapons and mass domestic surveillance.
Key Points:
– The Pentagon is seeking broad permission to use AI technologies for military purposes.
– Anthropic’s resistance puts its $200 million contract at risk.
– Tensions center on the reported use of the Claude AI model in military operations.
– Anthropic emphasizes policy limits on autonomous weapons and mass domestic surveillance.
