Anthropic, Pentagon clash over AI use. Here’s what each side wants


FILE PHOTO: The Pentagon is seen from the air in Washington, U.S., March 3, 2022.

Joshua Roberts | Reuters

Anthropic is at odds with the Department of Defense over how its artificial intelligence models should be used, and its work with the agency is “under review,” a Pentagon spokesperson told CNBC. 

The five-year-old startup was awarded a $200 million contract with the DoD last year. As of February, Anthropic is the only AI company that has deployed its models on the agency’s classified networks and provided customized models to national security customers. 

But negotiations about “going forward” terms of use have hit a snag, Emil Michael, the under secretary of war for research and engineering, said at a defense summit in Florida on Tuesday.

Anthropic wants assurance that its models will not be used for autonomous weapons or to “spy on Americans en masse,” according to a report from Axios.

The DoD, by contrast, wants to use Anthropic’s models “for all lawful use cases” without limitation. 

“If any one company doesn’t want to accommodate that, that’s a problem for us,” Michael said. “It could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it.”  

It’s the latest wrinkle in Anthropic’s increasingly fraught relationship with the Trump administration, which has publicly criticized the company in recent months.

David Sacks, the venture capitalist serving as the administration’s AI and crypto czar, has accused Anthropic of supporting “woke AI” because of its stance on regulation.

An Anthropic spokesperson said the company is having “productive conversations, in good faith” with the DoD about how to “get these complex issues right.”

“Anthropic is committed to using frontier AI in support of U.S. national security,” the spokesperson said. 

The startup’s rivals OpenAI, Google and xAI were also granted contract awards of up to $200 million from the DoD last year. 

Those companies have agreed to let the DoD use their models for all lawful purposes within the military’s unclassified systems, and one company has agreed across “all systems,” according to a senior DoD official who asked not to be named because the negotiations are confidential. 

If Anthropic ultimately does not agree with the DoD’s terms of use, the agency could label the company a “supply chain risk,” which would require its vendors and contractors to certify that they do not use Anthropic’s models, the person said.

The designation is typically reserved for foreign adversaries, so it would be a serious blow to Anthropic.

The company was founded by a group of former OpenAI researchers and executives in 2021, and is best known for developing a family of AI models called Claude.

Anthropic announced earlier this month that it closed a $30 billion funding round at a $380 billion valuation, more than double what it was worth as of its last raise in September.

