Anthropic fallout: Iran strikes fuel tech backlash over military AI use


U.S. Department of War and Anthropic logos are seen in this illustration taken March 1, 2026.

Dado Ruvic | Reuters

Tech workers at Google, OpenAI, and some of their peers are circulating an array of letters calling for clearer limits on how their employers work with the military after the U.S. carried out strikes on Iran over the weekend and the Pentagon blacklisted AI models from Anthropic.

One open letter, titled “We Will Not Be Divided,” grew from a couple hundred names on Friday to almost 900 by Monday, with nearly 100 signatories from OpenAI and close to 800 from Google. The letter took aim at the Department of Defense’s actions against Anthropic, which refused to allow its technology to be used for mass surveillance or fully autonomous weapons.

“They’re trying to divide each company with fear that the other will give in,” the letter reads. “That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.”

Combat operations began in Iran hours after the Trump Administration’s decision on Friday to block Anthropic and designate the company a “supply chain risk.” While the U.S. government claimed the attack on Iran was necessary to neutralize “imminent threats” from the country’s nuclear and missile programs, the actions appear to have pushed more tech workers to sign their names to various petitions.

Tensions in tech have been escalating for months, largely due to the increased aggressiveness of federal immigration agents, including the killings of two American citizens in Minnesota early this year. Workers in the industry have demanded greater transparency regarding the work their employers do with the government, particularly when it comes to cloud and artificial intelligence contracts.

For Google, the latest backlash comes as the company is reportedly in talks with the Pentagon over bringing its AI model Gemini onto a classified system, reviving a years-old internal fight over military AI.


On Friday, No Tech For Apartheid, a group that’s long been critical of cloud deals between the U.S. government and tech giants, posted a joint statement titled, “Amazon, Google, Microsoft Must Reject the Pentagon’s Demands.”

The coalition said the three leaders in cloud infrastructure should refuse Defense Department terms that would enable mass surveillance or other abusive uses of AI, and called for greater clarity around contracts involving the military and agencies including the Department of Homeland Security and Immigration and Customs Enforcement (ICE).

The group pointed to Google directly, citing the potential of a Pentagon deal that could mirror an agreement that allows the Defense Department to deploy Grok, from Elon Musk’s xAI, “in classified environments — as far as we know, without any guardrails.”

“Our own companies are also on the brink of accepting similar contract terms,” the statement said. “Google is in negotiations with the Pentagon to deploy Gemini, its own frontier model, for classified uses.”

While Anthropic and OpenAI have made numerous public statements regarding their negotiations with the DoD and the current status of their contracts, Google parent Alphabet has been silent. The company hasn’t responded to multiple requests for comment.

‘Supply chain risk’

In another effort backing Anthropic, hundreds of tech workers signed an open letter urging the Department of Defense to withdraw its designation of the company as a “supply chain risk.” The list includes dozens of employees from OpenAI, along with workers affiliated with companies including Salesforce, Databricks, IBM and Cursor.

The letter calls on Congress to “examine whether the use of these extraordinary authorities against an American technology company is appropriate,” and says Anthropic, and other private companies, should not face retaliation for refusing to accede to the government’s demands.

Similar concerns were floated internally at Google last week, when more than 100 employees who work on AI technology reportedly signed a letter to management, expressing fears about the company’s work with the DoD. They asked the search giant to draw the same red lines as Anthropic, according to The New York Times.

Jeff Dean, Google’s chief scientist, received the memo and appeared to sympathize with at least some of the concerns. He wrote in a thread on X that “mass surveillance violates the Fourth Amendment and has a chilling effect on freedom of expression.”

He added that surveillance systems are “prone to misuse for political or discriminatory purposes.”

Dean has dealt with similar controversies at Google before.

Jeff Dean, head of artificial intelligence at Google LLC, speaks during a Google AI event in San Francisco, California, U.S., on Tuesday, Jan. 28, 2020.

David Paul Morris | Bloomberg | Getty Images

In 2018, the company faced an internal revolt over Project Maven, a Pentagon program that used AI to analyze drone footage. After thousands of employees protested, Google let the contract lapse. The company later established its “AI Principles,” laying out how its technology could be used.

It’s continued to be a source of consternation. In 2024, Google fired more than 50 employees after protests over Project Nimbus, a $1.2 billion joint contract with Amazon for work with the Israeli government. Executives repeatedly said the contract didn’t violate any of the company’s AI Principles. However, documents and reports show the company’s agreement allowed for giving Israel AI tools that included image categorization, object tracking and provisions for state-owned weapons manufacturers.

In December of that year, a New York Times report found that four months prior to the Nimbus agreement, officials at the company worried that signing the deal would harm its reputation and that “Google Cloud services could be used for, or linked to, the facilitation of human rights violations.”

Early last year, Google reportedly revised its AI Principles and removed language that had explicitly prohibited “building weapons” or “surveillance technology.”

WATCH: Anthropic, Pentagon and software sell-off are not separate stories

