 Microsoft Issues a Legal Challenge to the Pentagon to Protect Anthropic and the Principle of Ethical AI

by admin477351

Microsoft has filed an amicus brief in a San Francisco federal court seeking a temporary restraining order, a formal legal challenge to the Pentagon intended to protect both Anthropic and the broader principle that AI companies have the right to develop and enforce ethical standards for their technology. The brief argues that the Pentagon’s supply-chain risk designation threatens critical technology supply chains and sets a dangerous precedent. Amazon, Google, Apple, and OpenAI have also filed in support of Anthropic.

The principle at stake was tested when Anthropic, during negotiations over a $200 million Pentagon contract, refused to allow its Claude AI to be deployed for mass surveillance or autonomous lethal weapons. After the talks collapsed, Defense Secretary Pete Hegseth labeled the company a supply-chain risk, and Anthropic’s government contracts began to be cancelled. The company filed two simultaneous lawsuits, in California and Washington, DC, challenging the designation.

Microsoft’s legal challenge is grounded in its direct integration of Anthropic’s technology into military systems and its participation in the Pentagon’s $9 billion cloud computing contract. Additional federal agreements spanning defense, intelligence, and civilian agencies give Microsoft a direct stake in this dispute. Microsoft publicly called for a collaborative approach in which government and industry jointly define responsible standards for AI in national security.

Anthropic’s court filings argued that the supply-chain risk designation was an unconstitutional act of retaliation for its publicly stated AI safety positions, in violation of its First Amendment rights. The company also disclosed that it does not currently believe Claude is safe or reliable enough for lethal autonomous operations. The Pentagon’s technology chief, meanwhile, publicly ruled out any renegotiation.

Congressional Democrats have separately demanded answers about whether AI was used in a strike in Iran that reportedly killed over 175 civilians at a school. Their inquiries are adding legislative pressure to a legal challenge that has placed the principle of ethical AI at the center of a national debate. Together, Microsoft’s legal challenge, the industry coalition, and congressional scrutiny are creating an extraordinary test of the principle that AI companies have the right to set ethical limits on how their technology is used.
