Trump Administration Designates AI Company Anthropic a 'Supply Chain Risk' After It Refuses Military Surveillance Use
The Trump administration designated AI company Anthropic a 'supply chain risk' to national security after the company refused to allow its Claude AI model to be used for mass domestic surveillance or in autonomous weapons. The Pentagon's designation, historically reserved for foreign adversaries such as Huawei, ordered federal agencies to cease using Anthropic's technology. According to Pentagon sources cited in court filings, officials stated that Anthropic was insufficiently 'patriotic' and 'fundamentally incompatible with American principles.'

Anthropic filed two lawsuits on March 9, 2026, one in federal court in California and another in the U.S. Court of Appeals for the Federal Circuit, arguing that the designation constituted unlawful political retaliation for upholding its foundational ethical principles on AI safety and for refusing to participate in programs the company deemed ethically problematic. The case has sparked debate in Silicon Valley about corporate autonomy and the military use of AI.
"The Pentagon believes that Anthropic is insufficiently 'patriotic' and 'fundamentally incompatible with American principles.'" — Statement from Pentagon sources cited in Anthropic's federal lawsuit filings challenging the supply chain risk designation