Trump Orders Federal Agencies to Halt Anthropic AI Use Amid Pentagon Dispute

Here are today's top AI & Tech news picks, curated with professional analysis.

Warning

This article is automatically generated and analyzed by AI. Please note that AI-generated content may contain inaccuracies. Always verify the information with the original primary source before making any decisions.

President Trump orders federal agencies to stop using Anthropic after Pentagon dispute | TechCrunch

Expert Analysis

President Trump has ordered federal agencies to cease using Anthropic's AI technology, escalating a high-profile dispute between the AI startup and the Pentagon over safety safeguards. On Truth Social, Trump described Anthropic as "radical left" and "woke," mandating an immediate halt for most agencies while granting the Pentagon a six-month phase-out period.

The order came after Anthropic refused to remove safeguards against the use of its AI for mass domestic surveillance and fully autonomous weapons. Defense Secretary Pete Hegseth criticized Anthropic for "arrogance and betrayal," threatening to designate the company as a "supply chain risk" or invoke the Defense Production Act.

This clash could significantly impact Anthropic's government contracts and potentially benefit competitors such as Elon Musk's xAI (Grok), OpenAI, and Google. The dispute reportedly intensified following the capture of Venezuelan President Nicolás Maduro, an operation in which Anthropic's Claude technology was said to have assisted the U.S. military.

👉 Read the full article on TechCrunch

  • Key Takeaway: President Trump ordered federal agencies to stop using Anthropic's AI due to the company's refusal to remove safeguards against mass domestic surveillance and fully autonomous weapons, intensifying a dispute with the Pentagon.
  • Author: Russell Brandom

Statement from Dario Amodei on Our Discussions with the Department of War

Expert Analysis

Dario Amodei, CEO of Anthropic, affirmed his deep belief in the existential importance of AI for defending the United States and other democracies. Anthropic has proactively deployed its models to the Department of War and intelligence community, becoming the first frontier AI company to operate within classified networks and national laboratories. Its Claude models are extensively used for mission-critical applications like intelligence analysis, operational planning, and cyber operations.

Anthropic has also made sacrifices to defend America's lead in AI, including forgoing hundreds of millions in revenue by cutting off firms linked to the Chinese Communist Party. However, the company maintains a firm stance against certain AI applications that it believes undermine democratic values or are beyond current technological reliability, specifically mass domestic surveillance and fully autonomous weapons.

Amodei emphasized that current frontier AI systems are not reliable enough for fully autonomous weapons, stating that Anthropic will not knowingly provide products that endanger U.S. warfighters or civilians. Despite threats from the Department of War to designate them a "supply chain risk" or invoke the Defense Production Act if they do not accede to "any lawful use," Anthropic stated it "cannot in good conscience accede to their request."

👉 Read the full article on Anthropic

  • Key Takeaway: Anthropic's CEO Dario Amodei publicly stated the company's refusal to allow its AI models for mass domestic surveillance or fully autonomous weapons, despite threats from the Department of War, citing ethical concerns and current AI reliability limitations.
  • Author: Dario Amodei

Anthropic Tells Pete Hegseth to Take a Hike

Expert Analysis

Anthropic has rejected the Pentagon's demand to remove safeguards in its AI model, Claude, which prohibit its use for mass domestic surveillance and fully autonomous weapons. Anthropic CEO Dario Amodei stated that the company "cannot in good conscience accede to their request," firmly upholding its ethical stance.

Defense Secretary Pete Hegseth threatened Anthropic with removing Claude from U.S. military systems, designating the company as a "supply chain risk," or invoking the Defense Production Act. Amodei highlighted the inherent contradiction in these threats, noting that labeling Anthropic a security risk while simultaneously deeming Claude essential to national security is incoherent.

Anthropic expressed concern over the government's ability to purchase detailed records of Americans' movements and web browsing without warrants, warning that AI's capacity to aggregate such data at scale poses novel risks to fundamental liberties. Regarding fully autonomous weapons, the company argued that current frontier AI systems are not reliable enough and require proper guardrails, offering R&D collaboration with the Department of War, which was declined.

👉 Read the full article on Gizmodo

  • Key Takeaway: Gizmodo reports that Anthropic firmly refused to remove AI safeguards against mass domestic surveillance and fully autonomous weapons despite Defense Secretary Pete Hegseth's threats, citing its ethical commitments, concerns over AI reliability, and democratic values.
  • Author: Matt Novak
