🔓 BREACH BRIEF · 🟠 High · 📋 Advisory

Federal Agencies Continue Using Anthropic Claude AI Despite Executive Order to Cease Use

Federal staffers across civilian agencies remain active users of Anthropic’s Claude LLM despite a February directive from former President Donald Trump to halt all usage. The Pentagon has labeled Anthropic a supply‑chain risk, raising concerns for third‑party risk managers about uncontrolled model updates and governance gaps.

🛡️ LiveThreat™ Intelligence · 📅 April 16, 2026 · 📰 databreachtoday.com
  • 🟠 Severity: High
  • 📋 Type: Advisory
  • 🎯 Confidence: High
  • 🏢 Affected: 2 sector(s)
  • Actions: 4 recommended
  • 📰 Source: databreachtoday.com


What Happened – Federal employees across civilian agencies are still using Anthropic’s Claude large language model (LLM) even after a February directive from former President Donald Trump ordering an immediate halt. Agencies have focused on inventorying usage rather than enforcing a shutdown, and some are actively testing Anthropic’s “Mythos” vulnerability‑analysis model.

Why It Matters for TPRM

  • Ongoing use of a vendor flagged as a supply‑chain risk by the Pentagon creates exposure to potential model‑level backdoors or uncontrolled updates.
  • Lack of coordinated enforcement signals weak governance and compliance oversight across the federal supply chain.
  • Continued reliance on Anthropic tools may conflict with contractual or regulatory obligations that prohibit high‑risk third‑party AI services.

Who Is Affected – Federal government (civilian) agencies, particularly State, Treasury, Commerce, and any department that has integrated Claude into research, coding, or analytical workflows.

Recommended Actions

  • Conduct an immediate inventory of all Anthropic AI services used by your organization.
  • Validate that contractual clauses and security controls address supply‑chain risk for AI model providers.
  • Implement a formal de‑provisioning plan aligned with the six‑month phase‑out timeline.
  • Consider alternative vetted AI providers (e.g., OpenAI) and update governance policies to enforce executive directives.
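The first action above, inventorying Anthropic AI usage, can be partially automated by scanning source trees and configuration for integration indicators. The sketch below is a minimal illustration, not a complete inventory method: the indicator strings (the `anthropic` Python SDK import, the documented `ANTHROPIC_API_KEY` environment variable, the `api.anthropic.com` endpoint, and `claude-` model identifiers) are common conventions, and a real inventory would also cover network logs, SaaS admin consoles, and browser-based usage that leaves no code footprint.

```python
#!/usr/bin/env python3
"""Sketch: scan a source tree for common indicators of Anthropic API usage."""
import re
from pathlib import Path

# Indicator patterns commonly associated with Anthropic integrations.
# This list is illustrative, not exhaustive.
INDICATORS = [
    re.compile(r"\bimport anthropic\b"),  # Python SDK import
    re.compile(r"ANTHROPIC_API_KEY"),     # documented API-key env var
    re.compile(r"api\.anthropic\.com"),   # API endpoint host
    re.compile(r"claude-[\w.-]+"),        # model identifier strings
]

def scan_tree(root: str) -> list[tuple[str, int, str]]:
    """Return (path, line_number, matched_text) for each indicator hit."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for lineno, line in enumerate(text.splitlines(), start=1):
            for pattern in INDICATORS:
                match = pattern.search(line)
                if match:
                    hits.append((str(path), lineno, match.group(0)))
    return hits

if __name__ == "__main__":
    for path, lineno, matched in scan_tree("."):
        print(f"{path}:{lineno}: {matched}")
```

Output from a scan like this feeds the de‑provisioning plan in the third action: each hit is a concrete integration point to migrate or retire within the phase‑out window.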

Technical Notes – The issue stems from a third‑party dependency on Anthropic’s LLM APIs. The Pentagon has classified Anthropic as a supply‑chain risk due to concerns over post‑deployment model updates and potential covert influence. No known vulnerability (CVE) is cited, but the risk is strategic rather than technical. Source: DataBreachToday

📰 Original Source
https://www.databreachtoday.com/federal-staffers-are-still-using-claude-despite-trump-orders-a-31437

This LiveThreat Intelligence Brief is an independent analysis. Read the original reporting at the link above.
