Federal Agencies Continue Using Anthropic Claude AI Despite Executive Order to Cease Use
What Happened – Federal employees across civilian agencies are still using Anthropic’s Claude large language model (LLM) even after a February directive from former President Donald Trump ordering an immediate halt. Rather than enforcing a shutdown, agencies have focused on inventorying usage, and some are actively testing Anthropic’s “Mythos” vulnerability‑analysis model.
Why It Matters for TPRM –
- Ongoing use of a vendor flagged as a supply‑chain risk by the Pentagon creates exposure to potential model‑level backdoors or uncontrolled updates.
- Lack of coordinated enforcement signals weak governance and compliance oversight across the federal supply chain.
- Continued reliance on Anthropic tools may conflict with contractual or regulatory obligations that prohibit high‑risk third‑party AI services.
Who Is Affected – Federal government (civilian) agencies, particularly State, Treasury, Commerce, and any department that has integrated Claude into research, coding, or analytical workflows.
Recommended Actions –
- Conduct an immediate inventory of all Anthropic AI services used by your organization.
- Validate that contractual clauses and security controls address supply‑chain risk for AI model providers.
- Implement a formal de‑provisioning plan aligned with the six‑month phase‑out timeline.
- Consider alternative vetted AI providers (e.g., OpenAI) and update governance policies to enforce executive directives.
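As a starting point for the inventory step above, a lightweight repository scan can surface Anthropic usage before a formal de‑provisioning review. The sketch below is a minimal, hypothetical example: the indicator patterns (endpoint URL, conventional environment‑variable name, SDK import, model‑name prefix) are illustrative assumptions and should be adapted to your own environment and procurement records.

```python
"""Minimal sketch: scan a directory tree for indicators of Anthropic API usage.

The indicator list is illustrative, not exhaustive -- extend it with the
services, key names, and model identifiers your organization actually uses.
"""
import re
from pathlib import Path

# Illustrative indicators that a file references Anthropic services.
INDICATORS = [
    re.compile(r"api\.anthropic\.com"),   # direct API endpoint
    re.compile(r"ANTHROPIC_API_KEY"),     # conventional env-var name
    re.compile(r"\bimport anthropic\b"),  # official Python SDK import
    re.compile(r"\bclaude-[\w.]+"),       # model identifiers, e.g. claude-3-...
]


def scan_tree(root: str) -> dict[str, list[str]]:
    """Return {file_path: [matched indicator patterns]} for files under root."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip rather than abort the scan
        hits = [p.pattern for p in INDICATORS if p.search(text)]
        if hits:
            findings[str(path)] = hits
    return findings


if __name__ == "__main__":
    import sys

    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for file, hits in sorted(scan_tree(root).items()):
        print(f"{file}: {', '.join(hits)}")
```

A scan like this only covers source trees; pair it with procurement records, network egress logs, and SaaS audit trails to catch usage that never touches a repository.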
Technical Notes – The issue stems from a third‑party dependency on Anthropic’s LLM APIs. The Pentagon has classified Anthropic as a supply‑chain risk due to concerns over post‑deployment model updates and potential covert influence. No specific vulnerability (CVE) has been cited; the risk is strategic rather than technical. Source: DataBreachToday