🔓 BREACH BRIEF · 🟠 High · 🔍 ThreatIntel

Supply Chain Attack on LiteLLM and Anthropic Source‑Code Leak Highlight Enterprise AI Security Gaps

In April 2026, Anthropic’s internal files and Claude Code source code were unintentionally published, and a malicious commit to the open‑source LiteLLM library injected credential‑stealing malware that exposed customer data. Together, the two incidents underscore the growing risk of AI supply‑chain and human‑error failures for enterprises that rely on third‑party AI services.

🛡️ LiveThreat™ Intelligence · 📅 April 09, 2026 · 📰 proofpoint.com
  • 🟠 Severity: High
  • 🔍 Type: ThreatIntel
  • 🎯 Confidence: High
  • 🏢 Affected: 2 sector(s)
  • Actions: 3 recommended
  • 📰 Source: proofpoint.com

What Happened – In April 2026, two high‑profile AI incidents were disclosed: a packaging error at Anthropic leaked internal files and source code for its Claude Code assistant, and a malicious commit to the open‑source LiteLLM library (used by many AI‑enabled applications) injected credential‑stealing malware, exposing customer data flowing through AI services.

Why It Matters for TPRM

  • Human‑error releases and open‑source supply‑chain compromises can give attackers deep insight into proprietary AI models and the data they process.
  • Third‑party AI components (e.g., LiteLLM) are now a critical attack surface for enterprises that rely on them for internal workflows.
  • Loss of source code and API‑key theft can lead to long‑term espionage, model theft, and credential abuse across multiple vendors.

Who Is Affected – Technology / SaaS providers that embed AI models, API‑provider ecosystems, and any enterprise that integrates AI‑driven applications (finance, healthcare, retail, etc.).

Recommended Actions

  • Conduct an inventory of all AI‑related third‑party libraries and enforce strict version‑control and code‑signing policies (a hash‑verification sketch follows this list).
  • Review vendor security‑by‑design practices, especially around release packaging and open‑source dependency management.
  • Implement continuous monitoring for anomalous API‑key usage and enforce least‑privilege access for AI service credentials (see the second sketch below).
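
To make the first recommendation concrete, the sketch below shows one way to fail closed on tampered dependency artifacts. It is a minimal Python illustration, assuming SHA‑256 pins recorded at review time; the file name and hash are placeholders, and pip's built‑in hash‑checking mode (pip install --require-hashes -r requirements.txt) offers the same enforcement natively.

    # Minimal sketch: refuse to use a dependency artifact whose SHA-256
    # does not match the hash pinned at review time. The file name and
    # pinned hash below are illustrative placeholders.
    import hashlib

    def verify_artifact(path: str, pinned_sha256: str) -> None:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        if digest.hexdigest() != pinned_sha256:
            raise RuntimeError(f"Hash mismatch for {path}; refusing to install")

    # Example call with placeholder values:
    # verify_artifact("litellm-1.0.0-py3-none-any.whl", "<pinned-sha256-hex>")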
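
For the third recommendation, the following minimal sketch flags API keys whose latest hourly request volume spikes far above their own baseline. The data shape, the z‑score threshold, and the find_anomalous_keys name are assumptions made for illustration, not part of any vendor's tooling.

    # Minimal sketch: flag API keys whose most recent hourly request count
    # sits more than z_threshold standard deviations above their baseline.
    from collections import defaultdict
    from statistics import mean, stdev

    def find_anomalous_keys(usage_log, z_threshold=3.0):
        """usage_log: iterable of (api_key_id, hourly_request_count) pairs,
        oldest first."""
        history = defaultdict(list)
        for key_id, count in usage_log:
            history[key_id].append(count)

        anomalous = []
        for key_id, counts in history.items():
            if len(counts) < 24:              # need a day of baseline first
                continue
            baseline, latest = counts[:-1], counts[-1]
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma and (latest - mu) / sigma > z_threshold:
                anomalous.append(key_id)      # spike: rotate and investigate
        return anomalous

Keys flagged this way are candidates for rotation and a least‑privilege review rather than automatic revocation.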

Technical Notes

  • Anthropic leak: human error during release packaging left internal files and Claude Code source code publicly accessible. No CVE applies; the risk stems from the absence of secure build and release pipelines.
  • Mercor attack: malicious code inserted into the open‑source LiteLLM repository (a supply‑chain compromise rather than a conventional software vulnerability) harvested API keys and redirected data to attacker‑controlled endpoints (see the egress sketch after these notes).
  • Data types exposed include proprietary model code, configuration files, and customer data processed by AI services.
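
Because the malicious commit reportedly redirected data to attacker‑controlled endpoints, an outbound allow‑list is one cheap countermeasure. The sketch below is an assumption‑laden illustration: the allowed hosts and the check_egress hook are invented for this example, and a real deployment would enforce the list at a proxy or firewall rather than in application code.

    # Minimal sketch: allow outbound AI API calls only to expected hosts,
    # so a trojaned library exfiltrating keys fails loudly instead of
    # silently reaching an attacker-controlled endpoint.
    from urllib.parse import urlparse

    ALLOWED_AI_HOSTS = {"api.anthropic.com", "api.openai.com"}  # example list

    def check_egress(url: str) -> None:
        host = urlparse(url).hostname
        if host not in ALLOWED_AI_HOSTS:
            raise RuntimeError(f"Blocked outbound AI request to {host}")

    check_egress("https://api.anthropic.com/v1/messages")   # permitted
    # check_egress("https://evil.example.invalid/collect")  # would raise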

Source: Proofpoint Threat Insight – Anthropic Leak and Mercor AI Attack

📰 Original Source
https://www.proofpoint.com/us/blog/threat-insight/mercor-anthropic-ai-security-incidents

This LiveThreat Intelligence Brief is an independent analysis. Read the original reporting at the link above.

🛡️ Monitor Your Vendor Risk with LiveThreat™

Get automated breach alerts, security scorecards, and intelligence briefs when your vendors are compromised.