Goldman Sachs Tests Anthropic’s “Mythos” AI Model to Probe Future Cyber‑Attack Capabilities
What Happened
Goldman Sachs announced that it is testing Anthropic’s restricted AI model, Mythos, under an effort dubbed “Project Glasswing,” to evaluate how frontier generative AI could be used to discover and exploit vulnerabilities. The bank is working with Anthropic and its security vendors to gauge the model’s potential impact on financial‑system defenses.
Why It Matters for TPRM
- Emerging AI models can reportedly generate working exploits autonomously, raising the baseline cyber‑risk for any vendor that integrates or relies on similar technology.
- Early‑stage testing by a major financial institution signals that regulators and industry leaders view AI‑driven attack tools as a material risk factor for third‑party risk assessments.
Who Is Affected
- Financial services firms and their technology suppliers
- Cloud‑service providers hosting AI workloads
- Security‑tool vendors that may need to adapt detection capabilities for AI‑generated exploits
Recommended Actions
- Review any contracts or projects that involve generative‑AI models and confirm that vendors have AI‑risk mitigation controls.
- Validate that your organization’s monitoring and detection tools can identify AI‑crafted exploit patterns.
- Request from vendors a formal disclosure of AI‑related security testing and any findings that could affect your environment.
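The first action above can be operationalized as a simple triage pass over a vendor inventory. The sketch below is a minimal, hypothetical example: the record schema and field names (`uses_generative_ai`, `ai_risk_controls_documented`) are illustrative assumptions, not fields from any specific TPRM platform, so adapt them to your own inventory export.

```python
import json

# Hypothetical vendor-inventory records; the field names are
# illustrative, not from any specific TPRM platform's export format.
VENDORS = json.loads("""
[
  {"name": "VendorA", "uses_generative_ai": true,  "ai_risk_controls_documented": false},
  {"name": "VendorB", "uses_generative_ai": true,  "ai_risk_controls_documented": true},
  {"name": "VendorC", "uses_generative_ai": false, "ai_risk_controls_documented": false}
]
""")

def flag_ai_vendors(vendors):
    """Return vendors that use generative AI but lack documented AI-risk controls."""
    return [
        v["name"]
        for v in vendors
        if v["uses_generative_ai"] and not v["ai_risk_controls_documented"]
    ]

print(flag_ai_vendors(VENDORS))  # ['VendorA']
```

A pass like this only prioritizes which vendors to question first; the follow-up is still the contractual and disclosure review described above.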
Technical Notes
- Attack vector: Autonomous exploit generation by a large‑language model (Mythos) without human guidance.
- CVEs: None disclosed; Mythos reportedly uncovered thousands of previously unknown vulnerabilities across operating systems and browsers.
- Data types: No data exfiltration reported; focus is on vulnerability discovery and exploit automation.
Source: https://www.databreachtoday.com/goldman-sachs-hyperaware-as-tests-mythos-for-defense-a-31413