Adversarial misuse of AI has changed the speed and scale of attacks, marking an evolution that demands a shift in defensive strategy. We recently released a new special report, our Mandiant AI risk and resilience report, detailing what we are seeing on the front lines and how defenders need to adapt.
By combining the frontline experience of Mandiant with the deep adversarial research of Google Threat Intelligence Group (GTIG), this report offers a unique perspective. In the report we cover three key areas:
- Adversarial use of AI: Threat actors are moving beyond simple productivity gains to deploying AI-orchestrated espionage and adaptive malware that dynamically obfuscates code to evade detection.
- Securing AI systems: Mandiant conducted numerous AI system assessments, AI threat modeling exercises, and detection workshops globally. We see organizations encountering security challenges in their AI pipelines similar to those in their overall infrastructure security.
- AI-powered defense: We see AI's rapid transition into practical application in cyber defense and security operations, with specific use cases, such as threat hunting, gaining significant traction.
What to think about for your SIEM & SOAR: Evolving telemetry and detection
Moving from IOCs to IOAs: As we help customers secure their AI pipelines, we are seeing that traditional static indicators of compromise (IOCs), such as file hashes, are no longer sufficient. We advise shifting to Indicators of Activity (IOAs): monitor for behavioral anomalies and map them to your organization's specific use cases.
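To make that shift concrete, here is a minimal sketch contrasting the two approaches. The event schema (agent_id, action, file_sha256), the placeholder hash feed, and the per-agent allow-list are all hypothetical; you would substitute your own telemetry fields and use-case mappings.

```python
# Hypothetical IOC feed: static artifacts already seen and catalogued.
KNOWN_BAD_HASHES = {"<sha256-from-threat-feed>"}

def ioc_match(event: dict) -> bool:
    """Static IOC check: only fires on previously catalogued artifacts."""
    return event.get("file_sha256") in KNOWN_BAD_HASHES

# Approved actions per agent, derived from the organization's use cases.
EXPECTED_ACTIONS = {
    "support-bot": {"query_kb", "draft_reply"},
    "etl-agent": {"query_db", "write_warehouse"},
}

def ioa_match(event: dict) -> bool:
    """Behavioral IOA check: fires on activity outside the agent's approved
    use case, even when every artifact involved looks individually clean."""
    allowed = EXPECTED_ACTIONS.get(event.get("agent_id"), set())
    return event.get("action") not in allowed
```

The IOC check can only catch what has already been catalogued; the IOA check catches novel activity the moment it deviates from an agent's intended use case.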
Request specific telemetry from developers: Work with your AI development teams to request specific AI telemetry and forward it to the SIEM, including:
- Tool execution sequences: Alerting on illogical chains of events, such as an internal AI agent querying a sensitive database and immediately making an unauthorized external API call.
- Token usage metrics: Monitoring for sudden spikes in input/output ratios, which may indicate a prompt injection attack or a denial-of-service attempt against your AI applications (a detection sketch for both signals follows this list).
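Here is a minimal Python sketch of both detections. The tool names, event fields, and thresholds are hypothetical, and a production version would live in your SIEM's rule language rather than application code.

```python
from collections import deque

# Hypothetical tool categories; map these to your agents' actual tools.
SENSITIVE_TOOLS = {"query_customer_db", "read_secrets"}
EXTERNAL_TOOLS = {"http_request", "send_email"}

def illogical_sequence(tool_calls: list[str]) -> bool:
    """Flag a sensitive-data access immediately followed by an external call."""
    for prev, curr in zip(tool_calls, tool_calls[1:]):
        if prev in SENSITIVE_TOOLS and curr in EXTERNAL_TOOLS:
            return True
    return False

def token_ratio_spike(history: deque, input_tokens: int,
                      output_tokens: int, factor: float = 5.0) -> bool:
    """Flag a request whose input/output token ratio far exceeds the
    session's running average (the factor of 5 is an arbitrary start)."""
    ratio = input_tokens / max(output_tokens, 1)
    if history and ratio > factor * (sum(history) / len(history)):
        return True
    history.append(ratio)
    return False

# Example: per-session ratio history capped at the last 50 requests.
session_history = deque(maxlen=50)
assert illogical_sequence(["query_customer_db", "http_request"])
```

Tuning the spike factor against a baseline of normal sessions, rather than hardcoding it, is what makes this an IOA instead of another static signature.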
Monitor prompts safely: Ingesting logs of AI inputs and outputs is critical for defense-in-depth, but the tools used to view these logs must be hardened. Our red teams have successfully executed log-based SSRF attacks by burying malicious payloads inside seemingly harmless chat transcripts. If your systems aren't secured, the simple act of your server summarizing or rendering that log can trigger the hidden payload, tricking the server into fetching attacker-controlled URLs and leaking internal data.
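As one illustration of that hardening, here is a minimal sanitization sketch, assuming a log viewer that renders markdown or HTML. The two neutralization rules shown (defusing markdown fetch syntax, then escaping HTML) are illustrative, not an exhaustive defense.

```python
import html
import re

# Markdown image/link syntax can make a rendering server fetch an
# attacker-controlled URL (a log-based SSRF vector), so defuse it.
MD_FETCH_PATTERN = re.compile(r"!?\[([^\]]*)\]\([^)]*\)")

def sanitize_transcript(raw: str) -> str:
    """Render a stored prompt/response as inert text, not active content."""
    defused = MD_FETCH_PATTERN.sub(r"[blocked: \1]", raw)
    return html.escape(defused)  # neutralize any raw HTML as well

print(sanitize_transcript("hi ![x](http://169.254.169.254/latest/meta-data/)"))
# -> hi [blocked: x]
```

The safer pattern is to treat every logged prompt as untrusted input and sanitize at render time, since a transcript may pass through many viewers and summarizers downstream.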
Dive into the full report and fortify your defenses: AI risk and resilience report