Co-Author: Anton Chuvakin
Similar to our 2024 AI blog recap, as we reflected on the customer conversations we had in 2025 and the blogs we’ve written, 3 key themes emerged. The common thread woven throughout these conversations and thinkpieces is the need to adopt a proactive, structured approach to evaluating use cases and mitigating the risks that may be amplified by generative artificial intelligence (genAI) interconnectivity and unique capabilities.
Agentic AI systems in particular, with their ability to reason, plan, and execute complex workflows, inherently elevate their susceptibility to manipulation, adversarial attacks, and potential systemic failures. The autonomy and interconnectedness of these agents create a higher risk profile and a greater attack surface compared to previous genAI tools, making robust governance practices and cyber hygiene more critical than ever.
A recap of our key blogs, papers and podcasts on AI security and governance in 2025 is below.
1. Learning from the past
The genAI adoption curve echoes the early pitfalls of cloud computing, with organizations often repeating the same mistakes. Many organizations have jumped on the AI bandwagon without clearly articulating the business problem, specific goals, or measurable success criteria leading to silver bullet thinking, vague objectives, and wasted resources. This rush to adopt bypasses critical data governance and security fundamentals, creating significant long-term technical debt and vulnerabilities. Organizations can learn from past cloud mistakes to improve AI adoption by focusing on setting a well-defined and holistic strategy, realistic expectations setting, and addressing AI literacy and skills gaps, security concerns, and technical and operational integration issues.
This blog outlines five critical security mistakes organizations are making as they rapidly deploy genAI solutions into production environments. These missteps include (1) weak AI governance, which leads to inconsistent practices and security gaps; (2) bad data, where poor quality data results in flawed AI outputs and erodes trust; (3) excessive, overprovisioned access to sensitive data; (4) neglecting inherited vulnerabilities from foundational models or insecure infrastructure, and (5) assuming risks only apply to public-facing AI while ignoring internal tool vulnerabilities.
2. Establishing governance structures that enable
The rapid evolution of genAI chat interfaces into autonomous agentic AI highlights the need to prioritize security basics like data governance, identity and access management (IAM), and cyber hygiene. AI agents can act autonomously and interact with enterprise systems and this capability amplifies the importance of established cyber security fundamentals. Effective governance for AI agents should leverage existing principles like least privilege by rigorously defining the agent's sphere of influence and enabling mechanisms to ensure every action is auditable. Key security controls include building an isolated sandbox for testing, implementing a "Big Red Button" for immediate shutdown, and keeping humans in the loop for oversight of high-stakes decisions and continuous lifecycle management.
An overly restrictive or prohibitive approach to AI adoption is counterproductive and harmful. Blocking genAI rather than creating a well-lit path for organizations to use it safely and securely is ineffective and often precipitates the proliferation of shadow AI and data leakage, the very risk enterprises were seeking to avoid. It need not be this way - with the emergence of genAI and AI agents, organizations have an opportunity to reimagine governance, structuring it as an agile enterprise enabler rather than a bureaucratic blocker, aligning to form a holistic picture of enterprise risk management, and integrating with data governance, procurement, and third party risk management processes. The secure way forward is to be bold and responsible by empowering the use of the technology within clearly defined guardrails.
We have observed the emergence of a more insidious form of "Shadow AI for business," where employees and teams use enterprise-grade gen AI tools without proper internal governance, driven by a desire for efficiency and a perception that formal approval and acquisition processes are too slow. This lack of oversight leads to security gaps, including the potential exposure of sensitive data to AI models, circumvented procurement resulting in costly redundancies, and integration issues that create data silos. To counter this, organizations should shift to a strategy of guided evolution by making AI governance more agile and enabling, educating employees on responsible use, and establishing clear guardrails to encourage secure experimentation.
The emergence of autonomous AI agents presents a heightened risk to organizations because it amplifies the issues seen with previous "shadow AI" uses of consumer and enterprise-grade generative AI. Unlike previous AI iterations, agents can proactively execute tasks and interconnect with systems, creating greater risk of unintended data exposure, compliance violations, and systemic failure. As an example, consider a scenario where an email agent inadvertently shares confidential client data. The ineffective strategy of outright blocking these tools should be replaced with a focus on proactive risk mitigation through clear governance, comprehensive employee education, and robust security guardrails to responsibly leverage the technology's transformative potential.
AI is evolving from simple generative responses to autonomous AI agents that can reason, plan, and execute complex tasks on behalf of users, creating enormous potential for productivity and innovation. However, the agent's ability to act independently and its deep interconnectedness with sensitive data sources and workflows significantly elevates cybersecurity, data privacy, and governance risks. In this blog, we outline the key security, governance and risk management considerations organizations should assess as they look to implement into their processes and workflows agents.
3. Revisiting security - back to basics
The rise of autonomous AI agents introduces a new challenge for security, as their ability to reason, plan, and execute actions requires a distinct security paradigm beyond securing traditional AI or software. We advocate for a hybrid defense-in-depth approach that combines deterministic measures (like runtime policy enforcement) with reasoning-based defenses (like adversarial training) to create layered security. This strategy is guided by three core principles: ensuring well-defined human controllers, strictly limiting agent powers (least privilege), and mandating that all agent actions are observable and auditable to prevent rogue actions and sensitive data disclosure.
The AI supply chain, which includes data sourcing, model training, and deployment, introduces a new, higher level of complexity and risk compared to traditional software supply chains. While traditional security measures like Supply-chain Levels for Software Artifacts (SLSA) can be adapted, securing AI is unique due to the opacity of models and the critical, less mature aspects of data provenance. Data provenance can be susceptible to poisoning and tampering, so establishing tamper-proof provenance via detailed records of all model and dataset origins and modifications is essential for verifying integrity and mitigating the urgent, real-world risks posed by compromised contemporary AI models.
Think of quality assurance (QA) for products. Security assurance plays a similar, yet distinct, role for systems and applications and, yes, now also for AI. It's all about building high confidence that your security features, practices, and controls are actually doing their job and enforcing your security policies, aiming to identify gaps, weaknesses, and areas where security controls might not be operating as intended. Our goal should be to drive continuous improvement across all security domains. This proactive approach helps build confidence that AI software is not only built securely but continues to operate securely in the face of evolving threats.
For further reference as you explore these topics, check out these related papers and podcasts:
- EP245 From Consumer Chatbots to Enterprise Guardrails: Securing Real AI Adoption
- EP7 Tameron Chappell on Psychology, Trust, and AI Transformation
- EP173 SAIF in Focus: 5 AI Security Risks and SAIF Mitigations
- EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams
- EP230 AI Red Teaming: Surprises, Strategies, and Lessons from Google
- Delivering Trusted and Secure AI
- SAIF in the real world
- Best Practices for Securely Deploying AI on Google Cloud
- Shadow AI - Your next data breach might already be around the corner
