Remember our 2023 spotlight on Shadow AI, where we explored the risks of employees using unapproved consumer AI tools for work? Well, the landscape has shifted since enterprise-grade genAI entered the market. As genAI permeates every aspect of business, we're seeing a new, more insidious form of Shadow AI emerge, and it's happening within the walls of the enterprise itself.
In response, many cybersecurity leaders are tempted to follow the old blueprint for consumer technologies that were rapidly adopted by businesses: think mobile devices, various SaaS services and, of course, cloud itself. AI has its similarities, but the pace of AI adoption is much faster and the barriers to use are much lower.
Comparing AI adoption speed to cloud adoption is like comparing a space launch to a leisurely horse ride. Your old security playbooks? They weren't built for orbital velocity. Some elements of the old blueprints work, but others need to evolve. Otherwise, unsanctioned AI use will become prevalent, attempts to rein it in will prove futile, and the resulting unmitigated risks will exceed manageable levels. The prudent approach has to deliver both agility and security, not just one or the other.
Business AI Gone Rogue: Shadow AI re-emerges
Based on what we’re hearing from customers in 2025, the proliferation of genAI has given rise to "Shadow AI for business": employees, teams, and even entire departments making use of enterprise-grade AI tools and platforms without proper oversight or governance approval.
The crucial distinction from our previous discussion is that this new form of Shadow AI involves enterprise-grade tools rather than consumer-grade ones, yet those tools are being deployed without the enterprise oversight they require. They are still Shadow AI: they operate outside the established, intended governance framework and pose risks to the organization’s cybersecurity posture.
Why is this happening?
As a recent research paper from NRG highlights, "bottom-up" shadow AI activity is often well-intentioned and driven by a desire for efficiency and innovation. Employees are experimenting with AI, finding operational gains and communication improvements, and even building custom AI solutions to streamline their workflows.
In some cases, it isn’t even a matter of employees going rogue: team leads are actively encouraging the use of specific AI products, bypassing established governance protocols to capture business value they believe outweighs the risks.
We're seeing this happen for several reasons, but the most common is perceived inefficiency in the governance process, which drives employees to pursue workarounds.
The Warning Signs
In a rapidly evolving tech landscape where even minute strides can contribute to a competitive advantage, employees feel compelled to act quickly, believing that going through the formal governance review process will hinder their ability to keep pace. This often leads to:
- “Governance Theater”: Governance policies are either poorly communicated, unenforced, or both, leading to control gaps and limited visibility for leadership. Governance not put into action does not reduce risk, and may in fact increase it.
- Circumvented Procurement: AI governance isn't effectively integrated with procurement, leaving gaps in risk management and leading to increased costs, operational inefficiencies and redundancies across the organization.
For instance, different teams might end up using different AI tools for similar tasks, leading to duplicated efforts in developing models or analyzing data. This lack of centralized procurement and oversight can also result in unanticipated licensing fees for multiple AI tools with overlapping functionalities and make it difficult to track and manage overall AI spending. Consequently, operational inefficiencies can arise, hindering overall business performance and potentially leading to misallocation of resources.
Circumvented Procurement leads to the 'AI Tool Zoo': multiple teams paying for slightly different digital monkeys doing the same tricks. It's costly, chaotic, and makes data integration a nightmare.
- Security Gaps: Employees using shadow AI may inadvertently expose sensitive information, including private customer data, confidential company information, and intellectual property, when interacting with AI models. Using ungoverned AI tools for sensitive data isn't just a gap; it's leaving the crown jewels on the sidewalk with a 'Free to a Good Home' sign.
AI models, particularly those offered as SaaS, can be trained on user inputs unless users explicitly opt out, creating a pathway for sensitive data to be stored and potentially accessed by unauthorized third parties.
Ungoverned AI tools often lack the robust security measures, such as encryption and access controls, and the continuous monitoring that sanctioned IT systems typically have in place, making them easier targets for exploitation. This expanded attack surface significantly increases the potential for successful cyber intrusions and data exfiltration.
- Integration issues: When different teams utilize various unapproved AI tools, it can create data silos, making it difficult to share and integrate data across the organization. This lack of interoperability can hinder collaboration and preclude the organization from achieving a unified view of its data, ultimately impeding alignment with the overall business strategy.
In an interesting, albeit unfortunate, paradox, poor integration can also inadvertently bridge silos, exposing data to the wrong models. This is particularly problematic when PII and other sensitive or restricted-access data are involved.
Plugging the Gaps: A Proactive Approach
Ultimately, you want to be “bold and responsible”: you need to manage risk (yes, really, despite this sounding like a cliché) while encouraging innovation (recognizing that security plays a supporting role). A delicate balance must be struck. The leader who attempts to reduce risk to zero by blocking new technologies isn't a risk manager; they become an unwitting anchor that weighs the business down and stifles innovation.
While the need for comprehensive governance isn’t new, the rise of “Shadow AI for business” highlights the need to bring this increasingly prevalent form of shadow AI into the governance fold. That means revising governance processes to be more streamlined, agile, and user-friendly, effectively meeting people halfway, because the speed of change is far outpacing the timelines of existing governance processes.
So, how do you navigate this brave new world where employees are essentially wielding powerful, unsupervised digital assistants? You need a strategy that's less about control and more about "guided evolution."
1. Reimagine AI Governance as an enabler
An effective governance committee is more than a bureaucratic hurdle. It’s an agile and flexible body, composed of empowered subject matter experts. Recognize the reality: Shadow AI exists. Acknowledge the enthusiasm and the potential productivity gains AI-enabled applications afford, and frame it as an opportunity, not just a threat. Position experimentation with genAI as a way to learn and innovate, helping employees feel like they're part of the solution, not the problem.
However, set clear guardrails. While embracing exploration and experimentation, clearly define what types of activities are absolutely and unequivocally off-limits, providing concrete examples and rationales. These might include putting sensitive data into prompts, such as customer data, code, network configurations, pen-test results, incident reports, email conversations, or internal meeting minutes or recordings, without considering where that information would be stored, who may have access to it, or whether it could be used for model training.
Articulating which types of use cases are acceptable and which aren’t will raise awareness and sharpen employees’ judgment, driving a risk-management-oriented mindset. Think of this as setting the perimeter fence, even if there are playgrounds inside.
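To make the "no sensitive data in prompts" guardrail concrete, here's a minimal sketch of a pre-submission check that flags obvious patterns before a prompt leaves the organization. The patterns, the internal domain, and the example prompt are all hypothetical; in practice you would lean on your existing DLP tooling and data classification scheme rather than a hand-rolled regex list.

```python
"""Illustrative sketch only: flag obvious sensitive-data patterns in a prompt
before it is sent to an external genAI service. Patterns are assumptions for
illustration, not a complete or authoritative detection set."""

import re

# Hypothetical patterns for data the guardrails declare off-limits in prompts.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "private key material": re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    "internal hostname": re.compile(r"\b[\w-]+\.corp\.example\.com\b"),  # example internal domain
}

def flag_sensitive_content(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Summarize this incident: admin login from db01.corp.example.com"
    findings = flag_sensitive_content(draft)
    if findings:
        print("Review before sending; prompt appears to contain:", ", ".join(findings))
```

Even a coarse check like this is mainly an awareness nudge: it prompts the employee to pause before pasting restricted material into an external tool, which is exactly the judgment the guardrails are trying to build.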
There will always be cases that haven’t been considered, or where the guidance isn’t apparent. Implement a structured exception process for time-sensitive situations: clear documentation, stringent criteria, decisive experts, and a streamlined review cycle when business-critical criteria are met. Keep enough friction in the exception path to discourage casual circumvention, so it doesn't become the default route.
2. Educate and empower
Avoid outright bans on AI use. Instead, create a clear, well-communicated pathway for responsible AI adoption, informed by continuous feedback and iteration. Amplify and reinforce governance policies through clear and consistent communication channels, including Acceptable Use Policy updates, easily accessible documentation on approved use cases, recommended tools, and clear protocols for reporting potential issues or risks. Communicate often, sharing policy updates and new developments in town halls and team meetings.
Launch comprehensive, general and persona-based training programs. This isn't just about how to craft prompts or use genAI-enabled capabilities in applications, but more importantly, about the risks involved and the best practices to mitigate them, including avenues through which to escalate potential issues and to get help. Focus on data privacy, intellectual property, security best practices, and the potential for hallucinations and inaccuracies in genAI outputs. Make it engaging, maybe even a little entertaining (to the extent that's possible when it comes to security training). Periodic training that is divided into manageable segments is ideal.
Foster a culture of secure experimentation. Encourage employees to explore genAI within defined boundaries and to share their learnings and best practices. Consider evaluating and potentially endorsing certain genAI platforms and applications that meet your security and compliance requirements. This gives employees safer options and enables better oversight.
Consider implementing monitoring and detection mechanisms. This is a tricky one and needs to be done thoughtfully. You don't want to become "Big Brother," but you do need some visibility into how genAI is being used within the organization. Focus on detecting anomalies or potential policy violations rather than snooping on individual usage.
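As an illustration of "visibility without Big Brother," here's a minimal sketch that summarizes genAI traffic by department rather than by individual. It assumes a hypothetical CSV export from your web proxy with department and destination_host fields, and the domain list is only an example you would maintain yourself.

```python
"""Illustrative sketch only: aggregate genAI service usage by department from
proxy logs, without attributing activity to individuals. Field names and the
domain list are assumptions, not a reference to any specific product."""

import csv
from collections import Counter

# Example genAI service domains to watch for; maintain your own list.
GENAI_DOMAINS = {"api.openai.com", "claude.ai", "gemini.google.com"}

def summarize_genai_usage(proxy_log_path: str) -> Counter:
    """Count requests to known genAI domains per department (not per user)."""
    usage = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("destination_host", "").lower()
            if any(host == d or host.endswith("." + d) for d in GENAI_DOMAINS):
                usage[row.get("department", "unknown")] += 1
    return usage

if __name__ == "__main__":
    for dept, count in summarize_genai_usage("proxy_log.csv").most_common():
        print(f"{dept}: {count} genAI requests")
```

Aggregating at the department level keeps the focus on spotting policy gaps and anomalous spikes, not on monitoring individual employees.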
3. Iterate and adapt
Encourage the governance committee to be a source of innovation, fostering use case ideation and providing guidance that matches the speed of business, rather than simply acting as a gatekeeper. A "well-lit path" burdened by friction and slow approvals will not be popular; it causes frustration, erodes trust in the process, and leaves employees to seek ways around it.
The approach needs to be agile and adaptable to new threats and opportunities. Gather feedback from employees. Talk to the people on the ground who are actually using these tools. Their insights will be invaluable in refining your strategy.
Measure and analyze the impact on business and security processes. Track metrics such as adoption rates, reported risks, and observed changes in productivity or innovation to assess what's working and what isn't, and to inform future areas of investment (as well as identify those that may not be a good fit).
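As a rough sketch of what such measurement might look like, here's a small example that derives adoption and exception rates from a hypothetical register of AI tool requests. The field names, status values, and sample entries are assumptions for illustration, not a prescribed schema.

```python
"""Illustrative sketch only: compute simple governance metrics from a
hypothetical register of AI tool requests."""

from collections import Counter

# Hypothetical register entries: each request records a tool and its outcome.
requests = [
    {"tool": "TranslateBot", "status": "approved"},
    {"tool": "SummarizeAI", "status": "approved"},
    {"tool": "SummarizeAI", "status": "exception_granted"},
    {"tool": "CodeHelper", "status": "rejected"},
]

status_counts = Counter(r["status"] for r in requests)
total = len(requests)

# Share of requests that ended up sanctioned (approved or granted an exception).
sanctioned = status_counts["approved"] + status_counts["exception_granted"]
print(f"Sanctioned adoption rate: {sanctioned / total:.0%}")

# A rising exception rate can signal that the standard path is too slow.
print(f"Exception rate: {status_counts['exception_granted'] / total:.0%}")
```

Even simple trend lines like these give the governance committee early warning that the official path is falling behind demand.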
The goal isn't zero Shadow AI – that's a fantasy. It's about coaxing it out of the dark corners and into supervised, well-governed daylight: providing guidance and education, and fostering a culture of responsible innovation. We need to empower employees to leverage the potential of genAI while mitigating the inherent risks. Think of it as teaching them how to drive a race car safely, rather than just locking them out of the garage.
Next steps:
- Streamline AI Governance: Revise governance processes to be more agile and user-friendly.
- Educate and Empower, Don't Ban: Avoid outright bans on AI. Instead, create a well-communicated pathway for secure AI adoption.
- Implement "Guided Evolution": Set clear guardrails for AI use, defining off-limit activities. Encourage secure experimentation within boundaries.
- Monitor and Iterate: Gather feedback from employees and measure the impact of AI adoption. Regularly review and update policies to adapt to new threats and opportunities.