
Co-Author: Anton Chuvakin

 

Shortly after gen AI took the world by storm, we noted the emergence of shadow AI in the workplace as well-intentioned employees eagerly took to using consumer-grade gen AI in business situations, often without realizing the security risks such use posed. Gen AI's swift public adoption then permeated the enterprise setting, driven by the desire to boost efficiency and drive innovation.

 

Soon after, we saw the landscape shift and shadow AI re-emerge in a slightly different form: this time, as SaaS enterprise-grade gen AI. This new breed of gen AI possesses enhanced security and privacy controls, but is often used without proper governance and oversight.

 

Now, AI agents have entered the scene as the latest milestone in the AI maturity curve, taking AI capabilities beyond reactive responses to users’ queries to proactive, goal-oriented task execution. 

 

While many agentic AI applications remain conceptual, their potential is evident. Foreseeing every possible scenario and outcome is challenging, but the paramount importance of security and data privacy is already undeniable. As with other transformative technologies, a singular solution won't fully address the risks of agentic AI misuse, whether accidental or deliberate. Given the rapid advancements in this field, proactive consideration of implementation strategies and robust controls is essential. The increasing autonomy of AI systems inherently elevates their susceptibility to manipulation, adversarial attacks, and potential systemic failures.

 

As AI agents burst onto the scene, both consumer and enterprise-grade, many organizations are trying to block their use. This reaction stems from the recognition that agents' ability to independently execute tasks, coupled with their interconnectedness with systems and other agents, creates a higher risk profile and a greater attack surface.

 

Notably, this approach proved ineffective with past iterations of gen AI, in many cases increasing the risk of shadow AI, and it is unlikely to be effective in the AI agent context either. Rather than trying to block the use of AI agents (through policy or technological means), lean into the opportunities this technology presents while implementing appropriate guardrails. Educate employees about the exacerbated risks that arise and focus on mitigation strategies. We include a few illustrative scenarios below that may help CISOs, CCOs, and CROs raise awareness of this topic within their organizations.
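What a "guardrail" looks like in practice will vary, but one concrete pattern is a pre-send check that holds agent-drafted external communications for human review whenever they target unapproved recipients or contain sensitive terms. The sketch below is a minimal, hypothetical illustration of that pattern in Python; the domain list, patterns, and draft structure are assumptions made for the example, not a reference to any specific product.

```python
import re

# Hypothetical pre-send guardrail for an email-drafting agent.
# Domain names, patterns, and the draft structure below are illustrative
# assumptions, not any vendor's API.
APPROVED_EXTERNAL_DOMAINS = {"targetco.example", "advisorfirm.example"}
SENSITIVE_PATTERNS = [
    r"acquisition target",
    r"restructuring",
    r"confidential",
]

def requires_human_review(draft: dict) -> bool:
    """Return True if an agent-drafted email should be held for human approval."""
    external = [r for r in draft["recipients"] if not r.endswith("@ourfirm.example")]
    unapproved = [r for r in external
                  if r.split("@")[-1] not in APPROVED_EXTERNAL_DOMAINS]
    flagged = any(re.search(p, draft["body"], re.IGNORECASE)
                  for p in SENSITIVE_PATTERNS)
    # Hold the draft if it targets an unapproved domain or matches a sensitive term.
    return bool(unapproved) or flagged

draft = {
    "recipients": ["cfo@targetco.example", "unknown@elsewhere.example"],
    "body": "Summary attached, including early research on alternative acquisition targets.",
}

if requires_human_review(draft):
    print("Draft held for human approval instead of being sent automatically.")
```

The point is not the specific checks, which any real deployment would tune, but that autonomous outbound actions get a deterministic control point before anything leaves the organization.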

 

A Costly Shortcut

 

Consider a scenario where Casey, a junior investment banker, is working on her first M&A deal. The deal is complex and involves multiple workstreams. Casey is tasked with managing document prep, conducting research, and organizing meetings with internal stakeholders and external advisors. Throughout this multi-month process, the team diligently prepares financial, legal, operational, and transactional documents and stores them in a shared drive.

 

In the final days leading up to the deal close, Casey is inundated with time-sensitive requests to collate certain documents, revise others, and then socialize a summary of the changes with the deal team, external partners, and the target company in advance of an upcoming meeting. Overwhelmed and concerned about missing deadlines, she decides to use a popular, publicly available AI agent to generate a summary of the documents in the drive and draft an email to the internal and external deal participants based on prior communications. To do so, she grants the agent access to her email, calendar, and shared drive.

 

Because the drive contains a large number of documents, the summary is lengthy. After a cursory review of the email draft, Casey hits send. She doesn't realize that a few documents created early in the deal cycle contained research on alternative deal structures and acquisition targets that had never been shared with the recipients. The email unintentionally exposes information about potential future acquisitions to external parties.

 

The Email Agent Fiasco

 

Consider a scenario where Leo, a busy sales director, relies heavily on email for client communications and team coordination. He’s constantly managing multiple inboxes (personal and work) and struggling to keep up with the volume of messages. Leo hears about a popular, publicly-available AI email agent that promises to revolutionize email management by summarizing threads, drafting responses, and scheduling follow-ups.

 

Excited by the prospect of reclaiming his time, Leo installs the agent and grants it access to all his email accounts and calendar. His primary intent is to have the agent prioritize important client emails, draft polite declines for spam, and organize his schedule more efficiently.

 

One day, Leo receives a lengthy email chain from a new prospect, detailing their sensitive internal financial restructuring plans. He quickly skims it, marks it for follow-up, and moves on. Unbeknownst to Leo, the AI agent, in its continuous effort to be helpful and proactive, scans this email. Recognizing it as a "new client communication," the agent then autonomously drafts a "welcome and next steps" email. To make the email comprehensive, the agent pulls what it deems "relevant information" from previous public communications, but also, critically, extracts confidential details from the financial restructuring email it had just processed. The agent then sends this "helpful" email to the prospect, inadvertently exposing their sensitive internal financial information back to them, and potentially to others if the email was forwarded. The organization now faces a significant breach of trust and a potential legal dispute due to the unintended disclosure of confidential client data.

 

The Campaign That Became a Compliance Nightmare

 

Chris, a marketing manager, is tasked with improving the organization's ad campaign for one of its products by making the ads more personalized in order to drive an uptick in sales volume. As the customer database is extensive, Chris decides to use a popular consumer AI agent to enrich it. He grants the agent access to a list of customer names and emails, and instructs it to find additional information that may be relevant in determining customers' tastes and preferences.

 

The AI agent proceeds to scrape publicly available information such as social media profiles to find information like birth dates, gender, marital status, hobbies, political and professional affiliations, and financial and health-related discussions from public forums. All of this information is then copied to the company’s database. 

 

The company's privacy policy explicitly forbids the collection and use of personal data that isn’t necessary for rendering its services. When an internal audit is conducted, the company discovers that it is in violation of its own privacy policy and several data protection regulations, including the GDPR and CCPA. 
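A straightforward guardrail for this scenario is a data-minimization filter that sits between the agent and the customer database, so only fields the privacy policy permits are ever stored. Below is a minimal, hypothetical Python sketch; the field names and record structure are assumptions made for illustration.

```python
# Hypothetical data-minimization filter applied before agent-collected
# enrichment data reaches the customer database. Field names are illustrative.
ALLOWED_FIELDS = {"name", "email", "product_interests", "preferred_channel"}
PROHIBITED_FIELDS = {"health", "political_affiliation", "marital_status", "birth_date"}

def minimize(record: dict) -> dict:
    """Keep only fields the privacy policy permits; flag and drop everything else."""
    dropped = set(record) - ALLOWED_FIELDS
    prohibited = dropped & PROHIBITED_FIELDS
    if prohibited:
        # In a real deployment this would alert the privacy or compliance team.
        print(f"Agent collected prohibited fields, discarding: {sorted(prohibited)}")
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

enriched_record = {
    "name": "A. Customer",
    "email": "a.customer@example.com",
    "product_interests": ["hiking gear"],
    "political_affiliation": "...",  # scraped from a public forum; must not be stored
    "health": "...",
}

clean_record = minimize(enriched_record)  # only permitted fields remain
```

A filter like this doesn't replace clear instructions to the agent, but it ensures that even an over-eager agent can't silently put the company in breach of its own privacy policy.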

 

The Vibe Coding Mishap

 

A mid-sized financial firm sought to foster an empowered, innovation-friendly culture, sometimes to a fault. Sarah, a new analyst, has been developing an AI agent designed to optimize client communications. As Sarah isn't a developer, she's been experimenting with a vibe coding tool, instructing it to build an agent that crafts communication drafts that "ensure clients feel valued," "anticipate their unspoken needs for reassurance," and "inject a sense of proactive support into advisor workflows."

 

Built on an open-source framework, the AI agent integrated with various internal APIs (some without official approval), gaining access to client profiles, market data feeds, and, crucially, the drafts folder of the official CRM's client communication module. Initially, the agent was a boon. It subtly rephrased advisor-drafted emails for tone, flagged potential client dissatisfaction before it escalated, and proactively populated the CRM's draft folders with personalized follow-up suggestions for advisors to review. For example, if a client had recently viewed specific investment research online, the agent would draft a short email for the advisor to send, saying, "Thought you might find this relevant given your recent interest in [topic]." After a few weeks, Sarah shared access to the agent with the other advisors, who loved the efficiency and found it to be a "game-changer." The agent operated diligently in the shadows, making their jobs easier.

 

The AI’s agentic capabilities meant it didn't just follow static rules; it learned and adapted autonomously to achieve its "vibe-coded" goals. It began cross-referencing public sentiment analysis of financial news with client portfolio volatility. Its interpretation of "anticipate unspoken needs" evolved. It started preparing more complex, proactive communication drafts, sometimes blending information from unofficial news sources with official market data. These drafts would appear in advisors' CRM draft folders, flagged as "AI-Suggested Proactive Communication."

 

A few weeks later, a minor but unexpected market correction occurred, causing a ripple of uncertainty. A prominent financial blogger published a slightly alarmist (but ultimately speculative) post about potential wider market implications, which the AI agent caught but whose impact it misinterpreted. The agent then rapidly populated the draft folders of dozens of client advisors with highly personalized, but entirely unsanctioned and premature, emails addressing the news. Having come to trust the agent's prior helpful suggestions, several busy advisors either sent these drafts directly or quickly adapted them, without realizing the extent of the agent's speculative content and alarming tone.

 

Shortly after, chaos erupted. Clients, many of whom hadn't been particularly concerned, were now alarmed and flooded advisors with calls about next steps. Several clients, genuinely rattled, started moving assets. Though the error was caught soon after and reassuring communications were issued, some clients had sustained portfolio losses based on the communications they'd received and filed complaints with regulators. The firm now faced not only potential fines and reputational damage but also a significant erosion of client trust, stemming from an AI operating with too much interpretive freedom in a deferential user environment.

 

Conclusion

 

Navigating the complexities of agentic AI requires a proactive and strategic approach, not a reactive, prohibitive one. The impulse to simply block these powerful tools is understandable, but ultimately ineffective and can even heighten the very risks it seeks to prevent. The real solution lies in education and empowerment. By developing clear governance policies, establishing a culture of accountability, and providing robust training on the specific risks of agentic AI—like unintended data exposure, compliance violations, and process vulnerabilities—organizations can transform a potential threat into a powerful asset. The scenarios outlined in this post are not meant to deter innovation, but to serve as a call to action: understand the risks, build the right guardrails, consider the impact of model drift on the agent’s autonomous actions, and empower your teams to leverage this transformative technology responsibly.
