Co-authors:
- Google Cloud Office of CISO: Daryl Pereira (Director, Head of OCISO, APJ) and Hui Meng Foo (Security Risk & Regulatory Advisor, APJ)
- Electrolux: Darren Grayson Chng (Regional Data Protection Director | Privacy & AI Governance)
As organisations strive to harness the power of data to drive innovation, the rise of technologies like AI and machine learning, while offering great potential, also raises new security and privacy concerns.
A panel discussion held at the 3rd Governance, Risk & Compliance (“GRC”) Summit, organised by Google Cloud’s Office of the CISO (“OCISO”), explored the intersection of AI, cybersecurity, and privacy in this digital era, with experts from technology, consulting (KPMG), and data privacy.
Within this article, we discuss how more collaborative approaches are needed to best manage the opportunities and risks presented by AI technologies.

“According to Gartner, through 2026, organizations that don’t enable and support their AI use cases through an AI-ready data practice will see over 60% of their AI projects fail to deliver on business SLAs and be abandoned.”
Key Challenges to AI Deployment
Data Governance
Data governance is not a recent fad; it has always been foundational, supporting the protection of confidential information and compliance with privacy laws. With the advent of gen AI, data governance plays an even more critical role in enabling a responsible AI governance strategy for an organisation.
(1) Data Classification and Cross-Border Transfer Challenges
To effectively manage data, organisations need to understand and classify the types of systems and data they handle. For regulated industries, it is also important to understand the nuances between company confidential, customer, personal, and sensitive data. Over time, data laws in some APAC jurisdictions have created new categories of data and attached obligations to each; a good example is China’s and Vietnam’s “core”, “important”, and “general” data categories. Further, if data is to be stored or processed overseas for AI/ML purposes, cross-border transfer rules require organisations to know a dataset’s origin, geographical storage location, and transfer destinations.
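To make this concrete, the sketch below shows one way a dataset inventory record could capture classification, jurisdiction category, origin, storage location, and transfer destinations. The schema, field names, and triage rule are illustrative assumptions rather than a prescribed standard.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record: field names and categories are illustrative,
# not a prescribed schema. Jurisdiction-specific categories (e.g. China's and
# Vietnam's "core"/"important"/"general") are modelled here as plain labels.
@dataclass
class DatasetRecord:
    name: str
    classification: str          # e.g. "company confidential", "personal", "sensitive"
    jurisdiction_category: str   # e.g. "core", "important", "general"
    origin: str                  # where the data was originally collected
    storage_location: str        # where it is stored or processed today
    transfer_destinations: list[str] = field(default_factory=list)

def requires_transfer_review(record: DatasetRecord, destination: str) -> bool:
    """Flag a cross-border transfer for legal review under this toy policy."""
    crosses_border = destination != record.storage_location
    regulated = record.jurisdiction_category in {"core", "important"}
    return crosses_border and regulated

record = DatasetRecord(
    name="customer_orders",
    classification="personal",
    jurisdiction_category="important",
    origin="CN",
    storage_location="CN",
)
print(requires_transfer_review(record, "SG"))  # True: escalate before moving data
```

Even a simple structure like this forces the questions that cross-border transfer rules ask: what the data is, where it came from, and where it is going.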
(2) Legal Fragmentation Across Jurisdictions
Within APAC, varying definitions of “personal data”, “consent”, and cross-border transfer requirements across jurisdictions create fragmentation. This lack of harmonisation makes it challenging for regional or global organisations to apply a single, consistent data governance strategy, increasing the complexity of ensuring compliant and secure implementation of AI technologies.
(3) Less Mature Internal Data Governance
In organisations with less mature data governance, teams may need not only to build or upgrade their data inventories, but also to establish or update policies, processes, and roles and responsibilities for managing data. Coordinating across many departments to uncover and document data handling practices, and to maintain accurate inventories, can be complex and time-consuming. These necessary but resource-intensive tasks are often seen as burdensome, and frequently no one wants to take ownership of them. This lack of governance maturity significantly delays progress and increases the organisation’s overall risk exposure.
(4) Structural Silos may Cause Blind Spots
Business functions may begin experimenting with gen AI using sensitive data without involving Legal, Risk, Compliance, or Privacy teams early on. This creates blind spots and undermines the pre-assessment processes intended to mitigate potential AI security and privacy risks. Lack of visibility into ‘shadow AI’ is an immediate and real issue that organisations need to grapple with.
AI Governance
(5) Development of AI Governance Framework
A common question organisations face is whether to integrate AI governance into existing risk management frameworks or to create a standalone AI governance framework.
While AI and traditional IT systems share some common risk management elements, new concerns, such as explainability and bias, are unique to AI. These new risks may require a dedicated AI governance framework that builds on existing governance processes for technology risk, security, and privacy.
(6) AI Literacy and Talent Gap
A significant shortcoming observed is the current standard of AI literacy and limited access to appropriate AI talent, particularly within the risk, compliance, and audit functions. This leads to an over-reliance on checklists and questionnaires, which can reduce risk assessment to box-ticking and overlook the underlying intent of the requirements.

“You can’t use data that you cannot find, understand and trust - data is the foundation of AI/ML.”
Utilising gen AI in a business setting can present various risks relating to accuracy, privacy and security, and regulatory compliance. Below, we explore practical strategies to manage these AI risks.
(1) Data Governance as a critical pillar for trusted AI
(i) Data Provenance: Data provenance, which is a record of a dataset’s origins, how it has been altered, and by whom, is crucial. It generally supports:
- Trust in AI output, by understanding where data came from
- Traceability, for compliance, audits, and investigations
- Risk management, especially where personal, sensitive, or company confidential data is involved.
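As an illustration, a provenance log can be as simple as an append-only list of events recording who changed a dataset, how, and when, with a content hash for tamper-evidence. The minimal Python sketch below uses hypothetical field names, not any particular lineage standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical provenance event: an entry in an append-only log describing
# where a dataset came from and how it changed. Field names are illustrative.
def provenance_event(actor: str, action: str, source: str, payload: bytes) -> dict:
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,                                       # who made the change
        "action": action,                                     # e.g. "ingested", "anonymised"
        "source": source,                                     # upstream origin of the data
        "content_hash": hashlib.sha256(payload).hexdigest(),  # tamper-evidence
    }

log = [
    provenance_event("etl-pipeline", "ingested", "crm_export_2024", b"raw rows..."),
    provenance_event("privacy-team", "anonymised", "crm_export_2024", b"masked rows..."),
]
print(json.dumps(log, indent=2))  # auditable trail for compliance and investigations
```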
(ii) Reduce Silos: The Case for a Cross-Functional, Coherent Approach
Given the overlap in needs across privacy, AI governance, and data governance, it is beneficial to take a broader, cross-functional approach to AI data management. By sharing information, responsible teams can avoid sending repeated, siloed questionnaires to business functions, reducing duplicated effort and freeing up time and resources.
(2) Develop an AI Strategy Roadmap
(i) AI Governance Committee
Choosing the right leaders and functions to lead AI governance sets the tone for a robust AI governance program within the organisation. An independent AI governance committee offering diverse views from relevant stakeholders, including business, Risk, Compliance, and Audit functions (the “3 lines of defense” model), is recommended. Including the AI use case owner(s) can also help establish accountability upfront. In recognition that AI is not solely technology- or business-driven, including an HR professional in the AI governance committee is equally important, as this can help address the human and societal impact of AI.
(ii) Define key AI principles, risk appetite, policy and standards for AI
- Clearly define the key guiding AI principles that will drive the policy statement forward.
- Establish the organisation’s risk appetite and tolerance limits for AI; these will be unique to each organisation’s culture and structure. Clearly define "red lines": specific AI applications or use cases that exceed the organisation’s risk appetite and should never be pursued, e.g. using AI for intrusive workforce monitoring. Setting out these red lines upfront (see the sketch after this list) saves time and effort later in the implementation timeline, by preventing business functions from pursuing strictly prohibited activities.
- Review and align with existing enterprise-wide risk management approaches to develop a risk assessment methodology.
- Review existing internal policies and standards, and develop new or uplift existing artifacts where necessary to address AI-specific risks (e.g. AI usage and AI procurement policies, an AI incident response plan, an AI privacy policy, marketing collateral, and other documentation to support the explainability, transparency, and interpretation of AI).
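As a simple illustration of the red-lines point above, prohibited applications can be encoded once in machine-checkable form and screened before any build work starts. The use-case labels and red-line entries in this Python sketch are hypothetical examples, not an established taxonomy.

```python
# Hypothetical red-line screen: prohibited AI applications are encoded once,
# and every proposed use case is checked against them at intake. The labels
# and red-line entries below are illustrative only.
RED_LINES = {
    "intrusive workforce monitoring",
    "social scoring of individuals",
}

def screen_use_case(description: str, labels: set[str]) -> str:
    """Return a triage decision for a proposed AI use case."""
    if labels & RED_LINES:
        return f"REJECTED: '{description}' crosses a defined red line."
    return f"PROCEED TO RISK ASSESSMENT: '{description}'"

print(screen_use_case("keystroke analytics for staff", {"intrusive workforce monitoring"}))
print(screen_use_case("invoice summarisation", {"document processing"}))
```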
(iii) AI Risk Assessment Framework
Google has developed a Secure AI Framework (“SAIF”) that can help organizations to address security risks across AI system components (data, models and outputs) and is aligned with Google’s Responsible AI practices.
SAIF is designed to help security risk practitioners mitigate risks specific to AI systems, such as model exfiltration, data poisoning, prompt injection of malicious inputs, and sensitive data disclosure from training data.
An interactive SAIF Risk Self Assessment is also available to help security practitioners identify and understand AI risks relevant to their organization. The SAIF risk map includes causes, impact, and potential mitigations, with examples of real-world exploitation. Each risk is mapped to key controls that can be linked to the role of the model creator and/or model consumer, depending on who is responsible for implementing the controls to mitigate the risks.
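To give a feel for the shape of such a risk-to-control mapping, the sketch below keys each risk to its impact and to controls owned by either the model creator or the model consumer. The entries are illustrative placeholders and do not reproduce the actual SAIF risk map.

```python
# Hypothetical risk map in the spirit of a risk-to-control mapping; the risks,
# impacts, and controls below are illustrative and not the actual SAIF content.
RISK_MAP = {
    "prompt injection": {
        "impact": "model is steered into unintended or unsafe behaviour",
        "controls": [
            {"name": "input sanitisation and filtering", "owner": "model consumer"},
            {"name": "adversarial testing / red teaming", "owner": "model creator"},
        ],
    },
    "training data poisoning": {
        "impact": "corrupted training data degrades or backdoors the model",
        "controls": [
            {"name": "training data provenance checks", "owner": "model creator"},
        ],
    },
}

def controls_for(role: str) -> list[str]:
    """List the controls a given role is responsible for implementing."""
    return [
        f"{risk}: {control['name']}"
        for risk, detail in RISK_MAP.items()
        for control in detail["controls"]
        if control["owner"] == role
    ]

print(controls_for("model consumer"))  # what a deploying organisation must own
```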
In addition, the new ISO/IEC 42005:2025 provides guidance for organisations conducting AI system impact assessments and ensuring compliance with emerging global regulations. Its concept of "foreseeable damage" can help assign accountability to the appropriate business functions. Organisations can also extend their existing privacy governance documentation and processes to include the assessment of AI-specific privacy risks.
(3) Upskill all three lines of defense within organisations
Ensuring AI literacy across all three lines of defense is important. Not everyone needs to be a data scientist, but everyone will need to understand the key concepts of machine learning to effectively evaluate AI risks and apply human judgement in monitoring them. AI governance professionals should not only understand AI, but also have experience in governance, risk, and compliance, and be able to translate legislative requirements into actionable policies. As you embark on your gen AI journey, useful public resources such as Google Cloud Generative AI Skills Boost can support practical, hands-on learning.
Organisations should not aim for perfection initially. It is better to pace oneself, starting with small yet purposeful steps and maturing standards over time. Outcomes should always be measurable and aligned with the organisation’s overall business strategy.
(4) Future Trends
(i) Policy as Code
There is a general movement towards "policy as code" or "compliance as code", which aims to translate human-readable control standards and regulations into standard, machine-readable code. This move toward continuous, automated compliance and controls monitoring is seen as a future trend that can streamline compliance and audit processes within an organisation.
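A minimal sketch of the idea follows, assuming hypothetical control IDs and configuration fields: each human-readable control is paired with a machine-readable check that can run continuously against workload configurations.

```python
# Minimal policy-as-code sketch: human-readable control standards expressed as
# machine-readable rules, evaluated automatically against system configs.
# The control IDs, descriptions, and config fields are hypothetical.
POLICIES = [
    {"id": "DG-01", "description": "Training data must be classified",
     "check": lambda cfg: cfg.get("data_classified") is True},
    {"id": "DG-02", "description": "Personal data must stay in approved regions",
     "check": lambda cfg: cfg.get("storage_region") in {"SG", "JP", "AU"}},
]

def evaluate(config: dict) -> list[str]:
    """Run every policy check and report failures for continuous monitoring."""
    return [
        f"{p['id']} FAILED: {p['description']}"
        for p in POLICIES
        if not p["check"](config)
    ]

workload = {"data_classified": True, "storage_region": "US"}
print(evaluate(workload))  # ['DG-02 FAILED: Personal data must stay in approved regions']
```

The same checks that gate deployment can be re-run on a schedule, which is what turns point-in-time audits into continuous controls monitoring.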
(ii) Partnership with AI providers
Prior IAPP research has found that more than 70% of organisations rely on third-party AI, so the responsibility for ensuring that an AI system is safe and responsible may be spread across multiple roles, both internal and external to an organisation. AI deployment should be treated as an ongoing project, requiring continuous testing, adaptation, and partnership with AI model providers.
AI model providers, especially foundation model providers, operate under a concept of "shared responsibility" with the deploying organization. Organizations should seek providers who are willing to closely collaborate and provide transparency into their AI model testing, datasets, and guardrails.
At Google Cloud, we believe in partnering closely with organisations, based on our shared fate model, to maximise security outcomes for your AI workloads. Google continues to uphold our strong AI privacy commitments to protect customer data, enabling organisations to pursue data-rich use cases while complying with relevant regulations and laws.
Next Steps on gen AI
In summary, the key to realising the full benefits of AI lies in a holistic and collaborative approach, both within the organisation and with the AI provider, whilst applying robust security and privacy by design throughout the AI development lifecycle.
For more articles like this, be sure to subscribe to our Google Cloud CISO Perspectives newsletter for details on the latest cybersecurity developments at Google Cloud, as well as our upcoming events and webinars.