Question

How Can Confidential Computing Be Integrated with Third-Party AI Models Securely?

  • July 23, 2025
  • 2 replies
  • 43 views

Ewaz

Hi everyone,

 

I've been exploring Google Cloud's Confidential Computing products, such as Confidential VMs, Confidential GKE Nodes, and Confidential Space. The potential for protecting data-in-use is very promising, especially for sensitive AI workloads.

 

My question is:

How can we securely integrate third-party AI models (e.g., from open source or external vendors) into Confidential Space or Confidential VMs without compromising data privacy or the TEE trust boundary?

 

Are there recommended architectural patterns, isolation practices, or attestation tools we should use when the model provider is not part of our organization?

 

Appreciate any insights, resources, or examples you can share!

 

Thanks,

2 replies

ErikaB
Community Manager
  • July 29, 2025

Hi @Ewaz,
Thanks for reaching out! It looks like your question is more about Google Cloud Platform products than Google Cloud Security products.
To get the best help, we recommend posting your question in the main Google for Developers Community (https://developers.google.com/) under the Google Cloud category. The experts there will be able to provide more targeted assistance.
Thanks for understanding!


Rene
Staff
  • December 2, 2025

Hi @Ewaz,

Here is a codelab that outlines how one party could securely run a proprietary ML model created by another party - https://codelabs.developers.google.com/codelabs/secure-ml-model-confidential-space#0

Here is an example of how to securely leverage an OSS AI model - https://developers.googleblog.com/en/enabling-more-private-gen-ai/

Both leverage Google Cloud Confidential Space.
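
In case it helps to make the pattern concrete: in both examples, the party that owns the sensitive asset (the model or the data) only releases it to a workload that can present valid attestation evidence from Confidential Space. Below is a minimal sketch, in Python, of inspecting that evidence from inside the workload. The token path and claim names follow the codelab and current docs but may change, so treat them as assumptions and verify them against what your workload actually receives; a real relying party must also verify the token's signature rather than just decoding it.

# Minimal sketch: inspect the attestation token available to a workload
# running in Confidential Space, to see which claims a model/data owner
# could pin its access policy to.
# ASSUMPTIONS: the token path and claim names below follow the codelab and
# current docs; confirm them for your Confidential Space image version.
import base64
import json

TOKEN_PATH = "/run/container_launcher/attestation_verifier_claims_token"


def decode_claims(token: str) -> dict:
    """Decode the JWT payload WITHOUT verifying its signature.

    Good enough for inspecting claims during development; a relying party
    must verify the signature against the attestation verifier's public keys
    before trusting anything in here.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))


def main() -> None:
    with open(TOKEN_PATH) as f:
        claims = decode_claims(f.read().strip())

    # Claims a data or model owner might condition access on
    # (illustrative names; check the token you actually receive):
    print("software:", claims.get("swname"))  # e.g. CONFIDENTIAL_SPACE
    container = claims.get("submods", {}).get("container", {})
    print("image digest:", container.get("image_digest"))  # pins the exact workload image
    print("audience:", claims.get("aud"))


if __name__ == "__main__":
    main()

On the owner side, access to the asset (for example, a KMS key that wraps the model weights or the dataset) is then typically granted through a workload identity pool whose attribute conditions require those attested values, such as the exact container image digest, so nothing outside the attested workload can obtain it.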

Hope these are helpful.