Hi everyone,

I've been exploring Google Cloud's Confidential Computing products, such as Confidential VMs, Confidential GKE Nodes, and Confidential Space. The potential for protecting data-in-use is very promising, especially for sensitive AI workloads.

My question is:

How can we securely integrate third-party AI models (e.g., from open source or external vendors) into Confidential Space or Confidential VMs without compromising data privacy or the TEE trust boundary?

Are there recommended architectural patterns, isolation practices, or attestation tools we should use when the model provider is not part of our organization?
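For context, here is a rough sketch of the attestation check I imagine the external party (e.g. the model provider) would run before releasing anything into the TEE. The token path and claim names are my understanding of the Confidential Space docs, and the signature-verification step is deliberately omitted here, so please treat this as illustrative only and correct me if I have it wrong:

```python
import base64
import json

# In Confidential Space, the workload can read a signed attestation token
# (an OIDC JWT) from this well-known path (path per my reading of the docs):
TOKEN_PATH = "/run/container_launcher/attestation_verifier_claims_token"

def decode_claims(jwt_token: str) -> dict:
    """Decode the JWT payload WITHOUT signature verification.
    Illustration only: production code must verify the signature
    against Google's published JWKS before trusting any claim."""
    payload_b64 = jwt_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def claims_match_policy(claims: dict, expected_image_digest: str) -> bool:
    # The relying party checks the issuer and that the exact workload
    # image digest is the one it audited, before releasing model weights
    # or decryption keys into the TEE.
    container = claims.get("submods", {}).get("container", {})
    return (
        claims.get("iss") == "https://confidentialcomputing.googleapis.com"
        and container.get("image_digest") == expected_image_digest
    )

# Hypothetical, locally constructed token just to exercise the helpers:
fake_payload = {
    "iss": "https://confidentialcomputing.googleapis.com",
    "submods": {"container": {"image_digest": "sha256:abc123"}},
}
fake_jwt = (
    "eyJhbGciOiJSUzI1NiJ9."
    + base64.urlsafe_b64encode(json.dumps(fake_payload).encode())
        .decode().rstrip("=")
    + ".signature"
)
print(claims_match_policy(decode_claims(fake_jwt), "sha256:abc123"))  # → True
```

Is a key-release flow gated on claims like these the intended pattern when the model provider is outside our organization, or is there a better-supported mechanism?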

Appreciate any insights, resources, or examples you can share!

Thanks,

Hi @Ewaz,
Thanks for reaching out! It looks like your question is more about Google Cloud Platform products than about Google Cloud Security products.
To get the best help, we recommend posting your question in the main Google for Developers Community (https://developers.google.com/) under Google Cloud. The experts there will be able to provide more targeted assistance.
Thanks for understanding!
