
Hi everyone,


I've been exploring Google Cloud's Confidential Computing products, such as Confidential VMs, Confidential GKE Nodes, and Confidential Space. The potential for protecting data-in-use is very promising, especially for sensitive AI workloads.


My question is:

How can we securely integrate third-party AI models (e.g., open-source models or models from external vendors) into Confidential Space or Confidential VMs without compromising data privacy or weakening the TEE trust boundary?


Are there recommended architectural patterns, isolation practices, or attestation tools we should use when the model provider is not part of our organization?
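To make the attestation part of my question concrete, here's a minimal Python sketch of the kind of check I have in mind: decoding a Confidential Space attestation token (a JWT) and comparing its container image digest claim against the digest of the model image we expect to be running. I'm assuming the `submods.container.image_digest` claim name based on Google's token documentation, so please correct me if that's off. The sketch deliberately skips signature verification; a real verifier must validate the token's signature against Google's published keys before trusting any claim.

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    # Re-pad to a multiple of 4 so base64url decoding succeeds.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def check_workload_claims(token: str, expected_digest: str) -> bool:
    """Decode a JWT attestation token and compare its container image
    digest claim against the digest we expect for the model workload.

    NOTE: this sketch skips signature verification for brevity; a real
    verifier MUST validate the signature against Google's published
    keys before trusting any claim in the token.
    """
    _header, payload, _sig = token.split(".")
    claims = json.loads(b64url_decode(payload))
    digest = claims.get("submods", {}).get("container", {}).get("image_digest")
    return digest == expected_digest

# Synthetic token for demonstration only (not a real Google-signed token).
payload = {
    "iss": "https://confidentialcomputing.googleapis.com",
    "swname": "CONFIDENTIAL_SPACE",
    "submods": {"container": {"image_digest": "sha256:abc123"}},
}
fake_token = ".".join([
    base64.urlsafe_b64encode(json.dumps({"alg": "RS256"}).encode()).decode().rstrip("="),
    base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("="),
    "sig",
])
print(check_workload_claims(fake_token, "sha256:abc123"))  # True for the matching digest
```

Is pinning the model container's image digest in a verifier like this the right pattern when the model comes from an outside vendor, or is there a better-supported approach?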


Appreciate any insights, resources, or examples you can share!


Thanks,
