Question

Tech Stack for Agentic AI

  • April 20, 2026
  • 1 reply
  • 10 views

vktaylor

We are developing a B2B agentic AI workflow and have received architecture guidance from Google. The proposed stack:

  • Google Identity Platform and App Engine for secure ingress
  • Cloud Storage with Customer-Managed Encryption Keys (CMEK) for document persistence
  • Document AI for extraction
  • Vertex AI for enterprise inference
  • Cloud DLP for data loss prevention
  • Cloud Logging and Monitoring for observability

We have been advised to execute a Business Associate Agreement (BAA) with Google prior to launch.
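For context, the intended request path looks roughly like this (a minimal sketch with stubbed stages; the function names are our own placeholders, not Google APIs):

```python
# Sketch of the proposed pipeline order. Each stage is a stub standing in for
# the real GCP service call (Document AI, Cloud DLP, Vertex AI); the point is
# the ordering: extract -> scan input -> infer -> scan output.
from typing import Callable

def make_pipeline(extract: Callable[[bytes], str],
                  redact: Callable[[str], str],
                  infer: Callable[[str], str],
                  scan_output: Callable[[str], str]) -> Callable[[bytes], str]:
    """Compose the stages in the order the architecture prescribes."""
    def run(document: bytes) -> str:
        text = extract(document)      # Document AI stand-in
        clean = redact(text)          # Cloud DLP on the uploaded content
        answer = infer(clean)         # Vertex AI stand-in
        return scan_output(answer)    # Cloud DLP on the model output
    return run

# Stubbed usage: trivial extraction, tag-style redaction, echo inference.
pipeline = make_pipeline(
    extract=lambda doc: doc.decode(),
    redact=lambda t: t.replace("555-0100", "[PHONE]"),
    infer=lambda t: f"summary: {t}",
    scan_output=lambda t: t,
)
print(pipeline(b"call 555-0100"))  # -> summary: call [PHONE]
```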

Our core question: is this architecture sufficient to protect client data at scale, and are there any known vulnerabilities in AI systems we should be addressing? 

Key questions:

  1. Is our proposed stack the right foundation for a secure B2B AI product, and are there any obvious gaps?
  2. How do we enforce tenant isolation, so that one client's data can never be read or accessed by another?
  3. Does the stack monitor and flag sensitive data in both uploaded documents and AI-generated responses?
  4. What happens on Google's side if there is a security breach, and what support do we receive?
  5. What steps should we take to meet compliance requirements, including signing a BAA?
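To make question 2 concrete, this is the kind of per-tenant key scoping we have in mind (a sketch only; `tenant_object_key` and `is_authorized` are our own hypothetical helpers, not a Google API — in practice this would sit behind Cloud Storage IAM):

```python
# Sketch: derive per-tenant object prefixes so a request scoped to one tenant
# can never address another tenant's objects, independent of IAM policy.
def tenant_object_key(tenant_id: str, doc_name: str) -> str:
    """Build a storage key under a fixed per-tenant prefix."""
    if "/" in tenant_id or ".." in doc_name:
        raise ValueError("invalid tenant id or document name")
    return f"tenants/{tenant_id}/docs/{doc_name}"

def is_authorized(requesting_tenant: str, object_key: str) -> bool:
    """Allow access only to keys under the requester's own prefix."""
    return object_key.startswith(f"tenants/{requesting_tenant}/")

key = tenant_object_key("acme", "contract.pdf")
print(key)                          # tenants/acme/docs/contract.pdf
print(is_authorized("acme", key))   # True
print(is_authorized("globex", key)) # False
```

The trailing slash in the prefix check matters: it prevents tenant "acme" from matching keys belonging to a hypothetical tenant "acme2".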

1 reply

dnehoda
  • Staff
  • April 20, 2026

This particular forum is related to Google Security Operations. Was this question intended for this forum?