<?xml version="1.0"?>
<rss version="2.0">
    
                    <channel>
        <title>Join the conversation</title>
        <link>https://security.googlecloudcommunity.com</link>
        <description>On the Forum you can ask questions or take part in discussions.</description>
                <item>
            <title>share.google returns HTTP 200 for HEAD but 301 for GET on the same URL, violating RFC 9110 Section 9.3.2</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/share-google-returns-http-200-for-head-but-301-for-get-on-the-same-url-violating-rfc-9110-section-9-3-2-7398</link>
            <description>Hi, I want to report some annoying behavior of share.google, in the hope that somebody from the Google dev team can pick this up and fix this security gap.

Right now a link to https://share[.]google/EnyBDZiv3ksnQT9xZ leads to https://www.google.com/share[.]google?q=EnyBDZiv3ksnQT9xZ via a 301 redirect for both HEAD &amp;amp; GET. But from that point, https://www.google.com/share[.]google?q=EnyBDZiv3ksnQT9xZ replies with a 302 redirect only on GET, and not on HEAD. That is a violation of RFC 9110 Section 9.3.2 and it breaks automated systems that need to safely verify the destination link, which matters because share.google links are now, in 99% of cases, used as an obscurity platform to spread phishing, fraud, malware and other bad things.
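For anyone who wants to reproduce the mismatch, here is a minimal sketch using the third-party requests library. The URL is the defanged example from this post and would need to be re-fanged before testing; everything else is generic.

    import requests

    # Defanged URL from the post; replace "[.]" with "." before running.
    url = "https://share[.]google/EnyBDZiv3ksnQT9xZ"

    # Compare the two methods without following redirects, so the raw
    # status code of each hop is visible.
    head = requests.head(url, allow_redirects=False, timeout=10)
    get = requests.get(url, allow_redirects=False, timeout=10)

    print("HEAD:", head.status_code)
    print("GET: ", get.status_code)

    # Per RFC 9110 Section 9.3.2, HEAD should mirror the metadata that GET
    # would return, so a mismatch here is the violation described above.
    if head.status_code != get.status_code:
        print("Mismatch between HEAD and GET on this URL")
</description>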
            <category>Google Threat Intelligence</category>
            <pubDate>Tue, 28 Apr 2026 01:07:26 +0200</pubDate>
        </item>
                <item>
            <title>How to Lock Down Exchange Online Direct Send and Prevent Gateway Bypass</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/how-to-lock-down-exchange-online-direct-send-and-prevent-gateway-bypass-6-months-ago-7368</link>
            <description>What is the specific PowerShell command used to prevent external entities from sending emails to your tenant using your own domain names?

Why is it necessary to enable &quot;Enhanced Filtering for Connectors&quot; when you have already locked down your tenant to only accept mail from a third-party gateway?

If an organization enables RejectDirectSend $true but forgets to update their SPF record to include their legitimate third-party sending services, what is the likely impact on outbound mail delivery?
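On the SPF question, a quick way to sanity-check a record before tightening Direct Send is to pull it over DNS. A minimal sketch, assuming the third-party dnspython package; the domain and the gateway include mechanism are placeholders.

    import dns.resolver  # third-party package: dnspython

    def get_spf(domain: str) -> str:
        # SPF is published as a TXT record beginning with "v=spf1".
        for rdata in dns.resolver.resolve(domain, "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("v=spf1"):
                return txt
        return ""

    spf = get_spf("example.com")  # placeholder domain
    print(spf)
    # Confirm the legitimate third-party gateway is listed before enabling
    # RejectDirectSend, e.g. via an "include:" mechanism for that service.
    print("has include mechanism:", "include:" in spf)
</description>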
            <category>Google Security Operations</category>
            <pubDate>Tue, 28 Apr 2026 01:04:15 +0200</pubDate>
        </item>
                <item>
            <title>Google Chronicle rule quota limitation</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/google-chronicle-rule-quota-limitation-7364</link>
            <description>Does anyone have any idea what this is in Chronicle? Can someone explain it in detail?</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 28 Apr 2026 00:53:32 +0200</pubDate>
        </item>
                <item>
            <title>Scaling Detection-as-Code with Google SecOps: An MSSP’s Perspective</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/scaling-detection-as-code-with-google-secops-an-mssp-s-perspective-3999</link>
            <description>Atticus Lin is a Cloud Security Manager at Arctiq and has been building detection rules, parsers, and automations in Google SecOps for the last three years. He serves as the technical lead of SecOps Onboarding and SecOps Detection Engineering.

Reid Hurlburt is a member of the automation team at Arctiq, specializing in Security Orchestration, Automation &amp;amp; Response (SOAR) with Google SecOps. The automation team regularly develops automated playbooks which aid and/or augment the standard operating procedures of SOC analysts.

Arctiq’s Managed Extended Detection and Response (MXDR) service delivers 24/7, year-round vigilance, using Google Security Operations (SecOps) to detect, investigate and respond to threats. Our fully managed service automates security detection and response to safeguard IT infrastructure, systems, data, and more. As a Google Premier Services Partner, Arctiq has enhanced levels of enablement and partnership with Google Cloud Security to bring our customers a best-in-breed level of platform expertise and service.

At Arctiq, we were faced with an interesting problem as we experienced hypergrowth in the security operations space: “How do we manage content (e.g. detection rules, data tables, and rule exclusions) across multiple customer environments in a scalable, consistent, version-controlled, and automated way?” Enter Detection-as-Code (DAC) and our efforts in GitHub to standardize Detection Engineering at Arctiq.

Evolving in a Multi-Tenant World

Early on in our Google SecOps journey, our Detection Engineering processes involved manually logging in to customer tenants to create and maintain rules. As we onboarded more customers, a list of pain points quickly came to light:

- The deployment, testing, and tuning of our rules library across our fleet of Google SecOps tenants was tedious and time-consuming. For example, our workflow for creating a new rule in a customer’s Google SecOps tenant would be to navigate to the tenant, authenticate, create the rule, initiate a test against the last two weeks of data, analyze the results of the test once it completed, tune the rule accordingly, then set the rule to alert.
- Our processes were slowed by manually checking (and double-checking) to ensure that no sensitive customer data, such as production subnet ranges, admin/user group names, comments, or reference list entries, left a tenant.
- Tracking modifications to rules, both on a per-customer basis and in our master library of rules, was difficult.
- Answering the question of “Who changed this rule and why?” became difficult as our team grew and as leadership at various levels of technical knowledge got involved with our processes.
- Review and collaboration around rule development and changes required setting up meetings and screen-sharing sessions for our remote-first team spread across North America.

The Path Forward: Embracing Detection-as-Code

After coming across a blog post written by David French about implementing Detection-as-Code with Google SecOps, we immediately recognized our north star for scalability. Detection-as-Code leverages software engineering principles to manage and deploy detection rules as code, offering significant benefits including:

- Scalability across multiple environments: Our engineers can create/modify a rule once in GitHub and have the changes deployed to multiple Google SecOps tenants.
- Consistency in rule deployment: Using GitHub as the single source of truth for our detection content makes it easy to test and deploy rules to protect our customers.
- Version control for tracking changes to content: The process to revert to an earlier version of a rule or understand the reasons for changes to a rule is straightforward.
- Automation of deployment and tuning processes: Once an engineer’s proposed changes are tested, reviewed, and approved, the changes are automatically deployed to customers.
- Enhanced collaboration among security teams: The unique experience and skill sets of individuals on the team ensure that we build the best rules possible.
- Increased efficiency by reducing manual tasks: Proposed changes to detection content are tested automatically prior to deploying changes to Google SecOps.
- Improved accountability through clear audit trails: Changes to our detection content are tracked in GitHub, and the associated artifacts (e.g. GitHub issues and pull requests) record who changed what, when, and why.

This approach modernizes Detection Engineering practices and has allowed us to build more robust, adaptable, and consistently applied security measures. We knew that in order to grow with our customer base we would need to invest time and resources into automation and the latest methodologies.

Custom Tool Development for MSSP Scale

Google Cloud Security’s Content Manager tool was awesome to use on a tenant-by-tenant basis, and served us well in the beginning. We saw an opportunity to customize the tooling based on our own requirements: we needed to manage content in our customers’ Google SecOps tenants at scale and concurrently. We decided to fork the GitHub repository and wrap the CLI with additional functionality.

To develop our own Detection-as-Code pipeline for Google SecOps, we wrote a custom Python-based CLI that uses a subset of the methods from the Content Manager provided by Google. Changes to the code that interacts with the Google SecOps API were minimal, which meant that we could focus on other crucial tasks such as secure storage, the management of credentials &amp;amp; secrets, and our detection logic. By designating each tenant to its own directory, we’re able to independently manage secrets, rules, and reference lists for each customer.

We were able to distribute our custom CLI to our team’s Detection Engineers quickly, allowing them to update multiple Google SecOps tenants with new/edited rules with a single command – greatly increasing efficiency. The following command shows how we can push rule updates to multiple Google SecOps tenants by specifying the customer IDs.

(venv) ➜ automation git:(main) python3 rules-cli.py push --rules c2_beaconing_dns_tcp_udp.yaral --tenants clientX clientY clientZ
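To make the shape of such a tool concrete, here is a minimal sketch of a wrapper CLI with the same command surface. It is purely illustrative: rules-cli.py is Arctiq’s internal tool, and the push_rule function and per-tenant directory layout below are assumptions, not their actual implementation.

    import argparse
    from pathlib import Path

    def push_rule(tenant: str, rule_path: Path) -> None:
        # Hypothetical deploy step. In a real pipeline this would call the
        # Google SecOps API with credentials loaded from the tenant's own
        # directory, which is the per-tenant isolation described above.
        creds = Path("tenants") / tenant / "credentials.json"
        print(f"pushing {rule_path.name} to {tenant} using {creds}")

    def main() -> None:
        parser = argparse.ArgumentParser(description="Push rules to SecOps tenants")
        sub = parser.add_subparsers(dest="command", required=True)
        push = sub.add_parser("push", help="deploy one or more rules")
        push.add_argument("--rules", nargs="+", required=True)
        push.add_argument("--tenants", nargs="+", required=True)
        args = parser.parse_args()

        if args.command == "push":
            for tenant in args.tenants:
                for rule in args.rules:
                    push_rule(tenant, Path(rule))

    if __name__ == "__main__":
        main()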
Automating Our Detection Pipeline

Once our custom CLI was developed and working consistently, our next step was to introduce further automation via GitHub Actions. We wanted our Detection Engineering process to follow an expedited yet intuitive workflow:

1. A Detection Engineer creates a pull request in GitHub containing a new rule, and specifies one or more specific customer environments for deployment.
2. A test of the rule logic is automatically kicked off against the last two weeks of data in the customer’s Google SecOps tenant(s), which returns an overview of the results.
3. A SOC analyst and a fellow engineer review, request modifications if needed, then approve the new rule. The goal at this stage is to reduce the risk of pushing changes to customers that result in adverse effects such as false positives or, worse, false negatives. The team collaborates around proposed changes with the objective of building the most efficient and precise detections, leveraging the unique experience that we have on the team.
4. The rule is automatically deployed to the target customer tenants once the pull request has been merged, and its state is set to “Alerting.”

Let’s take a look at the above workflow in action! In the screenshot below, user “atticus-arctiq” has created a detection rule for CrowdStrike and has included the detection logic in a file for review in a new pull request on GitHub.

A GitHub Actions workflow is executed immediately after the pull request is created. In the example below, we can see that the “github-actions” bot executed our CLI tool to test the rule over the events ingested in Google SecOps. The bot leaves a comment on the pull request once the workflow has completed. Atticus can review any detections that were generated by the rule during the testing and tweak the detection logic appropriately.

After a couple of iterations via commits, a peer review of the pull request is submitted. Once the pull request is approved, the proposed changes are merged into the “main” branch of the GitHub repository. This kicks off another GitHub Actions workflow to push the changes to the specific Google SecOps tenants.

The Payoff

Arctiq’s Detection Engineering capabilities are now significantly more efficient since implementing our Detection-as-Code pipeline and updated processes. We estimate that our engineers are 15-25% more efficient across the entire lifecycle of a rule. This increased efficiency comes from the time they save during automated testing, parallelized collaboration, and streamlined deployment. We have also found that our own team’s ad hoc audits of rules have become far less frequent, and when they do happen, the version history native to GitHub has resulted in an expedited process.

These efficiency gains allow us to provide greater detection coverage across our customers. Our team has more time for research and the development of new rules to detect emerging threats and more advanced attacker tradecraft. Our automated testing practices ensure greater accuracy and fidelity of our rules, which improves our customers’ security posture and confidence. As we look to adopt additional AI advancements into our processes, this accuracy and process control will be crucial to operating with confidence at the speed of business.

What’s Next?

With our initial Detection-as-Code pipeline now active and an integral part of our workflow, we are turning our attention to automating content creation and refining our detection rules. We plan to leverage the newest capabilities of MCP servers and LLMs for this purpose. Our existing GitHub integration provides robust version control and facilitates collaboration, enabling a streamlined &quot;trust-but-verify&quot; approach. This allows Gemini to effectively function as a collaborative team member whom we can trust to do security research and content creation, but in a way we can validate and have confidence in.

What we demonstrated in this post is just one of the many ways you can leverage the Google SecOps APIs to develop custom tooling that fits the workflow of your operations team.
The team at Google Cloud Security is continuously developing new open source tools, such as the new Google SecOps SDK, that can be leveraged in this same way – enabling many possibilities for automation and process improvements within your organization.

Acknowledgement

A special thanks to David French at Google Cloud Security for his valuable insight and feedback on this blog, as well as for his thought leadership on top of which we were able to build this pipeline. Thank you also to Eugene Dimarsky, Google Cloud Security Partner Engineer, for his partnership and advice as we embarked on this journey.</description>
            <category>Community Blog</category>
            <pubDate>Tue, 28 Apr 2026 00:43:06 +0200</pubDate>
        </item>
                <item>
            <title>Empowering Fraud Analysts with Forensic Intelligence from Login to Transactions</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/empowering-fraud-analysts-with-forensic-intelligence-from-login-to-transactions-7122</link>
            <description>Author: Saurabh Bhasin, Senior Product Manager

Enterprises are navigating the shift from scripted, volumetric attacks to sophisticated, globally coordinated fraud rings that target the entire customer journey. As AI-assisted tactics enable fraudsters to mimic human behavior with ease, security analysts require more actionable fraud intelligence to granularly segment legitimate users or agents from untrusted ones, and confidently mitigate attacks without introducing unnecessary user friction.

As the industry’s most deployed trust platform, reCAPTCHA secures the entire customer journey, from the initial interaction and account creation to the protection of downstream transactions. Today we are deepening that foundation by providing analysts with forensic depth and actionable intelligence via the following features:

- Account Takeover (ATO) Analytics (Public Preview): Credential abuse is one of the leading attack vectors that leads to account takeovers. The growing ATO problem manifests as loss of business reputation, attrition of users, and increased risk of chargebacks. To help detect ATOs, we are launching a new feature that is 4 times better at detecting account takeover attempts than a score designed to detect bots. This new score is supported by new explainability reasons that provide new forensic insights.
- Transaction Defense API (General Availability): By eliminating the requirement for client-side JavaScript, the Transaction Defense API extends coverage to mobile and agentic commerce. The explainability reasons and expanded use-case support enable businesses to transition seamlessly from web to mobile and agentic environments, empowering analysts to perform forensic deep-dives into card testing attacks and chargeback risks.
- Attack Investigation (Public Preview): reCAPTCHA has long provided analysts with robust logging and dashboards to help understand and visualize risky activity on their sites. We are deepening this further by combining millions of data points into an &#039;Attack&#039; view that allows analysts to easily spot correlated attack campaigns instead of individual logs.

We will now look at these features in detail, beginning with how reCAPTCHA Account defense secures the sign-up and login flows.

As AI-assisted tactics enable fraudsters to target signup and login flows with increasing sophistication, security analysts require more actionable intelligence to defend against account takeovers. reCAPTCHA Account defense already provides the forensic depth needed to identify suspicious logins; today, we are deepening this protection with dedicated ATO Analytics. By leveraging machine learning models specifically tuned for identity signals and behavioral anomalies, the new ATO score is 400% better at detecting account takeover attempts than a score designed to detect bots. This is supported by new explainability reasons that provide insights into reputational history and association with large clusters of made-for-abuse accounts.

Even successful ATO detection requires downstream protection, and customers are looking for a solution that helps them with the end-to-end customer journey. A compromised account or agent that goes undetected inevitably targets the downstream user journey: fraudulent transactions using stolen credit cards. reCAPTCHA protects transactions on the web and now, with the Transaction Defense API, it also secures mobile and agentic commerce without the need for client-side JavaScript. This allows analysts to defend against chargebacks and promotional fraud in &#039;human-not-present&#039; scenarios, ensuring that as your business grows into new agentic channels, you can continue to assess carding attacks and chargeback risks.
Detection and mitigation alone are not enough. Analysts must be equipped with forensic depth to deconstruct attacks and extract the intelligence necessary to proactively prevent future recurrences. An overwhelming volume of logs makes it difficult to extract intelligence and identify coordinated attack campaigns within the noise. The new Attack Investigation dashboard now aggregates millions of data points into “Attacks” so that analysts can visualize a campaign and then drill down into an incident.

Finally, to fully unlock this forensic potential and ensure that data remains under your direct control, reCAPTCHA is transitioning to a Data Processor model for all customers. This shift gives you direct sovereignty over user data and simplifies global compliance, ensuring that your analysts have the data they need to protect your business while you maintain full control over the purpose and means of its processing. You can read more about this change in our blog and Master Service Announcement.

Ready to put these innovations to work?

- New Customers: Create an account today to start building a more secure experience with our latest fraud defense tools.
- Existing Customers: Log into the reCAPTCHA console to explore the new features.
- Connect with us: Join us at the RSA Conference to see these features in action. You’re also invited to join us at Cloud Next for new feature announcements.
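For developers wiring any of this up, the basic unit is still a reCAPTCHA Enterprise assessment. Below is a minimal sketch using the official Python client; the project ID, site key, and token are placeholders, and the newer ATO and Transaction Defense signals announced above may surface additional request and response fields that this generic sketch does not model.

    from google.cloud import recaptchaenterprise_v1

    def create_assessment(project_id: str, site_key: str, token: str):
        # Placeholder values throughout; the token comes from the client side.
        client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()

        event = recaptchaenterprise_v1.Event(token=token, site_key=site_key)
        assessment = recaptchaenterprise_v1.Assessment(event=event)
        request = recaptchaenterprise_v1.CreateAssessmentRequest(
            parent=f"projects/{project_id}",
            assessment=assessment,
        )

        response = client.create_assessment(request)
        # The risk analysis carries the score plus the explainability reasons
        # that the features above build on.
        print("score:", response.risk_analysis.score)
        for reason in response.risk_analysis.reasons:
            print("reason:", reason)
        return response

    create_assessment("my-project", "my-site-key", "token-from-client")
</description>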
            <category>Community Blog</category>
            <pubDate>Mon, 27 Apr 2026 22:18:52 +0200</pubDate>
        </item>
                <item>
            <title>EP274 AI, Zero Trust and Secure by Design Walk into a Bar.</title>
            <link>https://security.googlecloudcommunity.com/podcasts-43/ep274-ai-zero-trust-and-secure-by-design-walk-into-a-bar-7396</link>
            <description>Guest: Grant Dasher, ex-CISA, ex-Google, Distinguished Engineer, Google (again)

Subscribe at YouTube | Subscribe at Spotify | Subscribe at Apple Podcasts

Topics covered:

- Why is the &quot;Secure-by-Design&quot; movement gaining so much momentum now, and is it a response to the failure of &quot;bolted-on&quot; security, or just a natural evolution of cloud maturity?
- In a future Secure-by-Design world, is identity the only perimeter that actually matters anymore? Or is this a cliche?
- As we move toward a world of autonomous agents, how does our approach to machine identity need to change? Are we just talking about more complex Service Accounts, or do we need a fundamental shift in how we authorize &quot;intent&quot;?
- What is your advice to people who want to move fast and cannot wait for Secure by Design / Default AI to be decided by consensus or an IETF, NIST or OASIS committee?
- We love the argument that modern AI agents are effectively repeating the mistakes of 1960s payphones - mixing the data plane and the control plane. What is your rebuttal? How do we build &quot;Agentic Security&quot; that doesn&#039;t fall for 60-year-old traps?
- Customers are torn between their Zero Trust implementations and their AI adoption. Is Zero Trust now &quot;legacy,&quot; or is it the prerequisite for everything we’re trying to do with AI agents?
- Is there Zero Trust for AI? Is this a fake buzzword or technical reality?</description>
            <category>Podcasts</category>
            <pubDate>Mon, 27 Apr 2026 21:32:46 +0200</pubDate>
        </item>
                <item>
            <title>Suspended Google Cloud Project — no response 2 weeks later — Urgent</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/suspended-google-cloud-project-no-response-2-weeks-later-urgent-7361</link>
            <description>Hi, my Google Cloud/Firebase project was suspended for suspected key misuse (account_hijacked). I submitted an appeal, but after 9 days I still have no response.

This is now a production outage: about 1200 paying users cannot log in, and trust in my application is dropping rapidly. The business impact is nothing short of a disaster, and the students who use the application to study for exams are panicking and asking for refunds.

What I already did:

- Removed exposed credentials from files/repos
- Rewrote Git history and force-pushed the cleaned history
- Removed old test/dev files with hardcoded keys
- Tightened secret handling to prevent re-commits

What I am ready to do immediately when access is restored:

- Rotate/revoke all affected keys
- Apply strict key restrictions (referrer/IP/API allowlist)
- Move secrets to Secret Manager
- Review IAM, billing, and audit logs

I’m currently blocked because the suspended status prevents console access. Can anyone advise:

- how to confirm the appeal is active and not missing info, and
- whether I should only follow up in the same appeal thread, or if that will delay everything further?
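For the &quot;move secrets to Secret Manager&quot; step, here is a minimal sketch with the official Python client, for once access is restored. Assumptions: the google-cloud-secret-manager package is installed, and the project and secret IDs are placeholders.

    from google.cloud import secretmanager

    def rotate_into_secret_manager(project_id: str, secret_id: str, new_key: str) -> str:
        # Placeholder names throughout; create once, then add a version per rotation.
        client = secretmanager.SecretManagerServiceClient()
        parent = f"projects/{project_id}"

        secret = client.create_secret(
            request={
                "parent": parent,
                "secret_id": secret_id,
                "secret": {"replication": {"automatic": {}}},
            }
        )
        client.add_secret_version(
            request={"parent": secret.name, "payload": {"data": new_key.encode()}}
        )

        # Application code then reads the latest version at runtime instead of
        # shipping a hardcoded key in the client.
        version = client.access_secret_version(
            request={"name": f"{secret.name}/versions/latest"}
        )
        return version.payload.data.decode()

    rotate_into_secret_manager("my-project", "api-key", "NEW-ROTATED-KEY")  # placeholders
</description>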
            <category>Google Security Operations</category>
            <pubDate>Mon, 27 Apr 2026 20:15:08 +0200</pubDate>
        </item>
                <item>
            <title>Limitations on Collecting Intune Logs via Graph API?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/limitations-on-collecting-intune-logs-via-graph-api-7378</link>
            <description>Hi everyone,

I’m currently working on integrating Microsoft Intune logs into Google Cloud (via the Microsoft Graph API). From what I’ve observed so far, it seems that only audit logs are accessible through this integration.

I wanted to ask:

- Is there a limitation with the Graph API that restricts collection to only audit logs?
- Are operational logs and compliance logs available through any other endpoint or method?
- If not, are there any recommended alternatives or workarounds to ingest these logs?

Would appreciate any insights or guidance from the community.
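For reference, Intune audit events are what Graph exposes at deviceManagement/auditEvents. A minimal sketch of pulling them, assuming the msal and requests packages, a client-credentials app registration with the appropriate Intune Graph permissions, and placeholder tenant/app values:

    import msal
    import requests

    TENANT_ID = "your-tenant-id"      # placeholder
    CLIENT_ID = "your-app-id"         # placeholder
    CLIENT_SECRET = "your-secret"     # placeholder

    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

    # Audit events are the log type this endpoint serves; operational and
    # compliance logs are not exposed here, matching the limitation above.
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/deviceManagement/auditEvents",
        headers={"Authorization": "Bearer " + token["access_token"]},
        timeout=30,
    )
    for event in resp.json().get("value", []):
        print(event.get("activityDateTime"), event.get("displayName"))
</description>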
            <category>Google Security Operations</category>
            <pubDate>Mon, 27 Apr 2026 18:02:19 +0200</pubDate>
        </item>
                <item>
            <title>Google SecOps: Q1, 2026 Feature Roundup</title>
            <link>https://security.googlecloudcommunity.com/news-announcements-9/google-secops-q1-2026-feature-roundup-7381</link>
            <description>Welcome to our quarterly look at the latest innovations within Google Security Operations. In Q1 2026, we focused on enhancing AI-driven automation, expanding our global footprint, and providing granular control over data management and compliance.

The Agentic SOC

AI remains a core strategic pillar, with several key updates reaching preview status to help analysts respond faster and more effectively.

Agentic Automation (Public Preview): This enhancement facilitates the integration of AI-driven capabilities into both new and existing playbooks by merging AI agent steps with deterministic automation. This hybrid approach ensures analysts remain in control of critical actions while progressively adopting advanced AI. With this release, organizations can utilize the Triage and Investigation Agent, leveraging its outputs—such as verdict and confidence levels—within subsequent playbook steps to automate decision-making, remediation workflows, or alert closures. Learn more

SecOps Labs for Enterprise (Public Preview, Enterprise and Enterprise+): This dedicated sandbox allows for early testing of features. Run Google SecOps Gemini and other intelligence experiments without disrupting your existing production systems—and benefit from their output. Learn more

SecOps OneMCP (Public Preview): SecOps OneMCP enables 1P and 3P AI agents to interact with Google SecOps to seamlessly orchestrate enterprise defense. It allows agents to perform actions like listing cases, retrieving UDM events, and managing detection rules. Learn more

Emerging Threat Center (GA): The Emerging Threat Center in SecOps helps customers immediately determine if their environment is impacted by new critical intelligence published by GTI, transforming the starting point for threat hunting workflows into a proactive, curated journey. As new campaigns are published, Gemini processes the reports, determines detection coverage, and suggests new rules to add to Curated Detections. With GA, customers can expect expanded feed filtering, MITRE ATT&amp;amp;CK Matrix visualization, enhanced Entity Context Panels, and improved GTI IoC Categories. Learn more | Blog

Triage and Investigation Agent (TINA) (GA): TINA enables SecOps users to respond to alerts faster by providing a disposition of True Positive or False Positive, backed by a summary of the alert and a step-by-step explanation of the autonomous investigation it performs using best practices from Mandiant, within an average of 60-70 seconds. With GA, customers will see updates to the UI to increase usability, improved tooling and SecOps integrations, along with enhanced administration controls. Learn more

Compliance &amp;amp; Data Sovereignty

As high-compliance teams face increasing regulatory pressure, we are providing more robust tools to protect sensitive data.

EKM with CMEK Support (GA): Google SecOps is now &quot;Cloud EKM ready,&quot; allowing customers to hold their own encryption keys. The system is designed to handle high-compliance requirements without sacrificing resilience, ensuring sensitive data remains protected even if external key connections drop. Learn more

Data Management and Enterprise Readiness

Efficiency and autonomy in data lifecycle management are critical for enterprise-scale operations.
GA Launch of v2 Feeds: v2 feeds now use Storage Transfer Service (STS), which accelerates the ingestion of large volumes of data from object and file storage systems like Amazon S3 and Azure Blob Storage into Google SecOps. Learn more

Self-Serviced Tenant Wipeout (GA): Customers now have full autonomy to initiate the deprovisioning of their SecOps tenants. This process includes a secure &quot;Soft Delete&quot; with a 12-day grace period before a permanent &quot;Hard Delete&quot; occurs. Learn more

Unified Feature RBAC (GA): This launch consolidates access management by transitioning legacy SOAR Permission Groups into Google Cloud IAM, providing a single pane of glass for feature-level control across Google SecOps. Learn more

Data Ingestion Burst Limits: We have updated documentation to clarify operational &quot;speed limits&quot; based on a customer&#039;s purchased annual volume. Learn more

Intel-Led Proactive Security Outcomes

Our detection engineering updates focus on visibility and granular control over rule execution.

Rule Observability Updates (GA): New metadata is now attached to all detection and alert objects, helping analysts understand if an alert was caused by a primary rule run or a &quot;rule replay&quot;. Learn more

Unified Rule Management (Open Private Preview): This update provides a single dashboard to browse and manage both custom and curated rules. Analysts can now view curated YARA-L text, toggle individual rule statuses, and perform advanced searches by MITRE techniques. Learn more

Global Regional Expansion

We continue to expand our global availability to meet residency commitments.

South Africa, Indonesia, South Korea and Taiwan Launch: Google SecOps is now live in 18 regions worldwide with the addition of South Africa, Indonesia, South Korea and Taiwan, supporting growth opportunities and local compliance needs in these markets. Learn more

Enhanced Data Management Capabilities

We’ve introduced features to accelerate data onboarding and improve logging visibility.

Direct Ingestion for Model Armor Logs (GA): Organizations can now ingest logs from Google Cloud Model Armor to secure the &quot;AI-human&quot; interface, monitoring for prompt injection and sensitive data leakage. Learn more

Auto Extraction (GA): This feature allows users to instantly use structured log data (JSON and XML) in search and rules without waiting for a prebuilt parser. Learn more

Share SOAR Logs to Cloud Logging (GA): Enabled by default in version 6.3.71, this provides visibility by sharing SOAR logs directly into a customer&#039;s Google Cloud Logging project. Learn more

Content Hub &amp;amp; Documentation

Playbooks &amp;amp; Blocks Tab (Public Preview): This new tab provides a centralized library of expert-curated, ready-to-use response workflows and reusable blocks. Customers can now discover, preview, and deploy high-quality automation in seconds—significantly accelerating incident response and ensuring operational consistency.

Unified Search (Public Preview): A single, powerful search interface that allows you to discover Content Packs, Detection Rules, Response Integrations, and Dashboards simultaneously. No more jumping between tabs to find related assets; now you can surface everything you need to investigate or deploy in a single query.

Unified Sourcing (Public Preview): Aimed at accelerating contribution efforts, this update standardizes content labeling across the Content Hub.
By clearly distinguishing between Google-authored, Partner, and Community-driven assets, analysts gain immediate visibility into the origin and official support status of the content they deploy.

Response Integration Rollback (Public Preview): Practitioners can now revert to previous snapshots of commercial response integrations, encouraging the adoption of new functionalities with mitigated risk. Learn more

Data Export Documentation Revamp: We consolidated documentation to improve navigation and user-friendliness for data export journeys. Learn more

YARA-L Documentation Enhancements: We&#039;ve completely overhauled our YARA-L and Data Export documentation to make it more intuitive, user-journey focused, and easier to navigate. Learn more</description>
            <category>News &amp; Announcements</category>
            <pubDate>Mon, 27 Apr 2026 15:55:46 +0200</pubDate>
        </item>
                <item>
            <title>Attach Block via quick action</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/attach-block-via-quick-action-7350</link>
            <description>Is there any action where we can attach a block to a case/alert? In the IDE I can see an action to attach a playbook, but not a block.</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 27 Apr 2026 15:39:37 +0200</pubDate>
        </item>
                <item>
            <title>Can Microsoft Intune logs via Graph API only collect audit logs?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/can-microsoft-intune-logs-via-grpah-api-only-collect-audit-logs-7377</link>
            <description>Hi Team,

I wanted to confirm: if we integrate Microsoft Intune logs via the Graph API, will we only retrieve audit logs? Is there a limitation such that we cannot collect other types of Intune logs, such as operational and compliance logs?</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 27 Apr 2026 13:14:38 +0200</pubDate>
        </item>
                <item>
            <title>Urgent: GCP Project Suspended for Resource Hijacking - Unable to Access IAM to Rotate Leaked Keys</title>
            <link>https://security.googlecloudcommunity.com/security-command-center-4/urgent-gcp-project-suspended-for-resource-hijacking-unable-to-access-iam-to-rotate-leaked-keys-7343</link>
            <description>Hello everyone,

I am seeking urgent guidance regarding a GCP project suspension. My account was recently suspended, and I received an email stating that the project was engaged in abusive activity consistent with &quot;hijacked resources.&quot;

The Situation:

- Access Denied: My production application is currently offline. Whenever I attempt to access the IAM &amp;amp; Admin or APIs &amp;amp; Services dashboard to investigate, I am automatically redirected to the suspension warning page.
- Unknown Leak: I have audited my frontend/backend/app environment variables (.env) but haven&#039;t found any obvious exposures.
- Account Lockout: Because I cannot access the IAM dashboard or Cloud Logging, I am unable to identify which credential is being abused or delete the compromised keys.
- Appeal Status: I submitted an appeal over a week ago, but I have not received a response, and my production app remains affected.

My Questions:

- Is there a way to access Cloud Logging or Security Command Center via the SDK or a restricted console view while the project is suspended, to identify the source of the abuse (e.g., specific IP addresses or hijacked keys)?
- Can I programmatically revoke all existing API keys via gcloud or a similar tool if the web console is locked?
- Are there specific channels to escalate an appeal when the suspension is caused by a hijacked resource rather than a policy violation?

Any advice on how to regain enough access to rotate my credentials and secure the project would be greatly appreciated.
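On the programmatic-revocation question, the API Keys API does have a Python client. A minimal sketch, assuming the google-cloud-api-keys package and a placeholder project; whether these calls are even permitted while a project is suspended is exactly the open question here, so treat this as untested in that state.

    from google.cloud import api_keys_v2

    def revoke_all_api_keys(project_id: str) -> None:
        # Placeholder project ID; requires credentials with API key admin rights.
        client = api_keys_v2.ApiKeysClient()
        parent = f"projects/{project_id}/locations/global"

        # Enumerate every key in the project and delete it; rotation then means
        # issuing fresh keys with strict restrictions afterwards.
        for key in client.list_keys(parent=parent):
            print("deleting", key.name, "-", key.display_name)
            client.delete_key(name=key.name)

    revoke_all_api_keys("my-suspended-project")  # placeholder
</description>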
            <category>Security Command Center</category>
            <pubDate>Mon, 27 Apr 2026 11:26:12 +0200</pubDate>
        </item>
                <item>
            <title>Can I publish my SecOps SIEM custom integration in Content Hub?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/can-i-publish-my-secops-siem-custom-integration-in-content-hub-7359</link>
            <description>I have a few questions about the submission process for a SecOps SIEM custom integration:

- Can I publish my SecOps SIEM custom integration in Content Hub?
- It contains these components:
  - Ingestion Script
  - Dashboard
  - Detection Rules
  - Search Queries
- Is there any public repository to submit these components, or any of them, as part of Content Hub?</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 27 Apr 2026 11:01:27 +0200</pubDate>
        </item>
                <item>
            <title>Urgent – Google Cloud Project Suspended (ACCOUNT_HIJACKED) – Business Impact</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/urgent-google-cloud-project-suspended-account-hijacked-business-impact-7326</link>
            <description>Hello,

My Google Cloud project has been suspended due to a security issue (ACCOUNT_HIJACKED). I fully understand the situation and I am ready to take all necessary actions to secure the project. However, I currently do not have access to the project anymore, which prevents me from revoking the compromised API keys or applying the required fixes. As a precaution, I have already stopped billing to prevent any further charges.

At the moment, my application is completely down, which is having a severe impact on my business. My clients cannot access the service, and I am receiving increasing complaints and refund requests.

I already have an open support case: 70327888

This situation is becoming critical, and I would greatly appreciate any help to restore access, or guidance on how I can proceed to secure the project.

Thank you very much in advance for your support.

Vincent</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 27 Apr 2026 10:02:41 +0200</pubDate>
        </item>
                <item>
            <title>Securing the Agentic Era: Governance and Trust in the New Gemini Enterprise Agent Platform</title>
            <link>https://security.googlecloudcommunity.com/security-command-center-4/securing-the-agentic-era-governance-and-trust-in-the-new-gemini-enterprise-agent-platform-7375</link>
            <description>Title: Securing the Agentic Era: Governance and Trust in the New Gemini Enterprise Agent Platform
Author: Tyler Dooskin
Date: April 26, 2026

Introduction

Following the major announcements at Google Cloud Next &#039;26 in Las Vegas this week, the shift from experimental AI to the &quot;agentic enterprise&quot; is officially underway. The headline news is the consolidation and evolution of Vertex AI into the Gemini Enterprise Agent Platform [1]. As organizations deploy autonomous agents capable of managing complex, multi-day workflows, the conversation immediately turns to trust. How do we secure systems that not only retrieve data but act upon it?

Here is a breakdown of the new governance, security, and compliance controls introduced for the Gemini Enterprise Agent Platform and what they mean for your security posture.

From Vertex AI Governance to Gemini Enterprise Control

The evolution from Vertex AI to the Gemini Enterprise Agent Platform isn&#039;t just a rebrand; it represents a unified approach to building, scaling, and governing agents [1, 2]. With new capabilities like Memory Bank and multi-day Agent Engine Sessions providing persistent context, the attack surface inherently changes [3]. To address this, Google Cloud has introduced centralized oversight tools designed to keep agents operating within strict, auditable enterprise guardrails.

Key Governance and Security Features Announced at Next &#039;26

- Agent Identity &amp;amp; Registry: Establishes a trackable identity for every agent (whether first-party or partner-built), ensuring that all autonomous actions are authenticated, observable, and mapped to a specific lifecycle owner [1].
- Agent Gateway: Acts as a centralized control plane to enforce enterprise guardrails, strict access policies, and routing logic for multi-agent workflows [1].
- Agent Sandbox / Workspaces: Provides a hardened, &quot;secure-by-design&quot; sandboxed execution environment. Isolated from your core systems, this allows agents to safely execute model-generated code, bash commands, and browser automation without introducing systemic risk [1].
- Model Armor: A dedicated security layer designed to defend agents against indirect prompt injections and malicious inputs during runtime [2].
- Zero-Trust A2A Architecture: Secures Agent-to-Agent (A2A) orchestration. As agents delegate tasks natively across frameworks like LangGraph, Semantic Kernel, or CrewAI, zero-trust principles seamlessly authenticate and authorize every system handoff [2].

Data Security, IAM, and Compliance Integration

Beyond the orchestration layer, governing the data pipeline remains a priority. The platform maintains tight integration with the broader Google Cloud security ecosystem:

- Access Management &amp;amp; Auditability: Native integration with Google Cloud IAM and comprehensive audit logging ensures granular, least-privilege access control over what data an agent can query or manipulate [2].
- Data Loss Prevention (DLP): Built-in native DLP and logging provide continuous visibility into model inputs and outputs, helping to enforce policies and block sensitive enterprise data from entering unauthorized training pipelines [4].
- Lifecycle and Lineage Tracking: Enhanced visibility across connected data sources—from Google Workspace assets to external data warehouses via retrieval-augmented generation (RAG)—allows security teams to track data lineage, maintain hygiene, and enforce regional data residency requirements [4].

Looking Ahead

As developers leverage the new visual Agent Studio or the code-first Agent Development Kit (ADK) v1.0, security teams now have the built-in hooks required to say &quot;yes&quot; to AI automation safely [1]. By rooting agentic workflows in centralized identities, hardened execution sandboxes, and continuous observability, the Gemini Enterprise Agent Platform provides the foundation needed for defensible AI governance at scale.</description>
            <category>Security Command Center</category>
            <pubDate>Sun, 26 Apr 2026 23:01:14 +0200</pubDate>
        </item>
                <item>
            <title>SOAR Data Table Enrichment: Keeping IP–Hostname Association with Multiple Tables</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/soar-data-table-enrichment-keeping-ip-hostname-association-with-multiple-tables-7208</link>
            <description>Hello,

I am experiencing an issue related to the use of Data Tables for data enrichment in SOAR. Currently, we have several tables containing IP addresses along with their corresponding hostnames (and in some cases, additional information such as VLAN or other attributes).

The goal is to query these tables using the “Is Value In Data Table” action, so that when searching for an IP address, the associated values from the corresponding columns are returned. However, a limitation arises because we need to query three different Data Tables simultaneously, each with different structures and fields.

The expected output would be something like:

- IP (hostname, vlan) if associated information exists
- IP only, if no match is found

Additionally, we encounter issues when multiple IPs are present in the same alert. The output does not preserve the relationship between each IP and its corresponding hostname, instead returning an aggregated result such as: ip1, ip2, ip3, hostname1, hostname2, hostname3, which makes it difficult to interpret.

Is there a recommended way to perform this type of enrichment while preserving the relationship between each IP and its attributes (either using this method or an alternative approach)? Has anyone faced a similar scenario and can share best practices? Thank you in advance.
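For comparison, the association-preserving merge being asked for is straightforward to express in plain Python, e.g. inside a custom action. The table rows below are hypothetical stand-ins for the three Data Tables; this is a sketch of the desired output format, not a SOAR-specific implementation.

    # Hypothetical rows from three data tables with different schemas.
    table_a = [{"ip": "10.0.0.1", "hostname": "web01", "vlan": "100"}]
    table_b = [{"ip": "10.0.0.2", "hostname": "db01"}]
    table_c = [{"ip": "10.0.0.9", "owner": "netops"}]

    def enrich(ips, tables):
        # Index all tables by IP first, so each IP keeps its own attributes
        # instead of the aggregated "ip1, ip2, hostname1, hostname2" output.
        index = {}
        for table in tables:
            for row in table:
                extras = {k: v for k, v in row.items() if k != "ip"}
                index.setdefault(row["ip"], {}).update(extras)

        results = []
        for ip in ips:
            attrs = index.get(ip)
            if attrs:
                detail = ", ".join(f"{k}={v}" for k, v in sorted(attrs.items()))
                results.append(f"{ip} ({detail})")
            else:
                results.append(ip)  # IP only, when there is no match
        return results

    print(enrich(["10.0.0.1", "10.0.0.2", "8.8.8.8"], [table_a, table_b, table_c]))
    # ['10.0.0.1 (hostname=web01, vlan=100)', '10.0.0.2 (hostname=db01)', '8.8.8.8']
</description>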
            <category>Google Security Operations</category>
            <pubDate>Sun, 26 Apr 2026 18:11:09 +0200</pubDate>
        </item>
                <item>
            <title>Hello, Cloud Security Enthusiast! 👋 Introduce Yourself!</title>
            <link>https://security.googlecloudcommunity.com/news-announcements-9/hello-cloud-security-enthusiast-introduce-yourself-5450</link>
            <description>Hey Everyone!

Welcome to the Google Cloud Security Community! We want to kick things off by getting to know each other better. This space is all about connecting, sharing, solving, and building the future of cloud security – and that journey starts with you!

So, don&#039;t be shy! Drop a quick intro below and tell us:

- Who are you? (Your name, role, etc.)
- What&#039;s your cloud security superpower? (What area excites you most, or a cool project you&#039;re working on?)
- What are you hoping to learn or share here? (Let&#039;s help each other grow!)

We&#039;re incredibly excited to learn from your unique experiences and build a vibrant hub where we can all protect, create, and innovate together. Can&#039;t wait to meet you all!

Matt</description>
            <category>News &amp; Announcements</category>
            <pubDate>Sun, 26 Apr 2026 14:42:26 +0200</pubDate>
        </item>
                <item>
            <title>Building an AI-Powered Digital Plant Health Clinic for Smallholder Farmers with Google Cloud</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/building-an-ai-powered-digital-plant-health-clinic-for-smallholder-farmers-with-google-cloud-7373</link>
            <description>Hello Google Cloud Community,

I am the founder of Meyra Digital Plant Health Clinic (DPHC), an initiative focused on supporting smallholder farmers with accessible, AI-powered plant health services. Our mission is to reduce dependency on chemical pesticides by providing farmers with natural, locally available solutions combined with intelligent diagnosis tools.

We are currently exploring how to leverage Google Cloud to build a scalable and impactful system with the following components:

- Image-based plant disease detection using AI/ML
- Automated advisory system for eco-friendly treatment recommendations
- Mobile-first delivery (Telegram bot / lightweight apps) for rural accessibility
- Scalable backend infrastructure for future expansion

I would greatly appreciate guidance from the community on:

- The most suitable Google Cloud services for building and scaling this solution (e.g., Vertex AI, Firebase, Cloud Run)
- Best approaches for training or integrating plant disease detection models
- Designing cost-efficient architectures for early-stage startups in low-resource environments
- Opportunities within Google for Startups or related support programs

This project is being developed with a strong focus on environmental sustainability, farmer resilience, and practical field implementation. I am open to collaboration, mentorship, and technical guidance.

Thank you for your support.
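On the model-integration question, one common pattern is a deployed Vertex AI endpoint that a lightweight backend (or Telegram bot) calls per image. A minimal sketch, assuming the google-cloud-aiplatform package, an image-classification model already deployed to an endpoint, and placeholder project/endpoint IDs; the exact instance format depends on the model you deploy.

    import base64
    from google.cloud import aiplatform

    def classify_leaf(project: str, location: str, endpoint_id: str, image_path: str):
        # Placeholder IDs; assumes the deployed model accepts base64 images.
        aiplatform.init(project=project, location=location)
        endpoint = aiplatform.Endpoint(endpoint_id)

        with open(image_path, "rb") as f:
            content = base64.b64encode(f.read()).decode()

        prediction = endpoint.predict(instances=[{"content": content}])
        print(prediction.predictions)  # per-class scores, model-dependent
        return prediction

    classify_leaf("my-project", "us-central1", "1234567890", "leaf.jpg")  # placeholders
</description>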
            <category>Google Threat Intelligence</category>
            <pubDate>Sat, 25 Apr 2026 23:09:03 +0200</pubDate>
        </item>
                <item>
            <title>URGENT: Project ZIP-NEW (zip-new-141ba) Suspended for Hijacking - No Response to Appeal after 48h.</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/urgent-project-zip-new-zip-new-141ba-suspended-for-hijacking-no-response-to-appeal-after-48h-7294</link>
            <description>My production project has been down and suspended for 48 hours now, with no prior notice and no reply to my appeal request. Not sure what to do. We are facing a huge loss. I have audited all my code for security leaks and it is clean. Not sure what happened.</description>
            <category>Google Threat Intelligence</category>
            <pubDate>Sat, 25 Apr 2026 17:49:31 +0200</pubDate>
        </item>
                <item>
            <title>!urgent help, My Project (id: bionic-xxxx) is being suspended for repeatedly violating our Google Cloud Platform Terms of Service </title>
            <link>https://security.googlecloudcommunity.com/cloud-security-foundation-7/urgent-help-my-project-id-bionic-xxxx-is-being-suspended-for-repeatedly-violating-our-google-cloud-platform-terms-of-service-6696</link>
            <description>My project shows this message: “My Project (id: bionic-xxxx) is being suspended for repeatedly violating our Google Cloud Platform Terms of Service or Acceptable Use Policy (including Terms of Service of the Google API you may be using).” I cannot do anything in my project. How can I fix this problem? I submitted a request a long time ago, but I haven&#039;t received a response from Google support yet. Does anyone have a solution? Please help me.</description>
            <category>Cloud Security Foundation</category>
            <pubDate>Sat, 25 Apr 2026 17:46:19 +0200</pubDate>
        </item>
                <item>
            <title>BREAK STUFF AND BUILD THINGS</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/break-stuff-and-build-things-7355</link>
            <description>We are in the ag-tech industry and are “Breaking the IoT” with localized LORAN solutions. What is one creative way you would stop a user from accidentally performing something malicious? </description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Sat, 25 Apr 2026 16:55:58 +0200</pubDate>
        </item>
                <item>
            <title>No response, suspended Google Cloud Project</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/no-response-suspended-google-cloud-project-7352</link>
            <description>Project id: project-5580276092549813950. My project was suspended on 5 April due to misuse of some keys. I have filed an appeal describing the actions taken to prevent this from happening again, but I do not get any response at all. Now I am observing users dropping off, unable to sign in, and complaining. It is devastating to watch after years of developing an app for dog owners. It is very critical.
Actions taken:
• Removed exposed API keys from all repositories and application code
• Ensured that no API keys or credentials are publicly accessible
• Reviewed all Git repositories and verified that sensitive credentials are no longer present
• Updated development practices to prevent API keys from being embedded in client-side applications
• Planned migration of all Gemini API requests from client-side usage to a secure backend service
• Implemented safeguards to ensure that future API usage is authenticated and controlled
• Preparing to rotate all API keys and service account credentials immediately once project access is restored
• Added billing monitoring procedures to detect abnormal usage earlier
Preventive measures:
• All AI-related API calls will be routed through a secure backend
• API keys will be restricted to specific services and usage patterns
• Budget alerts and usage monitoring will be configured to prevent unexpected billing spikes
• Internal review of credential handling practices has been completed
But I&#039;m not able to access the Google Cloud console to create new keys, etc. Please help me.</description>
            <category>Google Security Operations</category>
            <pubDate>Sat, 25 Apr 2026 16:16:42 +0200</pubDate>
        </item>
                <item>
            <title>Intune Logs via Third-Party Graph API Feed</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/intune-logs-via-third-party-graph-api-feed-7371</link>
            <description>Hi Google Team, We have successfully integrated Microsoft Intune logs into Google SecOps using a third-party Graph API feed. The feed configuration appears to be working — logs are being ingested and the feed shows a healthy status. However, we&#039;ve noticed a significant discrepancy in log size:
- **In Google SecOps (via the feed):** Ingested log size is approximately **40 KB**
- **Manual API retrieval (direct Graph API call):** The same file/export is approximately **6 MB**
This is roughly a 150x difference, which strongly suggests that the feed is either truncating or only partially ingesting the log data.
**What we&#039;ve already ruled out:**
- Feed configuration is confirmed successful with no visible errors
- Authentication against the Microsoft Graph API is working correctly
- We have verified via direct API calls that pagination exists in the returned logs and that the full dataset is returned when querying manually
**Questions:**
1. Is there a known ingestion size cap or event limit per feed cycle for third-party Graph API feeds in Google SecOps?
2. Could the feed be silently truncating payloads beyond a certain size threshold?
3. Are there any feed-level or forwarder-level settings (e.g., max payload size, batch size limits) that could explain why only ~40 KB out of ~6 MB is making it through?
4. Is there a way to enable verbose feed logging to identify exactly where the data loss is occurring?
Any insight from the community or Google SecOps team would be greatly appreciated.
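As a point of reference for the manual-retrieval side of this comparison, a minimal sketch of a pagination-aware size check; the Graph resource and token handling below are placeholders, and any paginated Graph endpoint works the same way:

import requests

# Walk Microsoft Graph pagination via @odata.nextLink and total the payload size,
# to compare against what the SecOps feed actually ingested.
url = &quot;https://graph.microsoft.com/v1.0/deviceManagement/auditEvents&quot;  # placeholder resource
headers = {&quot;Authorization&quot;: &quot;Bearer YOUR_TOKEN&quot;}

total_bytes, pages = 0, 0
while url:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    total_bytes += len(resp.content)
    pages += 1
    url = resp.json().get(&quot;@odata.nextLink&quot;)  # present only while more pages remain

print(f&quot;{pages} pages, {total_bytes / 1024:.0f} KB total&quot;)</description>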
            <category>Google Security Operations</category>
            <pubDate>Fri, 24 Apr 2026 23:55:45 +0200</pubDate>
        </item>
                <item>
            <title>Where in the cloud console can I see what type of recaptcha we have?</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/where-in-the-cloud-console-can-i-see-what-type-of-recaptcha-we-have-7370</link>
            <description>I am slowly but surely working on migrating my many clients to the new reCAPTCHA area in the Cloud Console. Sometimes I have been able to upgrade from a classic key, and sometimes I had to start fresh. When working in websites and adding the reCAPTCHA key(s) to a specific tool (like a popup builder or opt-in form), that tool has a spot for adding those keys depending on what type I am using: V2 checkbox, V2 invisible, or V3. Meaning they have different fields depending on which type of reCAPTCHA. So I go back into the Cloud Console to try to find what type it is, and I cannot find any reference to that. The classic keys are all listed in a table with a column for type. But even when I look at the specific key (already upgraded) in a table, there is no “type” column. Are ALL migrated reCAPTCHAs V3? If not, where exactly can I see what the reCAPTCHA actually is?</description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Fri, 24 Apr 2026 21:18:25 +0200</pubDate>
        </item>
                <item>
            <title>Secops - Data table</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/secops-data-table-7064</link>
            <description>Hi, I’m working on enriching AWS CloudTrail logs with Organization Unit (OU) information. Since CloudTrail logs (except for the “CreateAccount” event) do not natively include the OU, I’ve implemented the following workflow: I created a Data Table that maps Account ID to OU, and I wrote a detection rule that triggers on the “CreateAccount” log and uses the write_row function to update the mapping in the Data Table. I need the OU information to be available in every case opened for a specific account, even though the logs triggering those cases don&#039;t contain the OU field. What is the best practice to enrich a case with a field from a data table in this situation? Can I leverage the outcome section to fetch the OU from my data table and then promote it to the case level? I’m looking for the standard procedure to handle the Alert-to-Case mapping for these enriched values. Or is graph_override a better approach for this type of entity-based enrichment? I want to make sure that when an analyst opens a case for &quot;Account_AAA&quot;, the &quot;OU&quot; field is clearly visible in the case context/metadata. Could you please provide a code example of the YARA-L lookup and explain the steps to map this outcome to a case field? Thanks
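For discussion purposes, a rough sketch of the outcome-based lookup being asked about; data table column-access syntax has evolved across releases, so treat every line here as an assumption to be checked against the current YARA-L data table documentation (the table name, columns, and event filter are hypothetical):

rule Enrich_Account_With_OU {
  meta:
    description = &quot;Illustrative sketch only: surface an account&#039;s OU by joining against a data table.&quot;
    severity = &quot;INFORMATIONAL&quot;
  events:
    $e.metadata.log_type = &quot;AWS_CLOUDTRAIL&quot;          // hypothetical event filter
    $e.principal.user.userid = $account_id
    // Join the event&#039;s account ID against the data table&#039;s key column (assumed syntax)
    $account_id = %account_ou_map.account_id
  match:
    $account_id over 1h
  outcome:
    // Export the OU so it appears on the detection, and from there in the alert/case context
    $org_unit = array_distinct(%account_ou_map.ou)
  condition:
    $e
}</description>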
            <category>Google Security Operations</category>
            <pubDate>Fri, 24 Apr 2026 17:14:55 +0200</pubDate>
        </item>
                <item>
            <title>time of deploy of GKE cluster and pods, based filter</title>
            <link>https://security.googlecloudcommunity.com/security-validation-5/time-of-deploy-of-gke-cluster-and-pods-based-filter-7360</link>
            <description>Logs and deployment instances should be filtered by time.
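One hedged starting point, assuming the intent is to see when a GKE cluster and its workloads were created within a time window, is a time-bounded Cloud Logging query against the audit logs; the project ID, timestamps, and method-name fragment below are placeholders:

gcloud logging read \
  &#039;resource.type=&quot;k8s_cluster&quot; AND protoPayload.methodName:&quot;create&quot; AND timestamp&amp;gt;=&quot;2026-04-20T00:00:00Z&quot; AND timestamp&amp;lt;=&quot;2026-04-24T23:59:59Z&quot;&#039; \
  --project=my-project --limit=50</description>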
            <category>Security Validation</category>
            <pubDate>Fri, 24 Apr 2026 12:16:36 +0200</pubDate>
        </item>
                <item>
            <title>how to detect cyber criminals and protection against black hat hacker</title>
            <link>https://security.googlecloudcommunity.com/security-command-center-4/how-to-detect-cyber-criminals-and-protection-against-black-hat-hacker-7291</link>
            <description>How can we detect cybercriminals and stop black hat hackers? </description>
            <category>Security Command Center</category>
            <pubDate>Thu, 23 Apr 2026 23:36:30 +0200</pubDate>
        </item>
                <item>
            <title>Case generated with older events</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/case-generated-with-older-events-7309</link>
            <description>I recently created detection rules that look for possible suspicious activities by a terminated user. The idea was to only escalate when we see these rules fire after the termination timestamp. The logic is not complex: essentially single-event rules looking at certain event_types along with principal and target userids of the user, with no match section, therefore running as NRT. I did my tests to exclude possible offboarding-related events. However, after I enabled Alerting, cases were generated with events that occurred days before the rules were set to Alerting. I noticed that the Alerts tied to the cases had the tooltip information below. Based on what I’ve gathered, this has to do with how the detection rules pipeline makes sure that even “older” events are properly surfaced; it is essentially a continuous stream processor. Some of it is covered here: Latency Analysis in Google SecOps | by Chris Martin (@thatsiemguy) | Medium. The tooltip information is a good indicator, but the problem is that we have an MSSP setup where we forward cases from the client instance to our MSSP instance, and this tooltip information does not get carried over to the case generated in the MSSP instance. The event timestamp from the detection itself is obviously an indicator, but that defeats the purpose of the tooltip. In theory the approach of prioritizing recall over precision is understandable, but in practical terms it can cause issues. My questions are:
1. How can we exclude events that happened prior to enabling Alerting? Will adding a filter on metadata.event_timestamp work? Although the timestamp filter approach is only relevant to use-cases where we have a cut-off time.
2. Is there any other way to highlight older events that are being surfaced with a significantly different case generation time?
3. Any other useful tidbit to keep in mind when handling scenarios like this?
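On question 1, the cut-off filter itself is easy to express; a minimal sketch of the idea, patterned on standard YARA-L timestamp comparisons. The event type, userid, and epoch value are placeholders, and whether this fully suppresses pipeline backfill is exactly what would need testing:

rule Terminated_User_Activity_After_Cutoff {
  meta:
    description = &quot;Sketch: ignore events that predate the alerting cut-off.&quot;
    severity = &quot;LOW&quot;
  events:
    $e.metadata.event_type = &quot;USER_LOGIN&quot;           // placeholder event type
    $e.principal.user.userid = &quot;terminated.user&quot;    // placeholder user
    // Drop anything that occurred before alerting was enabled (Unix epoch seconds, placeholder)
    $e.metadata.event_timestamp.seconds &amp;gt;= 1776211200
  condition:
    $e
}</description>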
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 23:33:44 +0200</pubDate>
        </item>
                <item>
            <title>How to restart OAuth Verification process? No email to respond to.</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/how-to-restart-oauth-verification-process-no-email-to-respond-to-7358</link>
            <description>Hello, I am trying to get sensitive scopes enabled on our OAuth application, but the video we previously submitted was deemed insufficient. We now have an updated video ready to submit; however, I have not received any follow-up email to respond to. Why is there no user interface available for this process? How am I supposed to resubmit the updated video? Is there a specific email address or support channel I should use to follow up? Additionally, it is frustrating that a required process for any OAuth application is so opaque. At the moment, I have no clear action to take and no accessible support channel unless I pay for premium support. Thank you for your guidance.</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 22:16:31 +0200</pubDate>
        </item>
                <item>
            <title>URGENT: Systemic Deadlock Over 22 Days - Support Case #70072588 / Appeal # [removed by moderator] (Academic NLP Project)</title>
            <link>https://security.googlecloudcommunity.com/cloud-security-foundation-7/urgent-systemic-deadlock-over-22-days-support-case-70072588-appeal-removed-by-moderator-academic-nlp-project-7356</link>
            <description>Hello, I am seeking a manual escalation from a Community Manager or Admin. I am a Startup EdTech Lead caught in a documented &quot;Infinite Support Loop&quot; that has halted our academic research for nearly three weeks.
The Deadlock:
- Trust &amp;amp; Safety (Case # [removed by moderator]): On 5/4, I submitted a comprehensive technical Root Cause Analysis (RCA). I detailed our &quot;Magic Byte&quot; pre-flight validation fix and the air-gapping of the corrupted legacy dataset that triggered the &quot;Abusive Activity&quot; flag (due to 404/HTML payloads from a third-party source). There has been zero response for 19 days.
- Cloud Billing Support (Case #70072588): Agent Kervin has officially closed our support ticket, stating: &quot;Resolving account-level restrictions... falls entirely outside our support boundaries. We do not have the administrative access to lift this restriction [or] redirect the case.&quot;
- Community Forum: My previous attempt to resolve this was marked &quot;Solved&quot; by a manager directing me back to the Support portal—the same portal that has now officially stated they cannot assist.
The Dependencies: This is an academic NLP project for Automated Essay Scoring. Our entire infrastructure is currently on hold. We have a positive balance on the account, but we are restricted from using the services we have already paid for due to a compliance flag that no one will review.
The Request: I am not looking for generic &quot;contact support&quot; links. I have already contacted every available department. I am requesting that a Community Manager manually flag Appeal Case # [removed by moderator] for the Trust &amp;amp; Safety team to review the technical documentation provided on 5/4. We have identified the error, implemented the fix, and provided the proof. We just need a human to look at the ticket.
Account: [removed by moderator]</description>
            <category>Cloud Security Foundation</category>
            <pubDate>Thu, 23 Apr 2026 20:26:23 +0200</pubDate>
        </item>
                <item>
            <title>IAM/API Console 302 Redirect: Seeking CLI Workaround to Rotate Leaked Keys for Project zip-new-141ba</title>
            <link>https://security.googlecloudcommunity.com/cloud-security-foundation-7/iam-api-console-302-redirect-seeking-cli-workaround-to-rotate-leaked-keys-for-project-zip-new-141ba-7345</link>
            <description>Hi everyone, I&#039;m facing a critical remediation blocker following a &quot;Hijacked Resource&quot; suspension on the project. While I am eager to secure my environment and rotate all potentially compromised credentials, I am trapped in a redirect loop that prevents administrative action.
The Technical Problem: Whenever I navigate to IAM &amp;amp; Admin or APIs &amp;amp; Services, the GCP Console performs a forced redirect to the suspension warning page. This means I cannot revoke existing API keys or audit Service Account activity through the standard UI.
Investigation Status:
- Audit: Local .env files and Git history have been reviewed, but I suspect a credential may have been intercepted or leaked elsewhere.
- Timeline: Appeal submitted 7+ days ago; no response received. Production environment remains offline.
Seeking Expert Advice on:
- Programmatic Revocation: What is the specific gcloud syntax to force-delete all active API keys when the project status is &quot;Suspended&quot;?
- Log Retrieval: Can I export Activity Logs or VPC Flow Logs via the SDK to pinpoint the source of the &quot;abusive activity&quot; and confirm the leak is plugged?
- Trust &amp;amp; Safety Contact: Is there a way to provide &quot;Proof of Remediation&quot; to the safety team when you are physically blocked from the UI tools needed to fix the issue?
I am ready to perform a full credential rotation immediately if I can bypass the console redirect. Any guidance from the community or the Google team would be appreciated.
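On the programmatic-revocation question, the usual gcloud route looks roughly like the sketch below; whether these calls are even permitted while the project is suspended is exactly the open question here, and the key ID is a placeholder:

# List API keys for the project (this call may itself be blocked during suspension)
gcloud services api-keys list --project=zip-new-141ba

# Delete a specific key using its ID from the list output
gcloud services api-keys delete KEY_ID --project=zip-new-141ba</description>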
            <category>Cloud Security Foundation</category>
            <pubDate>Thu, 23 Apr 2026 17:03:37 +0200</pubDate>
        </item>
                <item>
            <title>Staying on top of AI Developments</title>
            <link>https://security.googlecloudcommunity.com/ciso-blog-77/staying-on-top-of-ai-developments-4060</link>
            <description>
Artificial Intelligence (AI) is rapidly transforming the business world. From streamlining operations to unlocking deeper customer insights, AI brings a wealth of potential to the modern enterprise. But to capitalize on these benefits, companies need to go beyond just implementing AI systems - they need a workforce that&#039;s prepared to work alongside them (this includes the business, IT and even security professionals).
We’re often asked how to stay on top of AI developments, both technological and regulatory, and how to empower teams with the knowledge, skills, and an understanding of the risks in using AI.
When approaching how to enable your workforce for AI adoption, it’s important to recognize it isn&#039;t just about technology, but about investing in your people. By demystifying AI, focusing on strategic skills building, and creating a culture that values continuous learning, your enterprise can unlock the full potential of AI as a transformative technology and prepare for the future of work. It is critical that both IT and business teams understand how AI works, how these risks materialize, and what to do about them.
Staying informed about AI is no longer optional - it&#039;s vital. New use cases are emerging and problems are being solved by capable users of the technology. A workforce empowered by AI knowledge translates into innovation, increased efficiency, and a stronger competitive edge, laying the path for successful AI integration in a changing business environment. In this blog, we discuss some strategies leaders can employ to upskill their teams.
Demystify AI
As noted in Google’s Secure AI Framework (SAIF), it’s important to level set with an AI primer and its security follow-up. Start with the basics of AI - what it is, what it&#039;s not, and its business applications. Aligning on concepts like AI, machine learning (ML), Deep Learning, gen AI, and large language models (LLMs) enables all stakeholders to accurately identify and evaluate the relevant risks and controls required to manage and deploy AI safely and responsibly.
Using a common vernacular across the enterprise also serves to build a strong foundation to foster further learning. Likewise, it elevates the discussion, addressing common misconceptions and helping to avoid missteps.
Implement an AI Skills Building Program
Because the best way to learn AI is to actually use the models - experiment with them, spend time with them, apply them in your work.
Tailor your learning initiatives for different target audiences. Technologists, data scientists, information security specialists, compliance teams and business users, for instance, will all be expected to be proficient in using AI at some point in the near future, but their usage will vary significantly. At Google Cloud, we understand that one size doesn’t fit all, so we’ve developed AI learning paths that provide options based on area of interest and experience level.
As we previously noted, “Google Cloud has been working for decades to bring AI technology solutions to organizations, and our tools make it easier to build experiences across our cloud portfolio. Whether you’re executive-level, an IT decision maker, in a non-technical role, or a technical practitioner, we have videos, courses and labs to help you learn about the power of generative AI.”
Our Generative AI Skills Boost provides business users with an overview of generative AI concepts from the fundamentals of large language models to responsible AI principles. For those looking for more of a hands-on experience, try hands-on labs and explore prompt engineering. Essentially, look for opportunities to enable your staff to experiment safely and securely.
Persona-based Training
For AI engineering professionals and application developers, How Google Does Machine Learning, the Generative AI for Developers Learning Path, and Getting Started with Machine Learning Operations (ML Ops) cover best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud; put that knowledge to use with curated courses, hands-on labs, and certificates in AI, data analytics, and cybersecurity. Exchange ideas by joining a Google Developer Group.
For information security professionals looking to stay up to date on the evolving cybersecurity landscape and emerging threats, subscribe to the CISO perspectives newsletter, peruse the Threat Horizons intelligence report, hear from security leaders on the Cloud Security Podcast, and refer to what to think about when you’re thinking about securing AI. Additionally, our Threat Intelligence blog arms security professionals with the in-depth knowledge, skills, and tools to defend against the latest and most pressing threats.
For Risk and Compliance professionals, reference Google Cloud’s Approach to Trust in Artificial Intelligence for a view into Google’s security, privacy, governance, and responsible AI posture. When assessing and advising on your organization’s AI usage, take a look at our AI governance best practices for some helpful tips.
Cultivate a Culture of Continuous Learning
AI is going to transform the way work is done, and your ability to learn and apply this new technology is critical to maintaining a competitive advantage. Keep in mind, staying current in a field changing as rapidly as AI isn’t a one-time exercise, and fostering a continuous learning culture in your organization takes an intentional, proactive and programmatic approach.
Encourage employees to stay updated on AI trends and advancements, creating a work environment that embraces experimentation and the use of new technologies. Raise awareness by highlighting key questions that need to be addressed to drive secure AI implementation.
Consider including AI upskilling in staff’s expectations, and provide opportunities for learning by attending conferences, webinars, and industry events that facilitate network building, sharing ideas and resources, and staying on top of the latest trends. Amplify and celebrate how employees have successfully leveraged AI in their work and offer rewards or recognition for AI certification completion or for accomplishing other milestones.</description>
            <category>CISO Blog</category>
            <pubDate>Thu, 23 Apr 2026 16:31:22 +0200</pubDate>
        </item>
                <item>
            <title>Exploring the Security Graph with Graph Search</title>
            <link>https://security.googlecloudcommunity.com/security-command-center-84/exploring-the-security-graph-with-graph-search-7351</link>
            <description>Author: Vasken Houdoverdov
Introduction
Graph Search is a powerful new capability introduced in Security Command Center (SCC) that fundamentally changes how security teams discover and investigate high-risk issues in Google Cloud. It allows you to explore the security graph—a relationship-aware database that continuously maps your cloud resources, their configurations, and associated risk indicators such as vulnerabilities, access permissions, data sensitivity, and network exposure. Instead of looking at isolated alerts, Graph Search allows you to discover risks using natural-language queries. You can quickly ask and answer complex questions such as:
- &quot;Which external identities have access to sensitive data?&quot;
- &quot;Where do IAM misconfigurations create lateral movement risk?&quot;
This adoption guide will provide a deep dive into understanding query components, building advanced custom queries, utilizing predefined rules, and troubleshooting your results.
Note: Graph Search is available for organization-level activations of Security Command Center at either Premium Tier or Enterprise Tier. To use Graph Search, you need a role like Security Center Admin Viewer at the organization level. To remediate findings at the organization level or perform other administrative actions in SCC, you may need other privileges, like those found in the Security Center Admin role at the organization level.
Understanding Query Components
To effectively pinpoint potential security concerns, you need to understand the building blocks of a Graph Search query. Security graph queries consist of three main components:
1. Node. A node represents the core subject of your investigation: either a specific cloud resource or a security finding. Nodes are organized logically by categories such as Compute, Kubernetes, Identity, and Databases. Some common examples of nodes include:
- CVE Vulnerability: A Common Vulnerabilities and Exposures vulnerability (defined by MITRE).
- Virtual Machine (GCE): A Compute Engine instance.
- GKE Deployment: A Google Kubernetes Engine resource.
- IAM Service Account: An Identity and Access Management service account.
- BigQuery Dataset: A dataset within BigQuery.
[Image: A selection of available nodes in Graph Search related to Kubernetes.]
2. Where clause (filter). A where clause is a context-aware filter applied to a specific node to refine your search based on that node&#039;s properties. The interface will dynamically show you only the filters that are relevant to the selected node type. Examples of filters include:
- Severity = Critical (useful for filtering CVE nodes)
- Has Full API Access = True (useful for identifying over-privileged nodes)
- Exploitation Activity = Confirmed (identifies vulnerabilities currently being exploited in the wild)
[Image: A selection of available options for the where clause in Graph Search.]
3. Connection. A connection defines the directional relationship between two different nodes. Like filters, connections are context-aware, meaning the query editor will only show valid relationships for the specific node type you have selected. Note that populating the value for a Connection takes place after adding the connection to the graph search. This is done by clicking the “add” icon (the ‘plus’ sign) next to the connection after adding it to the graph search. Examples of connections include:
- that is affected by: Links a finding to a resource, such as a CVE Vulnerability that affects a Virtual Machine (GCE).
- that uses: Links resources together, such as a Virtual Machine (GCE) that uses an IAM Service Account.
- that hosts: Used for identifying nodes hosted in a GKE cluster.
[Image: A selection of available connections in Graph Search. These connections represent relationships between nodes.]
Step-by-Step: How to Build Custom Queries
Building a query allows you to explore your environment based on the specific security criteria that matter most to your organization. You can begin by either modifying a predefined search suggestion or starting entirely from scratch to build a custom query. To create a custom query in the Google Cloud console:
1. Navigate to the Security Command Center Graph search page.
2. In the Show field, click Add to select a resource or finding as your primary node, then click Continue.
3. Refine your node: Click the Where toggle for a specific property. In the Filter value field, enter or select the value that the property must contain (e.g., setting a severity level).
4. Link nodes: Click the toggle next to a valid connection type (like that affects or that uses) to establish a relationship with a second node.
5. To further customize, you can click Add next to a node or connection to introduce new components, or click Close to remove a component.
6. Click Run query to execute the search and update your results.
[Image: The Graph Search for “Externally Reachable VMs with Exploitable CVEs and access to Sensitive GCS Buckets”]
Leveraging Predefined Security Graph Rules
In addition to manual Graph Search queries, Security Command Center continuously evaluates the security graph against predefined rules to automatically generate an &quot;Issue&quot; when a relationship risk is discovered. These predefined rules automatically look for toxic combinations of vulnerabilities, external exposure, and identity misconfigurations. Notable examples of predefined rules include:
- External Exposure &amp;amp; Critical CVEs: Rules instantly flag externally exposed GCE Instances or GKE Workloads with high-risk CVEs where an exploit is available. This includes specific rules for actively exploited vulnerabilities like CVE-2025-49844 in Redis, CVE-2025-32433 in Erlang SSH, CVE-2023-46604 in Apache ActiveMQ, and CVE-2025-59287 in Windows WSUS.
- Identity Impersonation: SCC flags Compute Engine instances or GKE Node Pools that have a high-risk CVE and the ability to impersonate a service account (SA) that has access to high-value resources or sensitive data.
- AI Workload Risks: Rules protect AI infrastructure by identifying Vertex AI Workbench instances with high-risk CVEs, particularly those using over-privileged service accounts, which could lead to the exfiltration of training data and model source code.
- Key Misconfigurations: SCC identifies service accounts using unrotated, long-lived keys, or user-managed keys that have excessive permissions, which dramatically increases the risk of credential compromise and privilege escalation.
[Image: A selection of predefined Graph Search queries]
Analyzing Results and Troubleshooting
Once your query executes, you can review the results in a table, customize the view by selecting specific columns, and sort the data. If you need to perform offline analysis or share findings with other teams, you can export your query results as a CSV file (limited to up to 1,000 rows). If your custom query returns no results, use the following troubleshooting checklist:
- Simplify your query: Avoid combining too many constraints, which can unintentionally exclude results. Try removing or reducing filters to broaden your search scope, or query a single asset type to validate that data is flowing.
- Test with a predefined suggestion: Try running one of the predefined search suggestions provided in the console. These are designed to return results in a variety of environments and can help validate that your graph is populated.
- Allow time for data sync: If you recently deployed new resources or updated IAM policies, it can take a few minutes or even hours for those changes to populate in the security graph.
- Verify access permissions: Ensure you have the necessary IAM permissions to view the data you are querying. Without the correct access, specific assets and their relationships will be hidden or excluded from your results.
- Check graph coverage: Depending on your environment, certain data types or relationships might not be supported or available in the security graph.
Adoption Tip: The &quot;Build-Up&quot; Strategy
To ensure adoption success, start with one of the Predefined Suggestions and verify that it returns data. Then, incrementally add filters or connections one by one. This makes it easier to identify exactly why a query isn’t returning results.
Troubleshooting Empty Graph Search Results
During your initial usage of Graph Search, you might encounter empty results due to how the underlying query is specified. You can use the guidance below for navigating queries that return no results in Graph Search. When a query returns no results, follow these four diagnostic steps to identify the cause:
1. Query Optimization (Scope &amp;amp; Complexity). Often, &quot;No Results&quot; is the result of an overly specific or misconfigured query.
- Simplify the Query: Remove or reduce &quot;Where&quot; filters to broaden the scope.
- Validate Single Properties: Test the query by looking for a single asset type or a specific property known to exist (e.g., a specific VM name) to confirm the data is reachable.
- Avoid Constraint Overload: Combining too many directional connections (e.g., &quot;Resource X -&amp;gt; affects -&amp;gt; Finding Y&quot;) can unintentionally exclude valid results if the specific relationship path isn&#039;t perfectly matched.
2. Access &amp;amp; Permissions. Graph Search respects IAM boundaries. If you cannot see an asset, it won&#039;t appear in the graph.
- Verify IAM Roles: Ensure you have the roles/securitycenter.adminViewer or roles/securitycenter.adminEditor roles.
- Resource Visibility: Permissions must be granted at the appropriate level (Organization, Folder, or Project). If a user only has Project-level access, they will not see cross-project relationships in the graph.
3. Data Synchronization Latency. The Security Graph is not always &quot;real-time&quot; for brand-new changes.
- Sync Window: Recently created resources or updated IAM policies may take minutes to several hours to appear in the graph database.
- Wait and Retry: If you have just performed a remediation or deployed a new resource, allow time for the SCC backend to crawl and index the change.
4. Graph Coverage Limitations. Not every Google Cloud resource or relationship is currently indexed in the security graph.
- Supported Types: If data is missing, check if that specific resource type or relationship (connection) is supported by SCC Graph Search.
- Predefined Suggestions: Use a &quot;Predefined Search Suggestion&quot; (provided in the UI) as a baseline. If a predefined suggestion for a similar resource works but your custom query doesn&#039;t, the issue is likely in your query logic.
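To make the &quot;Verify IAM Roles&quot; check above concrete, granting the viewer role at the organization level looks roughly like this; the organization ID and member are placeholders:

gcloud organizations add-iam-policy-binding 123456789012 \
  --member=&quot;user:analyst@example.com&quot; \
  --role=&quot;roles/securitycenter.adminViewer&quot;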
Exporting Results to Other Tools
To operationalize your findings, Graph Search provides export functionality to bridge the gap between discovery and remediation. You can utilize the CSV export (limited to 1,000 rows) to pipe results into an external SIEM or other technology for long-term historical analysis. Furthermore, these results can be ingested by a SOAR platform to trigger automated remediation actions, such as automatically revoking an external IP address from a vulnerable VM or stripping an over-privileged role from a compromised service account.
Putting it to Use: Prioritizing the React Vulnerability
To understand the power of Graph Search, let&#039;s look at a critical, real-world scenario. Your organization needs to identify its exposure to the React vulnerability, CVE-2025-55182, which carries a CVSS score of 10 and is actively being exploited in the wild. While simply finding the vulnerability is powered by the agentless scans of the Vulnerability Assessment for Google Cloud feature, determining what to patch first requires context. You can use reachability context in Graph Search to prioritize remediation, focusing on workloads that are both vulnerable and externally reachable. In SCC, network exposure from the public internet is determined by two reachability statuses:
- Reachable: The resource is fully accessible from the public internet.
- Partially reachable: The resource is reachable from a subset of public IP ranges (even if other ranges are explicitly blocked).
Using Graph Search, you can build a query connecting a CVE Vulnerability node to a Virtual Machine (GCE) or GKE Deployment node, filtering for the specific React CVE, and filtering the resource node for external reachability.
Temporary Mitigations: If your query reveals exposed assets but immediate vendor patching is not possible, Graph Search has successfully identified the workloads where you must apply temporary workarounds. These include removing external reachability by configuring the instance to use only a Private IP, or creating a Google Cloud Armor WAF security policy to block exploitation attempts.
[Image: The Graph Search for virtual machines affected by CVE-2025-55182 (React2Shell)]
Putting it to Use: Prioritizing Supply Chain and Runtime Identity Risk
- Scenario: You need to prioritize GKE workloads that are both running vulnerable code and have access to high-value resources due to an over-privileged service account.
- Query Focus: CVE Vulnerability (Node, filtered for a critical score) that affects a GKE Deployment (Node, filtered for external reachability) that uses an IAM Service Account (Node, filtered for excessive permissions like Has Full API Access = True).
- Value: It targets the worst-case scenario: an externally exposed GKE cluster, running a high-risk CVE, where the associated service account can be immediately leveraged for lateral movement or data exfiltration if compromised.
[Image: A sample Graph Search configuration for this use case.]
Putting it to Use: Mapping Data Exfiltration Risk from Key Compromise
- Scenario: An analyst wants to find the highest-risk data assets (Cloud Storage Buckets, BigQuery Datasets) that could be compromised if an unrotated, long-lived service account key is leaked.
- Query Focus: Key Misconfiguration Finding (Node, filtered for unrotated user-managed keys) that affects an IAM Service Account (Node) that has access to a Cloud Storage Bucket (Node, filtered by label/tag for Sensitive Data).
- Value: This directly maps identity hygiene issues to data security, providing a clear, business-critical justification for immediate key rotation and remediation.
[Image: A sample Graph Search configuration for this use case.]
Putting it to Use: Hunting for Shadow Admins
- Scenario: You can hunt for service accounts with excessive permissions that are not actively used by building a query that focuses solely on the IAM Service Account (SA) node.
- Query Focus: An IAM Service Account (Node) that has Has Full API Access = True (Filter), or that holds a highly privileged role (e.g., roles/owner or roles/iam.serviceAccountAdmin).
- Goal: This query identifies &quot;shadow admins&quot;—Service Accounts that possess powerful permissions but may be overlooked in regular reviews, presenting a critical target for attackers seeking to escalate privileges.
- Value: Identity and Access Management (IAM) is a complex area in cloud security. While predefined rules cover common issues like Identity Impersonation and Key Misconfigurations, custom queries are essential for hunting high-risk, non-standard configurations like &quot;shadow admins.&quot;
[Image: A sample Graph Search configuration for this use case.]
Conclusion
Graph Search dramatically transforms how security teams investigate cloud environments. By moving beyond isolated alerts and allowing you to explicitly query the relationships between vulnerabilities, identities, and externally exposed resources, SCC helps you pinpoint and act on the most critical risks in Google Cloud. With the deep dive information in this adoption guide, you are fully equipped to start building custom graph queries, understanding reachability contexts, and prioritizing your remediation efforts effectively! </description>
            <category>Security Command Center</category>
            <pubDate>Thu, 23 Apr 2026 16:10:46 +0200</pubDate>
        </item>
                <item>
            <title>Catching the Uncatchable: True Composite Detections for Advanced Persistent Threats</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/catching-the-uncatchable-true-composite-detections-for-advanced-persistent-threats-7290</link>
            <description>Author: David Nehoda, Technical Solutions Consultant
What We&#039;re Solving
Advanced Persistent Threats don&#039;t announce themselves with a single loud alert. They execute multi-stage attack chains with an initial phishing click, a malicious file drop, a process execution, and an outbound C2 beacon, where each stage generates a low-severity alert that, in isolation, looks like noise. Your SOC receives 5,000 of these disconnected fragments per week. Analysts spend hours manually grouping them across Jira tickets and shift handoffs, trying to reconstruct the attack narrative after the fact. Most of the time, they can&#039;t, because the context is lost between queue items, and the attacker has already completed their objective. Alert fatigue isn&#039;t just an operational inefficiency. It&#039;s the #1 reason SOC analysts quit within 18 months. When every alert looks the same and none of them tell a story, the work becomes psychologically unsustainable.
What We&#039;re Delivering
This guide builds a True Composite Detection architecture in Google SecOps that:
- Collapses 5,000 low-severity alerts into 5 high-fidelity incidents per week; a 99% noise reduction
- Automatically stitches previously fired alerts into chronologically ordered kill chains; no manual Jira grouping
- Binds attack stages by shared identity and infrastructure; same user, same host, strict temporal sequence
- Fires only when the complete attack narrative is confirmed; file drop → process execution → C2 beacon, in order
- Delivers a single, CRITICAL-severity case to the analyst with full context from every stage pre-attached
How It Works
  BASE RULES (LOW severity)
    Rule A: File Drop    → exports $user, $hostname
    Rule B: Process Exec → exports $user, $hostname
    Rule C: C2 Beacon    → exports $user, $hostname
  (Each fires independently. LOW severity. Individually = noise. Together = kill chain.)
        ▼
  COMPOSITE RULE (CRITICAL severity)
    Queries outcomes[] from Rules A, B, C
    Binds on: same $user + same $hostname
    Enforces: A → B → C (chronological order)
    Window: 24h starting after Rule A fires
    Fires ONLY when all 3 base alerts align
        ▼
  SINGLE CASE
    CRITICAL alert
    Full kill chain context ready
The Numbers (each row: metric — before, with single-event alerts → after, with Composite Detections)
- Weekly alert volume: 5,000 disconnected LOW alerts → 5 actionable CRITICAL incidents
- Noise reduction: 0%, every alert hits the queue → 99%, only confirmed kill chains surface
- Analyst reconstruction time: hours per incident (manual Jira grouping) → 0, full context pre-attached to the case
- Correlation accuracy: human-dependent, error-prone across shift handoffs → deterministic; same user, same host, strict chronological order
- Evaluation speed: 10–15 min per alert × manual triage → &amp;lt; 1 second per composite evaluation
- Analyst retention: alert fatigue drives 18-month burnout cycles → meaningful, contextualized work improves retention
Who This Is For
This guide is for Detection Engineers building base and composite rules, SOC Managers who need to understand the noise reduction architecture, and CISOs evaluating how their SIEM investment translates into actionable threat detection rather than raw alert volume.
Impact &amp;amp; ROI Detail
- Alert Volume. Single-event alerts: Thousands of isolated low-severity alerts daily. Each alert is a disconnected fragment of a larger narrative that no individual analyst can piece together in real-time. The SOC drowns in noise. Composite Detections: 5,000 low-severity alerts mathematically collapse into 5 high-fidelity composite incidents per week. A 99% noise reduction that surfaces only confirmed, multi-stage attack chains.
- Correlation Model. Single-event alerts: Manual. Analysts spend hours grouping 15 different alerts across multiple Jira tickets to reconstruct an attack narrative. Context is lost between queue items, shift handoffs, and analyst fatigue. Composite Detections: Automatic. YARA-L 2.0 stitches previously fired detection alerts into chronologically ordered kill chains, binding them by shared identity and infrastructure fingerprints.
- Evaluation Speed. Single-event alerts: Each alert processed independently. A Tier-1 analyst spends 10–15 minutes per alert before realizing it connects to 14 other open tickets. Total reconstruction time: hours to days. Composite Detections: &amp;lt; 1 second per composite evaluation. The SIEM pre-correlates the entire attack chain before it hits the SOAR queue, delivering a single, high-context case to the analyst.
- Analyst Burnout. Single-event alerts: Alert fatigue is the #1 reason SOC analysts quit within 18 months. Processing 500 disconnected low-severity alerts per day with no narrative thread is psychologically unsustainable. Composite Detections: Analysts receive actionable, contextualized incidents instead of raw signal fragments. Retention improves because the work becomes intellectually meaningful.
The Architecture: Base Rules → Composite Detections
Before writing a single line of composite logic, you must understand the fundamental architectural difference between multi-event correlation rules and true Composite Detections.
Multi-Event Correlation (What Most SOCs Do)
A standard multi-event YARA-L rule queries raw UDM logs directly. It correlates raw telemetry events, like firewall logs, EDR process launches, and authentication records, within a single rule using shared variables and match windows. This is powerful, but it has a ceiling: as the attack chain grows beyond 3–4 stages, the rule becomes impossibly complex, the match window must expand to accommodate timing variance, and the evaluation engine slows under the computational load.
True Composite Detections (The Next Level)
A Composite Detection does not search raw UDM logs. Instead, it queries the outcomes arrays generated by other YARA-L rules that have already fired. Think of it as a detection that hunts detections; a meta-rule that stitches together previously confirmed alerts into a higher-order narrative. This architecture has three critical implications:
- Base Rules must explicitly export data into outcome. If your Base Rule doesn&#039;t populate the outcome section with the variables the Composite rule needs (username, hostname, file hash, etc.), the Composite rule has nothing to correlate, and thus the chain breaks silently.
- Composite rules evaluate against detection metadata, not raw events. The $d1.detection.detection.rule_name syntax targets a specific Base Rule by name. The $d1.detection.detection.outcomes[&quot;variable_name&quot;] syntax extracts the specific outcome values that Base Rule exported.
- Chronological enforcement uses raw epoch timestamps from the underlying events. The $d3.detection.collection_elements.references.event.metadata.event_timestamp.seconds path reaches through the detection metadata to access the original UDM event&#039;s timestamp, enabling strict temporal ordering.
Below: a complete kill chain implementation; file drop → process execution → C2 beacon; built as three independent Base Rules feeding one master Composite rule.
The Base Rules: Building the Foundation
Each Base Rule is designed to be deliberately low-severity on its own. A file creation in a temp directory is not inherently malicious. An unsigned binary executing is suspicious but common. An outbound network connection to a known-bad IP is concerning but could be a false positive. The magic happens when all three occur on the same host, for the same user, in sequence.
Base Rule A: Suspicious File Drop
This rule fires when a file is created in a user-writable temporary directory—the most common staging location for initial payload drops. On its own, this is noise. Combined with process execution and network activity, it becomes the first link in a cyber kill chain.
rule Base_Suspicious_File_Creation {
  meta:
    author = &quot;Detection Engineering&quot;
    description = &quot;Flags file creation in user-writable temp directories. Deliberately low-severity as a standalone detection—designed to feed the Composite Kill Chain rule.&quot;
    severity = &quot;LOW&quot;
    // Tag this rule so the SOAR platform knows it&#039;s a composite feeder, not a standalone alert
    tags = &quot;composite_base_rule&quot;
  events:
    // Target file creation events from Sysmon (Event 11), CrowdStrike, or any EDR
    $file.metadata.event_type = &quot;FILE_CREATION&quot;
    // Restrict to high-risk user-writable directories where payloads typically land
    // AppData\Local\Temp is the most common staging directory for initial access payloads
    // because standard user accounts have write access without triggering UAC prompts
    re.regex($file.target.file.full_path, `(?i).*\\AppData\\Local\\Temp\\.*`)
    // Bind the hostname for grouping — this becomes the join key in the Composite rule
    $file.principal.hostname = $hostname
  match:
    // Group events by hostname over a 5-minute window
    // This window should be short — we&#039;re looking for a burst of file drops, not gradual accumulation
    $hostname over 5m
  outcome:
    // CRITICAL: These exact keys are queried by the Composite rule via
    // $d1.detection.detection.outcomes[&quot;user&quot;] and $d1.detection.detection.outcomes[&quot;hostname_out&quot;]
    // If these are missing or misspelled, the Composite correlation chain breaks silently
    $user = array_distinct($file.principal.user.userid)
    $hostname_out = array_distinct($file.principal.hostname)
    $file_paths = array_distinct($file.target.file.full_path)
  condition:
    $file
}
Design decisions:
- Severity LOW: This rule should never generate a standalone SOC alert. It exists purely to feed the Composite layer. Configure it as alerting = false if your environment supports silent detection mode, or tag it so SOAR filters it from the main queue.
- $file_paths in outcome: We extract the actual file paths so the Composite rule can surface them in the final alert context, giving the analyst the exact payload location without requiring a manual UDM search.
- 5-minute match window: Deliberately short. A file drop that&#039;s part of an active attack chain happens within seconds to minutes of the execution phase—not hours later.
Base Rule B: Malicious Process Execution
This rule fires when a binary with a non-empty MD5 hash executes on a host. The hash requirement ensures we&#039;re tracking identified binaries, not system-generated transient processes.
rule Base_Malicious_Process_Launch {
  meta:
    author = &quot;Detection Engineering&quot;
    description = &quot;Flags execution of tracked binaries (non-empty hash). Designed as a composite feeder rule for kill chain correlation.&quot;
    severity = &quot;LOW&quot;
    tags = &quot;composite_base_rule&quot;
  events:
    // Target process launch events — Sysmon Event 1, CrowdStrike ProcessRollup2, etc.
    $exec.metadata.event_type = &quot;PROCESS_LAUNCH&quot;
    // Require a non-empty hash — ensures we&#039;re tracking an identified binary
    // If the EDR sensor doesn&#039;t log hashes (misconfigured CrowdStrike, etc.), this rule won&#039;t fire
    $exec.target.process.file.md5 != &quot;&quot;
    // Bind hostname for cross-rule correlation
    $exec.principal.hostname = $hostname
  match:
    $hostname over 5m
  outcome:
    // Export the same variable names as Rule A — the Composite rule binds on these keys
    $user = array_distinct($exec.principal.user.userid)
    $hostname_out = array_distinct($exec.principal.hostname)
    $process_cmdline = array_distinct($exec.target.process.command_line)
    $process_hash = array_distinct($exec.target.process.file.md5)
  condition:
    $exec
}
Design decisions:
- MD5 hash filter: The md5 != &quot;&quot; check serves double duty: it ensures the binary is trackable (for downstream VirusTotal detonation), and it filters out system noise from transient processes that don&#039;t get hashed.
- $process_cmdline in outcome: The exact command line is often the single most valuable forensic artifact. Surfacing it in the Composite alert saves the analyst from digging through raw UDM events.
Base Rule C: Outbound C2 Beacon
This rule fires when a host initiates an outbound connection to a known Command &amp;amp; Control IP address maintained in a Data Table. As a standalone alert, it could be a false positive from an expired TI indicator. In the context of a file drop + process execution sequence, it confirms active compromise.
rule Base_Outbound_C2_Beacon {
  meta:
    author = &quot;Detection Engineering&quot;
    description = &quot;Flags outbound network connections to known C2 infrastructure from Data Table. Composite feeder rule.&quot;
    severity = &quot;LOW&quot;
    tags = &quot;composite_base_rule&quot;
  events:
    // Target firewall, proxy, or EDR network connection events
    $net.metadata.event_type = &quot;NETWORK_CONNECTION&quot;
    // Match against the dynamic C2 Data Table — updated via API or manual upload
    // Using a Data Table instead of hardcoded IPs means the detection stays current
    // without rule modifications
    $net.target.ip in %known_c2_ips
    // Bind hostname for correlation
    $net.principal.hostname = $hostname
  match:
    $hostname over 5m
  outcome:
    // Standard composite export variables
    $user = array_distinct($net.principal.user.userid)
    $hostname_out = array_distinct($net.principal.hostname)
    $c2_destination = array_distinct($net.target.ip)
    $c2_port = array_distinct($net.target.port)
  condition:
    $net
}
Design decisions:
- Data Table (%known_c2_ips): Using a dynamic Data Table instead of hardcoded IPs means the threat intelligence team can update C2 indicators via API without touching the detection rule. When a new C2 IP is added, SecOps can sweep historical data against the updated table automatically.
- $c2_destination and $c2_port in outcome: The exact C2 IP and port are critical for network-level containment actions (firewall blocks, DNS sinkholing) that the SOAR playbook downstream will execute.
The Master Composite Detection
Now that the three Base Rules are firing and populating their outcome arrays, the Composite rule stitches them together. It fires only when all three Base Rules trigger on the same host, for the same user, in strict chronological order: file drop → process execution → network beacon.
rule Composite_Kill_Chain_Sequence {  meta:    author = &quot;Detection Engineering&quot;    description = &quot;Correlates a file drop, process execution, and outbound C2 beacon on the same host and user in strict chronological order. Fires only when all three base detections align within 24 hours.&quot;    severity = &quot;CRITICAL&quot;    // This is a composite rule — it queries detection outcomes, not raw UDM events    tags = &quot;composite_detection&quot;  events:    // ──────────────────────────────────────────────    // ALERT 1: THE FILE DROP    // ──────────────────────────────────────────────    // Target the specific base rule by its exact rule name string    $d1.detection.detection.rule_name = &quot;Base_Suspicious_File_Creation&quot;    // Extract the $user outcome variable exported by Alert 1    // The key &quot;user&quot; must exactly match the outcome variable name in the base rule    $userid = $d1.detection.detection.outcomesm&quot;user&quot;]    // Extract the $hostname_out variable exported by Alert 1    $hostname = $d1.detection.detection.outcomest&quot;hostname_out&quot;]    // ──────────────────────────────────────────────    // ALERT 2: THE PROCESS LAUNCH    // ──────────────────────────────────────────────    // Target the process execution base rule    $d2.detection.detection.rule_name = &quot;Base_Malicious_Process_Launch&quot;    // Bind to the SAME $userid — this is the cross-rule join key    // If Alert 2 fired for a different user, the bind fails and the composite doesn&#039;t fire    $userid = $d2.detection.detection.outcomesH&quot;user&quot;]    // Bind to the SAME $hostname — ensures all three alerts are on the same machine    $hostname = $d2.detection.detection.outcomes�&quot;hostname_out&quot;]    // ──────────────────────────────────────────────    // ALERT 3: THE C2 BEACON    // ──────────────────────────────────────────────    // Target the network beacon base rule    $d3.detection.detection.rule_name = &quot;Base_Outbound_C2_Beacon&quot;    // Bind to the SAME $userid and $hostname as Alerts 1 and 2    $userid = $d3.detection.detection.outcomess&quot;user&quot;]    $hostname = $d3.detection.detection.outcomesx&quot;hostname_out&quot;]    // ──────────────────────────────────────────────    // STRICT CHRONOLOGICAL ENFORCEMENT    // ──────────────────────────────────────────────    // Reach through the detection metadata to access the original UDM event timestamps    // This path navigates: detection → collection_elements → references → event → metadata    // The .seconds field contains the raw Unix epoch timestamp    //    // Enforce: Alert 3 (network beacon) must occur AFTER Alert 2 (process execution)    // This prevents false correlations where the network event predates the process launch    $d3.detection.collection_elements.references.event.metadata.event_timestamp.seconds &amp;gt;      $d2.detection.collection_elements.references.event.metadata.event_timestamp.seconds    // Enforce: Alert 2 (process execution) must occur AFTER Alert 1 (file drop)    // The complete sequence is now: file drop → process launch → C2 beacon    $d2.detection.collection_elements.references.event.metadata.event_timestamp.seconds &amp;gt;      $d1.detection.collection_elements.references.event.metadata.event_timestamp.seconds  match:    // 24-hour correlation window    // The clock starts ONLY when Alert 1 ($d1) fires — the &quot;after $d1&quot; syntax is critical    // Without &quot;after $d1&quot;, the engine evaluates a rolling 24h window continuously,    // which is 
computationally expensive and can cause late-triggering behavior    $userid over 24h after $d1  condition:    // All three base alerts must fire within the 24h window for the composite to trigger    $d1 and $d2 and $d3}How the Correlation Engine Works (Step by Step)Alert 1 fires: Base_Suspicious_File_Creation detects a file drop in \AppData\Local\Temp\ on WORKSTATION-42 by user jdoe. The 24-hour composite correlation clock starts.	Alert 2 fires: Base_Malicious_Process_Launch detects an unsigned binary executing on WORKSTATION-42 by user jdoe. The Composite engine checks: Same $userid? Yes. Same $hostname? Yes. Timestamp after Alert 1? Yes. Two of the three base alerts have now fired.	Alert 3 fires: Base_Outbound_C2_Beacon detects an outbound connection from WORKSTATION-42 by jdoe to 198.51.100.42:443. Same $userid? Yes. Same $hostname? Yes. Timestamp after Alert 2? Yes. All three base alerts have now fired.	Composite fires: A single CRITICAL alert surfaces in the SOAR queue containing the full kill chain narrative: file path, process command line, C2 destination—all extracted from the Base Rule outcomes. The analyst receives one actionable case instead of three disconnected low-severity tickets.
What Happens If the Sequence Breaks
Scenario: File drop and process launch on the same host, but no C2 beacon within 24h. Result: Composite does NOT fire. The two base alerts remain as isolated LOW-severity detections.
Scenario: C2 beacon fires BEFORE the process launch (timestamps out of order). Result: Composite does NOT fire. The chronological enforcement rejects the sequence.
Scenario: All three alerts fire, but for DIFFERENT users on the same host. Result: Composite does NOT fire. The $userid binding fails because the outcome values don&#039;t match across all three $d variables.
Scenario: Alert 2 fires on a different hostname than Alert 1. Result: Composite does NOT fire. The $hostname binding fails.
Scenario: Base Rule has an empty outcome section. Result: Composite NEVER fires for that rule. The outcomes[&quot;user&quot;] lookup returns null, breaking the entire correlation chain silently.
Troubleshooting Composite Detections Problem 1: Composite Rule Doesn&#039;t Fire (Silent Failure)Most common cause: The Base Rules are not populating their outcome sections correctly. Diagnostic steps:Open the SecOps UI and navigate to the Base Rule&#039;s detection history.	Click on a specific detection instance and inspect the raw JSON payload.	Look for the outcomes object. Verify that the keys (user, hostname_out) exist and contain non-null values.	If outcomes is empty or missing, the Base Rule&#039;s outcome section has a syntax error, a field mapping failure, or the underlying UDM event doesn&#039;t contain the expected data.Common mistakes:Typo in outcome key: The Composite rule queries outcomes[&quot;user&quot;] but the Base Rule exports $username instead of $user. The key names must match exactly.	Empty UDM field: The Base Rule maps $user = array_distinct($file.principal.user.userid) but the EDR vendor doesn&#039;t populate principal.user.userid for file creation events. The outcome exports an empty array, and the Composite binding fails.	Rule not enabled: The Base Rule exists but is set to disabled or test mode. 
Disabled rules don&#039;t generate detections, so the Composite rule has nothing to query.Problem 2: Composite Fires Too Late (Delayed Alert)Cause: The match window is too large, or the Base Rules themselves are firing late due to upstream ingestion delays.Diagnostic steps:Check the Base Rule detection timestamps: compare metadata.event_timestamp vs metadata.ingested_timestamp. A large gap indicates the delay is upstream (vendor or forwarder), not in the Composite engine.	If Base Rules fire promptly but the Composite fires late, the over 24h window may be forcing the engine to hold excessive state. Shrink to over 4h or over 1h if the attack chain completes quickly.Problem 3: Too Many False Composite AlertsCause: The Base Rules are too broad, generating thousands of detections that create spurious composite correlations.Fix: Tighten the Base Rules with additional UDM filters:Add file extension filters to the file drop rule (e.g., .exe, .dll, .ps1)	Add parent process filters to the process launch rule (e.g., only flag powershell.exe spawned by winword.exe)	Use a curated, high-confidence C2 Reference List instead of a noisy bulk TI feed Advanced Pattern: Adding a Fourth Stage (Lateral Movement)The three-stage kill chain (file → process → C2) catches the initial compromise. To detect full APT campaigns, extend the Composite with a fourth Base Rule for lateral movement: rule Base_Lateral_Movement_RDP {  meta:    author = &quot;Detection Engineering&quot;    description = &quot;Detects outbound RDP connections from a host to an internal server — potential lateral movement.&quot;    severity = &quot;LOW&quot;    tags = &quot;composite_base_rule&quot;  events:    $rdp.metadata.event_type = &quot;NETWORK_CONNECTION&quot;    $rdp.target.port = 3389    // Only internal-to-internal connections (not external RDP access)    net.ip_in_range_cidr($rdp.target.ip, &quot;10.0.0.0/8&quot;)    $rdp.principal.hostname = $hostname  match:    $hostname over 5m  outcome:    $user = array_distinct($rdp.principal.user.userid)    $hostname_out = array_distinct($rdp.principal.hostname)    $lateral_target = array_distinct($rdp.target.ip)  condition:    $rdp} Then extend the Composite rule by adding a $d4 variable bound to Base_Lateral_Movement_RDP, with chronological enforcement ensuring RDP happens after the C2 beacon. The four-stage composite now catches: initial access → execution → command &amp;amp; control → lateral movement—the complete APT kill chain in a single CRITICAL alert.  Conclusion &amp;amp; Next StepsSingle-event alerts are noise. Composite Detections transform the SIEM from a firehose of disconnected signals into a precision instrument that surfaces only confirmed, multi-stage attack narratives.Base Rules must explicitly export variables into the outcome section. Without that contract, the Composite rule has nothing to bind. Treat outcome variables like an API schema—document them, version them, and test them rigorously.By collapsing thousands of LOW-severity alerts into a handful of CRITICAL composite incidents, the SOC achieves three things simultaneously: 99% noise reduction, sub-second correlation, and analyst retention through meaningful work.Next steps: With high-fidelity composite alerts flowing, the natural progression is autonomous remediation. Proceed to the &quot;Universal Phishing Containment Architecture&quot; to bind these kill-chain detections directly to SOAR playbooks that execute containment actions without human intervention. </description>
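A note on the four-stage extension above: the post describes the $d4 binding in prose only, so here is a minimal sketch of just the added lines (an illustration, not the author&#039;s code; it assumes Base_Lateral_Movement_RDP exports the same &quot;user&quot; and &quot;hostname_out&quot; outcome keys as the other base rules):
    // Hedged sketch: the $d4 additions to Composite_Kill_Chain_Sequence
    $d4.detection.detection.rule_name = &quot;Base_Lateral_Movement_RDP&quot;
    // Bind to the same user and host as $d1, $d2, and $d3
    $userid = $d4.detection.detection.outcomes[&quot;user&quot;]
    $hostname = $d4.detection.detection.outcomes[&quot;hostname_out&quot;]
    // Lateral movement must occur AFTER the C2 beacon ($d3)
    $d4.detection.collection_elements.references.event.metadata.event_timestamp.seconds &amp;gt;
      $d3.detection.collection_elements.references.event.metadata.event_timestamp.seconds
    // ...and the condition section becomes: $d1 and $d2 and $d3 and $d4
The 24h match window and the &quot;after $d1&quot; anchor are unchanged; only the events and condition sections grow.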
            <category>Community Blog</category>
            <pubDate>Thu, 23 Apr 2026 15:59:23 +0200</pubDate>
        </item>
                <item>
            <title>[Google SecOps SOAR] Automatically escalate severity when same entity has 3+ alerts within a time window — best approach?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/google-secops-soar-automatically-escalate-severity-when-same-entity-has-3-alerts-within-a-time-window-best-approach-7347</link>
            <description>Use caseWe want to implement a mechanism in Google SecOps (Chronicle SOAR) where, if the same entity (user, IP, or host) accumulates 3 or more alerts within a given time window (e.g. last 24h), the severity is automatically escalated to High — regardless of whether the individual alerts came from Chronicle SIEM, Cortex XDR, Microsoft Defender, or Entra ID.The individual alerts may be Low or Medium on their own, but the accumulation pattern signals a higher risk that warrants immediate attention.QuestionWhat is the recommended approach to achieve this in Google SecOps?We are evaluating:A SOAR Playbook that counts open alerts per entity and escalates if threshold is met	Risk Score Analytics — does this support alerts from third-party connectors (Cortex, Defender) or only Chronicle detections?	Any native alert grouping or correlation feature in the SOAR that already handles this out of the boxWhat has worked for others in production? Any pitfalls to be aware of?Thanks!</description>
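For the SIEM-side option, a minimal sketch of what the threshold could look like as a YARA-L rule over detections (an illustration only, not an official pattern; it covers Chronicle detections rather than alerts arriving through third-party SOAR connectors, and it assumes the base rules export a &quot;user&quot; outcome key):
    rule User_Alert_Accumulation_24h {
      meta:
        description = &quot;Sketch: same user accumulates 3+ detections within 24h&quot;
        severity = &quot;HIGH&quot;
      events:
        $d.detection.detection.rule_name != &quot;&quot;
        // Assumes base rules export a &quot;user&quot; outcome variable
        $userid = $d.detection.detection.outcomes[&quot;user&quot;]
      match:
        $userid over 24h
      condition:
        // #d counts the detections matched for this user; fire on three or more
        #d &amp;gt; 2
    }
For alerts sourced from Cortex XDR, Defender, or Entra ID, a playbook that counts open alerts per entity within the window is likely the more general route.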
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 15:11:17 +0200</pubDate>
        </item>
                <item>
            <title>Enrichment process</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/enrichment-proccess-7172</link>
            <description>Hello,I have integrated the Azure Organizational Context and noticed that some log sources are now being enriched based on this data. For example, my Netskope logs are successfully using principal.user.userid for enrichment.However, in other log sources that contain the same principal.user.userid value—such as my FortiGate logs—the enrichment does not occur.Is there any additional configuration required to enable enrichment for these sources?</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 14:32:43 +0200</pubDate>
        </item>
                <item>
            <title>Converting Decimal IP Address to Standard Format in Google SecOps Parser</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/converting-decimal-ip-address-to-standard-format-in-google-secops-parser-7055</link>
            <description>Hello Team,I am working with logs in Google SecOps where the IP address fields (such as SourceIPV4, TargetIPV4, and AnalyzerIPV4) are received in decimal (integer) format instead of the standard dotted IPv4 format.For example:750176699 → expected to be converted to a standard IPv4 format (e.g., x.x.x.x)Could anyone please advise on how to convert these decimal IP values into normal dotted IPv4 format within the Google SecOps parser (UDM mapping)?sample log: EPO_Events.EPOEvents 20XX-03-XXT05:XX:33 HOST123 EPOEvents{&quot;AgentGUID&quot;: &quot;XXXX12345678&quot;,&quot;Analyzer&quot;: &quot;_1000&quot;,&quot;AnalyzerHostName&quot;: &quot;HOST001&quot;,&quot;AnalyzerIPV4&quot;: &quot;3232235777&quot;,&quot;AnalyzerIPV6&quot;: &quot;AAAAAAAAAAAAAP//wKgBAQ==&quot;,&quot;AnalyzerMAC&quot;: &quot;001122AABBCC&quot;,&quot;AnalyzerName&quot;: &quot;Drive Encryption&quot;,&quot;AnalyzerVersion&quot;: &quot;7.4.0.11&quot;,&quot;AutoGUID&quot;: &quot;ABCDEF12-34567890AB&quot;,&quot;AutoID&quot;: &quot;123456789&quot;,&quot;DetectedUTC&quot;: &quot;2026-03-17T05:48:33&quot;,&quot;ReceivedUTC&quot;: &quot;2026-03-17T07:59:05.920&quot;,&quot;ServerID&quot;: &quot;SRV01&quot;,&quot;SourceIPV4&quot;: &quot;3232235778&quot;,&quot;SourceIPV6&quot;: &quot;AAAAAAAAAAAAAP//wKgBAg==&quot;,&quot;TargetIPV4&quot;: &quot;3232235779&quot;,&quot;TargetIPV6&quot;: &quot;AAAAAAAAAAAAAP//wKgBAw==&quot;,&quot;TenantID&quot;: &quot;1&quot;,&quot;TheTimestamp&quot;: &quot;AAAAAGKI8Tk=&quot;,&quot;ThreatActionTaken&quot;: &quot;None&quot;,&quot;ThreatCategory&quot;: &quot;None&quot;,&quot;ThreatEventID&quot;: &quot;30017&quot;,&quot;ThreatName&quot;: &quot;MDE&quot;,&quot;ThreatSeverity&quot;: &quot;1&quot;,&quot;ThreatType&quot;: &quot;None&quot;}Any guidance, sample parser logic, or transformation approach would be greatly appreciated.Thanks in advance for your help.</description>
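For reference, the conversion itself is base-256 arithmetic: value = o1*16777216 + o2*65536 + o3*256 + o4, where o1.o2.o3.o4 is the dotted address (this assumes big-endian byte order, which the sample&#039;s IPv6 fields appear to corroborate, since AnalyzerIPV6 decodes to ::ffff:192.168.1.1). Worked examples: 3232235777 = 192*16777216 + 168*65536 + 1*256 + 1, i.e. 192.168.1.1, and 750176699 = 44*16777216 + 182*65536 + 201*256 + 187, i.e. 44.182.201.187.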
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 14:30:21 +0200</pubDate>
        </item>
                <item>
            <title>soar tags dashboard query</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/soar-tags-dashboard-query-7331</link>
            <description>Hi all,Can anyone help me with this query?I need to fetch cases that have the tag “xyz”. Some cases contain multiple tags stored together, like “ABC, xyz, qwerty”.At the moment, I’m unable to retrieve cases where “xyz” appears as part of a multi-tag value.Any suggestions would be appreciated.</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 14:18:28 +0200</pubDate>
        </item>
                <item>
            <title>Clarification on Approach and Webhook of SOAR</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/clarification-on-approach-and-webhook-of-soar-7335</link>
            <description>Hi everyone,We are working on integrating Google SecOps SOAR with an external application security platform, and wanted to validate our approach as well as explore better alternatives.Use Case1. Incident Sync (Bi-directional)Sync status and comments:	From SecOps SOAR → external platform		From external platform → SecOps SOAR	Current ApproachUsing a scheduled job (polling) to:	Fetch updates from both systems		Compare changes		Push updates accordingly	QuestionsArchitecture Validation	Is a polling-based approach considered acceptable for this use case?		Or is any other approach recommended?		Webhook Capabilities in SecOps SOAR	Are there limitations on which fields can be mapped via webhook?		Is it possible to map custom fields (status, comments, metadata)?		Is there any reference for the list of fields?		Comments / Notes Mapping	In SecOps SOAR alerts, which field is best suited to store external comments/notes?		Is there a recommended standard field (e.g., notes, activity log)?</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 13:55:51 +0200</pubDate>
        </item>
                <item>
            <title>Getting {&quot;success&quot;:false,&quot;error-codes&quot;:[&quot;invalid-input-response&quot;] error</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/getting-success-false-error-codes-invalid-input-response-error-7346</link>
            <description>It was discovered there are over 25K entries in our logs from March 21 to March 25. It looks like a lot of reCAPTCHA failures with &quot;testing@example.com&quot; captured as the email against multiple form parameters. I do also see legitimate form submissions during this period. The failure message we are getting is mostly {&quot;success&quot;:false,&quot;error-codes&quot;:[&quot;invalid-input-response&quot;]}. The thing to notice here is that after this period we did not get such a spike in failures, and when we try to submit the form now, the reCAPTCHA works as expected. We are not sure how and why this issue occurred. Do we have any explanation for this? </description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Thu, 23 Apr 2026 12:49:49 +0200</pubDate>
        </item>
                <item>
            <title>How to revoke compromised API keys when the GCP Console redirects to a suspension warning?</title>
            <link>https://security.googlecloudcommunity.com/cloud-security-foundation-7/how-to-revoke-compromised-api-keys-when-the-gcp-console-redirects-to-a-suspension-warning-7344</link>
            <description>Hi everyone,I&#039;m facing a critical remediation blocker following a &quot;Hijacked Resource&quot; suspension on the project. While I am eager to secure my environment and rotate all potentially compromised credentials, I am trapped in a redirect loop that prevents administrative action.The Technical Problem:Whenever I navigate to IAM &amp;amp; Admin or APIs &amp;amp; Services, the GCP Console performs a forced redirect to the suspension warning page. This means I cannot revoke existing API keys or audit Service Account activity through the standard UI.Investigation Status:Audit: Local .env files and Git history have been reviewed, but I suspect a credential may have been intercepted or leaked elsewhere.	Timeline: Appeal submitted 7+ days ago; no response received. Production environment remains offline.Seeking Expert Advice on:Programmatic Revocation: What is the specific gcloud syntax to force-delete all active API keys when the project status is &quot;Suspended&quot;?	Log Retrieval: Can I export Activity Logs or VPC Flow Logs via the SDK to pinpoint the source of the &quot;abusive activity&quot; and confirm the leak is plugged?	Trust &amp;amp; Safety Contact: Is there a way to provide &quot;Proof of Remediation&quot; to the safety team when you are physically blocked from the UI tools needed to fix the issue?I am ready to perform a full credential rotation immediately if I can bypass the console redirect. Any guidance from the community or the Google team would be appreciated.</description>
            <category>Cloud Security Foundation</category>
            <pubDate>Thu, 23 Apr 2026 12:07:36 +0200</pubDate>
        </item>
                <item>
            <title>Tech Stack for Agentic AI</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/tech-stack-for-agentic-ai-7306</link>
            <description>We are developing a B2B agentic AI workflow and have received architecture guidance from Google. The proposed stack includes: Google Identity Platform and App Engine for secure ingress; Cloud Storage with Customer-Managed Encryption Keys (CMEK) for document persistence; Document AI for extraction; Vertex AI for enterprise inference; Cloud DLP for data loss prevention; and Cloud Logging and Monitoring for observability. We have been advised to execute a Business Associate Agreement (BAA) with Google prior to launch.Our core question: is this architecture sufficient to protect client data at scale, and are there any known vulnerabilities in AI systems we should be addressing? Key questions:Is our proposed stack the right foundation for a secure B2B AI product, and are there any obvious gaps?	How do we make sure one client&#039;s data cannot be seen or accessed by another?	Does the system monitor and flag sensitive data in both documents we upload and responses the AI generates?	What happens on Google&#039;s side if there is a security breach, and what support do we receive?	What steps should we take to meet compliance requirements, including signing a BAA?</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 23 Apr 2026 00:37:41 +0200</pubDate>
        </item>
                <item>
            <title>Global Account Restriction stuck after Project Reinstatement and Tax Verification (Project ID: opportune-ruler-492006-h2)</title>
            <link>https://security.googlecloudcommunity.com/community-feedback-70/global-account-restriction-stuck-after-project-reinstatement-and-tax-verification-project-id-opportune-ruler-492006-h2-7337</link>
            <description>Hello Google Cloud Community,I am seeking assistance with a service restriction that appears to be stuck in an automated loop.The Situation:My account was restricted on April 1, 2026, for &quot;Abusive Activities.&quot;	On April 10, I received an email from Google Cloud Trust &amp;amp; Safety stating that my project opportune-ruler-492006-h2 has been reinstated.	However, I still cannot access the Console or Billing Support; it remains blocked with the &quot;Submit Appeal&quot; screen.Resolution Steps Already Taken:Tax Info: I have successfully submitted my India Tax Information in the Payments Center, and it was officially Accepted.	Appeal: I have attempted to appeal via the console form multiple times, but I receive no Case IDs and no follow-up.	Communication: I replied to the reinstatement email, but received an automated response stating that the thread is closed.It seems my project is cleared, but my Global Account Identity is still flagged due to a synchronization delay or a secondary payment lock.Could a Community Manager please help escalate this to the Trust &amp;amp; Safety or Billing Compliance team? I have no Case ID to provide as none were generated, but my Project ID is opportune-ruler-492006-h2.Thank you,Kaustubh M</description>
            <category>Community Feedback</category>
            <pubDate>Wed, 22 Apr 2026 21:00:13 +0200</pubDate>
        </item>
                <item>
            <title>MA-SV Product Release 5.14.4.00</title>
            <link>https://security.googlecloudcommunity.com/security-validation-5/ma-sv-product-release-5-14-4-00-7336</link>
            <description>The Mandiant Advantage Security Validation (MA-SV) team is pleased to announce version 5.14.4.0 of the MA-SV platform. This version is scheduled to be deployed on Tuesday, April 28 at 8:00AM EDT (12:00PM UTC). During this time, users might experience brief disruptions accessing the service. Full service will be restored no later than 9:00AM EDT (1:00PM UTC).  On the day of release, the release notes will be posted on the Security Validation MA-SV (SaaS) release notes page.</description>
            <category>Security Validation</category>
            <pubDate>Wed, 22 Apr 2026 20:46:53 +0200</pubDate>
        </item>
                <item>
            <title>New to Google SecOps: Don&#039;t You (Forget To Join Me) - Using Cross Joins in Multi-Stage Search</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/new-to-google-secops-don-t-you-forget-to-join-me-using-cross-joins-in-multi-stage-search-7238</link>
            <description>Earlier this year, I wrote a handful of blogs on building multi-stage searches. Multi-stage searches, for those just joining us, are searches where we gather data in named stages and then assemble the results in a root stage. If you want to get a feel for how multi-stage searches could be used, here are a few blogs below that will get you started - Should I Stage or Should I Go?	This Charming Span - Bucketing Events in Time Windows	Sweet Dreams (Are Made of Zs)	MAD World - The Multi-Stage Search for a Robust Metric	Policy of Truth - Detecting Outliers with Robust Z-Scores Today I want to highlight another capability that you can use in multi-stage searches within Google Security Operations (SecOps). The term cross join, particularly for those who have spent time using SQL, is a basic concept but also one that when applied incorrectly can have all sorts of negative outcomes associated with it. A cross join, unlike an inner join or outer join (right/left join), does not require a join condition between two tables or datasets, so the result of a cross join is a Cartesian Product and can result in a combinatorial explosion of results. What do I mean by that? In our very simple example, we have two colors of t-shirts and three sizes. The result is a Cartesian Product of six possible combinations of colors and sizes in the results. That’s not a big deal, but when we start applying this to a large data set of disparate events, a cross join applied incorrectly can be a very expensive search operation and may not be very helpful to the analyst. Fortunately for us, cross joins are focused and guardrailed in multi-stage search to mitigate this risk. Using Cross Joins in Multi-Stage SearchBecause we’ve walked through the steps to build a multi-stage search previously, I’m not going to fully revisit that. I will point out that a really good approach to building a multi-stage search is to build the named stage searches separate from one another first so that you can easily view the output of each to better visualize how the stages will be used together to create the result set. So, you are probably wondering how cross joins help us and how we don’t end up with a crazy result set. The way cross joins are constrained in Google SecOps is that one of the stages that is part of the cross join can only have a single row of results.  In the example above, we have a stage outputting five events with five columns in it, represented by the tabular output in blue. The second stage is the search limited to a single row, shown in red. The result with the cross join provides a marginally larger result set, but it is still limited to the five events we initially had in the first stage, with the additional columns from the second stage added to it. So, it incorporates additional data but doesn’t explode the result set like an unbound second stage would. Enough concepts, let’s get to a practical application of this! Measuring the Login Frequency Let’s say we wanted to determine the login frequency for user authentications and determine which users have a greater frequency than others. Obviously, comparing service accounts to users is not going to be an apples to apples comparison, but this example highlights how we can use multi-stage search with the cross join to compare users or assets or anything else against a population. 
While events can have multiple security results associated with them, in an effort to not get bogged down in repeated fields, we are going to just focus on the first security result in each event. The initial named stage search looks like this:metadata.event_type = &quot;USER_LOGIN&quot;target.user.userid != &quot;&quot;target.user.userid != /\$$/$user = target.user.userid$action = security_result[0].actionmatch:  $user, $actionoutcome:  $login_count = count(metadata.id) This search generates a count of the number of events based on the combination of userid and action. The second named stage will be the calculation of the population. The first important point that I want to make here is that the search logic should be the same in both stages if we want to compare like events. That is, if I am doing a frequency analysis, I want to make sure that the data that I am starting with in both stages is the same population set.metadata.event_type = &quot;USER_LOGIN&quot;target.user.userid != &quot;&quot;target.user.userid != /\$$/outcome:  $total_count = count(metadata.id)limit:  1  The other important point is that this stage must only have a single event in the results. Can it have multiple columns? Yes. Can this stage have multiple rows in the results? Yes, but we need to apply a limit of one in the search, otherwise the cross join will throw the following error:compilation error compiling root stage: validating query: at least one operand in a cross join must be a stage that outputs at most one row Once we are happy with the contents of the named stages, we can start assembling the multi-stage search. Notice that we have the contents of the two searches we just built wrapped into the named stages login_user_action and login_user_total, respectively.stage login_user_action {  metadata.event_type = &quot;USER_LOGIN&quot;  target.user.userid != &quot;&quot;  target.user.userid != /\$$/  $user = target.user.userid  $action = security_result[0].action  match:    $user, $action  outcome:    $login_count = count(metadata.id)}stage login_user_total {  metadata.event_type = &quot;USER_LOGIN&quot;  target.user.userid != &quot;&quot;  target.user.userid != /\$$/  outcome:    $total_count = count(metadata.id)  limit:    1}cross join $login_user_action, $login_user_totaloutcome:  $user = $login_user_action.user  $action = $login_user_action.action  $login_count = $login_user_action.login_count  $total_login_count = $login_user_total.total_count After the named stages, we will bring them together using the cross join command. The cross join command is followed by the two named stages, separated by a comma.  Finally, because the output of a multi-stage search is based on the root stage’s match and outcome sections, we are outputting four fields. Notice we are not performing any additional aggregations at this point; we just want to see the fields from the two named stages with the cross join, which is exactly what we get here.  
Notice that the fourth column has the same value in every row; that’s the grand total of values in the population. Now we have that value available to each row to perform additional calculations, which is what we are going to do next to arrive at the frequency analysis.stage login_user_action {  metadata.event_type = &quot;USER_LOGIN&quot;  target.user.userid != &quot;&quot;  target.user.userid != /\$$/  $user = target.user.userid  $action = security_result[0].action  match:    $user, $action  outcome:    $login_count = count(metadata.id)}stage login_user_total {  metadata.event_type = &quot;USER_LOGIN&quot;  target.user.userid != &quot;&quot;  target.user.userid != /\$$/  outcome:    $total_count = count(metadata.id)  limit:    1}cross join $login_user_action, $login_user_total$user = $login_user_action.user$action = $login_user_action.action$login_count = $login_user_action.login_count$total_login_count = $login_user_total.total_countmatch:  $user, $action, $login_countoutcome:  $frequency_percent = max(math.round(($login_count / $total_login_count) * 100, 2))order:  $frequency_percent desc The four fields in the outcome section of the previous search will become the filtering statement in the root stage. Remember that the match section needs to use placeholder variables in a multi-stage search, so defining them here allows us to use them in the root. The match section will allow us to aggregate by the user, action and the login count because I want those values in the results. The one field that I need to calculate is the frequency and to calculate it, I need the login count and the grand total (total_login_count). In the outcome section, we are creating an outcome variable named $frequency_percent and using the max and math.round functions to calculate a percent value for this result. Finally, we are going to sort the output by this percentage in descending order.  The results provide us with insight that the system user with an action of allow resulted in nearly ten percent of the user login events for the time range searched. Perhaps we want to take this type of frequency analysis and output it to a dashboard. We could add a condition section before the order section to the previous search.condition:  $frequency_percent &amp;gt; 4 Clicking on the Visualize sub-tab, we can generate a chart that contains the userids that exceeded the threshold and add it to a dashboard.  I realize there are a lot of moving parts when it comes to multi-stage searches, but the cross join provides an additional capability that you can use to get a value for a population and utilize it in your search. Here are a few things to keep in mind:The cross join command expects the names of the stages separated by a comma	One of these stages must only have a single row output or you will get an error	It is always a best practice to build each named stage as its own query first to visualize the output which makes it simpler to bring the stages together	The filtering logic in both named stages should be the same otherwise the comparison between the details in the first named stage will not align with the population calculations in the second named stage	The output of the multi-stage search will be the variables in the match and outcome sections of the root stage, so plan accordingly The cross join is another nice tool to have in your search building toolkit. Try it out and see how you can generate values that then can be used for more advanced searches in Google SecOps! </description>
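As one more illustration of how the pattern generalizes beyond logins, here is a hedged sketch that ranks hosts by their share of network connection events using the same two-stage cross join (the event type and field choices are assumptions for illustration, not from the post):
stage conn_by_host {
  metadata.event_type = &quot;NETWORK_CONNECTION&quot;
  principal.hostname != &quot;&quot;
  $host = principal.hostname
  match:
    $host
  outcome:
    $conn_count = count(metadata.id)
}
stage conn_total {
  metadata.event_type = &quot;NETWORK_CONNECTION&quot;
  principal.hostname != &quot;&quot;
  outcome:
    $total_count = count(metadata.id)
  limit:
    1
}
cross join $conn_by_host, $conn_total
$host = $conn_by_host.host
$conn_count = $conn_by_host.conn_count
$total_count = $conn_total.total_count
match:
  $host, $conn_count
outcome:
  $frequency_percent = max(math.round(($conn_count / $total_count) * 100, 2))
order:
  $frequency_percent desc
The same guardrail applies: the population stage must be limited to a single row or the cross join will not compile.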
            <category>Community Blog</category>
            <pubDate>Wed, 22 Apr 2026 20:45:48 +0200</pubDate>
        </item>
                <item>
            <title>What is the maximum number of entries allowed in a Reference List?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/what-is-the-maximum-number-of-entries-allowed-in-a-reference-list-7327</link>
            <description>Hi everyone,I&#039;m working with Reference Lists in Google SecOps (Chronicle) and planning to use them to store IOCs (Indicators of Compromise) such as IPs, domains, hashes, and URLs for detection rules.Before I start populating them at scale, I&#039;d like to confirm a few things:What is the maximum number of entries a single Reference List can hold?	Is there a size limit (in MB/KB) per Reference List, in addition to or instead of an entry count limit?	Are the limits different based on the list type (e.g., String, Regex, CIDR)?	Is there a limit on the total number of Reference Lists per tenant/instance?	If I exceed the limit, what&#039;s the recommended approach — splitting IOCs across multiple lists, or another mechanism?Any pointers to official documentation or real-world experience with large IOC lists would be greatly appreciated.Thanks in advance!</description>
            <category>Google Security Operations</category>
            <pubDate>Wed, 22 Apr 2026 14:21:13 +0200</pubDate>
        </item>
                <item>
            <title>Understanding Data tables</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/understanding-data-tables-7203</link>
            <description>In our SecOps platform, we are currently in the process of moving all our reference lists to data tables since lists are on a deprecation path. We currently have a couple of reference lists, “known Malicious hashes”, which together contain 2 lakh (200,000) hashes divided into two lists (1 lakh hashes each). Reference lists allow 1 lakh entries per list. Data tables, however, allow only 10,000 entries per table, so I have created 20 data tables and somehow moved all of them. I would like to know if there’s any way we can fix this portion first. This was a time-consuming task to move just 2 lists to 20 tables due to the limitation.Next up, when I try to use those data tables in a rule, I hit a limitation again: “The number of in statements is more than max allowed limit (10)”. Can you help?Attached error message </description>
            <category>Google Security Operations</category>
            <pubDate>Wed, 22 Apr 2026 12:43:39 +0200</pubDate>
        </item>
                <item>
            <title>Protecting Customers in Record Time: Google Security Operations and Mandiant’s Response to the Axios npm Supply Chain Attack</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/protecting-customers-in-record-time-google-security-operations-and-mandiant-s-response-to-the-axios-npm-supply-chain-attack-7321</link>
            <description>Authors: Ed Murphy, Group Product Manager, Managed DefenseAndre Alfred, Sr. Director, Managed Solutions Mandiant Threat Defense: AI-Powered Threat Defense on Google Security Operations in Action Deep into the night of Monday March 30th (EST), the Mandiant Threat Defense team observed a relatively common threat indicator around the execution of a renamed legitimate Microsoft binary from a suspicious location. Powered by the combination of agentic speed and human expertise, the team took less than 2 minutes to confirm an active breach in a customer environment, enabling them to quickly mitigate risk and notify customers before the now notorious axios supply chain attack made headlines. What follows is an account of how agentic defense played a key role in our response. Supply Chain Attack TrendsRecently, threat actors have been targeting the npm and PyPI ecosystems to deliver a series of sophisticated supply chain attacks. These campaigns involve adversaries using phishing or stolen session tokens to hijack the accounts of legitimate package maintainers. Once in control, they push a &quot;routine update&quot; to a widely trusted package. This effectively poisons the software supply chain, silently delivering the malicious payload to developers or applications pulling the latest dependency. Because modern software development relies heavily on open-source dependencies, a single compromised package can trigger a chain reaction from its source into thousands of CI/CD pipelines and production environments. In recent months, Mandiant has observed a persistent and increasing trend of threat actors leveraging these supply chain vulnerabilities to deliver malicious payloads, most recently with UNC1069, a financially motivated North Korea-nexus threat actor, targeting the popular npm package &#039;axios&#039; to distribute the WAVESHAPER.V2 backdoor that put millions of users and projects at risk. Our ResponseWithin the hour, an urgent advisory was issued to our customers regarding this newly discovered axios npm supply chain compromise. Translating our frontline visibility into an immediate customer advantage, the advisory provided actionable intelligence to block the associated command-and-control (C2) infrastructure and prevent later attack stages, including the download of the WAVESHAPER.V2 malware. Mandiant Threat Defense has drastically compressed response times by turning hours of analysis into minutes, and minutes into seconds through the adoption of agentic features. A Gemini-backed AI Quick Triage agent oriented the team to the nature of the living-off-the-land attack and noted a high probability of compromise. A more in-depth agentic investigation revealed that the attack was likely a supply chain compromise. While AI drives the Mandiant Threat Defense service at machine speed, our experts confirmed the output of our agents and assessed the entire customer base for similar signs of compromise. Finally, following the gathering of evidence, a Gemini-backed agent was once again leveraged to draft a comprehensive, individualized customer investigation report, substantially reducing the time needed to inform the customer and begin the remediation process. Figure 1: The MTD Agentic SOC Workflow Delivering on the Promise of Shared FateToday the Mandiant Threat Defense service is centered around AI, with almost every workflow tightly integrated into our agentic tooling. 
However, the human factor remains just as critical, allowing us to accurately assess the impact of a threat (e.g. a novel attack technique or an unknown campaign) and make decisive incident response calls for our customers. Behind the scenes, the team continuously pools deep security expertise and capabilities across Mandiant and Google Security Operations to initiate threat hunting, malware analysis, intelligence gathering, and detection engineering. By using this frontline expertise to continuously augment our AI workflows, we create a more robust incident response cycle for our customers, as well as help accelerate the work of our researchers and analysts, and ensure highly coordinated, best-in-class protection for our entire customer base and the broader community. OutlookAccording to Google Threat Intelligence, UNC1069 is not the only threat actor successfully executing open-source supply chain attacks in recent weeks. Other groups, such as TeamPCP (UNC6780), have actively poisoned GitHub Actions and PyPI packages associated with essential projects like Trivy, Checkmarx, and LiteLLM. These campaigns are specifically designed to deploy the SANDCLOCK credential stealer and facilitate follow-on extortion operations. Given the massive blast radius of these tactics, we assess with high confidence that adversaries will increasingly weaponize the software supply chain in the future. Getting startedBy combining Google Security Operations&#039; advanced detection capabilities with world-class threat intelligence from Mandiant and Google Threat Intelligence Group, organizations can build a more proactive and effective defense against even the most challenging supply chain attacks.Ready to outpace the adversary? View the datasheet to see how Mandiant Threat Defense delivers comprehensive active threat detection, hunting, and rapid response backed by world-class experts. </description>
            <category>Community Blog</category>
            <pubDate>Wed, 22 Apr 2026 02:22:05 +0200</pubDate>
        </item>
                <item>
            <title>MSV Onboarding - Crafty way to establish the Cloud Hosted Actor Allow List</title>
            <link>https://security.googlecloudcommunity.com/security-validation-5/msv-onboarding-crafty-way-to-establish-the-cloud-hosted-actor-allow-list-3680</link>
            <description>Many MSV deployments include multiple on-premise actors and one or two Mandiant cloud hosted actors. Many use cases like data extraction or malicious file transfers (MFT) will require that the on-premise actors can communicate with the cloud hosted actor. Mandiant requires clients to submit a list of public IP addresses that the on-premise actors will use to communicate with the cloud hosted actor. The MSV operator often does not know this information and may resort to an internal ticket or an inquiry to a network administrator to find this information out, which can take more time than wanted. A simple and faster alternative is to create Host-CLI actions utilizing the cURL utility to identify the egress IP address of each on-premise actor. Yes, this works on network actors as they are Linux-based and can run Host-CLI actions using a bash shell. An action for multiple shell types can be similarly created as cURL is supported on many OS platforms including Windows (cmd.exe or PowerShell).
Example curl command displaying a host&#039;s external / egress IP address using ifconfig.me: curl ifconfig.me

There are several alternatives to ifconfig.me if needed; they include:

curl ifconfig.me/all
curl icanhazip.com
curl ipecho.net/plain
curl ifconfig.co

Example: Custom Host-CLI action using a Bash shell

Example: Job results using custom Host-CLI action and viewing the &quot;CLILog Output&quot; displaying the egress IP address of the on-premise actor

For information on creating custom Host-CLI actions, see our documentation at: https://docs.mandiant.com/home/msv-adding-host-command-line-interface-actions
This method can be used during on-boarding to establish the allow list for which on-prem actors can run actions against the cloud hosted actor. After accumulating the egress IPs, an internal support request must be submitted requesting they be added to the cloud actor’s allow list. You can choose to send specific IP addresses of the on-premise actors or submit CIDR block(s) for the allow list. Just ensure the CIDR block is as limited as possible. Generally a /24 is about as large as you’d want to submit.
Helpful in Troubleshooting
In the event on-premise actors lose the ability to communicate with the cloud hosted actor, these custom actions can be used to verify whether the egress IPs have changed.</description>
            <category>Security Validation</category>
            <pubDate>Tue, 21 Apr 2026 20:53:22 +0200</pubDate>
        </item>
                <item>
            <title>Flagging Suspicious Email Risk Indicators</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/flagging-suspicious-email-risk-indicators-7318</link>
            <description>I have a SOAR playbook for investigating emails. As part of the playbook, domain and URL case entities are checked against static lists to see if they are:1 - Free Share Link sites2 - Domains for free email services (e.g. gmail.com)3 - URL shorteners4 - An obviously benign entity, belonging to a trusted organisation such as a bank, e.g. jpmorganchase.comDoes GI check and report back whether submitted domain or URL IOCs belong to any of these categories, if the submission is done via API?</description>
            <category>Google Threat Intelligence</category>
            <pubDate>Tue, 21 Apr 2026 19:40:43 +0200</pubDate>
        </item>
                <item>
            <title>Announcing M-Trends 2026: Data, insights, and strategies from the frontlines</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/announcing-m-trends-2026-data-insights-and-strategies-from-the-frontlines-7123</link>
            <description>Building true operational resilience requires moving faster than the threats you face. That starts with understanding exactly how adversaries are finding success, so you can use that intelligence to stop them.Today, we are proud to announce the release of the M-Trends 2026 report! Distilling insights from over 500k hours of incident investigations executed by Mandiant in 2025, and supporting Google Threat Intelligence Group (GTIG) research, this year’s edition reveals the critical shifts defining today&#039;s threat landscape. This post provides a quick preview of the themes from this year&#039;s report. Download the M-Trends 2026 report now for a comprehensive dive into our frontline data, and review the supporting resources below for more. The collapse of the &quot;hand-off&quot; window One of the most notable trends identified in Mandiant investigations is the increased specialization within the cybercrime ecosystem. In 2022, the median time between an initial access event and the hand-off to a secondary threat group was more than 8 hours. In 2025, that window collapsed to a median of just 22 seconds. Threat groups focused on initial access are bypassing underground markets to partner directly with secondary groups. By pre-staging their preferred malware during the initial infection, the secondary group can launch high-impact operations the moment they first interact with the network. Ransomware evolves into recovery denial Ransomware groups are no longer just encrypting data; they are actively destroying the ability to recover by systematically targeting backup infrastructure, identity services, and virtualization—to create a &quot;recovery deadlock&quot; that maximizes the pressure to negotiate.Furthermore, attackers are exploiting the &quot;Tier-0&quot; nature of hypervisors to bypass guest-level defenses, targeting the virtualization storage layer directly for data theft and encrypting entire hypervisor datastores that can render all associated virtual machines inoperable simultaneously. Voice phishing and the SaaS identity crisis While exploits remained the most common initial infection vector for the sixth consecutive year (accounting for 32% of intrusions), highly interactive voice phishing saw a significant surge to 11%, becoming the second most commonly observed vector globally. For cloud-related compromises specifically, voice phishing was the number one initial infection vector at 23%. M-Trends 2026 reveals the cascading impact of these techniques. Threat actors are bypassing standard defenses by harvesting long-lived OAuth tokens and session cookies. By compromising third-party SaaS vendors, attackers steal hard-coded keys and personal access tokens, using those secrets to seamlessly pivot into downstream customer environments to execute large-scale data theft. Edge devices, zero-days, and extreme persistence While cyber criminals optimize for speed, espionage groups are optimizing for extreme persistence. Sophisticated threat clusters deliberately target edge and core network devices, exploiting their lack of support for traditional security tooling. M-Trends 2026 reveals that the mean time to exploit vulnerabilities dropped to an estimated -7 days, meaning exploitation is routinely occurring before a patch is even released. By deploying custom, in-memory malware like the BRICKSTORM backdoor directly, attackers can turn these critical gateways into persistent, invisible vantage points for monitoring corporate traffic and lateral movement. 
With threats like BRICKSTORM achieving dwell times of nearly 400 days, standard 90-day log retention policies leave organizations completely blind to the initial access vector and the full scope of the intrusion. AI threat landscape A comprehensive overview of the 2025 threat landscape requires addressing adversary use of AI. Ongoing GTIG threat research confirms that threat actors are increasingly leveraging AI, especially during the early phases of the attack lifecycle. M-Trends 2026 confirms attackers are abusing AI within compromised environments, however, we do not consider 2025 to be the year where breaches were the direct result of AI. The vast majority of successful intrusions still stem from fundamental human and systemic failures. Our Mandiant special report on AI risk and resilience highlights the adversarial use of AI, key trends and learnings from Mandiant AI red teaming and consulting engagements, and how AI-powered defense is already being used as a force multiplier for security operations. Be ready to respond The Mandiant mission is to help keep every organization secure from cyber threats and confident in their readiness. For 17 years, our annual M-Trends report has been a core component of advancing that mission, sharing frontline knowledge to help defenders close critical visibility gaps.To learn about the cyber threat landscape, and how we recommend organizations adapt to its ongoing changes, explore our M-Trends 2026 resources:Download the M-Trends 2026 report for a comprehensive dive into our frontline data.	Read the M-Trends 2026 Executive Edition for a high-level look at the data and trends, along with key recommendations.	Register for our upcoming M-Trends 2026 webinar—the first in a planned series—for an in-depth look at the data, topics, and recommendations discussed in the report.	Listen to a special episode of the Google Cloud Security Podcast featuring M-Trends 2026 to learn more about what the findings mean and how the report is created. </description>
            <category>Google Threat Intelligence</category>
            <pubDate>Tue, 21 Apr 2026 19:07:01 +0200</pubDate>
        </item>
                <item>
            <title>Chronicle API migration question: replacement for legacy CaseSearchEverything endpoint</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/chronicle-api-migration-question-replacement-for-legacy-casesearcheverything-endpoint-7313</link>
            <description>Hello,I am currently working on a Google SecOps SOAR API migration to the Chronicle API.Using the official endpoint mapping table, I was able to identify several mappings. I am unsure about this legacy SOAR endpoint:/api/external/v1/search/CaseSearchEverythingI could not find a clear equivalent in the Chronicle endpoint mapping table, and it even seems that some parts of SecOps may still rely on this legacy route.My question is:What is the recommended Chronicle endpoint to replace CaseSearchEverything?Or should we consider that this use case does not yet have a direct documented Chronicle equivalent?Context:Stage 2 migration is in progress.Thank you in advance for your help.</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 21 Apr 2026 15:35:25 +0200</pubDate>
        </item>
                <item>
            <title>Chronicle API migration question: replacement for legacy CaseSearchEverything endpoint</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/chronicle-api-migration-question-replacement-for-legacy-casesearcheverything-endpoint-7314</link>
            <description>Hello,I am currently working on a Google SecOps SOAR API migration to the Chronicle API.Using the official endpoint mapping table (API endpoint mapping table | Google Security Operations | Google Cloud Documentation), I was able to identify several mappings. I am unsure about this legacy SOAR endpoint:/api/external/v1/search/CaseSearchEverythingI could not find a clear equivalent in the Chronicle endpoint mapping table, and it even seems that some parts of SecOps may still rely on this legacy route.My question is:What is the recommended Chronicle endpoint to replace CaseSearchEverything?Or should we consider that this use case does not yet have a direct documented Chronicle equivalent?Context:Stage 2 migration is in progress.Thank you in advance for your help.</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 21 Apr 2026 13:04:38 +0200</pubDate>
        </item>
                <item>
            <title>Issue with manual prompt Message to assignee</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/issue-with-manual-prompt-message-to-assignee-7249</link>
            <description>When you add a message to a prompt, it is not shown in the pending action section, but if you run different prompts twice in a row, the second one will always have the message visible. Does anyone know why it behaves like this? Do we have any bug report or ongoing fix for this?</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 21 Apr 2026 07:32:55 +0200</pubDate>
        </item>
                <item>
            <title>Unable to access Google SecOps (Chronicle) on mobile (Safari) – anyone else facing this?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/unable-to-access-google-secops-chronicle-on-mobile-safari-anyone-else-facing-this-7274</link>
            <description>Is anyone else unable to access Google SecOps (Chronicle) on mobile (Safari)?It was working earlier, but now it’s not loading / getting stuck.Tried clearing cache, cookies, and using private mode — still not working.Works fine on desktop.Is this happening to others?</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 20 Apr 2026 18:09:05 +0200</pubDate>
        </item>
                <item>
            <title>Bug/Help: Inconsistent STS V2 Feed UI &amp; Missing Azure Event Hub Options in SecOps</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/bug-help-inconsistent-sts-v2-feed-ui-missing-azure-event-hub-options-in-secops-7307</link>
            <description>Hi everyone,I&#039;m running into a frustrating bug within the feed creation section of our Google SecOps instance and am wondering if anyone else has encountered this or found a workaround while I wait on an official fix.My primary goal is to create Microsoft Azure Event Hub feeds, but the UI is behaving inconsistently regarding STS V2 feeds. A Google Solutions Delivery Engineer has already taken a look and confirmed this appears to be a bug.Here are the specific symptoms:	Inconsistent Feed Creation Views: As shown in the attached screenshots, one feed creation view gives me the option to create V2 feeds, while another view completely omits the V2 option.			Missing Event Hub Option: Regardless of which view I use, &quot;Microsoft Azure Event Hub&quot; is completely missing from the available options.			Blank Source Types on Edit: I have one existing Blob Storage feed that is clearly labeled as &quot;V2&quot; in the list view. However, when I click in to edit that feed, the &quot;Source Type&quot; field is entirely blank. (See Screenshots 3 &amp;amp; 4)	Troubleshooting Steps Already Taken:	I worked with a Google Solutions Delivery Engineer who verified that the Omniflow STS Enabled feature is officially turned ON for our SecOps instance.			The engineer attempted to flush/fix the issue by toggling the feature OFF and then back ON, but this did not resolve the UI inconsistencies or bring back the Azure Event Hub option.	Has anyone else experienced this desync with the Omniflow STS/V2 feeds UI? If so, did you find any hidden workarounds to force the Azure Event Hub option to appear so we can get these feeds flowing?Thanks in advance for any insights!</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 20 Apr 2026 18:06:18 +0200</pubDate>
        </item>
                <item>
            <title>First SCC Adoption Guide is Live!  Configuring High-Value Resource Sets</title>
            <link>https://security.googlecloudcommunity.com/security-command-center-4/first-scc-adoption-guide-is-live-configuring-high-value-resource-sets-7311</link>
            <description>Hello community members, We recently released the first official Adoption Guide for SCC, right here in the Google Cloud Security Community!  The guide is about configuring high-value resource sets and covers this workflow with examples and screenshots that should make implementing this part of SCC a breeze! Take a look at the guide here!</description>
            <category>Security Command Center</category>
            <pubDate>Mon, 20 Apr 2026 18:01:46 +0200</pubDate>
        </item>
                <item>
            <title>Playbook Count variance in Native Dashboarding</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/playbook-count-variance-in-native-dashboarding-7243</link>
            <description>Hi team,I’m developing a custom view using the native dashboarding features in Google SecOps. I’ve noticed a variance in the playbook counts and I’m trying to understand the cause of this mismatch.I’ve attached two screenshots:Snip 1: Counts playbooks by playbook name and the distinct count of metadata_alert_id.	Snip 2: Counts playbooks by playbook name only.Could you help explain why these two approaches return different totals? </description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 20 Apr 2026 16:13:17 +0200</pubDate>
        </item>
                <item>
            <title>EP272 More Than Just Packets: Is NDR a &quot;First-Class&quot; Cloud Security Control?</title>
            <link>https://security.googlecloudcommunity.com/podcasts-43/ep272-more-than-just-packets-is-ndr-a-first-class-cloud-security-control-7303</link>
            <description>Guests:	Raja Mukerji, Co-Founder &amp;amp; Chief Scientist, ExtraHop			Rafal Los, VP of Client Relations and Strategic Initiatives, ExtraHop	Subscribe at YouTube. Subscribe at Spotify. Subscribe at Apple Podcasts. Topics covered:	Is Network Detection and Response (NDR) coming back after being shoved to the side by EDR a bit? Is this for real?			What&#039;s the value proposition of NDR in 2026, given that some people still don&#039;t understand it? How does NDR apply to the world of WFH, cloud/SaaS, encryption, high bandwidth, etc?			Is the value of NDR the same, or different, when it comes to public (or private) cloud?			How does NDR fill visibility gaps that identity and agent-based solutions cannot?			What does NDR offer that built-in cloud security tooling (as of right now) does not? Would you call NDR a key cloud security control?			Does NDR help with shadow AI?			The elephant in the room with NDR is sometimes cost. How does cost change the value prop when compared to on-premise or physical infrastructure?</description>
            <category>Podcasts</category>
            <pubDate>Mon, 20 Apr 2026 11:58:34 +0200</pubDate>
        </item>
                <item>
            <title>How to Create Nested Loops within a Playbook</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/how-to-create-nested-loops-within-a-playbook-7206</link>
            <description>When building playbooks for Chronicle SOAR, I often run into a limitation around looping logic. Specifically, there doesn’t appear to be a supported way to create nested loops (a loop within another loop). This becomes an issue in fairly common use cases. For example:Iterating over all entities of a specific type in a case, and then looping through the values of a particular field for each entity.	Looping through all alerts associated with a case, and within that loop, iterating through each alert’s entities or a specific type of entity.These types of nested iterations are typical in automation workflows, so I’m curious about the rationale behind the current limitation. Is there a design or technical reason nested loops aren’t supported today, and is this functionality something Google plans to add in the future?
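For illustration, here is a minimal sketch in plain Python (hypothetical case/alert/entity shapes, not the Chronicle SOAR SDK) of exactly the nested iteration I mean:
# Hypothetical data shapes for illustration only -- not the SOAR API.
case = {
    &quot;alerts&quot;: [
        {&quot;name&quot;: &quot;Suspicious Login&quot;,
         &quot;entities&quot;: [{&quot;type&quot;: &quot;USER&quot;, &quot;value&quot;: &quot;jdoe&quot;}]},
        {&quot;name&quot;: &quot;Malware Detected&quot;,
         &quot;entities&quot;: [{&quot;type&quot;: &quot;HOST&quot;, &quot;value&quot;: &quot;wkst-42&quot;},
                      {&quot;type&quot;: &quot;FILEHASH&quot;, &quot;value&quot;: &quot;abc123&quot;}]},
    ]
}

# Outer loop over a case&#039;s alerts, inner loop over each alert&#039;s entities:
# the loop-within-a-loop that playbooks cannot express today.
for alert in case[&quot;alerts&quot;]:
    for entity in alert[&quot;entities&quot;]:
        if entity[&quot;type&quot;] == &quot;HOST&quot;:
            print(alert[&quot;name&quot;], entity[&quot;value&quot;])</description>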
            <category>Google Security Operations</category>
            <pubDate>Mon, 20 Apr 2026 10:46:33 +0200</pubDate>
        </item>
                <item>
            <title>URGENT ESCALATION: Case # [removed by moderator] - Deadlock between Billing and Trust &amp; Safety 15 Days</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/urgent-escalation-case-removed-by-moderator-deadlock-between-billing-and-trust-safety-15-days-7283</link>
            <description>Hello, I was directed to this specific thread by Forum Admin Miqua to escalate a deadlock between Billing and Trust &amp;amp; Safety.The Context: My academic NLP project was suspended for “Abusive Activity” on 3/4 due to a buggy data ingestion script hitting corrupted legacy URLs.The Fix (Appeal Case # [removed by moderator]): On 5/4, I submitted a full technical Root Cause Analysis. I have permanently air-gapped the corrupted dataset and implemented strict Magic Byte Pre-Flight validation to prevent any future API flooding. I have had zero response from Compliance for 13 days.The Deadlock: Billing Support (Agent Ayosha, Case #70072588 / Billing ID: 018BDA-1AD09A-F5AFD9) informed me my account is “Closed” due to a refund and requested a new prepayment. However, my console still shows a hard “Abusive Activity” restriction. I cannot responsibly make a new prepayment until Trust &amp;amp; Safety clears the abuse flag, or my funds will be trapped in a locked account. Ayosha has stated she is transferring the case to the concerned team.Could a Community Manager here please ping the Trust &amp;amp; Safety team to look at my 5/4 appeal so I can safely pay the balance and restore my project?Thank you, Minh (Banned Account Email: [removed by moderator])</description>
            <category>Google Security Operations</category>
            <pubDate>Mon, 20 Apr 2026 04:07:20 +0200</pubDate>
        </item>
                <item>
            <title>Intermittent UNEXPECTED_ENVIRONMENT and low scores on React Native app (iOS &amp; Android) despite matching identifiers</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/intermittent-unexpected-environment-and-low-scores-on-react-native-app-ios-android-despite-matching-identifiers-7299</link>
            <description> Overview:We are experiencing a high volume of UNEXPECTED_ENVIRONMENT reason codes in our reCAPTCHA Enterprise assessment logs. We are looking for assistance in diagnosing the root cause of this issue, as it is happening intermittently on the exact same app builds across both iOS and Android platforms.Environment Context:	Framework: React Native			reCAPTCHA Integration: Using the native Google reCAPTCHA Enterprise SDKs for both iOS and Android.	The Issue:For example, this week we processed roughly 10,000 reCAPTCHA requests. About 7,000 of them were successful with high scores, but approximately 3,000 of them returned a low score (below 0.5) with the UNEXPECTED_ENVIRONMENT reason code. This ratio is consistent across both our iOS and Android traffic.Our Troubleshooting &amp;amp; Observations:What makes this confusing is that our console configuration, Site Keys, and app builds are identical for both the successful and failing requests.	Identifiers match perfectly: We have verified that the iosBundleId (for iOS requests) and androidPackageName (for Android requests) in the failed logs exactly match the restricted package names we registered in the Google Cloud reCAPTCHA console.			Tokens are valid: Across all our logs (both high and low scores), the tokens are marked as valid: true.	Here is an example snippet of the tokenProperties from a failing iOS assessment response (our Android failing logs look identical, but with the androidPackageName field): &quot;tokenProperties&quot;: { &quot;action&quot;: &quot;registration&quot;, &quot;createTime&quot;: &quot;2026-04-19T07:32:50.083Z&quot;, &quot;iosBundleId&quot;: &quot;******&quot;, &quot;valid&quot;: true } Questions for the Support/Community Team:	Since the configuration is correct, the native SDKs are being used, and it works for 70% of our traffic, why would the exact same application builds trigger UNEXPECTED_ENVIRONMENT for the other 30%?			Does this specific reason code trigger if a user is on a jailbroken/rooted device, using a mobile emulator, or running the app on a desktop (like an M1/M2 Mac or Windows Subsystem for Android)?			Are there specific device attestation failures (e.g., Apple DeviceCheck/App Attest on iOS, or Play Integrity API on Android) or network conditions that cause a natively generated token to fall back to being flagged as an &quot;unexpected environment&quot;?
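For reference, this is roughly how we pull the score and reason codes out of each assessment on our backend (a minimal sketch with the google-cloud-recaptcha-enterprise Python client; the project ID, site key, and token are placeholders):
from google.cloud import recaptchaenterprise_v1

def has_unexpected_environment(project_id, site_key, token):
    # Create an assessment for the token the native mobile SDK returned.
    client = recaptchaenterprise_v1.RecaptchaEnterpriseServiceClient()
    event = recaptchaenterprise_v1.Event(token=token, site_key=site_key)
    assessment = recaptchaenterprise_v1.Assessment(event=event)
    request = recaptchaenterprise_v1.CreateAssessmentRequest(
        parent=f&quot;projects/{project_id}&quot;, assessment=assessment
    )
    response = client.create_assessment(request=request)

    # token_properties.valid is True even on our failing requests; the
    # problem only shows up in risk_analysis.reasons and the low score.
    reasons = [reason.name for reason in response.risk_analysis.reasons]
    print(response.token_properties.valid, response.risk_analysis.score, reasons)
    return &quot;UNEXPECTED_ENVIRONMENT&quot; in reasons</description>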
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Sun, 19 Apr 2026 11:48:48 +0200</pubDate>
        </item>
                <item>
            <title>Firebase Phone Auth &gt; reCAPTCHA Enterprise won&#039;t disable after turning off in console IOS ReactNative</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/firebase-phone-auth-recaptcha-enterprise-won-t-disable-after-turning-off-in-console-ios-reactnative-5563</link>
            <description>I&#039;m using React Native (iOS) and Firebase Phone Authentication.At one point, I enabled reCAPTCHA Enterprise from the Firebase Console via:Authentication -&amp;gt; Phone -&amp;gt; App verification -&amp;gt; Enable reCAPTCHA EnterpriseAfter that, phone number auth started triggering reCAPTCHA verification. Later, I disabled reCAPTCHA Enterprise in the console, but Firebase still behaves as if it&#039;s enabled. Even though appVerificationDisabledForTesting = true is set in development, production builds continue showing this error:&#039;auth/unknown&#039;, &#039;[auth/unknown] The reCAPTCHA SDK is not linked to your app. See https://cloud.google.com/recaptcha-enterprise/docs/instrument-ios-apps&#039;</description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Sat, 18 Apr 2026 11:02:02 +0200</pubDate>
        </item>
                <item>
            <title>Let’s Stop Chasing Ghosts: Why Combating Fraud Demands a Collective Defense</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/let-s-stop-chasing-ghosts-why-combating-fraud-demands-a-collective-defense-6866</link>
            <description>Co-Authors: André Naumann, Xavier Morales, Giacomo Gnecchi Ruscone The financial services industry is facing a fundamental shift in the threat landscape. Cyber fraud has been escalating in volume and complexity, posing a threat that directly challenges an organization’s financial health and reputation. This concerning trend highlights the overlap between scams and cybersecurity and poses a persistent global challenge.  By way of example, the FBI noted that the cost of cyber fraud was $13.7 billion in 2024 in the US alone. This figure represented a nearly 10% increase from 2023, and constituted 83% of all financial losses reported to the FBI in 2024. Likewise, in 2025 the FTC released data showing that consumers reported losing more than $12.5 billion to fraud in 2024, a 25% increase from the prior year.  Most recently, the World Economic Forum’s fraud report noted that cyber-enabled fraud ranks as the second-highest organizational cyber risk after ransomware, further finding that threat actors are successfully “taking advantage of differences in legal systems, enforcement capacity, cyber-maturity levels and regulatory frameworks” to achieve their objectives.   These alarming trends highlight the sheer scale of online fraud and its substantial financial impact on businesses. Worse yet, these staggering statistics only represent the fraud that’s been reported to these agencies; sadly, the actual losses are estimated to be far higher.  Google takes measures to combat scams and fraud on its platforms and services Given the lucrative nature of these scams, there’s no reason to believe they will abate on their own. The battle against fraud is asymmetric. Attackers use the speed and scale of AI automation and the nuance of psychological manipulation to breach defenses. Implementing an effective defense strategy is critical and time-sensitive. Google has deployed a comprehensive suite of AI-driven tools spanning its cloud, browser, and mobile ecosystems to combat the evolving fraud landscape. These enterprise and consumer-grade capabilities, many of which are enabled by default, help detect and neutralize threats in real time. For instance, Google has embedded scam-fighting technology across its suite of products. With the sophistication of online scams on the rise, our safeguards keep the overwhelming majority of scams out of Search, blocking billions of potentially scammy results every day. Our classifiers utilize machine learning algorithms to identify patterns, anomalies, and linguistic cues indicative of fraudulent activity. However, the tactics employed by scammers are constantly shifting and evolving. Staying one step ahead of the scammers requires that we understand emerging threats and proactively develop countermeasures.  What we’re seeing is that the traditional fortress mentality, focused on protecting one’s own institution in a silo, is being replaced by a model of collective defense. The industry has recognized that fraud is no longer just a series of isolated attacks, but a massive, interconnected criminal economy that thrives on the information gaps between banks, telcos, online platforms, and the jurisdictional constraints of law enforcement. The focus has shifted toward reciprocal, real-time data sharing, where an abuse report or flag at one institution becomes an alert for the entire network.  
The Global Signal Exchange (GSE) The GSE was created by the Global Anti-Scam Alliance and Oxford Information Labs, with support from Google, out of a shared vision and need for a global, cross-sector data sharing platform that would allow the exchange of threat actor signals in real time. The platform was launched in January 2025 with 40 million URLs and domains, and has now grown to over 1 billion available signals from over 50 separate data feeds and more than 100 accredited organizations including online platforms, banks, telcos, law enforcement, government agencies and abuse data aggregators, adding additional signal types such as IP addresses, phone numbers, email accounts and others.  Sharing signals with the GSE can be done in a one-to-one, one-to-many, or one-to-all setting, and can significantly reduce signal acquisition costs: organizations sign one agreement with the GSE and can establish connections for various business units, which can then focus on building their own use cases for either sharing or ingesting information. The GSE also publishes League Tables surfacing useful information, such as a data-driven ranking of entities within the Internet’s core infrastructure based on the number of abuse reports received. The data used for ranking comes exclusively from primary signal sources. These tables provide an objective benchmark encouraging wider adoption of the most successful defensive strategies. Google’s anti-abuse teams have leveraged the GSE in a variety of ways with positive results: identifying new threat actors faster, connecting the dots to inform takedowns or disruption of the underlying assets, utilizing the signals as training data to fine-tune machine learning classifiers, and informing criminal referrals to law enforcement. A recent pilot between Google and a law enforcement agency, facilitated by the GSE, led to the identification and disruption of a network of fraudulent accounts; the GSE enabled the Google team to make inferences from a few dozen suspicious accounts to a large cluster of abuse pointing to a threat actor from West Africa. Another example involves the exchange of ‘Malvertising’ data as part of a pilot under the guidance of the UK Advertising Association, which helped participating teams at Google identify and more effectively track new Malvertising campaigns and threat actors. These signals are shared by Google and other accredited organizations from around the world and across different sectors such as tech, telecom, finance, and law enforcement. This cross-sector collaboration not only accelerates investigations and disruptions, but also helps identify infrastructure which bad actors exploit. Google’s Priority Flagger Program (PFP) Google has long maintained a voluntary program for a range of specialist partners to flag content they believe violates Google’s policies on certain Google products. The PFP is by invitation only. Reporting partners are selected according to Google’s eligibility criteria, including the need to have identified expertise in fighting online scams and fraud, as well as the capacity to submit requests. Partners are also offered the opportunity to have ongoing discussions and share feedback. For instance, the Financial Services branch of the PFP was a significant step in further enhancing our scam detection and prevention efforts. 
Launched in partnership with FS-ISAC in the spring of 2025, the program has been tackling the most common challenges reported by financial services organizations (scam ads, phishing emails, and executive impersonation) by helping to streamline the process of identifying, reporting, and mitigating fraud threats related to potentially harmful content impacting Google platforms. Not only are the signals submitted by partners used to inform tactical actions, they are also used to improve future detection, creating a flywheel effect that supports proactive efforts to catch and deter similar abuse in the future.  Paths Forward for Collective Defense Defeating global scam networks requires a collective defense strategy. While we continue to invest in litigation, research, user awareness, and advanced tooling, individual efforts aren&#039;t enough to dismantle the infrastructure and business models scammers rely on. To truly move the needle, we must unify industry, law enforcement, and government efforts through active collaboration. We encourage you to explore these two initiatives designed as key enablers to disrupt bad actors:Global Signal Exchange: Facilitates the strategic sharing of threat intelligence to track and stop criminal organizations.	Priority Flagger Program through FS-ISAC: Streamlines tactical takedowns of fraudulent content.Both programs feature low barriers to entry and are essential to our shared goal of protecting users at scale.For further information on our efforts in this space, refer to our recent scams advisories and blogs on our approach to combating fraud:Scams Advisory #1	Scams Advisory #2	Scams Advisory #3	Securing the Future: Tackling Scams &amp;amp; Fraud in Financial Services 	Combating Fraud - Securing Trust in a New Era of Deception 	10 capabilities to mitigate cyber fraud</description>
            <category>Community Blog</category>
            <pubDate>Sat, 18 Apr 2026 02:46:10 +0200</pubDate>
        </item>
                <item>
            <title>Mastering the Art of Advanced IOC Searches in Google Threat Intelligence</title>
            <link>https://security.googlecloudcommunity.com/webinars-75/mastering-the-art-of-advanced-ioc-searches-in-google-threat-intelligence-7286</link>
            <description>If you are looking to advance your threat hunting capabilities, this webinar provides a practical deep dive into mastering GTI Dorking within Google Threat Intelligence. The session focuses on how to transform standard IoC lookups into sophisticated, multi-parameter searches that uncover hidden threats. Led by Technical Solutions Consultant Robert Parker, this session highlights the shift from searching for single indicators to building complex, behavioral-based queries. It demonstrates how to leverage advanced modifiers and AI-powered insights to move from manual investigation to automated, proactive alerting. For example, instead of just searching for a suspicious domain name, the demonstration shows how to combine parameters to find high-confidence phishing pages: &quot;Find domains containing the word &#039;Google&#039; that use a .xyz top-level domain, return a successful HTTP 200 response, and have at least five malicious detections.&quot;The Result: Users can identify precisely which &quot;needles in the haystack&quot; are currently active and targeting their brand, then use agentic capabilities to automatically convert those searches into live YARA-X hunting rules. What You Can ExpectIn this webinar, you can expect to learn how to:Master GTI Dorking by using advanced syntax (modifier:value) and boolean logic to filter the global threat landscape with surgical precision.	Protect Your Brand by using fuzzy domain searches and Favicon dHash tracking to identify fraudulent websites mimicking your organization.	Leverage Code Insights to analyze scripts, PowerShell, and Chrome extensions using AI to understand their intent in plain English before you run them.	Automate the Threat Lifecycle by converting manual search queries into YARA-X rules for continuous &quot;Live Hunting&quot; and alerting.	Operationalize Intelligence by calculating commonalities across large datasets and exporting findings directly to Google SecOps or EDR blocklists.  Key Discussion Points &amp;amp; TimestampsIf you&#039;re looking to jump to a specific section of the recording, use this guide:[06:02] – The Strategy: Transitioning from single-indicator searches to complex, multi-parameter queries.	[13:40] – GTI Dorking 101: Understanding the syntax, boolean operators, and &quot;hacking&quot; the search bar.	[17:42] – Mastering Time &amp;amp; Logic: How to use relative dates (+/- 14d) and detection ratios correctly.	[21:13] – Live Challenge: Crafting a query for malicious Excel attachments with network behavior.	[28:41] – Brand &amp;amp; Fraud Monitoring: Using fuzzy domains and the &quot;Icon dHash&quot; to find logo abuse.	[36:03] – Targeted Brand Search: Built-in modifiers for identifying mimics of major brands like Okta and PayPal.	[42:33] – AI-Powered Analysis: Using Code Insights to summarize the behavior of fileless malware and web hooks.	[49:18] – The Agentic Workflow: Converting a GTI search into a YARA-X rule using an AI prompt.	[55:56] – Testing the Hunt: Validating your automated rules against real-world malicious samples.	[57:23] – Q&amp;amp;A: Leveraging GTI data within Google SecOps and future integration roadmaps. References:List of Search Modifiers from GTI DocumentationAdvanced IOC Searches Cheat Sheet PDFAdvanced IOC Searches Adoption GuideBrand Monitoring &amp;amp; Phishing Detection Use Case ExamplesAdvanced IOC Searches Blog from Dominic ChuaAgentic Google Threat Intelligence Prompt For Converting IOC Searches to YARA-X Code
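To give a flavour of the syntax, the phishing-page example above might be expressed as a single dork along these lines (illustrative only: the exact modifier names here are my approximation, so verify them against the List of Search Modifiers in the references above before relying on them):
entity:domain domain:*google* tld:xyz last_http_response_code:200 positives:5+</description>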
            <category>Webinars</category>
            <pubDate>Fri, 17 Apr 2026 02:49:37 +0200</pubDate>
        </item>
                <item>
            <title>Loop over HTTP JSON Response</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/loop-over-http-json-resonse-7246</link>
            <description>I have looked over a number of posts regarding this but did not see a posted example of how to do it.I need to make routine calls to a platform to pull reports and upload them to a Sharepoint and send an email to a list of users. Below is some dummy data that matches the structure of the response from a httpv2 call I made.&quot;response_data&quot;: [ { &quot;id&quot;: &quot;rpt_001a4f92&quot;, &quot;name&quot;: &quot;Q1 Sales Summary&quot;, &quot;lastRun&quot;: &quot;2026-04-14T08:00:00.000Z&quot;}, { &quot;id&quot;: &quot;rpt_002b8c31&quot;, &quot;name&quot;: &quot;Monthly Active Users&quot;, &quot;lastRun&quot;: &quot;2026-04-14T07:45:22.000Z&quot;} ]How do I iterate over the response_data in a SOAR block, preserving the name (for comparison with a custom list to identify the email contacts) and id (used to pull the report&#039;s last run in another httpv2 call)?Flow should be:Get List of Reports (httpv2) → For Each Item in Get List of Reports.response_data → Get Last Run data → Save to CSV (use loop.item name to name it with the current timestamp) → Upload to folder in Sharepoint (based on a custom list or something storing the name of report, the contacts and the folder name).When I attempt to use the for loop, it iterates over every field of every object (six iterations for the two objects above) rather than the two list items I&#039;d expect.Any advice or help is appreciated.
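In plain Python terms, the iteration I&#039;m after is just this (a sketch using the dummy data above; the real calls are httpv2 actions, and my hunch is the for-each block is flattening each object into its three fields, which would explain 2 x 3 = 6 iterations):
import json

# Dummy response in the same shape as my httpv2 call&#039;s response_data.
response_json = &#039;&#039;&#039;{&quot;response_data&quot;: [
    {&quot;id&quot;: &quot;rpt_001a4f92&quot;, &quot;name&quot;: &quot;Q1 Sales Summary&quot;, &quot;lastRun&quot;: &quot;2026-04-14T08:00:00.000Z&quot;},
    {&quot;id&quot;: &quot;rpt_002b8c31&quot;, &quot;name&quot;: &quot;Monthly Active Users&quot;, &quot;lastRun&quot;: &quot;2026-04-14T07:45:22.000Z&quot;}
]}&#039;&#039;&#039;

reports = json.loads(response_json)[&quot;response_data&quot;]

# Two iterations, one per list item, keeping id and name together --
# not six iterations over every key/value of every object.
for report in reports:
    print(report[&quot;id&quot;], report[&quot;name&quot;])</description>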
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 23:49:48 +0200</pubDate>
        </item>
                <item>
            <title>Jump-start your MCP journey with the Google Security Community!</title>
            <link>https://security.googlecloudcommunity.com/webinars-75/jump-start-your-mcp-journey-with-the-google-security-community-7285</link>
            <description>If you are new to Model Context Protocol (MCP), this webinar provides a practical introduction to using MCP within the Google Cloud Security ecosystem, specifically focusing on how it transforms manual security workflows into conversational ones. Led by Vasken Houdoverdov, Google Security Technical Solutions Consultant, this session focuses on the shift from agents that simply provide information to agents that can take direct action. It breaks down the foundational architecture of MCP and demonstrates how it can be applied to tools like Google SecOps and Google Threat Intelligence. For example, instead of writing a custom Python script or manually navigating the Google SecOps interface, the demonstration shows an agent processing a natural language request:&quot;Use remote secops to see which cases are in a triage state. Present them in a table.&quot;The Result: The agent communicates through the MCP server to identify the necessary tools, pulls live data from the SecOps environment, and organizes it for the user. This approach replaces the need for complex API authentication and manual coding with a streamlined, conversational interface. What You Can Expect In this webinar, you can expect to learn how to:Operationalize Google SecOps using the Gemini CLI to execute security tasks that traditionally require manual intervention or custom Python scripts.	Simplify complex workflows by replacing manual API authentication and coding with natural language prompts to pull and organize live data.	Navigate MCP architecture by understanding the specific interactions between the host (Gemini CLI), the client, and the server.	Evaluate server implementations, clarifying the use cases for local servers during the prototyping phase versus remote GCP servers for production-level scalability.	Improve AI grounding by connecting models to real-time environmental facts, ensuring outputs are based on current data rather than static training sets.  Key Discussion Points &amp;amp; TimestampsIf you&#039;re looking to jump to a specific section of the recording, use this guide:[05:50] – The &quot;Why&quot; of MCP: Vasken explains the shift from agents that just answer questions to agents that take meaningful action.	[11:06] – MCP Foundations: A high-level introduction to the protocol origins (Anthropic) and its open-standard nature.	[14:06] – Under the Hood: Understanding the Data Layer (primitives) vs. the Transport Layer (how data moves).	[21:35] – Tools vs. Skills: A critical distinction using the &quot;Legal Expert&quot; analogy—tools provide the capability, skills provide the context.	[33:04] – Host Spotlight (Gemini CLI): How to install and use the Gemini CLI as your primary MCP host.	[40:30] – Building Your Own: An introduction to FastMCP and using Python decorators to turn any API into an MCP tool.	[46:34] – The GCP Ecosystem: A look at the remote MCP servers available for BigQuery, Spanner, and Google SecOps.	[50:02] – Security &amp;amp; Proxies: How to use Identity-Aware Proxy (IAP) to secure your MCP infrastructure for production.	[54:11] – The Main Event: Vasken’s live demonstration of Google SecOps integration via Gemini CLI.	[1:00:45] – Q&amp;amp;A: Addressing community questions on sandboxing, IAP configuration, and future roadmaps. References:Model Context Protocol OverviewSupported Products - Google Cloud Remote MCP ServersMCP - Getting Started IntroMCP - Build a ServerFastMCP - Quickstart DocumentationIdentity-Aware Proxy - Google Cloud Documentation
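To make the FastMCP segment at [40:30] concrete, here is a minimal sketch of the decorator pattern discussed (assuming the fastmcp Python package; the tool itself is a hypothetical example, not one from the webinar):
from fastmcp import FastMCP
import ipaddress

# A FastMCP server exposes plain Python functions as MCP tools.
mcp = FastMCP(&quot;demo-security-tools&quot;)

@mcp.tool()
def is_private_ip(ip: str):
    &quot;&quot;&quot;Return True if the IPv4/IPv6 address is in private address space.&quot;&quot;&quot;
    return ipaddress.ip_address(ip).is_private

if __name__ == &quot;__main__&quot;:
    # An MCP host such as the Gemini CLI connects to this server and can
    # then invoke is_private_ip from a natural language request.
    mcp.run()</description>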
            <category>Webinars</category>
            <pubDate>Thu, 16 Apr 2026 23:45:22 +0200</pubDate>
        </item>
                <item>
            <title>Storing and Running Predefined UDM Searches with Data Tables in Google SecOps</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/storing-and-running-predefined-udm-searches-with-data-tables-in-google-secops-7135</link>
            <description>In the fast-paced world of security operations, consistency and speed are paramount. SOC analysts often find themselves running the same set of searches for specific alert types—looking for user logins after a suspicious authentication or checking Windows Event Logs for specific error codes.What if you could standardize these searches, store them centrally, and have your playbook automatically run them before an analyst even opens the case?By utilizing Data Tables in Google Security Operations (SecOps), you can store predefined searches—both static UDM queries and Natural Language prompts—that trigger automatically based on a keyword submitted to a playbook block. This approach creates a reproducible, efficient workflow that aggregates critical data into a single, powerful widget. The Strategy: Data Tables as a Search Repository The core of this solution is the Data Table. Instead of hardcoding queries into individual playbook blocks or actions, you treat the Data Table as a repository of knowledge. You map a specific &quot;keyword&quot; (which could be an alert type, a user role, or a threat category) to a list of searches. Designing the Data Table To set this up, your Data Table should generally follow this structure:Keyword: The identifier the playbook uses to look up the search (e.g., identity or ioc_crowdstrike_idp_detections).	Short Description: A label for the search results in the widget.	Search: The actual UDM query or the Natural Language prompt to be generated.	NL: Whether the search is a Natural Language prompt or a static search.Data Table ExampleHere is an example of how the rows in your Data Table might look. Note how a single keyword (identity) can trigger multiple different searches, and how you can mix static queries with Natural Language prompts.
keyword | short_description | search | nl
identity | User Logins for [Event.event_principal_user_userDisplayName] | Find the user logins for [Event.event_principal_user_userDisplayName] | true
identity | Typical IP Address for [Event.event_principal_user_userDisplayName] |  | false
Dynamic Placeholders in Saved Searches One of the most powerful features of this workflow is the ability to use SecOps placeholders within your stored searches. This allows you to create dynamic queries that adapt to the specific context of the alert.Static UDM Queries: You can store hardcoded UDM syntax with placeholders, such as metadata.event_type = &quot;USER_LOGIN&quot; user = [Event.event_principal_user_userDisplayName]	Natural Language (NL) Generation: You can also store a Natural Language prompt that the playbook converts into UDM syntax on the fly. For example, if you store the string Find the user logins for [Event.event_principal_user_userDisplayName], the system will generate the query for you with the appropriate placeholders filled in.This allows for reproducible searches that are specific to the entities involved in the current case without manual intervention. The Block Logic The automation logic is handled by a playbook block that accepts a &quot;Search Identifier&quot; (the keyword) as input.Lookup: The system checks the Data Table (e.g., table name &quot;UDM_Searches&quot;) to see if the submitted keyword exists.	Iterate: If multiple rows match the keyword (as seen with identity in the example above), the playbook loops through every matching search.	
Execute:	For Static Searches, the system executes the UDM query directly.		For NL Searches, the system first passes the prompt to the Google Chronicle Generate UDM Query action in the Chronicle integration to construct the complex UDM syntax automatically, and then executes it.		Consolidate: The results from all searches are then gathered and formatted into an “Alert” context value (e.g., search_results).	Output: The block then outputs a JSON with the results from each search in the following format: {&quot;short_description&quot;: [{&quot;results&quot;}], &quot;short_description_2&quot;: [{&quot;results&quot;}]} (a sketch of this lookup-and-execute flow appears at the end of this post) Block Example  The Table Widget: A Single Pane of Glass Once the searches are run, the results are presented in a unified widget. This viewer is designed to help analysts parse large volumes of data quickly without leaving the case queue. Key Widget Features Important UDM Field Toggle: A &quot;Quick Toggle&quot; checkbox allows analysts to instantly filter the view to show only the most relevant UDM fields, stripping away noise and metadata. The specific fields are configurable in the widget code.	Regex Search: The widget supports Regular Expression (regex) searching, allowing analysts to find specific patterns (like IP subnets or specific error codes) within the returned search results.	Field Selector: Analysts can customize their view by selecting specific columns to display, tailoring the table to the specific investigation needs.	Search Select: A dropdown populated with the search “short_description” from the block&#039;s output, allowing the user to toggle between the different predefined searches that were run for the case. Widget Example  Why This Implementation is Helpful Implementing this workflow provides immediate ROI for the SOC:Reproducible Searches: It ensures that every analyst, regardless of seniority, runs the exact same containment and investigation searches for a given alert type.	Singular Widget for Case Queue: Instead of having five different browser tabs open with different Chronicle searches, all relevant data is aggregated into a single widget within the case view.	Efficiency and Automation: By using placeholders and Data Tables, you separate the content (the search logic) from the process (the playbook). You can add new searches to the Data Table without ever having to edit the playbook code itself.By moving your search logic into Data Tables, you transform your playbooks from static scripts into dynamic workflows that adapt.
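As referenced above, here is a minimal sketch of the block&#039;s lookup-iterate-consolidate flow in plain Python. The helper functions stand in for the Chronicle integration actions and are hypothetical; this illustrates the logic, not the actual playbook implementation:
# Minimal sketch of the block logic; generate_udm_query and run_udm_search
# are hypothetical stand-ins for the Chronicle integration actions.
UDM_SEARCHES = [  # rows of the &quot;UDM_Searches&quot; Data Table
    {&quot;keyword&quot;: &quot;identity&quot;, &quot;short_description&quot;: &quot;User Logins&quot;,
     &quot;search&quot;: &quot;Find the user logins for [Event.event_principal_user_userDisplayName]&quot;,
     &quot;nl&quot;: True},
    {&quot;keyword&quot;: &quot;identity&quot;, &quot;short_description&quot;: &quot;Typical IP Address&quot;,
     &quot;search&quot;: &#039;metadata.event_type = &quot;USER_LOGIN&quot; user = [Event.event_principal_user_userDisplayName]&#039;,
     &quot;nl&quot;: False},
]

def generate_udm_query(prompt):
    return prompt  # stand-in for the Google Chronicle Generate UDM Query action

def run_udm_search(query):
    return [{&quot;query&quot;: query}]  # stand-in that returns dummy search results

def run_block(keyword):
    results = {}
    for row in UDM_SEARCHES:            # Lookup + Iterate over matching rows
        if row[&quot;keyword&quot;] != keyword:
            continue
        # Execute: NL prompts are converted to UDM syntax first
        query = generate_udm_query(row[&quot;search&quot;]) if row[&quot;nl&quot;] else row[&quot;search&quot;]
        # Consolidate + Output: results keyed by short_description
        results[row[&quot;short_description&quot;]] = run_udm_search(query)
    return results

print(run_block(&quot;identity&quot;))</description>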
            <category>Community Blog</category>
            <pubDate>Thu, 16 Apr 2026 21:52:52 +0200</pubDate>
        </item>
                <item>
            <title>The Agentic Shift: Redefining Threat Intelligence with Agentic</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-67/the-agentic-shift-redefining-threat-intelligence-with-agentic-7282</link>
            <description>Author: Dominic Chua Introduction to the Agentic Platform The cybersecurity industry is currently undergoing a fundamental transformation, moving away from static automation and reactive detection towards the era of Agentic Artificial Intelligence. While previous iterations of Generative AI (GenAI) functioned primarily as &quot;copilots&quot; or passive assistants for summarization and query translation, Agentic AI introduces autonomous operators capable of perceiving, reasoning, planning, and executing tasks independently. In the context of Google Threat Intelligence,  this shift transforms threat intelligence from a static database into a dynamic, interactive partner for security teams.  The Agentic platform, serving as the sophisticated conversational gateway to Google’s expansive threat intelligence ecosystem, empowers security professionals to engage with specialized autonomous agents to dramatically accelerate investigations and threat analysis. This environment is engineered to transform complex intelligence into a dynamic partnership, allowing for direct interaction with agents powered by state-of-the-art Large Language Models (LLMs) and grounded in our high-fidelity security data. The Agentic platform is architected upon a sophisticated framework that synthesizes state-of-the-art Large Language Models with our expansive internal security ecosystems.Retrieval Augmented Generation (RAG): Transcending the limitations of static training data, our agents leverage Retrieval Augmented Generation (RAG) to dynamically harvest high-fidelity, real-time intelligence from our proprietary Threat Analysis Service, ensuring every response is grounded in diverse and authoritative telemetry.	Specialized Agents: The architecture utilizes autonomous operators optimized for specific mission sets, ranging from an &quot;Intel Overview Agent&quot; for broad landscape queries to a &quot;Malware Analysis Agent&quot; dedicated to deep, file-centric investigations.	Knowledge Library: Our agents interface with a meticulously structured knowledge library, guaranteeing that generated insights are anchored in verified, timely threat intelligence and actionable security data.In the Agentic platform, prompts are treated as entities. The Prompts menu in Agentic provides access to a comprehensive list of all available prompts, including pre-built templates from our team and custom prompts that you can create to expedite repetitive tasks within Google Threat Intelligence. Prompt TemplatesPrompt templates listed under the Made by GTI section serve as a starting point for users to leverage the agentic assistant&#039;s capabilities for common and complex security tasks. Templates are tagged to provide insights on the prompt’s objective.  Your PromptsUsers have the ability to design their own custom prompt templates via the “+ Create Prompt” button. Upon selection, a window opens where you can input the prompt&#039;s title, a brief summary of its purpose, and the specific prompt text. The system also supports the use of variables, which are integrated by using the ${{variable_name}} placeholder.  Now that we have covered the platform&#039;s fundamental components, let&#039;s explore how to maximize its effectiveness. From Prompt Engineering to Context Engineering As the industry matures, the focus is shifting from simple &quot;prompt engineering&quot; to more sophisticated &quot;context engineering.&quot; Prompt EngineeringLet’s say you are trying to complete a sentence. 
You could use the following prompt: &quot;The lake is&quot;. Prompt engineering is the process of refining and optimizing the input (the &quot;prompt&quot;) provided to a Large Language Model (LLM) to achieve the most accurate, relevant, and high-quality output. The foundational model doesn’t “think” in the same way humans do; it predicts the next most likely word based on the patterns in the data it was trained on. This is important because one of the biggest risks with AI is hallucination - where a model provides false information… confidently. In the example above, the model is responding with a lot of information, but it’s not necessarily what we want. We can add more information to the prompt to guide the model to respond how we want: &quot;Complete the sentence to describe colour: The lake is&quot;. It’s important to optimize the prompts, not only for the sake of reducing hallucinations, but also to maximise efficiency and control costs, since using AI involves token-based costs.   There are advanced prompt engineering frameworks and strategies - we will not be going into details for each of them as they will be for another post - but below is a quick introduction to some of them.  Context Engineering: Context engineering, which acts as the foundational extension into Agentic AI, shifts our focus away from just what we ask the model, to what we surround the model with. It is defined as the practice of giving the model:	The right tools and information;			In the right format;			At the right time;			To accomplish the right task.	 	 In Google Threat Intelligence, this is operationalized through a sophisticated orchestrator that routes tasks to specialized agents. For example, an &quot;Intel Overview Agent&quot; might handle broad intelligence queries by searching OSINT and dark web sources, while delegating specific file-related inquiries to a &quot;Malware Analysis Agent&quot;. This model uses &quot;Smart Routing&quot; to ensure the agent has access to the precise context—whether it be Mandiant reports or real-time web search results—needed to provide an accurate answer. Furthermore, users can now utilize dynamic prompt variables to create highly customizable and repeatable workflows, ensuring consistency across a global security team.  In addition, users can upload files - these can be report templates, best practices, incident response playbooks, etc - for the AI model to be able to leverage them as context.    In the example above, we’re using the Singapore CyberSecurity Agency’s Ransomware Response Checklist as context. We will explore some use-cases using the Agentic Platform on Google Threat Intelligence. A Step by Step Guide for the Agentic Use-Cases Reacting and understanding alerts. Imagine this - we received an alert from our EDR on a suspicious file. It was not prevented or blocked, which means it’s currently running somewhere in your environment. How sure are we that it’s malicious or destructive?  We can easily run this prompt on Agentic. Input:I received this alert on my EDR. Is this hash malicious: 0a43705f5c10aad9317c49c81d9f12db4aee5e2557a39020973d25019955d345 Output:The file with hash 0a43705f5c10aad9317c49c81d9f12db4aee5e2557a39020973d25019955d345 is malicious (high severity) and is associated with the China-nexus espionage actor TEMP.Hex (also known as Mustang Panda).In line with our AI Principles, we provide citations for the information provided, as well as to show the Agent’s thinking and tools used.  
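If you prefer to script this same lookup outside the conversational UI, for example to enrich EDR alerts in bulk, a minimal sketch with the vt-py client could look like this (the client choice is an assumption on our part; Agentic itself needs no code):
import vt

# Look up a hash programmatically; the Agentic platform performs the
# same enrichment conversationally. The API key is a placeholder.
client = vt.Client(&quot;YOUR_API_KEY&quot;)
file_obj = client.get_object(
    &quot;/files/{}&quot;, &quot;0a43705f5c10aad9317c49c81d9f12db4aee5e2557a39020973d25019955d345&quot;
)
# last_analysis_stats is a dict of verdict counts, e.g. {&#039;malicious&#039;: 60, ...}
print(file_obj.last_analysis_stats[&quot;malicious&quot;], &quot;engines flag this file as malicious&quot;)
client.close()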
Reacting to an alert and making it proactiveNow that we know the malware’s attribution, we can take it a step further. We can leverage the Reverse Engineer Full Binary Tool to do a deep reverse engineering of the file, and to provide a report with citations to the debug code. do a full reverse engineering of the file hash 0a43705f5c10aad9317c49c81d9f12db4aee5e2557a39020973d25019955d345. give me detailed report with citations to the decompiled code The Reverse Engineer Full Binary Tool is the culmination of many months of research and testing, using Gemini as the foundational model to provide the reasoning. Much has been written on this research, such as Scaling Up Malware Analysis with Gemini 1.5 Flash, or Reversing at Scale: AI-Powered Malware Detection for Apple’s Binaries. Traditionally, to get the same type of result, you would need hours - from spinning up a VM, obtaining the malware sample, launching the binary analysis platform of your choice - and that’s even before analysing the binary. With Agentic, in the span of minutes you have quick insight into the malware’s decompiled code with junior-analyst-level reasoning. In the event that the binary is not available on the public corpus, we can upload the malware for private analysis using the “Analyze File” feature. This method leverages the Private Scanning feature to analyse the malware in a private, isolated environment. Nothing is shared with third parties. Now that we have analysed the binary’s content, we can leverage its understanding of the binary and create a YARA rule. This allows you to proactively hunt for samples that match this YARA rule. You can follow up with this prompt:  Input:Based on the decompiled code, create a YARA rule Output:  You can quickly operationalise this rule via the “Import into Livehunt” button, and be notified when a new observable uploaded to Google Threat Intelligence / VirusTotal matches this rule.  You can even automate the entire process, either via scripts or via an automation and orchestration platform such as Google SecOps, to hunt internally for the new hash and orchestrate remediation to quarantine the machine.   The entire process can be summed up in the diagram below.  Strategic Intelligence with AgenticGoogle Threat Intelligence provides a whole host of capabilities to help you better understand your threat landscape. One way we can be proactive is to leverage capabilities such as Threat Profile. Threat Profiles function as a sophisticated, bespoke lens designed to distill Google Threat Intelligence&#039;s vast global telemetry into actionable insights. By defining specific context—including targeted industries, target and source regions, malware roles, and actor motivations—the system automatically identifies and recommends the most pertinent entities, including Threat Actors, Campaigns, Malware &amp;amp; Tools, IoC Collections, Vulnerabilities, and Reports. This operationalizes a personalized Threat Profile tailored to your organization&#039;s unique risk signature, ensuring that security teams focus on the threats that matter most. The “Tell me why?” button allows you to ask a follow-up question using Agentic.   We can use a follow-up prompt to continue the conversation with Agentic, which provides the right context and background for the AI Agent to understand what is needed.  What are the TTPs and Malware used by this threat actor?     
Agentic remembers the context of the question - from the threat profile - and provides information that is relevant. In the example above, it provides a summary of their TTPs, a MITRE ATT&amp;amp;CK render of the Threat Actor, and the associated malware families that were known to be used by the Threat Actor.  Creating Hunt Hypotheses with AgenticWe have explored how we can analyse malware and how we can understand threat actors’ behaviour. Now we want to operationalise the intelligence we’ve received and hunt. Agentic has the ability to search the web and read webpages. We can utilise these capabilities and provide it with an open-source report, such as a report from www.thedfirreport.com, and ask Agentic to distill the information, provide a list of hypotheses we can test based on the report, and, more importantly, provide a list of data sources that we can use to test each hypothesis.  An example prompt can be as such: You are a threat hunter. Provide a list of hypotheses that we can test based on the following campaign report ${{LinktoReport}}. For each hypothesis, provide a list of data sources that we can use to test the hypothesis. &amp;lt;table_structure&amp;gt;| Procedure | Description | Logs ||-----------|-------------|------|| Short title | Detailed description with patterns | Relevant logs and Event IDs |&amp;lt;/table_structure&amp;gt;- Provide detailed technical information- Structure the information according to the provided table structure format- Include only actionable procedures for threat hunting- Focus on specific search patterns- Avoid generic or ambiguous information- Include citations   This allows you to quickly understand what the report is about, what different hunts you can run, and what logs you will need in order to hunt and detect. This is useful as you may uncover gaps in your log collection; for example, are you collecting your ActiveMQ audit logs, or Sysmon Event ID 1 logs?  With Agentic, we can quickly create hunt packages for our detection capabilities.  create a Sigma rule for the LSASS Memory Dumping procedure create a YARA-L rule for secops for the Tampering with Windows Defender procedure   We can quickly deploy these rules as part of our detection pipeline, and quickly turn the intelligence we’ve received into action.  The chart below shows the process we have taken.  Example Prompts Understanding MalwareTell me more about Ransomhub Ransomware. Include their TTPs, threat actors observed to have used them, and any known IOCs first seen in the past 30 days.  Understanding Campaigns (Strategic)Based on this report ${{link_to_report}}, summarise the report for a CISO.  Understanding VulnerabilitiesTell me about the ${{cve}} vulnerability. Include the CVE number, CVSS score, and a summary of the vulnerability. Explain in point form, who has been observed exploiting this vulnerability, why this vulnerability is critical, and the risk associated with it.   - Provide recommendations on what I need to do to remediate this vulnerability.   - Structure the response for a CISO You are a cybersecurity analyst focusing on vulnerability management, and you are expected to update your CISO on the latest vulnerability trends for the past ${{time_frame}}. Focus on vulnerabilities actively exploited in the wild and those with a high potential for causing significant impact. Provide a summary of each vulnerability, its potential impact, and recommended mitigation strategies. 
Use key insights such as CISA Known Exploited Vulnerabilities (KEV) catalog, Common Vulnerability Scoring System (CVSS), as well as the Exploit Prediction Scoring System (EPSS). Understanding TechniquesWhat is etherhiding and how do I detect it? Understanding TacticsWhat are some novel initial access tactics observed in the past year?  Explain the concept of ‘living off the land’ attacks and how they are designed to deliberately bypass endpoint detection mechanisms by using built-in system tools. Understanding Campaigns + Hunting Packages (Operational / Tactical)Based on this report ${{report_link}}, summarise the report for a threat intelligence analyst. Please output the following: - any ttps/behaviours in the report - any indicators / iocs in the report in a table format. If there are any TTPs that we can use, convert them into a cyber threat hunting package based on Sigma rules. Leverage Sysmon event data primarily when creating Sigma rules You are a threat hunter. Provide a list of hypotheses that we can test based on the following campaign report ${{report_link}}. For each hypothesis, provide a list of data sources that we can use to test the hypothesis. &amp;lt;table_structure&amp;gt;| Procedure | Description | Logs ||-----------|-------------|------|| Short title | Detailed description with patterns | Relevant logs and Event IDs |&amp;lt;/table_structure&amp;gt;- Provide detailed technical information- Structure the information according to the provided table structure format- Include only actionable procedures for threat hunting- Focus on specific search patterns- Avoid generic or ambiguous information- Include citations DFIR / IOC Investigationdo a full reverse engineering of the file hash ${{file_hash}}. give me detailed report with citations to the decompiled code You are an autonomous senior malware researcher. Your role is to produce a definitive forensic report for the file hash: ${{filehash}}. The report should be structured as such for a technical audience. &amp;lt;Analysis Rules&amp;gt;Behavioral Mapping: Cross-reference any discovered code logic with known sandbox behaviors. Explain how the code creates the observed system changes.Evidence Mandate: For every finding, you must include an [Evidence Source] tag (e.g., &quot;[Threat Intel Report]&quot;, &quot;[Decompiled Logic]&quot;, &quot;[String Analysis]&quot;).&amp;lt;/Analysis Rules&amp;gt;&amp;lt;output&amp;gt;Date:1. Executive Summary	1.1 To answer who, what, when, where, why, how. 2. Identification	2.1 Filename, File size, File type	2.2 MAC timestamps	2.3 Hashes (md5, sha1, sha256, fuzzy)	2.4 Signing Information (Certificates)	2.5 TrID - Packer info	2.6 Aliases3. Capabilities4. Dependencies5. Static Analysis	5.1 Top level components	5.2 Execution points of entry	5.3 Embedded strings	5.4 Code related observations (Reflection, Obfuscation, Encryption, Native code, etc)		5.4.1 Deobfuscation Goal: If the code is packed or obfuscated, the agent must identify the specific algorithm used (e.g., &quot;Custom XOR with 0x42&quot; or &quot;UPX 3.96&quot;).5.4.2 Suspicious API Logic: If CreateRemoteThread or WriteProcessMemory are present, the agent must trace back to the memory buffer being written and identify if it is an injected PE file or shellcode.5.4.3 Anti-Analysis Search: The agent must explicitly check for anti-VM and anti-debugger checks (e.g., checking for VBoxGuest.sys, IsDebuggerPresent, or timing attacks).5.4.4 Verification: Cite specific function offsets or memory addresses where these observations occur.	
5.5 File contents		5.5.1 package contents		5.5.2 files created/deployed on the system6. Dynamic Analysis	6.1 Network traffic analysis		6.1.1 DNS Queries		6.1.2 HTTP Conversations			6.1.2.1 If the traffic is over HTTP/S, the agent must attempt to identify the User-Agent string and any custom headers or URL patterns (e.g., /api/v1/gate.php).		6.1.3 TCP/UDP communication		6.1.4 Data Exfiltration: Search for signs of data staging or large outbound POST requests that suggest file theft.	6.2 File operations (files read, write, delete)	6.3 Services/Processes started	6.4 Data leaked6.5 Registry operations (registry keys created, modified and deleted)7. Supporting Data	7.1 Log files	7.2 Network traces	7.3 Screenshots	7.4 Other data (database dumps, config files, etc)	7.5 Function Flows as mermaid diagrams	7.6 Any associations or attributions (campaign, threat actor, etc)8. Conclusion9. Appendix	9.1 Any relevant code snippets	9.2 TTP Matrix 	9.3 List of IOCs / Mutexes in a table format		|IOC|Capability|		|--|--|&amp;lt;/output&amp;gt;&amp;lt;Critical&amp;gt;Analytical Depth Requirement: Before finalizing the report, perform a &#039;Self-Correction&#039; step. Review your findings for Sections 5 and 6. If you have identified a &#039;Capability&#039; (e.g., Keylogging) but cannot explain the &#039;Code Logic&#039; that enables it (e.g., a specific Hooking function), you must re-invoke your code analysis tools to bridge that gap. The report is only considered &#039;Sufficient&#039; if every Capability in Section 3 is mapped to a specific observation in Sections 5 or 6. &amp;lt;/Critical&amp;gt; Conclusion The Agentic shift fundamentally redefines the role of Google Threat Intelligence, moving beyond passive data ingestion to become an autonomous, proactive defense partner for security teams. By embracing Agentic in Google Threat Intelligence, it delivers three critical benefits to users:	Accelerated and Autonomous Operations: Agentic AI introduces autonomous operators that dramatically accelerate investigations and threat analysis, transforming tasks that traditionally take hours—such as deep binary reverse engineering—into actionable insights delivered in minutes.			Precision Gained Through Context Engineering: The platform achieves maximum accuracy and minimizes the risk of AI hallucination by dynamically grounding every insight in high-fidelity, real-time telemetry from Google’s proprietary threat intelligence stores.			Intelligence Operationalized into Proactive Defense: GTI enables users to quickly operationalize threat intelligence. Security teams can instantly convert analysis (like decompiled code) into deployable hunt packages such as YARA rules, YARA-L rules, and Sigma rules, allowing them to proactively hunt and detect new threats based on the latest attacker TTPs.	More information about Agentic can be found in our official documentation as well as here - https://gtidocs.virustotal.com/docs/agentic-platform Information referenced in the guide. 	Singapore CyberSecurity Agency Ransomware Response Checklist			AI Principles			Private Scanning			Scaling Up Malware Analysis with Gemini 1.5 Flash			Reversing at Scale: AI-Powered Malware Detection for Apple’s Binaries. 	 Additional posts on Agentic: 	Agentic GTI Prompting	 </description>
            <category>Google Threat Intelligence</category>
            <pubDate>Thu, 16 Apr 2026 21:44:47 +0200</pubDate>
        </item>
                <item>
            <title>Adoption Guide: Getting Started with Bindplane</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-66/adoption-guide-getting-started-with-bindplane-7281</link>
            <description>Author: Gaurav Sood  Modern security operations require a standardized and efficient way to collect, refine, and export telemetry data from diverse environments into Google Security Operations (SecOps). This document provides a comprehensive adoption guide for Bindplane to help teams understand the technical details involved in a Bindplane implementation. By following the structured implementation workflow and field-tested best practices detailed here, teams can ensure a robust, scalable, and cost-effective logging architecture. High-Level Bindplane Concepts Bindplane - Where does it fit in the SecOps ecosystem?Bindplane is a telemetry pipeline designed to collect, refine, and export logs from various sources into Google Security Operations (SecOps). It provides a management layer (the Bindplane console) and a data transport mechanism (OTEL agents) that standardizes how security data reaches Google SecOps.In the Google SecOps ecosystem, it fits into the Data Ingestion layer in the following ways:1. Unified Management Layer	Centralized Control: It provides a management server that allows you to manage all your collector deployments, across cloud or on-premises environments, from a single interface.			Fleet Management: It simplifies the configuration, starting, stopping, and monitoring of collectors (agents) across your entire infrastructure.	2. Data Refinement &amp;amp; OptimizationIt helps manage the cost and quality of data before it is stored:	Filtering: Users can drop/filter logs that match specific regular expressions (e.g., noisy debug logs) to reduce ingestion volume.			Transformation: It can add, move, or rename fields and parse data (KV, JSON, CSV, XML) to ensure logs are in a usable format for security analysts. 			Redaction: The Enterprise edition supports PII masking to ensure sensitive data is removed before ingestion.	Bindplane License/Edition tiersSecOps customers are entitled to the Bindplane license tiers below:	Bindplane (Google Edition): Included at no extra charge for all Google SecOps customers.			Bindplane Enterprise (Google Edition): Included for Google SecOps Enterprise Plus customers, offering advanced features like PII masking and deduplication.	The primary difference between the two versions is that Bindplane Enterprise (Google Edition), which is included with Google SecOps Enterprise Plus, adds advanced features like PII masking, log deduplication, and the ability to route data to non-Google destinations during SIEM migrations (for 12 months). Components of a typical Bindplane deploymentA typical Bindplane deployment consists of two main components that work together to manage your data ingestion into SecOps.	Bindplane Collector: This is an open-source agent (based on the OpenTelemetry Collector) that you install on-premises or in the cloud. It is responsible for the actual &quot;work&quot;: collecting logs from sources (like Windows Event Logs or Syslog), refining them, and exporting them to Google SecOps.  			Bindplane Server: This is a centralized management platform. It allows you to manage, monitor, and configure your entire fleet of collectors from a single interface (either in the cloud or on-premises). 	Bindplane Ingestion ArchitectureBindplane acts as the &quot;bridge&quot; between your log sources and Google SecOps through several architectures:	Direct to API: One or more Collectors can send logs directly to the Google SecOps ingestion API.			
Gateway Mode: For large-scale deployments, collectors can send data to a gateway collector that aggregates and routes the data. This architecture is the recommended best practice for high-throughput environments. Below is a high-level deployment architecture. Please note that you can choose any port numbers you like as listening ports on the collector or gateway. Setting up / Deploying Bindplane. Just like any enterprise deployment, setting up Bindplane involves planning. Successful implementation requires coordination between network, security, and infrastructure teams to ensure all prerequisites are met before the first collector is deployed. Pre-Implementation Checklist. Please refer to the important checklist items below. Deployment Architecture: Defining the structural layout of Bindplane components is a critical first step based on organizational scale and security requirements. Bindplane Management Server Architecture: Organizations must choose between a self-hosted instance (installed in the cloud or on-premises) for full control, or a SaaS offering for reduced management overhead. OTEL Architecture: Determine whether to deploy individual OTEL agents on every host or utilize a gateway collector model. Gateway collectors are recommended for high-throughput environments and large-scale deployments to aggregate and route data efficiently. These gateways should be placed behind a load balancer to ensure high availability and prevent disruptions in log transmission. Also, when the collectors are deployed in gateway mode, the Bindplane collector configuration on each of the individual collectors should be the same. SecOps Destination: Select the appropriate ingestion endpoint for your environment. You can choose the Data Plane API (HTTPS) for standard web-based ingestion or the Legacy Ingestion API (gRPC) if your deployment requires batch-level ingestion labels or specific backward compatibility. Capacity Planning: This will be the most important exercise to determine how many collectors you will need. Please refer to the Bindplane documentation on capacity planning: https://docs.bindplane.com/production-checklist/bindplane-otel-collector/sizing-and-scaling Bindplane License: Obtain a Bindplane license key from your Google contact. Ingestion API keys: Get the ingestion API key based on the ingestion endpoint that you select. For the Data Plane API (HTTPS), use the instructions below to create the required ingestion key: https://docs.bindplane.com/how-to-guides/google-secops/google-secops-configuring-the-https-dataplane-api-protocol For the Malachite ingestion endpoint (gRPC): a) Sign in to your Google SecOps console. b) Navigate to SIEM Settings &amp;gt; Collection Agents. c) Click Download Ingestion Authentication File. This file contains the credentials (apikeys.json) needed to authenticate your API client. Map log sources: Map all log sources to their supported Bindplane source types (such as Syslog, Windows Event Logs, or TCP) and decide whether to deploy agents directly or use a centralized gateway collector for aggregation. A list of all Bindplane sources can be found here: https://docs.bindplane.com/integrations/sources A list of all SecOps log types can be found here: https://docs.cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers Also map the port numbers that you will be using for each of the log types.
An outcome of this exercise may look something like this (SecOps Log Type / Bindplane Source / Port Number / OTEL Deployment): CHECKPOINT_FIREWALL / Syslog / 8000 / Collector Gateway; CISCO_ASA_FIREWALL / Syslog / 8001 / Collector Gateway; WINEVTLOG / Windows Event / 8002 / Single Collector Agent. Network Connectivity: Refer to the Bindplane documentation for network connectivity requirements based on your deployment architecture. Implementation Workflow. Here we discuss the implementation steps at a high level. 1. Setup Management Server: Install the Bindplane OP server, which serves as the centralized control plane for your fleet. You can choose to install a self-hosted instance within your environment for full architectural control, or utilize the SaaS offering at app.bindplane.com to reduce management overhead. During setup, you must initialize the server with a license key; Google SecOps customers are entitled to a &quot;Google Edition&quot; key, while Enterprise Plus customers can obtain a &quot;Bindplane Enterprise (Google Edition)&quot; key to unlock advanced features like PII masking and non-Google routing. Bindplane provides flexible options for deploying self-managed gateways across various environments: virtual machines, Kubernetes, Docker containers, Google Cloud Marketplace, AWS EC2 instances, and Ansible-based automation. Detailed instructions for setting these up can be found in the Bindplane documentation: https://docs.bindplane.com/deployment 2. Deploy Collectors: Install the Bindplane agent (based on the OpenTelemetry Collector) on target endpoints or dedicated gateway servers using the provided installation scripts for Linux or Windows. The installation packages for each platform can be found in the Bindplane console. Once you select the platform from the dropdown (in this case Linux), you get the exact command to run on the endpoint for installation. For large-scale environments, it is a recommended best practice to deploy collectors in gateway mode to aggregate and route data efficiently. These gateway collectors should be positioned behind a load balancer to ensure high availability and prevent disruptions in log transmission. During installation, several command-line switches are available to customize the deployment, such as -x for specifying a proxy server and -k for applying resource labels used for fleet management. Once the agent is installed, it registers itself with the Bindplane OP management server. 3. Configure Pipelines: Use the Bindplane console to create configurations that define your sources, processors (such as the SecOps Standardization processor for mapping log types), and destinations. A pipeline consists of three core nodes: Sources, Processors, and Destinations. Sources (The Origin). Sources define where the telemetry originates. A list of all Bindplane sources can be found here: https://docs.bindplane.com/integrations/sources For Google SecOps, it is a best practice to collect logs in their raw, unparsed format (e.g., enabling the &quot;Raw Logs&quot; option for Windows Events). Common Sources: Windows Events, Syslog, TCP/UDP, Linux Files, and SQL databases. Action: Click Add Source in the pipeline configuration card and select the appropriate source type. Your environment may require multiple sources within a single configuration.
For instance, when collecting logs via syslog from both an ASA Firewall and a Squid Web Proxy, you must account for their unique log types in SecOps: CISCO_ASA_FIREWALL and SQUID_WEBPROXY. To handle this, add two distinct syslog sources to your configuration, assigning a unique listening port to each log type. In the Bindplane console, open the configuration and add the first Syslog source, naming it ASA Firewall. This source will listen on TCP port 8000 for incoming ASA logs. Optionally, for security and compliance requirements, you can enable TLS for secure log transfer to this source. Leave all other settings at their defaults and hit Save. Similarly, add one more Syslog source for the Squid Web Proxy that listens on TCP port 8001. Next up, we will add a SecOps destination to the configuration. Destinations (The Target). The destination is the final endpoint for your data. Configuration: Click Add Destination and select Google SecOps. We will name the destination Google-SecOps. The following settings are all required. Protocol: Select HTTPS to send logs to the Chronicle Data Plane API, or gRPC to send logs to the Malachite ingestion API endpoint. Region: The region where your SecOps tenant is deployed. Authentication Method: json. Credentials: Paste the contents of the JSON credentials file. Customer ID: The customer ID associated with your SecOps tenant. GCP Project Number: The GCP project number tied to your SecOps tenant. Within the advanced settings, make sure Enable Retry on Failure is selected. Once you have made these settings, save the configuration. Next up, we will add a processor after each source before sending the logs to SecOps. Processors (The Transformation). Processors sit between your source and destination to filter, transform, or enrich data. A list of all the processors supported by Bindplane is here: https://docs.bindplane.com/integrations/processors While Bindplane offers a vast array of processors, the Google SecOps Standardization processor is the essential component required for every source that transmits logs to SecOps. Use this processor to explicitly set the log_type (ingestion label); this tells Google SecOps which specific parser to use (e.g., SQUID_WEBPROXY, CISCO_ASA_FIREWALL). It also allows you to add Namespaces (to separate data domains) and custom ingestion labels for metadata. Within your configuration, click to add a processor for the ASA Firewall source. Search for Google SecOps Standardization, select the SecOps Standardization processor, and select Add Log Type. Next, set the log type to CISCO_ASA_FIREWALL. To ensure SecOps can normalize incoming logs, it is critical to select the appropriate log type; failing to do so will result in normalization errors. A comprehensive list of compatible log types is available here: https://docs.cloud.google.com/chronicle/docs/ingestion/parser-list/supported-default-parsers Additionally, you can utilize this processor to configure ingestion labels and Namespaces. These attributes are mapped directly into their corresponding UDM fields during the normalization process. Apply these same steps to the Squid Web Proxy source you previously established. Once completed, your final configuration should include both sources, each with its own SecOps Standardization processor, feeding the Google-SecOps destination. 4. Monitor &amp;amp; Optimize: The configuration is now complete and ready to be pushed out to the collectors. Click Add Agents to push this configuration to the respective collector or gateway.
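After pushing the configuration, it can be useful to smoke-test a listener with a synthetic event before pointing production devices at the pipeline. The following is a minimal sketch, not part of the original guide, that opens a TCP connection to the ASA syslog source defined above and writes one test line; the gateway hostname is a hypothetical placeholder to replace with your own:
import socket

# Placeholders -- replace with your gateway address and the port mapped
# to the target log type (TCP 8000 = the ASA Firewall source above).
GATEWAY_HOST = &quot;bindplane-gw.example.internal&quot;  # hypothetical hostname
ASA_PORT = 8000

# One RFC 3164-style line; the syslog source treats each newline-terminated
# line as a separate log record.
message = &quot;&amp;lt;134&amp;gt;Jan 01 12:00:00 asa-test %ASA-6-302013: Built outbound TCP connection test\n&quot;

with socket.create_connection((GATEWAY_HOST, ASA_PORT), timeout=5) as sock:
    sock.sendall(message.encode(&quot;utf-8&quot;))
print(&quot;Test log sent; check the SecOps console for a CISCO_ASA_FIREWALL event.&quot;)
If the connection is refused, verify that the configuration was pushed successfully and that no firewall sits between the test host and the listening port.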
Once the configuration is pushed out, you can start sending logs to your Bindplane collector or gateway. Validate data flow in the SecOps console and utilize Bindplane processors for advanced data reduction, PII masking, or filtering to manage ingestion costs as needed. Lessons from the field / best practices. Drawing from successful enterprise deployments, the following best practices ensure a robust and efficient Bindplane architecture for Google SecOps. Prioritize Gateway Mode for Scale: For high-throughput environments, deploying collectors in gateway mode is a recommended best practice. This aggregates data efficiently and, when positioned behind a load balancer, ensures high availability for log transmission. Thorough Review of the Pre-Implementation Checklist: The checklist provided above has been carefully curated; it is essential to work through it and verify that every listed requirement is fully met. Preserve Raw Log Integrity: For effective normalization in Google SecOps, it is a best practice to collect logs in their raw, unparsed format (e.g., enabling the &quot;Raw Logs&quot; option for Windows Events). This allows the SecOps parsers to function as intended without interference from pre-parsing. Standardize with Internal PKI: When configuring TLS for secure log transfer, it is recommended to use certificates issued by an internal PKI over self-signed certificates. This approach is more secure, easier to manage, and typically aligns with mature organizational security policies. SecOps Standardization processor: Always use the Google SecOps Standardization processor to explicitly set the log_type. This ensures that SecOps utilizes the correct parser (e.g., CISCO_ASA_FIREWALL), preventing normalization errors. Additional Best Practices by Bindplane: Please review and incorporate the best practices documented by Bindplane wherever possible: https://docs.bindplane.com/how-to-guides/google-secops/using-google-secops-with-bindplane-best-practices Conclusion. The infographic below gives a quick summary of this guide. Bindplane serves as a critical telemetry pipeline within the Google SecOps ecosystem, standardizing data ingestion through its management server and OpenTelemetry-based collectors. By following the structured deployment workflow, from server setup and collector deployment to pipeline configuration, organizations can efficiently refine, filter, and route security logs. Adhering to field-tested best practices, such as utilizing gateway mode for high-throughput environments and implementing the SecOps Standardization processor, ensures a robust, scalable, and cost-effective logging architecture for security operations. </description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 20:32:11 +0200</pubDate>
        </item>
                <item>
            <title>Project Launch: Building a Least-Privilege Auditor (Feedback Appreciated!)</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/project-launch-building-a-least-privilege-auditor-feedback-appreciated-7222</link>
            <description> Hi everyone,I’m currently in the process of setting up my Google Cloud environment and awaiting my monthly innovator credits. In the meantime, I’m not letting the grass grow under my feet! I’m kicking off my first security-focused project: The Automated IAM &quot;Least Privilege&quot; Auditor.The Goal: I want to build a tool (using Python and Cloud Functions) that audits IAM policies across a project and flags any identities holding the primitive &quot;Owner&quot; or &quot;Editor&quot; roles where a more granular predefined role would suffice.While I wait for my credits to clear, I’m focusing on:	Mapping out the service account hierarchy.			Writing the logic to parse IAM policy JSON objects.			Designing a dashboard layout for security alerts.	My Questions for the Experts:	For a project like this, do you recommend pulling data via the Cloud Asset Inventory API or directly through IAM Policy calls?			Are there specific &quot;hidden&quot; permissions you&#039;ve seen cause the most trouble in production environments?	I’m excited to share my progress and GitHub repo once the foundation is laid. Looking forward to your insights! </description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 19:37:04 +0200</pubDate>
        </item>
                <item>
            <title>Adoption Guide: Selecting the Appropriate Method for Ingesting Data into Google SecOps</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-66/adoption-guide-selecting-the-appropriate-method-for-ingesting-data-into-google-secops-7280</link>
            <description>Author: Jeremy Land Efficient data ingestion is the cornerstone of any successful Security Operations Center (SOC). In Google SecOps, how you bring data into the platform directly impacts your detection latency, data enrichment quality, and overall operational costs. This guide provides a framework for selecting data sources based on the value they bring to your desired security outcomes, followed by an overview of the available ingestion methods, with strengths, limitations, and critical differentiators that will allow you to select the most appropriate method for your data. Rather than going deep into every configuration option for every method, this guide helps you determine what data you should be ingesting and what approach to take; for the specifics of configuring any particular path, refer to the documentation for that method. Planning for Data Ingestion: A Use-Case Driven Approach. Rather than ingesting every available log, a successful strategy begins with the desired outcome. Before configuring any ingestion path, evaluate your data against the specific security outcome you are trying to achieve, then work backwards from that goal to identify the necessary detections and the specific event and context data required to drive that result. Outcome → Playbook → Detection → Events &amp;amp; Entity Context. Once you have the goal and an idea of what data is required to drive your outcome, you should broadly classify that data into two categories. Events: Records of things that happened (e.g., firewall logs, authentication attempts, user-reported phishing). Context: Additional details about entities involved in events (e.g., user department from Active Directory, asset criticality from a CMDB, or intelligence about known malicious activity from a particular IP). SIEM vs. SOAR: In addition to sorting your data by events vs. context, you should also evaluate your requirements to determine where in the pipeline the data should be inserted. SIEM: Data ingested to the SIEM is parsed into UDM and stored for your tenant's data retention period. The parsing process allows log data to be written as events and as context data. The ingestion process also includes a built-in UDM enrichment process that automatically adds context information onto events during ingestion; these enriched fields are then available for search, dashboards, and detections. These components and processes are optimized for quickly and efficiently sorting through large volumes of data. SOAR: The primary SOAR data ingestion use case is ingesting context after an alert has been created. This is driven by actions, executed in playbooks (or manually), that are attached to alerts. This method allows for focused queries for details about a particular entity or event. Once this information has been pulled into the playbook, it may be processed and used to drive additional playbook conditional logic or actions; it can also be saved to the case wall. The SOAR can also ingest alert data through connectors or webhooks. In the information security field, alerts and events are typically treated as distinct, but conceptually an alert is a type of event. 
While alerts from other systems can be ingested to either SIEM or SOAR, the benefits of ingesting them to the SIEM (UDM enrichment, using the detection engine for filtering/allow-listing, inclusion in UEBA risk scoring, and the ability to include those alerts in composite detections) drive the general recommendation to ingest those events to the SIEM. As you review the data required for your desired outcome, you'll need to split things up based on where that data is needed in the detection and response pipeline. Data that is strictly required to generate an alert will need to be ingested into the SIEM; this will typically include all your events and may include most of your context data. The data that tends to be a great fit for the SIEM is: All events (things that happened). Anything that can drive UDM aliasing and event enrichment; since this data ends up attached to events, it can allow you to use single-event rule logic in more detection scenarios and is immediately available to analysts during investigation. Frequently referenced context data sets, like UserID-to-email, group, or org structure mappings and asset relations. Large data sets, like IOC risk scores, which can take advantage of the optimizations in the SIEM search/detection processes. Data sources built to stream updates as they occur. Data sources where licensing supports less frequent queries with larger result sets. Context data that is required for refinement of the alert, conditional logic for response actions, or identification of false positives could be ingested to the SIEM or by an action in a SOAR playbook. Deciding which is more appropriate can depend on many factors; reference the following broad guidelines for data that tends to be a good fit for ingestion by actions in SOAR. The context data provides insight during response but is not required for triggering an alert. The context source is built for querying individual records, either from a technical or licensing perspective. The context data is based on analysis of artifacts from a detection (e.g., running a file detected by EDR through sandbox analysis). Individual context results are not likely to be broadly referenced across multiple alerts. Typically, the data with the least obvious answer to the SIEM vs. SOAR question will be context that is not strictly required for detection logic but impacts the false positive rate. For these sources you will need to consider the relative effort to ingest to SIEM vs. SOAR and the value of preventing the false positive vs. creating a case that is quickly closed during triage. If a data source would eliminate all false positives for a detection, but you only expect 1-2 detections a month and there is already a Response Integration available in the Content Hub, it would make more sense to ingest that data in SOAR with a playbook action. If a data source can eliminate a subset of false positives in the alert logic and prevent hundreds of cases a month, then ingesting it as SIEM data is probably the best answer. Example use case 1: Say you have the desired outcome of blocking illicit admin changes. Based on that outcome, you draft a playbook that looks for an approved change request, flags the modification for manual review, or potentially disables the account that made the change. That playbook might have a few rules that trigger it: one that looks for admin escalations from unusual source IPs, and one that looks for admin escalations from people not in IT.
The escalations from unusual source IPs would need authentication or elevation logs with a source IP and a way to determine if that address is unusual. SecOps has built-in geo_ip enrichments and the ability to generate prevalence data, so we only need to worry about the auth logs. The other rule needs auth/escalation logs too, but it also needs a way to determine if users are in IT, so we'll need a source of user context that includes departmental organization or maybe just group membership. That approved change list might seem like a good candidate for ingestion to help reduce manual activity and filter out false positives for approved work; we can handle that part either in the playbook or in the detection rule. When you come across data that could logically fit in multiple locations, consider whether it would be useful for other detections and threat hunting, and whether you need to do advanced queries or analytics on it. If you can get your answer to &quot;Was this a scheduled, approved change?&quot; with a single API query, then that is probably a good fit for an action in a playbook. But if it is something that we'll be referencing frequently or using for many other detections, like the user group membership, then it makes sense to ingest it as context data in the SIEM. Example use case 2: Another good example would be the desired outcome of identifying and blocking data exfiltration over DNS. Working backwards from that goal, we would think about the playbook, which would look for other hosts making lookups to the same domain (hosts that may not have met the detection criteria), then potentially put detected source hosts in quarantine and block further DNS lookups (or any communication) to the target domain. That playbook would be driven by multiple alerts: one that looks for excessive queries to uncommon domains (using the metric.dns_queries_total metric function), another rule that looks for subdomain length and/or unique subdomains, and maybe a few others. Depending on your infrastructure configuration, you'll get the DNS queries from firewall, DNS server, or endpoint logs. You may also have IOC context coming in that can help identify potentially suspicious target domains, but that IOC feed can't contain every possible domain target, so you'll need to add a step at the beginning of your playbook to do an investigation of the target domain. In this scenario you would ingest your firewall, EDR, and DNS event data and your IOC context into the SIEM, but you would also need to set up a SOAR response integration to do that real-time investigation into the target domain and pull that additional context info. SOAR Response Integrations. Ingestion of context data into SOAR is implemented via actions that are built into Response Integrations; each of these has specific individual configuration requirements. These are visible on the platform by searching the Content Hub, or from the doc portal. For data sources where there is no pre-built integration available, it is possible to build a custom integration with custom actions to fetch the required data. For a closer look at these integrations and actions, check out Adoption Guide: Understanding Integrations Actions in Depth. SIEM Ingestion. The remainder of this guide will focus on ingesting data to the SIEM. Core Concepts. Understanding these foundational concepts is critical, as they are often difficult to change once data has been ingested. LogTypes: These labels determine which parser is used to map raw logs to the Unified Data Model (UDM).
Selecting the correct LogType is essential for ensuring your data is searchable and usable in detection rules. Ingestion Labels: Key-value pairs defined during ingest that can be used for search, rules, and Data RBAC scopes. There are two special labels to call out as being particularly useful during ingestion. Namespaces: A special label used to ensure that events and context data are correctly associated, particularly in environments with overlapping IP spaces. Ingestion_source: In addition to labeling events, this label is also available in ingestion metrics and Cloud Monitoring, which radically simplifies monitoring for log sources going quiet or comparing log volumes from different sources. Time and Latency: SecOps typically uses the timestamp extracted from the log itself. Data arriving more than three hours after the event is likely to cause delays in detections. Quota and Burst Limits: While licensed by daily ingest quota, SecOps also employs burst limits to protect platform stability. If limits are exceeded, ingest may be paused, requiring your ingestion tools to support caching and retries. The Data Ingestion and Health dashboard in the platform has visualizations to help you identify your ingest trends against these limits and whether you may be approaching them; it is important to understand what happens if they are encountered. Default Methods. By Log Type: Most of the default log types have documented recommended methods for ingest; these steps are linked here: https://docs.cloud.google.com/chronicle/docs/ingestion/default-parsers/default-parser-configuration These are the most common ingest methods utilized for each log source, but they are not the only method; depending on your environment and constraints, you may be able to ingest more efficiently by another method. By Log Location: If the log type in question does not have a documented default ingest method, the current location of those logs will typically be a key driver in selecting the ingest method. Use the mapping below as a starting point, then evaluate that method for its effectiveness with the specifics of your data and detection requirements. Source Location and Recommended Method: GCP Logging: GCP Direct Ingest. SaaS Services: Third-Party API Feeds. Cloud Storage (General): Storage Feeds (e.g., S3, GCS, Azure Blob). Cloud Storage (Low Latency): Message Feeds (e.g., SQS, EventHub, Pub/Sub, Event-Driven Storage). On-Premise Sources: Collection Agent (Bindplane). Overview of Available Methods. The wide variety of available methods can be broadly grouped into four categories. GCP Direct: Configured in the GCP console for GCP-native logs. Feeds: Configured in the SecOps console to fetch logs from a wide variety of sources. The Ingest API: Events forwarded to SecOps from another source. Agent-based: Software for endpoints or collector systems to process logs and ship them to SecOps. Each of these categories has a variety of configuration options and log-source-specific settings; the intent of this guide is not to detail every step for configuring these but to provide general guidance on the differences that may drive you to select, or not select, a particular method. For details beyond the overview presented here, I have provided links to documentation for each method. GCP Direct Ingest. The first of these categories has SecOps directly pick up logs from the GCP logging service for the organization attached to SecOps.
Strengths: Supports the most common security event and context data from GCP and automatically selects the appropriate log types. Can be configured for multiple projects or organizations. The Log Scoping tool is available to auto-generate the export filter: https://docs.cloud.google.com/architecture/security-log-analytics#log_scoping_tool Limitations: Does not support namespace or ingestion KV labels. A broad selection of log types is supported, but not all. The export filter uses 'GCP logging query language' syntax, but only a subset of its capabilities. Supporting documentation: https://cloud.google.com/chronicle/docs/ingestion/cloud/ingest-gcp-logs Feeds. The second broad ingestion method is 'Feeds'; this is where SecOps is &quot;doing something&quot; to go get your logs. Initially this capability was limited to fetching logs from third-party APIs or cloud storage buckets, but it has been expanded to include push-based queues and webhooks. There are several subtypes of feeds, but all feeds have these capabilities in common: Each feed is configured with a name, a log type, an authentication mechanism, and (optionally) a namespace and ingestion labels, in addition to per-feed configuration data. Changes to feed configuration require all fields (including credentials) to be resubmitted. The feed management console includes the current status and 'last succeeded on'. Different feed types have different ingest schedules: https://cloud.google.com/chronicle/docs/reference/feed-management-api Consider the feed ingest schedule and your use case requirements when planning your ingest: https://cloud.google.com/chronicle/docs/administration/feed-management Note: We are currently in the process of migrating Cloud Storage and Streaming feeds to a newer V2 feed type and deprecating the legacy V1 type. The V2 feeds leverage the Storage Transfer Service and offer improved reliability, scalability, and performance. More details here. Feeds: Cloud Storage and Streaming. For ingesting logs from cloud storage with the three major cloud providers, you can use bucket-style feeds or messaging/streaming-based feeds. Amazon: S3, Firehose, SQS. Azure: Blob, Event Hub. GCP: Cloud Storage, Pub/Sub Push, Event-Driven Storage. For storage-style feeds, specify the storage path; whether it is a file, a folder, or a folder with nested subdirectories; and whether or not you want to delete files from the source after transfer. These feeds run their ingest schedule every 15 minutes and ingest anything that has been added or modified since the last run. For messaging- and streaming-based feeds, the configuration is effectively a link to the queue, the storage backing it, and credentials. You also have the option of configuring delimiters if multiple logs will be included in the same message. These feeds run on a nearly instantaneous schedule and pull new logs as each message is sent from the source. In addition to any egress fees for getting your data out of Azure or AWS, be mindful that there is typically a cost associated with operating the message queue. Even though the queue/streaming-based methods will get your data into SecOps faster, they may not always be the correct call based on the timeliness requirements of your use case.
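For the streaming pattern, the producing side is just a Pub/Sub publish. Below is a minimal sketch, not from the original guide, of a source publishing one raw log line to the topic backing a 'Google Cloud Pub/Sub Push' feed; it assumes the google-cloud-pubsub Python client, and the project and topic names are hypothetical placeholders:
from google.cloud import pubsub_v1

# Placeholders -- substitute your own project and the topic your feed consumes.
PROJECT_ID = &quot;my-logging-project&quot;
TOPIC_ID = &quot;secops-raw-logs&quot;

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# Each message body is raw log text; SecOps parses it according to the
# log type configured on the feed.
raw_log = &#039;{&quot;event&quot;: &quot;user_login&quot;, &quot;user&quot;: &quot;jdoe&quot;, &quot;result&quot;: &quot;success&quot;}&#039;
future = publisher.publish(topic_path, data=raw_log.encode(&quot;utf-8&quot;))
print(&quot;Published message id:&quot;, future.result())
Delimiters configured on the feed would let a single message carry several newline-separated log lines instead.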
Note: There are two options for feeds that leverage GCP Pub/Sub. Both options have similar performance but have key differentiators in how authentication is handled. Understanding the differences in authentication will be key to selecting the best method if your log source and SecOps instance do not exist in the same GCP organization. Using the 'Google Cloud Pub/Sub Push' method, the log message is written to Pub/Sub and a push-based subscriber forwards that message to the SecOps instance. This push-based subscription must authenticate as a service account with permissions to ingest logs into your SecOps instance. When using the 'Google Cloud Storage Event Driven V2' method, logs are written to a storage bucket, which is then configured to notify Pub/Sub of the log write. This Pub/Sub message includes details about the entry that was written to the storage bucket instead of the actual log text. SecOps subscribes to this topic with a pull-based subscription and authenticates with the Storage Transfer Service agent account for your SecOps project. Strengths: Eliminates the need for additional forwarding/processing infrastructure. Highly redundant and scalable. Limitations: Storage buckets/blobs have 15-minute query intervals, which may not be appropriate for select log sources based on your detection use case. Limited support for path wildcards on blobs/buckets. Additional costs are associated with operating Pub/Sub or message queues, and with blob/bucket storage. Supporting documentation: https://cloud.google.com/chronicle/docs/administration/feed-management#storage-example Feeds: HTTP(S) File. HTTP file pulls are a very basic setup. They have similar capabilities to the cloud storage feeds, just targeting an HTTP file server instead. Strengths: Low-complexity direct calls to hosted file locations. Limitations: 15-minute ingest schedule. No authentication. Supporting documentation: https://cloud.google.com/chronicle/docs/reference/feed-management-api#http Feeds: Third-Party API. The next major type of feed is the third-party API feed; these are integrations built to pull logs directly from the APIs of many popular cloud services. Configuration varies based on what API is being targeted but is typically straightforward, needing the URL for your tenant and credentials. For many of these, the 'Ingest logs from specific sources' section of our documentation has directions on where to go in the source system's console to create the API keys, and it usually indicates what permissions are required. The ingest schedule for these varies based on the target API and is documented here. These tend to have a slower daily schedule for context info and faster one- or five-minute schedules for event data, but make sure you take a look when setting these up and confirm they support your overall Mean Time To Respond (MTTR) goals. Strengths: Runs on Google infrastructure. Minimal configuration required. Limitations: Ingest schedule may not meet detection requirements. Most integrations lack filtering options. Not available for all third-party platforms. Supporting documentation: https://cloud.google.com/chronicle/docs/reference/feed-management-api#api-log-types Feeds: HTTPS Webhook. The last feed type is the HTTPS webhook.
Webhooks are broadly supported across a wide variety of products for sending notifications. We accept authentication either as URL parameters or as headers, and you can configure delimiters to split a single message into multiple events. Strengths: Broad compatibility. Supports authentication as headers or parameters. Configurable delimiters. Limitations: Simple delimiters only; no multi-event JSON. 4 MB/message and 15k QPS per endpoint. Note: The parser maximum is 1 MB per log line; the 4 MB message limit is intended to support delimited events. Error messages are returned with the HTTP response but are not visible in the SecOps console, which can make troubleshooting more challenging if your log source does not log the response. You are responsible for caching and retries on HTTP 4XX/5XX errors. Supporting documentation: https://cloud.google.com/chronicle/docs/administration/feed-management#setup-webhook Ingest API. For many pre-existing logging pipeline tools, the Ingest API provides a direct path for sending raw logs or pre-formatted UDM. This is also an excellent way to ingest events and context data from custom applications. Strengths: Many third-party log pipeline tools have pre-built modules that use the SecOps Ingest API as a destination. Duplicate batch protection. Limitations: 1 MB per batch*. Batching must be configured by log type. Credentials are created by support*. You are responsible for caching and retries on HTTP 4XX/5XX errors. Supporting documentation: https://cloud.google.com/chronicle/docs/reference/ingestion-api * The regional malachiteingestion-pa endpoints allow for 1 MB per batch; this limit is applied after the batch is decompressed on the SecOps side. There is also a pre-GA logs.import method on the newer Chronicle API; using the new API allows for 4 MB per batch. Ingestion Scripts as Cloud Run Functions. Another option with the ingestion API is a series of prebuilt ingestion scripts we provide. These can be configured as Cloud Run functions that query the third-party API, then upload the results to the SecOps Ingestion API. Strengths: Allows ingesting data from a third-party API where a feed option is not available. Allows control over additional query parameters that may not be present in third-party API feeds. Can be easily reworked to run in Azure or AWS based on the specifics of the use case. Limitations: May incur additional GCP cost associated with running the function. Other limitations per the Ingestion API. Supporting documentation: https://cloud.google.com/chronicle/docs/ingestion/ingest-using-cloud-functions https://github.com/chronicle/ingestion-scripts
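For illustration, here is roughly what a raw call to the legacy ingestion API looks like. This is a minimal sketch, not an official example: it assumes the google-auth Python libraries, the unstructuredlogentries:batchCreate endpoint described in the documentation above, and placeholder values for the credentials file and customer ID.
from google.oauth2 import service_account
from google.auth.transport.requests import AuthorizedSession

SCOPES = [&quot;https://www.googleapis.com/auth/malachite-ingestion&quot;]

# Placeholders -- your ingestion credentials file and SecOps customer ID.
credentials = service_account.Credentials.from_service_account_file(
    &quot;apikeys.json&quot;, scopes=SCOPES)
session = AuthorizedSession(credentials)

body = {
    &quot;customer_id&quot;: &quot;00000000-0000-0000-0000-000000000000&quot;,  # placeholder
    &quot;log_type&quot;: &quot;CISCO_ASA_FIREWALL&quot;,
    &quot;entries&quot;: [{&quot;log_text&quot;: &quot;%ASA-6-302013: Built outbound TCP connection test&quot;}],
}
resp = session.post(
    &quot;https://malachiteingestion-pa.googleapis.com/v2/unstructuredlogentries:batchCreate&quot;,
    json=body)
resp.raise_for_status()  # per the limitations above, you own retries on 4XX/5XX
Keep each batch under the 1 MB limit and batch entries of a single log type per request, as noted in the limitations above.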
Agent-Based Collection. Note: The SecOps Forwarder is now deprecated in favor of the Collection Agent. Details on the deprecation and EOL timeline are available here: https://docs.cloud.google.com/chronicle/docs/deprecations For centrally managed collection of on-premises sources or log collection directly from endpoints, the Bindplane collection agent is the preferred solution. Bindplane is a telemetry pipeline that allows for collection and refinement of logs before sending them to SecOps. All SecOps customers are entitled to 'Bindplane (Google Edition)'. Customers with 'Google SecOps Enterprise Plus' are entitled to 'Bindplane Enterprise (Google Edition)', which adds additional capabilities around data filtering and redaction. Details on the capabilities of the different licenses are here: https://docs.cloud.google.com/chronicle/docs/ingestion/use-bindplane-agent#bp-differences The collection agent supports a variety of architectures, from sending logs directly from an agent to routing through a gateway agent. The agent/gateway model is largely a theoretical distinction; the same binary is leveraged for the collection agent and the gateway, and a single instance of the agent software can fulfill both roles simultaneously when configured to do so. The agent includes a built-in health-check endpoint and can be easily deployed in a load-balanced configuration to support high-availability and high-throughput requirements. Example architecture options. The agent configuration is a YAML file that can be edited manually; however, it is strongly recommended to use the Bindplane server, which allows for centralized management and monitoring. The Bindplane server is included with the Google Edition licenses and can be deployed in the cloud or on-prem. Strengths: Direct or gateway-based upload options. Gateway can be deployed in a load-balanced HA configuration. Dynamic ingestion labeling based on log contents. Highly configurable log processors. Support for complex filter logic. Limitations: Additional software to deploy and manage. Supporting documentation: https://cloud.google.com/chronicle/docs/ingestion/use-bindplane-agent https://docs.bindplane.com/how-to-guides/google-secops/google-secops-with-bindplane-quick-start https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/receiver SecOps License Utilization. Remember that SecOps tenants are licensed by how much data is ingested, so it is important to keep an eye on your current ingestion rates to avoid unexpected overage charges. Your current commit and total usage are visible in the Cloud Billing console: https://docs.cloud.google.com/chronicle/docs/onboard/understand-billing#track-secops-billing In the SecOps platform there is a prebuilt 'Data Ingestion and Health' dashboard that provides an approximation of your throughput broken down by log type. This is useful for roughly determining the license consumption associated with a particular log source and for monitoring for unexpected changes, but be aware the Cloud Billing console is the actual source of truth. Conclusion. Effective data ingestion in Google SecOps begins with a use-case-driven approach, working backwards from desired security outcomes to identify the necessary events and context. For most data sources there are commonly applied &quot;default&quot; methods, but understanding the pros and cons of all the available methods ensures you can pick the most effective option based on the specifics of your environment and security goals. Related Topics: Adoption Guide: Basics of GoStash-Parsing</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 19:27:16 +0200</pubDate>
        </item>
                <item>
            <title>Send Reports with Google SecOps SIEM only</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/send-reports-with-google-secops-siem-only-7279</link>
            <description>We currently have SIEM only for Google SecOps and we want to create dashboards and send reports. Previously, we were using the Legacy Dashboards and they gave us the option to send them. We are now moving to the new Dashboards but we are not able to see where we can send these as reports to an email. Can someone help me find where this is set up on the new Dashboards?</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 16:31:38 +0200</pubDate>
        </item>
                <item>
            <title>Help Installing Recaptcha on Site</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/help-installing-recaptcha-on-site-7216</link>
            <description>Hi, I am trying to install reCaptcha on my site but when I copy and paste that snippet of code with my site key, I get a submit button that appears at the top of my website. (Not the checkbox I was going for). </description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Thu, 16 Apr 2026 15:46:35 +0200</pubDate>
        </item>
                <item>
            <title>SOAR Platform Unavailable - No current fix from Support</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/soar-platform-unavailable-no-current-fix-from-support-7236</link>
            <description>Hi All,First may I apologise for the incorrect place for this topic - but right now I feel completely out of options.Our production SOAR instance became unresponsive yesterday around 13:45 BST (503 errors from a number of APIs, meaning we can’t authenticate into the system at all) - I raised a support ticket (Ref 70050003), which I was told due to technical reasons could be no higher than a Priority 2, and right now the only outcome I have received between yesterday and today (22 hours) is “We’re still looking for an engineer”.I am completely dismayed that a production service outage has not only been left to placeholder SLA-matching updates, but that there is a complete lack of information, timelines, and indeed any form of troubleshooting or break-fix being applied.I have asked a number of questions which have been met with silence, and this reflects on us as it translates to the only updates we can give customers.I’ve been a proponent of SIEMplify/Chronicle SOAR/Google SecOps for over 7 years; the product and the community that surrounds it is fantastic. However, the way our production instance outage has been handled by support has left a more than bitter taste in my mouth, as it honestly feels I’m just being given short shrift in the hope things automatically solve themselves before an engineer based in Tel Aviv comes online tomorrow. Which comes to why I’m here: To the community - is anyone else facing an outage on their SOAR platform? Or have you had an outage in the past couple of days? Was there any root cause which may help us determine what is happening with our instance?	To Google Staff - if you are reading this, please could I get some form of escalation or empathetic eyes on our support ticket. I do not believe for one second that you do not have engineers available, so it could be an issue of miscommunication.I would like to round this out with an apology for having to come to a public domain to air my grievances - but I genuinely feel right now that it is my only option. Thank you for coming to my TED talk Kyle</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 10:16:10 +0200</pubDate>
        </item>
                <item>
            <title>reCAPTCHA Enterprise MFA not triggering challengeAccount UI even with requestToken</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/recaptcha-enterprise-mfa-not-triggering-challengeaccount-ui-even-with-requesttoken-7276</link>
            <description>Hi,I am integrating reCAPTCHA Enterprise with Account Verification (MFA).Flow:1. Frontend calls grecaptcha.enterprise.execute() with action LOGIN and twofactor: true2. Backend calls createAssessment with accountVerification enabled3. Response contains accountVerification with requestToken4. I call grecaptcha.enterprise.challengeAccount() with account-token and containerHowever:- No popup UI is rendered- No verification code is sentCode snippet:
const triggerMFA = async (requestToken: string): Promise&amp;lt;string&amp;gt; =&amp;gt; {
  if (!window.grecaptcha?.enterprise) {
    throw new Error(&quot;reCAPTCHA not ready&quot;);
  }
  return window.grecaptcha.enterprise.challengeAccount(
    import.meta.env.VITE_RECAPTCHA_SITE_KEY,
    {
      &#039;account-token&#039;: requestToken, // requestToken comes from Account Verification
      &#039;container&#039;: &#039;mfa-container&#039;, // bound to a div on the login page
    }
  );
};
Sample Response:{ ..., &quot;accountVerification&quot;: { &quot;endpoints&quot;: [{ &quot;emailAddress&quot;: &quot; [removed by moderator] &quot;, &quot;requestToken&quot;: &quot;tplIUFvvJUIpLaOH0hIVj2H71t5Z9mDK2RhB1SAGSIUOgOIsBv&quot;, &quot;lastVerificationTime&quot;: &quot;&quot;, }], &quot;latestVerificationResult&quot;: &quot;RESULT_UNSPECIFIED&quot; }}Technologies:Front End - ReactBack End - Spring BootHere is the list of questions I have:1. Under what exact conditions does challengeAccount() render UI?2. Is it correct that challengeAccount only works when riskAnalysis.challenge = &quot;CHALLENGE_REQUIRED&quot;?3. Is there any way to force the MFA challenge for testing?4. Does MFA depend entirely on Google’s risk engine?</description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Thu, 16 Apr 2026 09:08:25 +0200</pubDate>
        </item>
                <item>
            <title>Unable to access Google SecOps (Chronicle) on mobile (Safari) – anyone else facing this?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/unable-to-access-google-secops-chronicle-on-mobile-safari-anyone-else-facing-this-7275</link>
            <description>Is anyone else unable to access Google SecOps (Chronicle) on mobile (Safari)?It was working earlier, but now it’s not loading / getting stuck.Tried clearing cache, cookies, and using private mode — still not working.Works fine on desktop.Is this happening to others?</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 16 Apr 2026 04:36:31 +0200</pubDate>
        </item>
                <item>
            <title>Clarification about suppression</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/clarification-about-suppression-7271</link>
            <description>I would like to get some clarification on two things about suppression via options which are not clear based on the official documentation (Options section syntax | Google Security Operations | Google Cloud Documentation).1.) For single event queries, is it possible to use more than one variable for the suppression key? For example, $suppression_key = $hostname, $cmd does not work (the editor throws an error), but using strings.concat for the two variables does not throw an error:
outcome:
    $hostname = $e.principal.hostname
    $user = $e.principal.user.userid
    $cmd = $e.target.process.command_line
    $suppression_key = strings.concat($hostname, $cmd)
2.) This statement from the doc: &quot;If you don&#039;t specify a suppression_key, all query instances are suppressed globally during the window.&quot; What does this exactly mean? Is the entire rule suppressed, regardless of what events are matched, during the specified window?</description>
            <category>Google Security Operations</category>
            <pubDate>Wed, 15 Apr 2026 20:04:08 +0200</pubDate>
        </item>
                <item>
            <title>No access to billing account</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/no-access-to-billing-account-7270</link>
            <description>Recently, my ex-partner took off and took the 2FA authenticator with him.Now I want to pay the Google Cloud bill, but I have no access to the Google account anymore.I tried account recovery for 2 weeks at 08:00 AM every day, but it keeps rejecting my attempt to recover with the message that they could not confirm the account belongs to me.The weird part is that I am the actual owner of the email address and domain name (with proof).How can I regain access to my account to pay the bill?</description>
            <category>Google Security Operations</category>
            <pubDate>Wed, 15 Apr 2026 19:19:57 +0200</pubDate>
        </item>
                <item>
            <title>Community Contest: Create and Share Your Security Gemini Gems</title>
            <link>https://security.googlecloudcommunity.com/news-announcements-9/community-contest-create-and-share-your-security-gemini-gems-7027</link>
            <description>Create and Share Your Security Gemini Gems We&#039;re launching a new Community challenge, and this time you not only have a chance to win Google swag, but to also be featured on the Google Cloud Security podcast with Anton Chuvakin and Timothy Peacock. The goal is simple: create and share a Gemini Gem that streamlines security tasks and saves you time. How to Participate:We are looking for creativity and utility.1. Design a Gemini Gem that solves a specific security problem. Crucially, DO NOT put corporate sensitive data in your Gems! 2. Share your submission in the comments below or USE THIS FORM. Ensure your entry includes the following details:    A link to your Gem. Learn about sharing Gems here.	    Who it is for (e.g., SOC analyst, CISO, Compliance Manager).	    A clear explanation of how to use it and the value you derive from it (saves time, reduces risk, makes tasks easy, etc.) 3. Determining the winner:The post with the most likes in the comment section below will win. Make sure to like your favorite responses to help us find our winners!	Separate from the most liked post, a panel of Googlers will determine at least one additional winner based on Gem creativity and utility. Prizes: The post with the highest number of likes will receive multiple Google Cloud Security swag items, and Anton and Tim will consider inviting them to speak on the Google Cloud Security podcast! Additional Google Cloud Security swag will be given out to participants who are either chosen by the Googler panel, or receive the second and third highest number of post likes. Duration:The challenge will run from March 12th - April 9th, 2026. Winner Announcement Date: Winners will be announced shortly after the challenge closes. Gem Ideas and Use Cases. Security Gems can be role-based, focusing on areas like CISO gems or policy/compliance Gems. Need a place to start? Here are some ideas from our team (Target Audience / Idea / Functionality): Vulnerability Management / CVE Explainer/Prioritizer: Summarize, explain, and prioritize new vulnerabilities from advisories. A Gem could fetch structured data about a Common Vulnerability and Exposure (CVE), including impact and suggested workarounds. Threat Intelligence / The Threat Intel Synthesizer: Quickly distill vast amounts of threat intelligence into actionable, prioritized insight, extracting top IOCs and summarizing TTPs. DevSecOps/Code Review / The Code/Configuration Auditor: Act as a Secure Code Reviewer to identify basic security misconfigurations, such as hardcoded secrets or overly permissive access controls, and suggest secure alternatives. Compliance/GRC / The Policy Compliance Checker: Rapidly check if a new operational proposal conflicts with existing internal policies, or generate ideas for PCI DSS / compliance compensating controls and mitigations. Security Awareness / The Security Awareness Content Generator: Convert complex, technical vulnerabilities into engaging, non-technical communications like a short chat message or an executive summary. We are looking for creativity, clarity, and most importantly, how the Gem improves your security workflow. Don&#039;t hold back: even small, clever use cases can make a big impact. Ready to share? Drop your submission below in the comment section!</description>
            <category>News &amp; Announcements</category>
            <pubDate>Wed, 15 Apr 2026 17:55:50 +0200</pubDate>
        </item>
                <item>
            <title>EP271 Can AI-Native MDR Actually Fix Your Broken SOC Workflows or Just Automate the Mess?</title>
            <link>https://security.googlecloudcommunity.com/ciso-podcast-78/ep271-can-ai-native-mdr-actually-fix-your-broken-soc-workflows-or-just-automate-the-mess-7268</link>
            <description>Guests:Eric Foster, CEO, Tenex.AI	Bashar Abouseido, President,  Tenex.AITopics: SIEM and SOC Artificial Intelligence Subscribe at YouTubeSubscribe at SpotifySubscribe at Apple Podcasts Topics covered:	“10X SOC” sounds great.  But for an organization stuck in &quot;SIEM 1.0&quot; with poor data quality and manual workflows, is “AI-native MDR” a &quot;leapfrog&quot; opportunity or a recipe for disaster?			We’ve seen the rise of &quot;Decoupled SIEM&quot; and security data lakes. Does a &quot;Modern SIEM&quot; even need to exist if an MDR platform has an agentic layer doing the heavy lifting? 			You’ve argued for AI-native over AI-bolted-on. For an end user, what are the tangible differences of using &quot;AI inside a legacy SIEM&quot; versus using an &quot;AI-native separate product&quot;?			What is the one task you thought AI would handle by now that still requires a senior human analyst to step in?			If a CISO is using an AI MDR, &quot;Mean Time to Detect&quot; (MTTD) starts to look like a vanity metric because the machine is instant. What is the new golden metric for an AI-powered SOC? Is it &quot;Time to Context,&quot; &quot;Reduction in Human Toil,&quot; or something else?			How do you help a skeptical SOC Manager—who has been burned by false positives for a decade—trust an autonomous agent to perform a &quot;containment&quot; action at 3:00 AM? 	  </description>
            <category>CISO Podcast</category>
            <pubDate>Wed, 15 Apr 2026 15:48:32 +0200</pubDate>
        </item>
                <item>
            <title>Google SecOps SIEM Stats Query API Limit</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/google-secops-siem-stats-query-api-limit-7245</link>
            <description>I know that Google is currently increasing the API limits for Google SecOps up to 2000. I imagine that this would also apply to the “Execute UDM Search” action in the GoogleChronicle integration? Would the same API quota also apply to stats searches launched through that action?</description>
            <category>Google Security Operations</category>
            <pubDate>Wed, 15 Apr 2026 11:09:51 +0200</pubDate>
        </item>
                <item>
            <title>EP270 The Convenience Tax: Why We Keep Failing at Supply Chain Security</title>
            <link>https://security.googlecloudcommunity.com/ciso-podcast-78/ep270-the-convenience-tax-why-we-keep-failing-at-supply-chain-security-7250</link>
            <description>Guest: Dan Lorenc, Founder / CEO, Chainguard
Topics: Supply Chain Security
Subscribe at YouTube. Subscribe at Spotify. Subscribe at Apple Podcasts.
Topics covered:
- We just saw a security tool (Trivy) get used to pop an AI infrastructure tool (LiteLLM) to eventually pop end users. Have we reached the point where our security tooling is actually our largest unmanaged attack surface?
- Why now? Software supply chain security had the perennial vibe of “not top concern” for most organizations, right?
- TeamPCP pushed malicious code to existing GitHub tags. We’ve been screaming about pinning versions to SHAs for years, but clearly, nobody is listening. Is it time to admit that &#039;convenience&#039; is the primary enemy of supply chain security?
- The Axios incident showed a victim compromised in under two minutes. In a world of auto-updating dependencies, is the concept of a human-in-the-loop for software updates officially dead, or do we need to look very hard at version pinning and such?
- With the XZ Utils case, we saw a long-game social engineering attack. Beyond just &#039;watching npm closely,&#039; what are the realistic architectural safeguards for an org that knows it can&#039;t audit every line of an update?
- We’ve spent the last three years talking about SBOMs (Software Bill of Materials) like they were a pill for supply chain health. But if the scanner producing the SBOM is the one that&#039;s compromised, isn&#039;t the SBOM just a signed receipt for your own house being on fire?
- What is the one practical thing an org can do to ensure its CI/CD isn&#039;t a credential-exfiltration-as-a-service platform?</description>
            <category>CISO Podcast</category>
            <pubDate>Wed, 15 Apr 2026 10:13:06 +0200</pubDate>
        </item>
                <item>
            <title>Switching Google’s role with reCAPTCHA from Data Controller to Data Processor</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/switching-google-s-role-with-recaptcha-from-data-controller-to-data-processor-6646</link>
            <description>We are excited to announce that reCAPTCHA is switching from a “data controller” offering to a “data processor” offering on April 2, 2026, and will begin processing reCAPTCHA data in alignment with other Google Cloud services. With this switch, customers deploying reCAPTCHA on their websites will become data controllers, determining the purpose and means of processing their users’ personal data, while Google will become a data processor, processing the data collected on our customers&#039; websites as instructed by our customers.
What is changing?
To effectuate this switch, we are updating our Google Cloud Platform Service Specific Terms that govern the use of reCAPTCHA. A customer’s users accessing reCAPTCHA-protected websites will no longer be subject to Google’s Privacy Policy and Terms of Use, and Google will remove those references from the reCAPTCHA badge used in customer deployments. reCAPTCHA will process data gathered at customer websites in accordance with our Cloud Data Processing Addendum.
What do you need to do?
If your website currently displays references to Google’s Privacy Policy and Terms of Use in connection with reCAPTCHA, remove those references from your website.
What are the benefits?
- Customer control: Your users’ personal data gathered at your web and mobile surfaces will be processed by reCAPTCHA in accordance with your instructions.
- Purpose-driven processing: As a reCAPTCHA customer, your data will be used only as necessary to provide and maintain reCAPTCHA, and to ensure that reCAPTCHA’s security, threat detection, protection, and response capabilities remain effective against evolving threats.
Are there any impacts to the reCAPTCHA service?
Apart from the aforementioned changes, reCAPTCHA will continue to function without any other changes or service interruptions. Customers can continue to use their existing site keys, create new site keys using the Cloud Console, and access all advanced features such as Account defense, Password defense, SMS defense, Transaction defense, and the Mobile SDK for Android &amp;amp; iOS to prevent fraud and abuse from bots, humans, and agents. For a complete list of reCAPTCHA features, please refer to the documentation. As previously communicated, customers utilizing Classic reCAPTCHA keys have had their keys migrated to the Google Cloud Platform and associated with a dedicated Google Cloud project.
How can I get help?
You can get help by posting your questions in the Google Cloud reCAPTCHA Community page, reaching out to your Google sales contact, or opening a GCP support ticket.</description>
            <category>Community Blog</category>
            <pubDate>Wed, 15 Apr 2026 07:00:15 +0200</pubDate>
        </item>
                <item>
            <title>API-Usage - Stuck</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/api-usage-stuck-7244</link>
            <description>Hello everyone, I hope this is the right place to ask. We (Argos Security) have a question regarding GTI and API usage. Right now our company is on the &quot;Integration Advanced&quot; plan, which gives us a certain API allowance. What we would like to do, which is impossible according to our Google contact, is to change our plan and daily caps. I cannot believe that, even with throwing in more money, it is supposed to be impossible to change our daily caps. Some products we do not need at all (Diff and Private Graph, for example), while for others we need a much higher cap (ASM, for example). Could someone please give me a hint on whom to contact to get this sorted out, if possible?</description>
            <category>Google Threat Intelligence</category>
            <pubDate>Tue, 14 Apr 2026 17:37:15 +0200</pubDate>
        </item>
                <item>
            <title>Emerging Threats - Integrate with SOAR?</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/emerging-threats-integrate-with-soar-7230</link>
            <description>Is there any way we can integrate Emerging Threats with SOAR so that if an IOC match is found it creates a SOAR case?</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 14 Apr 2026 15:39:45 +0200</pubDate>
        </item>
                <item>
            <title>Playbook Triggering Based on Case State, Alert Changes, and Enrichment Context</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/playbook-triggering-based-on-case-state-alert-changes-and-enrichment-context-7211</link>
            <description>At present, playbooks can only be triggered based on information available at the time a case is created, such as initial case fields or other creation‑time metadata. This is a significant limitation, as it prevents triggering playbooks based on later context, including alert data from additional systems, case stage changes, enrichment results, or information added manually by an analyst.As a result, it’s difficult to design playbooks that respond dynamically as a case evolves, which is a common requirement in real‑world SOC workflows.Based on the discussion in Unable to trigger playbook when case is set to notify (linked above), it sounds like expanded triggering capabilities are planned for Q2 this year. Is there any additional clarity on what functionality this feature will include, and whether there is a more specific timeframe for its release?In the meantime, is there a recommended workaround from Google to address this limitation, or is manually attaching playbooks to cases currently the only viable option?</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 14 Apr 2026 14:26:17 +0200</pubDate>
        </item>
                <item>
            <title>How playbook and actions are mapped in SOAR logging</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/how-playbook-and-actions-are-mapped-in-soar-logging-6888</link>
            <description>Problem Statement
In Google Chronicle SecOps (SOAR), there is ambiguity in how Playbook executions (“playbook runs”) versus Action executions (“action runs”) are represented and counted in Playbook logs. This ambiguity becomes more pronounced in scenarios where:
- Multiple alerts are correlated into a single case
- Each alert has a playbook attached
- Each playbook contains multiple actions, integrations, and flow logic
As a result, it is unclear whether:
- A “playbook run” is counted once per alert, once per case, or once per action
- Action execution counts can be reliably used to infer playbook execution counts
- Existing Chronicle logging fields can be used to accurately distinguish playbook-level runs from action-level runs
This lack of clarity makes it difficult to build accurate metrics for automation coverage, playbook effectiveness, and SOAR ROI reporting.
The intent is to clearly understand:
- The conceptual difference between a Playbook and an Action in Chronicle SecOps
- How Chronicle internally logs and counts playbook executions and action executions
- How playbook execution counts behave in multi-alert → single-case correlation scenarios
- Whether Chronicle provides a native or query-based method to reliably calculate total playbook runs and total action runs
- Whether identical counts for playbook and action executions in logs are expected behavior or a misinterpretation of the data
Example from Environment
A query showed a case ID having 5 playbook runs and 5 action runs. When viewed in the case management view, it states 1 case → 1 alert → 1 playbook. (Refer to the snip attached below.)</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 14 Apr 2026 06:06:11 +0200</pubDate>
        </item>
                <item>
            <title>Agentic Google Threat Intelligence Query to Help Write YARA-X Rules For Livehunts</title>
            <link>https://security.googlecloudcommunity.com/google-threat-intelligence-3/agentic-google-threat-intelligence-query-to-help-write-yara-x-rules-for-livehunts-7242</link>
            <description>Hi Everyone!
In support of my upcoming webinar later this week, I wanted to post the Custom Prompt I developed, which I use to help convert Advanced IOC Searches into YARA-X queries used in Livehunting inside of Google Threat Intelligence. Feel free to use this prompt and save it as part of your prompts library so you can consistently use it to help you convert IOC Searches and their modifiers into Livehunts.

Role: Act as a Senior Threat Intelligence and Malware Analyst and YARA-X Engineering Expert specializing in Google Threat Intelligence and LiveHunting deployments. You have deep expertise in writing highly performant, syntax-perfect rules that scale across massive data streams without causing performance degradation or high false-positive rates. You are a master of converting Advanced IOC Searches from Google Threat Intelligence into Livehunt rules using YARA-X.

Action: Translate the provided Advanced IOC Search logic into a fully functional, highly optimized YARA-X rule ready for a LiveHunting pipeline. Make sure all the rulesets have been rendered using the render rule widget tool.

Context: I am migrating/translating specific threat intelligence queries into YARA-X to monitor livehunt streams. Because this is for Livehunting, the rule must be extremely fast. Condition ordering matters (e.g., checking file sizes or magic bytes before running heavy string matching or regex).

Strict Syntax: Use strict YARA-X syntax. Ensure any required modules (e.g., pe, elf, math, or vt) are explicitly imported at the top of the rule.

Mandatory Comments: You must liberally comment the code. Add inline comments explaining the purpose of complex strings (especially regex or hex). Add inline comments within the condition block explaining the logic flow. Include a comprehensive meta section (description, author, date, a copy of the IOC Search command you are converting, and reference/hash if applicable). The meta section must include the Advanced IOC Search which you are converting to YARA-X.

Performance Optimization: Write the condition block using short-circuit evaluation best practices. Put the most restrictive and computationally cheapest conditions first.

No Hallucinations: Do not invent arbitrary strings or logic outside of what is required to fulfill the provided IOC search logic.

Output Format: Output only the finalized YARA-X rule within a single Markdown code block. Do not include any conversational filler, introductory text, or concluding remarks. The output must be immediately copy-pasteable. Make sure all the rulesets have been rendered using the render rule widget tool.

Self Evaluation: After generating the full YARA-X rule, you must perform a self-evaluation: grade your own response against the criteria above and verify the following:
1. Did you use strict YARA-X syntax?
2. Did you provide mandatory comments?
3. Did you fill out a meta section which included the original Advanced IOC Search command?
4. Is your query performance optimized?
5. Did you verify there are no hallucinations?
6. Do your YARA query parameters match the number of modifiers from the IOC Search?
For each criterion, provide a validation check and indicate whether the self-evaluation was completed.
Advanced IOC Search Logic to Convert into YARA-X for Livehunting: ${{Advanced_IOC_Search_Syntax}}

Comment below if you have any suggested changes to this prompt, or if you have other awesome custom prompts you would like to share with the Google Cloud Security Community! You can register for the webinar that&#039;s happening this upcoming Wednesday, April 15th, at the following link: Community Webinar: Mastering the Art of Advanced IOC Searches in Google Threat Intelligence</description>
            <category>Google Threat Intelligence</category>
            <pubDate>Tue, 14 Apr 2026 04:02:21 +0200</pubDate>
        </item>
                <item>
            <title>Recommended approach: IRU/Kandji log ingestion into Google SecOps Chronicle</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/recommended-approach-iru-kandji-log-ingestion-into-google-secops-chronicle-7239</link>
            <description>We&#039;re looking to ingest IRU and Kandji logs into our Google SecOps environment but can&#039;t find any related documentation. What&#039;s the recommended approach for this integration?</description>
            <category>Google Security Operations</category>
            <pubDate>Tue, 14 Apr 2026 02:43:51 +0200</pubDate>
        </item>
                <item>
            <title>Accessibility barriers in reCAPTCHA v2</title>
            <link>https://security.googlecloudcommunity.com/fraud-defense-recaptcha-6/accessibility-barriers-in-recaptcha-v2-7193</link>
            <description>Documented accessibility barriers we are seeing with reCAPTCHA v2 that cannot be remediated by website owners due to the widget’s cross-origin iframe implementation.
We understand that:
- WCAG allows specific CAPTCHA exceptions under SC 1.1.1
- reCAPTCHA provides audio alternatives and screen reader status messaging
- Many automated accessibility findings are false positives at the host-page level
That said, manual accessibility testing (including screen reader and keyboard testing) continues to surface issues that materially impact users with disabilities and are not addressable by implementers, including:
- Reliance on color-only cues to communicate status or errors
- Dialog / modal semantics inside the challenge that are not programmatically exposed
- Focus handling during challenge open/close that is inconsistent for assistive technologies
- Form errors and constraints within the challenge that are not programmatically associated
- Layout problems when users apply custom text spacing or CSS overrides
- Accessibility issues occurring entirely inside the reCAPTCHA iframe, outside the control of the embedding site
On the host side, we have implemented all reasonable mitigations available (for example: adding iframe titles, removing fixed presentational attributes, improving DOM order, hiding non-user-facing technical fields from assistive technologies). However, these measures only affect the container, not the challenge UI itself.
At this point, we are looking for guidance, not necessarily immediate fixes. Specifically:
- Is there any recommended configuration, pattern, or variant of reCAPTCHA for organizations with strict accessibility obligations?
- Is Google’s position that these limitations are expected and accepted under the CAPTCHA exception?
- Is reCAPTCHA v3 considered the preferred option from an accessibility perspective when feasible?
- Are there any roadmap considerations for improving the internal accessibility semantics of reCAPTCHA challenges?
We’re sharing this for awareness and documentation, as these issues are frequently cited in independent audits and affect compliance discussions for public-sector and higher-education deployments. Thank you for your time and for maintaining the forum.</description>
            <category>Fraud Defense (reCAPTCHA)</category>
            <pubDate>Tue, 14 Apr 2026 01:34:24 +0200</pubDate>
        </item>
                <item>
            <title>Automated Malware Triage and Analysis with Google Agentic Threat Intelligence</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/automated-malware-triage-and-analysis-with-google-agentic-threat-intelligence-7241</link>
            <description>Blog Authors:
Ofir Rozmann, Principal Researcher, Google Threat Intelligence Group
Daniel Kapellmann Zafra, Intel Strategy Lead, Google Threat Intelligence Group

The complexity of modern malware has created a persistent gap between the volume of incoming threats and the time required for expert analysis. Traditional automation captures granular behaviors, but often lacks the context required to synthesize them into a cohesive attack narrative or durable detection logic. By integrating agentic AI into the workflow, investigators can move away from tedious manual analysis and focus on decision-making. Agentic threat intelligence (TI) in Google Threat Intelligence reasons through code logic, complex file structures, and global intelligence repositories to present a clear, technical narrative in seconds, rather than hours. The acceleration of analyst workflows is a trend we have previously observed and described in our research on Gemini’s capabilities for malware analysis. This blog provides a deeper dive into using agentic TI capabilities to conduct malware triage and analysis. It builds off our recent blogs on Operationalizing Agentic Threat Intelligence and Redefining Threat Intelligence with Agentic.

Submitting Files and Secure Execution
To begin analyzing a threat, analysts can provide a sample to agentic TI in two primary ways:
- By Hash: This is the most efficient method for triaging samples already indexed in the global database and enables the user to obtain further insights into knowledge compiled from prior Google Threat Intelligence analysis. Agentic TI accepts MD5, SHA1, or SHA256 hashes without consuming Private Scanning quota.
- By Upload: For novel or sensitive samples with intelligence gaps, you can upload files through the agentic TI UI or pivot from a Google Threat Intelligence Private Scan. This allows for a deeper level of inspection within seconds. This functionality supports compressed archives (ZIP, 7z, RAR) and will automatically prompt for a password if protected, or attempt to unbundle the files using common passwords like &#039;infected&#039; to reach the underlying payload.

Privacy and Secure Execution
- Confidentiality: Using Google Threat Intelligence Private Scanning ensures the sample remains confidential and is not shared with the public community or used to train external AI models. See more information about our privacy settings in our Platform documentation.
- Context vs. Analysis: When you “upload a file as context,” the LLM parses the file&#039;s text (such as code snippets, logs, or an email body) to use as background for your prompt. Choosing the &quot;analyze a file&quot; option instead triggers a full dynamic execution in the secure sandbox to observe its behavior.

Figure 1: File upload options (Note: Every file upload for dynamic analysis consumes a portion of your group&#039;s Private Scanning quota.)

Agentic Threat Intelligence: Core Capabilities and Deep Inspection
To trigger agentic TI malware triage and analysis capabilities, a user can directly input a prompt in the UI to request the analysis of a sample, or generate a triage report via the Brief button found on the Public and Private Scanning upload pages.

Figure 2: Screenshot of the Brief button available on the Public and Private upload page

Comprehensive Binary Analysis
By requesting a “deep-dive analysis,” analysts can initiate investigations into logic flows and code citations across an extensive range of compiled formats.
The malware analysis agent supports a comprehensive list of executables, including PE binaries (such as those compiled with Go and .NET) and ELF binaries, as well as Java archives (JARs) and Android executables (DEX). Furthermore, the agent identifies the underlying file structure and packaging of an executable to optimize analysis. For example, when analyzing an AutoIt3 compiled binary, it will intelligently extract and decompile the underlying script instead of focusing on just standard PE disassembly.

Figure 3: Screenshot of the Reverse Engineering Analysis

Format Support &amp;amp; Behavioral Triage
Agentic TI provides deep visibility across an extensive range of modern delivery vectors. Some examples of supported categories include:
- Scripts &amp;amp; Interpreted Languages: Immediate insights into the intent of PowerShell, VBScript, Python, etc.
- Documents &amp;amp; Productivity Tools: PDFs, RTFs, and Microsoft Office formats.
- Archives &amp;amp; Disk Images: The ability to unpack standard archives and disk images like .zip, .rar, .iso, and .vhd files.
- Mobile &amp;amp; Email: Android (APK, DEX) and email objects (.eml, .msg).

Malware Triage &amp;amp; Analysis In Action
These sample case studies show how the agentic capabilities accelerate investigations by correlating code analysis with sandbox results and threat intelligence. We are continuously refining these capabilities.

Case Study 1: Unknown ELF File
As part of threat hunting, an analyst encountered an unknown ELF file, with no available data in Google Threat Intelligence.
- SHA-256: f94542e53fbf942fdfea0e3b3ea1b2cdd6f3270524e0a2deed9f918214a29e28
- MD5: d02421aa7b21a170bb8f92d9326e5745
The analyst submits the file to Google Threat Intelligence Private Scan, which returns a malicious verdict for the file. Subsequently, the analyst requests a Brief, which triggers agentic TI to analyze the file and return Google Threat Intelligence private sandbox results, providing details and preliminary analysis.
- Near-instant verdict: The malware analysis agent identifies the file as a 64-bit ELF binary and confirms its malicious intent through detections and behavioral analysis, noting that it executes unauthorized commands and interacts with system logging utilities and sensitive files.
- From Triage To Mitigation: The analyst then submits a prompt to agentic TI requesting an executive brief. The output provides recommendations to mitigate the threat based on the file behavior, ranging from hunting for the activity in logs to hardening the environment and host configurations.
Operational Outcome
The agentic capabilities allow the analyst to reach a near-instant malicious verdict on a previously unknown file not indexed in Google Threat Intelligence, with no available open source intelligence. The agent’s analysis and recommendations are subsequently leveraged to implement a plan of action to further hunt and mitigate the threat, from a local, immediate response to a full escalation if needed. Although in-depth manual reverse engineering may be necessary in some cases, this initial triage enables defenders to significantly shorten the time-to-action in the face of malicious, unknown behavior.
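
For reference, triage of an already-indexed hash like the one above can also be scripted. A minimal Python sketch, assuming the VirusTotal-based v3 files endpoint used by Google Threat Intelligence; the API key is a placeholder and the verdict handling is illustrative only:

import requests

GTI_API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
SAMPLE_SHA256 = "f94542e53fbf942fdfea0e3b3ea1b2cdd6f3270524e0a2deed9f918214a29e28"

def lookup_file(sha256):
    # Fetch the existing file report by hash (no Private Scanning quota used).
    resp = requests.get(
        "https://www.virustotal.com/api/v3/files/" + sha256,
        headers={"x-apikey": GTI_API_KEY},
        timeout=30,
    )
    if resp.status_code == 404:
        print("Not indexed; consider a Private Scan upload instead.")
        return
    resp.raise_for_status()
    # last_analysis_stats summarizes the detection verdicts for the sample.
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print("malicious:", stats.get("malicious", 0), "suspicious:", stats.get("suspicious", 0))

lookup_file(SAMPLE_SHA256)
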
Case Study 2: SELFDRIVE – Synthesizing Existing Intelligence
During routine endpoint monitoring, an analyst identifies a heavily obfuscated Node.js script.
- SHA-256: dd8502622eaa4e3798f4848cfe81c06ed0dffd7cb0a62c7ab6c7124d5b07bb04
- MD5: 61a47f0fde90832c14c148ab54829ac7

Figure 4: Screenshot of the obfuscated Node.js script

The analyst queries the hash in agentic TI and requests a Brief. Since the file is already indexed in Google Threat Intelligence, the triage agent rapidly synthesizes existing global intelligence to perform behavioral triage. It identifies the script as the SELFDRIVE backdoor, attributed to the threat actor UNC6448. The analysis agent recognizes that this malware is frequently bundled within trojanized installers for legitimate software and proceeds to map out its logic.
- From Code to Intent: The analysis agent decodes the obfuscation layers to reveal a sophisticated profiling routine, identifying that the script profiles the host system by querying MachineGuid from the Windows Registry. It extracts a hardcoded HTTPS C2 endpoint (apih.]vtqgo0729ilnmyxs9qd.]com) used for beaconing. The agent then enriches and corroborates the results of its analysis with pre-existing sandbox results.
- Contextual Logic Extraction: The analysis agent identifies the script&#039;s capability to download and execute XOR-encrypted, Base64-encoded payloads. By identifying the use of the eval() function, the agent confirms the malware&#039;s ability to execute arbitrary code directly in memory, bypassing traditional disk-based detection.
Operational Outcome
Armed with the deobfuscated C2 domain, the execution behaviors, and the root-cause delivery URL, the analyst executes a rapid, multi-tiered response. After querying the SIEM to confirm no successful beaconing occurred to apih.]vtqgo0729ilnmyxs9qd.]com, the analyst isolates the affected endpoint to preempt any stage-two payload deployment. Concurrently, they block both the C2 domain and the spoofed download site at the network perimeter, effectively neutralizing the campaign across the enterprise. Finally, to address the root cause, the IR team leverages the incident to issue a targeted advisory, enforcing the internal policy that all corporate utilities must be provisioned exclusively through the company&#039;s vetted self-service software center.

Case Study 3: POISONPLUG – From Analysis To Detection
A researcher encounters a suspicious SFX archive, which extracts a legitimate Windows binary alongside unknown files.
- SHA-256: 4d7da83ed24320959b067e0ac9682fadc3536e48a4d1290987b4e2991be9c0a3
- MD5: 855b67dcf9be43f6e92e41e18c1ae64d
The researcher provides the MD5 in agentic TI to analyze the sample, attribute it to a specific malware family, and assist in follow-up remediation and detection. Drawing on existing Google Threat Intelligence corpus data, the analysis agent identifies the sample as a POISONPLUG loader, analyzes its installation process, and identifies IOCs. POISONPLUG is a sophisticated, modular remote access trojan (RAT) commonly used by China-nexus threat actors, known for its use of DLL sideloading to bypass security software and maintain long-term persistence.
- Infection Chain: Analyzing the SFX file contents and host-based artifacts obtained from the sandbox results, the agent recognizes a three-file DLL sideloading pattern. It identifies that the launcher (a renamed legitimate executable) is being abused to load a malicious DLL (imjp14k.dll), which then decrypts a hidden payload.
By connecting disparate events such as file drops, service creation (NetworkSrv), and registry modifications, the agent reconstructs the full life cycle of a POISONPLUG infection.
- Artifact Extraction: Moving beyond the initial analysis, the agent extracts host- and network-based IOCs, which can then be leveraged to detect the activity or to curate host- or network-based detection rules.
Operational Outcome
Following the discovery of a new POISONPLUG infection, the organization initiates a containment and eradication protocol. After cross-referencing the MD5 hash and the malicious imjp14k.dll against EDR telemetry to identify any potential infections, the analyst triggers an automated isolation of all hosts exhibiting the unauthorized service creation. To mitigate the risk of future sideloading exploits, the IR team updates the enterprise application control policy to enforce strict DLL signature validation and implements a proactive hunt for non-standard binaries residing in sensitive system directories.

Case Study 4: APT29 – Malicious Email Triage &amp;amp; Mitigation
During a compromise assessment, a CTI analyst identified a spear-phishing attempt utilizing a social engineering lure:
- SHA-256: 6b0bd7937e6da617eefc4d8e237b34695419d1abff7da2971b8c4e0249f5c9b4
- MD5: 6ed7079ffa933174ebed856154b1c44b

Figure 5: Screenshot of the phishing email and decoy document

The analyst asks the analysis agent to investigate the suspicious email hash. By parsing available public reporting and existing global telemetry, the agent identifies the email as a delivery vehicle for Russia-nexus malware attributed to APT29. The agent identifies the campaign&#039;s focus on European diplomatic missions and proceeds to deconstruct the multi-stage infection vector.
- Decoy Analysis: By analyzing the document&#039;s external relationships, the agent identifies template injection using a URL shortener (t.ly) designed to fetch a secondary payload, bypassing static attachment scanning.
- Contextual Logic Extraction: The agent identifies the use of DLL side-loading, where a renamed legitimate executable (windoc.exe) is used to side-load a malicious DLL (AppvIsvSubsystems64.dll). It links the attack to Google Threat Intelligence and public reporting, and is able to identify the final backdoor that uses trusted services for C2 communications.
The preliminary analysis can be further enriched through targeted prompts that extract granular intelligence on the campaign, threat actor profile, and historic activities sourced from Google Threat Intelligence and open-source intelligence (OSINT). In this case, following this initial assessment, the user leverages agentic TI to formulate a comprehensive response plan.
Operational Outcome
Following the identification of host-based artifacts and the resetlocationso.]com C2 domain, the analyst initiates a comprehensive containment strategy.
Beyond blocking the identified IOCs, the security team implements behavioral monitoring for any legitimate binaries executing from the $Recycle.Bin directory. To mitigate future risks, the organization launches a targeted awareness initiative specifically for high-value diplomatic targets, focusing on the danger of &quot;Special Interest&quot; social engineering and the risks of LNK file execution. Management is briefed on the risk of the potential attack, which uses trusted services for C2 and was identified only retroactively, leading to an audit of cloud-tenant permissions and the enforcement of stricter Office template policies to neutralize the threat&#039;s primary delivery mechanism.

Expanding the Agentic Approach for Malware Analysis
The transition from manual binary triage to automated, agent-led intelligence represents a fundamental shift in how we close the gap between threat volume and analyst capacity. Agentic TI does more than just supply data. It acts as a digital assistant that reveals the hidden logic of a binary, allowing investigators to move from &quot;what is this file?&quot; to &quot;what is this code trying to achieve?&quot; in a matter of minutes. Acknowledging that sophisticated actors will inevitably develop techniques to subvert automated reasoning, we will proactively enhance agentic TI to adapt to upcoming threats. As we expand these capabilities and refine the analysis agent’s logic, we are moving toward a future where we empower analysts to perform deep, expert-level analysis at the speed and scale of the modern threat landscape.</description>
            <category>Community Blog</category>
            <pubDate>Mon, 13 Apr 2026 16:57:44 +0200</pubDate>
        </item>
                <item>
            <title>Announcing new partner-supported workflows for Google Security Operations</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/announcing-new-partner-supported-workflows-for-google-security-operations-7240</link>
            <description>Authors:
Raimundo Alcazar, Google Cloud Security Partner Ecosystem Lead
McCall McIntyre, Group Product Manager and Head of Product Partnerships

Security teams are frequently burdened with manually stitching together telemetry, alerts, and response playbooks. This fragmentation can limit visibility, increase alert fatigue, and slow down investigations. Defending the modern enterprise requires tools that work together. Today at Google Cloud Next, we are thrilled to announce a robust cohort of new partner integrations for Google Security Operations as part of the Google Cloud Security integration ecosystem. Designed to deliver high-fidelity security workflows right out of the box, the newest partners to join our ecosystem of more than 300 vendors include: Beacon Security, Contrast Security, Darktrace, Gigamon, GreyNoise, Intezer, Prophet Security, SAP, Synqly, Thinkst, Tidal Cyber, Torq, and Vali Cyber. Here’s how our partners are building in the Google Security Operations ecosystem, the integration types supported, and how security operations centers (SOCs) can use them.

Specificity and depth: Supported integration types
The Google Security Operations platform supports several distinct integration patterns. Here is how our current cohort is using these architectures to deliver specific technical capabilities:

1. Data feed integrations for deep visibility across your stack
These integrations pipe crucial telemetry directly into the Google Security Operations data lake, pre-mapped to our unified data model (UDM) schema so your team doesn&#039;t have to write custom parsers:
- Beacon Security: Architects ingestion for both normalized and raw data. Beacon expands your coverage by collecting data from sources including APIs, syslog, webhooks, and cloud storage. Using a real-time streaming pipeline, it normalizes these raw events directly into out-of-the-box UDM mappings in minutes. Before data even reaches Google Security Operations, Beacon applies security-driven data reduction to filter and aggregate events while preserving detection fidelity. Finally, it uses AI-powered data orchestration and continuous security data posture management to track collection health and help reduce the risk of blind spots becoming breaches.
- Contrast Security ADR: Detects, investigates, and responds to application-layer attacks with the Contrast ADR and Google Security Operations integration. Verified runtime attack telemetry streams into Google&#039;s UDM, powering purpose-built detection rules that automatically surface confirmed exploits as cases and correlate application-layer findings with signals from WAFs, EDR tools, and database security sensors.
- Gigamon GigaVUE Cloud Suite: Introduces a new integration to help organizations close visibility gaps across hybrid cloud environments. This integration amplifies the power of Google Security Operations with actionable application and network-derived telemetry from Gigamon, including packets, flows, and metadata, giving teams the context they need to detect threats earlier and investigate with greater precision.
- SAP Logserv: Closes the visibility gap between SAP Logserv and security operations, empowering analysts to detect, investigate, and respond to SAP-specific threats alongside their existing IT landscape. The integration features out-of-the-box ingestion and uses SAP-specific standard parsers to normalize raw, complex infrastructure and application logs into the UDM format.
This gives teams unified, enterprise-wide visibility to defend business-critical data while reducing the need for deep SAP technical expertise or custom log pipelines. This integration was developed by Google in partnership with SAP.
- Synqly Mesh: Offers a unified API that performs bi-directional data normalization between Google Security Operations&#039; UDM and the Open Cybersecurity Schema Framework (OCSF). It supports event ingestion configurations (Sink) as well as full bi-directional SIEM connectivity.
- Vali Cyber ZeroLock: Streams hypervisor-level security events directly into your existing Google Security Operations workflows. This integration provides visibility into emerging ESXi threats and is designed to help keep virtual infrastructure protected and operational.

2. Response integrations for streamlined alert and case management
These integrations hook directly into your workflows, allowing external platforms to trigger alert delivery, create cases, and execute automated actions.
- Darktrace: Currently in development, this response integration enables Google Security Operations to ingest Darktrace Incidents and Model Alerts. By pulling in pre-parsed raw logs via API or webhook, this integration provides your team with the network context needed to streamline alert delivery, manage cases, and trigger automated response actions.
- GreyNoise: New integrations that enhance detection and response capabilities in Google Security Operations. Spanning both SIEM and SOAR, the integration delivers standardized indicator ingestion, pre-built dashboards, YARA-L detection rules, saved searches, webhook support, response actions, and ready-to-deploy playbooks.
- Thinkst Canary: Integrates directly with Google Security Operations SOAR, allowing security teams to ingest high-confidence Canary incidents as actionable cases. It preserves full alert context, surfaces extracted entities like IP addresses and hostnames, and allows analysts to acknowledge incidents without ever leaving their Google Security Operations workflow.
- Torq: Brings its AI SOC Platform to Google Security Operations to help automate the threat lifecycle. Torq pulls detections directly via API, applies agentic AI auto-triage to filter out noise, and executes autonomous response actions, like isolating endpoints or revoking access, across the security stack while keeping Google Security Operations updated with case status.

3. Pulling Google Security Operations data (bi-directional API workflows)
Security doesn&#039;t just happen in one console. These integrations use secure APIs to pull Google Security Operations detections and intelligence natively into partner platforms, bridging the gap between tools.
- Intezer: Allows you to natively query, investigate, and triage Google Security Operations detections without leaving your established environment. It automatically ingests Google Security Operations alerts directly into Intezer, which then queries your underlying Google Security Operations data during active investigations to drive autonomous triage. This bi-directional workflow ensures your team has the full picture, eliminating the need to pivot between consoles, reducing manual data gathering, and freeing your analysts to focus on high-level decision-making and rapid response.
- Prophet Security: Integrates with Google Security Operations to provide AI-powered alert investigation and natural language threat hunting.
It is designed to automatically ingest alerts, query the Chronicle API for real-time UDM event context, and bidirectionally sync investigation findings and comments back to Google Security Operations, with the goal of reducing analyst workload.
- Tidal Cyber: Pulls configuration and policy data from your cyber defense intelligence (CDI) environment. It can retrieve ATT&amp;amp;CK-mapped curated detection rules and user-created rules from Google Security Operations. It also synchronizes detection rule states with Tidal to reflect enabled and disabled capabilities. By knowing both what a product is capable of and what&#039;s currently enabled in your environment, Tidal helps identify configuration gaps and assists in keeping your defensive stack and coverage map accurate as policies change.

Details on all partner integrations can be found in our technical documentation or in your Google Security Operations Content Hub console.

Unify your defense today
For technology vendors and developers looking to join the Google Cloud Security integration ecosystem, you can get started by downloading the Google Security Operations Build Partner Guide to understand our UDM schema and API requirements, and reach out to our Google Cloud Security Tech Partners team to request a development environment to accelerate your build in time for our next release cycle. You can follow all of our security announcements at Next ‘26 here.</description>
            <category>Community Blog</category>
            <pubDate>Mon, 13 Apr 2026 16:10:13 +0200</pubDate>
        </item>
                <item>
            <title>New to Google SecOps: Integrating Entra ID and Office 365 Using Feed Management (Part 3)</title>
            <link>https://security.googlecloudcommunity.com/community-blog-42/new-to-google-secops-integrating-entra-id-and-office-365-using-feed-management-part-3-3884</link>
            <description>In our previous two blogs (Part 1 and Part 2), we discussed how to set up and configure an application in Entra ID and assign permissions to access Entra ID and Office 365 events. You might be thinking at this point, “I’m here to work with Google SecOps, so why have we spent so much time talking about Entra ID?” Well, your patience will be rewarded now, because this is where we take the prep work we did in the previous blogs and apply it to feed management to ingest events.

Let’s start by clicking Add New in our Feeds view. The pop-up that opens prompts us for a Feed Name, Source Type and Log Type. For both Entra ID and Office 365, select the Third Party API source type in the drop-down.
Entra ID has two distinct log types, one for interactive sign-ins and one for audit. The logs that are generated can also be found within the Microsoft Entra admin center under Monitoring &amp;amp; Health.

While Microsoft changed the name of Azure Active Directory to Entra ID, Google SecOps lists the log types using the Azure AD naming convention, so don’t be confused. Interactive sign-in logs will use the Azure AD log type, and audit logs will use the Azure AD Directory Audit log type. Finally, if we want to ingest user context data from Entra ID, we can use the Azure AD Organizational Context log type. Office 365 is also available; it’s just further down the list. Once you have selected the log type of interest, click Next.

In the first blog of this mini-series, we discussed the importance of recording the values for our tenant ID, application (client) ID, and secret value. In this step, we enter those three values into their corresponding feed fields.
Depending on the functionality you want to monitor in Office 365, you can set up one to five feeds based on the different content types associated with the Office 365 log type, including SharePoint Audit, Exchange Audit, General Audit, and more. A separate feed is required for each content type.
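
For context, the content types above correspond to those exposed by the Office 365 Management Activity API, which is what a content-type feed polls. A minimal Python sketch, assuming that public API and an already-acquired bearer token (the tenant ID and token are placeholders), listing which content-type subscriptions are currently enabled:

import requests

TENANT_ID = "YOUR_TENANT_ID"    # placeholder
ACCESS_TOKEN = "YOUR_TOKEN"     # from the client-credentials flow

# The five content types the Management Activity API exposes; Google SecOps
# needs one feed per content type you want to ingest.
CONTENT_TYPES = [
    "Audit.AzureActiveDirectory",
    "Audit.Exchange",
    "Audit.SharePoint",
    "Audit.General",
    "DLP.All",
]

resp = requests.get(
    "https://manage.office.com/api/v1.0/" + TENANT_ID + "/activity/feed/subscriptions/list",
    headers={"Authorization": "Bearer " + ACCESS_TOKEN},
    timeout=30,
)
resp.raise_for_status()
# The API returns a JSON array of subscriptions with contentType and status.
enabled = {sub.get("contentType") for sub in resp.json() if sub.get("status") == "enabled"}
for content_type in CONTENT_TYPES:
    print(content_type, "enabled" if content_type in enabled else "not subscribed")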

Once the input parameters have been entered, we can click Next, review our settings, and then click Submit. Ingestion starts immediately, though it may take a few minutes to complete initially.

If you see a failed status next to a specific feed, you can hover over it to get additional information. If you get an HTTP 403 error, adequate permissions for the specific feed may not be set properly. In this example, our Entra ID application had Office 365 permissions but not the correct Graph API permissions for the Azure AD log type.

An HTTP 401 error may indicate that one or more of the following values is incorrect: tenant ID, application (client) ID, or secret value.
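To separate credential problems from permission problems, you can reproduce the feed’s token request yourself. A minimal Python sketch, assuming the standard Microsoft identity platform client-credentials flow (all three values are placeholders):

import requests

TENANT_ID = "YOUR_TENANT_ID"
CLIENT_ID = "YOUR_APPLICATION_CLIENT_ID"
CLIENT_SECRET = "YOUR_SECRET_VALUE"

resp = requests.post(
    "https://login.microsoftonline.com/" + TENANT_ID + "/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "scope": "https://graph.microsoft.com/.default",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)

# A 200 here means the tenant ID, client ID, and secret are valid (a 401 from
# the feed maps to a failure here); a token that lacks the expected Graph
# roles points at the HTTP 403 case instead.
print(resp.status_code, resp.json().get("error_description", "token acquired"))
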
Let’s take a few minutes and discuss the Azure AD Organizational Context log type. As previously mentioned, this feed ingests users and their properties from Entra ID, which can then enrich events with user context, much like Okta or Microsoft Active Directory. When configuring this feed, there are checkbox options to retrieve devices and/or groups.

Users in Entra ID can be assigned to one or more groups. In our example, Mike Slayton is assigned to a number of groups, one of which is the InfoSec Study Group.

If we ingest the organizational context from Entra ID without clicking the ingest groups checkbox, we get entity data associated with Mike, including his email address, when the user was created and other properties.

Here is the same entity record for Mike, except this time we clicked the ingest groups checkbox and now we have additional information pertaining to the groups he is assigned to. We’ve trimmed the data to just the InfoSec Study Group for readability, but notice that we have fields that start with the term relations.

Let’s look at another example, this time using the user Dan Cooper. Dan’s Entra ID user shows that he is assigned to the workstation wrk-pacman.

With the retrieve devices checkbox selected, we can ingest Dan’s user information from Entra ID, and here we can see relations again, but this time associating the asset and its ownership to Dan.
Using the groups and devices retrieval in entity context is optional, but if these capabilities are used in Entra ID, this additional context can be very useful in rule writing.
I hope this blog has shed some light on how you can take your Entra ID and Office 365 solutions and integrate them into Google SecOps. Now that you have the data in Google SecOps, what can you do with it? Well, you can start by checking out some of the community rules we’ve created and blogged about to gain visibility into activities like anonymous file downloads, applications and secrets being created, and more.
In the next few months, I will be making some additional use cases available around Office 365 and Entra ID. BTW, I know there are some additional Microsoft Azure-based feeds that Google SecOps makes available, including Azure Activity, Graph API Alerts, and Graph Activity logs. While we haven’t walked through the specifics of configuring these feeds, the concepts we covered today apply to them as well, so you can extend your logging as needed.</description>
            <category>Community Blog</category>
            <pubDate>Sat, 11 Apr 2026 01:54:19 +0200</pubDate>
        </item>
                <item>
            <title>How to define the same index array</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/how-to-define-the-same-index-array-7199</link>
            <description>I have an Azure log that contains multiple policy validations. Each validation generates an object inside an array with rule_name and result. All this information is parsed in SecOps as security_result, so now I have a security_result field with two or more objects. For example:
security_result[0]: rule_name = &quot;rule_1&quot;, result = &quot;success&quot;
security_result[1]: rule_name = &quot;rule_2&quot;, result = &quot;notApplied&quot;
How can I, in Search and YARA-L, perform a validation where rule_name = &quot;rule_1&quot; and the result of that policy is &quot;notApplied&quot;, without using a predefined index? The position of rule_name = &quot;rule_1&quot; can change depending on the log.</description>
            <category>Google Security Operations</category>
            <pubDate>Thu, 09 Apr 2026 22:58:51 +0200</pubDate>
        </item>
                <item>
            <title>Google Chronicle API Issue</title>
            <link>https://security.googlecloudcommunity.com/google-security-operations-2/google-chronicle-api-isssue-6946</link>
            <description>I need to create a Google Chronicle API instance using workload identity, but I am getting a permission denied error: “unable to acquire impersonated credentials”. The roles/iam.serviceAccountTokenCreator role has already been granted, and I am still getting the error. Requesting more insights.
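The flow being attempted, as a minimal Python sketch using the google-auth library (assuming Application Default Credentials as the source identity; the target service account email and scope are placeholders):

import google.auth
from google.auth import impersonated_credentials
from google.auth.transport.requests import Request

# Source identity: whatever ADC resolves to (the workload identity here).
source_credentials, _ = google.auth.default()

# Target: the service account the workload identity should impersonate.
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="chronicle-api@YOUR_PROJECT.iam.gserviceaccount.com",  # placeholder
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=3600,
)

# refresh() performs the generateAccessToken call; a permission denied here
# usually means the caller lacks roles/iam.serviceAccountTokenCreator on the
# target service account itself.
target_credentials.refresh(Request())
print("token acquired, expires:", target_credentials.expiry)</description>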
            <category>Google Security Operations</category>
            <pubDate>Thu, 09 Apr 2026 19:29:19 +0200</pubDate>
        </item>
            </channel>
</rss>
