
Hey everyone 🤩,

 

I’m new here and really excited to join this community. I wanted to start off with a topic that almost every cloud security team is struggling with these days: alert fatigue.

In Google Cloud, tools like Security Command Center and Chronicle generate tons of alerts every day. Many of them turn out to be low priority or false positives, but they still take up time and attention. After a while, it’s easy for real threats to get buried in the noise.

I’ve been trying a few things to handle it better, like:

  • Suppressing repeated low-priority alerts (see the mute-rule sketch after this list)
  • Using automation for triage
  • Tagging alerts based on asset importance
  • Testing out AI-based event correlation
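
For the suppression piece, this is roughly what I’ve been experimenting with: a minimal sketch using the google-cloud-securitycenter Python client, where the org ID and the filter are placeholders you’d adapt to your own environment.

```python
# Minimal sketch: create an SCC mute rule so a known-noisy, low-severity
# finding category stops flooding the queue.
# Assumes the google-cloud-securitycenter package is installed;
# ORG_ID and the filter below are placeholders, not real values.
from google.cloud import securitycenter

client = securitycenter.SecurityCenterClient()

mute_config = securitycenter.MuteConfig()
mute_config.description = "Mute repeated low-severity findings from a known-benign source"
mute_config.filter = 'severity="LOW" AND category="Persistence: IAM Anomalous Grant"'

request = securitycenter.CreateMuteConfigRequest(
    parent="organizations/ORG_ID",            # placeholder organization
    mute_config_id="mute-noisy-low-severity", # placeholder rule ID
    mute_config=mute_config,
)
response = client.create_mute_config(request=request)
print("Created mute rule:", response.name)
```

Muted findings are hidden from the default views rather than deleted, so the filter can be loosened later if it turns out to be too aggressive.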

 

Still, I feel there’s a fine line between cutting down noise and missing something critical.

 

So I wanted to ask everyone here:

👉 How are you reducing alert fatigue in your GCP environments?

  1. Any best practices or workflows that worked for you?
  2. Are there Chronicle or SCC tuning methods that actually help?
  3. What kind of automation setups make the biggest difference?

 

Would love to hear your thoughts and learn from your experience.

Looking forward to your insights!

 

Thanks in advance,
Just getting started, but eager to contribute and learn from you all! 🌿



Hi @AkshayManojKP,

 

Some of the things we’ve done:

 

Risk Analytics & UEBA: this is quite powerful when used correctly. If your data is mapped to support the UDM fields used as filters, and you have sufficient data, it really helps with finding anomalous behavior:

 

https://docs.cloud.google.com/chronicle/docs/detection/metrics-functions
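
The metrics functions in that doc do this natively inside your rules over UDM data; purely as an illustration of the underlying baseline-vs-today idea (not Chronicle’s implementation), it looks something like this:

```python
# Illustration only of the baseline/anomaly idea behind risk analytics;
# Chronicle's metrics functions compute this natively over UDM data.
from statistics import mean, stdev

# Hypothetical daily outbound-byte counts for one user over the baseline window.
baseline = [120, 140, 110, 135, 150, 125, 130]
today = 900

mu, sigma = mean(baseline), stdev(baseline)
z_score = (today - mu) / sigma if sigma else float("inf")

# Flag the entity when today's value deviates strongly from its own baseline.
if z_score > 3:
    print(f"Anomalous volume for this user: z={z_score:.1f} (baseline mean {mu:.0f})")
```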

‘Get Similar Cases’ action: as part of a playbook flow, consider using the out-of-the-box action that looks for similar cases. You may not want to automatically close these if it’s a High or Critical alert, but it will present a widget that makes it easy to navigate to a similar case, gather context, and shorten the time it takes to triage a case.
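
The action itself is out of the box, but the underlying idea, fingerprinting alerts so near-duplicates get grouped for triage, is easy to show. A rough, hypothetical sketch (the alert fields are placeholders, not the Chronicle/SOAR data model):

```python
# Hypothetical sketch: group alerts that share a rough "fingerprint"
# (rule name + principal + target resource) so analysts triage one
# representative case instead of many near-duplicates.
from collections import defaultdict

def fingerprint(alert: dict) -> tuple:
    # Field names here are illustrative, not the Chronicle/SOAR schema.
    return (
        alert.get("rule_name"),
        alert.get("principal_email"),
        alert.get("target_resource"),
    )

def group_similar(alerts: list[dict]) -> dict[tuple, list[dict]]:
    groups = defaultdict(list)
    for alert in alerts:
        groups[fingerprint(alert)].append(alert)
    return groups

alerts = [
    {"rule_name": "ssh_brute_force", "principal_email": "svc@example.com", "target_resource": "vm-1"},
    {"rule_name": "ssh_brute_force", "principal_email": "svc@example.com", "target_resource": "vm-1"},
    {"rule_name": "iam_anomalous_grant", "principal_email": "admin@example.com", "target_resource": "project-x"},
]
for key, members in group_similar(alerts).items():
    print(key, f"-> {len(members)} alert(s)")
```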

Within a playbook, ask Gemini for its thoughts on the case and events and present that to analysts. With careful prompt engineering you could even use this to automatically close cases (this of course requires trust in the way you prompt, and in the response being reasonably accurate).
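
Outside the playbook, the same pattern looks roughly like this with the Gemini API directly. This is a sketch assuming the google-generativeai package and an API key; the model name, case text, and verdict format are all placeholders, and it is not the SOAR-native Gemini action:

```python
# Minimal sketch: ask Gemini to summarize a case and suggest a verdict.
# Assumes the google-generativeai package and a GEMINI_API_KEY env var;
# the case text and the VERDICT format are illustrative only.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

case_summary = """
Rule: ssh_brute_force
Principal: svc@example.com
Events: 40 failed SSH logins to vm-1 within 5 minutes, then 1 success.
"""

prompt = (
    "You are assisting a SOC analyst. Summarize the case below in two sentences "
    "and finish with a single line 'VERDICT: ESCALATE' or 'VERDICT: BENIGN'.\n"
    + case_summary
)

response = model.generate_content(prompt)
print(response.text)
```

If you do let a verdict like this close cases automatically, keep High and Critical alerts out of scope, as above.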

If you’re getting a ton of alerts and a lot of them are false positives, try to fix this as close to the source as possible. Review the rule and the false positives and see whether you can implement better logic to detect genuinely anomalous activity. Use global context (depending on the rule’s purpose), such as WHOISXMLAPI or GTI, to help capture truly suspicious behavior; lean on UEBA and risk analytics; and, of course, use composite detections as mentioned above to find where an attack is part of a chain of activity.
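
As one concrete way to pull in that global context, here’s a hedged sketch of a WHOIS enrichment step before deciding whether a detection is worth keeping. The endpoint follows WHOISXMLAPI’s public WHOIS service; the API key and the response fields read below are assumptions to verify against their docs:

```python
# Sketch: enrich a domain from an alert with WHOIS data before triage.
# The API key is a placeholder credential and the response field names
# (WhoisRecord, registrarName, createdDate) are assumptions to verify.
import os
import requests

WHOIS_URL = "https://www.whoisxmlapi.com/whoisserver/WhoisService"

def whois_lookup(domain: str) -> dict:
    resp = requests.get(
        WHOIS_URL,
        params={
            "apiKey": os.environ["WHOISXML_API_KEY"],  # placeholder credential
            "domainName": domain,
            "outputFormat": "JSON",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

record = whois_lookup("example.com").get("WhoisRecord", {})
# A very recent registration date is one signal that a domain deserves a closer look.
print("Registrar:", record.get("registrarName"))
print("Created:", record.get("createdDate"))
```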

 

Use context-enriched data in rules  |  Google Security Operations  |  Google Cloud

Hope these help!

 

Kind Regards,

Ayman