Seeking Advice: Best Approach for Feed-Level Monitoring & Alerting in Chronicle SecOps
Hello,
I’m looking for advice regarding the operation of Chronicle SecOps.
Currently, we’re managing multiple Feeds (log ingestion sources) in SecOps, and our goal is to detect issues such as ingestion failures or delays on a per-Feed basis as early as possible and trigger alerts accordingly.
At the moment, we haven’t decided on any specific implementation method. We’re considering various approaches, including built-in SecOps or Cloud Monitoring features, API integrations, or other cloud services, but we’re not sure which would be the most effective or practical.
If anyone has experience implementing Feed-level ingestion monitoring and alerting in Chronicle SecOps, I’d appreciate your recommendations, tips, or any lessons learned.
Thank you!
We’re using Cloud Monitoring alert policies that send alerts to a SOAR webhook, which then creates a case in the SIEM for the log source/log type that stopped being ingested.
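For reference, a minimal sketch of that setup with the Cloud Monitoring Python client (google-cloud-monitoring) is below: a webhook notification channel pointing at the SOAR endpoint, plus a metric-absence alert policy. The project ID, webhook URL, and the chronicle.googleapis.com/ingestion/log/record_count metric name are placeholders/assumptions, so check them against the ingestion metrics actually exported in your project.

```python
# Sketch only: a webhook notification channel plus a metric-absence alert policy.
# The project ID, webhook URL, and metric type are assumed placeholders.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

project = "projects/my-secops-project"  # hypothetical project ID

channel_client = monitoring_v3.NotificationChannelServiceClient()
policy_client = monitoring_v3.AlertPolicyServiceClient()

# Webhook channel pointing at the SOAR case-creation endpoint (assumed URL).
soar_channel = channel_client.create_notification_channel(
    name=project,
    notification_channel=monitoring_v3.NotificationChannel(
        type_="webhook_tokenauth",
        display_name="SOAR case webhook",
        labels={"url": "https://soar.example.com/hooks/ingestion"},
    ),
)

# Fire when a monitored ingestion series stops reporting for one hour.
policy = monitoring_v3.AlertPolicy(
    display_name="SIEM ingestion stopped",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="No ingestion records for 1h",
            condition_absent=monitoring_v3.AlertPolicy.Condition.MetricAbsence(
                # Assumed metric name; verify what Chronicle exports in your project.
                filter='metric.type = "chronicle.googleapis.com/ingestion/log/record_count"',
                duration=duration_pb2.Duration(seconds=3600),
            ),
        )
    ],
    notification_channels=[soar_channel.name],
)
policy_client.create_alert_policy(name=project, alert_policy=policy)
```

The condition_absent block is what covers the “stopped being ingested” case; the rest is just wiring the policy to the webhook channel.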
Thank you very much to everyone who has replied and shared insights—it’s truly helpful and appreciated.
If it’s alright, I’d like to ask the broader community as well: Has anyone set up a monitoring solution that compares the current log ingestion volume to the average from a week ago, and triggers an alert when there’s a deviation of more than 20%?
I am looking for any examples, best practices, or recommendations for configuring such threshold-based monitoring, whether through Cloud Monitoring, Chronicle, or other integrations.
Any further advice or resources would be greatly appreciated. Thank you again for your kind support!
You can set a threshold, rather than just absence, with Cloud Monitoring (see the attached screenshot of the alert policy page).
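To show the difference in code, a threshold-based condition in the same Python client looks roughly like the sketch below (condition_threshold instead of condition_absent). The 1000-records-per-hour value and the metric name are made-up placeholders.

```python
# Sketch: a fixed-threshold condition instead of metric absence.
# The threshold value and metric type are assumed placeholders.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

threshold_condition = monitoring_v3.AlertPolicy.Condition(
    display_name="Fewer than 1000 records ingested per hour",
    condition_threshold=monitoring_v3.AlertPolicy.Condition.MetricThreshold(
        filter='metric.type = "chronicle.googleapis.com/ingestion/log/record_count"',
        comparison=monitoring_v3.ComparisonType.COMPARISON_LT,
        threshold_value=1000,  # example value, not a recommendation
        duration=duration_pb2.Duration(seconds=0),  # alert on the first violating point
        aggregations=[
            monitoring_v3.Aggregation(
                alignment_period=duration_pb2.Duration(seconds=3600),
                per_series_aligner=monitoring_v3.Aggregation.Aligner.ALIGN_SUM,
            )
        ],
    ),
)
```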
Thank you for your response!
Just to clarify, I’d like to set up an alert that compares the current log ingestion volume to the average over the previous week, and triggers when the deviation exceeds 20%.
Is this kind of dynamic threshold (based on moving averages or historical values) achievable through the standard Cloud Monitoring UI, or does it require a custom MQL query or other solution?
If anyone has experience with configuring this sort of ratio-based alert, I’d be grateful for any pointers or examples.
Thank you again for your support!
@cmorris Thank you very much for your reply and for sharing the screenshot of the Cloud Monitoring alert policy page.
I noticed that the response and screenshot show how to set a fixed threshold for log ingestion, which is very useful.
However, I’m specifically interested in whether Cloud Monitoring can compare the current log ingestion volume to the average from one week ago, and trigger an alert if there is a deviation greater than 20%.
Do you know if Cloud Monitoring supports configuring such alerts based on a relative threshold (for example, if the current value deviates by more than 20% from the previous week’s average)?
Cloud Monitoring is well-suited for monitoring log sources, especially since it allows for configuring alert notifications through various channels such as SMS, Email, or creating a Case in GSO. This makes it a strong option for real-time visibility and response.
However, if you have a mix of noisy and less noisy log sources, handling alerts can become a bit tricky due to the varying volume of data.
In such cases, the most effective approach is:
1. Configure Cloud Monitoring to send alerts via Email (to avoid overwhelming other notification channels).
2. Create a SecOps dashboard that categorizes log sources into Noisy and Less Noisy groups.
This way, when an alert is received via email from Cloud Monitoring, analysts can refer to the SecOps dashboard for deeper investigation and context, tailored to the log source type.
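For the email piece, a minimal sketch with the same Python client is below; the project ID and mailbox address are placeholders.

```python
# Sketch: email notification channel for the Cloud Monitoring alerts described above.
# The project ID and email address are placeholders.
from google.cloud import monitoring_v3

client = monitoring_v3.NotificationChannelServiceClient()
client.create_notification_channel(
    name="projects/my-secops-project",  # hypothetical project ID
    notification_channel=monitoring_v3.NotificationChannel(
        type_="email",
        display_name="SOC mailbox",
        labels={"email_address": "soc-alerts@example.com"},
    ),
)
```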
Thank you again for all the helpful responses so far.
I’d like to follow up with an additional question. I am trying to configure Cloud Monitoring to send alerts based on a relative threshold comparison—specifically, to trigger an alert when the current log ingestion count decreases or increases by more than 20% compared to the average of the past week.
When I attempt to set the rolling window to 7 days in the alert policy configuration screen, I see an error message: “Value must be at most 1 day 1 hr.” Please see the attached screenshot for reference.
Is there any way to achieve this type of weekly average comparison alert with Cloud Monitoring? Any tips, examples, or documentation would be greatly appreciated.
Thank you for your continued support.
As far as I know, we can't set it to more than 1 day 1 hr.
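One possible workaround, offered only as a sketch I haven't validated end to end: instead of a long rolling window, an MQL-based alert condition can compare the current series against the same series shifted back one week with time_shift, which avoids the 1 day 1 hr window limit. Note this compares each hour to the same hour a week earlier rather than to a true 7-day average, and the metric name and metric.log_type label are assumptions, so test the query in the Metrics Explorer code (MQL) editor before wiring it into a policy.

```python
# Sketch: MQL condition comparing current ingestion to the same series one week earlier.
# Metric type and the metric.log_type label are assumptions; validate the query first.
from google.cloud import monitoring_v3
from google.protobuf import duration_pb2

project = "projects/my-secops-project"  # hypothetical project ID

# Hourly sums per log type, joined with the same series shifted back 7 days;
# the condition is true when the current hour is more than 20% above or below
# the value from one week earlier.
mql_query = """
{ fetch chronicle.googleapis.com/ingestion/log/record_count
  | align delta(1h) | every 1h | group_by [metric.log_type], [sum(val())]
; fetch chronicle.googleapis.com/ingestion/log/record_count
  | align delta(1h) | every 1h | group_by [metric.log_type], [sum(val())]
  | time_shift 7d }
| join
| condition val(0) < 0.8 * val(1) || val(0) > 1.2 * val(1)
"""

policy = monitoring_v3.AlertPolicy(
    display_name="Ingestion deviates >20% from one week ago",
    combiner=monitoring_v3.AlertPolicy.ConditionCombinerType.OR,
    conditions=[
        monitoring_v3.AlertPolicy.Condition(
            display_name="Week-over-week deviation per log type",
            condition_monitoring_query_language=(
                monitoring_v3.AlertPolicy.Condition.MonitoringQueryLanguageCondition(
                    query=mql_query,
                    duration=duration_pb2.Duration(seconds=3600),
                )
            ),
        )
    ],
)
monitoring_v3.AlertPolicyServiceClient().create_alert_policy(
    name=project, alert_policy=policy
)
```

The || in the condition is what covers both a 20% drop and a 20% spike relative to the week-earlier value.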