Question

Silent log source detection

  • June 3, 2024
  • 23 replies
  • 551 views


Hi Team,

I am looking to get an alert when logs from a specific endpoint on a server stop arriving. Since Ingestion API monitoring is not granular enough to surface this, I planned to create a dedicated YARA-L rule for that server. The rule I had planned looked something like this:

rule silentLogFromCriticalEndpoint {
  meta:
    author = "Srijan Kafle"
    description = "Detects missing logs from server"
    severity = "MEDIUM"
    created = "2024-06-03"

  events:
    $session.principal.namespace = "prodSydney" and
    $session.principal.hostname = "redacted"
    $source = $session.principal.hostname
    $tenant = $session.principal.namespace

  match:
    $source, $tenant over 12h

  outcome:
    $risk_score = 3
    // logic to check delay here which is not working
    $delay = $session.metadata.event_timestamp - timestamp.current_seconds()

  condition:
    $session and $delay >= xyz
}

 

The delay calculation (the $delay assignment and the condition that uses it) isn't working currently, but I wanted to include it to share the logic. Is there any way I can calculate the delay? An alternative was to search over a shorter duration (e.g. 1 hour) and trigger an alert if there are no results; but since the query returns no fields when nothing matches, that doesn't trigger either.

Any alternative that detects at a similar granularity would help.

23 replies

JeremyLand
Staff
  • Staff
  • June 3, 2024

There isn't really a great way to solve this from the YARA-L side, but there is a blog post from Chris Martin that covers how to approach this problem from BigQuery: https://medium.com/@thatsiemguy/silent-asset-detection-47ad34fdab55. I recommend you give this a read.


Rene_Figueroa
Staff

You can use Cloud Monitoring for ingestion notifications:

https://cloud.google.com/chronicle/docs/ingestion/ingestion-notifications-for-health-metrics


  • Author
  • Bronze 2
  • June 4, 2024

Hi @Rene_Figueroa,

As stated in the post, for my use case the Ingestion API is not granular enough: it only goes as far as the source, not the fields within the pulled data. Let me know if I am missing something in the implementation.


Rene_Figueroa
Staff

Hi @Rene_Figueroa,

As stated in the post, for my use case the Ingestion API is not granular enough: it only goes as far as the source, not the fields within the pulled data. Let me know if I am missing something in the implementation.


Ah, apologies, I missed the Ingestion API part. I think the blog Jeremy suggested will be helpful here, then.

Rules are meant to create detections on certain events, not to track missing logs, so the logic for this is not there.


  • Bronze 1
  • June 18, 2024

A combination of Cloud Logging, the Bindplane agent, Cloud Monitoring, and 1:1 ingestion labels might be a solution.


AymanC
  • Bronze 5
  • June 30, 2024

You could create a rule that matches across the log source and identifies whether at least one event is being generated, then:

  • use the runRetrohunt API endpoint (https://cloud.google.com/chronicle/docs/reference/detection-engine-api#runretrohunt) to run the rule over a historical window;
  • use listRetrohunts (https://cloud.google.com/chronicle/docs/reference/detection-engine-api#listretrohunts) to identify when the retrohunt has finished running;
  • use getRetrohunt (https://cloud.google.com/chronicle/docs/reference/detection-engine-api#getretrohunt) to view the retrohunt detections;
  • use a SOAR platform or a Python script to process those detections and identify which log sources haven't generated a detection, and therefore aren't logging;
  • re-ingest that discrepancy into Google Chronicle using the Ingestion API (https://cloud.google.com/chronicle/docs/reference/ingestion-api) and create a rule that looks for that specific ingested log.

This is just one way; it does require combining in-platform and out-of-platform tooling to reach a solution.
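The poll-until-finished portion of that workflow can be sketched in Python. Note that the terminal state name ("DONE") and the idea of passing a state-fetching callable are my own illustrative assumptions; check the actual listRetrohunts response fields against the Detection Engine API docs linked above.

```python
import time
from typing import Callable

def wait_for_retrohunt(get_state: Callable[[], str],
                       poll_interval_s: float = 30.0,
                       timeout_s: float = 3600.0) -> None:
    """Poll a retrohunt's state (e.g. via listRetrohunts) until it reports
    the assumed terminal state "DONE", or give up after timeout_s."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_state() == "DONE":
            return
        time.sleep(poll_interval_s)
    raise TimeoutError("retrohunt did not finish within the timeout")
```

In practice get_state would wrap an authenticated HTTP call to the listRetrohunts endpoint; once it returns, the script would call getRetrohunt and diff the detections against the expected source list.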


  • New Member
  • February 28, 2025

Is there a way to resolve this issue without BigQuery?


kentphelps
Staff
  • Staff
  • February 28, 2025

Please take a look at https://bindplane.com/docs/how-to-guides/secops-silent-host-monitoring to see if that solution can meet your needs.


mogriffs
  • Bronze 1
  • July 15, 2025

I have the same requirement. I feel like this would be an extremely valuable feature and relatively simple to implement in the product. Doing it without purpose-built tools is very complex.

What we need is to track the asset identifier for a log source, on a per-asset basis, and identify assets that fall silent. This is a common requirement in regulated industries, and statistical approaches are not adequate.

For example, I need to be able to PROVE that 100% of my PCI servers are logging, and that I will get an alert if any server stops. I can't just say that my PCI log volumes are at normal levels.

I appreciate the guides above, but BigQuery requires additional spend, and the other approach requires some API coding. I will investigate this if I get some time, but this would be so much easier as a product feature.


  • New Member
  • July 16, 2025


Since I didn't receive a positive answer, I created a Python script in the forwarder. This script maintains a status record within the forwarder and passes results using a separate log lines. This log is then processed by a custom parser in Chronicle to identify silent log sources. This method allows me to identify silent log sources for each asset within a given time period. The asset list is fed into the script using a JSON file stored in the forwarder.
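A minimal sketch of that forwarder-side pattern. The file paths, the status-record shape (asset → last-seen epoch seconds), and the emitted log format are illustrative assumptions of mine, not the poster's actual script:

```python
import json
import time
from pathlib import Path

MAX_AGE_S = 3600  # assumed threshold: flag assets silent for over an hour

def find_silent(last_seen: dict[str, float], now: float, max_age_s: float) -> list[str]:
    """Assets whose most recent log is older than max_age_s (never-seen
    assets, recorded as 0.0, are always flagged)."""
    return sorted(a for a, ts in last_seen.items() if now - ts > max_age_s)

def run_check(asset_file: Path, status_file: Path) -> str:
    """Load the JSON asset list, merge in the persisted status record, and
    emit one JSON log line for the custom parser to pick up."""
    assets = json.loads(asset_file.read_text())            # e.g. ["host-a", "host-b"]
    status = json.loads(status_file.read_text()) if status_file.exists() else {}
    last_seen = {a: float(status.get(a, 0.0)) for a in assets}
    silent = find_silent(last_seen, time.time(), MAX_AGE_S)
    return json.dumps({"type": "silent_source_check",
                       "silent_count": len(silent),
                       "silent_assets": silent})
```

The emitted line is what the custom parser maps into UDM so a rule can alert on it.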


uts
Staff
  • Staff
  • July 23, 2025

We have now released a three-pronged approach to silent host monitoring. One of the prongs is a detection-rule approach, as mentioned in the original post. Please see the details here: https://cloud.google.com/chronicle/docs/ingestion/silent-host-monitoring


maxjunker
  • Bronze 4
  • July 31, 2025

Hi @uts,

we have been searching for a solution for silent host monitoring for about a year now, which is why I was very interested in these approaches. I tested the rule-based approach, but it doesn't work reliably. I can't say for certain why, but it could be a scheduling issue.

The other approach, leveraging Cloud Monitoring, seems more robust to us. Unfortunately, the ingestion labels cannot be set when using Bindplane with the HTTPS Ingestion API; support told us this is only available for the gRPC API (case 58321511). Support raised a feature request (411445879), but it's unclear whether it will be implemented.

The dashboard option is nice for an overview, but impractical for alerting a customer.

@all:

Has anyone here been successful with the rule-based approach?

/Max


uts
Staff
  • Staff
  • July 31, 2025

Hi,

I would love to understand what is stopping the rule-based approach from working.

As for Cloud Monitoring through HTTPS ingestion, I can have this enabled for you. Please send me your SecOps customer ID privately and we can enable that preview feature for you.


ChrisSec
  • New Member
  • August 1, 2025

@uts 
We would like to have this feature enabled. How can I send you a private message?

Thank you


uts
Staff
  • Staff
  • August 1, 2025

Have your Google Account Manager / Google contact reach out to me and we will get this enabled for you.


  • November 6, 2025

Is there any way to write a rule in the SIEM to identify the absence of logs, rather than having an alert in GCP?
We are more interested in setting this up as a SIEM detection rule.


uts
Staff
  • Staff
  • November 6, 2025

mogriffs
  • Bronze 1
  • November 12, 2025

The approaches in the link all seem to rely on logs being present at some point, and then detecting a gap. This only addresses the case of healthy log sources becoming unhealthy.

The other key requirement is detection of log sources that should be present, but have never logged. This is a very common occurrence if systems are built/onboarded in a non-standard way.

I have solved this as follows, but an in-platform solution would be massively appreciated.

Step 1:
Define a reference list / data table listing all the log sources that should be logging. This can be manually maintained, but I use a script to query our CMDB and update this via the API.

Step 2:
I have a second script that performs a UDM search like this via the API:
 

$e.metadata.log_type = "CISCO_ISE"
strings.to_lower($e.observer.hostname) IN %ig_prod_cisco_ise_pci nocase
$host = $e.observer.hostname

match:
  $host

outcome:
  $count = count($host)

Step 3:
The second script queries the matching reference list via the API (I use a 24-hour window), compares it with the results of the metrics query above, and then sends a log into the SIEM describing any mismatch between the expected hosts and the hosts found.

Step 4:
An alert rule in the SIEM picks up the injected log when it reports a non-zero number of missing hosts, and triggers an alert to the SOC for investigation.

It's all a bit clunky, and it feels like such a simple connection to make inside the product, which already has all the information needed to see the mismatch between a static list and a set of search results.
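Steps 2-4 reduce to a set difference between the reference list and the hosts the UDM search returned, serialized as a log line the alert rule can key on. A sketch under my own assumptions about the injected log's shape (the "pci_logging_gap" type and "missing_count" field are hypothetical names):

```python
import json

def mismatch_log(reference_hosts: list[str], search_hosts: list[str]) -> str:
    """Compare the expected hosts (step 1's reference list) with the hosts
    the UDM search actually returned (step 2), and build the log line to
    inject into the SIEM (step 3). The alert rule (step 4) fires when
    missing_count is non-zero."""
    expected = {h.lower() for h in reference_hosts}
    found = {h.lower() for h in search_hosts}
    missing = sorted(expected - found)
    return json.dumps({"type": "pci_logging_gap",
                       "missing_count": len(missing),
                       "missing_hosts": missing})
```

Lower-casing both sides mirrors the `strings.to_lower(...) nocase` matching in the UDM search, so hostname case differences don't produce false gaps.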

 


Austin123
  • Bronze 3
  • November 20, 2025

Hi ​@uts 

Could you please provide the logic to check whether no logs have been received from a source for 24 hours?

On most of the previous posts related to this query, I am not seeing any responses confirming it was working as expected.


ar3diu
  • Silver 2
  • December 3, 2025

@uts 

I tried to use the examples given in the documentation to build a rule that detects when a log source was silent in the last 12 hours but not in the last 24 hours. I ran the rule, but the results are not reliable: it triggers on healthy log types as well. Do you have any idea what could be wrong?

rule Silent_Log_Type_Monitoring {
  meta:

  events:
    $event.metadata.event_timestamp.seconds > timestamp.current_seconds() - 86400 // 24h
    $silent_log_type = $event.metadata.log_type

  match:
    $silent_log_type over 24h

  outcome:
    $current_time = timestamp.current_seconds()
    $max_time = max($event.metadata.event_timestamp.seconds)
    $max_time_date = timestamp.get_timestamp($max_time)
    $diff_seconds = $current_time - $max_time
    $is_truly_silent_12h = if($diff_seconds > 43200, "true", "false") // silent in the last 12h

  condition:
    $event and $is_truly_silent_12h = "true"
}

 


uts
Staff
  • Staff
  • December 4, 2025

It looks like this rule can generate false positives because of the way rule re-runs interact with the current time. We are recommending a Cloud Monitoring-based approach for silent host monitoring (SHM).


ar3diu
  • Silver 2
  • December 4, 2025

I see the docs were also updated. Thanks ​@uts for confirming!


Austin123
  • Bronze 3
  • December 8, 2025

It looks like this rule can generate false positives because of the way rule re-runs interact with the current time. We are recommending a Cloud Monitoring-based approach for silent host monitoring (SHM).

Hi ​@uts ,

I have around 100 devices, and some of them may stop sending logs to SecOps. I need to create an alert policy that monitors all these devices. 

 

Thanks