Hello, community!

I’m working with SecOps Chronicle and the BindPlane agent and am facing an issue where login/logout audit logs and invalid password attempts are not appearing in SecOps Chronicle, even though they are successfully exported and logged in a custom log file (pem.log).

Here’s the situation:

I’m using the BindPlane agent to export logs to SecOps Chronicle. I’ve set up a custom exporter to log events to a file (pem.log), and this file contains login/logout events, invalid password attempts, and alert status changes.

The exporter configuration looks like this:

exporters:
  file:
    path: /a/b/pem.log


The login/logout, invalid password attempts, and alert status change logs are successfully captured in the pem.log file.

However, in SecOps Chronicle, only the alert update logs are successfully processed, while the login/logout and invalid password logs are missing.

What I’ve already checked:

  • Log Type: The log_type and parser are correctly configured and match SecOps Chronicle’s setup. Alert update logs are processed correctly in SecOps Chronicle.
  • Exporter: The logs are successfully exported to the pem.log file.
  • Log Format: The log format for all logs (login/logout, invalid password, and alert updates) is the same, yet only the alert status changes appear in SecOps Chronicle.

My questions are:

  • How can I check if there are any filters or rules on the SecOps Chronicle ingestion side that might be preventing login/logout logs or invalid password attempts from being ingested or processed correctly?
  • Are there any common filtering mechanisms, configuration issues, or parsing errors in SecOps Chronicle that could be preventing these specific logs from appearing?
  • What logs or configurations within SecOps Chronicle or BindPlane should I check to investigate this further?
  • Why might login/logout logs and invalid password attempts show up in the exported PEM file but not appear in SecOps Chronicle, while alert status changes are processed correctly?

Any help or suggestions would be greatly appreciated!

I would try a raw log search in the first instance (https://cloud.google.com/chronicle/docs/investigation/raw-log-search-in-investigate): pick a specific unique value from an example log and see whether it shows up in SecOps or not. If you can see the entire raw log, this would appear to be a parsing issue. If you do not see the expected log, then it's a collection/configuration issue on the agent, e.g., is it a multiline log (perhaps only the first part is being collected)? Go from there.


Hi @cmmartin_google
Thanks for the suggestion! I did check using the raw log search in SecOps Chronicle, specifically for the user_authentication logs. Initially, up until 4th September, I was able to see some of the logs, but not all. Some login/logout and invalid password attempts were not reaching SecOps.

Since then, I haven’t been able to see any user_authentication logs at all in the raw log search.

Regarding the log format, this is not a multiline log. The logs are coming through syslog, and in the pem.log file, I can see individual, separate logs being collected (no multiline formatting).

What’s concerning is: if this were an agent issue, why was it partially sending logs before, but now it’s not sending any user_authentication (login, logout, invalid_password) logs at all? All other logs, like alert updates and asset updates, are still arriving in SecOps Chronicle, so this doesn’t seem like a broader collection issue.

Could it be something related to the configuration or filtering on the agent side that’s affecting only the user_authentication logs?

I am using a simple configuration, as below:

receivers:
  tcplog:
    listen_address: "0.0.0.0:54525"

exporters:
  chronicle/chronicle_w_labels:
    compression: gzip
    creds: '{ json blob for creds }'
    customer_id: <customer_id>
    endpoint: malachiteingestion-pa.googleapis.com
    ingestion_labels:
      env: dev
    log_type: <applicable_log_type>
    namespace: testNamespace
    raw_log_field: body

service:
  pipelines:
    logs/source0__chronicle_w_labels-0:
      receivers:
        - tcplog
      exporters:
        - chronicle/chronicle_w_labels
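One quick sanity check on the collection path: send a uniquely tagged, newline-terminated syslog-style line to the tcplog receiver, then raw-log-search for that tag in SecOps. The sketch below is self-contained (it stands up a mock listener on an ephemeral port so it can run anywhere); against the real agent you would connect to the agent host on 54525 instead, and the marker string is of course a made-up example:

```python
import socket
import threading

MARKER = "TEST-UNIQUE-MARKER-12345"  # hypothetical tag to raw-log-search for later

# Stand-in listener so this sketch is runnable on its own; the real tcplog
# receiver listens on 0.0.0.0:54525 per the config above.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port for the mock; use 54525 for the agent
srv.listen(1)
port = srv.getsockname()[1]

received = []

def accept_one():
    conn, _ = srv.accept()
    with conn:
        received.append(conn.recv(4096).decode())

t = threading.Thread(target=accept_one)
t.start()

# One newline-terminated syslog-style auth event, as the tcplog receiver expects.
msg = f"<86>Sep 10 12:00:00 myhost sshd[123]: {MARKER} Failed password for invalid user\n"
with socket.create_connection(("127.0.0.1", port)) as c:
    c.sendall(msg.encode())

t.join()
srv.close()
print(MARKER in received[0])  # True: the tagged line arrived intact
```

If the marker never appears in a raw log search after this, the problem is between the receiver and Chronicle rather than in the upstream syslog sender.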

Any insights on why this would be happening would be greatly appreciated!


I don’t see any processors in your pipeline that would be dropping data.

 

Using syslog over TCP should rule out truncation of the message, but there could be framing issues (a stretch). One troubleshooting option would be running a packet capture on the host to see whether those auth messages arrive as valid syslog, e.g., in something like tcpdump or Wireshark (using protocol analysis).
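On the framing point: RFC 6587 allows two framings for syslog over TCP, non-transparent (newline-delimited) and octet-counted (each record prefixed with its byte length). If the sender uses one and the receiver expects the other, records can get glued together or split. A minimal illustration (toy parsers, not the agent's actual implementation):

```python
def split_newline(stream: bytes):
    """Non-transparent framing: records are newline-terminated."""
    return [m for m in stream.split(b"\n") if m]

def split_octet_counted(stream: bytes):
    """Octet counting: each record is 'LEN SP MSG' (RFC 6587)."""
    out, i = [], 0
    while i < len(stream):
        sp = stream.index(b" ", i)
        n = int(stream[i:sp])
        out.append(stream[sp + 1 : sp + 1 + n])
        i = sp + 1 + n
    return out

# The same two auth events on the wire under each framing.
newline_framed = b"<86>auth: login ok\n<86>auth: bad password\n"
octet_counted = b"18 <86>auth: login ok22 <86>auth: bad password"

print(split_newline(newline_framed))       # two clean records
print(split_octet_counted(octet_counted))  # two clean records
# Feeding octet-counted input to a newline splitter yields ONE glued record:
print(split_newline(octet_counted))
```

A packet capture makes it obvious which framing the source is actually emitting.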

 

It may be easier to send the data to another syslog daemon to rule out the OTEL collector, and write the messages to a local file, for example.
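For instance, a minimal rsyslog drop-in that listens on the same TCP port and dumps everything it receives to a flat file might look like this (a sketch only; the file path is an assumption, and the port matches the tcplog config above):

```
# /etc/rsyslog.d/debug-tcp.conf (hypothetical)
# Listen on TCP 54525 and write every received message to one file,
# for side-by-side comparison with pem.log.
module(load="imtcp")
input(type="imtcp" port="54525")
*.* /var/log/debug_auth.log
```

If the auth events land intact in that file but still not in Chronicle, the sender is fine and the issue sits in the collector or ingestion side.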

 

There is a deduplication feature in Google SecOps whereby, if the first few hundred bytes of a log are identical to a prior message, it is not indexed. This should not happen with syslog if the format has a valid syslog header, e.g., a unique date/time plus the sending device, but if by some chance these messages are not getting a syslog header and have a static first part, that could cause deduplication to occur (unlikely).
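To illustrate why the header matters (the exact dedup window is not documented here, so the 256-byte prefix below is purely an assumption for the sketch): two auth events that differ only near the end share a long identical prefix if they lack a header, whereas a syslog timestamp makes the prefixes diverge immediately:

```python
PREFIX = 256  # assumed dedup comparison window, for illustration only

def same_prefix(a: str, b: str, n: int = PREFIX) -> bool:
    return a[:n] == b[:n]

# Without a header: long static lead, the difference only at the end.
static = "AUTH_EVENT " + "x" * 300
a = static + " user=alice"
b = static + " user=bob"

# With a syslog header: the timestamp differs within the first few bytes.
h1 = "<86>Sep 10 12:00:01 myhost sshd: " + static + " user=alice"
h2 = "<86>Sep 10 12:00:02 myhost sshd: " + static + " user=bob"

print(same_prefix(a, b))    # True  -> could look like a duplicate
print(same_prefix(h1, h2))  # False -> unique header, no dedup concern
```

Comparing a couple of the missing login/logout raw lines from pem.log this way would quickly confirm or rule out this theory.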

 

The other thought: do the OTEL collector logs on the agent in question show anything interesting?

https://cloud.google.com/chronicle/docs/ingestion/use-bindplane-agent#otel_collector_service_and_logs