Hello Everyone,

I have ingested several logs with different formats into Chronicle under the same log source type. I later discovered that the actual logs are syslog in CEF format rather than JSON. I have built a parser, and it works fine. However, validation fails because logs in different formats exist in Chronicle under that same log source type.

Is there a way to delete all logs—whether JSON, raw, or any other format—associated with the same log source type from SecOps Chronicle? Alternatively, is there another way to avoid this validation error?

There is no way to delete logs without a support case, and that process is time-consuming.


If I understand correctly, different formats for a specific log type should not matter here. We can ingest multiple formats for a log. Is there a log message for the validation error?



Hello @dnehoda

Thank you for your response. I have created a feed and ingested logs into Chronicle. Initially, to verify whether the ingestion was working correctly, I ingested a simple JSON log:

{
"key1": "value1",
"key2": "value2"
}

However, the actual log format is CEF. I then created a parser in Chronicle to extract values from the syslog messages using regular expressions, as shown below:

"message" => [
"%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} CEF: (?P<header_version>[^|]+)\\\\|(?P<device_vendor>[^\\\\|]+)\\\\|%{GREEDYDATA:cef_event_attributes}"
]

The parser works fine for logs in CEF format. However, when I proceed with validation, I encounter the following error:

ERROR: generic::unknown: pipeline.ParseLogEntry failed: LOG_PARSING_CBN_ERROR: "generic::internal: pipeline failed: filter grok (0) failed: failed to parse data with all match patterns"
Log causing the issue:
{"key1": "value1", "key2": "value2"}

It seems that the validation is failing because the feed contains both JSON and CEF format logs under the same log source type. Since logs will always be in CEF format, is there a way to avoid this validation issue?

Any guidance on resolving this would be greatly appreciated.
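To see why validation reports a grok failure on the JSON line, here is a minimal Python sketch. The sample CEF line and the simplified stand-ins for the `SYSLOGTIMESTAMP` and `HOSTNAME` grok patterns are assumptions for illustration, not the exact Chronicle behavior:

```python
import re

# Hypothetical sample lines; the CEF one mirrors the shape the grok
# pattern expects (syslog timestamp, hostname, "CEF:" header).
cef_line = "Mar  4 10:15:02 fw01 CEF: 0|Vendor|product=fw act=allow"
json_line = '{"key1": "value1", "key2": "value2"}'

# Rough Python approximation of the grok pattern above
# (SYSLOGTIMESTAMP and HOSTNAME are simplified here).
CEF_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) CEF: (?P<header_version>[^|]+)\|"
    r"(?P<device_vendor>[^|]+)\|(?P<cef_event_attributes>.*)$"
)

print(bool(CEF_RE.match(cef_line)))   # True  - the CEF line parses
print(bool(CEF_RE.match(json_line)))  # False - the JSON line fails every match pattern
```

The second line is exactly the "failed to parse data with all match patterns" case: the old JSON test log is still in the validation window, and the parser has no branch that handles (or drops) it.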


Hello,
You do not have to parse all logs for the parser to pass the validation step, but you do have to handle all known cases (every log format already present on the platform up to 30 days ago) in the parser code.
To resolve your issue, you just need to add a snippet that drops logs that are not CEF.
E.g.:

filter {
  # Initialization of your parser vars
  # [...]

  # Modify your grok snippet to intentionally flag what is not CEF
  grok {
    match => {
      "message" => [
        "%{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} CEF: (?P<header_version>[^|]+)\\\\|(?P<device_vendor>[^\\\\|]+)\\\\|%{GREEDYDATA:cef_event_attributes}"
      ]
    }
    on_error => "not_cef"
  }

  # Intentionally drop all logs that have the error flag (= are not CEF)
  if [not_cef] {
    drop {}
  }

  # Continue with your parsing/mapping
  # [...]
}
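The drop-on-error approach above can be mimicked in plain Python to show the control flow. This is a sketch only: the simplified regex below stands in for the grok pattern, and `parse_or_drop` is a hypothetical helper, not a Chronicle API:

```python
import re

# Simplified stand-in for the CEF grok pattern (an assumption for this sketch).
CEF_RE = re.compile(
    r"^(?P<timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) CEF: (?P<header_version>[^|]+)\|"
    r"(?P<device_vendor>[^|]+)\|(?P<cef_event_attributes>.*)$"
)

def parse_or_drop(message: str):
    """Return extracted CEF fields, or None to mimic drop {}."""
    m = CEF_RE.match(message)
    if m is None:
        return None        # grok failed -> not_cef flag set -> drop the log
    return m.groupdict()   # grok matched -> continue with parsing/mapping

logs = [
    "Mar  4 10:15:02 fw01 CEF: 0|Vendor|product=fw act=allow",
    '{"key1": "value1", "key2": "value2"}',
]
parsed = [fields for log in logs if (fields := parse_or_drop(log))]
print(len(parsed))  # 1 -> the JSON test log was silently dropped
```

With the non-CEF logs dropped instead of erroring, validation no longer fails on the old JSON sample, while every real CEF log still flows through the mapping stage.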


Thanks @chrisd2, it worked.

