
We are collecting server monitoring logs through Wazuh, an open-source EDR. Looking at the raw messages, multiple log records are transmitted within a single JSON payload and ingested into SecOps SIEM. Since the customer has no choice but to bundle and send multiple JSON log messages internally, how can the text be split apart by the parser?

The pattern between the first and last lines of each log message is consistent, and the logs are divided line by line.

However, the UDM parser parses and displays only the first incoming line.

When multiple lines of JSON are received like this, can the parser be modified to parse each line separately?

{"timestamp": "2024-07-19T14:52:45.782+0900", "rule": {"level": 3, "description": "ProFTPD: FTP Authentication success.", "id": "11205", "mitre": {"id": ["T1078"], "tactic": ["Defense Evasion", "Persistence", "Privilege Escalation", "Initial Access"], "technique": ["Valid Accounts"]}, "firedtimes": 3454, "mail": false, "groups": ["syslog", "proftpd", "authentication_success"], "gdpr": ["IV_32.2"], "gpg13": ["7.1", "7.2"], "hipaa": ["164.312.b"], "nist_800_53": ["AC.7", "AU.14"], "pci_dss": ["10.2.5"], "tsc": ["CC6.8", "CC7.2", "CC7.3"]}, "agent": {"id": "1088", "name": "edr-wazuh-user, "ip": "123.123.123.123"}, "manager": {"name": "wazuh-manager"}, "id": "1721368365.207472917", "cluster": {"name": "wazuh", "node": "node1"}, "full_log": "Jul 19 14:52:45 ruws7-005 proftpd[15205]: 123.123.123.123(123.123.123.123[123.123.123.123]) - USER wazuh-user: Login successful.", "predecoder": {"program_name": "proftpd", "timestamp": "Jul 19 14:52:45", "hostname": "node-hostname"}, "decoder": {"parent": "proftpd", "name": "proftpd"}, "data": {"srcip": "111.111.111.111", "dstuser": "hostname-user"}, "location": "/var/log/secure", "hostname": "edr-node", "short_host": "edr-node", "scout_remote_ip": "111.111.111.111", "index_name": "is_wazuh_alert_log"}
{"timestamp": "2024-07-19T14:52:47.416+0900", "rule": {"level": 3, "description": "System Audit event.", "id": "516", "firedtimes": 1402, "mail": false, "groups": ["ossec", "rootcheck"], "gdpr": ["IV_30.1.g"]}, "agent": {"id": "3015", "name": "customer-domain.com", "ip": "123.123.123.123"}, "manager": {"name": "wazuh-manager"}, "id": "1721368367.207473981", "cluster": {"name": "wazuh", "node": "work-node"}, "full_log": "System Audit: SSH Hardening - 3: Root can log in. File: /etc/ssh/sshd_config. Reference: 3 .", "decoder": {"name": "rootcheck"}, "data": {"title": "SSH Hardening - 3: Root can log in.", "file": "/etc/ssh/sshd_config"}, "location": "rootcheck", "hostname": "customer-domain.com", "short_host": "customer-domain.com", "scout_remote_ip": "123.123.123.123", "index_name": "is_wazuh_alert_log"}

How do you ingest these logs?

Taking the sample log you provided as an example: with an AWS S3 feed, the payload should be parsed line by line. It's possible that your ingestion method does not split the payload into individual records at newline characters.

Could you please confirm this?
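To illustrate the splitting the reply describes, here is a minimal sketch (not SecOps parser code, just a standalone Python illustration) of treating a bundled payload as newline-delimited JSON, where each line is parsed as its own record. The `split_ndjson` function name and the shortened sample payload are hypothetical:

```python
import json

def split_ndjson(payload: str) -> list[dict]:
    """Split a bundled payload into individual JSON records, one per line."""
    records = []
    for line in payload.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines between records
        records.append(json.loads(line))
    return records

# Two Wazuh-style alerts bundled into one message (fields trimmed for brevity)
payload = (
    '{"timestamp": "2024-07-19T14:52:45.782+0900", "rule": {"id": "11205"}}\n'
    '{"timestamp": "2024-07-19T14:52:47.416+0900", "rule": {"id": "516"}}\n'
)

for record in split_ndjson(payload):
    print(record["rule"]["id"])
# → 11205
# → 516
```

If the forwarder performs this split before sending (or the feed is configured so that each newline delimits a record), the parser receives one JSON object at a time instead of only reading the first line of a bundle.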
