Question

logs ingestion via S3 bucket missing

  • November 21, 2025
  • 2 replies
  • 29 views

stephannyeb

A few months ago I set up log ingestion for Slack audit via an S3 bucket. It was working fine until a few days ago, when I went to search for logs and couldn't find the log type, the log source, or literally anything related to this feed.

I checked the S3 side to make sure everything is working as intended and didn't find any discrepancy there. I also checked the feed to see when it last received logs from the S3 bucket, and it is actively receiving them. So where are the logs going? I'm not seeing anything when I search, and I don't know how to troubleshoot this. Is there a last-chance source I can look in (Splunk has something like this), or a way to search for what the issue may be? I'm going to look in the Logs Explorer in the GCP console to see if I can find anything there. Please help? Suggestions and tips appreciated!
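For ruling out the S3 side programmatically rather than eyeballing the console, a small sketch like the following can list which objects were written recently. This is an illustration, not anything SecOps-specific: the bucket and prefix names are placeholders, and the S3 call assumes boto3 is installed with AWS credentials configured.

```python
# Sketch: confirm the bucket is still receiving fresh objects before
# suspecting the SecOps feed. Bucket/prefix below are placeholders.
from datetime import datetime, timedelta, timezone


def recent_keys(objects, max_age_hours=24):
    """Return keys of objects modified within the last max_age_hours.

    `objects` is a list of dicts shaped like boto3's list_objects_v2
    'Contents' entries: {'Key': str, 'LastModified': aware datetime}.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    return [o["Key"] for o in objects if o["LastModified"] >= cutoff]


def check_bucket(bucket, prefix=""):
    """Fetch the listing from S3 (requires boto3 + AWS credentials)."""
    import boto3  # deferred import so recent_keys() works without AWS

    s3 = boto3.client("s3")
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return recent_keys(resp.get("Contents", []))


# Usage (with credentials configured; names are hypothetical):
#   fresh = check_bucket("my-slack-audit-bucket", prefix="slack/")
#   print(f"{len(fresh)} objects written in the last 24h")
```

If the count is zero while the feed still reports a recent "last received" time, that mismatch itself narrows down where to look.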

2 replies

JeremyLand
Staff
  • December 1, 2025

You can use the 'Data Ingestion and Health' dashboard to investigate. Scroll down to the "Ingestion - Events By Log Type" widget and look for SLACK_AUDIT in that table, then check whether it is reporting the creation of "normalized events" or any parsing/validation errors.

Depending on what you see, this can indicate where the error is or where you should look next.

  • If nothing shows in that dashboard, it is an indicator that you do not have global scope access. You may need someone with that access to investigate the issue and ensure you have access to a data scope that contains these logs.
  • If other log types appear but SLACK_AUDIT does not, that indicates logs are not making it to the parser. Double-check that you configured the log type properly in the feed, but you may need to submit a support case with your feed ID to investigate.
  • If SLACK_AUDIT does appear and it:
    • Shows all normalized events - then your events are ingesting and parsing properly. Try broadening your UDM search to a longer time range, or just search for metadata.log_type = "SLACK_AUDIT".
    • Shows all parsing/validation errors - generally an indicator that your logs are malformed, which results in the parser erroring out or producing invalid output. Try to find the logs using a raw log search `raw = /.*/` and select just 'Slack Audit' from the log sources dropdown. You may need to expand your search time range for the logs to show up. From those results you should be able to see the logs as they were ingested into SecOps, and can then investigate the formatting issue either in your log source or by customizing the parser.
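For reference, the two searches described above, written out (widen the time range in the search UI if nothing comes back at first):

```
metadata.log_type = "SLACK_AUDIT"
```

for the UDM search, and

```
raw = /.*/
```

for the raw log search, with 'Slack Audit' selected in the log sources dropdown.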
       

JSpoorSonic
  • Bronze 3
  • December 11, 2025

Is your feed showing a "logs last received" timestamp?

 

I had something similar with both a SentinelOne API feed and an Azure AD Context feed (both API-based in this case).

I guess this is about when they introduced the feed / content packs for these two.

I had to rebuild the feeds.