Question

AWS CloudWatch → Kinesis Data Firehose → Google SecOps returns HTTP 500 (Firehose: HttpEndpointInvalidResponseException)

  • February 10, 2026

Bobby Pratama HCFI

Hello Google SecOps Community,

I’m currently integrating AWS CloudWatch Logs → Amazon Kinesis Data Firehose → Google SecOps (Chronicle) using the HTTP endpoint / importPushLogs method, and I’m running into a persistent delivery failure.

Environment

  • Google SecOps SIEM (Pre-GA CloudWatch Firehose ingestion)

  • AWS Region: ap-southeast-3

  • Source: CloudWatch Logs (subscription filter)

  • Transport: Amazon Kinesis Data Firehose (HTTP endpoint)

  • Destination: Google SecOps feed endpoint

Configuration (summary)
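
The delivery stream uses an HTTP endpoint destination pointed at the SecOps feed. A minimal boto3 sketch of that shape is below; every name, ARN, URL, and key is a placeholder rather than my actual configuration:

  # Rough shape of the Firehose HTTP endpoint destination, expressed via boto3.
  # All values are placeholders, not my real configuration.
  import boto3

  firehose = boto3.client("firehose", region_name="ap-southeast-3")

  firehose.create_delivery_stream(
      DeliveryStreamName="cw-logs-to-secops",        # placeholder name
      DeliveryStreamType="DirectPut",                # CloudWatch subscription filter writes directly
      HttpEndpointDestinationConfiguration={
          "EndpointConfiguration": {
              "Url": "https://<SECOPS_FEED_ENDPOINT_URL>",   # endpoint URL from the feed details
              "Name": "Google SecOps feed",
              "AccessKey": "<FEED_SECRET>",                  # secret generated by the feed
          },
          "RequestConfiguration": {
              "ContentEncoding": "GZIP",                     # or NONE, depending on what the feed expects
              "CommonAttributes": [],                        # optional custom attributes/headers
          },
          "BufferingHints": {"SizeInMBs": 1, "IntervalInSeconds": 60},
          "RetryOptions": {"DurationInSeconds": 300},
          "S3BackupMode": "FailedDataOnly",
          "S3Configuration": {                               # backup bucket for failed deliveries
              "RoleARN": "arn:aws:iam::<ACCOUNT_ID>:role/<FIREHOSE_ROLE>",
              "BucketARN": "arn:aws:s3:::<BACKUP_BUCKET>",
          },
          "RoleARN": "arn:aws:iam::<ACCOUNT_ID>:role/<FIREHOSE_ROLE>",
      },
  )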

What works

  • Manual test data sent to the same endpoint with the same API key and secret header is ingested into SecOps successfully (a sketch of the kind of request I mean is after this list)

  • This strongly suggests:

    • API key is valid

    • Secret key + header name are correct

    • Feed itself is active and reachable
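
By "manual test data" I mean a hand-built, Firehose-style POST roughly along these lines. The endpoint URL, secret header name, and keys are placeholders from the feed details, and the body follows the Firehose HTTP endpoint delivery request shape (requestId / timestamp / records with base64 data):

  # Hand-rolled Firehose-style test request. Endpoint URL, header name, and
  # keys are placeholders; this only shows the shape of the test, not my exact one.
  import base64
  import json
  import time
  import uuid

  import requests

  ENDPOINT = "https://<SECOPS_FEED_ENDPOINT_URL>?key=<API_KEY>"
  SECRET_HEADER = "X-Amz-Firehose-Access-Key"   # assumption: same header Firehose uses for its access key
  SECRET_VALUE = "<FEED_SECRET>"

  # One log line, base64-encoded the way Firehose wraps record data.
  record_data = base64.b64encode(b'{"message": "manual test event"}\n').decode()

  body = {
      "requestId": str(uuid.uuid4()),
      "timestamp": int(time.time() * 1000),
      "records": [{"data": record_data}],
  }

  resp = requests.post(
      ENDPOINT,
      headers={SECRET_HEADER: SECRET_VALUE, "Content-Type": "application/json"},
      data=json.dumps(body),
  )
  print(resp.status_code, resp.text)   # tests like this come back 200 and are ingested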

The issue

When Firehose delivers real CloudWatch logs, every batch fails with:

  HttpEndpointInvalidResponseException
  Response is not recognized as valid JSON or has unexpected fields. Raw response received: 500 { "error": { "code": 500, "message": "Internal error encountered.", "status": "INTERNAL" } }

This happens consistently across multiple records and retries.

Observations

  • Firehose expects a valid HTTP response but receives a Google API 500 error

  • The request payload is gzip-decoded CloudWatch log data (see the decoding sketch after this list)

  • Because manual test ingestion works, the failure seems payload-specific or format-specific, not authentication-related

  • Possibly related to:

    • Expected Firehose request format vs Chronicle ingestion expectations

    • Pre-GA CloudWatch Firehose parser limitations

    • Batch structure or headers sent by Firehose
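
To make the format question concrete: each record in a Firehose delivery wraps a gzip-compressed CloudWatch Logs subscription message, so the receiving side has to unwrap it roughly as below. This is only my reading of the AWS-side contract (base64 inside the Firehose body, gzip-compressed subscription JSON inside that), not anything confirmed about the SecOps parser:

  # Decode one Firehose record the way the receiving endpoint would have to:
  # base64 -> gzip -> CloudWatch Logs subscription JSON with "logEvents".
  import base64
  import gzip
  import json

  def decode_firehose_record(record_data_b64: str) -> dict:
      """Unwrap a single record["data"] value from a Firehose HTTP delivery."""
      raw = base64.b64decode(record_data_b64)
      payload = gzip.decompress(raw)        # CloudWatch subscription data is gzip-compressed
      return json.loads(payload)

  # A decoded message looks like this (standard subscription-filter structure):
  # {
  #   "messageType": "DATA_MESSAGE",
  #   "owner": "<account-id>",
  #   "logGroup": "...",
  #   "logStream": "...",
  #   "subscriptionFilters": ["..."],
  #   "logEvents": [{"id": "...", "timestamp": 1730000000000, "message": "..."}]
  # }
  #
  # Firehose, in turn, only accepts a small JSON reply echoing requestId and
  # timestamp; the 500 body above is what it rejects as an invalid response.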

Questions

  • Is there a known limitation or requirement for Firehose → SecOps payload formatting (headers, content-type, record structure)?

  • Does Chronicle expect newline-delimited JSON or a specific wrapper when receiving Firehose batches?

  • Are there known 500 INTERNAL errors when Firehose sends certain CloudWatch log event types?

  • Is there any required Firehose transformation (Lambda or record deaggregation) before sending to SecOps? (A sketch of the kind of transformation I have in mind is at the end of this post.)

Any guidance, confirmation of the expected Firehose request format, or known issues would be greatly appreciated.
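
For the transformation question above: if a Lambda does turn out to be required, this is the kind of processor I would try. It is purely a sketch, assuming (unconfirmed) that the feed wants newline-delimited raw log messages rather than the wrapped subscription envelope:

  # Sketch of a Firehose data-transformation Lambda: unwrap the CloudWatch Logs
  # subscription envelope and re-emit log messages as newline-delimited text.
  # Only a hypothesis about what the feed expects; not a confirmed requirement.
  import base64
  import gzip
  import json

  def lambda_handler(event, context):
      output = []
      for record in event["records"]:
          payload = json.loads(gzip.decompress(base64.b64decode(record["data"])))

          if payload.get("messageType") != "DATA_MESSAGE":
              # CONTROL_MESSAGE records carry no log events; drop them.
              output.append({"recordId": record["recordId"], "result": "Dropped"})
              continue

          # One line per CloudWatch log event.
          lines = "".join(e["message"].rstrip("\n") + "\n" for e in payload["logEvents"])
          output.append({
              "recordId": record["recordId"],
              "result": "Ok",
              "data": base64.b64encode(lines.encode()).decode(),
          })

      return {"records": output}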