Hello Google SecOps Community,
I'm currently integrating AWS CloudWatch Logs → Amazon Kinesis Data Firehose → Google SecOps (Chronicle) using the HTTP endpoint / importPushLogs method, and I'm running into a persistent delivery failure.
Environment
- Google SecOps SIEM (Pre-GA CloudWatch Firehose ingestion)
- AWS Region: ap-southeast-3
- Source: CloudWatch Logs (subscription filter)
- Transport: Amazon Kinesis Data Firehose (HTTP endpoint)
- Destination: Google SecOps feed endpoint
Configuration (summary)
- Firehose destination: HTTP endpoint
- HTTP method: POST
- Content encoding: Disabled
- Buffering hints: 1 MiB / 60 seconds
- S3 backup: Disabled
- Endpoint URL (API key passed as query parameter): https://us-chronicle.googleapis.com/v1alpha/projects//locations/us/instances//feeds/%3AimportPushLogs?key=<API_KEY>
- Authentication:
  - Secret key header: configured exactly as provided by the SecOps feed
  - Secret key value: generated from the SecOps feed
- IAM role and CloudWatch subscription filter configured per documentation
What works
- Sending manual test data (same endpoint, same API key, same secret header) is successfully ingested into SecOps
- This strongly suggests:
  - The API key is valid
  - The secret key and header name are correct
  - The feed itself is active and reachable
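For comparison with the manual test: if I understand the AWS HTTP endpoint delivery spec correctly, Firehose does not POST raw log lines; it wraps records in a JSON envelope whose `data` fields are base64-encoded. A synthetic sketch of that envelope (field names per the AWS spec; all values here are made up):

```python
import base64
import json

# Synthetic illustration of the delivery request Firehose POSTs to an
# HTTP endpoint. Field names follow the AWS HTTP endpoint delivery
# request specification; the values are placeholders.
delivery_request = {
    "requestId": "example-request-id",
    "timestamp": 1700000000000,
    "records": [
        # Each record's data is base64; for CloudWatch Logs sources the
        # decoded bytes are gzip-compressed subscription JSON, not plain text.
        {"data": base64.b64encode(b"<gzipped CloudWatch payload>").decode()},
    ],
}
body = json.dumps(delivery_request)
```

So even with identical credentials, the body Firehose sends differs structurally from a hand-crafted test payload, which may be where the two paths diverge.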
The issue
When Firehose delivers real CloudWatch logs, every batch fails with:
HttpEndpointInvalidResponseException
Response is not recognized as valid JSON or has unexpected fields. Raw response received: 500 { "error": { "code": 500, "message": "Internal error encountered.", "status": "INTERNAL" } }
This happens consistently across multiple records and retries.
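For what it's worth, the AWS spec requires an HTTP endpoint to answer with a JSON body that echoes the delivery request's requestId, so a Google-style 500 error body fails Firehose's response validation regardless of its content, which is presumably why this surfaces as HttpEndpointInvalidResponseException. A minimal sketch of the contract (field names per the AWS spec; the validation helper is mine, not AWS code):

```python
import json
import time

def firehose_ack(request_id: str) -> str:
    """Response body Firehose expects from an HTTP endpoint on success:
    the delivery request's requestId echoed back, plus a timestamp (ms)."""
    return json.dumps({
        "requestId": request_id,
        "timestamp": int(time.time() * 1000),
    })

def looks_like_valid_ack(body: str, request_id: str) -> bool:
    """Rough check mirroring what Firehose validates (my helper)."""
    try:
        doc = json.loads(body)
    except json.JSONDecodeError:
        return False
    return doc.get("requestId") == request_id and "timestamp" in doc

# The Google error body from the failure above has neither field,
# so it fails this check:
google_500 = '{ "error": { "code": 500, "message": "Internal error encountered.", "status": "INTERNAL" } }'
```

In other words, the Firehose-side exception is just the messenger; the underlying question is why the Google endpoint returns 500 for these batches.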
Observations
- Firehose expects a valid HTTP response but receives a Google API 500 error
- The request payload is gzip-compressed CloudWatch log data (base64-encoded in the Firehose records)
- Because manual test ingestion works, the failure appears to be payload- or format-specific, not authentication-related
- Possibly related to:
  - The Firehose delivery request format vs Chronicle ingestion expectations
  - Pre-GA CloudWatch Firehose parser limitations
  - Batch structure or headers sent by Firehose
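To make the payload shape concrete: CloudWatch Logs delivers subscription data to Firehose as gzip-compressed JSON, and each Firehose record base64-encodes those bytes. A self-contained sketch of the decode path (the sample message is synthetic, but the messageType/logEvents structure is the documented CloudWatch subscription format):

```python
import base64
import gzip
import json

def decode_firehose_record(data_b64: str) -> dict:
    """base64 -> gunzip -> CloudWatch Logs subscription message."""
    return json.loads(gzip.decompress(base64.b64decode(data_b64)))

# Synthetic example of what CloudWatch puts into a Firehose record.
sample_message = {
    "messageType": "DATA_MESSAGE",
    "logGroup": "/aws/example",
    "logStream": "example-stream",
    "logEvents": [
        {"id": "1", "timestamp": 1700000000000, "message": "hello"},
    ],
}
encoded = base64.b64encode(
    gzip.compress(json.dumps(sample_message).encode())
).decode()

decoded = decode_firehose_record(encoded)
```

Note that CloudWatch also emits CONTROL_MESSAGE records (subscription health checks) in the same format; I don't know whether those could be what trips the parser.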
Questions
- Is there a known limitation or requirement for Firehose → SecOps payload formatting (headers, content type, record structure)?
- Does Chronicle expect newline-delimited JSON or a specific wrapper when receiving Firehose batches?
- Are there known 500 INTERNAL errors when Firehose sends certain CloudWatch log event types?
- Is there any required Firehose transformation (Lambda or record deaggregation) before sending to SecOps?
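Regarding the last question: if a transformation turns out to be required, I'd expect it to look roughly like the standard CloudWatch-to-Firehose processor pattern: gunzip each record, drop control messages, and re-emit the log events as newline-delimited JSON. A hedged sketch of what I have in mind (the NDJSON output format is my assumption, not a confirmed SecOps requirement):

```python
import base64
import gzip
import json

def handler(event, context):
    """Firehose data-transformation Lambda sketch: gunzip CloudWatch
    payloads and re-emit log events as newline-delimited JSON
    (output format assumed, not confirmed)."""
    output = []
    for record in event["records"]:
        message = json.loads(gzip.decompress(base64.b64decode(record["data"])))
        if message.get("messageType") != "DATA_MESSAGE":
            # CONTROL_MESSAGEs (subscription health checks) carry no log data.
            output.append({"recordId": record["recordId"], "result": "Dropped"})
            continue
        ndjson = "\n".join(json.dumps(e) for e in message["logEvents"]) + "\n"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",
            "data": base64.b64encode(ndjson.encode()).decode(),
        })
    return {"records": output}
```

If Chronicle's Pre-GA Firehose feed is supposed to handle the raw gzip payload itself, I'd rather not add this hop, so confirmation either way would help.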
Any guidance, confirmation of expected Firehose request format, or known issues would be greatly appreciated.