
I’m aware there is a new feature, SQS V2, to ingest logs into Google SecOps using federated identity, as stated in this update.

https://cloud.google.com/chronicle/docs/release-notes#May_26_2025

 

At the moment I’m using SQS with static keys, and I’m interested in moving over to federated identity. However, I can’t figure out how STS (Storage Transfer Service) plays a role here; the current documentation assumes STS is already present. Do I need to set up STS before I can migrate? If so, how do I set it up to ingest into my current Google SecOps instance? I’ve opened a support ticket and looked for docs to help me get this going, but support seems as lost as I am, and I can’t find clear documentation on this.

Has anyone used STS + SQS V2 yet, or is anyone else looking into this?

 

Thanks in advance for any pointers in the right direction.


Are you using the documentation provided here? Feed Management API - AMAZON_SQS_V2

 


Thanks for the reply, much appreciated.

Yeah, I’ve seen this doc. It seems to assume there is already an STS setup running.

Quoting here:

Enable access to your Amazon S3 storage

This feed source uses the Storage Transfer Service (STS) to transfer data from Amazon S3 to Google SecOps. Before using this feed source, you may need to add the IP ranges used by STS workers to your list of allowed IPs

 

We don’t seem to have STS running, AFAIK, in our current Google SecOps setup, so I have no idea where to get these IPs in the first place.

 

Thanks,


Maybe start here (more on the GCP side of things): https://cloud.google.com/storage-transfer/docs/overview


Thanks Kent, I’m looking for something more specific on the STS and Google SecOps integration. I’ve seen this doc, but it’s rather general. Setting up STS and a transfer between repos seems straightforward, yet there is no doc on how to integrate with a current Google SecOps instance that already has many log sources coming in. All I want to do is start using SQS V2 and migrate some of my current feeds that use SQS V1.

 

Thanks anyway for looking into this.



@ericv-ava Hi Eric! Which actions did you set up in your IAM role policy? And what does the trust relationship look like?


Hi @karollynecosta, thanks for looking into this.

 

For the role, I gave it the following two permissions, and the trust relationship as shown below. Important to note that for my Subject ID I followed this doc:

https://cloud.google.com/storage-transfer/docs/source-amazon-s3#federated_identity

 

But this is where the questions arise: in my Google SecOps project (where the Google SecOps infra resides, plus storage and some compute) there is no STS service by default, nor do I think we have one set up. Quoting from that doc:

1. In the panel, under Request parameters, enter your project ID. The project you specify here must be the project you're using to manage Storage Transfer Service.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "accounts.google.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "accounts.google.com:sub": "10586XXXXX....XXX"
        }
      }
    }
  ]
}
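
(Side note, in case it helps anyone scripting this instead of clicking through the AWS console: a minimal boto3 sketch of creating the role with that trust policy. The role name and the Subject ID value are placeholders, not the actual values from my setup.)

import json
import boto3

# Trust policy allowing the Google federated identity (the Subject ID from the feed setup) to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Federated": "accounts.google.com"},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "accounts.google.com:sub": "SUBJECT_ID"  # placeholder, copy from the feed setup page
                }
            },
        }
    ],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="secops-sqs-v2-feed-role",  # placeholder role name
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)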

 


Hi,

 

To start, you don’t have to configure the STS service in a GCP project; this is all handled by our Feed Management system in the background. To get the Subject ID, start creating an Amazon SQS V2 feed, complete the Feed Name and Log Type, then on the next page choose AWS IAM Role for Identity Federation and copy the Subject ID to be used in your trust relationship policy.

After creating the IAM role and giving it the correct S3 and SQS permissions, there is one more step: add an SQS queue policy like the following:

 

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AWS_ACCOUNT_ID:role/ROLE_NAME"
      },
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME"
    }
  ]
}
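
If you prefer to apply that queue policy programmatically rather than pasting it into the SQS console, here is a minimal boto3 sketch (the queue URL, account IDs, and role name are placeholders):

import json
import boto3

# Queue policy allowing the federated IAM role to receive and delete messages.
queue_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::AWS_ACCOUNT_ID:role/ROLE_NAME"},
            "Action": ["sqs:DeleteMessage", "sqs:ReceiveMessage"],
            "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME",
        }
    ],
}

sqs = boto3.client("sqs")
sqs.set_queue_attributes(
    QueueUrl="https://sqs.REGION.amazonaws.com/ACCOUNT_ID/QUEUE_NAME",  # placeholder queue URL
    Attributes={"Policy": json.dumps(queue_policy)},
)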

 

 


Also, for S3 and SQS you can create a stricter policy such as the one below. Be aware that AmazonSQSReadOnlyAccess is not enough, as you need the sqs:DeleteMessage permission to delete the message from the queue after processing.

 

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::AWS_BUCKET_NAME/*",
        "arn:aws:s3:::AWS_BUCKET_NAME"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:DeleteMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME"
    }
  ]
}
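
And, if it helps, a minimal boto3 sketch of attaching that policy to the role as an inline policy (role and policy names are placeholders):

import json
import boto3

# Inline policy granting the role read access to the S3 bucket and receive/delete on the SQS queue.
feed_access_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::AWS_BUCKET_NAME/*",
                "arn:aws:s3:::AWS_BUCKET_NAME",
            ],
        },
        {
            "Effect": "Allow",
            "Action": ["sqs:DeleteMessage", "sqs:ReceiveMessage"],
            "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME",
        },
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="secops-sqs-v2-feed-role",  # placeholder role name
    PolicyName="secops-s3-sqs-access",   # placeholder policy name
    PolicyDocument=json.dumps(feed_access_policy),
)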

 


Thanks, yes, @sgulbetekin, that worked. I had the wrong Subject ID. I had followed the official documentation that pointed to getting the Subject ID from my project, hence the confusion.

https://cloud.google.com/storage-transfer/docs/source-amazon-s3#federated_identity

Also, interestingly enough, when setting up my test SQS V2 feed for VPC Flow Logs, I don’t get the Subject ID, unlike with any other log type.

 

In any case, yes, it has been resolved, and I did notice the Subject ID shown in most cases when setting this up. It would have been nice to have a more concise document, though; it seems there isn’t one yet.



Thanks for the feedback, I have raised an internal documentation ticket to make it clearer.

 

For SQS V2 feeds, I just chose AWS VPC Flow Logs and the Subject ID was present as well; if that’s not the case for you, please raise a support ticket.