Are you using the documentation provided here? Feed Management API - AMAZON_SQS_V2
Thanks for the reply, much appreciated.
Yeah. I’ve seen this doc. Seems it assumes there is an STS running,
quoting here:
Enable access to your Amazon S3 storage
This feed source uses the Storage Transfer Service (STS) to transfer data from Amazon S3 to Google SecOps. Before using this feed source, you may need to add the IP ranges used by STS workers to your list of allowed IPs
We don’t seem to have STS running, afaik, in our current Google SecOps setup, so no idea where I can get these IPs in the first place.
Thanks,
Thanks Kent, I’m looking for a more specific how-to on STS and Google SecOps integration. I’ve seen this doc, but it’s rather general. Setting up STS and transferring between repos seems straightforward, yet there is no doc on how to integrate with an existing Google SecOps instance that already has many log sources coming in. All I want to do is start using SQS V2 and migrate some of my current feeds that use SQS V1.
Thanks anyways for looking into this.
@ericv-ava Hi Eric! Which actions did you set up in your IAM Role Policy? And the trust relationship too.
Hi @karollynecosta thanks for looking into this,
For the role, I gave it the following two,
and the trust relationship as follows. Important to note that for my subject ID I followed this doc:
https://cloud.google.com/storage-transfer/docs/source-amazon-s3#federated_identity
But this is where the questions arise: to start with, in my Google SecOps project (where the Google SecOps infra resides: storage and some compute) there is no STS service by default. Nor do I think we have one.
- In the panel, under Request parameters, enter your project ID. The project you specify here must be the project you're using to manage Storage Transfer Service.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "accounts.google.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "accounts.google.com:sub": "10586XXXXX....XXX"
                }
            }
        }
    ]
}
Hi,
To start, you don’t have to configure the STS service in a GCP project; this is all handled by our Feed Management system in the background. To get the Subject ID, start creating an Amazon SQS V2 feed, complete the Feed Name and Log Type, then on the next page choose AWS IAM Role for Identity Federation and copy the Subject ID to be used in your trust relationship policy.
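For reference, here is a minimal sketch of the trust relationship policy with that Subject ID in place. It is the same structure you posted above; SUBJECT_ID is just a placeholder for the value copied from the feed setup page:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "accounts.google.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "accounts.google.com:sub": "SUBJECT_ID"
                }
            }
        }
    ]
}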
After creating the IAM role and granting the correct S3 and SQS permissions, there is one more step: add an SQS queue policy like the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::AWS_ACCOUNT_ID:role/ROLE_NAME"
            },
            "Action": [
                "sqs:DeleteMessage",
                "sqs:ReceiveMessage"
            ],
            "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME"
        }
    ]
}
Also, for S3 and SQS you can create a stricter policy such as the one below. Be aware that AmazonSQSReadOnlyAccess is not enough, as you need the sqs:DeleteMessage permission to delete messages from the queue after processing.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::AWS_BUCKET_NAME/*",
                "arn:aws:s3:::AWS_BUCKET_NAME"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "sqs:DeleteMessage",
                "sqs:ReceiveMessage"
            ],
            "Resource": "arn:aws:sqs:REGION:ACCOUNT_ID:QUEUE_NAME"
        }
    ]
}
Thanks, yes, @sgulbetekin, that worked. I had the wrong Subject ID. I followed the official documentation that explains how to get the Subject ID from my project, hence the confusion.
https://cloud.google.com/storage-transfer/docs/source-amazon-s3#federated_identity
Also, interestingly enough, when setting up my test SQS V2 feed for VPC Flow Logs, I don’t get the Subject ID, unlike with any other log type.
In any case, yes, it has been resolved, and I did notice the Subject ID is given in most cases when setting this up. It would have been nice to have a more concise document, nevertheless. Seems there isn’t one yet.
Thanks for the feedback, I have raised an internal documentation ticket to make it more clear.
For SQS V2 feeds, I just chose AWS VPC Flow Logs and the Subject ID was present as well. If that’s not the case, please raise a support ticket.