
Hello,

I am currently using Google SecOps for SOC operations and want to forward logs from GCP Linux and Windows VMs to a SecOps forwarder. I created a forwarder in SecOps to collect the Linux data and downloaded the .conf file (https://cloud.google.com/chronicle/docs/install/forwarder-configuration-manual). I was able to establish the connection and send logs from a GCP Linux VM to the SecOps forwarder. However, since the forwarder uses the .conf file for authentication, I am storing it on the VM's local storage, and that file contains the following information:

output:
  url: xxxxxxxxxxxxxxxxxxxxxx
  identity:
    identity:
    collector_id: xxxxxxxxxxxxxxxxxxxxxx
    customer_id: xxxxxxxxxxxxxxxxxxxxxx
    secret_key: |
      {
        "type": "service_account",
        "project_id": "xxxxxxxxxxxxxxxxxxxxxx",
        "private_key_id": "xxxxxxxxxxxxxxxxxxxxxx",
        "private_key": "xxxxxxxxxxxxxxxxxxxxxx",
        "client_email": "xxxxxxxxxxxxxxxxxxxxxx",
        "client_id": "xxxxxxxxxxxxxxxxxxxxxx",
        "auth_uri": "xxxxxxxxxxxxxxxxxxxxxx",
        "token_uri": "xxxxxxxxxxxxxxxxxxxxxx",
        "auth_provider_x509_cert_url": "xxxxxxxxxxxxxxxxxxxxxx",
        "client_x509_cert_url": "xxxxxxxxxxxxxxxxxxxxxx",
        "universe_domain": "xxxxxxxxxxxxxxxxxxxxxx"
      }

which contains sensitive information. I don't want to store this file on the VM. What can I do to avoid that?

The forwarder requires credentials to authenticate with the ingestion API, so that sensitive information has to be stored on the forwarder system.

For GCP Linux VMs there are two ways you can avoid storing those credentials on the local VM and still upload logs to SecOps.


1: Since the source system is hosted on GCP, you can configure direct ingestion via the SecOps export filter (https://cloud.google.com/chronicle/docs/ingestion/cloud/ingest-gcp-logs#supported-logs-for-export). This allows NIX_SYSTEM (log IDs "syslog", "authlog", and "securelog") to be ingested without any additional credentials stored anywhere, but it does not allow custom ingestion labels or all log types.
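As a hedged illustration (the log IDs come from the linked doc, everything else here is generic), the export filter in that approach is a standard Cloud Logging filter selecting the supported log IDs, along the lines of:

    log_id("syslog") OR log_id("authlog") OR log_id("securelog")

You would trim this down to just the logs you actually want exported.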


2: Configure the forwarder on a separate, dedicated system, and use syslog forwarding from your source VMs to send the logs to that dedicated forwarder. This does require the ingestion credentials to be stored on a VM, but that system should be a dedicated VM with access restricted to SecOps admins, which reduces the credential exposure. This method does require an additional forwarder VM to be deployed and maintained, but it protects the ingest credentials and gives full control over log types, namespaces, and ingestion labels.
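As a rough sketch of option 2 (hostnames, ports, and file names here are placeholders, not something from this thread): each source VM forwards its syslog to the dedicated forwarder VM, for example with rsyslog:

    # /etc/rsyslog.d/50-secops.conf on the source VM (hypothetical file name)
    # Forward all syslog over TCP to the dedicated forwarder VM
    *.* @@forwarder.internal.example:10514

and the forwarder configuration on the dedicated VM enables a syslog collector listening on that port, roughly:

    collectors:
      - syslog:
          common:
            enabled: true
            data_type: NIX_SYSTEM
            batch_n_seconds: 10
            batch_n_bytes: 1048576
          tcp_address: 0.0.0.0:10514
          connection_timeout_sec: 60

Check the forwarder configuration reference for the exact field names and the data_type values you need; this is only meant to show the shape of the setup.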


Hello @JeremyLand, thank you for your reply.

I can't go with the first option because I need to use forwarders only.

Is there any way we can use Secret Manager or KMS to store the .conf file data and use it?

Thanks in advance


From the forwarder's perspective, the _auth.conf file containing the secret needs to be accessible on the /opt/chronicle/external path inside the container. So as long as we can get the auth (and regular) conf files onto that path, it can work.
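One way to do that without persisting the secret to the VM's disk, sketched here under the assumption that the forwarder runs as the documented Docker container and that the VM's service account has roles/secretmanager.secretAccessor on the secret (the secret name, directory, and FORWARDER_NAME are placeholders, not from this thread):

    # Fetch the auth conf from Secret Manager into a tmpfs-backed directory at start time
    mkdir -p /run/chronicle-config
    gcloud secrets versions access latest \
      --secret=chronicle-forwarder-auth > /run/chronicle-config/FORWARDER_NAME_auth.conf
    # The non-secret conf file still needs to sit alongside it
    cp /path/to/FORWARDER_NAME.conf /run/chronicle-config/

    # Start the forwarder with that directory mounted at /opt/chronicle/external
    docker run \
      --detach \
      --name cfps \
      --restart=always \
      --log-opt max-size=100m \
      --log-opt max-file=10 \
      --net=host \
      -v /run/chronicle-config:/opt/chronicle/external \
      gcr.io/chronicle-container/cf_production_stable

Note the credentials are still readable on the VM while the forwarder runs (and the VM's service account can read the secret), so this mainly avoids keeping them permanently on disk rather than removing the exposure entirely.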


I don't believe Docker has native support for mounting GCP secrets into containers, but the GKE Secret Manager add-on does, and I have seen multiple orgs that run their forwarder on GKE clusters manage the auth secret this way. Compared to running the forwarder on a local Docker instance, this path adds a fair amount of complexity around managing the container images, networking, and deployments. It can work, but due to the complexity I wouldn't recommend it for a production forwarder deployment unless you (or your team) are already familiar with GKE.
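To make that concrete, here is a very rough sketch of the GKE route (all names are placeholders, and the exact fields should be checked against the GKE Secret Manager add-on documentation): a SecretProviderClass points at the secret holding the auth conf, and the forwarder pod mounts it through the CSI driver so the file appears under /opt/chronicle/external.

    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: forwarder-auth                  # hypothetical name
    spec:
      provider: gke
      parameters:
        secrets: |
          - resourceName: "projects/PROJECT_ID/secrets/chronicle-forwarder-auth/versions/latest"
            path: "FORWARDER_NAME_auth.conf"

    # In the forwarder Pod/Deployment spec (abridged):
    #   volumeMounts:
    #     - name: forwarder-config
    #       mountPath: /opt/chronicle/external
    #       readOnly: true
    #   volumes:
    #     - name: forwarder-config
    #       csi:
    #         driver: secrets-store-gke.csi.k8s.io
    #         readOnly: true
    #         volumeAttributes:
    #           secretProviderClass: forwarder-auth

The regular FORWARDER_NAME.conf can be delivered the same way (a second entry in the secrets list) or via another mount, and the pod's Kubernetes service account needs read access to the secret (Workload Identity plus roles/secretmanager.secretAccessor).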

