Welcome to the Google Cloud Security Forums! Your ultimate security conversation spot. Collaborate with peers and experts to solve challenges, together.
SecOps starts here! Q&A, ask, share, connect.
Google Threat Intelligence chat space. Inquire, contribute, pass it on.
SCC questions? Join in, discuss, solve together.
Security Validation forum: Engage, be curious, stay up to date
reCAPTCHA troubleshooting. Find out the answers together.
Join the discussion. Post, diagnose, get the latest.
Hi Team, we need to retrieve the Gemini-generated report via the API for each case as it is generated. Can we access this outside the platform, or can we export the generated results via API? Either option would work for us.
There is a process in the GCP system that fails to accurately validate domain reputation and terminates accounts automatically based on a single false-positive report. Neither the Firebase team nor the GCP team takes any accountability for this. Please share any details or stories about how such cases were resolved.
Hello Team, I recently tried using Chronicle's `metrics.auth_attempts_success` function to analyze successful login activity by country for a specific user over the past 30 days. My goal was to dynamically filter the metric using the country from the incoming event, like this:

```
$ip_country = principal.ip_geo_artifact.location.country_or_region
$historical_threshold_country_success = max(metrics.auth_attempts_success(
    period: 1d,
    window: 30d,
    metric: event_count_sum,
    agg: sum,
    target.user.userid: $targetAccountId,
    principal.ip_geo_artifact.location.country_or_region: $ip_country))
```

Surprisingly, this returned 0 for all users, even though I could confirm there were successful logins from countries such as Germany and the UK. After some debugging, I discovered that hardcoding the country worked:

```
principal.ip_geo_artifact.location.country_or_region: "Germany"
```

So it seems the metric function is case-sensitive and expects exact string values for dimensions.
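The behavior described above (exact, case-sensitive matching of dimension values) can be illustrated outside Chronicle with a small sketch. This is not Chronicle code; the function names are purely illustrative, and the point is simply that `"germany"` never equals `"Germany"` under exact comparison, which is consistent with what the poster observed:

```python
# Illustration only: metric dimension filters appear to behave like exact,
# case-sensitive string equality, so "germany" never matches "Germany".
def dimension_match(filter_value: str, event_value: str) -> bool:
    """Exact match, as the metrics function appears to apply."""
    return filter_value == event_value

def dimension_match_normalized(filter_value: str, event_value: str) -> bool:
    """A defensive comparison that normalizes case and whitespace first."""
    return filter_value.strip().casefold() == event_value.strip().casefold()

print(dimension_match("germany", "Germany"))             # False
print(dimension_match_normalized(" germany ", "Germany"))  # True
```

If the incoming event's country value differs in case or whitespace from the stored dimension values, normalizing it in the rule (or confirming the exact stored form first) may be necessary.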
I am unable to loop the playbook. I have created a playbook for automating EML analysis; if the EML file is not uploaded, it should notify again. For notifying again, I have two options: looping as shown in the image, or rerunning the playbook if it fails (Configure action retries in playbooks | Google Security Operations | Google Cloud). Regarding the rerun (retries) feature, do I need to raise a technical ticket to enable it? Thanks in advance.
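The "rerun if it fails" idea above can also be approximated inside a custom action as a bounded retry loop. This is a generic sketch, not SOAR platform code — the function names are hypothetical:

```python
import time

def run_with_retries(action, max_retries=3, delay_seconds=0):
    """Run `action` up to max_retries + 1 times, returning its result
    or re-raising the last error once retries are exhausted."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return action()
        except Exception as exc:  # a real playbook action would catch specific errors
            last_error = exc
            if delay_seconds:
                time.sleep(delay_seconds)
    raise last_error

# Example: an action that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("EML file not yet uploaded")
    return "analyzed"

print(run_with_retries(flaky))  # analyzed
```

The platform-level retries feature is still preferable where available, since it keeps the retry policy visible in the playbook configuration rather than buried in action code.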
We are currently using a reCAPTCHA v3 Classic key for form submissions in our application. As per Google's recommendation, we are planning to migrate this key to the Google Cloud Platform (GCP). Before proceeding, we would like to clarify the following questions:

- Our current implementation references the script https://www.google.com/recaptcha/api.js. After migrating the Classic key to GCP, is it mandatory to update this reference to https://www.google.com/recaptcha/enterprise.js?
- If we continue using the existing api.js reference after the migration, will the integration remain functional?
- Will the existing site key value change after migration, or will the same key continue to be used?
- Will there be any change in the format or structure of the key itself (e.g., length, prefix) after the migration to GCP?
If you’re building out your OpenTelemetry telemetry pipeline for SecOps, I wrote a practical guide on how to avoid data loss using High Availability (HA) collectors, persistent queues, and proper batching. It includes: gateway architecture (with diagrams), Docker + Kubernetes examples, and Bindplane setup with both CLI and UI. Would love to know what patterns the community is using. Read the full guide here: https://bindplane.com/blog/how-to-build-resilient-telemetry-pipelines-with-the-opentelemetry-collector-high-availability-and-gateway-architecture
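For readers skimming before clicking through: the persistent-queue and batching ideas mentioned above boil down to a small amount of collector configuration. This is a minimal sketch, not taken from the linked guide — the gateway endpoint and storage directory are placeholder values, and the exact options should be checked against the OpenTelemetry Collector documentation for your version:

```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/storage   # placeholder path

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:
    send_batch_size: 1024
    timeout: 5s

exporters:
  otlp:
    endpoint: gateway-collector:4317      # placeholder gateway address
    sending_queue:
      enabled: true
      storage: file_storage               # persist the queue to disk across restarts
    retry_on_failure:
      enabled: true

service:
  extensions: [file_storage]
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```

With the queue backed by the `file_storage` extension, data buffered during a gateway outage survives a collector restart instead of being dropped.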
Hey all, I’ve noticed that the Microsoft Defender ATP SOAR integration has a “Create Isolate Machine Task” action and an accompanying unisolate machine task action. However, these find the host directly from the alert. I want isolate and unisolate actions that can take a hostname/host ID as an input so that they can be run ad hoc. Any ideas?
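One option is a custom action that takes the hostname/machine ID as a parameter and calls the Defender for Endpoint machine-actions API directly, rather than reading the host from the alert. A sketch of the request-building part is below; the endpoints follow the publicly documented Microsoft Defender for Endpoint API, but verify them (and the auth flow, which is omitted here) against current Microsoft docs before relying on this:

```python
# Sketch: build the lookup and isolation requests for an ad-hoc action
# that receives a hostname or machine ID as input.
BASE = "https://api.securitycenter.microsoft.com/api"

def build_machine_lookup(hostname: str) -> str:
    """OData-filtered lookup of a machine by its DNS name."""
    return f"{BASE}/machines?$filter=computerDnsName eq '{hostname}'"

def build_isolation_request(machine_id: str, isolate: bool, comment: str):
    """Return (url, payload) for an isolate or unisolate call."""
    action = "isolate" if isolate else "unisolate"
    payload = {"Comment": comment}
    if isolate:
        payload["IsolationType"] = "Full"  # or "Selective"
    return f"{BASE}/machines/{machine_id}/{action}", payload

url, body = build_isolation_request("abc123", True, "Ad-hoc isolation from SOAR")
print(url)  # https://api.securitycenter.microsoft.com/api/machines/abc123/isolate
```

Wrapping these two calls in a custom integration action with a `Hostname` input parameter gives you the ad-hoc behavior described, while keeping the stock alert-driven actions untouched.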
Hello. My company has recently adopted Google SecOps alongside our current ITSM tool, which we use for case management and handling. We would like SecOps to automatically create and update cases in our ITSM tool; however, I don’t see any automated functionality to run things at case level. Could some advice be provided on how to move forward, particularly with the following:

- Have cases created in the ITSM tool when an analyst changes the case state to incident.
- Have new alerts added to our ITSM tool when alerts are added to SecOps cases with incidents already raised.
- Have alerts removed from our ITSM tool when alerts are removed from SecOps cases with incidents already raised.

I already have actions and integrations for our ITSM tool that work for these purposes, but I now need a trigger to automate them, which I cannot find in SecOps.
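In the absence of a native case-level trigger, one common workaround is a scheduled job that polls cases and diffs each case's previous snapshot against its current one to decide which of the existing ITSM actions to run. This is a sketch of only the decision logic, not SecOps API code — the field names (`stage`, `alert_ids`) and action names are assumptions for illustration:

```python
# Sketch: given a case's previous and current snapshots, decide which
# of the three ITSM actions described above should fire.
def itsm_actions(prev: dict, curr: dict) -> list:
    actions = []
    # 1. Case newly moved to "Incident" -> create the ITSM ticket.
    if prev.get("stage") != "Incident" and curr.get("stage") == "Incident":
        actions.append("create_itsm_incident")
    # 2 & 3 only apply once the incident is already raised.
    if prev.get("stage") == "Incident" and curr.get("stage") == "Incident":
        prev_alerts = set(prev.get("alert_ids", []))
        curr_alerts = set(curr.get("alert_ids", []))
        for alert_id in sorted(curr_alerts - prev_alerts):
            actions.append(f"add_alert:{alert_id}")    # alert added to case
        for alert_id in sorted(prev_alerts - curr_alerts):
            actions.append(f"remove_alert:{alert_id}")  # alert removed from case
    return actions
```

A SOAR job running on a short schedule could fetch recently modified cases, apply this diff against state persisted from the previous run, and invoke your existing ITSM integration actions accordingly.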
Announcing the release of a simple SecOps API Wrapper SDK: https://pypi.org/project/secops/ — now using the SecOps API is as easy as:

```
pip install secops
```

```python
from secops import SecOpsClient

client = SecOpsClient()
chronicle = client.chronicle(
    customer_id="your-chronicle-instance-id",
    project_id="your-project-id",
    region="us"
)
```

Currently supported methods: UDM Search, Stats Search, CSV Export, Entity Summaries, Entity Summary from UDM Search, List IOC Matches in Time Range, Get Cases, Get Alerts. Please let us know your feedback, and which other use cases you'd like to see supported.
Hi all, I'm looking for some clarity around the use and interpretation of the `metadata.log_type` and `metadata.base_labels.log_types` fields in Google SecOps / Chronicle UDM, particularly in relation to the log ingestion method and parser behaviour.

The standard flow: when data (e.g., Windows Event Logs) is ingested via agents like BindPlane, Chronicle automatically detects the log source (e.g., WINEVTLOG) and uses the appropriate parser, in this case the Windows Event Parser. The parsed UDM ends up with fields such as:

```
"metadata": {
  "logType": "WINEVTLOG",
  ...
  "baseLabels": { "logTypes": ["WINEVTLOG"] }
}
```

This makes sense: the original raw event (XML) is parsed and normalized, and the parser used is reflected here.

My question: when I take a pre-parsed UDM log (in the same format as above) and upload it manually via the Events Import API, the fields instead show:

```
"metadata": {
  "logType": "UDM",
  ...
  "baseLabels": { "logTypes": ["UDM"] }
}
```

This behavior is expected, I suppose, since t…
Hi, I have written a custom action that gets attachments from the case wall and creates an HTML table, allowing the user to click a button to download the attachment. This had been working okay, but we have now been experiencing the following error:

```
File "/opt/siemplify/siemplify_server/bin/Scripting/PythonSDK/SiemplifyBase.py", line 170, in validate_siemplify_error
    raise Exception("{0}: {1}".format(e, response.content))
Exception: 500 Server Error: Internal Server Error for url: http://server:80/v1alpha/projects/project/locations/location/instances/instance/legacySdk:legacyAttachmentData?attachmentId=3157&format=snake: b'{"errorCode":2000,"errorMessage":"An error has occurred. Search for Log identifier c55c79701e3341e380d50c8167df02c9 in the Google Cloud Logs Explorer.","innerException":null,"innerExceptionType":null,"correlationId":"c55c79701e3341e380d50c8167df02c9"}'
```

This is happening when calling `siemplify.get_attachment(attachment_id)`, although not entirely consistently, but it s…
The reliability and accuracy of Web Risk API responses are degrading and falling out of sync with what Chrome reports when the same malicious URLs are browsed. Is there any plan or initiative to address these issues?
Hi all, I’m having issues ingesting FortiNDR logs into Google SecOps using the cfproduction Docker forwarder. Here are the details: Any thoughts on why this happens?
Hi everyone, I'm trying to send multiple alerts via a webhook to Chronicle SecOps and have them grouped under the same case. However, when I use my job (see code below), even though both alerts are sent together and share common fields, they always end up in separate cases.

```python
import sys
import json
import requests
from urllib3.util import parse_url
from SiemplifyJob import SiemplifyJob
from SiemplifyUtils import output_handler

# ====================
# Embedded Constants
# ====================
PROVIDER_NAME = "HTTP V2"
INTEGRATION_NAME = "HTTPV2"
API_REQUEST_METHODS_MAPPING = {
    "GET": "GET",
    "POST": "POST",
    "PUT": "PUT",
    "PATCH": "PATCH",
    "DELETE": "DELETE",
    "HEAD": "HEAD",
    "OPTIONS": "OPTIONS",
}
AUTH_METHOD = {
    "BASIC": "basic",
    "API_KEY": "api_key",
    "ACCESS_TOKEN": "access_token",
    "NO_AUTH": None,
}
DEFAULT_REQUEST_TIMEOUT = 120
ACCESS_TOKEN_PLACEHOLDER = "{{integration.token}}"

# ====================
# Internal Classes
# ====================
class HTTPV2DomainMismatchExc…
```
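One thing worth checking: in Siemplify/SecOps, alert-to-case grouping is typically driven by a shared grouping identifier on each alert (e.g. the connector-side `SourceGroupingIdentifier`), not by arbitrary common fields. The field name here is an assumption to verify against your webhook connector's documentation; this sketch only illustrates the idea that alerts group when that value matches exactly:

```python
# Hypothetical illustration: two alerts meant to land in one case carry
# the same grouping identifier, and the platform groups on that value.
def make_alert(name: str, grouping_id: str) -> dict:
    return {
        "name": name,
        "SourceGroupingIdentifier": grouping_id,  # assumed grouping field
    }

a1 = make_alert("Suspicious login", "incident-2024-001")
a2 = make_alert("Impossible travel", "incident-2024-001")

# Alerts group together only when the identifier matches exactly.
print(a1["SourceGroupingIdentifier"] == a2["SourceGroupingIdentifier"])  # True
```

If the webhook payloads you send don't populate such an identifier (or populate it with per-alert unique values such as timestamps), each alert will open its own case regardless of other shared fields.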