Hello, I'm trying to create a rule that detects alerts from a Check Point firewall but excludes alerts related to authorized pentest activities. For this, I created a data table named authorized_scanners with the following columns: activity_name, source_ip, destination_ip, start_date, end_date.

I want the rule to reference this table, but I'm running into the following error:

"validating intermediate representation: event variables are not all joined by equalities, the joined groups are: (authorized_scanners), (e)"

Could someone help me resolve this? Here is my rule:

rule checkpoint_fw_medium_alert {
  meta:
    ...
    priority = "Medium"
  events:
    $e.metadata.vendor_name = "CheckPoint" nocase
    $e.security_result.severity > "MEDIUM"
    $alert_name = $e.security_result.description
    $targeted_host = $e.target.ip
    $attacker_host = $e.principal.ip
    $date = timestamp.get_date($e.metadata.event_timestamp.seconds)
    //Exclude authorized activities and
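The error usually means the data table is referenced without at least one equality tying it to the event variable $e, so the validator sees two unjoined groups. A minimal sketch of one way around it, assuming the table is referenced with the %authorized_scanners prefix and that a column can be used like a reference list with the in operator (column names follow the table described above; treat the exact syntax as something to verify against the data table documentation):

  events:
    $e.metadata.vendor_name = "CheckPoint" nocase
    $e.security_result.severity > "MEDIUM"
    // Exclusion: drop events whose attacker IP appears in the data table.
    // Treating the column as a reference list with "in" keeps the table out
    // of the event-join graph, which avoids the "joined by equalities" error.
    not $e.principal.ip in %authorized_scanners.source_ip

  condition:
    $e

If you need to match several columns at once (for example both source_ip and destination_ip), the other option is to join the table to $e with at least one equality, e.g. $e.principal.ip = %authorized_scanners.source_ip; inequalities or function calls alone are not enough to form the join, which is exactly what the validator is complaining about.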
Hey Community! For those of you interested in learning about ingestion and parsing in SecOps, we have an upcoming webinar on September 9th at 7 AM PST! We'll run through live examples of everything below and hold a Q&A at the end of the session. Join us to uplevel your ingestion and parsing game or just to get a head start with some best practices.

- Best practices for collecting logs from diverse sources like security devices (firewalls, EDRs), cloud services (AWS, Google Cloud), and operating systems.
- Choosing the right transport method (Bindplane, SecOps Forwarder, Cribl) and why buffering is critical for reliable data delivery.
- Why sending logs in their original format (JSON, SYSLOG, CEF) maximizes out-of-the-box parsing success.
- Tips for building effective custom parsers, including using AI to generate Grok patterns when needed.
- A look ahead at upcoming AI-powered features for automatic parsing.

See you online soon!
Hi everyone! My name is Robert Parker and I'm a Technical Solutions Consultant with Google Cloud Customer Success. I work with our customers on deploying and integrating Google Cloud solutions, including Google Threat Intelligence. Within Google Threat Intelligence, we have a module known as Digital Threat Monitoring (DTM), which helps users protect their organization against activity on the darknet. One of its capabilities is Domain Protection Monitoring, which helps you identify when threat actors register look-alike, or similar, domains to your own organization's. This is a trick often used in phishing campaigns, where threat actors try to lure their victims into visiting a spoofed (fake) copy-cat version of the trusted organization's website. While DTM is great at identifying potential matches, or domains that do look similar to your own, it's not always immediately clear why it returned some of the results. I often see this when a customer asks me "Why did this domain
Hey folks, in version 64 of the Google Chronicle Response Integration, we updated the "Execute UDM Query" action to support Aggregated Queries and YL2 functions.

❗❗ Important Note: Aggregated Queries are only supported by the Chronicle API; the Backstory API doesn't support them. To switch the integration to the Chronicle API, you need to adjust the API Root in the integration configuration. Keep in mind that this will affect the whole integration, and some action outputs are slightly different when executed with the Chronicle API. ❗❗

Example 1. Basic Matching

principal.hostname = "siemplify"
match: target.file.vhash

UI JSON Result

{
  "events": [
    {
      "target.file.vhash": {
        "values": [
          { "stringVal": "d6e1387847bdaafd8a024f52a74ace7a" }
        ]
      }
    },
    {
      "target.file.vhash": {
        "values": [
          { "stringVal": "673961a71ba82e0556ef95cb2147e212" }
        ]
      }
    }
  ]
}

Widget

Example 2. Using Functions and Variables

principal.hostname = "siemplify"
match: target.file.vhash
outcome: $avg_seconds = avg(metadata.event_timestamp.seconds)

UI J
In reCAPTCHA I see two domains that I once used, but they are no longer valid or available and no longer use reCAPTCHA. How do I delete those entries from the reCAPTCHA portal?
We have a number of clients who currently access their reCAPTCHA keys via the console. We usually create the keys and then invite the client as an owner of the site. Other clients may even have multiple owners in their organisation. Presumably anyone with access to a reCAPTCHA key will be getting emails about keys for migration.

If no action is taken and the keys are migrated automatically, where will the GCP project be created and who will have access to the new project? If there is a single owner, how is that owner determined, given that reCAPTCHA's existing admin console doesn't seem to provide a way of seeing who originally created the key?
Logs in Cloud Logging can be ingested directly into Google SecOps using export filters. You set a sink filter (e.g., logName:"syslog" AND textPayload:("auth failed")) and SecOps receives only the logs matching it. Although export filters are straightforward and powerful, they have some limitations:

- Filters are static – you can only control inclusion/exclusion at the time of sink creation. You can't easily enrich, transform, or route logs differently once they leave Cloud Logging.
- Scaling is limited – a single sink can push to SecOps but lacks intermediate processing.

An alternative way to ingest logs from Cloud Logging is to use Pub/Sub. A Pub/Sub topic is a named resource in Google Cloud Pub/Sub that acts as a channel for sending messages from publishers to subscribers. Publishers send (or "publish") messages to a topic. Subscribers subscribe to that topic and receive those messages. The topic itself doesn't store the messages permanently; it just serves as the communication point
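To make the Pub/Sub route concrete, here is a minimal sketch of wiring Cloud Logging to a topic with gcloud, reusing the filter above; the project, topic, and sink names (my-project, secops-logs-topic, secops-export) are hypothetical, and the SecOps feed would then be pointed at a subscription on that topic:

# Create the topic that will carry the exported logs (names are placeholders).
gcloud pubsub topics create secops-logs-topic --project=my-project

# Create a log sink that publishes matching entries to the topic.
gcloud logging sinks create secops-export \
  pubsub.googleapis.com/projects/my-project/topics/secops-logs-topic \
  --log-filter='logName:"syslog" AND textPayload:("auth failed")' \
  --project=my-project

# The sink's writer identity (printed by the previous command) needs permission
# to publish to the topic; WRITER_IDENTITY below is a placeholder.
gcloud pubsub topics add-iam-policy-binding secops-logs-topic \
  --member='serviceAccount:WRITER_IDENTITY' \
  --role='roles/pubsub.publisher' \
  --project=my-project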
Hello, I need help accessing the value of a specific key within the results returned by a UDM query in a playbook. I am looking to use the value of the key "isManaged" within a condition. For example:

if isManaged == "True", take branch 1; else, take branch 2.

I am struggling to isolate the specific value from that key to use within a conditional action. Here is an example of the JSON returned from the UDM Query within the playbook:

{
  "events": [
    {
      "name": "[REDACTED_EVENT_NAME]",
      "udm": {
        "metadata": {
          "productLogId": "[REDACTED_PRODUCT_LOG_ID]",
          "eventTimestamp": "2025-08-21T14:43:04Z",
          "eventType": "USER_LOGIN",
          "vendorName": "Microsoft",
          "productName": "Azure AD",
          "ingestedTimestamp": "2025-08-21T14:49:01.321370Z",
          "id": "[REDACTED_METADATA_ID]",
          "enrichmentState": "ENRICHED",
          "logType": "AZURE_AD",
          "baseLabels": {
            "logTypes": ["AZURE_AD"],
            "allowScopedAccess": true
          },
          "enrichmentLabels": {
            "logTypes": ["AZURE_AD_CONTEXT", "CS_EDR"],
            "allowScopedAccess": true
          }
        },
        "additional":
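One way to see the extraction logic, sketched in Python as it might look inside a custom action or scripted step (the exact location of isManaged is an assumption, since the "additional" section of the JSON above is truncated; the function name and path are hypothetical and should be adjusted to where the key actually appears in your results):

import json

def get_is_managed(udm_query_result: str) -> bool:
    """Return True if the first event's isManaged value is "True".

    Assumes isManaged lives under events[0].udm.additional; adjust the
    path to match where the field really appears in your query results.
    """
    data = json.loads(udm_query_result)
    events = data.get("events", [])
    if not events:
        return False
    additional = events[0].get("udm", {}).get("additional", {})
    # The value often arrives as the string "True"/"False" rather than a
    # boolean, so normalize before comparing.
    return str(additional.get("isManaged", "False")).lower() == "true"

# Example: drive the branch decision from the extracted value.
sample = '{"events": [{"udm": {"additional": {"isManaged": "True"}}}]}'
branch = 1 if get_is_managed(sample) else 2
print(f"take branch {branch}")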
Hi guys, this morning I tried to open my Instagram app but there was a reCAPTCHA verification. Every time I try to complete it, it says the captcha solution is not correct. I tried different browsers like Google Chrome, Firefox, and Opera, and I also tried logging in from other devices (I can log in but can't complete the reCAPTCHA). I can't write to Instagram support either, because I can't even see my home page in Instagram to start a conversation. Please, someone help me.
I am looking at syncing our SOAR environment to Bitbucket. I have set up GitSync and configured it on the System Default Instance. When I run the test it works. I get the following error:

Reading configuration from Server [2025-08-27,22:18:30,000 ERROR] General error performing Job Push Job [2025-08-27,22:18:30,000 ERROR] 400 Client Error: Bad Request for url: http://server:80/v1alpha/projects/project/locations/location/instances/instance/legacySdk:legacyIntegrationConfiguration?identifier=GitSync&format=snake: b'{"errorCode":2000,"errorMessage":"Integration settings for [Identifier: GitSync] could not be found because the integration instance is missing.","innerException":null,"innerExceptionType":null,"correlationId":"2152256c2dce41c1ad3bb15d6c390181"}'

I have looked at a ticket with the same error. I have configured and tested multiple times, all successful, but it fails when I am using the IDE or trying to create a Job.
Hi all, I'm currently working on a custom Chronicle parser using Logstash to handle logs in CEF format. I have already built and tested a parser for CEF logs that include key-value pairs; these work correctly and generate UDM events as expected. For the non-KV logs, however, no UDM events or entities are generated in Chronicle.

Format 1, which is parsing:
<priority>CEF:0|vendor|product|version|signature|name|severity|key1=value1|key2=value2|…

Format 2, which is not parsing and not generating a UDM event:
<priority>CEF:0|vendor|product|user|user_full_name|category|action|description|status

For format 2 I am getting: "No UDM events or entities were generated for the current parser configuration. If this is not intended, rectify the code snippet/UDM mappings and then click preview."

filter {
  grok {
    match => {
      "message" => [ "CEF:(?P<header_version>[^|]+)\\|(?P<device_vendor>[^\\|]+)\\|(?P<device_product>[^\
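For what it's worth, here is a minimal sketch of how format 2 might be tokenized and mapped, assuming the Chronicle parser's Logstash-like syntax (grok with named captures, mutate/replace into UDM fields, then a merge into @output). The capture names follow the format 2 layout above, while the chosen UDM targets and the GENERIC_EVENT type are assumptions to adjust for the real log semantics:

filter {
  # One named capture per pipe-delimited field in format 2; on_error records a
  # failure variable instead of aborting the parser when the pattern does not match.
  grok {
    match => {
      "message" => "CEF:(?P<header_version>[^|]+)\\|(?P<device_vendor>[^|]+)\\|(?P<device_product>[^|]+)\\|(?P<user>[^|]+)\\|(?P<user_full_name>[^|]+)\\|(?P<category>[^|]+)\\|(?P<action>[^|]+)\\|(?P<description>[^|]+)\\|(?P<status>.*)"
    }
    on_error => "grok_failed"
  }

  # Map the captured tokens onto UDM fields; pick targets that fit your data.
  mutate {
    replace => {
      "event1.idm.read_only_udm.metadata.event_type" => "GENERIC_EVENT"
      "event1.idm.read_only_udm.metadata.vendor_name" => "%{device_vendor}"
      "event1.idm.read_only_udm.metadata.product_name" => "%{device_product}"
      "event1.idm.read_only_udm.metadata.description" => "%{description}"
      "event1.idm.read_only_udm.principal.user.userid" => "%{user}"
    }
  }

  # Without this merge, nothing reaches @output and Chronicle reports
  # "No UDM events or entities were generated".
  mutate {
    merge => {
      "@output" => "event1"
    }
  }
}

The likely reason format 2 yields nothing is that the key=value extraction that works for format 1 finds no pairs in the pipe-only layout, so no UDM fields are ever populated and no event is merged into @output; a dedicated grok for the pipe-delimited fields plus explicit mappings is one way around that.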
Forcepoint Web Proxy logs go into an S3 bucket (format: export_timestamp.csv.gz), from where Google Chronicle pulls them in; within Chronicle > Settings > Feeds we have given the path to the S3 bucket. I am able to see the raw logs within the SIEM, but they aren't getting parsed. I click on the raw log > Manage Parser > Create New Custom Parser > Start with Existing Prebuilt Parser > and use the Forcepoint Web Proxy parser.

Error: generic::unknown: invalid event 0: LOG_PARSING_GENERATED_INVALID_EVENT: "generic::invalid_argument: *events_go_proto.Event_Webproxy: invalid target device: device is empty"

The raw log doesn't have quotes. When I directly give a single-row input after manually downloading the S3 log file, which contains double quotes, the issue gets fixed. When I view the raw log as CSV in the parser I get additional columns; the reason is that one user can be part of multiple groups. This is the main reason for the error! The column count should remain the same. Example: Category:
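As an illustration of the column-count issue with made-up values (not the real Forcepoint export schema):

# Quoted export: the multi-group field stays one column, so the row has 4 columns.
"2025-08-21T14:43:04Z","jdoe","GroupA, GroupB","blocked"

# Unquoted export: the same row splits on the comma inside the group list (5 columns),
# so every later field shifts position and a required field such as the target device
# can end up empty, matching the "device is empty" error above.
2025-08-21T14:43:04Z,jdoe,GroupA, GroupB,blocked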
I have a multi-choice question in a playbook that I need to change to a condition. Is there a way for me to replace it without having to delete the existing flow and retain only a single branch?
Hi everyone, I have a data table with two columns: destination_ip and location. The destination_ip column contains values in CIDR format, and the location column contains strings (e.g., country names). I want to run a search/query that filters results to a specific country (e.g., Italy) directly from the query. What's the best way to fetch the string value (location) from the data table based on the IP, so I can filter traffic for a country? Thanks in advance for your help!
Previously, we experienced an issue where cases were not created due to a bug in Google SecOps. This was resolved by updating the connector to the latest version. We are currently considering preventative measures and solutions for this issue. For example, is it technically possible to use a playbook or something similar to raise an alert when cases stop being created? If that is not possible, we would appreciate any ideas for keeping up with the latest update announcements as a preventative measure, or for quickly noticing bugs like this one.
I see this was already asked here, but the "solution" is strange and defeats the whole purpose of having actual tables with columns. If I need to concat all my columns into a SINGLE column for comparison, aka a single dimension, aka exactly how it worked with reference lists, then this is useless! The exact same behaviour was possible with reference lists: you could have a line like `john@example.com_1.1.1.1` and do a concat on the UDM fields in the rule to compare them against the value from the reference list.

My use case: I have a data table `common_behaviour` that is populated according to a business logic. That table contains the following columns:

user: STRING
route: STRING
method: STRING
isp_name: REGEX
isp_country: REGEX
user_agent: REGEX

Note that some columns are REGEX, not STRING. I need them to be REGEX to handle high-volume repeated requests and to also handle some internal business logic in the rule that updates that data table. For example: if `isp_name` is "My Spe