Question

Extracting Curated Detections via SDK: Is the API limited to returning only rules that have triggered?

  • March 9, 2026
  • 1 reply
  • 1 view

joaocarvalho

Hi everyone,

I work for an MSSP, and we are building a workflow to extract the inventory of Google SecOps curated rules. Our goal is to map our clients' detection coverage (for example, extracting all curated detections that monitor their AWS CloudTrail logs) and show them their active MITRE ATT&CK coverage, regardless of whether a rule has fired an alert yet.

I've been using the Python secops-wrapper SDK, basing my code on this section of the documentation.

Initially, I called the chronicle.list_curated_rules(as_list=True) method expecting it to return the full catalog of 1,500+ curated detections. Instead, it returned only a small, tenant-specific subset. Here is my troubleshooting journey:

  1. The Pagination Theory: On my first tenant, the script extracted exactly 161 rules and stopped. I initially thought I was misusing the pagination or hitting a rate limit. However, when I ran the exact same script on a different tenant, it returned a completely different number of rules. This proved it wasn't a hard limit or a pagination bug.

  2. The "Enabled" Status Theory: My next thought was that the API only returns rules that are currently enabled. To test this, I went into a tenant and manually set status = enabled across all Rule Sets. I ran the script again, but the total number of extracted curated detections did not increase.

  3. My Final Theory: Given the above, my current hypothesis is that the SecOps API / SDK only materializes and returns curated detections if they have historically triggered at least one alert within that specific tenant.
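To make this comparison between tenants concrete, a quick way to see *which* rule sets actually come back (rather than comparing totals alone) is to bucket the returned rules by their `curatedRuleSet` resource name. A minimal sketch, assuming the same field names my export script below relies on:

```python
from collections import Counter

def rules_per_set(curated_rules):
    """Count how many returned rules belong to each curated rule set.

    Expects the list of dicts returned by
    chronicle.list_curated_rules(as_list=True), where each rule carries a
    'curatedRuleSet' resource name ending in the rule-set ID.
    """
    counts = Counter()
    for rule in curated_rules:
        rule_set = rule.get("curatedRuleSet", "")
        set_id = rule_set.split("/")[-1] if rule_set else "unknown"
        counts[set_id] += 1
    return counts

# Stand-in data to illustrate the shape of the output:
sample = [
    {"curatedRuleSet": "projects/p/locations/us/curatedRuleSets/aws-set"},
    {"curatedRuleSet": "projects/p/locations/us/curatedRuleSets/aws-set"},
    {"curatedRuleSet": "projects/p/locations/us/curatedRuleSets/azure-set"},
]
print(rules_per_set(sample))  # Counter({'aws-set': 2, 'azure-set': 1})
```

Running this on two tenants and diffing the keys would show whether entire rule sets are missing or whether each set is just partially populated.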

My questions for the community:

  • Is my theory correct? Does the API only return curated rules that have already triggered?

  • For an MSSP use case, how can I extract all curated detections belonging to one or more Rule Sets (e.g., all AWS rules), independent of whether they have triggered an alert or not?

  • Am I doing something wrong in my approach with the SDK?

  • Can someone clarify how the underlying architecture works between the SDK method and the Unified Rules API?

I have attached my sanitized Python script below for reference. Any insights or workarounds to get the full catalog visibility would be greatly appreciated!

from secops import SecOpsClient
import json
import csv
from pathlib import Path

# --- CONFIGURATION ---
# Replace with your actual credentials path and export directory
AUTH_FILE = Path("./auth.json")
EXPORT_FILE = Path("./curated_rules_export.csv")

# Replace with your SecOps tenant details
CUSTOMER_ID = "YOUR_CUSTOMER_ID"
PROJECT_ID = "YOUR_PROJECT_ID"
REGION = "us"

def main():
    if not AUTH_FILE.exists():
        print(f"Error: Credentials file not found at {AUTH_FILE}")
        return

    try:
        service_account_info = json.loads(AUTH_FILE.read_text(encoding='utf-8'))

        print("Connecting to Google SecOps...")
        client = SecOpsClient(service_account_info=service_account_info)
        chronicle = client.chronicle(
            customer_id=CUSTOMER_ID,
            project_id=PROJECT_ID,
            region=REGION
        )

        print("\nFetching all Curated Rules using as_list=True...")

        # ISSUE: This method seems to return only a fraction of the expected 1500+ curated rules.
        # It typically stops at around ~160 rules (mostly AWS/Azure).
        curated_rules = chronicle.list_curated_rules(as_list=True)

        if not curated_rules:
            print("No Curated Rules found.")
            return

        total_rules = len(curated_rules)
        print(f"Download complete. Found {total_rules} curated rules.\n")

        spreadsheet_data = []

        for rule in curated_rules:
            rule_id = rule.get("name", "").split("/")[-1]
            display_name = rule.get("displayName", "No Name")
            description = rule.get("description", "No description")
            rule_type = rule.get("type", "N/A")
            precision = rule.get("precision", "N/A")
            update_time = rule.get("updateTime", "N/A")

            rule_set_raw = rule.get("curatedRuleSet", "")
            rule_set_id = rule_set_raw.split("/")[-1] if rule_set_raw else "N/A"

            severity = rule.get("severity", {}).get("displayName", "N/A")

            tactics_raw = rule.get("tactics", [])
            tactics = ", ".join([f"{t.get('id')} ({t.get('displayName')})" for t in tactics_raw]) if tactics_raw else "N/A"

            techniques_raw = rule.get("techniques", [])
            techniques = ", ".join([f"{t.get('id')} ({t.get('displayName')})" for t in techniques_raw]) if techniques_raw else "N/A"

            spreadsheet_data.append({
                "Rule ID": rule_id,
                "Rule Name": display_name,
                "Severity": severity,
                "Precision": precision,
                "Type": rule_type,
                "Tactics (MITRE)": tactics,
                "Techniques (MITRE)": techniques,
                "Rule Set ID": rule_set_id,
                "Description": description,
                "Last Updated": update_time
            })

        print(f"Saving CSV file to: {EXPORT_FILE}")

        with open(EXPORT_FILE, mode="w", newline="", encoding="utf-8-sig") as csv_file:
            writer = csv.DictWriter(csv_file, fieldnames=spreadsheet_data[0].keys(), delimiter=";")
            writer.writeheader()
            writer.writerows(spreadsheet_data)

        print("Export successful!")

    except Exception as e:
        print(f"\nCritical Error during execution: {e}")

if __name__ == "__main__":
    main()

 

1 reply


That is correct: the approach you have will only return content you've seen a detection for.

 

If you want to download all the rules that are published, you would need to use the Content Hub with `v1alpha/{PARENT}/contentHub/featuredContentRules`.
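A rough sketch of what that call could look like from Python. Only the endpoint path comes from the reply above; the host, parent resource format, page size, and response field names are my assumptions modeled on other Chronicle v1alpha resources, so verify them against the actual API reference before relying on this:

```python
from typing import Optional

# Assumed values: replace with your own tenant details. The parent format
# (projects/.../locations/.../instances/...) is a guess, not confirmed.
REGION = "us"
PARENT = "projects/PROJECT_ID/locations/us/instances/CUSTOMER_ID"

def featured_content_rules_url(region: str, parent: str) -> str:
    """Build the Content Hub URL from the endpoint path quoted above."""
    return (f"https://{region}-chronicle.googleapis.com/v1alpha/"
            f"{parent}/contentHub/featuredContentRules")

def fetch_all_featured_rules(region: str = REGION, parent: str = PARENT) -> list:
    # google-auth is imported lazily so the URL helper above stays usable
    # even without the dependency installed.
    import google.auth
    from google.auth.transport.requests import AuthorizedSession

    credentials, _ = google.auth.default(
        scopes=["https://www.googleapis.com/auth/cloud-platform"])
    session = AuthorizedSession(credentials)

    rules: list = []
    page_token: Optional[str] = None
    while True:
        params = {"pageSize": 1000}
        if page_token:
            params["pageToken"] = page_token
        resp = session.get(featured_content_rules_url(region, parent),
                           params=params)
        resp.raise_for_status()
        body = resp.json()
        rules.extend(body.get("featuredContentRules", []))  # field name assumed
        page_token = body.get("nextPageToken")
        if not page_token:
            return rules
```

Since this is the full published catalog rather than per-tenant detections, the result should be the same across tenants, which is what the MSSP coverage-mapping use case needs.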