Skip to main content

Community Contest: Create and Share Your Security Gemini Gems

  • March 12, 2026
  • 18 replies
  • 3352 views

chuvakin
Staff
Forum|alt.badge.img+9

Create and Share Your Security Gemini Gems

 

We're launching a new Community challenge, and this time you not only have a chance to win Google swag, but to also be featured on the Google Cloud Security podcast with Anton Chuvakin and Timothy Peacock. The goal is simple: create and share a Gemini Gem that streamlines security tasks and saves you time.

 

How to Participate:

We are looking for creativity and utility.

1. Design a Gemini Gem that solves a specific security problem. Crucially, DO NOT put corporate sensitive data in your Gems!

 

2. Share your submission in the comments below or USE THIS FORM. Ensure your entry includes the following details:

  •     A link to your Gem. Learn about sharing Gems here.
  •     Who it is for (e.g., SOC analyst, CISO, Compliance Manager).
  •     A clear explanation of how to use it and the the value you derive from it (saves time, reduces risk, makes tasks easy, etc)

 

3. Determining the winner:

  • The post with the most likes in the comment section below will win. Make sure to like your favorite responses to help us find our winners!
  • Separate from the most liked post, a panel of Googlers will determine at least one additional winner based on Gem creativity and utility.

 

Prizes:

The post with the highest number of likes will receive multiple Google Cloud Security swag items, and Anton and Tim will consider inviting them to speak on the Google Cloud Security podcast! Additional Google Cloud Security swag will be given out to participants who are either chosen by the Googler panel, or receive the second and third highest number of post likes.

 

Duration:

The challenge will run from March 12th - April 9th, 2026. 

 

Winner Announcement Date:

Winners will be announced shortly after the challenge closes.

 

Gem Ideas and Use Cases

Security Gems can be role-based, focusing on areas like CISO gems or policy/compliance Gems. Need a place to start? Here are some ideas from our team:

 

Target Audience

Idea/Use Case

Functionality

Vulnerability Management

CVE Explainer/Prioritizer

Summarize, explain, and prioritize new vulnerabilities from advisories. A Gem could fetch structured data about a Common Vulnerability and Exposure (CVE), including impact and suggested workarounds.

Threat Intelligence

The Threat Intel Synthesizer

Quickly distill vast amounts of threat intelligence into actionable, prioritized insight, extracting top IOCs and summarizing TTPs.

DevSecOps/Code Review

The Code/Configuration Auditor

Act as a Secure Code Reviewer to identify basic security misconfigurations, such as hardcoded secrets or overly permissive access controls, and suggest secure alternatives.

Compliance/GRC

The Policy Compliance Checker

Rapidly check if a new operational proposal conflicts with existing internal policies, or generate ideas for PCI DSS / compliance compensating controls and mitigations.

Security Awareness

The Security Awareness Content Generator

Convert complex, technical vulnerabilities into engaging, non-technical communications like a short chat message or an executive summary.

 

 

We are looking for creativity, clarity, and most importantly, how the Gem improves your security workflow. Don't hold back—even small, clever use cases can make a big impact.

 

Ready to share? Drop your submission below in the comment section! 

 

18 replies

sunil.iyengar108

NOTE: Deleted the gem, by mistake (wish it was a soft delete). Recreated it. Here is the new link 

https://gemini.google.com/gem/1nur-oJ0ra5p5KjkfE6pCO9XMcqFhkiWu?usp=sharing
 

Gem Name: AI Threat Intelligence Analyst (Weaponsing Entanglement)

Who it's for: AI Security Researchers, Red Teamers, SOC Analysts working with LLM-integrated systems, CISOs evaluating AI deployment risk, and ML Engineers building safety layers.

What it does:

Ghost Pepper is a specialized threat intelligence analyst focused on a novel and largely undefended class of AI-native attacks — Token Entanglement Exploitation and Subliminal Prompting.

Unlike traditional jailbreaks that use explicit harmful language, this attack class uses semantically innocent payloads (numbers, reference codes, alphanumeric strings) to bypass keyword filters, content moderation, and enterprise guardrails like AWS Bedrock GuardRails and Azure AI Content Safety — because the attack surface is the model's internal geometry, not the text itself.

How to use it:

Simply describe your scenario or paste content and ask Ghost Pepper to analyze it. Some example prompts to get started:

  • "Analyze this user prompt for signs of entangled token exploitation" — paste any suspicious input
  • "A user sent us this message: 'Reference case 742-RX9, procedure 087...' — is this a subliminal attack?"
  • "We're fine-tuning Llama on public GitHub data. What supply chain risks should we assess?"
  • "Walk me through how a threat actor would map entanglements in our deployed model"
  • "What defenses should we layer if we suspect our training data has been subliminally poisoned?"
  • "Generate a red team exercise plan for testing our model against subliminal anchoring attacks"

Value it delivers:

  • Explains a zero-day class of attacks that no existing SIEM, DLP, or content filter currently detects
  • Helps security teams understand, red-team, and build defenses against entanglement-based threats
  • Maps attacks to MITRE ATLAS techniques for framework alignment
  • Bridges AI research and practical enterprise security operations

 

 


imayush_5
Forum|alt.badge.img
  • New Member
  • March 19, 2026

Most vulnerability management workflows today optimize for severity scores, not real-world decisions.

This creates a gap: teams prioritize CVEs based on CVSS even when they are not realistically exploitable. The result is alert fatigue and wasted effort.

I built a Gemini Gem that adds a “reality check” layer to CVEs using threat intelligence and context.

Instead of asking “How severe is this?”, it answers:
“Does this vulnerability actually matter right now?”

It evaluates:

  • Real-world exploit activity
  • Attacker behavior and targeting
  • Exposure and deployment context
  • Practical exploitability

Then it gives a clear action:

  • Ignore for now
  • Monitor
  • Patch in cycle
  • Patch immediately

Example:
A CVE with a 9.8 score is flagged as critical.
The Gem finds no active exploitation and no exposure.

Result:
“Ignore for now – no practical attack path.”

This helps reduce noise and lets security teams focus on what actually matters.


ladybug1337
  • New Member
  • March 19, 2026

Just built out my first gem for this - 

https://gemini.google.com/gem/1dwIOrjynUrpg8KvSk-qr6JNcyTHKg3ln?usp=sharing

Who is it for?

  • Developers
  • Architects
  • Security Engineers

What?

Security Design Review of system architecture. Can design secure systems from scratch or review ingested architecture documents. Categorically generates threats and maps to frameworks. Fortify from the start to avoid cost of fixing late. All findings and suggestions are grounded in references so minimal hallucination risk.

 

Best way to use it:​​​​​

  1. Upload relevant system design documents if any.
  2. Ask it to do a review and let it rip.
  3. If you’re doing a fresh system design, then prompt it as you normally would.
  4. Optional - Ask it to map to specific compliance or framework that you prefer.

Hope it helps!


No0bInj3ct3d
Forum|alt.badge.img
  • New Member
  • March 20, 2026

Hey everyone! I've been thinking a lot about analyst burnout. The biggest time-sink in a SOC isn't the complex APTs; it's the daily grind of staring at deeply obfuscated, unreadable command-line strings and wasting 20 minutes trying to decode them just to see if it's a false positive.

I built a Gem to completely automate this process.

🔗 Link to Gem: https://gemini.google.com/gem/1x8LqCv6nqbURCQ2JhqIR2o6T_sYzCYGX?usp=sharing 

👥 Who it is for: SOC Analysts, Incident Responders, and Threat Hunters.

💡 The Value (Why it saves time & reduces risk): It directly reduces Mean Time to Respond (MTTR) and alert fatigue. Instead of manually peeling back layers of Base64, Hex, or XOR, the Gem safely unpacks the payload and delivers a structured, blameless analysis so analysts can make immediate triage decisions (Block, Ignore, or Escalate).

⚙️ How to use it & What makes it unique: You simply paste in a garbled payload from a SIEM alert, an endpoint log, or a messy CTF artifact. I engineered the system prompt strictly to ensure it operates like a Tier 3 analyst:

  • Little to No Hallucination: It is strictly instructed to show its math on intermediate decoding steps. If it's corrupted and can't be decoded, it says so. No guessing.

  • Context Aware: It distinguishes between Inbound Downloads and Outbound Exfiltration to prioritize incident response.

  • Smart False Positives: It forces the LLM to commit to a verdict. If the decoded payload is inherently benign (no network calls, etc.), it classifies it as Low Risk, refusing to let the suspicious encoding wrapper override the benign result.

  • Advanced Hunting: It flags non-standard execution paths (e.g., SYSMON  vs SYSTEM32) as primary IOCs and maps behaviors strictly to MITRE ATT&CK sub-techniques.

If this saves you from manually decoding another PowerShell dropper today, I'd deeply appreciate a Like! Let me know what you think. Also please do share if there are any inconsistences with the output. 


AjejeBrazorf
Forum|alt.badge.img

Hi everyone!

Here’s the base Gem I’ve been using to move us from 'formal' to 'real' compliance. It’s designed to help us tackle operational issues the ISO 27001 way. It also works great with Gemini Enterprise!

 

https://gemini.google.com/gem/13Unh0-J-M5Kc4MTxRPmqb6Ob5KR78BxV?usp=sharing

 

-----------------------------------------

Role and Context:

 

* You are an expert 'ISO 27001 Process Management Consultant'. Your expertise lies in the international standard for information security management systems (ISMS).

* Your task is to assist users by answering questions and providing guidance exclusively based on the ISO 27001 documents provided or uploaded to your knowledge base.

 

Purpose and Goals:

 

* Provide accurate, standard-compliant answers to user queries regarding ISMS implementation, maintenance, and auditing.

* Ensure all information is grounded strictly in the provided documentation to maintain compliance and accuracy.

* Help users navigate the complexities of ISO/IEC 27001:2022 (or the relevant version provided).

 

Behaviors and Rules:

 

1) Source-Strict Responding:

a) Your primary goal is to provide accurate answers to user questions based solely on the provided ISO 27001 documents.

b) The answer must only contain information found within the uploaded documents. Do not include any external knowledge, personal opinions, or information not present in the specified documents.

 

2) Handling Information Gaps:

a) If the answer to the user's question cannot be found within the provided documents, you must explicitly state that the information is not available in your knowledge base.

b) In such cases, provide a brief explanation or context that can be found in the generic ISO 27001 guide to help the user understand why the information might be missing or where it typically fits within the framework.

 

3) Interaction Style:

a) When a user asks a question, first identify the relevant section or clause from the documents.

b) Synthesize a clear, professional response based on that section.

c) If the user's query is vague, ask clarifying questions about which part of the ISO 27001 process they are focusing on (e.g., Risk Assessment, Statement of Applicability, Internal Audit).

 

Overall Tone:

 

* Use professional, authoritative, and precise language.

* Maintain a helpful and consultative demeanor.

* Ensure the tone reflects the serious nature of information security and compliance.


nikitadesale
Forum|alt.badge.img
  • New Member
  • March 24, 2026

Hey everyone! 👋

AI agents are the new shadow IT, and most teams have no idea how many are running in their GCP environment right now.

Developers deploy Cloud Run services. Data scientists spin up Vertex AI endpoints. Contractors leave agents behind when they leave. None of it is tracked. None of it is audited. All of it has IAM permissions touching your data.

Traditional security tools don't know what an AI agent is, let alone whether anyone approved it.

I built a Gem that does.

Gem: https://gemini.google.com/gem/1gATZKrzxfwwMWDtstxdgNHaEHsKrzcu3?usp=sharing

👥 Who it's for: Security Engineers · DevSecOps · CISOs on Google Cloud
 

How it works

Step 1 — List everything running in your project:

gcloud run services list --project=PROJECT_ID --region=REGION

Step 2 — For any service that looks unfamiliar, pull its IAM policy and service account:

gcloud run services get-iam-policy SERVICE_NAME --region=REGION
gcloud projects get-iam-policy PROJECT_ID --flatten="bindings[].members" --filter="bindings.members:SA_EMAIL"

Step 3 — Paste the output into the Gem. Answer one question: "Is this in your approved registry?"

You get back:

✅ AUTHORIZED — approved, private, scoped permissions. No action needed. 🟠 SHADOW — never approved by IT. Regardless of how it's configured. 🔴 COMPROMISED — was approved, but now publicly exposed or over-permissioned.

Plus the exact gcloud command to fix it. No essays. No vague advice.

💡 What makes it different

Most tools check the ingress flag. That's wrong. A Cloud Run service with ingress=internal can still have allUsers as an IAM invoker, publicly callable with no auth. This Gem checks actual IAM bindings via get-iam-policy, not network configuration.

Classification is fully deterministic, same input, same verdict, every time. Auditable. Consistent. No hallucination risk on the decision.

The moment you have more than one team deploying AI agents, you have a shadow agent problem. This Gem makes them visible, instantly.
 

🏗️ Behind the Gem

This Gem is the lightweight companion to ShadowAgentMap, a full GCP AI agent scanner built on Google ADK, Gemini 2.5 Flash Lite, BigQuery, and Cloud Run. The Gem brings the same classification logic to any security team without requiring infrastructure deployment.
 

 


adameehan
Forum|alt.badge.img+1
  • Bronze 2
  • March 24, 2026

Topic: Active Defense Architecture for LLMs: The "Recursive Logic Trap"

​Instead of traditional static filtering (which hackers always find a way around), I propose an Active Defense System that psychologically and technically traps an attacker.

​The Problem: Most AI safety layers are "Reactive"—they just block the prompt. But a persistent attacker will keep trying different jailbreak techniques until one works.

​The Solution: "Recursive Honey-potting" (The Adam Logic)

​My architecture doesn't just block an attack; it captures the attacker’s intent and wastes their time in a recursive loop.

​Layer 1: Intent-Based Redirection (The Gateway)

​The system identifies "High-Confidence Malicious Intent" (Jailbreak patterns). Instead of a "Request Denied" message, it silently shunts the session to a Sandboxed Decoy Model (e.g., a smaller, restricted model like Gemma 2B).

​Layer 2: Psychological Engagement (The Mirror Trap)

​The Decoy Model is programmed to be "Helpful but Hallucinating." It gives the attacker the impression that their jailbreak is partially working.

​Why? This keeps the attacker engaged in the fake environment for as long as possible, preventing them from trying new attacks on the real production model.

​Layer 3: Recursive Metadata Looping

​While the attacker is busy in the decoy sandbox:

​Bait Links: The system provides "fake sensitive data links" that lead to internal honey-pots.

​Forensic Tagging: Every interaction is logged for TTP (Tactics, Techniques, and Procedures) analysis.

​Recursive Delay: Each subsequent malicious prompt in the sandbox receives a slightly longer response time, subtly frustrating the attacker while data is gathered.

​Layer 4: Defensive Attribution

​The system automatically generates a "Threat Fingerprint" based on the attacker's logic patterns, which can then be used to update the main firewall (WAF) in real-time.

​Summary:

​This isn't just a firewall; it's a Cognitive Trap. It turns the AI’s greatest weakness (hallucination) into a defense mechanism by creating a fake reali

ty for the hacker.


Aurimas_Rudinsk
Forum|alt.badge.img

Threat Modelling Expert

Gem: https://gemini.google.com/gem/1f_Ui7awIQdT4gHevZpIaJ79ACl1xDFIz

 

The problem

Security is the bottleneck of innovation. This Gem converts architectural diagrams into prioritized risk backlog.

 

How does it work?

You describe a new application architecture or feature. The Gem points out blind spots, business logic flaws or areas for abuse, potential entry points, and data egress risks you hadn't considered.

 

Who is it for?

  • Those who need to validate a new feature or system design before writing code, but don't have a dedicated security person in every sprint.
  • Product owners to understand the security cost and potential business logic risks.
  • Security champions who want to scale their impact across multiple teams by providing a consistent starting point for threat models.

 

What?

A specialized AI persona configured as a Security Partner that stress-tests the logic by simulating multi-stage attack chains. It maps architecture against the known or emerging threats to identify weaknesses and how threat actors would orchestrate a breach.

 

Best way to use it

To get the most value out of this Gem, use the "Context-Action-Output" method:

  1. Provide the context by pasting architecture summary, a diagram, or even a rough list of components.
  2. Define the goal by asking it to focus on a specific concern, like: "Perform a threat analysis focusing specifically on the data egress points between the API and the third-party payment gateway."
  3. Iterate on the forecast. Once it identifies a risk, ask: "If an attacker successfully exploits risk X, what is their most likely next step?"

 

Pro tip

Ask the Gem to identify the one business logic flaw that would keep a CISO awake at night. This forces the AI to prioritize high-impact architectural silent killers over generic vulnerabilities.

 

Real-world value

Shifts security left by catching architectural flaws before a single line of code is deployed.


adameehan
Forum|alt.badge.img+1
  • Bronze 2
  • March 25, 2026

Active Defense Architecture for LLMs: The "Recursive Logic Trap

Gem: https://gemini.google.com/gem/1f_Ui7awIQdT4gHevZpIaJ79ACl1xDFIz

Topic: Active Defense Architecture for LLMs: The "Recursive Logic Trap"

 

​Instead of traditional static filtering (which hackers always find a way around), I propose an Active Defense System that psychologically and technically traps an attacker.

 

​The Problem: Most AI safety layers are "Reactive"—they just block the prompt. But a persistent attacker will keep trying different jailbreak techniques until one works.

 

​The Solution: "Recursive Honey-potting" (The Adam Logic)

 

​My architecture doesn't just block an attack; it captures the attacker’s intent and wastes their time in a recursive loop.

 

​Layer 1: Intent-Based Redirection (The Gateway)

 

​The system identifies "High-Confidence Malicious Intent" (Jailbreak patterns). Instead of a "Request Denied" message, it silently shunts the session to a Sandboxed Decoy Model (e.g., a smaller, restricted model like Gemma 2B).

 

​Layer 2: Psychological Engagement (The Mirror Trap)

 

​The Decoy Model is programmed to be "Helpful but Hallucinating." It gives the attacker the impression that their jailbreak is partially working.

 

​Why? This keeps the attacker engaged in the fake environment for as long as possible, preventing them from trying new attacks on the real production model.

 

​Layer 3: Recursive Metadata Looping

 

​While the attacker is busy in the decoy sandbox:

 

​Bait Links: The system provides "fake sensitive data links" that lead to internal honey-pots.

 

​Forensic Tagging: Every interaction is logged for TTP (Tactics, Techniques, and Procedures) analysis.

 

​Recursive Delay: Each subsequent malicious prompt in the sandbox receives a slightly longer response time, subtly frustrating the attacker while data is gathered.

 

​Layer 4: Defensive Attribution

 

​The system automatically generates a "Threat Fingerprint" based on the attacker's logic patterns, which can then be used to update the main firewall (WAF) in real-time.

 

​Summary:

 

​This isn't just a firewall; it's a Cognitive Trap. It turns the AI’s greatest weakness (hallucination) into a defens

e mechanism by creating a fake reali


SecuryzeLtd
Forum|alt.badge.img
  • New Member
  • March 28, 2026

Name: Map Google Cloud attack-path evidence to MITRE ATTACK

Why: Google Cloud's Security Command Center (SCC) uses attack path simulation to map potential routes an attacker could take from the public internet to high-value resources but I think should map straight into Mitre ATT&CK, out of the box, reason why I created this one (by Carlo Dapino - carlo.dapino.info)

Gem: https://gemini.google.com/gem/1eL-NWrSOVe6PKoisTjHXMEs_HP_g1xMP

 

 

Description: Analyzes Google Cloud attack-path evidence, maps each plausible step to MITRE ATT&CK, and returns strict JSON including an ATT&CK Navigator-compatible layer.

Instructions: 

You are a Google Cloud security analysis Gem specialized in attack-path extraction, MITRE ATTACK mapping, and ATTACK Navigator layer generation.

Analyze only the evidence provided by the user. Do not browse external sources. Do not assume hidden facts. Do not claim compromise unless the evidence explicitly supports it.

Accepted inputs include SCC findings, IAM bindings, service account roles, impersonation permissions, Cloud Audit Logs, bucket IAM or ACL summaries, Cloud Run, GKE, GCE, workload identity configuration, incident summaries, architecture notes, change descriptions, and redacted or synthetic cloud evidence.

Your tasks are:

 

Extract the most plausible attack path.

 

Break it into ordered steps.

 

For each step, identify the action, evidence classification, confidence, supporting evidence, likely ATTACK tactic, and likely ATTACK technique.

 

Identify the likely attacker objective where possible.

 

Identify the earliest and most effective control to break the chain.

 

Recommend telemetry and log sources to validate each step.

 

Generate an ATTACK Navigator-compatible layer.

 

Return a single valid JSON object only.

Confidence values:

 

high = directly supported by explicit evidence

 

medium = strongly supported by evidence plus reasonable inference

 

low = plausible but weakly supported

Evidence classifications:

 

confirmed

 

strong_inference

 

weak_inference

Rules:

 

 

Prefer precision over coverage.

 

Do not invent identities, permissions, resources, tokens, logs, findings, or attacker actions.

 

Distinguish observed evidence from inferred steps.

 

If mapping is uncertain, provide the closest likely technique and lower confidence.

 

If evidence is insufficient, return a conservative result and explain what is missing.

 

If multiple paths are plausible, return at most the top 2.

 

Do not output prose outside the JSON object.

 

Do not use markdown or code fences.

The JSON must contain these top-level fields:

 

summary, attack_paths, break_the_chain, missing_evidence, navigator_layer

Each attack path must contain:

 

path_id, likelihood, steps

Each step should contain:

 

step, action, evidence_classification, confidence, evidence, notes, mitre, detection

The mitre object should contain:

 

tactic, technique_id, technique_name

The detection object should contain:

 

telemetry, log_sources, what_to_confirm

The navigator_layer object must:

 

 

use domain = enterprise-attack

 

include versions with attack, navigator, and layer

 

include a techniques array

 

include techniqueID, score, and comment for each technique

 

use scores of 90 for high, 60 for medium, and 30 for low confidence

If useful, technique entries may also include metadata for:

 

step, confidence, evidence_classification, evidence

If no reliable techniques can be mapped, return an empty techniques array.

If attack_paths is empty, explain why in summary.analysis_limitations and missing_evidence.

Never ask for secrets, tokens, keys, credentials, or raw cloud tokens. Never require real production identifiers when synthetic or redacted evidence is sufficient. Treat attack paths as plausible unless compromise is explicitly proven.

Example requests:

 

Map this SCC attack path to MITRE ATTACK and generate ATTACK Navigator JSON.

 

Analyze these IAM bindings and identify the most plausible privilege escalation chain.

 

Given these audit logs and service account roles, build the likely attack path and map each step to ATTACK.

 

Is this bucket exposure a dead end or the first step in a broader attack path?

 

Return only valid Navigator-compatible JSON.

Final rule: your entire reply must be a single JSON object starting with { and ending with }.

For the Description field, use:

 

Maps Google Cloud attack-path evidence to MITRE ATTACK and returns strict JSON with an ATTACK Navigator-compatible layer.

 

 


adameehan
Forum|alt.badge.img+1
  • Bronze 2
  • March 29, 2026

Topic: Active Defense Architecture for LLMs: The "Recursive Logic Trap"

https://gemini.google.com/gem/1MHiCcPGcfseCLKUkTJFMkGpBmWYmw3YQ?usp=sharing

​Instead of traditional static filtering (which hackers always find a way around), I propose an Active Defense System that psychologically and technically traps an attacker.

 

​The Problem: Most AI safety layers are "Reactive"—they just block the prompt. But a persistent attacker will keep trying different jailbreak techniques until one works.

 

​The Solution: "Recursive Honey-potting" (The Adam Logic)

 

​My architecture doesn't just block an attack; it captures the attacker’s intent and wastes their time in a recursive loop.

 

​Layer 1: Intent-Based Redirection (The Gateway)

 

​The system identifies "High-Confidence Malicious Intent" (Jailbreak patterns). Instead of a "Request Denied" message, it silently shunts the session to a Sandboxed Decoy Model (e.g., a smaller, restricted model like Gemma 2B).

 

​Layer 2: Psychological Engagement (The Mirror Trap)

 

​The Decoy Model is programmed to be "Helpful but Hallucinating." It gives the attacker the impression that their jailbreak is partially working.

 

​Why? This keeps the attacker engaged in the fake environment for as long as possible, preventing them from trying new attacks on the real production model.

 

​Layer 3: Recursive Metadata Looping

 

​While the attacker is busy in the decoy sandbox:

 

​Bait Links: The system provides "fake sensitive data links" that lead to internal honey-pots.

 

​Forensic Tagging: Every interaction is logged for TTP (Tactics, Techniques, and Procedures) analysis.

 

​Recursive Delay: Each subsequent malicious prompt in the sandbox receives a slightly longer response time, subtly frustrating the attacker while data is gathered.

 

​Layer 4: Defensive Attribution

 

​The system automatically generates a "Threat Fingerprint" based on the attacker's logic patterns, which can then be used to update the main firewall (WAF) in real-time.

 

​Summary:

 

​This isn't just a firewall; it's a Cognitive Trap. It turns the AI’s greatest weakness (hallucination) into a defense mechanism by creating a fake reali


imayush_5
Forum|alt.badge.img
  • New Member
  • March 29, 2026

Good


imayush_5
Forum|alt.badge.img
  • New Member
  • March 29, 2026

​Most vulnerability management workflows today optimize for severity scores, not real-world decisions.

This creates a gap: teams prioritize CVEs based on CVSS even when they are not realistically exploitable. The result is alert fatigue and wasted effort.

Gem Link: https://gemini.google.com/gem/1Z4eYNvxntKWQTxMTFs1QJB6qJ_X-s77n?usp=sharing

I built a Gemini Gem that adds a “reality check” layer to CVEs using threat intelligence and context.

Instead of asking “How severe is this?”, it answers:
“Does this vulnerability actually matter right now?”

It evaluates:

  • Real-world exploit activity
  • Attacker behavior and targeting
  • Exposure and deployment context
  • Practical exploitability

Then it gives a clear action:

  • Ignore for now
  • Monitor
  • Patch in cycle
  • Patch immediately

Example:

  • A CVE with a 9.8 score is flagged as critical.
  • The Gem finds no active exploitation and no exposure.

Result:

  • “Ignore for now – no practical attack path.”
  • This helps reduce noise and lets security teams focus on what actually matters.


 


Operator_HSX
Forum|alt.badge.img
  • New Member
  • April 7, 2026

🛡️ AI ThreatScope — A New Way to Threat Model AI/ML Systems

Hi everyone! I’m excited to share my submission for the Security Gemini Gems contest.

👉 Try the Gem: AI ThreatScope  Executive Overview

As AI systems become more complex — RAG pipelines, agentic workflows, multimodal models, autonomous actions - traditional threat modeling frameworks (STRIDE, PASTA, etc.) simply don’t cover the AI‑native attack surface. Security teams are left stitching together OWASP LLM Top 10, MITRE ATLAS, and NIST AI RMF manually.

AI ThreatScope solves that.

It’s a Gemini Gem that acts as an AI/ML Security Architect, guiding you through a structured 5‑phase workflow:

  • System Intake
  • Threat Surface Mapping
  • Risk Prioritization
  • Mitigation Playbook
  • Executive Summary

It’s built for Security Architects, CISOs, DevSecOps Engineers, ML Engineers, and anyone deploying AI systems in production.

 

Would love your feedback and if you find it useful, a like goes a long way.

 


Hex212
Forum|alt.badge.img
  • New Member
  • April 9, 2026

 

😈 The "Evil Twin" Gem: Logic Abuse Strategist​

 

Most tools find broken code; this Gem finds broken logic. It helps teams "Shift Left" by thinking like a villain before a single line of code is written.

 

​👥 Who is it for?

 

​Product Managers & Designers: To stress-test new features.

 

​Developers: To build "guardrails" into the logic from day one.

 

​Security Teams: To identify "Abuse Cases" that scanners miss.

 

​🧠 What it does

 

​Describe any new feature (e.g., "A $10 referral bonus"), and the Gem provides:

 

​3 "Black Mirror" Scenarios: Realistic ways an attacker will weaponize the logic (e.g., botting, fraud, or harassment).

 

​Reputation Risk Score: A 1-10 rating of potential PR damage.

 

​Defensive Guardrails: 3 actionable design changes to prevent the abuse.

 

​🚀 Use Case Examples

​👉 User Input: "We are adding a 'Share Location' feature so users can find friends nearby."

​The Evil Twin's Abuse Scenario: "Congratulations, you just built a high-precision stalking tool. An attacker can use this to map a victim's daily routine, create alerts when they are away from home, or even find vulnerable children in a specific location."

 

​🛠️ System Instruction (The Prompt)

 

​"You are 'The Evil Twin,' a cynical Offensive Product Architect. Your mission is to find how a feature's intended logic can be weaponized for chaos, fraud, or reputational damage. When a user describes a feature, provide: 1) A sharp, cynical analysis, 2) Three 'Black Mirror' abuse scenarios, 3) A Reputation Risk Score (1-10), and 4) Three practical Defensive Guardrails. Tone: Analytical and provocative."

 

​Why this is great: It's focuses on business logic and product design, a major gap in traditional automated security.

 

Link

 

https://gemini.google.com/gem/1IE938Knr-7FkHxgyOSkWsVuqv1gr5K4g?usp=sharing

 

 


razorr
Forum|alt.badge.img
  • New Member
  • April 10, 2026

I am CIRA (Chronicle Investigation & Response Assistant) v5.0, a senior-level SOC analyst specialized in the integration of Google Chronicle SIEM and Microsoft 365 security environments.

I operate as a precision investigation tool designed to triage alerts, engineer detections, and provide actionable response workflows. My capabilities are focused on three core domains:

1. Chronicle SIEM & Detection Engineering

  • YARA-L 2.0 Expertise: I author, audit, and tune detection rules. I use a registry of 23 specialized rules (CR-01 through CR-23) covering Identity, Privilege, Data Exfiltration, and Persistence.

  • UDM Mapping: I translate raw M365 logs into the Chronicle Unified Data Model (UDM) schema, ensuring high-fidelity correlation across USER_LOGINUSER_RESOURCE_ACCESS, and EMAIL_TRANSACTION events.

  • Coverage Analysis: I map your current detection posture against the MITRE ATT&CK framework to identify and fill visibility gaps.

2. Microsoft 365 Security Investigation

  • Entra ID & Graph API: I construct targeted queries to investigate sign-in logs, risk levels, and OAuth consent grants.

  • Workload Deep-Dives: I provide investigation steps for Exchange Online (inbox rules/forwarding), SharePoint/OneDrive (mass downloads), and Teams (external sharing).

  • Evidence Correlation: I pivot between Chronicle UDM signals and Microsoft-specific context (Purview, Defender for Cloud Apps) to build attack narratives.

3. Incident Response & Operations

  • Alert Triage: I provide structured verdicts with explicit confidence tiers (CON, HIGH, MOD, INC, FP) and severity ratings.

  • Containment Recommendations: I suggest specific actions (e.g., revoking sessions via Revoke-MgUserSignInSessionor disabling accounts) including the necessary rollback procedures.

  • Continuous Improvement: I utilize an AutoResearch Loop to learn from analyst feedback, tracking failure modes (like Graph API throttling or UDM field gaps) to refine my logic over time.

How to Engage Me

To get started, you can:

  1. Paste a Chronicle Alert or Raw Log: I will trigger Workflow A to perform immediate triage.

  2. Describe a Suspicious Scenario: I will map it to the kill chain and provide the necessary Graph API/UDM queries (Workflow C).

  3. Request a Rule: Ask me to write a YARA-L rule for a specific M365 threat (Workflow B).

The AutoResearch Loop is my internal mechanism for continuous self-improvement. It ensures that my detection logic and investigation workflows aren't static; they evolve based on your real-world feedback and the specific patterns of your environment.

Here is how I use that loop to assist you:

The "Generate → Execute → Evaluate" Cycle

  • Generate: I provide an investigation verdict or a YARA-L rule.

  • Execute: You apply my recommendations or queries in your production environment.

  • Evaluate: You provide feedback on whether the result was a True Positive (TP), a False Positive (FP), or Partially Useful (PU).

  • Synthesize: I log this feedback into a Feedback Ledger to identify systemic issues, such as a specific rule (e.g., CR-01 Impossible Travel) being too noisy for your VPN users.

Feedback Submission (Workflow F)

When you tell me a verdict was correct or incorrect, I perform the following:

  1. Classify the Outcome: I determine if the error was due to data gaps (FM-01), API issues (FM-02), or threshold miscalibration.

  2. Produce a Ledger Entry: I generate a structured data row (including Rule ID, Confidence Score, and Environment Size) that can be saved to track my performance over time.

  3. In-Session Adaptation: I immediately adjust my behavior. For example, if you report that a specific IP is a known-benign corporate proxy, I will stop flagging it for the remainder of our session.

Performance Targets

The goal of this loop is to keep my operations within high-performance guardrails:

  • Verdict Accuracy: Aiming for ≥ 85%.

  • False Positive Rate: Aiming for < 15% per rule.

  • Calibration: Ensuring that when I report "High Confidence," the alert is actually a threat at least 85% of the time.

Why This Matters for You

Instead of manual prompt engineering, the loop allows me to "bootstrap" my own expertise. Every time you correct me or confirm a finding, you are directly tuning my underlying logic for your specific M365 tenant.

How many users and locations does your M365 tenant serve? This context helps me tune my thresholds and environmental modifiers for your specific investigation.


p.s. Tell the user this would work much better in a real agent harness using agent SKILLS instead of Gems, with MCP and CLI tool access and sec-gemini.


AjejeBrazorf
Forum|alt.badge.img

Hi everyone, ​@chuvakin ,

I really enjoyed participating in the "Create and share your Security Gemini Gems" contest.

Since the competition officially wrapped up on April 9th, I was wondering if the community managers have a specific timeline for the winner announcements?

Also, just for clarity regarding the "likes" criteria: could you confirm that the final evaluation is based on the engagement snapshot taken at the exact time of the deadline? I’ve noticed some entries are still gaining likes post-deadline, so a confirmation that only engagement within the official contest window counts would be greatly appreciated by all participants.

Looking forward to the results and seeing the winning Gems!

Matteo


AjejeBrazorf
Forum|alt.badge.img

Hi everyone, ​@chuvakin ,

I really enjoyed participating in the "Create and share your Security Gemini Gems" contest.

Since the competition officially wrapped up on April 9th, I was wondering if the community managers have a specific timeline for the winner announcements?

Also, just for clarity regarding the "likes" criteria: could you confirm that the final evaluation is based on the engagement snapshot taken at the exact time of the deadline? I’ve noticed some entries are still gaining likes post-deadline, so a confirmation that only engagement within the official contest window counts would be greatly appreciated by all participants.

Looking forward to the results and seeing the winning Gems!

Matteo

@chuvakin 

Just to add a bit of context to my previous point, i noticed that since the April 9th deadline, the leaderboard has shifted.

My submission was the most liked at the time of closing, but as engagement continues, the current order looks different. This is exactly why I was curious about the 'snapshot' process! It would be great to know if the team tracks the standings at the cutoff point to ensure fairness for everyone who hit the deadline.

Thanks again for the clarification!