
From Text to Telemetry: How MITRE ATT&CK v18 Changes the Game for Detection Engineers

  • December 15, 2025

ipninichuck
Staff

For nearly a decade, the "Detection" section of the MITRE ATT&CK framework was the industry’s most important afterthought. It was the section every analyst read but no engineer could easily use. With the release of version 18 in October 2025, that era is officially over.

MITRE has fundamentally re-architected how the framework handles defensive guidance, moving from a library of descriptive notes to a rigorous system of Detection Strategies and Analytics.

This isn’t just a formatting update; it is a shift in the philosophy of detection. It acknowledges that finding a threat is not about knowing what it is (the definition), but understanding how it manifests in data (the telemetry). For the Google SecOps community, this update bridges the gap between threat intelligence and YARA-L implementation like never before.

 

The Old World: The "Needle in a Haystack" Era

 

To understand the magnitude of the v18 update, we have to look at the friction inherent in the previous versions. In versions 1 through 17, the detection guidance for any given technique was essentially a free-text field—a "Notes" section.

If you looked up a technique like Scheduled Task/Job (T1053), you would find a paragraph advising you to "monitor for the creation of scheduled tasks" or to "look for suspicious usage of schtasks.exe."

While factually correct, this advice was paralyzed by its own vagueness. It was akin to a cookbook telling a chef to "make the soup taste good" without listing the ingredients or the spices. The burden of translation fell entirely on the defender:

  • Ambiguity: What defines "suspicious"? Is it the arguments? The parent process? The time of day?
  • Fragility: Engineers often wrote rules based on tight strings (e.g., schtasks.exe /create) that adversaries could easily bypass by renaming binaries or using API calls directly.
  • Noise: Broad rules based on vague advice led to alert fatigue. Monitoring all scheduled task creations in an enterprise environment is a recipe for drowning in false positives.
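The "fragile string" problem is easy to demonstrate. A legacy-style rule built from that vague guidance might look like the following YARA-L sketch (hypothetical, for illustration only; not a recommended detection):

```
rule legacy_schtasks_string_match {
  meta:
    description = "Legacy-style detection: any schtasks.exe /create invocation (illustrative; noisy and fragile)"
    severity = "LOW"

  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    // Brittle: bypassed by renaming the binary, calling the Task Scheduler API
    // directly, or obfuscating the command line
    $e.target.process.command_line = /schtasks\.exe \/create/ nocase

  condition:
    $e
}
```

Every legitimate admin script that registers a task trips this rule, while an attacker who copies schtasks.exe to a new name walks right past it.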

The framework was brilliant at telling us what the adversary was doing, but it was frustratingly silent on the specific logic required to catch them.

 

The New Blueprint: Strategies and Analytics

 

Version 18 tears down that static text field and replaces it with a modular, engineering-focused hierarchy. The vague suggestions have been deprecated in favor of two distinct, actionable objects: Detection Strategies and Analytics.

 

1. Detection Strategies (The "What")

These represent the high-level methodology. A strategy defines the abstract approach to finding a behavior, independent of the tool you are using. It categorizes the type of "net" you are trying to cast.

  • Example Strategy: "Detect persistence via system utility abuse."
  • Goal: This tells you the scope of your detection coverage.

 

2. Analytics (The "How")

This is the revolutionary part. An Analytic is a specific, pseudo-code blueprint that translates the strategy into logic. It bridges the gap between the concept and the keyboard.

Instead of saying "watch for LSASS dumping," an Analytic (e.g., AN0648) explicitly defines the logic:

"Alert if a process opens a handle to lsass.exe with PROCESS_VM_READ rights, specifically excluding known binaries like generic EDR agents."

This change decouples the intent of the adversary from the action they take. In the old model, a technique like "PowerShell" was a monolith. You either detected PowerShell, or you didn't. In the new model, that single technique is fractured into specific, observable behaviors—encoded commands, download cradles, or lateral movement attempts—each with its own distinct analytic ID.
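For example, the "encoded commands" behavior stands alone as one small, atomic rule (a sketch; the flag regex is deliberately loose and would need tuning):

```
rule powershell_encoded_command {
  meta:
    description = "Atomic analytic-style rule: PowerShell launched with an encoded command (illustrative)"
    severity = "LOW"

  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    $e.target.process.file.full_path = /powershell\.exe/ nocase
    // Matches the -e, -enc, and -encodedcommand variants of the flag
    $e.target.process.command_line = /\s-e(nc(odedcommand)?)?\s/ nocase

  condition:
    $e
}
```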

 

Case Study: Lateral Movement via SMB

 

Let's look at a concrete example of how this improves detection engineering for Lateral Movement.

The Legacy Approach (v17):

The guidance for SMB/Windows Admin Shares (T1021.002) might have simply said: "Monitor for file shares being accessed by unauthorized accounts or executables."

The v18 Approach:

MITRE now provides specific Analytics for the underlying behaviors, such as Service Creation or Named Pipe activity.

  • Strategy: Detect Remote Service Execution.
  • Analytic (Pseudocode): Process Create events where the ParentImage is services.exe AND the CommandLine contains network paths or references to admin shares (ipc$, c$).
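That pseudocode maps almost one-to-one onto a single-event YARA-L rule (a sketch; the regexes are assumptions to tune against your telemetry):

```
rule remote_service_execution_admin_share {
  meta:
    description = "v18-style analytic sketch: services.exe spawning a process that references a hidden admin share"
    severity = "HIGH"

  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    // ParentImage is services.exe
    $e.principal.process.file.full_path = /services\.exe/ nocase
    // CommandLine references a UNC path to an admin share (ipc$, c$)
    $e.target.process.command_line = /\\\\[^\\]+\\(ipc|c)\$/ nocase

  condition:
    $e
}
```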

 

Putting it into Practice: The Google SecOps Advantage

 

The real power of this update is unlocked when you apply it to a modern detection platform like Google Security Operations. The new ATT&CK Analytics act as the perfect "ingredients" for YARA-L multi-event rules.

A major challenge in detection engineering is writing rules that correlate events over time without generating noise. With v18, you can build your correlation logic using high-fidelity Analytics as your building blocks. Because the new Analytics focus on specific malicious behaviors rather than just generic log sources, they serve as high-confidence signals.

 

Example 1: Persistence Followed by Credential Dumping

Here is how you can translate the new ATT&CK structure directly into a Google SecOps multi-event rule. Imagine an adversary who achieves persistence via a scheduled task and then, minutes later, attempts to dump credentials.

rule v18_Correlation_Persistence_followed_by_CredDump {

  meta:
    description = "Correlates two v18-style behaviors: suspicious task creation followed by LSASS access."
    author = "Detection Team"
    severity = "CRITICAL"
    reference = "MITRE ATT&CK v18 - T1053, T1003"

  events:
    // EVENT 1: The 'Setup' (derived from ATT&CK Analytic: Malicious Task Creation)
    // Instead of just looking for 'schtasks.exe', we look for the specific BEHAVIOR
    $setup.metadata.event_type = "PROCESS_LAUNCH"
    $setup.target.process.file.full_path = /schtasks\.exe/ nocase
    // The Analytic logic: task creation involving encoded commands or non-standard paths
    (
      $setup.target.process.command_line = /-enc/ or
      $setup.target.process.command_line = /AppData/
    )

    // EVENT 2: The 'Execution' (derived from ATT&CK Analytic: LSASS Access)
    $attack.metadata.event_type = "PROCESS_OPEN"
    $attack.target.process.file.full_path = /lsass\.exe/ nocase

    // Join the events on the same compromised host
    $setup.principal.hostname = $hostname
    $attack.principal.hostname = $hostname

    // Ensure the persistence happened BEFORE the credential dump
    $setup.metadata.event_timestamp.seconds < $attack.metadata.event_timestamp.seconds

  match:
    // Group these events over a 30-minute window
    $hostname over 30m

  condition:
    // Fire only if BOTH distinct behaviors occur on the same host
    $setup and $attack
}

 

Example 2: Lateral Movement via Remote Services

Let’s try a different scenario. A common adversary technique is to disable security tools and then immediately move laterally. This rule correlates Defense Evasion (T1562) with Lateral Movement (T1021).

rule v18_Correlation_DefenseEvasion_to_LateralMovement {

  meta:
    description = "Detects tampering with AV/EDR followed by lateral movement attempts."
    author = "Detection Team"
    severity = "HIGH"

  events:
    // EVENT 1: Defense Evasion (Analytic: Security Service Tampering)
    $tamper.metadata.event_type = "PROCESS_LAUNCH"
    // Generic logic for stopping services or killing processes
    (
      $tamper.target.process.command_line = /stop/ or
      $tamper.target.process.command_line = /taskkill/
    )
    // Targeting known security vendors or standard Windows Defender processes
    (
      $tamper.target.process.command_line = /MsMpEng/ or
      $tamper.target.process.command_line = /Sysmon/ or
      $tamper.target.process.command_line = /CarbonBlack/
    )

    // EVENT 2: Lateral Movement (Analytic: Service Execution via PsExec-style behavior)
    $lateral.metadata.event_type = "PROCESS_LAUNCH"
    // PsExec-style tools often launch their payloads via services.exe
    $lateral.principal.process.file.full_path = /services\.exe/
    // Looking for cmd.exe or powershell.exe spawned directly by services.exe
    (
      $lateral.target.process.file.full_path = /cmd\.exe/ or
      $lateral.target.process.file.full_path = /powershell\.exe/
    )
    // Often executes from hidden admin shares
    $lateral.target.process.command_line = /c\$/

    // Correlation: same host
    $tamper.principal.hostname = $hostname
    $lateral.principal.hostname = $hostname

    // Sequence: tampering must happen before the lateral movement
    $tamper.metadata.event_timestamp.seconds < $lateral.metadata.event_timestamp.seconds

  match:
    $hostname over 15m

  condition:
    $tamper and $lateral
}

 

Leveling Up: Building Composite Rules with v18

 

While Multi-Event rules (like the examples above) are powerful, they can become unwieldy if you try to combine too many diverse techniques into a single logic block. MITRE v18’s modular nature encourages a different approach: Composite Rules.

Because v18 breaks detection down into small, atomic "Analytics," you can write simple, single-behavior rules for each Analytic (e.g., one rule for AN1234, another for AN5678). These individual rules might be "Low" severity on their own.

You can then write a Composite Rule in Google SecOps that listens for the detections generated by those smaller rules. This allows you to detect a narrative arc—an attack chain—without writing a monster query that touches raw logs.

 

Example 3: The "Triple Chain" Composite Rule

In this scenario, we assume you have already deployed three simple rules based on MITRE v18 Analytics:

  1. Rule A (Initial Access): Detects Office apps spawning command shells (based on Strategy DET-001).
  2. Rule B (Execution): Detects PowerShell downloading content (based on Strategy DET-002).
  3. Rule C (Persistence): Detects Scheduled Task creation (based on Strategy DET-003).

We will now write a Composite Rule that only fires if it sees all three of these detections occur in sequence on the same host within one hour.

rule v18_Composite_Attack_Chain_Office_to_Persistence {

  meta:
    description = "Detects a full kill chain by correlating three distinct MITRE v18 Analytic rules."
    author = "Detection Team"
    severity = "CRITICAL"
    // This rule doesn't look at raw logs; it looks at your other alerts.
    rule_type = "COMPOSITE"

  events:
    // STEP 1: Initial Access (Office spawning PowerShell)
    // We reference the rule by name or ID.
    $d1.metadata.event_type = "DETECTION"
    $d1.detection.detection.rule_name = "MITRE_v18_Office_Spawn_Shell"
    $d1.detection.detection.outcomes["hostname"] = $hostname

    // STEP 2: Execution (PowerShell download cradle)
    $d2.metadata.event_type = "DETECTION"
    $d2.detection.detection.rule_name = "MITRE_v18_PowerShell_Download"
    $d2.detection.detection.outcomes["hostname"] = $hostname

    // STEP 3: Persistence (Scheduled Task)
    $d3.metadata.event_type = "DETECTION"
    $d3.detection.detection.rule_name = "MITRE_v18_Suspicious_Task_Creation"
    $d3.detection.detection.outcomes["hostname"] = $hostname

    // SEQUENCE LOGIC: enforce the order of operations
    // Initial Access must happen before Execution
    $d1.metadata.event_timestamp.seconds < $d2.metadata.event_timestamp.seconds
    // Execution must happen before Persistence
    $d2.metadata.event_timestamp.seconds < $d3.metadata.event_timestamp.seconds

  match:
    // Group all detections by the victim hostname over a 1-hour window
    $hostname over 1h

  condition:
    // Fire only if we see the full story
    $d1 and $d2 and $d3
}

 

Beyond Individual Detections: Finding the Chains with TIE and Flow

 

Writing composite rules is powerful, but it raises a difficult question: which techniques should you chain together? Randomly guessing that "Technique A" might follow "Technique B" is inefficient. To solve this, detection engineers can leverage two cutting-edge projects from the MITRE Center for Threat-Informed Defense.

 

1. Technique Inference Engine (TIE)

The Technique Inference Engine (TIE) acts as a recommender system for your detection strategy. By analyzing thousands of CTI reports, TIE calculates the probability of one technique following another.

  • How to use it: If you have a solid detection for "Phishing (T1566)," you can query TIE to see what techniques statistically follow it. TIE might reveal that "Command and Scripting Interpreter (T1059)" is the most likely next step. You can then prioritize building a Composite Rule that links these two specific detections.
  • Project Link: Technique Inference Engine

 

2. Attack Flow

While the standard ATT&CK Matrix is excellent for categorizing individual actions, it flattens the dimension of time. Attack Flow introduces a data model for describing sequences of adversary behavior.

  • How to use it: Attack Flow allows you to visualize and share complex attack paths. Instead of viewing a static list of techniques, you can map out the logic gates of an attack (e.g., "IF the adversary fails to elevate privileges, THEN they attempt lateral movement via SMB"). These flows serve as the direct blueprints for your Google SecOps Composite Rules.
  • Project Link: Attack Flow

By combining the structural clarity of v18 Analytics with the predictive insights of TIE and the sequence modeling of Attack Flow, you can move from detecting isolated blips to detecting entire campaigns.

 

The Verdict: Operationalizing the Shift

 

For Google SecOps users, MITRE v18 is more than just a documentation update—it is a force multiplier for the Security Operations Center. By formalizing "Analytics" as a first-class citizen of the framework, MITRE has effectively outsourced the R&D phase of detection engineering.

This shift impacts our daily workflows in three critical ways:

  1. Velocity: Analysts and engineers can spend less time debating what to look for and more time tuning how to look for it. The logic is "pre-chewed," allowing junior engineers to implement complex detections that previously required senior-level research.
  2. Standardization: The introduction of unique Analytic IDs (e.g., AN1234) creates a common language. When sharing logic across the Google SecOps community or between different business units, we can now reference specific logic blocks rather than abstract techniques.
  3. Future-Proofing: As adversaries evolve, MITRE will update these analytics. By mapping your YARA-L rules to specific Analytic IDs, you create a system where your detection coverage can be audited and upgraded systematically, rather than relying on ad-hoc reviews.
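Mapping is as lightweight as tagging each rule's meta section with the Strategy and Analytic identifiers it implements. The IDs below are the placeholder values used earlier in this post, not real v18 identifiers:

```
rule MITRE_v18_Suspicious_Task_Creation {
  meta:
    description = "Single-behavior rule tagged with its v18 lineage for systematic coverage audits"
    severity = "LOW"
    // Placeholder IDs; substitute the real Strategy/Analytic IDs from the v18 dataset
    mitre_technique = "T1053.005"
    mitre_detection_strategy = "DET-003"
    mitre_analytic = "AN1234"

  events:
    $e.metadata.event_type = "PROCESS_LAUNCH"
    $e.target.process.file.full_path = /schtasks\.exe/ nocase
    $e.target.process.command_line = /\/create/ nocase

  condition:
    $e
}
```

A coverage audit then becomes a query over rule metadata: list the Analytic IDs you implement, diff against the latest v18 dataset, and the gaps fall out mechanically.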

 
