Solved

Microsoft Azure Sentinel Incident Connector v2- Error Log

  • October 3, 2025
  • 2 replies
  • 72 views

VictorSOAR

Hi All,

We are observing recurring error logs in the Microsoft Azure Sentinel Incident Connector v2:

“Failed to fetch backlog incident 37014. Error: Incident with number 37014 was not found.”

This error is being logged continuously, with hundreds of entries generated within a 15-minute window. The issue persists as of today, 3rd October.

Background on Incident 37014:

  • The incident was triggered and closed on 13th July 2025 in Sentinel.
  • However, it was not ingested into SOAR, and the reason for this is unclear.
  • Notably, adjacent incidents such as 37013 and 37015 were successfully ingested and corresponding SOAR cases were created.

Please advise on how we can proceed with resolving or suppressing this error.

Best answer by kentphelps

I would recommend opening a support case here. They can help perform the surgery needed on the connector service and its state file.

2 replies

kentphelps
Community Manager
  • Answer
  • October 5, 2025

I would recommend opening a support case here. They can help perform the surgery needed on the connector service and its state file.


Heliosfloresempirellc43

This error typically occurs because the Microsoft Azure Sentinel Incident Connector v2 has a pointer or a "checkpoint" stuck on a specific Incident ID that it can no longer find in the workspace. Since the connector is designed to be stateful to ensure no incidents are missed, it will continuously retry fetching the "missing" ID until the state is cleared or the item is found.
Here is the breakdown of why this is happening and how to resolve the loop:
1. The "Ghost Incident" Logic
The connector maintains a backlog queue. If an incident (in this case, 37014) was deleted, moved to a different workspace, or if the retention policy purged it while the connector was offline, the connector still thinks it has "unfinished business" with that specific ID.
Because the API returns a 404 Not Found instead of a success code, the connector does not move the pointer forward to incident 37015; it stays stuck on 37014, causing the rapid-fire log entries you are seeing.
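The stuck-pointer behavior described above can be sketched in a few lines. This is a minimal simulation (all names are hypothetical, not the connector's actual internals): the checkpoint only advances on a successful fetch, so a permanently missing ID is retried forever and logs the same error on every pass.

```python
# Simulated incident store: 37014 was deleted/never ingested, so the API
# "returns a 404" for it while its neighbors exist.
EXISTING_INCIDENTS = {37013, 37015}

def fetch_incident(incident_id):
    """Stand-in for the Sentinel API call; None models a 404 Not Found."""
    return {"id": incident_id} if incident_id in EXISTING_INCIDENTS else None

def poll_backlog(checkpoint, attempts):
    """Advance the checkpoint only when the fetch succeeds."""
    errors = []
    for _ in range(attempts):
        incident = fetch_incident(checkpoint + 1)
        if incident is None:
            errors.append(f"Failed to fetch backlog incident {checkpoint + 1}")
            continue  # checkpoint is NOT moved forward on failure
        checkpoint = incident["id"]
    return checkpoint, errors

checkpoint, errors = poll_backlog(37013, attempts=5)
# checkpoint never moves past 37013; every attempt logs the same 37014 error
```

This is why the error volume is so high: each polling cycle re-fails on the same ID rather than backing off or skipping it.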
2. Common Root Causes
 * Manual Deletion: Someone manually deleted the incident from the Azure Sentinel "Incidents" blade.
 * Workspace Migration: The logs were moved, but the connector configuration is still pointing to the old metadata.
 * Permission Changes: The Service Principal or Managed Identity used by the connector lost "Sentinel Reader" or "Sentinel Contributor" permissions on the specific resource group where that incident lived.
3. Recommended Resolution Steps
Option A: Clear the Connector State (The "Hard Reset")
Most Sentinel connectors use a Storage Account or a Function App state to remember where they left off.
 * Navigate to the Logic App or Function powering the connector.
 * Locate the Trigger History.
 * Look for the "State" or "Checkpoint" variable. You may need to manually update the "High Water Mark" (the last processed Incident ID) to 37015 to force the connector to skip the broken record.
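As a rough illustration of the high-water-mark bump, here is a sketch assuming the state is a JSON file with a `last_processed_incident` field (a hypothetical layout; the real connector's state format may differ, so inspect it before editing):

```python
import json
import os
import tempfile

def bump_high_water_mark(state_path, new_mark):
    """Raise the last-processed incident ID so the next poll skips past it."""
    with open(state_path) as f:
        state = json.load(f)
    if state.get("last_processed_incident", 0) < new_mark:
        state["last_processed_incident"] = new_mark
    tmp_path = state_path + ".tmp"
    with open(tmp_path, "w") as f:
        json.dump(state, f)
    os.replace(tmp_path, state_path)  # atomic rename: a crash can't half-write the file
    return state["last_processed_incident"]

# Demo on a throwaway file: the connector was stuck waiting for 37014,
# so mark 37014 as processed and the next fetch will ask for 37015.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "state.json")
    with open(path, "w") as f:
        json.dump({"last_processed_incident": 37013}, f)
    mark = bump_high_water_mark(path, 37014)
```

Whatever the actual storage is (blob, table, or file), the same two rules apply: only ever move the mark forward, and write it atomically so a crash mid-edit cannot corrupt the state.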
Option B: Re-authenticate the Connector
Sometimes the connector's cached authentication or metadata becomes stale.
 * Go to Microsoft Sentinel > Data Connectors.
 * Open the Microsoft Azure Sentinel Incident v2 page.
 * Disconnect the connector and wait 5 minutes.
 * Reconnect it. This often forces a refresh of the internal metadata and can clear the backlog queue.
Option C: Check Audit Logs
Search your Azure Activity Logs for the specific timeframe when Incident 37014 was first generated. Look for "Delete Incident" or "Update Incident" actions to see if a specific user or automated playbook removed the record that the connector is now looking for.
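The Activity Log search in Option C amounts to filtering records for delete/write operations on that incident's resource ID. A small sketch, assuming the records have already been exported as dicts (the field names and values below are illustrative assumptions modeled loosely on Activity Log entries, not real data):

```python
# Hypothetical exported records; field names are assumptions for illustration.
records = [
    {"operationName": "Microsoft.SecurityInsights/incidents/write",
     "resourceId": ".../incidents/37013", "caller": "analyst@example.com"},
    {"operationName": "Microsoft.SecurityInsights/incidents/delete",
     "resourceId": ".../incidents/37014", "caller": "cleanup-playbook"},
]

def find_incident_changes(records, incident_id):
    """Return delete/write operations that touched the given incident ID."""
    suffix = f"/incidents/{incident_id}"
    return [
        r for r in records
        if r["resourceId"].endswith(suffix)
        and r["operationName"].rsplit("/", 1)[-1] in ("delete", "write")
    ]

hits = find_incident_changes(records, 37014)
```

The `caller` field on any matching delete record tells you whether a user or an automated playbook removed the incident the connector is still chasing.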
4. Suppression
If you cannot immediately fix the connector state, you can temporarily apply a Log Analytics transformation or a filter in your monitoring tool to exclude strings containing “Failed to fetch backlog incident 37014”. This will stop the noise in your logs while you perform the reset.
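If your monitoring pipeline lets you drop lines in code, the suppression can be as simple as a pinned-ID filter. A sketch (the surrounding pipeline is assumed; only the matching logic is shown), with the incident number pinned so unrelated backlog errors still surface:

```python
import re

# Pin the exact incident ID: only this known-bad record is suppressed,
# so a *new* stuck ID would still show up in the logs.
SUPPRESS = re.compile(r"Failed to fetch backlog incident 37014\b")

def drop_known_noise(log_lines):
    return [line for line in log_lines if not SUPPRESS.search(line)]

sample = [
    "Failed to fetch backlog incident 37014. Error: Incident with number 37014 was not found.",
    "Failed to fetch backlog incident 38001. Error: timeout.",
    "Connector heartbeat OK.",
]
clean = drop_known_noise(sample)
```

Treat this as a stopgap only; the underlying state still needs the reset described above, and the filter should be removed once the pointer moves past 37014.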
Would you like me to help you write a KQL query to identify if there are other missing incident IDs in your logs?