Question

Ingest Microsoft Intune Logs

  • March 22, 2026
  • 0 replies
  • 11 views


Microsoft Intune Context Log Feed — ingesting via GCS instead of Azure Blob Storage?

Product: Google SecOps (Chronicle)
Log type: Microsoft Intune Context Log Feed

Hi everyone,

Following up on a previous thread where @hzmndt confirmed that cloud storage ingestion is the correct method for Intune Context logs, and that there's an open feature request for Third Party API support (https://issuetracker.google.com/issues/369896058).

We've implemented exactly that: a Cloud Function that collects Intune data and writes it to Google Cloud Storage, where a SecOps feed picks it up. This was a deliberate choice, because we have no Azure infrastructure and want to keep the pipeline entirely within GCP.
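For context, the shape of our pipeline is roughly the sketch below (not our production code; the bucket name, object layout, and helper names are illustrative, and the Intune fetch step is omitted):

```python
# Sketch of the Cloud Function: serialize collected Intune objects and
# write them to a GCS bucket that the SecOps cloud storage feed watches.
import json
from datetime import datetime, timezone

BUCKET = "secops-intune-feed"   # hypothetical bucket name
PREFIX = "intune/context"       # hypothetical object prefix

def to_ndjson(records: list[dict]) -> str:
    """One JSON object per line, which cloud storage feeds handle well."""
    return "\n".join(json.dumps(r, separators=(",", ":")) for r in records)

def object_name(now: datetime) -> str:
    """Time-partitioned object names keep each feed poll incremental."""
    return f"{PREFIX}/{now:%Y/%m/%d}/devices-{now:%H%M%S}.json"

def upload(records: list[dict]) -> None:
    # Imported here so the helpers above stay testable without GCS credentials.
    from google.cloud import storage
    client = storage.Client()
    blob = client.bucket(BUCKET).blob(object_name(datetime.now(timezone.utc)))
    blob.upload_from_string(to_ndjson(records), content_type="application/json")
```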

The setup works, and we currently maintain a custom parser to handle the data. However, when we asked Google Support for guidance on aligning our log format with the default "Microsoft Intune Context" parser schema, the recommendation was to switch to the standard Azure Blob Storage V2 method, which would require us to introduce Azure components solely for this purpose.

Since the official documentation (https://docs.cloud.google.com/chronicle/docs/ingestion/default-parsers/azure-mdm-intune-context) only covers the Azure Blob Storage path, we're looking for input on two points:

  1. Has anyone used GCS (instead of Azure Blob Storage) as the cloud storage method for Intune Context logs and successfully mapped them to the default parser? SecOps shouldn't care where the blob comes from, but the JSON structure inside needs to match.

  2. Can anyone share the JSON schema the default parser expects? We believe it follows the Azure Monitor Diagnostic Settings common log format (the {"records": [...]} envelope with fields like time, resourceId, operationName, category, properties), but confirmation or a sample payload would be very helpful.
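To make point 2 concrete, this is the transformation we'd write if our guess about the envelope is right. Everything here is unverified against the default parser: the common-schema field names come from Azure Monitor's documentation, and the resourceId, operationName, and category values are placeholders.

```python
# Wrap raw Intune objects in the assumed Azure Monitor diagnostic-settings
# envelope: {"records": [...]} with per-record common-schema fields.
import json
from datetime import datetime, timezone

def wrap_records(devices: list[dict]) -> str:
    records = [
        {
            "time": datetime.now(timezone.utc).isoformat(),
            "resourceId": "/tenants/<tenant-id>/providers/Microsoft.Intune",  # placeholder
            "operationName": "DeviceInventory",  # placeholder
            "category": "Devices",               # placeholder
            "properties": device,                # the raw Intune object
        }
        for device in devices
    ]
    return json.dumps({"records": records})
```

If someone can confirm the actual field set the default parser keys on, we'd adjust the envelope above and drop our custom parser.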

We're happy maintaining a custom parser if necessary, but if the schema is known, we'd rather transform our data to match the default parser and reduce that maintenance burden.

Thanks in advance for any insights!