DMARC reporting 101: how to read aggregate XML reports and take action

Tags: dmarc, tutorial, spf, dkim

At some point every DMARC rollout hits this exact moment:

"The DNS record is published... now what?"

The answer is reporting: specifically, aggregate DMARC reports (rua), delivered as XML files.

This is where DMARC stops being a checkbox and becomes operations: figuring out who is sending as example.com, what is passing, what is failing, and what should be fixed before policy gets stricter.

If the core DMARC model still feels fuzzy, start with What is DMARC?, then come back.

First, what DMARC reporting actually gives you

An aggregate report is not a copy of your messages.

It is a structured summary from a receiver saying, in effect:

  • "These IPs sent mail claiming your domain"
  • "This many messages came from each source"
  • "SPF and DKIM passed or failed with these identities"
  • "DMARC passed or failed, and this would be the disposition"

So think in terms of grouped telemetry, not inbox-level logs.

RUA and RUF, in plain terms

You'll see two reporting tags in DMARC docs:

  • rua: aggregate reporting (XML batches)
  • ruf: failure/forensic reporting (message-level samples)

For almost all teams, RUA is the one that matters daily.

Info: DMARCPal intentionally focuses on rua aggregate reporting and does not use ruf tags. This keeps workflows privacy-friendly and simple, and reflects real-world usage, where ruf is uncommon.

If a refresher on tags is useful, DMARC record tag cookbook covers rua, ruf, and related options in detail.

The XML sections worth learning

The raw XML can look busy, but nearly every report boils down to three parts:

  1. report_metadata - who generated the report and for what time window
  2. policy_published - DMARC policy context used by the receiver
  3. record - grouped observations (source IP + counts + auth outcomes)

That is enough to triage most real-world issues.
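Those three sections can be pulled out with nothing more than Python's standard library. A minimal sketch, using a hypothetical, heavily trimmed report string for illustration (real reports carry many more fields):

```python
# Sketch: extract the three key sections of a DMARC aggregate report
# using only the standard library. The XML below is a hypothetical,
# trimmed-down sample, not a complete report.
import xml.etree.ElementTree as ET

report_xml = """
<feedback>
  <report_metadata>
    <org_name>example-receiver.net</org_name>
    <report_id>12345</report_id>
    <date_range><begin>1700000000</begin><end>1700086400</end></date_range>
  </report_metadata>
  <policy_published>
    <domain>example.com</domain>
    <p>none</p>
  </policy_published>
  <record>
    <row>
      <source_ip>192.0.2.44</source_ip>
      <count>85</count>
    </row>
  </record>
</feedback>
"""

root = ET.fromstring(report_xml)
meta = root.find("report_metadata")        # who, when
policy = root.find("policy_published")     # policy context
records = root.findall("record")           # one entry per grouped source

print(meta.findtext("org_name"))   # example-receiver.net
print(policy.findtext("p"))        # none
print(len(records))                # 1
```

Looping over `records` and tallying counts per source IP is the core of every DMARC report tool.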

report_metadata: reporting window and source

This section answers:

  • Which provider sent this report (org_name)
  • Which batch ID it has (report_id)
  • Which period it covers (date_range begin/end)

Don't overthink timestamp precision. Different receivers batch on slightly different schedules.
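One practical detail: the date_range values are Unix timestamps in seconds, so converting them makes the reporting window readable at a glance. A small sketch with hypothetical values:

```python
# Sketch: DMARC date_range begin/end values are Unix timestamps
# (seconds, UTC). The numbers here are hypothetical sample values.
from datetime import datetime, timezone

begin, end = 1700000000, 1700086400   # from <date_range> begin/end

start = datetime.fromtimestamp(begin, tz=timezone.utc)
stop = datetime.fromtimestamp(end, tz=timezone.utc)

print(start.isoformat(), "->", stop.isoformat())
print("window hours:", (stop - start).total_seconds() / 3600)  # 24.0
```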

policy_published: DMARC policy context

Key fields here:

  • domain
  • p, sp
  • adkim, aspf
  • pct

Important detail: this is policy as interpreted by the receiver at evaluation time.

If strict vs relaxed alignment still causes confusion, this practical deep dive helps: DMARC alignment: From vs Return-Path (strict vs relaxed).

record: where decisions happen

Most of the useful work is in each record block:

  • row.source_ip and row.count
  • row.policy_evaluated (disposition, spf, dkim)
  • identifiers.header_from
  • auth_results (which domains SPF and DKIM authenticated)

Small simplified example:

<record>
  <row>
    <source_ip>192.0.2.44</source_ip>
    <count>85</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>example.com</domain>
      <result>pass</result>
    </dkim>
    <spf>
      <domain>mail.example.net</domain>
      <result>pass</result>
    </spf>
  </auth_results>
</record>

Read that as:

  • 85 messages came from 192.0.2.44
  • DKIM aligned and carried DMARC pass
  • SPF passed for the transport identity (mail.example.net), but it did not align with header_from=example.com

This exact pattern is common, and it explains many "SPF passed but DMARC failed" conversations.
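The pass-but-unaligned pattern can be sketched in a few lines. This is a simplification: alignment here is reduced to an exact or subdomain suffix match, whereas real relaxed alignment compares organizational domains (which requires Public Suffix List logic). The domains mirror the example record above:

```python
# Hedged sketch of DMARC evaluation for the record above. Alignment is
# simplified to exact / subdomain suffix matching; real relaxed
# alignment compares organizational domains via the Public Suffix List.

def aligned(auth_domain: str, header_from: str) -> bool:
    auth_domain, header_from = auth_domain.lower(), header_from.lower()
    return auth_domain == header_from or auth_domain.endswith("." + header_from)

header_from = "example.com"
dkim_domain, dkim_result = "example.com", "pass"       # from auth_results
spf_domain, spf_result = "mail.example.net", "pass"    # transport identity

dkim_ok = dkim_result == "pass" and aligned(dkim_domain, header_from)
spf_ok = spf_result == "pass" and aligned(spf_domain, header_from)

# DMARC passes if EITHER mechanism passes AND aligns.
print("DMARC pass:", dkim_ok or spf_ok)   # True, carried by DKIM alone
```

Here SPF genuinely passed, but for mail.example.net, so it contributes nothing to the DMARC verdict; DKIM alignment carries the pass.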

Common fields that deserve attention first

When opening reports, focus on these before anything else:

  • source_ip and count to prioritize by volume
  • header_from to confirm claimed identity
  • DKIM domain + result
  • SPF domain + result
  • policy_evaluated.disposition to see enforcement impact

Everything else can wait until those are clear.

Turning reports into actions (the part that matters)

A practical loop that works:

  1. Build a sender inventory from report sources and volumes.
  2. Label each source as known legitimate, unknown, or suspicious.
  3. For legitimate senders, document whether DMARC pass depends on SPF alignment, DKIM alignment, or both.
  4. Fix fragile senders first (for example, senders that pass through only one path and frequently break).
  5. Tighten policy only after reports are boring for a sustained period.

"Boring" here means predictable sources, stable pass rates, and no surprise high-volume failures.

If you are still in monitoring mode, DMARC setup stage 5 - monitoring maps nicely to this workflow.

Why reports can be missing (or feel inconsistent)

A few recurring causes:

  • Syntax errors in the DMARC TXT record
  • rua mailbox rejects large attachments
  • DNS or mail routing problems at the report destination domain
  • Assumption that every provider sends reports daily (not guaranteed)
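The first cause on that list is easy to pre-empt with a basic sanity check before publishing. A rough sketch (the helper name is hypothetical; a real validator should follow the full record grammar in RFC 7489):

```python
# Rough sanity check for a DMARC TXT record string. Hypothetical helper;
# a real validator should implement the full RFC 7489 grammar.
import re

def looks_like_dmarc(record: str) -> bool:
    # Must start with the v=DMARC1 version tag...
    if not record.strip().lower().startswith("v=dmarc1"):
        return False
    # ...and contain a p= tag with one of the three valid policies.
    return re.search(r"\bp\s*=\s*(none|quarantine|reject)\b", record, re.I) is not None

print(looks_like_dmarc("v=DMARC1; p=none; rua=mailto:reports@example.com"))  # True
print(looks_like_dmarc("v=DMARC1; rua=mailto:reports@example.com"))          # False, p= is missing
```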

Provider documentation is also useful for reality checks around sender requirements and authentication behavior.

A quick reassurance before moving on

The XML format is machine-oriented, so feeling overwhelmed on first contact is normal.

What helps is treating each report as a table of identities and outcomes, not as "one giant XML file." Once that mental model clicks, DMARC operations become pretty systematic: inventory, align, then enforce.

And when the time comes to move from p=none to stronger policy, reports will already show exactly what could break and what is safe.
