DMARC aggregate reports (RUA) explained: how to read the XML and take action


DMARC reporting is where the whole thing stops being theoretical.

If DMARC is the policy you publish in DNS, DMARC reports are the receipts: mailbox providers telling you what they observed when mail claiming to be from example.com hit their systems.

Those reports are what let you answer questions like:

  • "Who is sending as us?"
  • "Which sources are properly authenticated and aligned?"
  • "What will break if policy moves from monitoring to enforcement?"

If the general DMARC model is still fuzzy, it helps to read What is DMARC? first, then come back here.

RUA vs RUF (and what most people actually use)

There are two DMARC reporting types:

  • Aggregate reports (RUA): periodic summaries, delivered as XML. This is the workhorse.
  • Forensic/failure reports (RUF): per-message samples when something fails. In practice, these are rare and increasingly constrained by privacy choices at receivers.

This post is about RUA.

Note: DMARC's ruf tag is intentionally not supported by DMARCPal. This keeps reporting privacy-friendly and simple, and it matches the reality that ruf is almost never used at scale.

What an aggregate report is (in one sentence)

An aggregate report is a batch of rows like: "messages from IP 192.168.42.17 claiming to be example.com — here's how SPF and DKIM evaluated, whether they aligned with From:, and what DMARC disposition would apply."

It's not a copy of your emails. It's a structured summary.

The shape of the XML (the parts worth learning)

Most RUA XML has three big areas:

1) Who sent the report, and for what time window (report_metadata)

2) What DMARC policy was observed for the domain (policy_published)

3) The actual observations (record entries)

Don't worry about memorizing tag names. The important thing is understanding what each section represents.

1) report_metadata: the “envelope” of the report

This is where the receiver says:

  • which organization generated the report (org_name)
  • which report ID ties the batch together (report_id)
  • the time range (date_range begin/end)

That time range is useful when comparing to sending logs. One gotcha: some reporters batch on their own schedule, so "daily" might mean "roughly daily".

2) policy_published: the DMARC record as the receiver interpreted it

This section includes the policy knobs receivers used as context:

  • domain: the organizational domain the policy applies to
  • adkim / aspf: alignment mode (relaxed or strict)
  • p / sp: policy for the domain and subdomains
  • pct: percentage requested for enforcement sampling

If adkim/aspf and alignment still sound abstract, DMARC alignment explained (From vs Return-Path) is the practical version.

If p, sp, and pct are the tags that keep causing second-guessing, DMARC record tag cookbook is a handy reference.

3) record: where the action is

Each record is usually a grouped observation: "this source IP sent N messages".

Inside it, three sub-parts matter most:

  • row: the DMARC evaluation summary for that group
  • identifiers: the domain in the visible From: header (header_from)
  • auth_results: what SPF and DKIM reported (including which domains they authenticated)

Here is a deliberately small, simplified example (real reports are noisier):

<record>
  <row>
    <source_ip>192.168.42.17</source_ip>
    <count>120</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>

  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>

  <auth_results>
    <dkim>
      <domain>example.com</domain>
      <selector>s1</selector>
      <result>pass</result>
    </dkim>
    <spf>
      <domain>bounces.example.net</domain>
      <scope>mfrom</scope>
      <result>pass</result>
    </spf>
  </auth_results>
</record>

Reading that in plain language:

  • Mail came from 192.168.42.17 (120 messages).
  • For DMARC purposes, DKIM was evaluated as passing and SPF as failing.
  • The visible From: was example.com.
  • DKIM authenticated example.com and passed, so DMARC can pass via DKIM alignment.
  • SPF authenticated bounces.example.net (the 5321.MailFrom / Return-Path domain), which may pass SPF, but would not align with example.com.

That last bullet is one of the most common "wait, what?" moments in DMARC operations.
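The record above can be read mechanically, too. Here is a minimal sketch using Python's standard library to pull out the fields discussed; it assumes the simplified structure shown above (real reports wrap many such records in a larger feedback document, and the helper name is ours, not part of any spec):

```python
# Sketch: extract the key fields from one simplified RUA <record>
# using only the standard library.
import xml.etree.ElementTree as ET

RECORD_XML = """
<record>
  <row>
    <source_ip>192.168.42.17</source_ip>
    <count>120</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
</record>
"""

def summarize_record(xml_text: str) -> dict:
    rec = ET.fromstring(xml_text)
    return {
        "source_ip": rec.findtext("row/source_ip"),
        "count": int(rec.findtext("row/count")),
        "disposition": rec.findtext("row/policy_evaluated/disposition"),
        "dkim": rec.findtext("row/policy_evaluated/dkim"),
        "spf": rec.findtext("row/policy_evaluated/spf"),
        "header_from": rec.findtext("identifiers/header_from"),
    }

print(summarize_record(RECORD_XML))
```

The point is not this particular helper, but that every record reduces to a small, regular set of fields: source, volume, evaluation, and identity.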

The one skill that makes DMARC reports useful: thinking in identities

Every message has multiple domain identities involved:

  • Visible identity: the domain in From: (what the human sees)
  • SPF identity: the domain used for the SMTP envelope sender (Return-Path / 5321.MailFrom)
  • DKIM identity: the d= domain in the DKIM signature

DMARC passes when the visible identity aligns with at least one authenticated identity (SPF or DKIM).

So when a report row looks confusing, the fix is usually not "tune the DMARC record". The fix is usually "make the authenticated identity match the visible one".
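The alignment rule itself is small enough to sketch. Note one loud assumption: real implementations determine the organizational domain using the Public Suffix List, while this sketch naively takes the last two labels (which is wrong for suffixes like co.uk):

```python
# Sketch of the DMARC alignment rule. Assumption: org_domain() is a naive
# stand-in for a real Public Suffix List lookup.

def org_domain(domain: str) -> str:
    # naive: last two labels; a real check needs the Public Suffix List
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def aligned(header_from: str, auth_domain: str, mode: str = "r") -> bool:
    if mode == "s":  # strict: exact domain match
        return header_from.lower() == auth_domain.lower()
    # relaxed: same organizational domain
    return org_domain(header_from) == org_domain(auth_domain)

def dmarc_passes(header_from, spf_domain, spf_ok, dkim_domain, dkim_ok,
                 aspf="r", adkim="r"):
    # DMARC passes when at least one authenticated identity aligns
    return (spf_ok and aligned(header_from, spf_domain, aspf)) or \
           (dkim_ok and aligned(header_from, dkim_domain, adkim))

# The scenario from the XML example: SPF passed for bounces.example.net
# (not aligned), DKIM passed for example.com (aligned) -> DMARC pass.
print(dmarc_passes("example.com", "bounces.example.net", True,
                   "example.com", True))
```

Run against the earlier record, this makes the "SPF passed but didn't count" situation concrete: the SPF identity authenticated fine, it just wasn't the identity the human sees.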

Turning DMARC reports into actions (a practical loop)

Here is the pattern that works in real domains.

1) Build a sender inventory

From the report data, list:

  • every source IP / sending service
  • what header_from domain they claim
  • whether DMARC passes, and if so via SPF or DKIM

This is why DMARC monitoring exists. It's not just to see attacks; it's to discover legitimate-but-unknown senders before enforcement.

If you're following a staged approach, DMARC setup stage 5 — monitoring lays out the operational rhythm.
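The inventory step is just a fold over parsed rows. A minimal sketch, assuming row dicts shaped like the output of your own parser (the field names here are illustrative, not a library API):

```python
# Sketch: fold parsed report rows into a per-source sender inventory.
from collections import defaultdict

rows = [
    {"source_ip": "192.168.42.17", "count": 120, "dmarc_pass": True,  "via": "dkim"},
    {"source_ip": "192.168.42.17", "count": 15,  "dmarc_pass": False, "via": None},
    {"source_ip": "203.0.113.9",   "count": 3,   "dmarc_pass": False, "via": None},
]

inventory = defaultdict(lambda: {"total": 0, "passed": 0, "via": set()})
for r in rows:
    entry = inventory[r["source_ip"]]
    entry["total"] += r["count"]
    if r["dmarc_pass"]:
        entry["passed"] += r["count"]
        entry["via"].add(r["via"])

for ip, e in sorted(inventory.items()):
    rate = e["passed"] / e["total"]
    print(f"{ip}: {e['total']} msgs, {rate:.0%} pass via {sorted(e['via'])}")
```

Even this toy version surfaces the two interesting shapes: a high-volume source that mostly passes (a real sender with some rough edges) and a tiny all-fail source (likely noise or spoofing).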

2) Triage by “pass but misaligned” vs “failing outright”

Two categories matter most:

  • DMARC pass, but only via one path: these are fragile. One vendor change (or a forwarding path) can flip it.
  • DMARC fail: these are either spoofing, broken configuration, or legitimate senders that need alignment work.

3) For legitimate senders, decide the least risky alignment fix

Common fixes:

  • Enable DKIM signing for that sending system and ensure d=example.com (or aligns under relaxed mode).
  • Adjust the bounce domain (Return-Path / envelope sender) to align with example.com if the provider supports custom bounce domains.
  • Fix SPF authorization (missing includes, wrong IPs), but only after confirming SPF alignment is the path being relied on.

This is where it helps to be very literal about what "alignment" means. The explanation with the two "From" domains (5322 vs 5321) is in DMARC alignment explained (From vs Return-Path).

4) For spoofing, decide whether it’s meaningful

Not every DMARC fail is an emergency.

If a random IP sent 3 messages and failed everything, that's often just opportunistic spoofing that enforcement will handle. If a large volume fails and looks like a legitimate source, that's when to dig deeper.

5) Tighten policy only when the report story is boring

DMARC enforcement is easiest when reports look boring:

  • known sources
  • stable pass rates
  • clear alignment path

If the reports are still surprising, keep monitoring (p=none) while fixing alignment.
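"Boring" can be made semi-objective. A sketch of a readiness check over the inventory from step 1; the 98% threshold is an arbitrary illustration, not an official DMARC number:

```python
# Sketch: decide whether report data looks "boring enough" to tighten policy.
# Assumption: inventory maps source -> {"total": int, "passed": int},
# produced by your own parsing; the threshold is illustrative only.
def ready_to_enforce(inventory: dict, min_pass_rate: float = 0.98) -> bool:
    return all(
        e["passed"] / e["total"] >= min_pass_rate
        for e in inventory.values()
        if e["total"] > 0
    )

print(ready_to_enforce({"203.0.113.9": {"total": 100, "passed": 100}}))  # True
print(ready_to_enforce({"203.0.113.9": {"total": 100, "passed": 60}}))   # False
```

In practice you would also exclude sources you've classified as spoofing before running a check like this, since their failures are exactly what enforcement is for.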

“Why am I not getting reports?” (the usual reasons)

When aggregate reports don't show up, it's almost always one of these:

  • rua isn't present, or the DNS record has a syntax error.
  • Reports are being sent, but the destination mailbox is rejecting them (size limits or filtering on the compressed XML attachments).
  • rua points at a domain that isn't accepting mail (typo, missing MX, strict filtering).
  • Expectations are too strict: not every receiver sends reports, and not every receiver sends them daily.

If the DMARC record needs a quick sanity check, DMARC record tag cookbook includes safe rua patterns.
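For the first item, a quick structural lint can rule out the most common mistake. A sketch that checks a DMARC TXT string for a rua tag with a mailto: URI; this is a lint, not a full record validator:

```python
# Sketch: check that a DMARC TXT record string contains a rua tag
# with at least one mailto: URI. Not a full DMARC record validator.
import re

def has_rua(txt_record: str) -> bool:
    m = re.search(r"(?:^|;)\s*rua\s*=\s*([^;]+)", txt_record, re.IGNORECASE)
    return bool(m) and "mailto:" in m.group(1).lower()

print(has_rua("v=DMARC1; p=none; rua=mailto:dmarc@example.com"))  # True
print(has_rua("v=DMARC1; p=none"))                                # False
```

One more gotcha this won't catch: when rua points at a mailbox in a different domain than the one publishing the policy, the receiving domain has to publish an authorization record, or many receivers will silently skip reporting.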

A small reassurance, because the XML looks intense

The raw XML is verbose, and it's normal to bounce off it the first time.

That's not you being "bad at DMARC". Aggregate reports are intentionally formatted for machines, not humans, because receivers need a consistent schema to automate reporting at scale.

In practice, the fastest way to get value is to let a parser turn the XML into something you can actually triage: sources, volumes, pass/fail rates, and which identity (SPF or DKIM) is carrying DMARC.

Tools like DMARCPal exist for exactly this reason: take machine-generated DMARC aggregates and make them actionable without living inside raw XML all day.

The trick is to stop reading it as "a big XML file" and start reading it as "a table of sending sources and outcomes". Once that clicks, DMARC becomes a manageable operational process: inventory, align, enforce.

If there's interest in going deeper, the next natural step is learning to debug individual failures using real message headers. (That's where Authentication-Results becomes your best friend.)
