DMARC reporting is where the whole thing stops being theoretical.
If DMARC is the policy you publish in DNS, DMARC reports are the receipts: mailbox providers telling you what they observed when mail claiming to be from example.com hit their systems.
Those reports are what let you answer questions like: who is sending mail as your domain, from which IPs and services, whether that mail passes SPF and DKIM, and whether those results align with the visible From: domain.
If the general DMARC model is still fuzzy, it helps to read What is DMARC? first, then come back here.
There are two DMARC reporting types:

- `rua`: aggregate reports — batched XML summaries of authentication results
- `ruf`: failure (forensic) reports — per-message reports sent when a message fails
This post is about RUA.
Note: DMARC's `ruf` tag is intentionally not supported by DMARCPal. This keeps reporting privacy-friendly and simple, and it matches the reality that `ruf` is almost never used at scale.
An aggregate report is a batch of rows like: "messages from IP 192.168.42.17 claiming to be example.com — here's how SPF and DKIM evaluated, whether they aligned with From:, and what DMARC disposition would apply."
It's not a copy of your emails. It's a structured summary.
Most RUA XML has three big areas:
1) Who sent the report, and for what time window (report_metadata)
2) What DMARC policy was observed for the domain (policy_published)
3) The actual observations (record entries)
Don't worry about memorizing tag names. The important thing is understanding what each section represents.
This is where the receiver says:
- `org_name`: which organization generated the report
- `report_id`: a unique identifier for this report
- `date_range` (`begin`/`end`): the reporting window, as Unix timestamps

That time range is useful when comparing to sending logs. One gotcha: some reporters batch on their own schedule, so "daily" might mean "roughly daily".
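To make the fields concrete, here is a minimal sketch of pulling `report_metadata` out of an aggregate report with Python's standard library. The XML snippet and the org name are made up for illustration; real reports carry far more.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

# Hypothetical, heavily trimmed aggregate report for illustration.
REPORT = """<feedback>
  <report_metadata>
    <org_name>mailbox-provider.example</org_name>
    <report_id>8478-1700000000</report_id>
    <date_range><begin>1700000000</begin><end>1700086400</end></date_range>
  </report_metadata>
</feedback>"""

def read_metadata(xml_text):
    """Extract org_name, report_id, and the UTC reporting window."""
    meta = ET.fromstring(xml_text).find("report_metadata")
    begin = int(meta.findtext("date_range/begin"))
    end = int(meta.findtext("date_range/end"))
    return {
        "org": meta.findtext("org_name"),
        "report_id": meta.findtext("report_id"),
        "begin": datetime.fromtimestamp(begin, tz=timezone.utc),
        "end": datetime.fromtimestamp(end, tz=timezone.utc),
    }
```

Converting the epoch timestamps to UTC datetimes makes the "roughly daily" batching visible when you line reports up against sending logs.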
This section includes the policy knobs receivers used as context:
- `domain`: the organizational domain the policy applies to
- `adkim` / `aspf`: alignment mode (relaxed or strict)
- `p` / `sp`: policy for the domain and subdomains
- `pct`: percentage requested for enforcement sampling

If adkim/aspf and alignment still sound abstract, DMARC alignment explained (From vs Return-Path) is the practical version.
If p, sp, and pct are the tags that keep causing second-guessing, DMARC record tag cookbook is a handy reference.
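A quick sketch of reading those knobs out of a `policy_published` block — the fragment below is hypothetical, and the tag set is the common subset rather than everything a receiver might emit:

```python
import xml.etree.ElementTree as ET

# Hypothetical policy_published fragment for illustration.
POLICY = """<policy_published>
  <domain>example.com</domain>
  <adkim>r</adkim>
  <aspf>r</aspf>
  <p>none</p>
  <sp>none</sp>
  <pct>100</pct>
</policy_published>"""

def read_policy(xml_text):
    """Collect the policy knobs the receiver says it observed in DNS."""
    node = ET.fromstring(xml_text)
    return {tag: node.findtext(tag)
            for tag in ("domain", "adkim", "aspf", "p", "sp", "pct")}
```

Comparing this dict against the record you actually publish is a cheap way to spot stale DNS caches or a typo'd record.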
Each record is usually a grouped observation: "this source IP sent N messages".
Inside it, three sub-parts matter most:
- `row`: the DMARC evaluation summary for that group
- `identifiers`: the domain in the visible From: header (`header_from`)
- `auth_results`: what SPF and DKIM reported (including which domains they authenticated)

Here is a deliberately small, simplified example (real reports are noisier):
```xml
<record>
  <row>
    <source_ip>192.168.42.17</source_ip>
    <count>120</count>
    <policy_evaluated>
      <disposition>none</disposition>
      <dkim>pass</dkim>
      <spf>fail</spf>
    </policy_evaluated>
  </row>
  <identifiers>
    <header_from>example.com</header_from>
  </identifiers>
  <auth_results>
    <dkim>
      <domain>example.com</domain>
      <selector>s1</selector>
      <result>pass</result>
    </dkim>
    <spf>
      <domain>bounces.example.net</domain>
      <scope>mfrom</scope>
      <result>pass</result>
    </spf>
  </auth_results>
</record>
```

Reading that in plain language:
- The source was 192.168.42.17 (120 messages).
- The visible From: was example.com.
- DKIM signed for example.com and passed, so DMARC can pass via DKIM alignment.
- SPF evaluated bounces.example.net (the 5321.MailFrom / Return-Path domain), which may pass SPF but would not align with example.com.

That last bullet is one of the most common "wait, what?" moments in DMARC operations.
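The reasoning in those bullets can be sketched as code. This is a simplified relaxed-alignment check, not a full implementation: `org_domain` just takes the last two labels, whereas real parsers consult the Public Suffix List.

```python
def org_domain(domain):
    """Naive organizational-domain guess: last two labels.
    Real implementations use the Public Suffix List; this is only a sketch."""
    return ".".join(domain.lower().split(".")[-2:])

def dmarc_passes(header_from, dkim_results, spf_results):
    """Relaxed alignment: DMARC passes if any passing DKIM d= domain or
    passing SPF domain shares an organizational domain with the From: domain."""
    want = org_domain(header_from)
    dkim_ok = any(r == "pass" and org_domain(d) == want for d, r in dkim_results)
    spf_ok = any(r == "pass" and org_domain(d) == want for d, r in spf_results)
    return dkim_ok or spf_ok

# The record above: DKIM passes and aligns; SPF passes but does not align.
print(dmarc_passes("example.com",
                   dkim_results=[("example.com", "pass")],
                   spf_results=[("bounces.example.net", "pass")]))  # True
```

SPF passing for bounces.example.net contributes nothing here, because example.net is a different organizational domain — exactly the "wait, what?" case.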
Every message has multiple domain identities involved:
- the visible From: domain (what the human sees)
- the Return-Path / 5321.MailFrom domain that SPF checks
- the `d=` domain in the DKIM signature

DMARC passes when the visible identity aligns with at least one authenticated identity (SPF or DKIM).
So when a report row looks confusing, the fix is usually not "tune the DMARC record". The fix is usually "make the authenticated identity match the visible one".
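The difference between relaxed and strict alignment (the `adkim`/`aspf` knobs) can be shown in a few lines. This is a deliberately simplified sketch: relaxed mode is approximated as a subdomain relationship rather than a proper Public Suffix List lookup.

```python
def aligned(from_domain, auth_domain, mode="r"):
    """adkim/aspf alignment sketch: strict ('s') requires an exact match;
    relaxed ('r') accepts a subdomain relationship. Organizational-domain
    detection is simplified to suffix matching for illustration."""
    f, a = from_domain.lower(), auth_domain.lower()
    if mode == "s":
        return f == a
    return f == a or a.endswith("." + f) or f.endswith("." + a)

print(aligned("example.com", "mail.example.com", mode="r"))  # True
print(aligned("example.com", "mail.example.com", mode="s"))  # False
```

This is why "tune the DMARC record" is rarely the fix: flipping `adkim` to strict only makes alignment harder, while signing with an aligned `d=` makes it pass.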
Here is the pattern that works in real domains.
From the report data, list:
- every source IP (or sending service) that appears
- how much volume each one sends
- which `header_from` domain they claim

This is why DMARC monitoring exists. It's not just to see attacks; it's to discover legitimate-but-unknown senders before enforcement.
If you're following a staged approach, DMARC setup stage 5 — monitoring lays out the operational rhythm.
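Building that inventory is mostly a group-by. A minimal sketch, assuming a parser has already flattened report records into `(source_ip, count, dmarc_pass)` tuples (the rows below are invented):

```python
from collections import defaultdict

# Hypothetical flattened rows a parser might emit from several reports.
ROWS = [
    ("192.168.42.17", 120, True),
    ("192.168.42.17", 30, True),
    ("203.0.113.9", 500, False),
    ("198.51.100.4", 3, False),
]

def inventory(rows):
    """Collapse report rows into per-source volume and DMARC pass rate."""
    totals = defaultdict(lambda: [0, 0])  # ip -> [total, passed]
    for ip, count, ok in rows:
        totals[ip][0] += count
        if ok:
            totals[ip][1] += count
    return {ip: {"volume": total, "pass_rate": passed / total}
            for ip, (total, passed) in totals.items()}
```

Sorting that dict by volume is usually the whole first pass of monitoring: big failing sources float to the top, tiny ones sink to the noise floor.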
Two categories matter most:

- legitimate sources that fail DMARC because SPF or DKIM isn't aligned yet
- unknown sources that fail everything, which is usually spoofing
Common fixes:
- Enable DKIM at the sending provider so it signs with `d=example.com` (or a domain that aligns under relaxed mode).
- Move the bounce / Return-Path domain under example.com if the provider supports custom bounce domains.

This is where it helps to be very literal about what "alignment" means. The explanation with the two "From" domains (5322 vs 5321) is in DMARC alignment explained (From vs Return-Path).
Not every DMARC fail is an emergency.
If a random IP sent 3 messages and failed everything, that's often just opportunistic spoofing that enforcement will handle. If a large volume fails and looks like a legitimate source, that's when to dig deeper.
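That judgment call can be captured as a rough heuristic. The threshold and labels below are illustrative assumptions, not anything from the DMARC spec:

```python
def triage(volume, dmarc_pass, known_source, noise_threshold=10):
    """Rough triage heuristic: low-volume failures from unknown sources are
    likely opportunistic spoofing that enforcement will handle; high-volume
    or known-source failures deserve investigation. Threshold is arbitrary."""
    if dmarc_pass:
        return "ok"
    if not known_source and volume <= noise_threshold:
        return "likely-spoofing"
    return "investigate"

print(triage(3, dmarc_pass=False, known_source=False))     # likely-spoofing
print(triage(5000, dmarc_pass=False, known_source=True))   # investigate
```

The point is not the exact threshold but the shape of the decision: volume and familiarity decide where your attention goes, not the mere presence of a fail.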
DMARC enforcement is easiest when reports look boring: known sources, passing authentication, aligned identities, and only low-volume noise failing.
If the reports are still surprising, keep monitoring (p=none) while fixing alignment.
When aggregate reports don't show up, it's almost always one of these:
- The `rua` tag isn't present, or the DNS record has a syntax error.
- `rua` points at a domain that isn't accepting mail (typo, missing MX, strict filtering).

If the DMARC record needs a quick sanity check, DMARC record tag cookbook includes safe rua patterns.
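A quick way to check the first failure mode is to pull the `rua` targets out of the published TXT record. This is a sanity-check sketch, not a full RFC 7489 parser; the record strings are examples:

```python
import re

def rua_targets(dmarc_txt):
    """Extract mailto: targets from a DMARC record's rua tag, if present."""
    m = re.search(r"(?:^|;)\s*rua\s*=\s*([^;]+)", dmarc_txt, re.IGNORECASE)
    if not m:
        return []
    return [u.strip() for u in m.group(1).split(",")
            if u.strip().lower().startswith("mailto:")]

print(rua_targets("v=DMARC1; p=none; rua=mailto:reports@example.com"))
# ['mailto:reports@example.com']
print(rua_targets("v=DMARC1; p=none"))  # []
```

An empty list means no reports will ever arrive, no matter how patient you are; the second failure mode (the mailbox rejecting mail) needs an actual delivery test.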
The raw XML is verbose, and it's normal to bounce off it the first time.
That's not you being "bad at DMARC". Aggregate reports are intentionally formatted for machines, not humans, because receivers need a consistent schema to automate reporting at scale.
In practice, the fastest way to get value is to let a parser turn the XML into something you can actually triage: sources, volumes, pass/fail rates, and which identity (SPF or DKIM) is carrying DMARC.
Tools like DMARCPal exist for exactly this reason: take machine-generated DMARC aggregates and make them actionable without living inside raw XML all day.
The trick is to stop reading it as "a big XML file" and start reading it as "a table of sending sources and outcomes". Once that clicks, DMARC becomes a manageable operational process: inventory, align, enforce.
If there's interest in going deeper, the next natural step is learning to debug individual failures using real message headers. (That's where Authentication-Results becomes your best friend.)