At some point every DMARC rollout hits this exact moment:
"The DNS record is published... now what?"
The answer is reporting: specifically, aggregate DMARC reports (rua), delivered as XML files.
This is where DMARC stops being a checkbox and becomes operations: figuring out who is sending as example.com, what is passing, what is failing, and what should be fixed before policy gets stricter.
If the core DMARC model still feels fuzzy, start with What is DMARC?, then come back.
An aggregate report is not a copy of your messages.
It is a structured summary from a receiver saying, in effect: "here is every source we saw sending as your domain during this window, how many messages each sent, and how SPF and DKIM turned out."
So think in terms of grouped telemetry, not inbox-level logs.
You'll see two reporting tags in DMARC docs:

- rua: aggregate reporting (XML batches)
- ruf: failure/forensic reporting (message-level samples)

For almost all teams, RUA is the one that matters daily.
Info: DMARCPal intentionally focuses on rua aggregate reporting and does not use ruf tags. This keeps workflows privacy-friendly and simple, and reflects real-world usage where ruf is uncommon.
If a refresher on tags is useful, DMARC record tag cookbook covers rua, ruf, and related options in detail.
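For context, a minimal DMARC record that requests aggregate reports looks like this (the reporting mailbox is illustrative):

```
_dmarc.example.com.  3600  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

Receivers that honor the record will batch their observations and mail XML reports to that rua address, typically once a day.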
The raw XML can look busy, but nearly every report boils down to three parts:
- report_metadata - who generated the report and for what time window
- policy_published - DMARC policy context used by the receiver
- record - grouped observations (source IP + counts + auth outcomes)

That is enough to triage most real-world issues.
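In skeleton form, a report is one feedback document holding those three parts (element names follow the RFC 7489 aggregate schema; contents elided here):

```xml
<feedback>
  <report_metadata> ... </report_metadata>
  <policy_published> ... </policy_published>
  <record> ... </record>
  <!-- <record> repeats: one block per source IP / outcome group -->
</feedback>
```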
report_metadata: reporting window and source

This section answers:

- Who generated the report (org_name)
- Which report this is (report_id)
- What time window it covers (date_range begin/end)

Don't overthink timestamp precision. Different receivers batch on slightly different schedules.
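One practical detail: date_range begin/end are Unix epoch seconds, so a quick conversion makes the window readable (the timestamps below are illustrative):

```python
from datetime import datetime, timezone

# date_range begin/end in aggregate reports are Unix epoch seconds (UTC).
# These example values cover one day.
begin, end = 1700006400, 1700092800

def fmt(ts: int) -> str:
    """Render an epoch timestamp as an ISO 8601 UTC string."""
    return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

print(fmt(begin), "->", fmt(end))  # 2023-11-15T00:00:00+00:00 -> 2023-11-16T00:00:00+00:00
```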
policy_published: DMARC policy context

Key fields here:

- domain
- p, sp
- adkim, aspf
- pct

Important detail: this is policy as interpreted by the receiver at evaluation time.
If strict vs relaxed alignment still causes confusion, this practical deep dive helps: DMARC alignment: From vs Return-Path (strict vs relaxed).
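A toy alignment check makes the strict vs relaxed difference concrete. This is a sketch: the organizational-domain logic below (last two labels) is a deliberate simplification, and real implementations consult the Public Suffix List to handle suffixes like co.uk:

```python
def org_domain(domain: str) -> str:
    # Naive organizational-domain guess: keep the last two labels.
    # Real code should use the Public Suffix List instead.
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def aligned(header_from: str, auth_domain: str, mode: str = "r") -> bool:
    """DMARC identifier alignment: strict ('s') requires an exact domain
    match; relaxed ('r') only requires the same organizational domain."""
    if mode == "s":
        return header_from.lower() == auth_domain.lower()
    return org_domain(header_from) == org_domain(auth_domain)

# The pattern from the record example in this article:
print(aligned("example.com", "example.com"))       # DKIM domain: aligned
print(aligned("example.com", "mail.example.net"))  # SPF domain: not aligned
```

Note that mail.example.com would still align in relaxed mode but not in strict mode, which is exactly why adkim/aspf settings change outcomes.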
record: where decisions happen

Most of the useful work is in each record block:

- row.source_ip and row.count
- row.policy_evaluated (disposition, spf, dkim)
- identifiers.header_from
- auth_results (which domains SPF and DKIM authenticated)

Small simplified example:
<record>
<row>
<source_ip>192.0.2.44</source_ip>
<count>85</count>
<policy_evaluated>
<disposition>none</disposition>
<dkim>pass</dkim>
<spf>fail</spf>
</policy_evaluated>
</row>
<identifiers>
<header_from>example.com</header_from>
</identifiers>
<auth_results>
<dkim>
<domain>example.com</domain>
<result>pass</result>
</dkim>
<spf>
<domain>mail.example.net</domain>
<result>pass</result>
</spf>
</auth_results>
</record>

Read that as:
- 85 messages came from 192.0.2.44
- They claimed header_from=example.com
- DKIM passed for example.com, so it aligned
- SPF passed, but for mail.example.net, so it did not align

This exact pattern is common, and it explains many "SPF passed but DMARC failed" conversations.
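That reading can be mechanized with the standard library alone. A minimal sketch using xml.etree, assuming the report XML is already extracted to a string (the inline sample mirrors the record above):

```python
import xml.etree.ElementTree as ET

REPORT = """<feedback><record>
  <row><source_ip>192.0.2.44</source_ip><count>85</count>
    <policy_evaluated><disposition>none</disposition>
      <dkim>pass</dkim><spf>fail</spf></policy_evaluated></row>
  <identifiers><header_from>example.com</header_from></identifiers>
</record></feedback>"""

def records(xml_text: str):
    """Yield one dict per <record> block with the fields that matter."""
    root = ET.fromstring(xml_text)
    for rec in root.iter("record"):
        yield {
            "ip": rec.findtext("row/source_ip"),
            "count": int(rec.findtext("row/count")),
            "from": rec.findtext("identifiers/header_from"),
            "dkim": rec.findtext("row/policy_evaluated/dkim"),
            "spf": rec.findtext("row/policy_evaluated/spf"),
        }

for r in records(REPORT):
    print(r)
```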
When opening reports, focus on these before anything else:

- source_ip and count to prioritize by volume
- header_from to confirm the claimed identity
- DKIM auth_results: domain + result
- SPF auth_results: domain + result
- policy_evaluated.disposition to see enforcement impact

Everything else can wait until those are clear.
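As a sketch of that triage, assuming records already parsed into dicts (field names here are hypothetical, not from any particular library):

```python
# Hypothetical parsed records: one dict per <record> block.
parsed = [
    {"ip": "192.0.2.44",   "count": 85,   "dkim": "pass", "spf": "fail"},
    {"ip": "198.51.100.7", "count": 1200, "dkim": "fail", "spf": "fail"},
    {"ip": "203.0.113.9",  "count": 40,   "dkim": "pass", "spf": "pass"},
]

# DMARC passes when at least one aligned mechanism passes, so the
# sources worth attention are those failing both, biggest volume first.
failing = sorted(
    (r for r in parsed if r["dkim"] != "pass" and r["spf"] != "pass"),
    key=lambda r: r["count"],
    reverse=True,
)

for r in failing:
    print(r["ip"], r["count"])  # highest-volume DMARC failures first
```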
A practical loop that works:

1. Inventory sources by source_ip and volume.
2. Align the legitimate senders (fix SPF and DKIM so they pass and align).
3. Re-check the next batch of reports.
4. Repeat until the reports look boring, then tighten policy.
"Boring" here means predictable sources, stable pass rates, and no surprise high-volume failures.
If you are still in monitoring mode, DMARC setup stage 5 - monitoring maps nicely to this workflow.
A few recurring causes:

- the rua mailbox rejects large attachments

Provider documentation is useful for reality checks around sender requirements and authentication behavior.
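On the attachment point: aggregate reports typically arrive as .xml.gz or .zip attachments rather than bare XML. A small helper, assuming the attachment filename and bytes are already in hand, can normalize them:

```python
import gzip
import io
import zipfile

def extract_report_xml(filename: str, data: bytes) -> str:
    """Return the XML text inside a report attachment.

    Aggregate reports usually arrive gzip- or zip-compressed;
    some senders attach bare XML.
    """
    if filename.endswith(".gz"):
        return gzip.decompress(data).decode("utf-8")
    if filename.endswith(".zip"):
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            # A report zip normally holds a single XML file.
            return zf.read(zf.namelist()[0]).decode("utf-8")
    return data.decode("utf-8")
```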
The XML format is machine-oriented, so feeling overwhelmed on first contact is normal.
What helps is treating each report as a table of identities and outcomes, not as "one giant XML file." Once that mental model clicks, DMARC operations become pretty systematic: inventory, align, then enforce.
And when the time comes to move from p=none to stronger policy, reports will already show exactly what could break and what is safe.