Building a sender inventory with DMARC reports before enforcement


If a domain moves to p=quarantine or p=reject before it knows every legitimate system using its visible From domain, the breakage is usually self-inflicted.

That is the real reason to spend time in p=none first. It is not because monitoring feels safer in the abstract. It is because DMARC aggregate reports are often the first place where a team discovers the complete sender footprint for its domain: corporate mail, ticketing tools, CRM blasts, invoicing platforms, forgotten web forms, regional appliances, and the occasional surprise launched by another department without telling email admins.

That last category is what people usually mean by shadow IT in email. The system may be legitimate from a business point of view, but it is still invisible to the team about to enforce DMARC.

Per RFC 7489, DMARC feedback exists in part so domain owners can verify their authentication deployment and get insight into both internal operations and external abuse. In practice, that makes aggregate reports the inventory feed you use before enforcement, not just a compliance box to tick.

The short version

Before enforcing DMARC, use aggregate reports to answer one boring but essential question:

Who is sending mail that claims to be from this domain?

That means identifying, for each meaningful source:

  • source IPs or sending networks
  • the visible From: domain being used
  • whether SPF passes and aligns
  • whether DKIM passes and aligns
  • approximate daily volume
  • whether the source is a business-approved sender, a forwarding artifact, or obvious abuse

When that list stops changing and the remaining failures are understood, enforcement becomes much less dramatic.
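The per-source facts listed above map naturally onto a small record type. As a minimal sketch (the field names are illustrative, not from any standard):

```python
from dataclasses import dataclass

@dataclass
class SenderPattern:
    """One repeating sending pattern observed in DMARC aggregate reports."""
    source_ip: str      # source IP, or a representative IP for a range
    header_from: str    # visible From: domain
    spf_aligned: bool   # SPF passed AND the authenticated domain aligns
    dkim_aligned: bool  # DKIM passed AND the d= domain aligns
    daily_volume: int   # approximate messages per day
    category: str       # e.g. "approved", "forwarding", "abuse", "unknown"
```

Keeping one object per pattern, rather than one per report row, is what makes the later triage steps tractable.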

What DMARC aggregate reports actually tell you

If the XML format still feels opaque, keep DMARC reporting 101 nearby. The useful sender-inventory view is simpler than the raw file makes it look.

Aggregate reports commonly let you see combinations like:

  • 203.0.113.24 sending mail with From: billing@example.com
  • SPF passing for bounce.vendor-mail.example.net but not aligned with example.com
  • DKIM passing and aligned with d=example.com, which means DMARC still passes even if SPF does not align
  • mail volume large enough to matter, such as hundreds or thousands of messages per day
  • receiver disposition and policy override hints for some edge cases

That is enough to group traffic into a practical inventory.

The key mental shift is this: a DMARC report is not just telling you pass or fail. It is telling you which sending pattern exists, how often it appears, and whether that pattern is already safe to carry into enforcement.
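Pulling those sending patterns out of the raw XML is mechanical. A hedged sketch using only the standard library, against a minimal excerpt of the RFC 7489 aggregate-report schema (the sample below is invented for illustration; real reports carry more metadata):

```python
import xml.etree.ElementTree as ET

# Invented minimal sample in the RFC 7489 aggregate-report shape.
SAMPLE = """<feedback>
  <record>
    <row>
      <source_ip>203.0.113.24</source_ip>
      <count>512</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>pass</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
    <identifiers><header_from>example.com</header_from></identifiers>
    <auth_results>
      <dkim><domain>example.com</domain><result>pass</result></dkim>
      <spf><domain>bounce.vendor-mail.example.net</domain><result>pass</result></spf>
    </auth_results>
  </record>
</feedback>"""

def extract_patterns(xml_text):
    """Yield (source_ip, header_from, dkim_aligned, spf_aligned, count) per record.

    policy_evaluated/dkim and policy_evaluated/spf are the receiver's
    *aligned* verdicts, which is exactly what the inventory needs.
    """
    root = ET.fromstring(xml_text)
    for record in root.iter("record"):
        row = record.find("row")
        pe = row.find("policy_evaluated")
        yield (
            row.findtext("source_ip"),
            record.findtext("identifiers/header_from"),
            pe.findtext("dkim") == "pass",
            pe.findtext("spf") == "pass",
            int(row.findtext("count")),
        )
```

Feeding every report through an extractor like this, then aggregating, is what turns a pile of XML files into the pattern-level view described above.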

Why sender discovery is harder than teams expect

The official DMARC rollout guidance from Google Workspace still starts with p=none and daily report review before moving toward enforcement. That sequence exists because large organizations almost always have more mail streams than they think they do.

Common misses include:

  • marketing platforms using a custom From: domain but a vendor-owned bounce domain
  • finance or ERP systems that send invoices through a relay nobody documented
  • support platforms signing with a vendor DKIM domain instead of your domain
  • copier, scanner, or appliance mail sent directly to the internet
  • old contact forms or WordPress plugins using local SMTP settings
  • business units that purchased a SaaS tool and configured branded sending on their own

None of those systems show up cleanly in a spreadsheet titled "approved senders" unless somebody has already done the discovery work. DMARC reports are often how that spreadsheet gets built for the first time.

Build the inventory from patterns, not from individual messages

One of the easiest mistakes is treating DMARC reports like a queue of single-message incidents. That turns discovery into noise.

A better approach is to group by sender pattern:

  1. Start with the source IP or source IP range.
  2. Check whether the same pattern repeats across multiple days.
  3. Look at SPF authenticated domain and DKIM d= domain.
  4. Compare the volume against what the business would expect.
  5. Assign the pattern to one of four buckets.

Those four buckets are usually enough:

  • known and correctly aligned
  • known but misaligned
  • unknown and needs owner identification
  • unauthorised or obviously abusive

This keeps the exercise operational. The goal is not to explain every row immediately. The goal is to classify the repeating sources that would matter once p=quarantine or p=reject starts to bite.
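The bucketing step can be sketched as a small function. The approved and known-bad IP sets here are hypothetical placeholders; in practice they come from the business-owner conversations described below:

```python
APPROVED_IPS = {"203.0.113.24", "198.51.100.7"}  # hypothetical approved sources
ABUSE_IPS = {"192.0.2.66"}                       # hypothetical known-bad sources

def bucket(source_ip, spf_aligned, dkim_aligned):
    """Assign a repeating sender pattern to one of the four triage buckets."""
    if source_ip in ABUSE_IPS:
        return "unauthorised or abusive"
    if source_ip in APPROVED_IPS:
        if spf_aligned or dkim_aligned:
            return "known and correctly aligned"
        return "known but misaligned"
    return "unknown, needs owner identification"
```

The point of the function is not sophistication; it is that every repeating pattern ends up in exactly one bucket with a clear next action.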

A practical triage method

For most domains, the sender inventory can be built with a weekly review loop that looks like this.

1. List the highest-volume sources first

Start with the mail streams that represent the largest message counts. If one source is sending 40% of the domain's traffic and is misaligned, that matters more than ten tiny edge cases combined.
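Ranking sources by volume is a one-pass aggregation over the extracted report rows. A minimal sketch, assuming the input is (source IP, message count) pairs:

```python
from collections import Counter

def rank_by_volume(records):
    """Sum message counts per source IP, largest senders first.

    `records` is an iterable of (source_ip, count) pairs, e.g. pulled
    from aggregate-report rows across several days of reports.
    """
    totals = Counter()
    for source_ip, count in records:
        totals[source_ip] += count
    return totals.most_common()
```

Reviewing the top of that ranking first keeps attention on the streams that would actually hurt under enforcement.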

2. Identify whether DMARC already passes somehow

A source may look odd at first glance and still be safe for enforcement.

Example:

| Source         | SPF aligned | DKIM aligned | DMARC result | Meaning                                             |
|----------------|-------------|--------------|--------------|-----------------------------------------------------|
| CRM platform   | no          | yes          | pass         | enforcement is probably safe if DKIM remains stable |
| legacy relay   | yes         | no           | pass         | safe if SPF path remains stable                     |
| ticketing tool | no          | no           | fail         | will break under enforcement unless fixed           |

This matters because many legitimate services pass DMARC through only one aligned mechanism. That is normal. DMARC needs one aligned pass, not both, as described in RFC 7489.

If alignment basics are still fuzzy, DMARC identifier alignment deep dive covers the exact matching rules.
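The one-aligned-pass rule is easy to encode. The sketch below simplifies relaxed alignment to a subdomain suffix check; real implementations compare organizational domains using the Public Suffix List, so treat this as illustration only:

```python
def relaxed_aligned(auth_domain, from_domain):
    """Crude relaxed-alignment check: one domain equals or is a subdomain
    of the other. A simplification of the RFC 7489 organizational-domain
    comparison, for illustration only."""
    a, f = auth_domain.lower(), from_domain.lower()
    return a == f or a.endswith("." + f) or f.endswith("." + a)

def dmarc_result(from_domain, spf_pass, spf_domain, dkim_pass, dkim_domain):
    """DMARC passes on at least one aligned mechanism, not necessarily both."""
    spf_ok = spf_pass and relaxed_aligned(spf_domain, from_domain)
    dkim_ok = dkim_pass and relaxed_aligned(dkim_domain, from_domain)
    return "pass" if (spf_ok or dkim_ok) else "fail"
```

Running the table's rows through this logic reproduces the verdicts: the CRM platform passes on DKIM alone, the legacy relay on SPF alone, and the ticketing tool fails both.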

3. Resolve unknown senders with business owners, not guesswork

An IP address by itself rarely answers the whole question. The useful move is to bring a small packet of evidence to the business:

  • first seen / last seen dates
  • daily or weekly volume
  • visible From: domain or addresses
  • SPF domain and DKIM signing domain
  • receivers reporting the traffic

That is usually enough for someone to say, "that is our payroll provider" or "that was a short-lived pilot and can be shut off."

4. Fix the sender or change the branding

Once a source is confirmed as legitimate, there are usually only a few real fixes:

  • enable DKIM signing with your domain
  • align the bounce domain so SPF can pass in alignment
  • move the visible From: address to a subdomain used only for that platform
  • route the traffic through an approved central mail path
  • stop using the platform for branded sending

This is where the inventory becomes useful. It turns "DMARC is risky" into a concrete list of integration tasks.

5. Keep watching until the failures are boring

That is the best pre-enforcement signal.

When reports are still producing surprise legitimate senders, the inventory is incomplete. When the remaining failures are mostly obvious spoofing, stale systems, or understood forwarding artifacts, the domain is getting close.

How shadow IT usually shows up in reports

Shadow IT almost never announces itself with a label saying "unaudited SaaS purchase." It appears as technical mismatch.

Typical clues:

  • a new IP range appears with low but steady volume
  • SPF passes for a vendor domain, but the domain is not aligned with your visible From: domain
  • DKIM passes with d=vendor-example.com instead of your domain
  • traffic is concentrated on one business function such as events, HR, surveys, or support
  • the source appears only on weekdays or around a recurring campaign window

That does not prove the sender is malicious. It proves the sender is unknown to the enforcement project.

Treat that as an ownership problem first, then a configuration problem.
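Several of those clues can be watched for mechanically. A hypothetical sketch that flags sources absent from the inventory baseline but seen on multiple distinct days, since low-but-steady volume is the usual shadow-IT tell:

```python
def new_steady_sources(this_week, baseline, min_days=3):
    """Flag source IPs not yet in the inventory that appeared on several
    distinct days this week.

    `this_week` maps source_ip -> set of dates seen;
    `baseline` is the set of IPs already inventoried.
    """
    return sorted(
        ip for ip, days in this_week.items()
        if ip not in baseline and len(days) >= min_days
    )
```

Anything this flags goes into the "unknown, needs owner identification" bucket, not straight to a block list.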

What not to conclude from aggregate reports

DMARC reports are extremely useful, but they are not magic asset inventory.

Keep the limits in mind:

  • they show traffic observed by participating receivers, not every message ever sent
  • they are aggregate and delayed, so they are not ideal for minute-by-minute debugging
  • they usually identify infrastructure patterns, not the exact human or SaaS account that triggered them
  • shared ESP infrastructure can make one provider appear under many customers or configurations
  • forwarding can produce failures that are real in reports but not evidence of an unapproved sender

That last point matters. If a business workflow includes forwarding or mailing lists, read DMARC and forwarding/mailing lists before treating every fail row as a broken sender.

A simple working spreadsheet

The sender inventory does not need to be fancy. A table with these columns is usually enough:

  • source IP / range
  • reverse DNS or provider clue
  • visible From: domain
  • SPF authenticated domain
  • DKIM signing domain
  • DMARC result
  • daily volume
  • owner
  • business purpose
  • remediation status
  • enforcement ready: yes or no

Once the same source appears across several reports, add it to the spreadsheet and keep one row per pattern, not one row per XML record.
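The column list above translates directly into a CSV export. A minimal sketch with the standard library (column names follow the list above, shortened to identifiers):

```python
import csv
import io

COLUMNS = [
    "source", "reverse_dns", "from_domain", "spf_domain", "dkim_domain",
    "dmarc_result", "daily_volume", "owner", "purpose", "remediation",
    "enforcement_ready",
]

def write_inventory(rows):
    """Serialize one-row-per-pattern inventory records to CSV text."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Whether the result lives in a spreadsheet, a wiki, or a ticket queue matters less than the one-row-per-pattern discipline.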

When the inventory is good enough to enforce

There is no perfect finish line, but most teams are ready to move beyond p=none when these statements are true:

  • every significant legitimate sender has a known owner
  • important sources have at least one aligned pass path that is stable
  • recurring failures are understood and either acceptable or queued for remediation
  • obvious spoofing is a meaningful share of what still fails
  • new unknown sources have stopped appearing in normal business cycles

That is also why the safest rollout is still none, then quarantine, then reject, with report review in between. DMARC.org's deployment overview and Google's rollout guidance both reflect that gradual path.

For the step from monitoring to enforcement, DMARC pct rollout done right and DMARC policy modes explained are the next two reads.

Bottom line

DMARC enforcement does not usually fail because the protocol is mysterious. It fails because the domain owner did not finish discovering who was already sending legitimate mail.

Aggregate reports are the evidence source for that discovery work.

Use them to build a sender inventory, classify repeating sources, chase down shadow IT, and fix alignment before receivers start acting on your policy. When the reports get boring, enforcement gets much safer.
