AU-7: Audit Record Reduction and Report Generation

AU-7 requires you to deploy a working capability that can reduce audit records (filter, aggregate, normalize, correlate) and generate consistent audit reports that support monitoring, investigations, and compliance reviews. Operationalize it by defining reportable events, standard report templates, tooling (SIEM/log platform), access controls, and repeatable evidence that proves reports are produced and used. 1

Key takeaways:

  • AU-7 is a capability requirement: tools + configured reports + repeatable operation, not a policy statement. 1
  • “Reduction” means making logs usable at scale (filtering, summarizing, correlating) without destroying evidentiary value.
  • Your audit readiness depends on artifacts: report inventory, schedules, sample outputs, query logic, and proof of review actions.

The AU-7 audit record reduction and report generation requirement exists because raw logs are rarely consumable during real incidents, audits, or continuous monitoring. You can be collecting “all the logs” and still fail AU-7 if you cannot consistently turn those logs into trusted, repeatable reports that answer operational questions: What happened? Who did it? Which systems were affected? Did controls work as expected?

For a Compliance Officer, CCO, or GRC lead, the fastest path to implementation is to treat AU-7 as a defined service with owners, inputs, outputs, and evidence. Inputs are your audit records from systems and services. Outputs are standardized reports (and dashboards) that are generated on demand and on a schedule, then reviewed and acted on. The control is satisfied when you can demonstrate the capability exists, is configured for your environment, and is used in routine governance and security workflows. 1

This page gives requirement-level guidance you can hand to Security Operations, IT, and System Owners, then audit against in a week, not a quarter.

Regulatory text

Requirement excerpt: “Provide and implement an audit record reduction and report generation capability that:” 1

Operator interpretation: AU-7 expects more than log collection. You must (1) reduce audit records into a workable form for analysis and (2) generate reports that support monitoring, investigations, and compliance oversight. “Provide and implement” implies the capability is deployed, configured, and operating, not merely procured or documented. 1

Plain-English interpretation

  • Audit record reduction: Convert high-volume, noisy logs into decision-grade information. Typical methods include filtering known-good noise, normalizing fields, aggregating events, correlating across sources, and summarizing activity by user/system/time window. You still preserve underlying raw events per your retention and integrity requirements; reduction is about analysis workflows, not erasing evidence.
  • Report generation: Produce consistent, repeatable outputs (scheduled and ad hoc) that answer defined questions. Reports must be accessible to authorized roles and sufficiently detailed for follow-up.
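The reduction methods above can be sketched in code. The following is a minimal, illustrative Python sketch, not any particular SIEM's API; the event fields, the suppression rule, and its rule ID are assumptions made for the example:

```python
from collections import defaultdict

# Hypothetical raw audit events; field names are illustrative, not from any
# specific SIEM schema.
raw_events = [
    {"ts": "2024-01-01T09:00:05", "user": "alice", "action": "login_failure", "src_ip": "10.0.0.5"},
    {"ts": "2024-01-01T09:00:09", "user": "alice", "action": "login_failure", "src_ip": "10.0.0.5"},
    {"ts": "2024-01-01T09:00:14", "user": "alice", "action": "login_failure", "src_ip": "10.0.0.5"},
    {"ts": "2024-01-01T09:01:00", "user": "alice", "action": "login_success", "src_ip": "10.0.0.5"},
    {"ts": "2024-01-01T09:02:00", "user": "svc-backup", "action": "heartbeat", "src_ip": "10.0.0.9"},
]

# Filtering: suppress known-benign noise under an explicit, documented rule
# (here a made-up rule ID that would live in the change log).
SUPPRESSED_ACTIONS = {"heartbeat"}  # rule NOISE-001

def reduce_events(events):
    kept = [e for e in events if e["action"] not in SUPPRESSED_ACTIONS]
    # Aggregation: summarize repeated failures per user while keeping the
    # underlying raw events available for drill-down (evidence is preserved).
    by_user = defaultdict(list)
    for e in kept:
        if e["action"] == "login_failure":
            by_user[e["user"]].append(e)
    return kept, {u: {"count": len(evts), "raw": evts} for u, evts in by_user.items()}

kept, failures = reduce_events(raw_events)
print(len(kept))                   # 4 events survive filtering
print(failures["alice"]["count"])  # 3 failures aggregated, raw events retained
```

The key design point is in the summary structure: the aggregate carries its raw events with it, so reduction speeds up analysis without destroying the drill-down path an investigator needs.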

Who it applies to

AU-7 applies to:

  • Federal information systems and the teams operating them. 1
  • Contractor systems handling federal data, including cloud environments and managed services where the contractor operates or administers logging and monitoring functions. 1

Operational contexts where AU-7 is routinely assessed:

  • Systems with centralized logging/SIEM, log analytics platforms, or managed detection and response workflows.
  • Environments where incident response, continuous monitoring, or audit support requires rapid retrieval and summarization of events across multiple sources (identity, endpoints, servers, network, SaaS, cloud control plane).

What you actually need to do (step-by-step)

1) Assign ownership and define the “AU-7 service”

Set a single accountable owner (often SecOps or a logging platform owner) and name supporting owners (IAM, cloud platform, endpoint, network). Your GRC function should own the requirement mapping and evidence expectations.

Deliverable:

  • AU-7 control sheet: owner, in-scope systems, tooling, report inventory, cadence, evidence list, and where artifacts are stored. This aligns with the recommended practice of mapping AU-7 to an owner, a procedure, and recurring evidence. 1

2) Inventory audit record sources and confirm coverage

Create a simple table of log sources that must feed your reduction/reporting capability.

Minimum fields to capture:

  • System/service name and owner
  • Log type(s) (auth, admin actions, data access, application events)
  • Collection method (agent, API, native forwarding)
  • Normalization status (mapped to common fields like user, source IP, action, object)
  • Retention location (hot vs archive) and access path for investigations

Practical test: pick one critical user account and trace its activity across identity provider, endpoint, and a key application. If you cannot do that quickly, your reduction/reporting pipeline is not functioning as a capability.
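That practical test can itself be a small script. This is an illustrative Python sketch assuming three in-memory event feeds; the feed names and fields are hypothetical stand-ins for your identity provider, endpoint, and application logs:

```python
# Hypothetical event feeds from three sources; schemas are illustrative.
idp_events = [
    {"ts": "09:00", "user": "alice", "source": "idp", "action": "mfa_login"},
]
endpoint_events = [
    {"ts": "09:05", "user": "alice", "source": "endpoint", "action": "process_start"},
    {"ts": "09:06", "user": "bob", "source": "endpoint", "action": "process_start"},
]
app_events = [
    {"ts": "09:10", "user": "alice", "source": "app", "action": "export_report"},
]

def trace_user(user, *feeds):
    """Pull one account's activity across all sources into a single timeline."""
    timeline = [e for feed in feeds for e in feed if e["user"] == user]
    return sorted(timeline, key=lambda e: e["ts"])

timeline = trace_user("alice", idp_events, endpoint_events, app_events)
print([e["source"] for e in timeline])  # ['idp', 'endpoint', 'app']
```

If your platform cannot produce the equivalent of this merged, time-ordered view for one account in a few minutes, the capability test fails regardless of how much you collect.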

3) Implement reduction functions with documented logic

Reduction must be deliberate and explainable. Configure your SIEM/log platform to perform:

  • Filtering: suppress known-benign noisy events with explicit rules and approvals.
  • Aggregation: summarize repetitive events (e.g., repeated failed logins) into counts over time windows while keeping drill-down to raw events.
  • Normalization: map vendor-specific fields into consistent schema.
  • Correlation: link events across sources (user + device + cloud session) to support investigations.
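Normalization in particular benefits from being explicit and testable. Below is a hedged Python sketch of a field-mapping approach; the vendor names and raw field names are invented for illustration, not taken from any real product:

```python
# Vendor-specific field names mapped to a common schema. The mappings are
# illustrative assumptions, not any vendor's actual field names.
FIELD_MAPS = {
    "vendor_a": {"usr": "user", "ip": "source_ip", "op": "action", "obj": "object"},
    "vendor_b": {"account": "user", "client_addr": "source_ip", "event": "action", "target": "object"},
}

def normalize(event, vendor):
    """Rewrite one vendor's event into the common schema."""
    mapping = FIELD_MAPS[vendor]
    return {common: event[raw] for raw, common in mapping.items() if raw in event}

a = normalize({"usr": "alice", "ip": "10.0.0.5", "op": "role_grant", "obj": "admin"}, "vendor_a")
b = normalize({"account": "alice", "client_addr": "10.0.0.5", "event": "role_grant", "target": "admin"}, "vendor_b")
print(a == b)  # True: both sources now share one schema
```

Because the maps are plain data, they can be version-controlled, which directly supports the change-log control point below.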

Control point: keep a change log for reduction rules. If an auditor asks “how do you know this dashboard is complete,” you need to show what is filtered and why.

4) Define a report catalog (what reports exist, for whom, and why)

Create a report catalog with:

  • Report name
  • Purpose (monitoring, investigation, compliance review)
  • Data sources
  • Query logic / saved search name
  • Output format (dashboard, PDF export, ticket summary)
  • Audience and access controls
  • Frequency (scheduled) and trigger conditions (on demand)

Examples of report types (adapt to your environment):

  • Authentication anomalies (privileged logins, impossible travel, repeated failures)
  • Administrative actions (policy changes, role grants, logging disabled attempts)
  • Sensitive data access events (reads/exports on regulated datasets)
  • System integrity and logging health (forwarder failures, ingestion gaps)
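A catalog entry for one of the example reports above can be captured as structured data so it is versionable and checkable. This is a sketch with hypothetical values; the field names mirror the catalog fields listed above:

```python
# One catalog entry, captured as structured data. Values are hypothetical.
report_catalog = [
    {
        "name": "Privileged Authentication Anomalies",
        "purpose": "monitoring",
        "data_sources": ["idp", "vpn"],
        "query": "saved_search:priv_auth_anomalies_v3",
        "output": "dashboard + monthly PDF export",
        "audience": "SecOps leads (role: report-consumer)",
        "frequency": "monthly; on demand during investigations",
    },
]

# A quick completeness check a GRC reviewer could run over the whole catalog:
REQUIRED = {"name", "purpose", "data_sources", "query", "output", "audience", "frequency"}
missing = [r.get("name", "?") for r in report_catalog if not REQUIRED <= r.keys()]
print(missing)  # [] means every entry carries all required fields
```

Keeping the catalog machine-readable means "is every report fully documented?" becomes a one-line check rather than a manual review.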

5) Make reporting operational: schedule, review, and action

AU-7 fails in practice when reports exist but nobody reviews them. Build a lightweight operating rhythm:

  • Scheduled generation for recurring reports
  • Documented review steps (who reviews, what “investigate” means, how to record disposition)
  • Ticketing or case linkage for findings
  • Escalation thresholds (qualitative is fine; define what triggers an incident or risk acceptance)

Evidence tip: a monthly “no findings” statement without proof of review actions is weak. Keep annotated reports, investigation notes, or ticket references.

6) Lock down access and preserve integrity of reporting outputs

Your report capability becomes a sensitive asset. Enforce:

  • Role-based access to report dashboards and saved searches
  • Separation between report consumers and query/rule editors where feasible
  • Controlled exports and storage locations for audit packages

7) Prove it works with a repeatable demonstration

Run a tabletop-style test:

  • Select a known event (e.g., a test privileged role assignment)
  • Verify the underlying audit record is collected
  • Verify it appears in reduced views (normalized fields, correlated context)
  • Generate a report and show who reviewed it and what action was taken

Required evidence and artifacts to retain

Maintain an “AU-7 evidence pack” that a GRC reviewer can pull without custom work:

  • AU-7 control narrative (how reduction and reporting work in your environment)
  • Tooling architecture diagram for log flow (sources → aggregator/SIEM → reporting outputs)
  • Log source inventory table and onboarding status
  • Report catalog with owners, access roles, and frequencies
  • Samples of generated reports (exports or screenshots) with timestamps
  • Saved searches/query definitions or configuration snapshots
  • Review records (tickets, case notes, sign-offs, meeting minutes)
  • Change records for reduction rules and report logic (who changed what, when, why)

Common exam/audit questions and hangups

Expect these questions from assessors:

  1. “Show me the capability.” They will ask for a live walkthrough: find an event, reduce it, generate a report.
  2. “How do you prevent noise from hiding signal?” Have documented filtering rules and periodic tuning reviews.
  3. “Who reviews which reports, and what happens next?” Show assignment, evidence of review, and downstream actions.
  4. “Can you generate reports for investigations quickly?” Demonstrate ad hoc reporting with saved searches and export controls.
  5. “How do you ensure reports are complete and accurate?” Show normalization/correlation approach and coverage checks across sources.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: “We have a SIEM” as the whole control. Why it fails: a tool purchase does not prove implemented reduction/reporting. Fix: build a report catalog, sample outputs, and a review workflow.
  • Mistake: over-filtering logs. Why it fails: you lose evidentiary value and investigative context. Fix: require approvals for suppression rules and keep drill-down to raw events.
  • Mistake: reports exist but have no owner. Why it fails: no consistent review, no accountability. Fix: assign report owners and define review steps.
  • Mistake: custom one-off queries during audits. Why it fails: hard to repeat; brittle evidence. Fix: use saved searches, versioned queries, and a standard export procedure.
  • Mistake: no health reporting. Why it fails: ingestion gaps silently break reporting. Fix: add logging pipeline health dashboards and review them.
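The health-reporting fix can be sketched as a simple staleness check. This assumes you can obtain a last-seen timestamp per log source (most platforms expose ingestion metadata of some form); the sources, timestamps, and 24-hour threshold here are illustrative:

```python
from datetime import datetime, timedelta

# Hypothetical last-seen timestamps per log source; in practice these would
# come from your log platform's ingestion metadata.
now = datetime(2024, 1, 10, 12, 0)
last_event = {
    "idp": now - timedelta(minutes=5),
    "endpoint": now - timedelta(hours=26),  # silently stalled forwarder
    "firewall": now - timedelta(minutes=30),
}

MAX_GAP = timedelta(hours=24)  # assumed threshold; tune per source

def stale_sources(last_event, now, max_gap=MAX_GAP):
    """Flag sources whose ingestion gap exceeds the allowed threshold."""
    return sorted(s for s, ts in last_event.items() if now - ts > max_gap)

print(stale_sources(last_event, now))  # ['endpoint']
```

Surfacing this list on a reviewed dashboard turns a silent reporting failure into a routine finding with an owner.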

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement outcomes.

Risk-wise, AU-7 gaps show up during incidents and assessments as “we can’t answer basic questions from logs.” The practical impact is delayed investigations, incomplete scope, weak audit response, and higher likelihood of control failures in adjacent audit and accountability requirements within the AU family. 2

A practical 30/60/90-day execution plan

First 30 days (stabilize and define)

  • Assign AU-7 owner and publish the AU-7 control sheet.
  • Inventory log sources for in-scope systems and identify ingestion gaps.
  • Stand up an initial report catalog with your highest-value reports.
  • Produce initial sample report outputs and store them in a dedicated evidence folder.

Days 31–60 (operationalize)

  • Implement reduction rules with change control (filtering, normalization, aggregation).
  • Schedule recurring reports and define review ownership in writing.
  • Integrate reporting outcomes with ticketing/case management.
  • Run a demonstration test and capture evidence.

Days 61–90 (harden and audit-proof)

  • Add correlation improvements and tune noise suppression based on real review outcomes.
  • Add reporting on pipeline health and ingestion failures.
  • Conduct an internal audit using your evidence pack; close gaps.
  • Optional: Use Daydream to map AU-7 to owners, procedures, and recurring artifacts so evidence collection becomes routine instead of a scramble before assessments. 1

Frequently Asked Questions

What counts as “audit record reduction” for AU-7?

Reduction is any controlled method that makes audit records easier to analyze, such as filtering noise, aggregating repetitive events, normalizing fields, and correlating across sources. Keep the ability to drill down to underlying events so you can support investigations.

Do we need a SIEM to satisfy AU-7?

AU-7 requires a capability, not a specific product category. A SIEM is a common way to meet it, but you can also meet it with cloud-native logging plus a reporting layer if it reliably reduces records and generates repeatable reports.

How many reports are “enough” for AU-7?

There is no fixed number in the provided requirement text. Define reports based on your risk and monitoring needs, then prove they are generated and reviewed with retained artifacts. 1

Can dashboards count as “reports”?

Yes, if they are controlled (saved), access-restricted, and you can export or otherwise preserve evidence of what the dashboard showed at a point in time. Pair dashboards with review records to show they are used.

What evidence is most persuasive to an assessor?

A report catalog plus sample outputs, saved query definitions, and proof of review actions (tickets/cases) typically reads as strong operational evidence. Add a short walkthrough showing you can trace a test event from raw logs to a generated report.

How do we handle third-party hosted systems where we don’t control logs?

Treat the third party as a log source with defined access paths and SLAs for audit records and reporting. Contract for the events you need, confirm you can retrieve them, and keep evidence of retrieval and reporting for in-scope workflows.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream