Audit Record Review, Analysis, and Reporting

To meet the FedRAMP Moderate audit record review, analysis, and reporting requirement (NIST SP 800-53 Rev. 5 AU-6), you must define a review cadence, actively review and analyze audit logs for inappropriate or unusual activity, and consistently route findings to named roles with tracked follow-up. Passing depends on provable operations, not just “logging is enabled.” 1

Key takeaways:

  • Define “organization-defined frequency” and “organization-defined personnel/roles” in writing, then operate to it.
  • Review means documented human or automated analysis with triage, escalation, and closure evidence.
  • Keep artifacts that show end-to-end handling: detections, investigations, decisions, and reporting.

AU-6 is where many FedRAMP programs fail quietly: teams enable logging, store logs, and assume they are covered. FedRAMP assessors and agency reviewers look for something stricter: that someone (or a defined automated workflow under accountable oversight) regularly reviews audit records, analyzes them for suspicious or policy-violating activity, and reports findings to specific roles who can act.

The requirement is intentionally flexible (“organization-defined frequency” and “organization-defined personnel or roles”), but that flexibility cuts both ways. You must choose a frequency that matches your risk and architecture, document it, implement it consistently across the authorization boundary, and retain evidence that the process ran and produced outcomes.

This page translates AU-6 into an operator-ready implementation: what systems to include, how to define a review schedule, what “analysis” looks like in practice, how to route findings, what artifacts to retain for assessors, and the exam questions that trigger findings. It also includes a pragmatic execution plan and FAQs geared to a CCO, GRC lead, or compliance owner coordinating Security Operations, Cloud Engineering, and IT.

Regulatory text

Requirement (excerpt): “Review and analyze system audit records at an organization-defined frequency for indications of inappropriate or unusual activity and report findings to organization-defined personnel or roles.” 1

What the operator must do

You must implement an operating procedure that:

  1. Reviews audit records (logs) for systems in scope.
  2. Analyzes those records for inappropriate or unusual activity (not merely checks that logs exist).
  3. Reports findings to specific, named roles (not a generic mailbox with no accountable owner).
  4. Runs on a defined cadence that you choose and document (“organization-defined frequency”).
  5. Produces evidence that the review occurred and that findings were handled.

This is a continuous monitoring control in practice. If you cannot show a repeatable review workflow with tickets, escalations, and outcomes, expect assessment friction even if your technical logging is strong.

Plain-English interpretation

AU-6 requires you to prove you are not “logging into the void.” You collect audit logs, then you regularly look at them (or run detections over them), identify anomalies or policy violations, and notify the people who can investigate and fix issues. The review must be frequent enough to catch problems before they become incidents, and consistent enough that an assessor can sample periods and see the process running.

Who it applies to (entity and operational context)

Applies to:

  • Cloud Service Providers (CSPs) operating a FedRAMP Moderate authorized cloud service offering within the authorization boundary. 1
  • Federal Agencies operating or overseeing systems that inherit or implement AU-6 activities for the authorized service. 1

Operational context inside the boundary:

  • Central log management/SIEM and any managed detection tooling.
  • Identity, authentication, and authorization systems (IdP, PAM, SSO).
  • Core infrastructure and control plane logs (cloud provider audit trails, hypervisors where applicable).
  • OS, database, application, and API audit logs that record security-relevant activity.
  • Security tooling logs (EDR, WAF, IDS/IPS) if they feed your audit record review pipeline.

You should treat AU-6 as a boundary-wide requirement: if a system produces audit records that could indicate misuse or compromise, it needs a defined review path.

What you actually need to do (step-by-step)

Step 1: Set the “organization-defined” parameters (write them down)

Create an AU-6 operating standard that defines:

  • Review frequency per log source class (for example: near-real-time alerting for high-risk events; scheduled reviews for lower-risk sources). Your choices are defensible if they are risk-based and consistently followed. 1
  • In-scope audit record sources within the FedRAMP boundary (a log source inventory).
  • What counts as “inappropriate or unusual” for your system (baseline behaviors and key detections).
  • Reporting roles (SOC analyst, Incident Response lead, System Owner, CISO/ISO, Compliance) and escalation triggers.
  • Required timelines for triage, escalation, and closure (these are your internal requirements; auditors will test whether you meet your own stated SLA).

Deliverable: “AU-6 Audit Record Review & Reporting Standard” mapped to the system boundary.
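One way to make the standard auditable and machine-checkable is to express the organization-defined parameters as data. This is a minimal sketch; the log source classes, frequencies, SLAs, and role names below are illustrative assumptions, not FedRAMP-prescribed values.

```python
# Hypothetical AU-6 operating parameters expressed as structured data.
# All class names, frequencies, SLAs, and roles are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class ReviewClass:
    name: str
    review_frequency_hours: int   # how often this class must be reviewed
    triage_sla_hours: int         # allowed time from finding to triage
    report_to: tuple              # organization-defined roles

AU6_PARAMETERS = {
    "identity_privilege": ReviewClass("identity_privilege", 1, 4, ("SOC Analyst", "IR Lead")),
    "control_plane":      ReviewClass("control_plane", 24, 24, ("SOC Analyst", "System Owner")),
    "application_audit":  ReviewClass("application_audit", 168, 72, ("SOC Analyst",)),
}

def is_review_overdue(log_class: str, hours_since_last_review: float) -> bool:
    """Flag a log source class whose last review exceeds its defined frequency."""
    return hours_since_last_review > AU6_PARAMETERS[log_class].review_frequency_hours
```

Keeping the parameters in one reviewed artifact makes it straightforward to prove the "organization-defined frequency" exists and to test operations against your own stated cadence.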

Step 2: Ensure audit records are collectable, searchable, and protected

AU-6 depends on your ability to access audit records reliably. Confirm:

  • Logs are centralized (SIEM/log analytics) or otherwise accessible for review.
  • Logs have time synchronization and consistent timestamps.
  • Logs are protected from tampering via access controls and immutable storage patterns where feasible.
  • You can search, filter, and pivot across identity, network, endpoint, and application layers.

You do not need exotic tooling, but you do need a reviewable corpus and a defined method.

Step 3: Build a detection and review playbook (minimum viable set)

Create a playbook that ties “review and analyze” to concrete checks. Examples of review categories:

  • Identity misuse: impossible travel, repeated auth failures, MFA changes, new privileged role assignments.
  • Privilege and admin activity: creation of new admin users, privilege elevation, changes to audit settings.
  • Data access anomalies: large exports, access from new geographies, abnormal API calls.
  • System integrity: disabled security agents, log pipeline failures, time drift.
  • Policy violations: access outside approved change windows, direct production access without a ticket.

Each category should have:

  • A query/detection rule (SIEM query or equivalent).
  • A triage decision tree (benign, needs investigation, incident).
  • Escalation criteria and required reporting recipients.

Deliverable: “AU-6 Audit Review Playbooks” with sample queries and decision points.
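As one concrete illustration of a playbook entry, the sketch below implements a repeated-authentication-failure detection with a simple triage decision. The threshold, window, and event field names (`user`, `event`, `ts`) are assumptions; map them to your SIEM's schema and tune with documented rationale.

```python
# Illustrative playbook detection: repeated auth failures per account within
# a time window, plus a minimal triage decision tree. Field names and
# thresholds are hypothetical assumptions, not a prescribed rule.
from collections import defaultdict
from datetime import datetime, timedelta

FAILURE_THRESHOLD = 5
WINDOW = timedelta(minutes=10)

def detect_repeated_auth_failures(events):
    """Return accounts whose login failures meet the threshold in any window."""
    failures = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        if e["event"] == "auth_failure":
            failures[e["user"]].append(e["ts"])
    flagged = set()
    for user, times in failures.items():
        for start in times:
            if sum(1 for t in times if start <= t < start + WINDOW) >= FAILURE_THRESHOLD:
                flagged.add(user)
                break
    return flagged

def triage(user, flagged, privileged_accounts):
    """Benign / needs investigation / incident, per the decision tree above."""
    if user not in flagged:
        return "benign"
    return "incident" if user in privileged_accounts else "investigate"
```

The point is not the specific rule but the shape: every playbook category pairs a query with a deterministic triage outcome, so analysts reach consistent dispositions.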

Step 4: Define reporting and escalation mechanics (who gets told, how, and what gets recorded)

AU-6 requires reporting findings to organization-defined roles. Make reporting operational:

  • Automated alerts create tickets in your case system (SOC platform, ITSM).
  • Daily/weekly digest reporting to security leadership for notable events and trends.
  • Immediate escalation for high-severity findings (privilege compromise indicators, audit logging disabled, suspected exfiltration).

Minimum reporting fields to capture in each finding:

  • Detection source and timestamp range
  • Summary of activity and affected assets/accounts
  • Severity and rationale
  • Actions taken, owner assigned, and status
  • Linkage to incident record if escalated

Deliverable: A ticket template and an escalation matrix.
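The minimum reporting fields above can be enforced in the ticket template itself. A minimal sketch, assuming hypothetical severity levels and role names for the escalation matrix:

```python
# Illustrative finding record enforcing the minimum reporting fields listed
# above. Severity levels and recipient roles are assumptions.
from dataclasses import dataclass
from typing import Optional

REQUIRED_FIELDS = ("detection_source", "time_range", "summary", "severity", "owner")

ESCALATION_MATRIX = {   # severity -> role notified (hypothetical values)
    "high": "Incident Response Lead",
    "medium": "SOC Analyst",
    "low": "SOC Analyst",
}

@dataclass
class Finding:
    detection_source: str
    time_range: str
    summary: str
    severity: str
    owner: str
    status: str = "open"
    incident_id: Optional[str] = None   # linkage to incident record if escalated

    def recipient(self) -> str:
        return ESCALATION_MATRIX[self.severity]

    def is_complete(self) -> bool:
        return all(getattr(self, f) for f in REQUIRED_FIELDS)
```

Encoding the required fields and escalation routing in the template keeps "reporting to organization-defined roles" from depending on analyst memory.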

Step 5: Run the process and retain evidence (this is what assessors sample)

Operate the review on your defined cadence and keep proof:

  • Completed reviews (who reviewed, when, what sources)
  • Findings generated (including “no findings” attestations for a review period where appropriate)
  • Triage notes and investigative steps
  • Escalation communications and approvals
  • Closure evidence (config changes, user action, incident postmortem links)

This is the difference between “we monitor” and “we can prove we monitor.”
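A "no findings" period still needs a record. One way to generate a per-period review attestation in a tamper-evident-friendly form; all field names here are illustrative:

```python
# Sketch of a per-period review attestation so that quiet periods still
# produce evidence. Field names and the record shape are assumptions.
import json
from datetime import date

def review_attestation(reviewer, period_start, period_end, sources, findings):
    """Produce a JSON record of a completed review, including quiet periods."""
    return json.dumps({
        "reviewer": reviewer,
        "period": [period_start.isoformat(), period_end.isoformat()],
        "sources_reviewed": sorted(sources),
        "finding_count": len(findings),
        "outcome": "no findings" if not findings else "findings raised",
    }, sort_keys=True)
```

Emitting one such record per review period gives assessors something to sample even when nothing suspicious occurred.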

Step 6: Add control health monitoring for the logging pipeline itself

A common AU-6 failure mode is silent logging gaps. Treat the log pipeline as a monitored service:

  • Alert on log source silence (expected sources stop sending).
  • Alert on collection errors, parsing failures, and storage capacity thresholds.
  • Track exceptions: approved maintenance windows and documented rationale.

This directly supports AU-6’s objective: detect control failures and suspicious activity early. 1
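The log-source silence check described above can be sketched as follows; the expected sources and allowed gaps are illustrative assumptions:

```python
# Sketch of log-pipeline health monitoring: flag expected sources that have
# gone silent past their allowed gap. Source names and gaps are hypothetical.
from datetime import datetime, timedelta

EXPECTED_SOURCES = {   # source -> maximum allowed silence (assumed values)
    "cloud-audit-trail": timedelta(minutes=15),
    "idp-signin-logs":   timedelta(minutes=15),
    "app-audit-log":     timedelta(hours=1),
}

def silent_sources(last_seen, now, exceptions=frozenset()):
    """Return sources whose last event is too old, minus approved exceptions."""
    silent = []
    for source, max_gap in EXPECTED_SOURCES.items():
        if source in exceptions:
            continue  # approved maintenance window, documented elsewhere
        seen = last_seen.get(source)
        if seen is None or now - seen > max_gap:
            silent.append(source)
    return sorted(silent)
```

Running a check like this on a schedule, and ticketing its output like any other finding, turns "silent logging gaps" from an incident-time surprise into a routine detection.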

Step 7: Make it assessable (SSP-ready)

In your FedRAMP System Security Plan (SSP) and supporting procedures, document:

  • Review frequency and roles
  • Tooling and log sources in scope
  • Example outputs (sanitized tickets, reports)
  • How you ensure consistency and coverage

FedRAMP templates can help you align the narrative and evidence package. 2

Required evidence and artifacts to retain

Keep artifacts in a form that supports sampling by date range, system, and reviewer.

Core artifacts (expected):

  • AU-6 policy/standard and SOP (versioned, approved)
  • Log source inventory for the authorization boundary
  • Detection library / SIEM rule list mapped to key risks
  • Review schedule/calendar and on-call roster
  • Review records:
    • “Daily review checklist” or equivalent
    • Analyst notes or review attestations
  • Findings and follow-up:
    • Tickets/cases with timestamps, ownership, and resolution
    • Escalation records (pager/notification logs, emails, chat exports as permitted)
    • Incident links and lessons learned where escalated
  • Metrics (optional but helpful): trend reports, top alert categories, recurring control failures

Retention tip: Store evidence where it cannot be casually edited after the fact (case management system with audit trails). If you generate PDF reports, keep the underlying query output references.

Common exam/audit questions and hangups

Assessors typically probe AU-6 with sampling and “show me” questions:

  1. “What is your organization-defined frequency, and where is it documented?”
    Hangup: frequency exists informally or differs by team.

  2. “Show evidence of reviews for these sampled weeks/months.”
    Hangup: alerts exist, but no record that someone reviewed them or dispositioned them.

  3. “Who receives findings? Show the escalation path.”
    Hangup: findings routed to a shared inbox with no accountable role.

  4. “How do you detect audit logging failures?”
    Hangup: no monitoring for missing logs; gaps discovered only during incidents.

  5. “How do you distinguish benign anomalies from incidents?”
    Hangup: no documented triage criteria; decisions vary by analyst.

Frequent implementation mistakes and how to avoid them

  • Mistake: “We have a SIEM” as the control narrative. Why it fails AU-6: the control is about review, analysis, and reporting outcomes. Fix: add an SOP plus evidence: reviews, findings, reporting chain. 1
  • Mistake: Undefined review cadence. Why it fails AU-6: “organization-defined frequency” must be defined. Fix: put frequency per log class into the standard and follow it.
  • Mistake: No evidence for “no findings” periods. Why it fails AU-6: auditors can’t tell whether you reviewed or just have quiet logs. Fix: record completed reviews even when no issues were found.
  • Mistake: Alerts without case management. Why it fails AU-6: findings aren’t tracked to closure. Fix: route alerts into tickets with owners, timestamps, and closure reasons.
  • Mistake: Logging gaps not monitored. Why it fails AU-6: you can’t review logs you don’t have. Fix: alert on log source silence and ingestion errors.
  • Mistake: Overly broad detections causing alert fatigue. Why it fails AU-6: review becomes checkbox work and real issues get missed. Fix: start with high-signal detections and tune with documented rationale.

Risk implications (why operators should care)

AU-6 failures create two kinds of exposure:

  • Security exposure: suspicious activity persists longer because nobody is looking consistently. 1
  • Authorization exposure: you may be unable to demonstrate control operation during FedRAMP assessment and continuous monitoring reviews, creating plan-of-action pressure and potential authorization risk. 1

Practical 30/60/90-day execution plan

First 30 days: Define, scope, and make reviews visible

  • Publish the AU-6 standard: frequency, roles, escalation matrix, minimum evidence set. 1
  • Build the boundary log source inventory and identify coverage gaps.
  • Stand up a basic case workflow for audit findings (ticket type, required fields, owners).
  • Implement “log pipeline health” checks (missing logs, ingestion failures).

Days 31–60: Operationalize analysis and reporting

  • Create the initial detection/review playbooks (identity, privilege, audit setting changes, data access anomalies).
  • Run scheduled reviews and hold a weekly review meeting with Security + System Owners.
  • Start producing a recurring “audit review report” to the defined recipients (even if it is brief).
  • Tune detections based on false positives and document tuning decisions.

Days 61–90: Harden evidence and make it assessor-ready

  • Perform an internal “mini-assessment” sampling prior periods: can you show review records end-to-end?
  • Add quality controls: peer review for high-severity closures, consistent severity rubric.
  • Update SSP/control narratives and align artifacts to FedRAMP documentation expectations. 2
  • If you need to scale evidence collection and mapping across many systems and third parties, consider Daydream to standardize control ownership, evidence requests, and reviewer-ready packets without rebuilding the same audit trail every cycle.

Frequently Asked Questions

What counts as “organization-defined frequency” for AU-6?

AU-6 lets you choose the cadence, but you must document it and follow it consistently. Set different frequencies by risk level (for example, high-risk identity and privilege events more frequently than low-risk informational logs) and keep evidence of execution. 1

Do automated SIEM alerts satisfy “review and analyze” by themselves?

Alerts help, but AU-6 still expects analysis and reporting outcomes tied to accountable roles. If automation generates findings, you still need tracked triage, escalation, and closure records that show the alerts were reviewed and acted on. 1

How do we prove we reviewed logs when there were no incidents?

Keep review attestations or checklist records per period and per log source class, plus evidence that the review ran (saved queries, dashboard snapshots, or case notes). “No findings” is an outcome that still needs a record.

Which systems must be included in AU-6 log review for FedRAMP?

Include systems within the FedRAMP authorization boundary that generate audit records relevant to inappropriate or unusual activity, including identity, infrastructure control plane, OS, applications, and security tooling. Start from a boundary asset inventory and map each asset to its log source and review method. 1

What artifacts do assessors usually ask for during sampling?

They ask for your documented cadence and roles, then sample periods and request proof of completed reviews plus follow-up records (tickets/cases, escalations, and closures). If the evidence trail breaks between alert and resolution, AU-6 becomes a finding.

How should third parties fit into AU-6 operations?

If a third party operates components inside your boundary or provides managed security monitoring, define responsibilities in contracts/SOWs, require access to review evidence, and ensure findings are reported to your named roles. You remain accountable for proving AU-6 operation in the authorized environment. 1

Footnotes

  1. NIST Special Publication 800-53 Revision 5

  2. FedRAMP documents and templates
