AU-6(4): Central Review and Analysis
AU-6(4) requires you to implement a centralized capability to collect, review, and analyze audit records from multiple system components so security-relevant activity can be detected and investigated consistently. Operationalize it by onboarding all in-scope log sources into a central platform, defining review/alert workflows, and retaining repeatable evidence that review and analysis occurred.[1]
Key takeaways:
- Centralize audit records across components, then review and analyze them from a single place.[1]
- Auditors look for completeness (coverage), consistency (standard review/analysis), and proof (tickets, reports, dashboards, runbooks).
- Assign a control owner and build a recurring evidence package so AU-6(4) stays “always on,” not a one-time logging project.
The AU-6(4) central review and analysis requirement is a capability requirement, not a paperwork requirement. Your system can generate audit logs all day, but AU-6(4) expects you to bring audit records together across multiple components and perform review and analysis centrally so you can spot suspicious patterns that don’t appear in a single host or application view.[1]
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat this as a defined service: (1) identify the components that produce audit records, (2) ensure those records land in a central place, (3) define what “review” and “analysis” mean in your environment (roles, cadence, and outcomes), and (4) retain evidence that proves the process runs reliably.
This page gives requirement-level implementation guidance you can hand to your SOC lead, cloud/platform team, and internal audit partner. It focuses on operational decisions auditors test: scope boundaries, log onboarding, correlation across sources, and evidence quality. Where tooling decisions matter, the key is not the brand; it is whether the capability is central, repeatable, and provable.
Regulatory text
Requirement (AU-6(4)): “Provide and implement the capability to centrally review and analyze audit records from multiple components within the system.”[1]
What the operator must do
- Provide the capability: Stand up (or contract for) a centralized log collection and analysis function that can ingest audit records from across the system’s components (for example, identity, endpoints, servers, databases, network, cloud control plane, and applications).[1]
- Implement the capability: Actually use that central function for review and analysis, not just “we have a SIEM.” AU-6(4) is weak if central collection exists but nobody reviews outputs or investigates findings.[1]
Plain-English interpretation
AU-6(4) means: you cannot rely on each component’s local logs and ad hoc checks. You need central visibility and central analysis so you can correlate events across components (e.g., suspicious sign-in plus privilege change plus unusual data access) and drive action through triage and investigation.
Who it applies to (entity and operational context)
This control shows up most often in:
- Federal information systems and contractor systems handling federal data that adopt NIST SP 800-53 Rev. 5 control baselines.

Operationally, it applies to any environment where:
- The “system” is made of multiple components (common in hybrid cloud, SaaS + IaaS, microservices, and segmented networks).
- Audit records are distributed across teams (IT, SecOps, app owners) and tools (cloud provider logs, endpoint agents, application logs).
If you have third parties operating parts of the system (managed security provider, managed cloud, SaaS app that stores regulated data), your AU-6(4) story must still explain how their audit records are centrally reviewed and analyzed in your security operations process, or how you receive and act on their security-relevant logs and alerts.
What you actually need to do (step-by-step)
Step 1: Name a control owner and define “central”
Owner: assign one accountable owner (often SecOps/SOC manager) and one GRC owner who maintains the control narrative and evidence map.
Define “central” in your environment in a way you can defend in an exam:
- A central SIEM/log analytics platform, or
- A security data lake with a standard detection layer, or
- A managed detection service that ingests logs from multiple components and provides centralized review outputs.
Your definition must include: where logs land, who can access them, and how reviews/investigations are performed centrally.
Step 2: Build an authoritative log source inventory (system components)
Create a table of “components within the system” and their audit record sources. Minimum columns:
- Component / service
- Audit record types produced (auth, admin actions, data access, network flows, app events)
- Logging mechanism (native logs, agent, API export)
- Destination (central platform index/bucket)
- Onboarding status (planned / live)
- Owner (team/contact)
- Notes on gaps and compensating controls
This inventory is your primary scope artifact for AU-6(4) because it proves you understood “multiple components” and didn’t cherry-pick.
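One way to keep the inventory from going stale is to make it machine-readable so coverage gaps are queryable rather than buried in a spreadsheet. A minimal Python sketch with the columns listed above; the component names and schema are illustrative, not prescribed:

```python
from dataclasses import dataclass

# Hypothetical schema mirroring the minimum inventory columns above.
@dataclass
class LogSource:
    component: str       # component / service name
    record_types: list   # e.g. ["auth", "admin", "data_access"]
    mechanism: str       # "native", "agent", or "api_export"
    destination: str     # central platform index/bucket
    status: str          # "planned" or "live"
    owner: str           # team/contact
    notes: str = ""      # gaps and compensating controls

inventory = [
    LogSource("identity-provider", ["auth", "admin"], "api_export",
              "siem:index/idp", "live", "iam-team"),
    LogSource("legacy-erp", ["app"], "native", "n/a", "planned",
              "app-team", "cannot forward; local review as mitigation"),
]

# A coverage check an auditor-style script could run: what isn't live yet?
gaps = [s.component for s in inventory if s.status != "live"]
```

Running a check like this on a schedule turns the “onboarding status” column into the gap-tracking evidence Step 7 asks for.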
Step 3: Onboard audit records into a central collection point
Implement log forwarding/collection for each component. Practical checklist:
- Standardize timestamps (time sync, timezone handling).
- Normalize identity fields (user, service account, role, device) so correlation works.
- Protect integrity of audit records (restricted access, immutability features where available, controlled pipelines).
- Document exclusions (if a component cannot log, record the reason and mitigation).
A common audit failure: “We have a SIEM” but only the firewall and a few servers send logs. AU-6(4) expects multi-component centrality.[1]
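The timestamp and identity items in the checklist above are where cross-source correlation usually breaks. A minimal Python sketch of the idea, assuming ISO 8601 input timestamps; the identity field names are illustrative, not any specific product’s schema:

```python
from datetime import datetime, timezone

# Source-specific identity fields collapsed onto one canonical "user" key.
# These names are hypothetical examples of fields seen in different logs.
IDENTITY_KEYS = ("user", "userPrincipalName", "account_name", "src_user")

def normalize(event: dict) -> dict:
    out = dict(event)
    # Parse the timestamp and force it to UTC so events from different
    # components compare like with like during correlation.
    ts = datetime.fromisoformat(event["timestamp"])
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)  # assume UTC if unlabeled
    out["timestamp"] = ts.astimezone(timezone.utc).isoformat()
    # Normalize identity: first matching field wins, lowercased.
    for key in IDENTITY_KEYS:
        if key in event:
            out["user"] = str(event[key]).lower()
            break
    return out
```

In practice this logic lives in the collection pipeline (agent, forwarder, or ingest processor), but the normalization rules themselves should be documented so an assessor can see why correlation works.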
Step 4: Define central review and analysis procedures (what humans do)
Write a short, enforceable procedure that answers:
- What gets reviewed: dashboards, correlation rules, alert queues, and periodic log review reports.
- Who reviews: SOC analyst on duty, security engineer, or outsourced MDR with named roles.
- What “analysis” means: correlation across sources, triage, root cause, and escalation.
- What “done” looks like: tickets created, findings documented, false positives tuned, incidents escalated.
Keep it operational. Auditors prefer a procedure that maps to actual artifacts (alerts, tickets, reports) over a policy statement.
Step 5: Implement correlation and use-cases that prove “analysis”
You need to demonstrate cross-component analysis. Examples you can implement as detections or review queries:
- Sign-in anomalies (IdP) correlated with privileged role assignment (directory/IAM) and admin console activity (cloud control plane).
- Lateral movement indicators (endpoint) correlated with new inbound connections (network) and unusual service creation (server logs).
- Application admin changes (app logs) correlated with database access spikes (DB audit) and exports (storage audit).
Pick use-cases tied to your system’s threat model and data sensitivity. Document the rationale and the log sources used.
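The first use-case above can be sketched as a simple join on user and time. A hedged Python example; the event shapes and the 30-minute window are assumptions, not prescribed values:

```python
from datetime import datetime, timedelta

# Flag users with a sign-in anomaly (IdP) followed by a privileged role
# assignment (IAM) within a short window. Window size is an assumption.
WINDOW = timedelta(minutes=30)

def _ts(event: dict) -> datetime:
    return datetime.fromisoformat(event["timestamp"])

def correlate(idp_anomalies: list, iam_grants: list) -> list:
    """Return (user, anomaly_ts, grant_ts) tuples worth analyst review."""
    hits = []
    for anomaly in idp_anomalies:
        for grant in iam_grants:
            if anomaly["user"] != grant["user"]:
                continue
            delta = _ts(grant) - _ts(anomaly)
            if timedelta(0) <= delta <= WINDOW:
                hits.append((anomaly["user"],
                             anomaly["timestamp"],
                             grant["timestamp"]))
    return hits
```

A real detection would run as a scheduled SIEM query or streaming rule, but the logic is the same: two sources, one normalized identity, one time window. Retaining the rule definition itself is part of your evidence.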
Step 6: Connect central review to incident response and remediation
AU-6(4) is tested through outcomes: what happens when central analysis finds something.
- Route alerts into your ticketing/incident tool.
- Define severity and escalation paths.
- Track tuning and closure codes (true positive, benign, needs more telemetry).
- Feed improvements back into onboarding gaps and detection logic.
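The closure codes above can be enforced in tooling so tuning metrics fall out of normal ticket work automatically. An illustrative Python sketch; the codes and ticket shape are assumptions:

```python
from collections import Counter

# Standardized closure codes from the step above.
CLOSURE_CODES = {"true_positive", "benign", "needs_more_telemetry"}

def close_ticket(ticket: dict, code: str, notes: str) -> dict:
    """Close a ticket with a controlled-vocabulary disposition."""
    if code not in CLOSURE_CODES:
        raise ValueError(f"unknown closure code: {code}")
    ticket.update(status="closed", closure_code=code, closure_notes=notes)
    return ticket

def tuning_summary(tickets: list) -> Counter:
    """Count closed tickets by code; a high 'benign' count points at
    detection rules that need tuning."""
    return Counter(t["closure_code"] for t in tickets
                   if t.get("status") == "closed")
```

Whether this lives in a SOAR playbook or a ticketing-system field, the point is the same: controlled dispositions turn triage into measurable evidence.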
Step 7: Make evidence generation automatic and recurring
Design the control so evidence falls out of normal work:
- Scheduled reports (weekly/monthly) from the central platform
- Alert and case exports
- Ticket metrics and closure notes
- Change records for detection rules and log onboarding
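One way to make the recurring report automatic is to serialize it from data the SOC already produces. A Python sketch; the report shape is an assumption, not a prescribed format:

```python
import json
from datetime import date

def evidence_report(period: str, reviewer: str,
                    alerts_reviewed: int, tickets: list) -> str:
    """Render a monthly AU-6(4) evidence record as JSON so it can be
    dropped into the evidence pack with no manual assembly."""
    report = {
        "control": "AU-6(4)",
        "period": period,
        "generated": date.today().isoformat(),
        "reviewer": reviewer,
        "alerts_reviewed": alerts_reviewed,
        "tickets": [{"id": t["id"], "disposition": t["disposition"]}
                    for t in tickets],
    }
    return json.dumps(report, indent=2)
```

Scheduling something like this alongside the platform’s native reports gives you reviewer identity, timestamps, and dispositions in one retained artifact.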
This reduces the “scramble before audit” failure mode. Daydream can help by mapping AU-6(4) to a control owner, a concrete implementation procedure, and a recurring evidence checklist so you can show continuous operation instead of a point-in-time screenshot.[1]
Required evidence and artifacts to retain
Use an evidence pack approach. Keep artifacts that prove capability, coverage, and operation:
- Control narrative (AU-6(4))
  - Central platform description (architecture diagram or written overview)
  - Scope statement: which components are in the system boundary and why
- Log source inventory
  - The authoritative table described above
  - Export or screenshot from the central platform showing onboarded sources (names match inventory)
- Procedures and runbooks
  - Central log review SOP (roles, steps, escalation)
  - Alert triage runbook and investigation checklist
- Operational evidence
  - Sample alerts/cases showing correlation across multiple sources
  - Tickets with investigation notes and disposition
  - Recurring review reports with reviewer identity and timestamps
  - Change records for detection rule updates and log onboarding
- Access control evidence
  - List of authorized users/roles for the central log platform
  - Evidence of restricted admin access and auditability of changes (platform audit logs)
Common exam/audit questions and hangups
Auditors and assessors tend to push on these points:
- “What do you mean by centrally?” Show the platform, its role in operations, and how multiple components feed it.[1]
- “Which components are in scope?” Produce the inventory and system boundary definition.
- “Prove you review and analyze.” Screenshots help, but tickets, cases, and reports with human action are stronger.
- “How do you know logs are complete?” Show onboarding status, monitoring for ingestion failures, and gap tracking.
- “Can you correlate identity across sources?” Demonstrate normalization (consistent user identifiers) and an example investigation timeline.
Frequent implementation mistakes and how to avoid them
| Mistake | Why it fails AU-6(4) | Fix |
|---|---|---|
| Treating AU-6(4) as “we have logging” | Local logs are not central review/analysis | Build a central pipeline and document it.[1] |
| Onboarding only security perimeter logs | Single-layer visibility misses cross-component patterns | Prioritize identity, admin actions, and app/data access logs. |
| No documented review workflow | Capability exists, but “review” cannot be proven | Create an SOP and tie it to tickets/cases. |
| Evidence is screenshots only | Screenshots don’t prove ongoing operation | Retain reports, exports, and ticket history. |
| MDR contract is vague | Outsourced monitoring can be “black box” | Require log source list, review outputs, and investigation artifacts. |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Practically, AU-6(4) gaps increase operational risk: without centralized review and analysis, investigations slow down, cross-component attack paths get missed, and you lose defensible evidence of monitoring effectiveness. In federal assessments, the more common impact is a control deficiency that expands the scope of corrective action and ongoing reporting.
A practical 30/60/90-day execution plan
Use this as a fast operational rollout plan. Adjust based on system size and tooling maturity.
First 30 days (stabilize scope and capability)
- Assign AU-6(4) control owner and GRC evidence owner.
- Define “central review and analysis” for your environment and document it.
- Build the log source inventory for all in-scope components.
- Confirm central platform access controls and admin roles.
- Select initial correlation use-cases tied to identity and privileged activity.
Days 31–60 (onboard sources and prove review)
- Onboard highest-value sources (identity, cloud control plane, endpoint/security tooling, core application logs).
- Publish the log review SOP and triage runbook.
- Stand up alert routing into ticketing and standardize closure notes.
- Produce the first recurring review report and retain it as evidence.
- Hold a tabletop investigation using centrally correlated logs; document findings and follow-ups.
Days 61–90 (make it durable and audit-ready)
- Expand onboarding to remaining components; track exclusions with compensating controls.
- Add correlation rules that demonstrate multi-component analysis beyond identity.
- Implement ingestion health checks and an owner for failures.
- Build a standing AU-6(4) evidence pack folder with a monthly cadence.
- Use Daydream to keep the control mapped to an owner, procedure, and evidence artifacts so audits don’t become a scavenger hunt.[1]
Frequently Asked Questions
Do we need a SIEM to meet the AU-6(4) central review and analysis requirement?
You need a centralized capability to review and analyze audit records across multiple components.[1] A SIEM is common, but a security data lake or managed detection service can work if it is truly central and produces reviewable, retained outputs.
What counts as “multiple components within the system”?
Any distinct system elements that generate audit records relevant to security monitoring, such as identity, endpoints, servers, cloud control plane, applications, and databases. Your system boundary and component inventory should define this clearly so assessors can test completeness.
If a third party runs part of the environment, can we inherit their monitoring?
You can rely on a third party for operations, but you still need a defensible centralized review and analysis capability and evidence that review occurs. Contractual deliverables should include log source coverage, alert outputs, and investigation artifacts you can retain.
How do we prove “analysis” versus just collecting logs?
Retain cases or tickets showing correlation across sources, investigation steps, and outcomes (false positive tuning, incident escalation, or remediation). Collection-only evidence rarely satisfies the “review and analyze” expectation.[1]
What evidence is strongest for auditors?
A log source inventory tied to central platform configuration, a written SOP, and a set of real operational records (alerts, cases, tickets, recurring review reports). These show both capability and execution over time.
What if some legacy systems can’t forward logs centrally?
Document the exception, the business and technical constraints, and the compensating controls (for example, local review procedure, constrained network access, or accelerated replacement plan). Keep the exception visible in your inventory so it is managed rather than hidden.
Footnotes
[1] NIST SP 800-53 Rev. 5 OSCAL JSON.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream