AU-2(1): Compilation of Audit Records from Multiple Sources

To meet the AU-2(1) (Compilation of Audit Records from Multiple Sources) requirement, you must design logging so audit records from different systems (for example, identity, endpoints, network, cloud, and applications) can be compiled into a coherent, correlated record set for review, investigation, and reporting. Operationalize it by standardizing event fields, synchronizing time, centralizing collection, and proving end-to-end completeness. 1

Key takeaways:

  • Compile audit records across multiple sources into a unified, reviewable dataset, not isolated log silos. 1
  • Standardization (time sync + common fields) is what makes “compilation” defensible during an assessment. 2
  • Evidence must show coverage, correlation capability, and ongoing operation (not a one-time setup). 1

AU-2(1) is a practical control enhancement: your audit records must be compilable from multiple sources so you can reconstruct events that span systems. In real environments, incidents and compliance questions rarely sit in one log stream. A privileged login might be in your identity provider, the command execution in an endpoint tool, the data access in an application log, and the egress signal in network telemetry. AU-2(1) pushes you to make those records “joinable” and retrievable as one timeline.

For a Compliance Officer, CCO, or GRC lead, the fast path is to treat AU-2(1) as an operating requirement for (1) centralized log collection, (2) normalization into a common schema, (3) reliable timestamps, and (4) correlation and retrieval workflows that security and audit can actually run. Your goal is assessment-ready proof that you can compile records across sources for selected event types and that the compilation is complete enough to support accountability and investigations. 3

Regulatory text

Excerpt (as provided): “NIST SP 800-53 control AU-2.1.” 1

What the operator must do (how to read that excerpt): AU-2(1) is the AU-2 enhancement that expects your audit logging program to compile audit records from multiple sources into a usable whole, so investigators and reviewers can understand a cross-system event sequence. Practically, this means you need an architecture and procedures that pull logs from your key systems, normalize key fields, and support correlation and retrieval for review and investigations. 3

Plain-English interpretation (requirement-level)

You satisfy AU-2(1) when you can answer: “Show me all relevant audit records for this user/session/asset across systems, in order, with consistent timestamps and identifiers, and show that this is standard practice, not an ad hoc effort.”

Key attributes assessors look for:

  • Multi-source coverage: more than one system of record (for example, identity + endpoint + application).
  • Join keys: common identifiers (user ID, device ID, session ID, request ID) that allow correlation.
  • Time integrity: synchronized clocks and consistent time zones.
  • Retrievability: a repeatable method to compile and export the relevant record set for review. 2

Who it applies to (entity and operational context)

Entities: Federal information systems and contractor systems handling federal data commonly inherit NIST SP 800-53 control expectations through ATO processes and contractual requirements. 2

Operational contexts where AU-2(1) becomes “real work”:

  • Hybrid identity: AD/Azure AD/Okta plus SaaS applications.
  • Cloud environments: CSP logs (control plane) plus workload logs (apps/containers).
  • Privileged access: admin actions across multiple systems.
  • Third-party connections: VPN, SSO, APIs, and managed service provider tooling.
  • Regulated operations: any environment where you must reconstruct activity for investigations or audits. 2

What you actually need to do (step-by-step)

1) Name an owner and define “compilation” for your environment

  • Assign a control owner (often Security Operations or Platform Security) and a GRC owner for evidence management.
  • Define what “compiled” means: searchable in one place, correlated by common fields, exportable for audit/legal, and retained per your retention standard. 1

2) Build a source inventory and event coverage map

Create a table that lists:

  • Log source (IdP, EDR, firewall, cloud audit logs, application audit logs, database audit logs)
  • Events captured (authentication, authorization changes, admin actions, data access, configuration changes)
  • Collection method (agent, API pull, syslog, native integration)
  • Critical join keys (user ID, hostname, cloud resource ID, request ID)
  • Retention/location (SIEM, log lake, ticketing attachments)

This table becomes your assessment backbone and prevents “we thought we had it” gaps. 2
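
A minimal sketch of such an inventory as structured data, plus a quick coverage check. All source names, fields, and join keys here are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class LogSource:
    """One row of the log source inventory (field names are illustrative)."""
    name: str
    events: list          # event types captured
    collection: str       # agent, API pull, syslog, native integration
    join_keys: list       # identifiers usable for cross-source correlation
    retention: str        # where records land and for how long

inventory = [
    LogSource("IdP", ["authentication", "role_grant"], "API pull",
              ["user_id", "session_id"], "SIEM, 1 year"),
    LogSource("EDR", ["process_exec", "admin_action"], "agent",
              ["user_id", "device_id"], "SIEM, 1 year"),
    LogSource("App audit", ["data_access"], "syslog",
              ["user_id", "request_id"], "log lake, 1 year"),
]

# Coverage check: flag sources that share no join key with the identity
# source, since their records cannot join a user-centric timeline.
idp_keys = set(inventory[0].join_keys)
uncorrelatable = [s.name for s in inventory[1:]
                  if not idp_keys & set(s.join_keys)]
```

Keeping the inventory as data rather than a document makes the “can we correlate this source?” question mechanically checkable during onboarding.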

3) Standardize timestamps and identifiers

Compilation fails most often on time and identity mismatches.

  • Enforce time synchronization on hosts and appliances (standard enterprise time service).
  • Standardize time zone handling in your logging pipeline (store in a consistent format).
  • Standardize identity fields: map email/UPN to immutable user IDs where possible; map device names to unique asset identifiers. 2
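
The timestamp rule can be sketched as a small normalization helper, assuming sources emit naive local timestamps and you standardize on UTC ISO 8601 (the zone name and sample value are illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_utc_iso(local_ts: str, source_tz: str) -> str:
    """Attach the source's known time zone to a naive timestamp,
    then emit a UTC ISO 8601 string for the logging pipeline."""
    local = datetime.fromisoformat(local_ts).replace(tzinfo=ZoneInfo(source_tz))
    return local.astimezone(timezone.utc).isoformat()

# Example: a US East Coast appliance log entry normalized to UTC.
normalized = to_utc_iso("2024-03-01T09:30:00", "America/New_York")
```

Storing every event in one canonical zone and format means compiled timelines sort correctly without per-source conversion at query time.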

4) Centralize collection and normalize into a common schema

  • Route logs into a centralized platform (SIEM/log analytics/log lake).
  • Normalize core fields across sources: actor, action, target, outcome, timestamp, source system, correlation IDs.
  • Preserve raw logs alongside parsed events to protect investigation fidelity. 2
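
A minimal sketch of field normalization into such a schema, assuming raw events arrive as nested JSON-style dicts; the dotted-path mapping and field names are illustrative:

```python
def get_path(event: dict, path: str):
    """Walk a dotted path like 'user.id' through a nested raw event."""
    cur = event
    for part in path.split("."):
        cur = cur.get(part) if isinstance(cur, dict) else None
        if cur is None:
            return None
    return cur

def normalize(event: dict, field_map: dict, source: str) -> dict:
    """Project a raw event onto the common schema; keep the raw record."""
    out = {common: get_path(event, raw) for common, raw in field_map.items()}
    out["source"] = source
    out["_raw"] = event   # preserve raw fidelity for investigations
    return out
```

The per-source `field_map` is exactly the normalization/correlation mapping document described later in the evidence list, expressed as data.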

5) Define correlation and compilation procedures (the “operator runbook”)

Write a short runbook that answers:

  • How to compile all records for a given user across systems for a specified time window.
  • How to compile records for a given asset or cloud resource.
  • How to compile records for an administrative change (for example, role grant) and downstream actions.
  • How to export the compiled set with integrity (case folder, immutable storage, chain-of-custody notes when required).

Keep the runbook concrete with example queries and screenshots. 2
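
The user-centric compilation can be sketched as one query, assuming events have already been normalized into a common schema (actor, timestamp, source) with UTC ISO 8601 timestamps, which sort lexicographically; all field names are illustrative:

```python
def compile_timeline(events: list, user_id: str, start: str, end: str) -> list:
    """Compile all normalized events for one user in [start, end],
    ordered by timestamp (UTC ISO 8601 strings sort lexicographically)."""
    hits = [e for e in events
            if e.get("actor") == user_id and start <= e["timestamp"] <= end]
    return sorted(hits, key=lambda e: (e["timestamp"], e["source"]))

events = [
    {"actor": "u-001", "timestamp": "2024-03-01T14:35:00+00:00",
     "source": "edr", "action": "process_exec"},
    {"actor": "u-001", "timestamp": "2024-03-01T14:30:00+00:00",
     "source": "idp", "action": "login"},
    {"actor": "u-002", "timestamp": "2024-03-01T14:31:00+00:00",
     "source": "idp", "action": "login"},
]
timeline = compile_timeline(events, "u-001",
                            "2024-03-01T14:00:00+00:00",
                            "2024-03-01T15:00:00+00:00")
```

The asset-centric and change-centric scenarios are the same query with a different filter field, which is why normalizing join keys up front pays off.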

6) Prove it works with a repeatable test

Run a tabletop or test scenario that generates multi-source activity (login → privileged change → data access → egress attempt) and verify:

  • All sources produced logs.
  • Logs arrived centrally within expected operational windows (define your own).
  • Events correlate via join keys and time.
  • An analyst can compile an evidence packet. 2
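
The first verification step, that every expected source produced logs, can be sketched as a simple coverage check over the compiled set (field names are illustrative):

```python
def verify_coverage(expected_sources: set, compiled_events: list) -> list:
    """Return the expected sources that produced no events in the
    compiled record set; an empty list means full coverage."""
    seen = {e["source"] for e in compiled_events}
    return sorted(set(expected_sources) - seen)
```

Running this after each test scenario turns “we thought we had it” gaps into a concrete, storable pass/fail result.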

7) Operationalize ongoing governance

  • Add new systems to the source inventory during change management.
  • Review logging health: missing sources, parsing failures, time drift, ingestion errors.
  • Schedule periodic re-tests and keep results as evidence. 1

Required evidence and artifacts to retain

Maintain artifacts that show both design and operation:

  • AU-2(1) control statement: scope, owner, systems in scope, and what “compilation” means internally. 1
  • Log source inventory + coverage map (the table described above), including data owners and collection methods.
  • Architecture diagram of log flow from sources to central repository.
  • Normalization/correlation mapping (field mapping document for key sources).
  • Time synchronization standard and proof of configuration for representative systems.
  • Runbook for compilation and export (queries, steps, access controls).
  • Test results: scenario description, sample correlated timeline, exported packet, and lessons learned.
  • Access control evidence for who can view/export compiled audit records (ties to least privilege expectations). 2

Common exam/audit questions and hangups

Assessors and auditors often ask:

  • “Which sources are considered ‘multiple sources’ in your system boundary, and why?”
  • “Show me a compiled timeline for a real ticket or test case.”
  • “How do you correlate identities across SaaS and on-prem?”
  • “How do you handle log gaps, parsing failures, or dropped events?”
  • “Can you export records in a way that preserves integrity and context?”

Hangups typically surface around inconsistent user identifiers, inconsistent timestamps, and missing application-layer audit logs. 2

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Central SIEM exists, but no normalization. Why it fails AU-2(1): logs are “present” but not compilable into one coherent record set. Fix: define a minimum common schema and enforce parsing/field mapping for in-scope sources.
  • Mistake: Time drift across sources. Why it fails AU-2(1): you cannot reconstruct the event sequence reliably. Fix: enforce time synchronization standards and verify drift in monitoring.
  • Mistake: Only infrastructure logs, no application audit logs. Why it fails AU-2(1): the user actions that matter are invisible. Fix: require application audit logging for key workflows and add it to the onboarding checklist.
  • Mistake: Correlation depends on display names/emails. Why it fails AU-2(1): renames break timelines and duplicates create ambiguity. Fix: use immutable IDs and maintain identity mapping tables.
  • Mistake: Evidence is ad hoc. Why it fails AU-2(1): you cannot prove ongoing operation. Fix: run periodic compilation tests and store results in your GRC evidence library.
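
The immutable-ID fix can be sketched as a small identity mapping table; every alias and ID below is invented for illustration:

```python
# Mapping from mutable identifiers (emails, account names) to an
# immutable user ID, so renames do not break compiled timelines.
IDENTITY_MAP = {
    "a.smith@example.com": "u-001",
    "alice.smith@example.com": "u-001",   # renamed mailbox, same person
    "asmith-admin": "u-001",              # privileged account, same person
}

def canonical_actor(raw_actor: str) -> str:
    """Resolve any known alias to its immutable ID; pass unknowns through
    unchanged so they surface for triage rather than disappearing."""
    return IDENTITY_MAP.get(raw_actor.lower(), raw_actor)
```

In practice this table is usually generated from the identity provider rather than maintained by hand, but the lookup shape is the same.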

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for AU-2(1), so this page does not cite enforcement actions.

Risk implications you can explain internally without over-claiming:

  • Without compilable multi-source audit records, investigations become slower and less reliable.
  • In an assessment, fragmented logging increases the chance of audit findings tied to incomplete monitoring, weak accountability, or inability to support incident reconstruction. 2

A practical 30/60/90-day execution plan

First 30 days (stabilize scope and prove a baseline)

  • Assign control ownership and write the AU-2(1) control statement. 1
  • Build the log source inventory and identify top-priority sources (identity, endpoints, network, cloud audit).
  • Confirm time synchronization standards and validate on representative systems.
  • Draft the compilation runbook with at least two compilation scenarios (user-centric and asset-centric).

Days 31–60 (make compilation real, not theoretical)

  • Onboard missing priority sources to centralized collection.
  • Implement field normalization for join keys and timestamps.
  • Create saved searches/queries and an export procedure for evidence packets.
  • Run a test scenario and store the compiled record set as evidence.

Days 61–90 (operationalize and make it assessable)

  • Add logging onboarding to change management so new systems cannot go live without audit logging coverage.
  • Define ongoing health checks (ingestion failures, parser errors, time drift) and assign review responsibilities.
  • Repeat the compilation test and demonstrate consistent results.
  • If you use Daydream for control management, map AU-2(1) to the control owner, implementation procedure, and recurring evidence artifacts so audits pull from a single source of truth. 1

Frequently Asked Questions

What counts as “multiple sources” for AU-2(1)?

Any distinct systems that each produce audit records needed to reconstruct activity across your boundary. Common examples are identity, endpoint security, network, cloud control plane logs, and application audit logs. 2

Do I need a SIEM to satisfy AU-2(1)?

The control expects compilable records; many organizations meet that through a SIEM or log analytics platform, but the tool is less important than central collection, normalization, and repeatable compilation procedures. 2

How do I prove “compilation” to an assessor?

Show a runbook plus a live or recorded demonstration compiling a cross-system timeline for a test scenario, then export the resulting record set and retain it with the ticket/test evidence. 2

What if some systems can only export logs manually?

Document the limitation, put the manual export steps into the runbook, and prioritize automation for high-risk sources. Assessors typically focus on whether you can consistently compile the records when needed. 2

How should third-party systems be handled?

Treat third-party-provided systems that process or store in-scope data as log sources that may need to feed your compilation process, either through direct integration or contractual reporting and export requirements. 2

What evidence do I keep if logs are too sensitive to attach to tickets?

Keep a minimal evidence packet: query screenshots, metadata summaries, hash/manifest of exported files, and an access-controlled storage reference that points to the retained records. Document who can retrieve the full dataset. 2
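
The hash/manifest part of that packet can be sketched as follows; the directory layout and function name are assumptions:

```python
import hashlib
from pathlib import Path

def build_manifest(export_dir: str) -> dict:
    """SHA-256 every exported file so the evidence packet can reference
    the full dataset without attaching the sensitive records themselves."""
    return {
        p.name: hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(Path(export_dir).iterdir()) if p.is_file()
    }
```

Store the manifest with the ticket; anyone later retrieving the access-controlled records can re-hash them to confirm the dataset is the one referenced.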

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

  3. NIST SP 800-53 Rev. 5; NIST SP 800-53 Rev. 5 OSCAL JSON

