AU-9(2): Store on Separate Physical Systems or Components

To meet the AU-9(2) (Store on Separate Physical Systems or Components) requirement, you must store audit records in a repository that lives on physically different infrastructure from the system (or component) generating those logs. Operationally, this means your production workloads cannot be the only place logs exist: logs must be exported to a separate host, appliance, or service designed to resist tampering if production is compromised.

Key takeaways:

  • “Separate” means a physically different system or component, not just a different folder, VM, or account on the same hardware.
  • Your design must protect audit records from a compromised production admin plane and support investigation and recovery.
  • Auditors look for architecture evidence + configuration proof + access controls + retention that show separation is real and enforced.

AU-9(2) is a design requirement that changes how you architect logging. Many teams think they satisfy audit-record protection by making logs “read-only,” turning on WORM storage, or restricting access in the SIEM. Those are useful controls, but AU-9(2) is specifically about where audit records are stored: the repository must be part of a physically different system or component than the one being audited.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to translate “physically different” into enforceable engineering decisions: separate log collectors, separate storage, separate administrative access paths, and documented data flows. You also need to define the scope (“which systems/components are being audited”) and the set of audit records covered by this requirement (the control includes an organization-defined parameter for which audit records must be stored separately).

This page gives you requirement-level implementation guidance you can hand to security engineering and platform teams: what to build, what to document, what to test, and what evidence to retain so AU-9(2) passes an assessment under NIST SP 800-53 Rev. 5. 1

Regulatory text

Requirement (AU-9(2)): “Store audit records … in a repository that is part of a physically different system or system component than the system or component being audited.” 2

Operator interpretation: You must design logging so that if the audited system (for example, a production application server, database server, or Kubernetes node) is compromised, the attacker cannot trivially alter or delete the authoritative audit trail because the records are already stored on different physical systems/components. The repository can still be reachable over the network; the key is that it is not the same physical system/component being audited. 1

Plain-English interpretation (what the control is really asking)

AU-9(2) is about survivability and integrity of evidence:

  • If production gets owned, your audit records should still exist elsewhere.
  • The “elsewhere” must be on physically different infrastructure, not just logically separated.
  • You must be able to show this separation in diagrams, configurations, and access controls.

A practical restatement you can put in your control narrative:

“All in-scope audit logs are transmitted off-system and stored in a centralized logging repository hosted on infrastructure physically separate from the generating systems. Access to the repository is restricted and monitored, and production administrators cannot modify or delete stored audit records without going through controlled, logged administrative paths.” 2

Who it applies to (entity and operational context)

This requirement is typically assessed in:

  • Federal information systems and programs aligned to NIST SP 800-53. 1
  • Contractor systems handling federal data where NIST SP 800-53 controls are contractually flowed down or used as the security baseline. 1

Operational contexts where AU-9(2) becomes high-friction:

  • Highly privileged production teams (SRE/platform) who also administer logging.
  • Ephemeral compute (containers, autoscaling) where local logs are transient.
  • Single-account cloud designs where logging services and production sit under the same administrative blast radius.

What you actually need to do (step-by-step)

Use this as an execution checklist you can assign to control owners.

1) Define scope and the “audit records” parameter

AU-9(2) includes an organization-defined parameter (“audit records …”). Translate that into an explicit list:

  • Which systems/components are “being audited” (apps, OS, DB, network, IAM, CI/CD, cloud control plane).
  • Which audit record classes are covered (authentication, authorization, admin actions, system changes, security events, data access where applicable).

Deliverable: AU-9(2) scoping statement (one page) with system inventory references and log source list. 2
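The scoping statement above is easier to review and keep current if the organization-defined parameter is captured as structured data rather than prose. A minimal sketch, assuming hypothetical system names and record classes (all placeholders, not a prescribed schema):

```python
# Hypothetical sketch: the AU-9(2) organization-defined parameter captured as
# structured, diffable data. System names and record classes are illustrative.
from dataclasses import dataclass


@dataclass
class Au9p2Scope:
    systems_audited: list[str]       # systems/components "being audited"
    audit_record_classes: list[str]  # record classes stored separately
    repository: str                  # the physically separate repository

    def covers(self, system: str, record_class: str) -> bool:
        """True if a (system, record class) pair is in scope for AU-9(2)."""
        return (system in self.systems_audited
                and record_class in self.audit_record_classes)


scope = Au9p2Scope(
    systems_audited=["prod-app-servers", "prod-db", "k8s-nodes", "iam", "ci-cd"],
    audit_record_classes=["authentication", "authorization", "admin-actions",
                          "system-changes", "security-events"],
    repository="central-siem",
)
```

A structure like this lets onboarding runbooks (step 6) check new log sources against scope automatically instead of re-reading a document.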

2) Choose an architecture that is physically separate

You need a repository that is not on the same physical system/component as the source. Common compliant patterns:

  • Central SIEM/log platform hosted on separate servers/appliances from production app servers.
  • Dedicated log collection tier (separate hosts) forwarding to dedicated storage/search nodes.
  • Cloud logging service where logs are exported off workload instances into a managed logging repository that is not co-resident on the same physical compute component.

Design expectations to bake in:

  • Logs are forwarded off-host promptly (near-real-time is typical; define your own standard).
  • The repository is administered separately enough that compromise of a production node does not equal compromise of the audit store.

Deliverable: Logging architecture diagram showing physical separation boundaries and data flows. 1

3) Enforce separation through access control and administrative boundaries

Physical separation can be undermined by shared admin control. Put guardrails in place:

  • Separate admin roles for production vs. audit repository administration.
  • Restrict delete/modify capabilities on stored audit records (even for admins), and require break-glass with approvals where feasible.
  • Centralize access via identity with strong authentication and logging of admin actions.

Deliverable: Role matrix and access review evidence for the logging repository. 1
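The role-matrix review above can be partially automated. A minimal sketch (not tied to any specific IAM product; role and user names are hypothetical) that flags users who hold both production-admin and audit-repository delete rights:

```python
# Illustrative separation-of-duties check over a role matrix.
# Role names and users below are hypothetical placeholders.
def find_sod_violations(role_matrix: dict[str, set[str]]) -> list[str]:
    """Return users who hold both production-admin and audit-delete roles."""
    prod_admins = role_matrix.get("production-admin", set())
    audit_deleters = role_matrix.get("audit-repo-delete", set())
    return sorted(prod_admins & audit_deleters)


roles = {
    "production-admin": {"alice", "bob"},
    "audit-repo-admin": {"carol"},
    "audit-repo-delete": {"carol", "bob"},  # bob holds both: a violation
}
violations = find_sod_violations(roles)
# violations == ["bob"]: escalate for remediation or break-glass review
```

Running a check like this on each access-review export turns the role matrix deliverable into recurring evidence rather than a one-time artifact.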

4) Configure reliable, tamper-resistant log transport

Your repository being separate only helps if logs get there reliably:

  • Use authenticated, encrypted transport from sources to collectors/repository.
  • Buffer locally or at the collector tier to prevent loss during network interruptions.
  • Monitor for pipeline health: dropped events, collector backlog, ingestion errors.

Deliverable: Configuration extracts/screenshots for forwarding, plus pipeline health monitoring alerts. 1
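The buffer-and-forward behavior described above can be sketched as follows. This is a toy model, not a real agent: `send` stands in for an assumed authenticated, TLS-protected transport, and events leave the local spool only after confirmed delivery, so a network interruption does not lose audit records.

```python
# Minimal sketch of buffered log forwarding: spool locally, drop the local
# copy only on confirmed delivery. The transport is an injected callable.
from collections import deque
from typing import Callable


class BufferedForwarder:
    def __init__(self, send: Callable[[str], bool]) -> None:
        self._send = send          # returns True on confirmed delivery
        self._spool: deque[str] = deque()

    def submit(self, event: str) -> None:
        self._spool.append(event)  # always spool first, then attempt flush
        self.flush()

    def flush(self) -> None:
        while self._spool:
            if not self._send(self._spool[0]):
                return             # transport down: keep events spooled
            self._spool.popleft()  # delivered: safe to drop local copy

    @property
    def backlog(self) -> int:
        return len(self._spool)    # expose for pipeline-health monitoring
```

During an outage the backlog grows rather than dropping events, which is exactly the signal your pipeline-health alerting should watch.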

5) Validate with a compromise-oriented test

Run a test that mirrors the threat AU-9(2) addresses:

  • Assume an attacker gains admin/root on a production host.
  • Confirm they cannot delete or alter historical centralized audit records from that host context.
  • Confirm attempts to access the repository generate their own audit records.

Deliverable: Test plan + results + remediation tickets if issues found. 1
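The success criteria of the compromise-oriented test can be modeled in a few lines. This is a toy simulation of the properties to verify, not a real repository product: deletes from any caller fail, and every attempt generates its own audit record.

```python
# Toy model of the two properties the test above checks: historical records
# cannot be deleted, and repository access attempts are themselves logged.
class AppendOnlyAuditStore:
    def __init__(self) -> None:
        self._records: list[str] = []
        self.access_log: list[str] = []  # the repository's own audit trail

    def append(self, record: str, actor: str) -> None:
        self.access_log.append(f"{actor} appended a record")
        self._records.append(record)

    def delete(self, index: int, actor: str) -> None:
        # Deletion attempts fail and leave their own audit record.
        self.access_log.append(f"{actor} attempted delete (denied)")
        raise PermissionError("audit records are append-only")

    def count(self) -> int:
        return len(self._records)
```

Your actual test replaces this simulation with the real repository: act as root on a production host, attempt the delete, and capture both the failure and the repository-side audit event as evidence.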

6) Document the control and map ownership and recurring evidence

Assessments fail as often from missing evidence as from weak engineering. Make it operational:

  • Control owner (Security, Platform, or SOC).
  • Runbook for onboarding new log sources.
  • Evidence cadence (access reviews, config drift checks, pipeline monitoring reviews).

Daydream fits naturally here as a control operations layer: track the AU-9(2) narrative, assign owners, and collect recurring artifacts (diagrams, access reviews, pipeline screenshots) so you can answer assessors fast without rebuilding proof each cycle. 2

Required evidence and artifacts to retain

Keep evidence that proves physical separation, actual forwarding, and restricted control:

Architecture & scope

  • System boundary diagram with audited components and log repository placement.
  • Data flow diagram for log sources → collectors → repository.
  • Inventory of in-scope log sources and audit record types (the organization-defined parameter output).

Configuration proof

  • Representative configurations for log forwarding agents/collectors.
  • Repository configuration showing storage location and administrative controls.
  • Screenshots/exports showing immutable or restricted retention settings, if used as compensating protection.

Access control

  • Role-based access control listing for the audit repository and collectors.
  • Access review records (who has admin, who can delete).
  • Break-glass procedure and audit trail of its use (if applicable).

Operational proof

  • Monitoring/alerting evidence for ingestion failures and pipeline health.
  • Test results demonstrating logs remain available if a production node is compromised.
  • Incident response procedure referencing centralized audit record retrieval.

Common exam/audit questions and hangups

Assessors tend to probe these areas:

  1. “Show me that the log repository is physically separate.”
    Expect to provide diagrams plus infrastructure inventory evidence (hostnames, cluster separation, or service architecture statements) tied back to your system boundary. 1

  2. “Can a production admin delete audit logs?”
    They will ask about privileged pathways, not just default permissions. Have a crisp answer: who can delete, under what process, and where that action is logged. 1

  3. “What happens if the audited system is the logging system?”
    If the component being audited includes parts of the logging pipeline, you need separation at the component level (for example, app logs stored on separate log storage nodes). Document the boundary clearly. 2

  4. “Are endpoint-local logs enough?”
    Local-only logs usually fail AU-9(2) because they are on the same system being audited. Central export is the normal expectation. 2

Frequent implementation mistakes (and how to avoid them)

  • Storing logs on the same host in a different directory. Why it fails: the records are not physically separate. Fix: forward off-host to a separate collector/repository. 2
  • A separate VM on the same physical hypervisor/host. Why it fails: “physical” separation is not demonstrated. Fix: use dedicated logging hosts/appliances or a managed service, and document the physical separation rationale and boundaries. 1
  • The SIEM is separate, but production admins have full delete rights. Why it fails: separation exists, but audit integrity is weak. Fix: separate duties, restrict deletes, require controlled break-glass, and log admin actions. 1
  • No evidence package. Why it fails: good engineering still fails the assessment without proof. Fix: define the evidence list and collect it continuously. 2

Enforcement context and risk implications

No public enforcement cases were provided for this specific requirement in the source catalog, so treat AU-9(2) as an assessment-driven requirement rather than a “case law” topic for this page.

Risk-wise, AU-9(2) is a control that reduces:

  • Forensic failure risk: inability to reconstruct events after compromise.
  • Insider/admin abuse risk: privileged users altering evidence on the same system they control.
  • Operational recovery risk: losing audit history when a system is rebuilt or autoscaled.

These risks show up during incident response and during authorization decisions for federal systems aligned to NIST SP 800-53. 1

Practical 30/60/90-day execution plan

First 30 days (design + scope locked)

  • Name a control owner and backups; publish the AU-9(2) control statement and scope (systems and audit record types). 2
  • Produce a current-state logging diagram and identify gaps where logs remain on-system.
  • Decide the target “separate physical repository” pattern and document why it is physically separate for your environment. 1

Days 31–60 (build + migrate highest-risk sources)

  • Implement or harden centralized collection and storage on physically separate infrastructure.
  • Onboard the highest-risk log sources first (identity, privileged actions, system changes).
  • Implement role separation and remove routine delete permissions from production admins where feasible. 1

Days 61–90 (prove it works + operationalize evidence)

  • Run a compromise-oriented test and capture results and remediations.
  • Turn on pipeline health monitoring and define the operational response for ingestion failures.
  • Stand up an evidence program: monthly exports of access lists, screenshots/config exports, updated diagrams, and a log source inventory snapshot. Daydream can track these artifacts and reminders so AU-9(2) stays assessment-ready. 1

Frequently Asked Questions

Does “separate physical systems or components” mean a different cloud account?

A different account can help with administrative separation, but AU-9(2) specifically requires a repository that is part of a physically different system/component than the system being audited. Document how your design achieves physical separation in the underlying hosting model. 2

Are immutable/WORM settings enough if logs are stored on the same server?

No. WORM can strengthen protection, but AU-9(2) is about storing audit records on physically different systems/components than the source. Forward logs off-system to meet the requirement. 2

What counts as the “system or component being audited” in a microservices environment?

Treat each workload platform component that generates security-relevant events as “being audited” (nodes, control planes, identity, CI/CD, apps). Your scope statement should list the components and the audit record types you export to the separate repository. 1

Can the SIEM and log storage be in the same cluster as production if access is restricted?

Restricting access helps, but it does not address the “physically different” requirement if the infrastructure is the same system/component. If you must share infrastructure, document a component-level separation argument and expect assessor scrutiny. 1

What evidence do assessors accept to prove physical separation?

Provide architecture diagrams, infrastructure inventories, and configuration exports showing where logs are stored and how they are transported off-host. Pair that with access control listings and test results showing a compromised production host cannot delete centralized audit records. 1

How do we operationalize this across third parties that host parts of our system?

Put log-export and retention requirements into third-party security addenda and onboarding checklists, then collect evidence that their audit records are exported to a separate repository or made available to yours. Tie that evidence to your AU-9(2) control narrative and assessment package. 1

Footnotes

  1. NIST SP 800-53 Rev. 5

  2. NIST SP 800-53 Rev. 5 OSCAL JSON

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream