AU-9(3): Cryptographic Protection
AU-9(3): Cryptographic Protection requires you to use cryptographic mechanisms to protect the integrity of audit logs and the tools that generate, transmit, store, and analyze them so logs can’t be altered without detection. Operationalize it by defining the “audit system of record,” enforcing cryptographic integrity controls end-to-end, and retaining evidence that those controls are configured, monitored, and tested. 1
Key takeaways:
- Protect audit data integrity (not confidentiality) with cryptographic mechanisms across the full log pipeline. 1
- Include audit tools (agents, collectors, SIEM, parsers) in scope; harden them and verify integrity of their code/config where feasible. 1
- Keep assessor-ready evidence: architecture, configs, key management, and testing that proves tamper-evidence and tool integrity protections operate. 2
You only get credit for the AU-9(3) cryptographic protection requirement if you can show an assessor that audit logs and audit tooling are protected against undetected modification. The practical goal is simple: if a privileged user, compromised workload, or third-party attacker tries to edit or delete audit records, you either prevent it or detect it quickly and can prove what happened.
Most teams already encrypt logs “somewhere,” but AU-9(3) is narrower and stricter than generic encryption-at-rest. It focuses on integrity of audit information and integrity of audit tools. That means you need (1) a clear definition of what counts as “audit information,” (2) an end-to-end design for how logs are generated, transported, stored, and queried, and (3) cryptographic integrity controls appropriate to each hop.
This page is written for a Compliance Officer, CCO, or GRC lead who needs to drive fast, requirement-level implementation with Security and IT. It provides a practical control design, step-by-step execution, and the exact evidence auditors ask for, grounded in NIST SP 800-53 Rev. 5 language. 2
Regulatory text
Requirement (AU-9(3)): “Implement cryptographic mechanisms to protect the integrity of audit information and audit tools.” 1
What the operator must do
- Audit information: Implement cryptographic integrity protections so audit records are tamper-evident (or tamper-resistant) from creation through storage and retrieval. 1
- Audit tools: Implement cryptographic integrity protections for the tools that collect, process, and manage audit records (agents, collectors, forwarders, SIEM components, parsers, and admin utilities). The intent is to reduce the chance that an attacker can modify the toolchain to falsify logs or suppress events. 1
Plain-English interpretation
AU-9(3) expects you to make log tampering detectable and hard. If someone changes a log line, reorders events, forges an entry, or replaces a collector configuration to stop logging, you need cryptographic controls that either block the change or create evidence that the change occurred.
This is broader than “encrypt the SIEM.” You need to cover:
- Log creation (endpoints/apps)
- Transport (forwarding and ingestion)
- Storage (object store, SIEM index, archive)
- Tooling (agents, collectors, parsers, correlation rules, admin consoles)
Who it applies to
AU-9(3) commonly applies in environments aligned to NIST SP 800-53 Rev. 5, including:
- Federal information systems and programs adopting NIST SP 800-53 baselines. 2
- Contractor systems handling federal data, where NIST controls flow down through contracts, ATO requirements, or customer security addenda. 2
Operationally, it applies anywhere you rely on audit logs for:
- Incident response and forensics
- Privileged activity monitoring
- Compliance attestations
- Detecting misuse by admins, insiders, or compromised service accounts
What you actually need to do (step-by-step)
Step 1: Declare your audit “system of record” and in-scope sources
- List log sources that feed security/compliance monitoring: IAM, OS, EDR, cloud control plane, database audit, application audit, network/security tools.
- Identify the authoritative store(s): SIEM indices, log lake, WORM archive, ticketing attachments.
- Define trust boundaries: where logs cross accounts, networks, or third parties.
Deliverable: “Audit logging data flow and system-of-record statement” owned by Security/GRC.
Step 2: Choose cryptographic integrity mechanisms per pipeline stage
Map controls to each hop in the pipeline. Typical patterns:
- In transit: Use authenticated encryption or strong channel protection (for example, TLS with mutual authentication) between agent → collector → SIEM/log lake to reduce injection and alteration risk.
- At rest: Use storage mechanisms that provide integrity checking and immutability properties (for example, WORM/immutable storage features), plus cryptographic checks (signatures/HMAC) for critical audit datasets where feasible.
- Record-level integrity: For high-risk logs (privileged actions, authentication events, admin changes), implement signing or chained-hash approaches so you can detect missing or modified entries.
- Tool integrity: Protect collector/agent binaries and configurations through signing/verification, hardened deployment pipelines, and controlled administrative access.
You do not need every mechanism everywhere; you need a defensible design that shows cryptographic integrity protections address realistic tampering paths for your environment. 1
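One common record-level pattern is a chained HMAC: each record's tag covers the record plus the previous tag, so editing, deleting, or reordering any entry breaks verification from that point forward. The sketch below is a minimal illustration, not a production design; the key literal is a placeholder for a key held in a KMS/HSM, and function names are illustrative.

```python
import hmac
import hashlib
import json

# Placeholder key for illustration only; a real deployment would fetch
# this from a KMS/HSM-backed service identity, never hard-code it.
KEY = b"example-integrity-key"

def chain_records(records, key=KEY):
    """Attach a chained HMAC tag to each record (a dict)."""
    prev_tag = b"\x00" * 32  # fixed anchor for the first record
    out = []
    for rec in records:
        body = json.dumps(rec, sort_keys=True).encode()
        tag = hmac.new(key, prev_tag + body, hashlib.sha256).digest()
        out.append({"record": rec, "tag": tag.hex()})
        prev_tag = tag
    return out

def verify_chain(chained, key=KEY):
    """Return the index of the first bad record, or -1 if the chain verifies."""
    prev_tag = b"\x00" * 32
    for i, entry in enumerate(chained):
        body = json.dumps(entry["record"], sort_keys=True).encode()
        expect = hmac.new(key, prev_tag + body, hashlib.sha256).digest()
        if not hmac.compare_digest(expect.hex(), entry["tag"]):
            return i
        prev_tag = expect
    return -1

logs = [{"event": "login", "user": "admin"}, {"event": "sudo", "user": "admin"}]
chained = chain_records(logs)
assert verify_chain(chained) == -1         # intact chain verifies
chained[0]["record"]["user"] = "attacker"  # simulate tampering
assert verify_chain(chained) == 0          # detected at the first bad record
```

The chained tag is what makes deletion and reordering detectable, not just edits; a per-record signature alone would miss a silently dropped entry.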
Step 3: Lock down keys and integrity anchors
Cryptographic integrity fails if keys are exposed or shared.
- Document where signing/HMAC keys live (KMS/HSM preferred when available).
- Restrict key access to a small set of service identities; avoid human access.
- Define rotation, revocation, and backup procedures.
- Ensure a clock synchronization strategy exists, since integrity verification and forensics often depend on time alignment.
Deliverable: “Audit integrity key management procedure” mapped to the audit pipeline.
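Rotation and revocation only work if verification logic distinguishes key states. A hypothetical sketch of the key-metadata side of this procedure (key IDs, statuses, and function names are all illustrative; actual key material stays in the KMS/HSM):

```python
from datetime import date

# Illustrative registry: only key IDs and metadata live here,
# never the key bytes themselves.
KEY_REGISTRY = {
    "audit-hmac-2024": {"status": "retired", "rotated": date(2025, 1, 1)},
    "audit-hmac-2025": {"status": "active",  "rotated": None},
    "audit-hmac-old":  {"status": "revoked", "rotated": date(2024, 6, 1)},
}

def key_usable_for_verification(key_id):
    """Retired keys may still verify older records; revoked keys may not."""
    meta = KEY_REGISTRY.get(key_id)
    return meta is not None and meta["status"] in ("active", "retired")

def key_usable_for_signing(key_id):
    """Only the single active key signs new records."""
    meta = KEY_REGISTRY.get(key_id)
    return meta is not None and meta["status"] == "active"
```

The design point: rotation retires a key without invalidating old evidence, while revocation (after suspected compromise) forces re-validation of anything it signed.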
Step 4: Harden audit tools as production security systems
Treat audit tooling like security infrastructure, not “just logging.”
- Baseline and control changes to: agent configs, collector pipelines, parsing rules, SIEM detection content, retention policies.
- Require change approval and testing for log pipeline changes that could drop or alter events.
- Implement integrity monitoring for the toolchain (file integrity monitoring on collectors, signed packages, or verified container images).
Practical operator test: Can a SIEM admin modify parsing so a class of events disappears without leaving a trail? Your design should make that hard and detectable. 1
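A minimal sketch of toolchain integrity monitoring, assuming a known-good manifest of SHA-256 digests for agent binaries and configs (in practice the manifest itself would be signed or pulled from a trusted pipeline so it cannot be silently edited; names here are illustrative):

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_against_manifest(files: dict, manifest: dict) -> list:
    """Return the names of files that are missing or whose digest drifted."""
    drifted = []
    for name, expected_digest in manifest.items():
        data = files.get(name)
        if data is None or sha256_bytes(data) != expected_digest:
            drifted.append(name)
    return drifted

# Simulated deployment: file contents keyed by path.
deployed = {"collector.conf": b"forward_to=siem.internal:6514\n"}
manifest = {"collector.conf": sha256_bytes(b"forward_to=siem.internal:6514\n")}
assert check_against_manifest(deployed, manifest) == []
deployed["collector.conf"] = b"forward_to=attacker.example:6514\n"  # tamper
assert check_against_manifest(deployed, manifest) == ["collector.conf"]
```

Commercial FIM tools and signed-package verification do the same comparison at scale; the alert on drift is what turns "hard" into "detectable."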
Step 5: Add continuous validation (not just configuration)
You need operational checks that confirm integrity protections keep working:
- Alerts on ingestion gaps (sudden drop to zero events from a critical source)
- Alerts on pipeline configuration changes
- Periodic integrity verification runs (sample validation of signed logs, archive immutability checks)
- Access reviews for privileged accounts that can alter logging or retention
Deliverable: “Audit integrity monitoring checklist” with owners and cadence.
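The ingestion-gap check above reduces to a simple comparison per window. A hedged sketch, assuming a per-source event count from your SIEM's monitoring API (source names and the zero-event threshold are illustrative; many teams alert on a sharp percentage drop instead):

```python
# Illustrative set of sources whose silence should page someone.
CRITICAL_SOURCES = {"iam", "edr", "cloud-control-plane"}

def ingestion_gaps(counts_by_source: dict) -> set:
    """Return critical sources with zero events in the latest window."""
    return {s for s in CRITICAL_SOURCES if counts_by_source.get(s, 0) == 0}

window = {"iam": 1200, "edr": 0, "firewall": 300}
# "edr" went silent and "cloud-control-plane" is missing entirely:
assert ingestion_gaps(window) == {"edr", "cloud-control-plane"}
```

Note that a source absent from the counts is treated the same as one reporting zero; a forwarder that was disabled produces no metrics at all.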
Step 6: Prove it with recurring evidence
Auditors tend to accept strong engineering choices if you can show:
- What is protected
- How it is protected
- How you know it stays protected
Daydream (or any GRC system you use) should map AU-9(3) to a control owner, implementation procedure, and a recurring evidence set so you stop rebuilding the same packet each assessment cycle. 1
Required evidence and artifacts to retain
Keep evidence that is both design and operational:
Design evidence
- Audit logging architecture diagram and data flow (sources → collectors → storage → analytics)
- Control narrative for AU-9(3) describing cryptographic integrity mechanisms and scope (audit info + audit tools)
- Key management procedure for integrity mechanisms (ownership, access model, rotation approach)
- Toolchain hardening standard (how you protect agents/collectors/SIEM configs)
Operational evidence
- Configuration screenshots/exports: TLS settings, mTLS where used, log forwarder configs, immutable storage settings
- Change management records for log pipeline and SIEM content changes
- Access review results for accounts/groups that can modify logging, collectors, retention, or SIEM ingestion
- Monitoring outputs: alerts for ingestion failures, config changes, integrity verification job results
- Sampling results: a short test showing integrity verification detects tampering or fails validation when a record is modified
Common exam/audit questions and hangups
- “Show me how logs are protected from alteration.” Expect follow-ups by hop: endpoint, transport, SIEM ingestion, archive.
- “What about the audit tools?” Auditors often find that teams secure log storage but ignore the collector and parser layers. AU-9(3) explicitly includes tools. 1
- “Who can disable logging or change retention?” Privileged access to the log pipeline is a core focus area; have a clean access story.
- “How do you know logs weren’t modified before ingestion?” If you can’t do record-level signing everywhere, document compensating controls (endpoint hardening, EDR, restricted admin access, monitored pipeline).
- “Where’s the evidence?” The most common failure mode is a plausible design with thin artifacts.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Equating encryption-at-rest with integrity. Fix: document integrity properties explicitly (signatures/HMAC, immutability controls, verified transport), and show how tampering is detected. 1
- Mistake: Securing the SIEM but not the collectors/agents. Fix: include audit tools in scope and apply hardening, integrity monitoring, and controlled changes. 1
- Mistake: No defined “system of record.” Fix: name the authoritative store and retention tier; tie evidence to it.
- Mistake: Key sprawl and shared secrets. Fix: centralize key control, minimize human access, document rotation and revocation.
- Mistake: One-time setup with no ongoing validation. Fix: implement monitoring for ingestion gaps and config changes; retain alert evidence.
Enforcement context and risk implications
No public enforcement cases were provided in the source material for AU-9(3). From a risk perspective, AU-9(3) gaps increase the chance that an attacker can cover tracks, weaken investigations, and undermine compliance claims that depend on trustworthy logs. The most common assessment risk is not an explicit technical failure; it is missing implementation evidence tied to AU-9(3). 1
A practical 30/60/90-day execution plan
First 30 days: Define scope, architecture, and ownership
- Assign a single control owner for AU-9(3) (Security Engineering or SecOps is typical; GRC coordinates evidence).
- Inventory audit sources and document the log data flow.
- Declare the audit system of record and retention tiers.
- Identify top tampering paths (privileged users, compromised workloads, third-party admin access).
By 60 days: Implement integrity controls where they matter most
- Enforce protected transport between major hops (agent/collector/SIEM).
- Turn on immutability controls for archived logs where available.
- Implement or formalize integrity protections for audit tools: signed packages/images, restricted admin access, change controls, configuration monitoring.
- Write the AU-9(3) control narrative and start collecting evidence on a recurring schedule.
By 90 days: Operationalize monitoring and testability
- Build alerting for ingestion gaps and pipeline changes; route to an owned queue with SLAs.
- Run a tabletop or technical test: attempt a controlled modification of a sample audit record or pipeline config and demonstrate detection.
- Package an assessor-ready evidence binder: diagrams, configs, change tickets, access reviews, test results.
- In Daydream, map AU-9(3) to owner, procedure, and recurring evidence tasks so the next audit is a refresh, not a rebuild. 1
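The controlled-modification drill can be as small as signing a sample record, altering it, and capturing the failed verification as evidence. A minimal sketch (key and record contents are stand-ins for whatever your real drill uses):

```python
import hmac
import hashlib

# Stand-in key for the drill; a real test would use the production
# verification path, not a local key.
KEY = b"drill-key"

def sign(record: bytes) -> str:
    return hmac.new(KEY, record, hashlib.sha256).hexdigest()

def verify(record: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(record), tag)

original = b'{"event":"retention_change","actor":"siem-admin"}'
tag = sign(original)
assert verify(original, tag) is True              # untampered record verifies
tampered = original.replace(b"siem-admin", b"unknown")
assert verify(tampered, tag) is False             # modification is detected
```

Retain the pass/fail output of the drill (with timestamps and the ticket that authorized it) as the sampling evidence described earlier.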
Frequently Asked Questions
Does AU-9(3) require encrypting audit logs?
AU-9(3) is specifically about protecting integrity of audit information and audit tools with cryptographic mechanisms. Encryption may help, but you should be ready to show tamper-evidence or tamper-resistance, not just confidentiality. 1
What counts as an “audit tool” under AU-9(3)?
Include the components that generate, collect, process, store, and analyze audit records, such as agents, collectors, forwarding pipelines, SIEM parsing and correlation components, and administration utilities. AU-9(3) explicitly includes audit tools in scope. 1
If we use a managed SIEM, are we automatically compliant?
No. You still own the end-to-end integrity story across sources, transport, configuration, and who can change pipeline settings. Managed services help, but auditors will still ask for your configs, access controls, and monitoring evidence. 2
Do we need digital signatures on every single log event?
The control requires cryptographic mechanisms, but it does not prescribe a single approach. Many teams apply stronger record-level integrity controls to the highest-risk audit streams (auth, privilege, admin changes) and use strong protected transport plus immutable storage for broader logs, with a documented rationale. 1
How do we show evidence of integrity protection without exposing keys or sensitive configs to auditors?
Provide redacted configuration exports, screenshots of enabled settings, and a test record demonstrating verification behavior (pass/fail) without disclosing key material. Keep full secrets in your secured vault; evidence should show control operation, not reveal secrets. 2
What’s the fastest way to fail AU-9(3) in an assessment?
Having a plausible logging design but no retained artifacts that prove cryptographic integrity protections and tool protection are configured and monitored. Missing evidence is a common risk factor tied to AU-9(3). 1
Footnotes
1. NIST SP 800-53 Rev. 5 OSCAL JSON
2. NIST SP 800-53 Rev. 5
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream