AU-5(1): Storage Capacity Warning

AU-5(1), the storage capacity warning requirement, means you must automatically alert the right responders, fast enough to act, when an audit log repository approaches its maximum capacity. Operationalize it by setting a measurable threshold, wiring monitoring to alerting and ticketing, testing the alert path, and retaining evidence that warnings trigger before logs are dropped.

Key takeaways:

  • Define three parameters: who gets warned, how quickly, and at what capacity threshold.
  • Implement automated monitoring and alert routing for every audit log repository, not just the SIEM.
  • Keep proof: configuration, test results, and incident/ticket history showing warnings were generated and handled.

Audit logging controls fail in boring ways. One of the most common is storage exhaustion: the log volume fills, the platform starts dropping events, or the system overwrites older records without anyone noticing. AU-5(1) exists to prevent that exact failure mode by forcing a warning before capacity becomes a data loss event.

For a Compliance Officer, CCO, or GRC lead, the work is less about “turning on monitoring” and more about making the requirement assessable: you need explicit parameters (recipient, time window, threshold), consistent implementation across log repositories, and durable evidence that the warning triggers and gets acted on. If you cannot prove those three things, the control is effectively not implemented even if engineers believe “alerts exist somewhere.”

This page translates AU-5(1) into an implementation checklist you can hand to Security Engineering, SRE, or your SIEM/logging team. It also highlights the audit traps that cause findings: missing scope (cloud-native logs, managed services, endpoints), thresholds that are not defined, and alerts that are configured but untested.

Regulatory text

Requirement (verbatim excerpt): “Provide a warning to {{ insert: param, au-05.01_odp.01 }} within {{ insert: param, au-05.01_odp.02 }} when allocated audit log storage volume reaches {{ insert: param, au-05.01_odp.03 }} of repository maximum audit log storage capacity.” 1

What the operator must do:
You must (1) identify who should receive the warning, (2) define how quickly they must be warned after the threshold is reached, and (3) define the capacity threshold relative to the repository’s maximum. Then you must implement automated warnings that reliably trigger at that threshold for each in-scope audit log repository. 2

Plain-English interpretation (what an assessor expects)

The AU-5(1) storage capacity warning requirement is about preventing silent loss of audit records. Assessors typically look for these elements:

  • Named recipients: a role or on-call function, not “the team.”
  • Defined timeliness: a stated response/notification time expectation tied to an operational path (pager, ticket, chat with escalation).
  • Concrete threshold: expressed as a portion of maximum capacity for the repository.
  • Complete coverage: each audit log repository that stores audit records for in-scope systems.
  • Evidence of operation: tests and/or real alert history showing the warning is generated.

Who it applies to (entity and operational context)

Entities:

  • Federal information systems and contractors handling federal data commonly inherit AU controls through program requirements and system security plans. 3

Operational contexts where AU-5(1) matters most:

  • Central logging/SIEM pipelines (indexer storage, hot/warm tiers, object storage buckets used for archives).
  • Cloud-native audit log stores (managed logging services with quotas, retention caps, or export buckets).
  • Endpoint/network/security tools that maintain local or tenant-level audit repositories.
  • Regulated multi-tenant environments where one tenant’s log surge can exhaust shared capacity.

If your environment includes third parties who host or operate logging components (managed SIEM, MSSP, cloud provider managed audit logs), you still own demonstrating that warnings exist and reach your responders. Contracting out operations does not remove your obligation to retain evidence.

What you actually need to do (step-by-step)

1) Set the three required parameters (make them auditable)

Document the organization-defined parameters embedded in the control statement:

  • Recipient (who is warned): e.g., SOC on-call, SRE on-call, IR lead distribution list.
  • Time window (within how long): define what “within” means in your environment (alert generation plus delivery).
  • Capacity threshold: define at what portion of maximum repository capacity the warning triggers.

These parameters are part of the requirement text and should be explicitly captured in your control narrative. 1

Decision note: pick recipients who can actually fix capacity issues (rotate indices, expand storage, adjust retention, fix ingestion spikes). Avoid routing only to a generic mailbox.
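To make the three parameters assessable, it can help to capture them as reviewable configuration rather than prose alone. A minimal Python sketch; the role names and values are hypothetical examples, not recommendations:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Au51Parameters:
    """Organization-defined parameters for AU-5(1), captured as reviewable config."""
    recipients: tuple[str, ...]   # who is warned: roles/on-call functions, not individuals
    warn_within_minutes: int      # max time from threshold breach to delivered warning
    threshold_percent: float      # warning threshold as a % of max repository capacity

# Hypothetical example values -- substitute your approved parameters.
PARAMS = Au51Parameters(
    recipients=("soc-oncall", "sre-oncall"),
    warn_within_minutes=15,
    threshold_percent=80.0,
)
```

Keeping the parameters in version control gives you a change history an assessor can trace to approvals.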

2) Inventory every audit log repository in scope

Build a list of “repositories” that store audit logs for the system boundary. Include:

  • SIEM/index clusters and their storage backends
  • Log forwarder buffers and queues if they can fill and drop events
  • Cloud audit log sinks and export targets
  • Application/platform audit log databases (where applicable)

Output artifact: an Audit Log Repository Inventory listing owner, platform, maximum-capacity model, and monitoring method.

3) Define “maximum capacity” per repository (the measurement model)

Repositories expose maximum capacity differently:

  • Fixed disk volume size (VMs, bare metal)
  • Quotas (managed services)
  • Logical limits (index limits, retention tier capacity)
  • Object storage with lifecycle policies (capacity “max” may be budgetary or policy-defined)

For each repository, write down:

  • What metric represents allocated volume
  • What represents maximum capacity
  • What “reaches threshold” means operationally

This mapping prevents a classic audit failure: an alert exists, but it monitors the wrong metric (for example, instance disk, not the log index volume).
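The measurement model above reduces to two small calculations: utilization relative to the repository maximum, and a threshold comparison. A minimal Python sketch; the byte figures are illustrative:

```python
def utilization_percent(used_bytes: int, max_bytes: int) -> float:
    """Allocated audit log volume as a percentage of the repository maximum."""
    if max_bytes <= 0:
        raise ValueError("repository maximum capacity must be positive")
    return 100.0 * used_bytes / max_bytes

def breaches_threshold(used_bytes: int, max_bytes: int,
                       threshold_percent: float) -> bool:
    """True when the repository has reached the AU-5(1) warning threshold."""
    return utilization_percent(used_bytes, max_bytes) >= threshold_percent

# Illustration: 850 GB used of a 1 TB maximum against an 80% threshold -> warn.
assert breaches_threshold(850, 1000, 80.0)
```

The important part is feeding these functions the metric you documented in step 3, not whatever metric is easiest to reach.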

4) Implement automated monitoring and warnings

Engineering tasks usually include:

  • Create alert rules on the chosen capacity metric(s).
  • Route alerts to the defined recipients via your alerting platform.
  • Ensure the warning is generated before the system drops or overwrites logs.
  • Add deduplication and escalation rules so repeated warnings don’t get ignored.

Minimum operational pattern (recommended):

  • Monitoring signal → alert rule → paging/ticket → runbook action → confirmation check (capacity reduced or storage expanded).
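The pattern above can be sketched as a single monitoring cycle. In this Python sketch, `notify` is a hypothetical stand-in for your real paging or ticketing integration (PagerDuty, Opsgenie, Jira, and so on):

```python
import time

def check_repository(name: str, used_bytes: int, max_bytes: int,
                     threshold_percent: float, notify) -> bool:
    """One monitoring cycle: evaluate capacity, warn if the threshold is reached.

    `notify` stands in for the real alert-routing integration; it receives a
    payload with enough context for a responder to act.
    """
    pct = 100.0 * used_bytes / max_bytes
    if pct >= threshold_percent:
        notify({
            "repository": name,
            "utilization_percent": round(pct, 1),
            "threshold_percent": threshold_percent,
            "detected_at": time.time(),  # when the warning fired (evidence)
        })
        return True
    return False
```

In practice this logic usually lives in your monitoring platform's alert rules rather than custom code, but the payload fields shown are the context responders need either way.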

5) Create a runbook that ties the warning to action

Your warning is only useful if responders know what to do. Document:

  • Immediate triage: is capacity growth real or metric noise?
  • Containment: expand storage, adjust ingestion, stop noisy sources, rotate logs safely
  • Preservation: confirm no audit records were lost, and if they were, open an incident and document scope

Keep the runbook short and executable. Auditors look for the connection between alerting and incident handling, even if AU-5(1) only says “warning.”

6) Test the alert path and retain results

Do not rely on “it should work.” Prove it:

  • Simulate capacity reaching the threshold in a non-production repository, or temporarily adjust the threshold to trigger a controlled test.
  • Verify delivery to the correct recipients.
  • Verify timestamps to show “within” is met.
  • Verify the alert contains enough context to act (repository name, current utilization, link to dashboard/runbook).
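Verifying “within” comes down to comparing the trigger and delivery timestamps from your test evidence. A minimal Python sketch with illustrative times:

```python
from datetime import datetime, timedelta, timezone

def delivered_within(triggered_at: datetime, received_at: datetime,
                     window_minutes: int) -> bool:
    """Check that a test warning reached recipients within the defined window."""
    return (received_at - triggered_at) <= timedelta(minutes=window_minutes)

# Illustrative test evidence: alert fired 12:00:00 UTC, page received
# 12:04:30 UTC, against a hypothetical 15-minute window.
t0 = datetime(2024, 1, 10, 12, 0, 0, tzinfo=timezone.utc)
t1 = datetime(2024, 1, 10, 12, 4, 30, tzinfo=timezone.utc)
assert delivered_within(t0, t1, 15)
```

Recording both timestamps in the test artifact is what lets an assessor confirm the window was met, not just that an alert arrived.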

7) Operationalize recurring evidence collection

Make evidence collection routine:

  • Export alert rule configuration snapshots on a schedule or upon change.
  • Keep alert history showing warnings occurred (or tests if warnings are rare).
  • Retain tickets/incidents created from warnings, including resolution notes.
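The configuration-snapshot step can be automated in a few lines. A Python sketch, assuming your alert rules can be exported as JSON; the filename scheme is hypothetical:

```python
import hashlib
import json
from datetime import date

def snapshot_alert_config(config: dict, out_dir: str = ".") -> str:
    """Write a dated, content-hashed snapshot of an alert rule for evidence retention."""
    body = json.dumps(config, indent=2, sort_keys=True)
    # A short content hash makes it obvious when the rule actually changed.
    digest = hashlib.sha256(body.encode()).hexdigest()[:12]
    path = f"{out_dir}/au5-1-alert-{date.today().isoformat()}-{digest}.json"
    with open(path, "w") as f:
        f.write(body)
    return path
```

Running a snapshot like this on a schedule (or on every rule change) produces the dated configuration trail assessors ask for.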

Tooling tip: if keeping evidence current across many repositories is a struggle, Daydream is useful as the system that maps AU-5(1) to an owner, a procedure, and recurring evidence artifacts, so audits do not become a screenshot scramble.

Required evidence and artifacts to retain

Keep artifacts that prove design and operation:

  1. Control narrative for AU-5(1) with the three defined parameters (recipient, time window, threshold). 1
  2. Audit Log Repository Inventory (scope list) with owners and monitoring coverage.
  3. Alert configuration evidence (screenshots/export/config-as-code) showing:
    • metric monitored
    • threshold condition
    • routing/recipients
    • severity/escalation behavior
  4. Runbook / SOP for responding to storage capacity warnings.
  5. Test evidence: dated test results, notifications received, and ticket created.
  6. Operational records: alert history and related tickets/incidents showing warnings were handled.

Common exam/audit questions and hangups

  • “Show me where the threshold is defined and approved.”
  • “Which repositories are in scope, and how did you identify them?”
  • “Who receives the warning? How do you ensure coverage after-hours?”
  • “How do you prove the warning occurs within your defined time window?”
  • “What happens if the SIEM is down? Do you still detect repository saturation?”
  • “Do third-party operated log stores provide equivalent warnings, and do you receive them?”

Hangup to expect: teams often show a dashboard, not a warning. AU-5(1) asks for a warning to recipients, not passive visibility.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: monitoring only the SIEM cluster disk. Why it fails: audit logs may also live in cloud exports, app databases, and agent buffers. How to avoid it: maintain a repository inventory and map each entry to a monitoring signal.
  • Mistake: alerts go to an email inbox. Why it fails: no guaranteed response time and weak on-call coverage. How to avoid it: route to on-call paging and/or ticketing with escalation.
  • Mistake: the threshold is undocumented. Why it fails: an assessor cannot verify the AU-5(1) parameters. How to avoid it: put the parameters in the control narrative and runbook.
  • Mistake: the warning triggers too late. Why it fails: you still lose logs before anyone can act. How to avoid it: validate that the warning fires before overwrite/drop behavior begins.
  • Mistake: no testing evidence. Why it fails: “configured” is not “working.” How to avoid it: run periodic controlled alert tests and retain the results.

Enforcement context and risk implications

Public enforcement actions rarely cite this specific enhancement, so treat it as a control assurance expectation rather than a case-driven requirement.

Risk-wise, storage exhaustion can create:

  • Breaks in audit trails that block incident investigations and root-cause analysis.
  • Reporting gaps for security monitoring and compliance attestations.
  • Increased breach impact if you cannot reconstruct what happened.

For federal and contractor environments aligned to NIST SP 800-53, auditors commonly treat loss of audit records as a serious operational control failure because it undermines multiple AU controls beyond AU-5(1). 3

A practical 30/60/90-day execution plan

First 30 days (establish control design and scope)

  • Assign a control owner and named engineering owner for logging platforms.
  • Define AU-5(1) parameters: recipients, warning time window, capacity threshold. 1
  • Build the audit log repository inventory and confirm system boundary coverage.
  • Draft the response runbook and escalation path.

By 60 days (implement and prove operation)

  • Implement alert rules and routing for each repository in the inventory.
  • Connect alerts to ticketing for traceability.
  • Perform controlled tests per repository type and retain evidence.
  • Update the SSP/control narrative to reflect actual implementation details. 3

By 90 days (stabilize and make audit-ready)

  • Add recurring evidence capture (config snapshots and alert history exports).
  • Review noise and tune alert thresholds to reduce false positives without delaying detection.
  • Run a tabletop for “log repository full” and verify incident handling integrates audit log preservation.
  • Centralize artifacts in your GRC system; Daydream can help keep the AU-5(1) mapping, procedures, and evidence current across repositories.

Frequently Asked Questions

What counts as an “audit log repository” for AU-5(1)?

Any storage location where audit records are kept for an in-scope system boundary counts, including SIEM storage, cloud audit log exports, and tool-specific audit databases. Document the list and monitor each repository’s capacity against its defined maximum. 1

Can we satisfy AU-5(1) with a dashboard instead of alerts?

A dashboard supports visibility, but AU-5(1) requires a warning to defined recipients within a defined time window. Use alerts routed to on-call/ticketing, and keep evidence of delivery. 1

How do we handle “maximum capacity” in elastic or managed cloud logging?

Define maximum capacity based on the service’s quota/limit model or your configured caps for the repository, then alert on the metric that reflects allocated consumption against that cap. Record the mapping so an assessor can trace threshold logic to the repository limit. 3

What evidence is strongest for auditors?

Configuration exports or screenshots of the alert rule and routing, plus dated test results that show the warning was generated and received by the correct recipients. Ticket records tied to warnings strengthen operational proof.

Do we need to warn multiple teams?

Warn the team that can take action fast, then escalate if the condition persists. If Security monitors but SRE expands storage, route to both or route to SRE with Security copied so investigation and preservation stay aligned.

How do we operationalize this when a third party runs our SIEM?

Require the third party to provide capacity warning notifications to your named recipients and to supply alert configuration and notification evidence. Keep those artifacts with your AU-5(1) control record so you can prove the warning reaches your operators.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5 OSCAL JSON

  3. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream