AU-5(3): Configurable Traffic Volume Thresholds
AU-5(3) requires you to set and enforce configurable network traffic volume thresholds that are tied to your audit log storage limits, and to take defined actions when traffic exceeds those thresholds. Operationally, you implement monitored thresholds at key network choke points, connect them to alerting and automated responses, and retain evidence that thresholds are configured, reviewed, and effective. (NIST SP 800-53 Rev. 5 OSCAL JSON)
Key takeaways:
- Thresholds must be configurable, enforced, and linked to audit log storage capacity. (NIST SP 800-53 Rev. 5 OSCAL JSON)
- You need defined actions for traffic above thresholds, not just dashboards. (NIST SP 800-53 Rev. 5 OSCAL JSON)
- Audit readiness depends on repeatable evidence: configs, change records, alerts, and response outcomes.
The AU-5(3) configurable traffic volume thresholds enhancement sits in the Audit and Accountability (AU) family and targets a failure mode that shows up in real incidents: spikes in network communications can cause telemetry overload, delayed ingestion, dropped logs, or rapid exhaustion of log storage. When that happens, you lose forensic visibility precisely when you need it most.
This enhancement is not asking you to “monitor the network” in a generic sense. It expects you to set explicit volume thresholds that reflect the actual limits of your audit log storage and then enforce those thresholds through technical controls and operational response. That means your thresholds are not arbitrary “nice to have” alert levels; they are engineered to protect logging capacity and support continuity of audit logging under abnormal traffic conditions. (NIST SP 800-53 Rev. 5)
For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalization is to (1) define ownership and scope, (2) pick measurement points and traffic types that matter for audit generation and ingestion, (3) configure enforceable thresholds with response playbooks, and (4) retain evidence that proves the thresholds are set, reviewed, and triggered responses are handled consistently.
Regulatory text
Text (excerpt): “Enforce configurable network communications traffic volume thresholds reflecting limits on audit log storage capacity and {{ insert: param, au-05.03_odp }} network traffic above those thresholds.” (NIST SP 800-53 Rev. 5 OSCAL JSON)
Operator meaning:
You must do two things:
- Set thresholds that are configurable and specifically derived from audit log storage capacity constraints (for example, SIEM indexing capacity, collector disk limits, log pipeline throughput, or managed service quotas). (NIST SP 800-53 Rev. 5 OSCAL JSON)
- Enforce a response when network traffic exceeds those thresholds, based on your organization-defined parameter for how to handle “traffic above those thresholds” (the control leaves the exact enforcement action to your defined approach, but it expects enforcement, not passive monitoring). (NIST SP 800-53 Rev. 5 OSCAL JSON)
Plain-English interpretation
AU-5(3) is about preventing “logging blind spots” created by traffic surges. You’re implementing guardrails so abnormal network communications do not silently overwhelm audit log storage or the logging pipeline. In practice, teams satisfy this by tying traffic volume alerts to automated and manual actions such as traffic shaping, rate limiting, DDoS scrubbing escalation, quarantining chatty workloads, or increasing ingestion capacity with documented approval.
Who it applies to
Entity types: Federal information systems and contractor systems handling federal data. (NIST SP 800-53 Rev. 5 OSCAL JSON)
Operational contexts where assessors expect to see this implemented:
- Systems with centralized logging (SIEM) where ingestion can be saturated by bursts.
- Environments with shared log collectors, forwarders, or agents (a single “chatty” segment can crowd out other logs).
- Boundary and segmentation points (internet gateways, VPN concentrators, cloud egress points, inter-VPC/VNET routing layers).
- Third-party managed logging or SOC services where capacity limits exist contractually or technically.
Control ownership (practical):
- Primary: Security Engineering / Network Engineering (threshold enforcement)
- Shared: SOC/IR (alert triage and response), Platform/SRE (capacity planning), GRC (control definition, evidence, testing)
What you actually need to do (step-by-step)
Step 1: Define scope and “traffic volume” in enforceable terms
Decide what “traffic volume” means in your environment so engineers can implement it consistently:
- Bandwidth (bps), packets per second, flows per second, connections per second
- Traffic by segment (ingress/egress), by protocol/port, by identity (service account, workload), by destination class (SIEM/log endpoints)
Write this as an implementation standard: “We measure and threshold traffic at X points using Y metric(s).”
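One lightweight way to make that standard unambiguous for engineers is to keep it machine-readable. The sketch below is illustrative, not prescriptive: the measurement point names and metric labels are assumptions you would replace with your own.

```python
# Hypothetical machine-readable form of the implementation standard
# ("we measure and threshold traffic at X points using Y metrics").
# Point names and metric labels are illustrative placeholders.

MEASUREMENT_STANDARD = {
    "internet_ingress": ["bps", "connections_per_sec"],
    "api_gateway": ["requests_per_sec", "connections_per_sec"],
    "egress_proxy": ["bps", "flows_per_sec"],
}

def metrics_for(point):
    """Look up which metrics are thresholded at a measurement point."""
    return MEASUREMENT_STANDARD.get(point, [])

print(metrics_for("api_gateway"))  # ['requests_per_sec', 'connections_per_sec']
```

Keeping the standard in a versioned file also gives you change-control evidence for free.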
Step 2: Identify audit-log capacity constraints and failure modes
Map your logging pipeline and record the constraints that matter:
- Collector disk capacity and retention buffering
- Ingestion throughput into SIEM / log analytics
- Queue depth in log shippers/streaming systems
- Rate limits on managed services
Then document the failure mode you are preventing: “If network traffic spikes, audit events increase and collectors back up, risking dropped audit logs.” Tie that explicitly to AU-5(3). (NIST SP 800-53 Rev. 5 OSCAL JSON)
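The mapping above can be reduced to a simple question: which pipeline stage saturates first? A minimal sketch, with entirely illustrative numbers, of finding the binding constraint:

```python
# Hypothetical sketch: model each audit pipeline stage's sustainable
# events-per-second limit and find the one that binds first.
# All stage names and numbers are illustrative.

def binding_constraint(constraints_eps):
    """Return the (stage, limit) pair with the lowest events/sec limit."""
    stage = min(constraints_eps, key=constraints_eps.get)
    return stage, constraints_eps[stage]

constraints = {
    "collector_disk_buffer": 60_000,   # eps before buffering backs up
    "siem_ingestion": 45_000,          # contracted indexing rate
    "shipper_queue": 80_000,           # queue drain rate
    "managed_service_quota": 50_000,   # provider rate limit
}

stage, limit = binding_constraint(constraints)
print(f"Binding constraint: {stage} at {limit} events/sec")
```

The binding stage is the number your traffic thresholds must protect; document it alongside the failure-mode statement.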
Step 3: Choose monitoring and enforcement points (“choke points”)
Pick the technical control points where enforcement is possible:
- Perimeter firewalls / cloud security groups + network firewalls
- WAF / API gateway
- DDoS protection service
- Kubernetes ingress controllers / service mesh policies
- Egress proxies / NAT gateways
- Flow telemetry layer (NetFlow/VPC Flow Logs) paired with automated control actions
If you cannot enforce at the network device, enforce at the application gateway or platform layer where traffic control is real.
Step 4: Set configurable thresholds tied to capacity, not guesses
Establish thresholds by working backward from capacity:
- Determine maximum sustainable log ingestion (or storage growth) without loss.
- Identify which traffic classes materially drive audit generation (authentication endpoints, API gateways, admin paths, east-west traffic in high-churn clusters).
- Convert capacity to a threshold you can implement (for example, a maximum connections/sec at the API gateway that correlates to a log rate you can store).
Requirements focus: thresholds must be configurable. Implement them as policy-as-code, firewall objects, gateway limits, or managed service configs with change control.
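The "work backward from capacity" step can be expressed as a back-of-envelope conversion. In this sketch, the events-per-connection ratio and safety margin are assumptions you would measure and choose for your environment:

```python
# Hypothetical conversion: audit ingestion capacity -> enforceable
# traffic threshold. events_per_conn (measured) and safety_margin
# (policy choice) are assumptions, not prescribed values.

def traffic_threshold(max_ingest_eps, events_per_conn, safety_margin=0.8):
    """Max connections/sec at the gateway that keeps log ingest under capacity."""
    if events_per_conn <= 0:
        raise ValueError("events_per_conn must be positive")
    return int((max_ingest_eps * safety_margin) / events_per_conn)

# Example: SIEM sustains 45,000 events/sec; each connection generates
# roughly 6 audit events; keep 20% headroom.
print(traffic_threshold(45_000, 6))  # -> 6000 connections/sec
```

Retaining this calculation (however informal) is exactly the threshold rationale an assessor will ask for.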
Step 5: Define “traffic above thresholds” actions (your ODP)
The enhancement expects enforcement for traffic above the threshold. Your enforcement actions should be explicit and pre-approved. Examples you can document:
- Alert SOC and open incident ticket with severity mapping.
- Automatically rate-limit specific endpoints or clients.
- Block traffic from offending IPs or quarantine a workload identity.
- Shift traffic to scrubbing service or enable stricter WAF rules.
- Temporarily increase logging pipeline capacity (if supported) under an emergency change.
Document decision criteria: what is automated vs. human-approved, and what is the rollback procedure.
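Those decision criteria can be captured as a dispatch table so the automated-versus-human-approved boundary is explicit and testable. The tiers and action names below are illustrative assumptions:

```python
# Hypothetical dispatch table mapping exceedance severity to a
# pre-approved action. Tier boundaries and action names are
# illustrative; your ODP defines the real ones.

def select_action(observed, threshold):
    """Pick an enforcement action based on how far traffic exceeds the threshold."""
    ratio = observed / threshold
    if ratio <= 1.0:
        return "none"
    if ratio <= 1.2:
        return "alert_soc"               # automated: notify + open ticket
    if ratio <= 2.0:
        return "rate_limit_endpoint"     # automated: pre-approved rate limit
    return "block_pending_approval"      # human-approved: block or quarantine

print(select_action(7_500, 6_000))  # -> rate_limit_endpoint
```

Encoding the table this way also makes the rollback boundary obvious: anything above the automated tiers requires a named approver.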
Step 6: Integrate alerting, case management, and runbooks
An auditor will test that exceedances create detectable events and lead to action:
- Alert routes to on-call/SOC channel
- Ticket automatically created with required fields (time, metric, threshold value, affected assets)
- Runbook includes initial triage questions (false positive checks, impact check, containment options)
- Post-incident review requires confirming no audit log loss
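A simple way to guarantee tickets always carry the required fields is to build and validate the payload in code. The field names and severity rule here are assumptions, not a prescribed schema:

```python
# Hypothetical ticket payload builder enforcing the fields an assessor
# expects: time, metric, threshold value, affected assets. Field names
# and the severity rule are illustrative assumptions.

REQUIRED_FIELDS = ("timestamp", "metric", "threshold", "observed", "assets")

def build_ticket(timestamp, metric, threshold, observed, assets):
    ticket = {
        "timestamp": timestamp,
        "metric": metric,
        "threshold": threshold,
        "observed": observed,
        "assets": assets,
        "severity": "high" if observed > 2 * threshold else "medium",
    }
    missing = [f for f in REQUIRED_FIELDS if ticket.get(f) in (None, "", [])]
    if missing:
        raise ValueError(f"ticket missing required fields: {missing}")
    return ticket

t = build_ticket("2024-05-01T12:00:00Z", "conn_per_sec", 6_000, 9_000, ["api-gw-1"])
print(t["severity"])  # -> medium
```

Wiring this validation into the alert-to-ticket automation means every exceedance record is audit-ready by construction.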
Step 7: Test and validate
Run a controlled test (in a non-production segment or with synthetic traffic) to prove:
- Threshold triggers at the configured value
- Enforcement action occurs (rate limit/block/route)
- Logging pipeline remains within capacity and audit logs remain available
Capture evidence from the test in a repeatable format.
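The validation logic itself can be exercised offline before you run the live test. This sketch replays a synthetic traffic series against the configured threshold and confirms the trigger fires; the numbers are illustrative:

```python
# Hypothetical offline harness: replay a synthetic traffic series and
# confirm the threshold fires at the configured value. Values are
# illustrative; a real test would also verify the enforcement action.

def evaluate(series, threshold):
    """Return the (index, value) samples that exceeded the threshold."""
    return [(i, v) for i, v in enumerate(series) if v > threshold]

threshold = 6_000
synthetic = [2_000, 5_900, 6_001, 9_500, 4_000]  # injected spike at indexes 2-3

exceedances = evaluate(synthetic, threshold)
assert exceedances, "threshold never triggered during the test"
print(f"{len(exceedances)} exceedance samples: {exceedances}")
```

Saving the series, the threshold, and the output together gives you a repeatable test artifact for the evidence package.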
Step 8: Operationalize governance (ownership + recurring evidence)
Assign a control owner and set recurring review triggers:
- Review thresholds after major architecture changes, logging pipeline changes, or traffic pattern changes
- Review after any incident where logs were delayed/dropped or storage neared capacity
Daydream (as a practical workflow) fits here as a system of record to map AU-5(3) to a named owner, a concrete procedure, and a recurring evidence checklist so the control stays assessable over time. (NIST SP 800-53 Rev. 5 OSCAL JSON)
Required evidence and artifacts to retain
Maintain evidence that proves both configuration and enforcement:
Configuration evidence
- Network/gateway/firewall configs showing threshold values and that they are change-controlled
- Logging pipeline capacity documentation (ingestion limits, storage sizing assumptions, retention/buffering design)
- Architecture diagram showing measurement and enforcement points
Operational evidence
- Alert definitions (rule logic, thresholds, routing)
- Runbooks/playbooks for “traffic above threshold” actions
- Tickets/incident records showing threshold exceedance handling
- Post-incident reviews where relevant (especially confirming audit logs remained available)
Governance evidence
- Control narrative mapping AU-5(3) to systems in scope, owners, tools, and review cadence
- Change management records for threshold updates (who approved, why, when)
Common exam/audit questions and hangups
Expect assessors to ask:
- “Show me where the thresholds are configured and how they relate to audit log storage capacity.” (NIST SP 800-53 Rev. 5 OSCAL JSON)
- “What happens when traffic exceeds the threshold? Is it enforced or just alerted?” (NIST SP 800-53 Rev. 5 OSCAL JSON)
- “Which segments are covered? Are you only monitoring the perimeter?”
- “How do you know logs weren’t dropped during the last spike?”
- “Who can change threshold values, and how do you prevent ad hoc weakening?”
Hangups that delay audits:
- Thresholds exist in a dashboard but are not enforceable.
- No documented linkage between thresholds and log storage/ingestion capacity.
- No evidence of testing or real-world trigger handling.
Frequent implementation mistakes and how to avoid them
- Mistake: Thresholds based on “normal traffic” rather than audit capacity.
  Fix: Start from log storage/ingestion constraints and work backward to traffic limits that protect the pipeline. (NIST SP 800-53 Rev. 5 OSCAL JSON)
- Mistake: One global threshold for everything.
  Fix: Use segmented thresholds for high-risk choke points (API gateway, admin interfaces, egress) and for critical environments.
- Mistake: Alerts without response authority.
  Fix: Pre-approve automated actions for defined scenarios and document who can authorize blocks or rate limits.
- Mistake: No proof that enforcement worked.
  Fix: Run a controlled test and retain artifacts (before/after graphs, rule logs, incident ticket).
- Mistake: SRE changes thresholds during incidents with no record.
  Fix: Require emergency change logging and retrospective approval so you can show governance.
Enforcement context and risk implications
No public enforcement cases were provided in the source materials for this requirement, so you should treat audit risk as primarily assessment-driven rather than case-law-driven.
Risk-wise, AU-5(3) failures show up as:
- Gaps in audit trails during high-traffic events (DDoS, credential stuffing, runaway jobs, misconfigured clients).
- Inability to support incident response timelines because logs are delayed, dropped, or overwritten.
- Weak defensibility in investigations and reporting because you cannot demonstrate audit log completeness during abnormal conditions.
A practical 30/60/90-day execution plan
First 30 days (stabilize and define)
- Name a control owner and approve a one-page AU-5(3) control narrative mapped to in-scope systems. (NIST SP 800-53 Rev. 5 OSCAL JSON)
- Document logging pipeline constraints and the most likely overload scenarios.
- Identify enforcement points and pick the initial traffic metrics (keep it implementable).
Days 31–60 (implement and integrate)
- Configure thresholds at selected choke points with change control.
- Build alerting and case creation, attach runbooks, and align SOC/SRE responsibilities.
- Define your “traffic above threshold” enforcement actions and approval rules. (NIST SP 800-53 Rev. 5 OSCAL JSON)
Days 61–90 (test, evidence, and harden)
- Execute a test that triggers thresholds and produces an auditable record.
- Tune thresholds to reduce noise while maintaining capacity protection.
- Package evidence: configs, diagrams, test results, sample alerts/tickets, and a recurring review checklist in Daydream or your GRC system of record.
Frequently Asked Questions
Does AU-5(3) require blocking traffic when thresholds are exceeded?
The text requires you to “enforce” thresholds and address “network traffic above those thresholds,” but it leaves the specific action to your organization-defined approach. Blocking is one option; rate limiting or routing to scrubbing can also qualify if it is enforceable and documented. (NIST SP 800-53 Rev. 5 OSCAL JSON)
How do I tie traffic thresholds to audit log storage capacity in a way an auditor will accept?
Document your logging pipeline constraints (ingestion rate, buffer depth, storage growth) and show how the selected traffic metric correlates to audit event volume or pipeline saturation. Keep the mapping simple and retain the sizing notes and the threshold rationale as evidence. (NIST SP 800-53 Rev. 5 OSCAL JSON)
What systems should be “in scope” for thresholds first?
Start with boundary and aggregation points that can affect the most logging: internet ingress, API gateways, VPN concentrators, and major egress points. Add internal segments where east-west spikes can overload collectors.
Are cloud-native controls (WAF/API gateway limits) acceptable, or do I need network appliances?
Cloud-native enforcement is acceptable if it is configurable, enforced, and produces auditable logs of the threshold configuration and actions taken. You still need to show the linkage to audit log capacity and consistent response. (NIST SP 800-53 Rev. 5 OSCAL JSON)
What evidence is strongest if we have never had a real threshold exceedance?
A controlled test plus screenshots/exports of the configured thresholds, the generated alert, the created ticket, and the enforcement action logs is usually stronger than “we haven’t seen it happen.” Preserve the test plan and results so you can repeat them.
How do we handle third-party managed logging limits?
Treat provider quotas and rate limits as part of your “audit log storage capacity” constraints. Set thresholds upstream to prevent saturating the third-party service, and retain the contract/SOW technical limits plus your threshold configuration rationale. (NIST SP 800-53 Rev. 5 OSCAL JSON)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream