SC-5(2): Capacity, Bandwidth, and Redundancy

SC-5(2): Capacity, Bandwidth, and Redundancy requires you to engineer and operate enough network and system headroom (plus redundant paths/services) to reduce the impact of information-flooding denial-of-service attacks. To operationalize it fast, define DDoS capacity targets for critical services, implement scalable bandwidth and redundancy controls, and retain recurring evidence that they are tested and monitored. 1

Key takeaways:

  • Treat SC-5(2) as an engineering-and-operations control: capacity planning, scalable defenses, and redundant design must be measurable and testable. 1
  • Auditors look for proof you can absorb or shed malicious traffic without taking critical services down, plus proof you review and tune those limits. 2
  • Evidence beats intent: diagrams, configs, runbooks, monitoring, and test results should be packaged into a repeatable control record. 1

SC-5(2), Capacity, Bandwidth, and Redundancy, is the “make flooding survivable” enhancement under the System and Communications Protection family. The requirement is short, but implementation is not. It forces a cross-functional outcome: your infrastructure must handle unexpected surges of traffic (malicious or accidental) without collapsing, and it must have enough redundancy that failure or saturation in one path does not take down mission-critical services. 1

For a Compliance Officer, CCO, or GRC lead, the fastest way to get this right is to translate the sentence into operational commitments: (1) identify which services must stay up, (2) set capacity and bandwidth objectives tied to those services, (3) implement scalable or redundant technical measures (at network, edge, and application layers), and (4) continuously monitor and periodically test that the measures still work under real conditions. 2

This page gives requirement-level guidance you can hand to engineering, cloud ops, and network teams with minimal interpretation. It also tells you what artifacts to collect so your assessment does not hinge on tribal knowledge.

Regulatory text

Requirement (verbatim): “Manage capacity, bandwidth, or other redundancy to limit the effects of information flooding denial-of-service attacks.” 1

Operator interpretation: You must design and operate your environment so a flooding attack (traffic volume, request rate, connection exhaustion, or similar) does not cause unacceptable downtime or service degradation for critical systems. “Manage” implies active governance: you set targets, implement technical controls, monitor performance, and adjust over time. 2

What this requires you to do, in practice:

  • Maintain adequate capacity headroom for critical components (internet links, load balancers, WAF/CDN, reverse proxies, API gateways, application tiers, DNS, and upstream dependencies). 1
  • Provision bandwidth and/or elastic scaling so sudden spikes can be absorbed or offloaded. 1
  • Build redundancy (multi-AZ/region, multiple ISPs, redundant DNS, redundant edge services, failover capability) so saturation or failure in one path does not cascade. 1

Plain-English requirement statement

You need enough throughput and fallback options to keep critical services reachable during traffic floods, and you need to prove it with monitoring and tests. 1

Who it applies to

SC-5(2) applies anywhere you must meet NIST SP 800-53 control expectations, including:

  • Federal information systems operating externally reachable services (public websites, citizen services, portals, APIs, email gateways, remote access, DNS). 2
  • Contractor systems handling federal data (including cloud-hosted systems, managed services, and SaaS) where availability and resilience are part of the security baseline. 2

Operational contexts that almost always trigger real work under SC-5(2):

  • Internet-facing applications with authenticated and unauthenticated traffic paths
  • Systems with shared capacity pools (multi-tenant clusters, shared gateways)
  • Dependencies on third parties (CDN/DNS/WAF/DDoS providers, ISPs, upstream API providers) where your redundancy is partly contractual rather than purely architectural

What you actually need to do (step-by-step)

Step 1: Set the scope and “critical service” list

  1. Inventory externally reachable entry points: domains, IP ranges, load balancers, API gateways, VPN/remote access, DNS providers, email gateways.
  2. Tag which are mission-critical vs. non-critical based on impact of outage and recovery expectations.
  3. Document the service owners and on-call responders.

Deliverable: “Flooding/DDoS attack surface register” linked to system boundaries and owners.
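As an illustration, the register from Step 1 can start as a lightweight structured record. The field names below are assumptions for the sketch, not a required schema:

```python
from dataclasses import dataclass, field

@dataclass
class IngressPoint:
    """One externally reachable entry point (illustrative fields)."""
    name: str             # e.g. a domain, LB name, or gateway
    kind: str             # e.g. "domain", "load_balancer", "dns", "vpn"
    owner: str            # service owner / on-call team
    mission_critical: bool

@dataclass
class AttackSurfaceRegister:
    entries: list = field(default_factory=list)

    def add(self, point: IngressPoint) -> None:
        self.entries.append(point)

    def critical(self) -> list:
        """Entries tagged mission-critical (Step 1, item 2)."""
        return [e for e in self.entries if e.mission_critical]

# Hypothetical entries for the demo.
register = AttackSurfaceRegister()
register.add(IngressPoint("www.example.gov", "domain", "web-team", True))
register.add(IngressPoint("status.example.gov", "domain", "sre-team", False))
```

Even a table this simple gives you a query surface (“show me every critical ingress point and its owner”) that a wiki page cannot.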

Step 2: Define measurable capacity and redundancy objectives

Create service-specific objectives that engineering can design to and that auditors can evaluate. Avoid vague statements like “sufficient bandwidth.”

Minimum fields to define per critical service:

  • Expected baseline traffic profile (normal peaks)
  • Maximum tolerable degradation (what can slow down, what must stay responsive)
  • Capacity strategy: vertical capacity, horizontal autoscaling, edge offload, rate limiting, queuing, graceful degradation
  • Redundancy strategy: failover design, duplicate providers, alternate routes, backup DNS, multi-region patterns

Tip for GRC: Put these objectives into the system’s control implementation statement and link to the architecture diagrams and runbooks.
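One way to make objectives machine-checkable is to record an explicit headroom target per service. This sketch assumes a request-rate metric, but the same shape works for bandwidth or connection counts; all names and numbers are illustrative:

```python
from dataclasses import dataclass

@dataclass
class CapacityObjective:
    """Per-service capacity objective record (illustrative fields)."""
    service: str
    baseline_peak_rps: float   # normal peak request rate (Step 2: traffic profile)
    provisioned_rps: float     # current sustainable capacity
    min_headroom: float        # e.g. 2.0 means capacity must be 2x normal peak

    def headroom(self) -> float:
        return self.provisioned_rps / self.baseline_peak_rps

    def meets_objective(self) -> bool:
        return self.headroom() >= self.min_headroom

# Hypothetical service: 500 rps normal peak, 2000 rps provisioned.
obj = CapacityObjective("public-api", baseline_peak_rps=500,
                        provisioned_rps=2000, min_headroom=2.0)
```

A record like this replaces “sufficient bandwidth” with a number an auditor can check against monitoring data.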

Step 3: Implement layered technical controls mapped to capacity/bandwidth/redundancy

Use a layered approach so “capacity” is not only “buy bigger pipes.”

A practical control menu (pick what fits your architecture):

  • Edge absorption/offload: CDN caching for static and cacheable dynamic content; upstream DDoS protection service; Anycast where appropriate.
  • Request shaping: rate limiting, connection limits, request size limits, bot controls, throttling per identity/API key, and backpressure.
  • Infrastructure scaling: autoscaling groups, container horizontal pod autoscaling, load balancer scaling, database read replicas where read-flood is plausible.
  • Network redundancy: multiple availability zones, redundant load balancers, redundant NAT/egress, multiple network paths.
  • DNS resilience: redundant DNS hosting, protective settings to prevent easy exhaustion, operational runbooks for DNS failover.
  • Dependency protections: circuit breakers, timeouts, bulkheads, and fallback responses so downstream failures do not amplify resource exhaustion.
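The circuit-breaker idea in the last bullet can be sketched in a few lines. The `max_failures` threshold, `cooldown`, and half-open probe behavior here are illustrative defaults, not a prescribed design:

```python
import time

class CircuitBreaker:
    """After max_failures consecutive failures, fail fast for `cooldown`
    seconds so a saturated dependency is not hammered further (sketch)."""

    def __init__(self, max_failures: int = 3, cooldown: float = 30.0):
        self.max_failures = max_failures
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, fn, now=None):
        now = time.monotonic() if now is None else now
        if self.opened_at is not None:
            if now - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None     # half-open: allow one probe through
            self.failures = 0
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = now  # trip the breaker
            raise
        self.failures = 0
        return result

# Deterministic demo with a pinned clock.
breaker = CircuitBreaker(max_failures=2, cooldown=30.0)

def failing_dependency():
    raise ValueError("dependency saturated")

for _ in range(2):                        # two failures trip the breaker
    try:
        breaker.call(failing_dependency, now=0.0)
    except ValueError:
        pass

try:
    breaker.call(lambda: "ok", now=1.0)   # rejected while circuit is open
    tripped = False
except RuntimeError:
    tripped = True

recovered = breaker.call(lambda: "ok", now=40.0)  # cooldown elapsed
```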

Compliance mapping: Each selected measure should be explicitly tied back to “capacity, bandwidth, or other redundancy” so SC-5(2) is satisfied with evidence, not inference. 1
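For the request-shaping bullet above, a token bucket is the classic primitive behind most rate limiters. This is a minimal single-process sketch; production limiters are usually distributed (e.g. backed by a shared store), and the rate and burst numbers are placeholders:

```python
import time

class TokenBucket:
    """Single-process token-bucket rate limiter (illustrative sketch)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, now=None) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        now = time.monotonic() if now is None else now
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Deterministic demo: pin the clock so refill is predictable.
bucket = TokenBucket(rate=10, capacity=5)
bucket.last = 0.0
burst = [bucket.allow(now=0.0) for _ in range(6)]   # burst of 6 at t=0
later = bucket.allow(now=0.5)                        # after 0.5 s of refill
```

The burst allowance (`capacity`) absorbs normal spikes, while the refill `rate` caps sustained flood traffic, which is exactly the “capacity vs. bandwidth” distinction SC-5(2) asks you to manage.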

Step 4: Build operational processes (monitoring, thresholds, and response)

Auditors will test whether the control operates, not just whether it was designed.

Implement:

  • Dashboards for traffic rates, error rates, latency, saturation (CPU/memory), connection counts, and upstream link utilization
  • Alert thresholds aligned to your objectives (early warning + critical)
  • A runbook for flooding events: triage, mitigation toggles (turn on stricter rate limits, blocklists, CAPTCHA challenges where applicable), failover steps, escalation paths (including third-party providers)
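A minimal sketch of the “early warning + critical” threshold idea: map a saturation metric to an alert level. The 70%/90% thresholds are placeholders to be replaced with values derived from your Step 2 objectives:

```python
def alert_level(utilization: float, warn: float = 0.7,
                critical: float = 0.9) -> str:
    """Map a saturation metric in [0, 1] (CPU, link utilization,
    connection-pool fill) to an alert level. Thresholds are placeholders."""
    if utilization >= critical:
        return "critical"
    if utilization >= warn:
        return "warning"
    return "ok"
```

Encoding the thresholds next to the objective they derive from makes it easy to show an auditor that alerts are “aligned to your objectives” rather than vendor defaults.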

Step 5: Test redundancy and capacity assumptions

You need a repeatable way to validate that your headroom and redundancy work.

  • Tabletop exercises for DDoS/flooding scenarios (decision-making and escalation)
  • Controlled load tests for critical endpoints in a safe environment
  • Failover tests (DNS failover, region failover, provider failover) where your redundancy plan depends on it

Record outcomes and track remediation items to closure.
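A controlled load test can be as simple as a concurrent driver that records latency percentiles. The stub below simulates the endpoint with a short sleep; in a real test you would replace `call_service` with an actual request against a safe environment:

```python
import concurrent.futures
import statistics
import time

def call_service(i: int) -> float:
    """Stand-in for one request to the endpoint under test (stub)."""
    start = time.perf_counter()
    time.sleep(0.001)              # simulated service latency
    return time.perf_counter() - start

def run_load_test(requests: int, concurrency: int) -> dict:
    """Fire `requests` calls with `concurrency` workers; summarize latency."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(call_service, range(requests)))
    return {
        "requests": requests,
        "p50_s": statistics.median(latencies),
        "p95_s": latencies[int(0.95 * (len(latencies) - 1))],
        "max_s": latencies[-1],
    }

report = run_load_test(requests=50, concurrency=10)
```

A report like this (what was tested, percentiles achieved, where it broke) is exactly the artifact auditors ask for in the verification-evidence section below.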

Step 6: Package SC-5(2) into an assessable control record

Map SC-5(2) to:

  • a named control owner
  • an implementation procedure
  • recurring evidence artifacts on a defined cadence (for example, per release, per quarter, or after major architecture changes)
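The evidence cadence above can be sketched as a small record structure that flags overdue artifacts. Owner names, artifact names, and dates are illustrative:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class EvidenceItem:
    name: str
    cadence_days: int       # how often this artifact must be refreshed
    last_collected: date

    def due(self, today: date) -> bool:
        return today >= self.last_collected + timedelta(days=self.cadence_days)

@dataclass
class ControlRecord:
    control_id: str
    owner: str
    evidence: list = field(default_factory=list)

    def overdue(self, today: date) -> list:
        """Names of evidence artifacts past their refresh cadence."""
        return [e.name for e in self.evidence if e.due(today)]

# Hypothetical SC-5(2) record with two evidence items.
record = ControlRecord("SC-5(2)", "netops-lead", [
    EvidenceItem("failover test report", 90, date(2024, 1, 1)),
    EvidenceItem("capacity plan review", 365, date(2024, 6, 1)),
])
```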

Daydream (practical fit): teams often lose time during assessments hunting for diagrams, configs, and test results across tickets and wikis. Daydream’s value is turning SC-5(2) into a living requirement page with assigned ownership, procedure steps, and an evidence checklist that refreshes on schedule.

Required evidence and artifacts to retain

Keep evidence that proves (a) design, (b) operation, and (c) verification.

Design evidence

  • Network and application architecture diagrams showing redundancy paths (multi-AZ/region, dual providers, redundant DNS)
  • Data flow diagrams for internet ingress and critical APIs
  • Capacity plan(s) for critical services, including assumptions and constraints
  • Threat model or risk assessment notes specific to flooding/DDoS (if you have them)

Operational evidence

  • Monitoring dashboards and alert definitions (screenshots or exported config)
  • Runbooks/playbooks for traffic flooding incidents
  • Change records for scaling, rate-limit rule updates, WAF policy changes, CDN configuration changes
  • Third-party contracts/SOWs for DDoS/CDN/DNS services showing service commitments and escalation contacts (where applicable)

Verification evidence

  • Load test summaries and results (what was tested, max achieved, bottlenecks, fixes)
  • Failover test records and post-test remediation tracking
  • Incident tickets and post-incident reviews for any flooding events, with corrective actions

Common exam/audit questions and hangups

Expect these questions and prep your evidence packet accordingly:

  • “Which services are in scope for SC-5(2), and why?”
  • “Show me how capacity and redundancy were determined. What are the assumptions?”
  • “Where are rate limits/WAF rules/CDN protections defined, and who approves changes?”
  • “How do you know redundancy works? Show the last failover test.”
  • “What monitoring would detect an information-flooding denial-of-service attack, and what happens after the alert?”
  • “How do third parties factor into your bandwidth/redundancy plan?” (CDN, ISP, DNS, managed edge)

Hangup to avoid: answers that rely on “the cloud scales automatically” without showing scaling configs, limits, and tests.

Frequent implementation mistakes and how to avoid them

  • Mistake: treating DDoS as only a network problem. Why it fails SC-5(2): many outages come from app-layer floods and dependency exhaustion. Fix: add app-layer throttling, timeouts, bulkheads, and caching alongside edge controls.
  • Mistake: no declared capacity objective. Why it fails: auditors can’t evaluate “managed capacity.” Fix: write service-specific objectives and link them to configs and monitoring.
  • Mistake: redundancy exists on paper only. Why it fails: failover paths fail during real incidents. Fix: schedule failover tests and retain evidence with remediation tracking.
  • Mistake: missing third-party escalation path. Why it fails: DDoS/CDN/DNS providers are part of the control. Fix: keep contracts, support contacts, and runbooks for provider engagement.
  • Mistake: evidence scattered across teams. Why it fails: assessments stall and the control looks non-operational. Fix: centralize artifacts in a control record (Daydream or your GRC system) with a fixed checklist.

Enforcement context and risk implications

There are no public enforcement cases tied specifically to SC-5(2), so treat it as an assessment-driven requirement rather than a case-law-driven one. The risk remains operationally material: flooding attacks can cause downtime, lost mission capability, and cascading failures across shared services. 1

Translate risk into decisions:

  • If a service is critical, fund redundancy and test it.
  • If you depend on a third party for bandwidth or DDoS protection, bake that dependency into incident response and continuity planning.

A practical 30/60/90-day execution plan

First 30 days (stabilize and document)

  • Name the SC-5(2) control owner and backup.
  • Build the attack surface register for internet ingress points.
  • Identify critical services and define initial capacity/redundancy objectives.
  • Collect existing diagrams, configs, and provider agreements into one evidence folder with a checklist.

Days 31–60 (implement gaps and monitoring)

  • Close obvious single points of failure in ingress and DNS where feasible.
  • Implement or tighten rate limits and connection controls on critical endpoints.
  • Stand up dashboards and alerts aligned to your objectives.
  • Publish the flooding/DDoS runbook and validate escalation paths (internal and third party).

Days 61–90 (prove it works)

  • Run a controlled load test for at least one critical service path and document bottlenecks and fixes.
  • Execute at least one redundancy test (failover or reroute) and capture results.
  • Hold a tabletop exercise for a flooding scenario and record action items.
  • Convert all of the above into a recurring evidence cadence tied to change management.

Frequently Asked Questions

Do we meet SC-5(2) if we have a CDN and WAF?

Only if you can show the CDN/WAF meaningfully limits flooding impact for your critical services and you monitor and maintain it as an operational control. Keep configs, alerting, and test results as evidence. 1

What counts as “redundancy” for SC-5(2)?

Redundancy can be extra capacity, alternate routes, failover environments, or additional providers that prevent saturation or failure from taking the service down. Document what is redundant, how failover happens, and how you verify it works. 1

How do we scope SC-5(2) in a multi-tenant SaaS?

Scope it to shared ingress, shared control planes, and high-impact tenants/services where flooding could degrade other customers. You need tenant-safe rate limits, isolation mechanisms, and evidence that scaling and redundancy protect the platform. 2
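As an illustration of “tenant-safe rate limits,” a fixed-window per-tenant cap looks like the sketch below (tenant names and limits are hypothetical; production systems typically use sliding windows and shared state):

```python
from collections import defaultdict

class TenantRateLimiter:
    """Fixed-window per-tenant cap: one noisy tenant exhausts its own
    quota, not the shared pool (illustrative sketch)."""

    def __init__(self, per_tenant_limit: int):
        self.limit = per_tenant_limit
        self.counts = defaultdict(int)   # requests per tenant this window

    def allow(self, tenant: str) -> bool:
        if self.counts[tenant] >= self.limit:
            return False
        self.counts[tenant] += 1
        return True

    def reset_window(self) -> None:
        self.counts.clear()              # called at each window boundary

limiter = TenantRateLimiter(per_tenant_limit=3)
flood = [limiter.allow("tenant-a") for _ in range(5)]   # noisy tenant capped
quiet = limiter.allow("tenant-b")                        # unaffected tenant
```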

If our cloud provider handles DDoS, are we done?

You still own configuring scaling limits, rate limits, monitoring, and incident response for your application and account-level architecture. Keep proof of what the provider covers and what you control, then test your side of the boundary. 2

What evidence is “make or break” in an audit?

Auditors usually want to see (1) architecture diagrams showing redundant ingress/failover, (2) monitoring and alerting tied to saturation/flood indicators, and (3) a recent test or incident record that shows the control works in practice. 1

How do we operationalize this across third parties (CDN, DNS, ISP)?

Treat third parties as part of your control design: maintain contracts and support paths, define escalation steps in the runbook, and test provider failover where your redundancy strategy depends on it. Keep evidence of periodic reviews and any change tickets. 2

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream