CP-2(2): Capacity Planning

CP-2(2) requires you to perform capacity planning specifically for contingency operations so your organization has enough compute, telecom, and environmental support (power/cooling/facilities) to run critical services during disruptions. Operationalize it by defining contingency capacity targets, mapping them to recovery scenarios, validating provider and facility limits, and retaining repeatable evidence of testing and reviews 1.

Key takeaways:

  • CP-2(2) is about “can we run during contingency,” not day-to-day performance tuning 1.
  • You must cover three domains: information processing, telecommunications, and environmental support 1.
  • Auditors will look for measurable targets tied to recovery strategies plus proof you can meet them under failover conditions 1.

The CP-2(2) capacity planning requirement is one of those controls that looks deceptively small but turns into a cross-functional coordination problem the minute you try to “show it works.” CP-2(2) sits in the Contingency Planning (CP) family and focuses on whether you have sufficient capacity to operate during contingency operations, not whether your systems are “fast enough” in normal conditions 1. That distinction matters in assessments: assessors tend to challenge the assumption that production capacity automatically equals disaster recovery (DR) capacity.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to turn CP-2(2) into: (1) explicit contingency capacity requirements for critical services, (2) a mapping from those requirements to your DR design (alternate site/region, backup connectivity, generator/UPS, cloud quotas), and (3) recurring evidence that you reassess and validate those assumptions. If you run significant workloads in the cloud, your evidence set must include cloud service limits and your own guardrails (quotas, scaling, reserved capacity). If you run on-prem, environmental support becomes the center of gravity.

Regulatory text

Requirement (verbatim): “Conduct capacity planning so that necessary capacity for information processing, telecommunications, and environmental support exists during contingency operations.” 1

Operator interpretation: You must plan, document, and periodically validate that your contingency environment can support the processing load, network connectivity, and facility needs required to meet your contingency objectives. That means you define what “necessary capacity” is for your critical services under your contingency scenarios, then prove the DR design can meet it 1.

Plain-English interpretation (what this control really asks)

If a primary environment is impaired, can you still run the mission-essential services at an acceptable level, with enough:

  • Information processing: compute, storage, IOPS, database throughput, virtualization hosts, container capacity, cloud quotas.
  • Telecommunications: internet circuits, WAN/MPLS/SD-WAN, VPN concentrators, DNS, telephony/contact center connectivity if applicable.
  • Environmental support: power (utility + UPS + generator), cooling, rack space, fire suppression, physical security constraints at the alternate site 1.

CP-2(2) does not require a specific architecture. It requires that whatever architecture you choose is backed by capacity planning for contingency operations.

Who it applies to

Entity scope

  • Federal information systems and contractor systems handling federal data that align to NIST SP 800-53 Rev. 5 control expectations 2.

Operational context (where the control “bites”)

CP-2(2) becomes high-friction in these situations:

  • You have “warm” or “cold” DR where capacity is intentionally reduced and must still meet recovery objectives.
  • You rely on a third party for DR (cloud provider, colocation, managed hosting) and assume capacity without documenting quotas/contractual commitments.
  • You have tight facility constraints (generator runtime, fuel contracts, cooling redundancy) and no single owner for “environmental support.”
  • Multiple critical systems share a single DR landing zone; aggregate demand exceeds what any single app team planned.

What you actually need to do (step-by-step)

Use this sequence as a control procedure you can hand to an auditor.

1) Assign a control owner and define the operating cadence

  • Name a primary owner (often BC/DR lead, IT Ops, or SRE) plus accountable approvers (CIO/CTO or Head of Infrastructure; Facilities for environmental support).
  • Define a recurring review trigger: significant architecture change, onboarding a critical system, major growth event, or a scheduled review cycle you can defend.

Evidence goal: a RACI and a written procedure that describes how capacity planning for contingency is performed and approved 1.

2) Define “contingency operations” scenarios you will plan for

Pick the scenarios that match your continuity plan and recovery strategy. Examples:

  • Primary site outage
  • Primary cloud region outage
  • Loss of a major network carrier / degraded connectivity
  • Data center HVAC failure requiring load shedding

Tie each scenario to which services must run and at what level. Avoid hand-waving. If you cannot define targets, you cannot prove capacity exists.
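Even a lightweight, machine-readable mapping keeps the scenario-to-service linkage auditable and easy to diff over time. A minimal sketch in Python; the scenario names, service names, and service levels below are illustrative assumptions, not anything prescribed by the control:

```python
# Hypothetical mapping of contingency scenarios to in-scope services and
# the minimum service level each must sustain during that scenario.
SCENARIOS = {
    "primary-region-outage": {
        "payments-api": "full",         # must run at full capacity
        "customer-portal": "read-only", # degraded mode is acceptable
    },
    "carrier-loss": {
        "payments-api": "full",
        "customer-portal": "full",
    },
}

def services_in_scope(scenario: str) -> list[str]:
    """Return the services that must run under a given scenario."""
    return sorted(SCENARIOS[scenario])

print(services_in_scope("primary-region-outage"))
# → ['customer-portal', 'payments-api']
```

The point is not the data structure itself but that every scenario names its services and target levels explicitly, so step 3 has concrete inputs.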

3) Translate contingency objectives into capacity requirements

For each critical service/system:

  • Determine the minimum service level in contingency mode (for example: reduced functionality allowed, batch processing paused, read-only allowed).
  • Convert that into resource requirements:
    • Compute: vCPU/cores, memory, node counts, autoscaling boundaries
    • Storage: capacity, performance tier, replication overhead
    • Network: bandwidth, concurrent VPN sessions, load balancer capacity
    • Facility: kW draw, cooling tonnage, rack units

This is where GRC adds value: force a consistent template so app teams provide comparable numbers and assumptions.
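One way to enforce that consistent template is a small structured record every app team fills in the same way. A sketch under illustrative assumptions (the field set and the numbers are examples, not requirements from the control):

```python
from dataclasses import dataclass

@dataclass
class ContingencyRequirement:
    """One service's resource needs in its defined contingency mode."""
    service: str
    mode: str             # e.g. "full", "read-only", "batch-paused"
    vcpus: int
    memory_gib: int
    storage_tib: float
    bandwidth_mbps: int
    facility_kw: float    # 0 for fully cloud-hosted services

# Illustrative submissions from two hypothetical service teams
reqs = [
    ContingencyRequirement("payments-api", "full", 64, 256, 2.0, 500, 0.0),
    ContingencyRequirement("customer-portal", "read-only", 16, 64, 0.5, 200, 0.0),
]

# Aggregate demand is what the DR environment must actually absorb
total_vcpus = sum(r.vcpus for r in reqs)
print(total_vcpus)  # → 80
```

Because every record carries the same fields, roll-ups across teams (the aggregate view auditors ask about) become a one-line sum rather than a spreadsheet reconciliation exercise.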

4) Compare requirements to DR/alternate capacity (gap analysis)

Build a single “contingency capacity register” that answers:

  • What capacity is required per system and in aggregate?
  • What capacity is available in the contingency environment?
  • What assumptions does “available” depend on (reserved instances, cloud quota increases, shared tenancy, colocation contract terms, fuel delivery)?

Then identify gaps and assign remediation actions with owners.
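At its core, the register's gap analysis is a comparison of aggregate demand against available contingency capacity per resource type. A hedged sketch; resource names and figures are made up for illustration:

```python
def capacity_gaps(required: dict[str, float],
                  available: dict[str, float]) -> dict[str, float]:
    """Return per-resource shortfalls (a positive value = gap to remediate)."""
    return {
        res: round(need - available.get(res, 0.0), 2)
        for res, need in required.items()
        if need > available.get(res, 0.0)
    }

# Illustrative aggregate demand vs. contingency-environment capacity
required = {"vcpus": 80, "memory_gib": 320, "bandwidth_mbps": 700}
available = {"vcpus": 96, "memory_gib": 256, "bandwidth_mbps": 500}

print(capacity_gaps(required, available))
# → {'memory_gib': 64, 'bandwidth_mbps': 200}
```

Each entry in the output is a remediation action waiting for an owner; resources absent from `available` are treated as fully missing, which is usually the conservative (and defensible) default.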

5) Validate telecommunications and external dependencies

Capacity planning fails most often at the network edge and with third parties. Confirm:

  • Circuit diversity and throughput at the alternate site/region
  • DNS, identity, email, and security tooling dependencies in contingency mode
  • Third-party limits (support response, concurrency caps, usage caps) that impact recovery

If a third party is part of your contingency path, capture evidence from that third party (contracts, SLAs, service descriptions, or attestations) and map it to your required capacity.
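Circuit diversity claims are easy to state and rarely quantified. A simple what-if sketch, assuming hypothetical carriers and throughput figures, shows whether contingency bandwidth needs survive the loss of any single carrier:

```python
def surviving_bandwidth(circuits: list[tuple[str, int]],
                        failed_carrier: str) -> int:
    """Usable Mbps at the alternate site if one carrier is lost entirely."""
    return sum(mbps for carrier, mbps in circuits if carrier != failed_carrier)

# Illustrative circuit inventory: (carrier, committed Mbps)
circuits = [("carrier-a", 1000), ("carrier-b", 500)]
needed_mbps = 700  # from the contingency capacity register

for carrier, _ in circuits:
    left = surviving_bandwidth(circuits, carrier)
    status = "OK" if left >= needed_mbps else "GAP"
    print(f"lose {carrier}: {left} Mbps -> {status}")
```

Run per scenario, this turns "we have diverse circuits" into an evidenced statement about which single-carrier failures the contingency design actually tolerates.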

6) Validate environmental support (don’t leave it implicit)

For on-prem or colo:

  • Confirm generator capacity and runtime assumptions
  • Validate cooling redundancy and max sustainable load
  • Confirm physical space and power distribution constraints (PDUs, breakers)

For cloud-heavy environments, “environmental support” still exists, but it shifts to ensuring your organization’s own supporting facilities can operate (network rooms, call centers, key sites) and that your DR design does not depend on a single physical location for access.
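Generator runtime claims should be backed by arithmetic, not a vendor brochure. A back-of-envelope sketch; the 0.3 L/kWh consumption rate is a rough diesel-generator assumption, and the fuel and load figures are illustrative, so substitute your facility's measured values:

```python
def generator_runtime_hours(fuel_litres: float, load_kw: float,
                            litres_per_kwh: float = 0.3) -> float:
    """Estimated runtime at a given load.

    0.3 L/kWh is a rough diesel consumption assumption; use your
    generator's measured fuel curve for real planning.
    """
    return fuel_litres / (load_kw * litres_per_kwh)

# Illustrative figures: 2,000 L on-site fuel, 120 kW contingency load
runtime = generator_runtime_hours(2000, 120)
print(round(runtime, 1), "hours")
```

Comparing that estimate against your fuel-delivery SLA is exactly the kind of assumption the capacity register should capture explicitly.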

7) Test the assumptions during contingency exercises

During tabletop and technical DR tests, capture actual observations:

  • Were quotas sufficient?
  • Did scaling behave as expected?
  • Did telecom paths saturate?
  • Did facility constraints force load shedding?

You do not need to “prove perfect performance.” You do need to show you planned and validated capacity for contingency operations and corrected gaps 1.
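Exercise observations are most useful when compared mechanically against the planned figures, so near-misses get flagged before they become failures. A sketch with an assumed 10% headroom threshold and illustrative numbers:

```python
def validate_exercise(planned: dict[str, float],
                      observed_peak: dict[str, float],
                      headroom: float = 0.1) -> list[str]:
    """Flag resources where observed peak exceeded or came within
    `headroom` of the planned contingency capacity."""
    findings = []
    for res, plan in planned.items():
        peak = observed_peak.get(res, 0.0)
        if peak > plan:
            findings.append(f"{res}: exceeded plan ({peak} > {plan})")
        elif peak > plan * (1 - headroom):
            findings.append(f"{res}: within {int(headroom * 100)}% of plan ({peak}/{plan})")
    return findings

# Illustrative: planned capacity vs. peaks observed during a DR test
planned = {"vcpus": 80, "bandwidth_mbps": 700}
observed = {"vcpus": 76, "bandwidth_mbps": 710}
for finding in validate_exercise(planned, observed):
    print(finding)
```

Each finding feeds directly back into the register and the issues tracker, which is the closed loop step 8 asks you to evidence.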

8) Retain repeatable evidence and make it auditable

Treat CP-2(2) like a requirement with recurring artifacts. One recommended operational control is to explicitly map CP-2(2) to a control owner, an implementation procedure, and recurring evidence artifacts so you can produce an assessment-ready package quickly 1. Daydream is commonly used to maintain this mapping and keep artifacts current across systems and third parties without relying on institutional memory.

Required evidence and artifacts to retain

Keep these in a single audit folder (or GRC system) labeled to the control:

  1. CP-2(2) procedure (how you perform contingency capacity planning; who approves; when it triggers).
  2. System/service inventory with criticality and which services require contingency capacity planning.
  3. Contingency scenarios and the list of in-scope services per scenario.
  4. Contingency capacity register (requirements vs. available capacity, plus assumptions).
  5. Third-party evidence supporting capacity claims (contracts/SLA language, quota confirmations, support terms).
  6. Facilities capacity documentation (power/cooling/load limits; maintenance/testing records where relevant).
  7. Test and exercise records showing capacity planning assumptions were validated and issues tracked to closure.

Common exam/audit questions and hangups

Auditors and assessors tend to ask:

  • “Show me how you determined ‘necessary capacity’ for contingency operations.” 1
  • “What is your evidence that the alternate environment can support the aggregated load during failover?”
  • “Where do you account for telecommunications capacity under contingency operations?” 1
  • “Who owns environmental support planning, and how do you know generator/cooling capacity is sufficient?” 1
  • “How do changes (new systems, higher usage, architecture changes) update the DR capacity plan?”

Hangups you can preempt:

  • Teams show performance monitoring dashboards instead of contingency capacity planning artifacts.
  • Cloud DR is documented, but quotas and cross-region constraints are not.
  • Facilities evidence exists but is not tied back to specific contingency loads.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Treating CP-2(2) as a one-time document.
    Fix: Make capacity planning a living register tied to change management and DR testing outputs.

  2. Mistake: Planning per system, not in aggregate.
    Fix: Include a roll-up view that totals DR demand across all critical services, then compare to shared DR capacity pools.

  3. Mistake: Ignoring “telecommunications” until a test fails.
    Fix: Model bandwidth and session limits explicitly (VPN, WAF/LB, DNS, carrier circuits), then validate during exercises.

  4. Mistake: Environmental support has no accountable owner.
    Fix: Put Facilities (or Data Center Ops) into the RACI and require sign-off that power/cooling constraints are known and mapped to DR loads.

  5. Mistake: Third-party DR dependency without evidence.
    Fix: Store third-party commitments and limits as artifacts; link them to the capacity register assumptions.

Enforcement context and risk implications

No public enforcement cases were provided in the source material for CP-2(2). Practically, the risk shows up as operational outages and failed recoveries: even a well-written contingency plan fails if DR capacity is undersized or contractually unavailable. For regulated environments, the most common compliance failure mode is “missing implementation evidence” for CP-2(2), where teams cannot produce a coherent package tying contingency objectives to capacity and validation 1.

A practical 30/60/90-day execution plan

First 30 days (establish control ownership and baseline)

  • Assign owner and approvers; publish the CP-2(2) procedure.
  • Identify in-scope critical services and current DR strategy per service.
  • Create the first version of the contingency capacity register with placeholders for unknowns.

By 60 days (quantify requirements and identify gaps)

  • Define contingency scenarios used for planning and testing.
  • Gather capacity requirements from service owners using a standard template.
  • Collect telecom and environmental support constraints (network diagrams, circuit details, facility load limits).
  • Record gaps with owners and target remediation dates in your risk/issues tracker.

By 90 days (validate and make it repeatable)

  • Run a DR exercise that validates capacity assumptions for a representative set of critical services.
  • Update the register with observed results and documented exceptions.
  • Package evidence for audit: procedure, register, third-party evidence, test results, and approvals.
  • If you manage controls in Daydream, map CP-2(2) to the control owner, implementation procedure, and recurring evidence artifacts so evidence stays current as systems and third parties change 1.

Frequently Asked Questions

Does CP-2(2) require a specific DR architecture (active-active, warm standby, etc.)?

No. It requires that your chosen contingency approach has enough capacity for information processing, telecommunications, and environmental support during contingency operations 1.

We’re mostly cloud-based. What counts as “environmental support” for us?

You still need to plan for supporting conditions required to operate during contingency, including any physical sites you depend on (network rooms, call centers) and the cloud-side constraints your organization controls, like quotas and scaling guardrails 1.

Is showing autoscaling configuration enough evidence?

Usually not. Auditors expect to see defined contingency capacity requirements, a comparison to available capacity, and validation through exercises or testing results 1.

How do we handle third-party capacity commitments in DR?

Treat third-party limits and commitments as assumptions that must be documented and evidenced. Keep contract/SLA language or written confirmations and map them to the contingency capacity register.

What if we intentionally reduce capacity during contingency operations?

That can be acceptable if the reduced capacity still meets the contingency objectives you defined for critical services. Document the reduced mode, the resulting resource requirements, and how you validated them during a contingency test 1.

What’s the minimum artifact set to pass an assessment?

A documented procedure, a capacity register tying scenarios to requirements and available capacity across compute/network/environmental support, and proof of review and validation (test records or exercise outputs) 1.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

