Continuity Plan Testing

Continuity plan testing means running planned exercises of your continuity of operations plans on a defined cadence, capturing results, and updating the plans based on what the tests proved or failed to prove. Under C2M2 v2.1 RESPONSE-3.B, the frequency is yours to define, but you must be able to defend it, execute it, and show evidence. (Cybersecurity Capability Maturity Model v2.1)

Key takeaways:

  • Define a testing cadence tied to critical services, recovery objectives, and material changes, then get it approved and followed.
  • Test what matters (people, process, technology, and third parties), not just document reviews.
  • Keep tight evidence: scope, scenarios, results, issues, corrective actions, and plan updates mapped to findings.

C2M2’s continuity plan testing requirement is short, but auditors and operational leaders read it as a maturity signal: do you actually know you can recover, or do you only have paperwork? The requirement does not mandate a specific interval. It requires that you define the frequency, run tests accordingly, and update continuity of operations plans based on test outcomes. (Cybersecurity Capability Maturity Model v2.1)

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat continuity testing as a governed program with a small set of repeatable deliverables: a test calendar, test scripts, participant lists, results, an issue log with owners and dates, and proof the plans were updated. Then connect the program to operational reality: critical processes, IT/OT dependencies, backup and restoration capabilities, and any third parties that are required for restoration.

This page focuses on operationalizing RESPONSE-3.B in a way you can defend during internal audit, customer diligence, or a C2M2-based assessment. It also flags the most common failure mode: teams “test” by holding a meeting, but never validate restoration, communications, decision rights, or dependency handoffs.

Regulatory text

Requirement (excerpt): “Continuity of operations plans are tested and updated at an organization-defined frequency.” (Cybersecurity Capability Maturity Model v2.1)

What an operator must do:

  1. Define a testing frequency for continuity of operations plans that matches your environment and risk.
  2. Execute tests according to that defined cadence (not ad hoc, not “when we remember”).
  3. Update the plans based on test results and changes in operations, systems, threats, and dependencies. (Cybersecurity Capability Maturity Model v2.1)

Implementation anchor: If you cannot show test results and resulting plan updates, the requirement is not met, even if you have a well-written plan.

Plain-English interpretation (what the requirement “really means”)

You need a continuity plan testing program that proves your organization can keep operating (or restore operations) through realistic disruptions. “Testing” must generate learnings: what worked, what failed, who did what, how long key steps took, what dependencies broke, and what you changed afterward.

“Organization-defined frequency” is not a loophole; it is a decision you must document and defend. Your rationale should tie to:

  • The criticality of services and safety impacts (especially in energy and other critical infrastructure contexts)
  • The complexity of dependencies (IT, OT, telecom, identity systems, specialized staff)
  • The rate of change (new systems, new sites, reorganizations, third-party changes)
  • The threat environment and recent incidents (Cybersecurity Capability Maturity Model v2.1)

Who it applies to

Entities: Energy sector organizations and other critical infrastructure operators using C2M2 to assess cybersecurity maturity for a defined scope. (Cybersecurity Capability Maturity Model v2.1)

Operational context (scope matters):

  • Applies to the business unit, function, site, or OT environment you included in your C2M2 assessment scope.
  • Applies whether continuity plans are called COOP, BCP, DR, IR/BCP integration, or “resilience playbooks,” as long as they govern continuity of operations. (Cybersecurity Capability Maturity Model v2.1)

Typical in-scope teams:

  • Operations leadership (plant/site ops, grid ops, control center)
  • IT infrastructure and applications
  • OT engineering / ICS support
  • Cybersecurity incident response (because many continuity events start as cyber events)
  • Facilities, procurement, and critical third-party owners

What you actually need to do (step-by-step)

Step 1: Set testing governance and define “done”

Create a short continuity testing standard (or procedure) that answers:

  • What plans are in scope (enterprise COOP, site plans, system DR runbooks)
  • Who owns testing (often resilience/BCP with IT/OT co-owners)
  • Who approves results and plan updates
  • Where evidence is stored and how long it is retained (set your retention rule)
  • What constitutes a “test” versus a “review” (Cybersecurity Capability Maturity Model v2.1)
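
To make the standard concrete, it can help to capture these answers as a single structured record your GRC tooling or scripts can check. The sketch below is illustrative only: every field name, owner, and retention value is an assumption standing in for your organization's own choices, not anything C2M2 prescribes.

```python
# Illustrative governance record for a continuity testing standard.
# All names and values are placeholders for organization-specific decisions.
testing_standard = {
    "in_scope_plans": ["enterprise COOP", "site plans", "system DR runbooks"],
    "test_owner": "Resilience/BCP lead",
    "co_owners": ["IT", "OT"],
    "results_approver": "CISO",
    "evidence_repository": "GRC system",
    "evidence_retention_years": 3,  # set your own retention rule
    # A "test" forces role execution; a "review" only inspects the document.
    "counts_as_test": [
        "tabletop with defined scenario, roles, and success criteria",
        "functional restoration exercise",
    ],
    "counts_as_review": ["annual document read-through"],
}

def standard_is_complete(std: dict) -> bool:
    """Check that every question from Step 1 has a documented answer."""
    required = {
        "in_scope_plans", "test_owner", "results_approver",
        "evidence_repository", "evidence_retention_years",
        "counts_as_test", "counts_as_review",
    }
    return required <= std.keys()
```

Keeping the standard machine-checkable like this makes it easy to confirm, before an audit, that none of the governance questions were left unanswered.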

Step 2: Define and document your testing cadence (and rationale)

Document a cadence that is defensible. Use a simple tiering approach:

  • Tier A (highest criticality): most frequent exercises and at least one exercise that validates technical restoration for key dependencies
  • Tier B: periodic exercises plus targeted tabletop drills
  • Tier C: lighter-touch reviews with targeted tests after major changes

Record the rationale for the frequency you chose, and what triggers an out-of-cycle test (major system change, site move, third-party exit, incident learnings). This directly satisfies “organization-defined frequency” and makes it auditable. (Cybersecurity Capability Maturity Model v2.1)
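
The tiering and trigger logic above can be sketched as a small scheduling helper. The intervals below are illustrative defaults, not C2M2 mandates; your organization defines and approves its own frequencies, and the trigger names are hypothetical labels for the out-of-cycle events listed above.

```python
from datetime import date, timedelta

# Hypothetical cadence per tier, in days between required tests.
TIER_CADENCE_DAYS = {
    "A": 180,  # highest criticality: exercise roughly twice a year
    "B": 365,  # periodic exercise plus targeted tabletops
    "C": 730,  # lighter-touch review cycle
}

# Events that force an out-of-cycle test regardless of the calendar.
OUT_OF_CYCLE_TRIGGERS = {
    "major_system_change", "site_move", "third_party_exit", "incident_learning",
}

def next_test_due(last_test: date, tier: str, triggers: set = frozenset()) -> date:
    """Return when the next test is due for a plan in the given tier.

    Any recognized trigger makes the test due immediately (today),
    overriding the normal cadence.
    """
    if triggers & OUT_OF_CYCLE_TRIGGERS:
        return date.today()
    return last_test + timedelta(days=TIER_CADENCE_DAYS[tier])
```

Driving the test calendar from recorded last-test dates, rather than from memory, is what makes the "organization-defined frequency" auditable in practice.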

Step 3: Build a continuity test plan that covers people, process, and technology

For each planned test, define:

  • Objective (decision-making, communications, system restoration, OT safety constraints, manual workarounds)
  • Scenario (ransomware, loss of control center, telecom outage, loss of key supplier)
  • Success criteria (restore priority services, verify backup restoration, validate contact trees)
  • Participants and roles (incident commander, IT/OT leads, comms, third-party contacts)
  • Preconditions (access to recovery tools, offline credentials, break-glass procedures)
  • Evidence to capture (screenshots, tickets, bridge logs, restoration logs) (Cybersecurity Capability Maturity Model v2.1)

Practical note: A tabletop is acceptable for some objectives, but you should also include at least one exercise type that validates restoration execution for your most critical services. If you never attempt restoration, you will struggle to prove “tested” in a serious review.

Step 4: Execute tests and capture results in a repeatable format

Run tests like an operational event:

  • Pre-brief: scope, safety rules (especially for OT), and “no-fault learning” expectations
  • During: capture timestamps, decision points, communications issues, and dependency failures
  • Post: hotwash within a short window while details are fresh
  • Publish a results report with findings and corrective actions (Cybersecurity Capability Maturity Model v2.1)

A lightweight results template is usually enough:

  • What was tested
  • What happened
  • What worked
  • What failed
  • Issues (ranked by service impact)
  • Corrective actions with owners and target dates
  • Plan updates required

Step 5: Track corrective actions to closure

Create a single continuity testing issue log (Jira/ServiceNow/GRC system/spreadsheet) with:

  • Unique issue ID
  • Root cause (process gap, skill gap, technology gap, third-party gap)
  • Owner and due date
  • Status and closure evidence
  • Link back to the test that discovered it (Cybersecurity Capability Maturity Model v2.1)

Auditors look for closure discipline. Open issues are acceptable; unmanaged issues are not.
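A minimal sketch of that closure discipline, assuming a flat issue log with the fields listed above (all IDs, owners, and dates here are invented examples):

```python
from datetime import date

# Minimal issue-log sketch; entries and field names are illustrative only.
issue_log = [
    {"id": "CT-001", "root_cause": "process gap", "owner": "IT DR lead",
     "due": date(2024, 3, 1), "status": "open",
     "closure_evidence": None, "source_test": "2024-01 ransomware tabletop"},
    {"id": "CT-002", "root_cause": "third-party gap", "owner": "Procurement",
     "due": date(2024, 5, 1), "status": "closed",
     "closure_evidence": "TICKET-4711", "source_test": "2024-01 ransomware tabletop"},
]

def unmanaged_issues(log: list, today: date) -> list:
    """Flag what auditors care about: issues both open AND past their due date."""
    return [i["id"] for i in log if i["status"] == "open" and i["due"] < today]
```

Running a check like this on a schedule (and reporting the result to the approver from your testing standard) turns "closure discipline" from an intention into evidence.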

Step 6: Update continuity plans based on what you learned

“Updated” must be explicit:

  • Revise runbooks (steps, screenshots, access paths, escalation criteria)
  • Update call trees and role assignments
  • Update dependency maps (identity services, DNS, OT historian, remote access, telecom)
  • Update recovery priorities and workarounds if the test invalidated assumptions (Cybersecurity Capability Maturity Model v2.1)

Keep a change log that ties plan changes to test findings. That linkage is often the cleanest evidence that you meet RESPONSE-3.B.

Required evidence and artifacts to retain

Retain artifacts that prove frequency, execution, outcomes, and updates:

  • Continuity test policy/standard with defined cadence and triggers (Cybersecurity Capability Maturity Model v2.1)
  • Annual/rolling test calendar and scope statement
  • Test scripts or scenarios, including objectives and success criteria
  • Attendance records and role roster (internal teams and key third parties)
  • Test results report (what happened, findings, gaps)
  • Corrective action plan and issue log with closure evidence (Cybersecurity Capability Maturity Model v2.1)
  • Plan versions and change log showing updates after tests
  • Restoration evidence where applicable (backup restore logs, system screenshots, ticket IDs) (Cybersecurity Capability Maturity Model v2.1)
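
A per-test completeness check over these artifacts can catch gaps before an assessor does. The artifact names below mirror the list above but are otherwise assumptions; adjust them to your own repository layout.

```python
# Checklist-style completeness check for a per-test evidence bundle.
REQUIRED_ARTIFACTS = {
    "test_script", "attendance_roster", "results_report",
    "corrective_action_log", "plan_change_log",
}

def missing_artifacts(bundle: set, includes_restoration: bool = False) -> set:
    """Return required artifacts absent from a test's evidence bundle.

    Restoration evidence is required only when the test included
    technical recovery (backup restores, failover, etc.).
    """
    required = set(REQUIRED_ARTIFACTS)
    if includes_restoration:
        required.add("restoration_evidence")
    return required - bundle
```

Run against each bundle at test close-out, this gives a yes/no answer to "can we prove this test end to end?" for every entry on the calendar.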

Daydream fit (where it earns its place): If you struggle to keep continuity testing evidence consistent across sites and third parties, Daydream can centralize the test calendar, artifacts, corrective actions, and plan-version evidence so you can answer audits without hunting across email, SharePoint, and ticketing systems.

Common exam/audit questions and hangups

Expect these questions in a C2M2-aligned assessment or resilience review:

  • “Show me your defined testing frequency and who approved it.” (Cybersecurity Capability Maturity Model v2.1)
  • “Which plans are in scope, and how do you ensure every critical plan gets tested?”
  • “Prove the last tests occurred on schedule; show results.”
  • “What changed in the plan because of the test?”
  • “How do you verify third-party dependencies (telecom, managed services, OT vendors) during continuity events?”
  • “Where do you track corrective actions, and how do you know they are closed?” (Cybersecurity Capability Maturity Model v2.1)

Hangups that slow teams down:

  • Tests exist, but evidence is scattered.
  • Plans were updated “informally” without a change log.
  • Tests cover IT, but exclude OT operational constraints and safety requirements.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Calling a document review a “test.”
    Avoid: Label reviews as reviews. Run at least some exercises that force role execution, communications, and recovery steps. (Cybersecurity Capability Maturity Model v2.1)

  2. Mistake: No defined cadence, only ad hoc exercises.
    Avoid: Publish a test calendar and tie it to your “organization-defined frequency.” (Cybersecurity Capability Maturity Model v2.1)

  3. Mistake: Testing stops at the IT boundary.
    Avoid: Map dependencies end-to-end. Include OT engineering, operations, facilities, and the third parties you need to restore service.

  4. Mistake: Findings don’t drive plan updates.
    Avoid: Require a “plan update decision” field for each finding: update now, update later (with date), or no update (with rationale). (Cybersecurity Capability Maturity Model v2.1)

  5. Mistake: No proof of restoration.
    Avoid: Capture restoration evidence for critical systems when safe and feasible, or document why a live restore test is constrained and how you validate readiness another way. (Cybersecurity Capability Maturity Model v2.1)

Enforcement context and risk implications

No public enforcement cases were provided in the cited source catalog for this specific requirement, so this page does not list enforcement actions.

Operationally, weak continuity plan testing shows up as:

  • Recovery time and recovery steps that exist only on paper
  • Inability to defend resilience claims in customer diligence
  • Higher likelihood that a cyber event turns into a prolonged outage because recovery roles, access, and dependencies were never validated (Cybersecurity Capability Maturity Model v2.1)

Practical 30/60/90-day execution plan

First 30 days (stabilize and define)

  • Inventory continuity of operations plans in scope and assign owners.
  • Write/refresh the continuity testing procedure: what counts as a test, evidence requirements, and where artifacts live. (Cybersecurity Capability Maturity Model v2.1)
  • Define your organization’s testing frequency with a short rationale and approval record.
  • Build a rolling test calendar for the scoped environment.

By 60 days (run and learn)

  • Execute at least one scoped exercise for a high-criticality service, including communications and decision-making.
  • Stand up a corrective action log with owners and due dates.
  • Start collecting standardized evidence bundles per test (script, attendance, results, action items). (Cybersecurity Capability Maturity Model v2.1)

By 90 days (prove repeatability)

  • Execute a second exercise of a different type (for example, tabletop plus restoration validation, where feasible).
  • Update plans based on findings and publish a change log that ties updates to tests. (Cybersecurity Capability Maturity Model v2.1)
  • Report to leadership: what was tested, top gaps, and remediation status. Confirm the next test dates remain on schedule.

Frequently Asked Questions

Does C2M2 require a specific testing interval (quarterly, annually, etc.)?

No. The requirement says testing occurs at an organization-defined frequency, so you choose the cadence and must be able to defend and demonstrate it. (Cybersecurity Capability Maturity Model v2.1)

What counts as “testing” versus a tabletop discussion?

A tabletop can be a valid test if it has a defined scenario, roles, success criteria, results, and documented corrective actions. For critical services, add exercises that validate execution steps such as restoration, access, and dependency handoffs. (Cybersecurity Capability Maturity Model v2.1)

How do I prove we “updated” the continuity plan?

Keep versioned plans and a change log that links updates to specific test findings, plus approvals where your governance requires them. Auditors want traceability from test result to plan change. (Cybersecurity Capability Maturity Model v2.1)

Do third parties need to participate in continuity plan tests?

If a third party is required to restore or operate a critical service, you should test that dependency in some form (direct participation, validated contact paths, documented recovery steps, or coordinated exercises). Otherwise your test may not reflect real recovery conditions. (Cybersecurity Capability Maturity Model v2.1)

Our OT environment can’t support live failover testing. How do we meet the requirement?

Document the constraint, run controlled exercises that validate people/process steps, and use safe technical validations where feasible (for example, restore to an isolated environment or validate backups and runbooks through non-production evidence). Then update plans based on the outcomes. (Cybersecurity Capability Maturity Model v2.1)

What’s the minimum evidence package I should keep per test?

Keep the test scope and scenario, participant list, results report, corrective action list with owners, and proof of any plan updates that followed. Add restoration evidence for tests that include technical recovery. (Cybersecurity Capability Maturity Model v2.1)

