CP-13: Alternative Security Mechanisms

To meet the CP-13 Alternative Security Mechanisms requirement, you must define and implement backup security controls (alternative mechanisms) that can still enforce required security functions when the primary control is unavailable or compromised, and you must be able to prove they work. This is an operational requirement: pick alternatives, document when they trigger, test them, and retain evidence. 1

Key takeaways:

  • Identify security functions that can fail and pre-approve alternative mechanisms for each.
  • Define clear trigger conditions, roles, and runbooks so teams can switch without improvising.
  • Test alternatives and retain proof that the switch works under realistic failure and compromise scenarios.


CP-13 sits in the Contingency Planning family, but it is not a “disaster recovery” control in the narrow sense. It is a requirement to maintain security functions when your primary means of implementing those functions is unavailable or cannot be trusted. That distinction matters during audits: you are not only restoring business operations; you are preserving security outcomes like authentication, authorization, logging, encryption, key management, malware prevention, and policy enforcement.

Operationally, CP-13 forces you to answer a hard question: “If this control fails or is compromised, how do we still meet the requirement it supports?” In practice, you will implement pre-approved, technically feasible fallbacks (or compensating controls) and document when and how to activate them. For many environments, this looks like secondary identity providers, break-glass access with strong oversight, alternate log pipelines, secondary key management procedures, or compensating monitoring when an inline control is bypassed.

This page gives requirement-level implementation guidance you can execute quickly: scope, owners, step-by-step build, test approach, evidence to retain, and audit questions that commonly stall teams.

Requirement: CP-13 alternative security mechanisms (plain-English)

Plain-English interpretation: For each critical security function your system must provide, you need an approved alternative way to achieve that function if the primary method is down or compromised. You must be able to switch deliberately, operate securely in the alternate mode, and show evidence to assessors that the alternate mechanism is real, tested, and governed. 1

Think of CP-13 as “security control continuity.” Downtime is one driver; loss of trust is the other. If the primary mechanism might be compromised (for example, an identity provider suspected of token-signing key exposure), CP-13 expects you to have a safe alternate path that does not depend on the compromised component.

Regulatory text

NIST’s control statement is:

“Employ {{ insert: param, cp-13_odp.01 }} for satisfying {{ insert: param, cp-13_odp.02 }} when the primary means of implementing the security function is unavailable or compromised.” 1

What an operator must do with this text:

  • Treat the bracketed parameters as your implementation choices: define what your “alternative security mechanisms” are and which “security functions” they satisfy.
  • Cover both conditions explicitly: unavailable (outage, service disruption, dependency failure) and compromised (integrity loss, suspected attacker control, untrusted outputs).
  • Make it operational: identify triggers, decision authority, and documented procedures to activate alternatives, then validate through testing and evidence. 2

Who it applies to (entity + operational context)

Entities: Federal information systems and contractor systems handling federal data commonly inherit CP-13 expectations when NIST SP 800-53 is in scope for the system’s security plan. 1

Operational contexts where CP-13 becomes audit-visible:

  • High-availability environments where security functions are delivered by managed services (IdP, EDR, WAF, SIEM, KMS).
  • Segmented environments where an inline control can fail open/closed and create either an outage or an uncontrolled bypass.
  • Incident response scenarios where you must disable a compromised control quickly without dropping required protections (for example, revoking a logging agent suspected of tampering, while keeping audit logs intact).
  • Hybrid and multi-cloud architectures where control dependencies differ by platform (security groups vs. network firewalls vs. service mesh policy).

What you actually need to do (step-by-step)

Step 1: Assign ownership and define assessment scope

  1. Name a control owner (usually Security Engineering or GRC with platform owners as implementers).
  2. Define the system boundary where CP-13 applies (match your SSP boundary if you maintain one).
  3. Decide the security functions in scope by mapping CP-13 to the requirements you must keep meeting during outages/compromise (authentication, access control enforcement, logging, encryption, malware protection, network filtering). 2

Deliverable: a CP-13 control record that lists owner, in-scope functions, and dependencies. This aligns with the practical guidance to map CP-13 to an owner, procedure, and recurring evidence. 1
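The control record from Step 1 can start as simple structured data kept in version control. The sketch below is a minimal, illustrative example; the field names and boundary/system values are assumptions, not a NIST-mandated schema.

```python
# Minimal CP-13 control record sketch. All field names and values are
# illustrative assumptions, not a NIST-mandated schema.
control_record = {
    "control_id": "CP-13",
    "owner": "Security Engineering",
    "system_boundary": "prod-payments",  # match your SSP boundary if you have one
    "in_scope_functions": [
        "authentication",
        "access control enforcement",
        "audit logging",
        "encryption/key management",
    ],
    "dependencies": {
        "authentication": ["primary-idp"],
        "audit logging": ["log-forwarder", "siem"],
    },
}

def validate_record(record):
    """Return the required fields that are missing or empty."""
    required = ("control_id", "owner", "system_boundary", "in_scope_functions")
    return [k for k in required if not record.get(k)]

print(validate_record(control_record))  # [] means the record is complete
```

A check like this can run in CI so the record never drifts into an unowned or unscoped state between assessments.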

Step 2: Build a “security function to alternative mechanism” matrix

Create a table with columns:

  • Security function (what must still happen)
  • Primary mechanism (what normally provides it)
  • Failure mode (unavailable vs. compromised)
  • Alternative mechanism (what takes over)
  • Activation method (manual runbook, automated failover, configuration toggle)
  • Approvals/roles (who can switch)
  • Residual risk (what gets weaker in fallback mode)
  • Monitoring (what alerts confirm it’s active)

Example entries (adapt to your stack):

  • Authentication: Primary IdP → Alternative IdP or cached/short-lived emergency accounts with MFA and strict logging.
  • Logging: Primary log forwarder/SIEM route → Secondary collector or immutable storage sink with separate credentials.
  • Key management: Primary KMS region/service → Secondary region/provider with documented key escrow/rotation constraints.
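The matrix above is also useful as machine-readable data, because you can then lint it for coverage gaps. A sketch, with illustrative entries mirroring the table columns (mechanism names and approver roles are assumptions):

```python
# Security function -> alternative mechanism matrix as data.
# Keys mirror the table columns above; entries are illustrative examples.
matrix = [
    {
        "function": "authentication",
        "primary": "Primary IdP",
        "failure_modes": ["unavailable", "compromised"],
        "alternative": "Secondary IdP / break-glass accounts with MFA",
        "activation": "manual runbook",
        "approvers": ["IAM lead", "IR commander"],
        "residual_risk": "reduced SSO coverage during fallback",
        "monitoring": "alert on break-glass account login",
    },
    {
        "function": "audit logging",
        "primary": "Primary log forwarder -> SIEM",
        "failure_modes": ["unavailable"],  # missing the "compromised" case
        "alternative": "Secondary collector -> immutable storage sink",
        "activation": "automated failover",
        "approvers": ["SecOps on-call"],
        "residual_risk": "delayed correlation and search",
        "monitoring": "heartbeat on secondary sink",
    },
]

def coverage_gaps(matrix):
    """Flag rows that do not cover the 'compromised' condition CP-13 names."""
    return [row["function"] for row in matrix
            if "compromised" not in row["failure_modes"]]

print(coverage_gaps(matrix))  # ['audit logging']
```

Linting for both failure modes catches the most common CP-13 gap (outage-only planning) before an assessor does.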

Step 3: Define trigger criteria and decision authority

You need explicit “go/no-go” conditions so the team does not improvise in a crisis:

  • Unavailable triggers: dependency outage, control plane degradation, certificate expiry preventing service, failed health checks.
  • Compromised triggers: integrity alarms, confirmed credential theft, signing key exposure, tamper evidence, suspicious admin actions.

Document:

  • Who can declare the primary mechanism “untrusted.”
  • Who can activate the alternative.
  • What approvals are required for risky fallbacks (for example, break-glass).

Tie these decisions into incident response and change management workflows so every switch is tracked.
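The go/no-go logic above can be sketched as a small classifier so on-call tooling surfaces the right condition and the right approver. Trigger names and the authority mapping are illustrative assumptions; wire them to your real telemetry and IR roles.

```python
# Go/no-go trigger classification sketch. Trigger names and the authority
# mapping are illustrative assumptions, not prescribed by NIST.
UNAVAILABLE_TRIGGERS = {"dependency_outage", "failed_health_checks", "cert_expiry"}
COMPROMISED_TRIGGERS = {"integrity_alarm", "credential_theft", "signing_key_exposure"}

def classify(observed: set) -> str:
    """Return which CP-13 condition (if any) the observed signals indicate.

    'compromised' outranks 'unavailable' because it changes the decision
    authority: an IR commander must declare the primary untrusted, whereas
    an outage can be handled by on-call per runbook.
    """
    if observed & COMPROMISED_TRIGGERS:
        return "compromised"
    if observed & UNAVAILABLE_TRIGGERS:
        return "unavailable"
    return "no-action"

print(classify({"failed_health_checks"}))                  # unavailable
print(classify({"dependency_outage", "integrity_alarm"}))  # compromised
```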

Step 4: Write runbooks that operators can execute under pressure

Each alternative mechanism needs a short runbook:

  • Preconditions and safety checks (confirm compromise vs. outage, preserve evidence).
  • Activation steps (commands, toggles, routing changes).
  • Validation checks (what “good” looks like, how to confirm coverage).
  • Rollback steps and post-event actions (credential rotation, retrospective, update triggers).

Keep the runbook runnable by the on-call team. If it requires a niche engineer, it will fail when you need it.
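One way to keep a runbook runnable under pressure is to encode it as an executable checklist that stops at the first failed safety check. A sketch, where the step names and checks are illustrative stand-ins for your real activation commands:

```python
# Runbook-as-checklist sketch. Each step pairs a description with a check so
# the operator gets explicit pass/fail instead of improvising. Step contents
# are illustrative placeholders for real activation and validation commands.
def run_runbook(steps):
    results = []
    for name, check in steps:
        ok = bool(check())
        results.append((name, ok))
        if not ok:
            break  # stop at the first failed precondition or validation
    return results

steps = [
    ("precondition: confirm outage vs. compromise", lambda: True),
    ("activate: route auth traffic to secondary IdP", lambda: True),
    ("validate: test login succeeds via alternate path", lambda: True),
]

for name, ok in run_runbook(steps):
    print(("PASS" if ok else "FAIL"), name)
```

The pass/fail transcript doubles as CP-13 evidence when captured in the incident or change ticket.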

Step 5: Engineer separation so the alternative is not the same single point of failure

Audit hangups often appear when “alternative” is still dependent on the same identity store, same keys, same admin plane, or same network path. Design for real independence where feasible:

  • Separate credentials and admin roles.
  • Separate logging path and storage destination.
  • Separate regions/accounts/tenants when your threat model includes tenant compromise.
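A quick way to surface the shared single point of failure is to diff the dependency lists of the primary and alternative mechanisms. A sketch with illustrative dependency names:

```python
# Independence check sketch: flag dependencies (identity store, keys, admin
# plane, network path) the alternative shares with its primary. Each shared
# entry is a common single point of failure. Names are illustrative.
def shared_dependencies(primary_deps, alternative_deps):
    return sorted(set(primary_deps) & set(alternative_deps))

primary = ["corp-idp", "kms-us-east", "admin-vpn"]
alternative = ["breakglass-accounts", "kms-us-east", "admin-vpn"]  # not independent

print(shared_dependencies(primary, alternative))  # ['admin-vpn', 'kms-us-east']
```

An empty result does not prove independence (hidden transitive dependencies exist), but a non-empty one is a documented gap to fix or compensate for.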

Step 6: Test both “unavailable” and “compromised” scenarios

Testing must prove two things: (1) you can switch, and (2) the security function still meets required intent. 2

Practical test methods:

  • Tabletop: validate decision points, approvals, and runbook clarity.
  • Technical failover: simulate outage of the primary mechanism.
  • Compromise simulation: assume the primary control cannot be trusted and prove the alternative does not rely on it.

Step 7: Operationalize recurring evidence

Build a lightweight evidence cadence:

  • After each test: capture artifacts (screenshots, logs, tickets).
  • After each change to a primary mechanism: review whether the alternative still works.
  • After incidents: record whether CP-13 alternatives were used, and what changed.
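A predictable artifact-naming convention makes the cadence auditable, because every test, change review, and incident maps back to CP-13 without a manual index. The convention below is an illustrative assumption, not an audit requirement:

```python
# Evidence-naming sketch. The path convention is an illustrative assumption,
# not an audit requirement; the point is that it is deterministic.
from datetime import date

def evidence_name(event_type: str, system: str, when: date) -> str:
    allowed = {"test", "change-review", "incident"}
    if event_type not in allowed:
        raise ValueError(f"event_type must be one of {sorted(allowed)}")
    return f"CP-13/{system}/{when.isoformat()}-{event_type}.md"

print(evidence_name("test", "prod-payments", date(2024, 6, 1)))
# CP-13/prod-payments/2024-06-01-test.md
```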

If you run Daydream for control operations, treat CP-13 like a living control: owner assignment, procedure versioning, and recurring evidence collection are the core mechanics you want centralized for audit readiness. 1

Required evidence and artifacts to retain

Minimum artifacts auditors commonly expect for the CP-13 Alternative Security Mechanisms requirement:

  • CP-13 control narrative: what alternatives exist and what functions they cover. 1
  • Security function → alternative mechanism matrix (the table described above).
  • Runbooks with version history and approval.
  • Access control records: who can activate break-glass or failover paths.
  • Test records: test plan, execution evidence, results, and remediations.
  • Change/incident tickets showing actual activations (or documented confirmation that alternatives remain ready).
  • Architecture diagrams showing primary and alternative paths.

Common audit questions and hangups

Assessors tend to probe CP-13 with questions like:

  • “List the security functions you consider critical and show the alternative mechanism for each.” 2
  • “What’s your process if the primary control is compromised, not just down?”
  • “Show evidence you tested the failover in the last test cycle.”
  • “How do you prevent emergency access from becoming a permanent bypass?”
  • “Is the alternative mechanism independent, or does it share the same credentials/control plane?”

Frequent implementation mistakes (and how to avoid them)

  1. Only designing for outages, not compromise.
    Fix: write two paths in the runbook. For compromise, include steps to isolate, rotate credentials, and avoid relying on the suspect component.

  2. Calling “manual work” an alternative mechanism without controls.
    Fix: if the fallback is manual, wrap it in approvals, strong authentication, logging, and time limits. Document residual risk.

  3. Alternative depends on the same root of trust.
    Fix: separate admin roles, keys, and log sinks. If you cannot, document the dependency and add compensating monitoring.

  4. No evidence package.
    Fix: predefine what screenshots/log excerpts/tickets are saved after each test and store them in a consistent location mapped to CP-13. Missing evidence is a named risk factor for CP-13 readiness. 1

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this control. Practically, CP-13 gaps show up as:

  • Increased breach impact when a primary security control fails and teams disable it without a governed fallback.
  • Audit findings framed as “control not implemented” or “not operating effectively” because alternatives are undocumented or untested.
  • Operational risk during incident response when the team cannot maintain required security protections under degraded conditions. 2

Practical execution plan (30/60/90)

First 30 days: establish design and governance

  • Assign CP-13 owner and identify stakeholders across IAM, SecOps, Cloud/Platform, and App owners.
  • Build the security function → alternative mechanism matrix for the system boundary.
  • Draft runbooks and define trigger criteria and decision authority.
  • Stand up an evidence folder structure and naming convention mapped to CP-13. 1

By 60 days: implement and validate alternatives for highest-risk functions

  • Implement alternatives for top failure points (often IAM, logging, and key management).
  • Restrict and monitor activation permissions (break-glass, failover toggles).
  • Run at least one tabletop plus one technical simulation per high-risk function; record evidence and remediation tickets. 2

By 90 days: expand coverage and make it repeatable

  • Extend alternatives to remaining in-scope functions and document residual risk where alternatives are limited.
  • Integrate CP-13 checks into change management (control changes trigger review of alternatives).
  • Convert test execution into a repeatable control operation with recurring evidence capture and owner attestations. Daydream can centralize the control record, evidence requests, and audit-ready exports so CP-13 does not degrade between assessments. 1

Frequently Asked Questions

Does CP-13 require a fully redundant security stack for every control?

No. CP-13 requires alternative mechanisms sufficient to satisfy required security functions when the primary means is unavailable or compromised. Document where full redundancy is not feasible and add compensating controls plus clear residual risk acceptance. 2

What counts as “compromised” for CP-13 purposes?

Treat “compromised” as a loss of integrity or trust in the primary mechanism, not only downtime. Define concrete triggers such as suspected credential theft, signing key exposure, or evidence of tampering, and tie them to incident response decision authority. 2

Can break-glass admin accounts satisfy CP-13 for authentication and access control?

They can, if they are governed: strong authentication, restricted membership, time-bounded use, full logging, and a runbook that includes post-use credential rotation where appropriate. Auditors will ask for evidence that emergency access is controlled and tested. 2

How do we prove the alternative mechanism works?

Keep test records that show activation steps, validation checks, and outcomes, plus the tickets that tracked any fixes. Pair that with diagrams and configuration evidence showing the alternative path exists and is accessible under the defined triggers. 2

What if the alternative mechanism introduces weaker security than the primary?

Document the residual risk explicitly, limit the scope and duration of the fallback mode, and add monitoring to detect misuse. CP-13 is about maintaining required security functions under stress, so risk tradeoffs must be governed and reviewable. 2

How should we map CP-13 into our control management workflow?

Assign a control owner, document the procedure/runbooks, and define recurring evidence artifacts (tests, tickets, approvals) so you can show consistent operation. This mapping is a recommended best practice for CP-13 readiness. 1

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream