RC.RP-05: The integrity of restored assets is verified, systems and services are restored, and normal operating status is confirmed

RC.RP-05 requires you to prove that restored systems are trustworthy and fully back in service after an incident: verify the integrity of restored assets, restore required services in the right order, and confirm “normal operations” with defined acceptance criteria and business sign-off. Build this into runbooks, testing, and evidence bundles.

Key takeaways:

  • Define “normal operating status” per service (technical + business criteria) before the incident happens.
  • Verify restored asset integrity (not just availability) using hashes, known-good baselines, validation tests, and security checks.
  • Capture operator-grade evidence: restore logs, integrity checks, test results, and formal return-to-service approvals.

A restore is not “done” when a server boots or a SaaS admin clicks “recover.” RC.RP-05 focuses on the hard part of recovery: proving that what you restored is correct, unaltered, and safe to run, then demonstrating that systems and services are actually delivering expected outcomes under normal conditions. This requirement is written at the outcome level in NIST CSF 2.0, which means you must translate it into measurable, repeatable operational steps that work during an incident and hold up under internal audit, customer diligence, and regulator questions.[1]

For a CCO, compliance officer, or GRC lead, the fastest path to operationalizing RC.RP-05 is to (1) define restore acceptance criteria by service tier, (2) embed integrity verification checkpoints in recovery runbooks and tooling, and (3) require explicit “return to normal” confirmation with evidence and accountable sign-off. The control should also extend to third parties that participate in your recovery path (cloud hosting, managed service providers, backup providers, and critical SaaS). Your goal is simple: if you must defend your recovery decision later, you can show what you checked, who approved it, and why it was safe.[2]

Regulatory text

Requirement (RC.RP-05): “The integrity of restored assets is verified, systems and services are restored, and normal operating status is confirmed.”[3]

What the operator must do:
You must run recovery in a controlled, verifiable way. That means:

  • Integrity verification: confirm restored data, configurations, images, and identities are authentic, complete, and not tampered with.
  • Service restoration: restore the right capabilities (not just infrastructure) and validate dependencies.
  • Normal operations confirmation: define what “normal” means and collect proof that you have returned to that state, with accountable approval.

This requirement is routinely misunderstood as “we have backups.” RC.RP-05 expects you to show integrity checks and operational confirmation, not just the existence of backup tooling.[2]
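To make “integrity verification” concrete, here is a minimal sketch of a hash-based check: restored files are compared against a known-good manifest captured before the incident. The function names and manifest shape are illustrative assumptions, not prescribed by the requirement text.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large restores don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(restore_root: Path, manifest: dict) -> list:
    """Return relative paths whose hash is missing or does not match the
    known-good manifest (hypothetical format: {relative_path: sha256_hex})."""
    failures = []
    for rel_path, expected in manifest.items():
        target = restore_root / rel_path
        if not target.is_file() or sha256_of(target) != expected:
            failures.append(rel_path)
    return failures
```

An empty failure list becomes the machine-generated artifact attached to the evidence bundle; a non-empty list is a “no-go” condition for return to service.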

Plain-English interpretation (what RC.RP-05 really means)

RC.RP-05 is your “safe to resume” control. After an outage or cyber incident, you must be able to answer three questions with evidence:

  1. Did we restore the right thing? (correct point-in-time, correct scope, correct tenant/account, correct environment)
  2. Is the restored thing trustworthy? (no unauthorized changes, no reintroduced malware, no corrupted data, no broken access controls)
  3. Are we truly back to normal? (production services function as expected, monitoring is clean, business owners agree to resume)

If you can’t prove these, you risk restoring compromised assets, re-triggering an incident, corrupting customer data, or creating a misleading “all clear.”

Who it applies to (entity and operational context)

Applies to: any organization operating a cybersecurity program and performing recovery after incidents, including hybrid and cloud-native environments.[1]

Operational contexts where RC.RP-05 is examined hardest:

  • Ransomware or destructive attacks where backup integrity and “clean restore” are critical.
  • Identity compromise where restoring systems without fixing access paths recreates the incident.
  • Cloud/SaaS recoveries where “restore” often means configuration changes, re-provisioning, or provider-driven recovery.
  • Third-party dependencies (managed detection/response, managed backups, cloud hosting, payment processors) that affect your restoration sequence and your ability to validate integrity.

What you actually need to do (step-by-step)

Use this as a requirement-level runbook blueprint. Keep it short enough to run during a crisis, but specific enough to generate audit-grade evidence.

1) Define restore acceptance criteria per service (before an incident)

Create a one-page Recovery Acceptance Criteria sheet per Tier-1/Tier-2 service:

  • Integrity criteria: checksums/hashes match known-good, gold image signatures, database consistency checks, IaC state matches approved repo commit, privileged access review complete.
  • Security criteria: EDR healthy, critical detections cleared, vulnerabilities addressed per emergency standard, secrets/keys rotated if compromise suspected.
  • Operational criteria: dependency map satisfied, background jobs caught up, queues stable, monitoring green, error rates within defined thresholds.
  • Business criteria: key workflows function (order processing, claims, payments, customer login), reconciliations complete where needed, customer-impacting comms approved.

Assign an owner (service owner) and a sign-off role (business + IT). This directly addresses the CSF expectation to translate outcomes into measurable actions and accountable ownership.[2]
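The acceptance criteria sheet can be kept as structured data so sign-off readiness is computed rather than eyeballed. This is a hypothetical sketch; the field names and the rule that every check needs attached evidence are our assumptions, not CSF text.

```python
from dataclasses import dataclass

@dataclass
class AcceptanceCriterion:
    # One measurable check from the service's Recovery Acceptance Criteria sheet.
    name: str
    category: str           # "integrity", "security", "operational", or "business"
    passed: bool = False
    evidence_ref: str = ""  # pointer to the proving artifact (log ID, report path)

def ready_for_signoff(criteria: list) -> tuple:
    """A service is ready for return-to-service sign-off only when every
    criterion passed AND has an evidence reference attached."""
    blockers = [c.name for c in criteria if not (c.passed and c.evidence_ref)]
    return (not blockers, blockers)
```

The blocker list doubles as the exception log: anything on it either gets remediated or goes through documented residual-risk acceptance.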

2) Build “restore integrity verification” checkpoints into technical procedures

For each asset type, define the verification method and who performs it:

Examples of integrity verification methods

  • VMs/containers: restore from signed images; validate image digests; confirm runtime attestation if available; compare against baseline configs.
  • Databases: run consistency checks; validate replication status; reconcile row counts or key aggregates; confirm correct restore point.
  • Files/object storage: verify checksums; validate versioning history; spot-check critical directories; confirm no unexpected encryption artifacts.
  • Identity/IAM: confirm break-glass accounts, MFA, conditional access policies; review privileged role assignments since last known-good.
  • Network/security tools: validate firewall rules and security groups match approved templates; confirm logging and alerting pipelines work.

Make these steps explicit in runbooks. During an incident, nobody should be inventing verification tests from scratch.
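For configuration-type assets (firewall rules, security groups, IaC state), the “match approved templates” check above can be a simple baseline diff. A minimal illustrative sketch, assuming configurations are flattened to key/value pairs:

```python
def config_drift(approved: dict, restored: dict) -> dict:
    """Compare a restored configuration against the approved baseline.
    Returns {key: (approved_value, restored_value)} for every mismatch,
    including keys present on only one side (reported as None)."""
    drift = {}
    for key in approved.keys() | restored.keys():
        a, r = approved.get(key), restored.get(key)
        if a != r:
            drift[key] = (a, r)
    return drift
```

The returned dict, saved as-is, is exactly the “configuration comparison result” the evidence bundle calls for.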

3) Restore services in an approved order (dependency-based)

Create and maintain a Service Restoration Sequence:

  • List business services, their upstream dependencies (identity, network, DNS, PKI, logging, databases), and restoration prerequisites.
  • Identify “no-go” conditions (for example, production app cannot return until identity controls are confirmed and logging is functional).
  • Include third-party prerequisites (cloud provider support ticket status, SaaS restore completion notice, managed backup job completion logs).

This avoids the common failure mode where teams bring the application up before identity, monitoring, or logging is trustworthy.
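The dependency-based sequence above is a topological-sort problem, and Python's standard library can order it directly. The service names here are hypothetical examples:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def restoration_order(dependencies: dict) -> list:
    """Given {service: set of upstream dependencies}, return an order in
    which every service is restored only after its prerequisites.
    Raises graphlib.CycleError if the dependency map is circular."""
    return list(TopologicalSorter(dependencies).static_order())

# Hypothetical Tier-1 dependency map: app needs identity and the database,
# and both of those need the network restored first.
deps = {
    "app": {"database", "identity"},
    "database": {"network"},
    "identity": {"network"},
    "network": set(),
}
```

A `CycleError` during planning is itself a useful finding: it means the documented sequence cannot actually be executed and needs to be fixed before an incident.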

4) Confirm “normal operating status” using objective signals + sign-off

Define a Return-to-Service Checklist that must be completed for each service:

  • Technical validation: synthetic transactions pass; monitoring dashboards reviewed; alert backlog triaged; capacity and latency checks.
  • Security validation: incident eradication steps complete for that service; detections reviewed; re-compromise paths closed.
  • Business validation: service owner confirms workflows; support desk confirms ticket trends; finance/ops confirms reconciliations if relevant.
  • Approval record: timestamped approval from accountable owner(s) with any conditions or exceptions.
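Approval records can be captured as timestamped entries so “who approved, and when” is machine-checkable. The required-role set below is an assumption drawn from the checklist above; adjust it per your policy (for example, add a business owner for high-impact services):

```python
from datetime import datetime, timezone

# Assumption: incident commander + service owner are the minimum sign-off set.
REQUIRED_ROLES = {"incident_commander", "service_owner"}

def add_approval(approvals: list, role: str, approver: str) -> list:
    """Append a timestamped approval record; timestamps keep the audit trail defensible."""
    approvals.append({
        "role": role,
        "approver": approver,
        "approved_at": datetime.now(timezone.utc).isoformat(),
    })
    return approvals

def return_to_service_approved(approvals: list) -> bool:
    """True only when every required role has an approval on record."""
    return REQUIRED_ROLES <= {a["role"] for a in approvals}
```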

5) Run periodic control performance reviews (so it works when needed)

RC.RP-05 will fail in practice if it only exists as a policy statement. Establish a recurring review cycle:

  • sample a recent restore, DR test, or tabletop and inspect evidence completeness;
  • track exceptions (missing integrity checks, incomplete sign-offs, unclear “normal” criteria) with remediation owners and due dates;
  • report outcomes to security and operational leadership.

This matches the CSF program expectation for measurable indicators, exception handling, and management review.[2]

Required evidence and artifacts to retain

Keep an “RC.RP-05 evidence bundle” per incident and per test. Auditors look for objective proof and clear accountability.

Minimum evidence set (practical):

  • Recovery runbooks with embedded integrity verification steps and restoration sequence.
  • Asset restore logs: backup job IDs, restore timestamps, target environments, restore points.
  • Integrity verification outputs: hash/checksum reports, DB consistency check outputs, IaC drift reports, configuration comparison results.
  • Security validation records: EDR status screenshots/exports, key rotation records, privileged access review notes, monitoring/logging confirmation.
  • Return-to-service checklist completed and signed by service owner and incident commander.
  • Post-incident report section summarizing what was restored, what was verified, exceptions, and residual risk acceptance (if any).
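A minimal sketch of assembling such a bundle: zip the artifacts together with a manifest of SHA-256 hashes so the bundle itself is tamper-evident. The file layout and names are illustrative assumptions:

```python
import hashlib
import json
import zipfile
from pathlib import Path

def build_evidence_bundle(artifacts: list, out_path: Path) -> Path:
    """Package restore logs, check outputs, and sign-offs into one zip,
    alongside a manifest.json of per-file SHA-256 hashes."""
    manifest = {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
                for p in artifacts}
    with zipfile.ZipFile(out_path, "w") as z:
        for p in artifacts:
            z.write(p, arcname=p.name)
        z.writestr("manifest.json", json.dumps(manifest, indent=2))
    return out_path
```

One zip per incident or test, named by event ID, gives auditors the single exportable packet described above.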

If you use Daydream, aim for a single exportable packet per recovery event: checklist, approvals, and attached logs in one place. That reduces the “evidence scavenger hunt” that derails audits.

Common exam/audit questions and hangups

Expect these questions from internal audit, regulators, and customer assessors:

  • “Show me how you verify restored data has not been altered.”
  • “Who decides a system is safe to return to production, and where is that approval recorded?”
  • “Define ‘normal operating status’ for your top services. Where are the criteria documented?”
  • “How do you prevent restoring malware or reintroducing compromised accounts?”
  • “Demonstrate that third parties involved in backup/restore meet your recovery requirements.”
  • “Provide evidence from the last incident or DR exercise: logs, test results, and sign-offs.”

Hangup: teams produce a DR test report but no integrity verification outputs. Fix this by making integrity checks a required checklist item with attached artifacts.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: treating “restore completed” as “service restored.” Why it fails RC.RP-05: infrastructure may be up while workflows fail. How to avoid it: validate at the service level with synthetic tests and business checks.
  • Mistake: no definition of “normal operating status.” Why it fails RC.RP-05: you can’t prove recovery is complete. How to avoid it: document acceptance criteria per service, including security and business signals.
  • Mistake: integrity verification is informal (“it looked fine”). Why it fails RC.RP-05: auditors need objective evidence. How to avoid it: require recorded outputs: hashes, DB checks, config diffs, screenshots/exports.
  • Mistake: identity controls restored last. Why it fails RC.RP-05: recompromise risk stays open. How to avoid it: make IAM and logging prerequisites for production return.
  • Mistake: third-party recovery is a blind spot. Why it fails RC.RP-05: dependencies can block or invalidate restoration. How to avoid it: include third-party confirmations, tickets, and restore evidence in the bundle.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so you should treat RC.RP-05 primarily as a defensible operational expectation under NIST CSF adoption and as a common due diligence focus from customers and sector regulators. The practical risk is straightforward: if you cannot prove restored integrity and normal operations, you can extend outages, trigger repeat incidents, and create contested post-incident narratives about whether it was safe to resume services.[1]

Practical 30/60/90-day execution plan

Use phased milestones instead of dates you can’t meet during incident response.

First 30 days (Immediate foundation)

  • Inventory Tier-1 services and name accountable service owners.
  • Draft Recovery Acceptance Criteria sheets for Tier-1 services (one page each).
  • Update incident response and recovery runbooks to include integrity verification checkpoints.
  • Create a standardized Return-to-Service Checklist template with required evidence fields.

Days 31–60 (Operationalize and test)

  • Run a recovery exercise for one Tier-1 service and generate a full RC.RP-05 evidence bundle.
  • Tighten tooling hooks: ensure logs, checksum outputs, EDR health, and change records are exportable.
  • Add third-party recovery dependencies to the Service Restoration Sequence (cloud, MSP, backup provider, critical SaaS).
  • Train incident commanders and service owners on sign-off expectations.

Days 61–90 (Make it repeatable and auditable)

  • Expand acceptance criteria and runbooks to remaining Tier-1 services.
  • Establish periodic control performance reviews with exception tracking and remediation owners.[2]
  • Build a lightweight KPI set (for example: percentage of restores with complete evidence bundles; count of exceptions per test cycle) without inventing metrics that your tooling can’t support.
  • Use Daydream (or your GRC system) to standardize templates, assign owners, and store per-event evidence packets for fast audits.
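The evidence-completeness KPI suggested above can be computed directly from bundle metadata. The artifact-type names here are hypothetical:

```python
def evidence_completeness_kpi(restores: list, required_artifacts: set) -> float:
    """Percentage (0-100) of recovery events whose evidence bundle contains
    every required artifact type. Each event is a dict with an 'artifacts'
    list of artifact-type labels."""
    if not restores:
        return 0.0
    complete = sum(1 for r in restores
                   if required_artifacts <= set(r.get("artifacts", [])))
    return 100.0 * complete / len(restores)
```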

Frequently Asked Questions

Does RC.RP-05 require cryptographic hashes for every restored file?

No specific technique is mandated in the requirement text. You need an integrity verification method that is appropriate for the asset type and risk, and you must retain objective evidence that you performed it.[1]

What does “normal operating status” mean in practice?

It means you predefine acceptance criteria for each critical service and then confirm you meet them after restoration. Include technical health (monitoring, error rates, dependencies) and business validation (critical workflows).[2]

Who should approve the return to service?

Assign accountable approval to the incident commander for coordination and the service owner for functional correctness. For high-impact services, add a business owner sign-off so “normal” is confirmed beyond IT.

How do we handle SaaS where we can’t access underlying restore logs?

Require provider confirmation artifacts (support tickets, status reports, admin audit logs, configuration snapshots) and document compensating checks you can perform (user access validation, configuration review, functional tests).

If we restore from immutable backups, is integrity “automatically verified”?

Immutable storage reduces tampering risk, but RC.RP-05 still expects you to verify correctness and safety of what you restored. You still need validation tests, security checks, and documented confirmation of normal operations.[1]

What evidence is most persuasive to auditors?

Time-stamped restore records, machine-generated integrity check outputs, completed return-to-service checklists, and explicit approvals with named owners. A single organized evidence packet per incident/test prevents gaps.

Footnotes

  1. NIST CSWP 29

  2. NIST CSF 1.1 to 2.0 Core Transition Changes

  3. NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream