The entity identifies, develops, and implements activities to recover from security incidents
To meet this requirement, you need a documented, tested recovery capability that restores impacted services, data, and security controls after an incident, and you must retain evidence that it works in practice. For SOC 2, auditors will expect defined roles, recovery procedures, and recurring exercises with tracked remediation 1.
Key takeaways:
- Recovery is a control set: documented plans, trained owners, and repeatable execution, not an aspirational statement.
- Evidence matters as much as the plan: tests, incident retrospectives, restores, and approvals are the difference between “designed” and “operating.”
- Recovery must cover systems, data, and security tooling, including third-party dependencies that affect your service commitments.
CC7.5 sits in the part of SOC 2 that examiners use to separate “we respond to incidents” from “we can restore normal operations after an incident.” The requirement is plain: you must identify, develop, and implement activities to recover from security incidents 1. In practice, that means you define what “recovered” looks like for your in-scope services, build the runbooks and tooling to get there, and prove you can execute under pressure.
Recovery controls fail for predictable reasons. Teams document incident response but forget restoration steps. Backups exist but restores are not tested. Service owners know the systems but not the decision rights. Third-party outages break you, and nobody has a workaround. Auditors then see a plan without operating evidence.
This page gives requirement-level implementation guidance you can execute quickly: who owns what, what to write down, what to test, and what artifacts to keep so your SOC 2 narrative matches how recovery actually happens. The goal is operational readiness and auditability, with minimal ceremony.
Regulatory text
Requirement (SOC 2 TSC CC7.5): “The entity identifies, develops, and implements activities to recover from security incidents” 1.
What the operator must do:
You must (1) determine what recovery means for your services and supporting systems, (2) design recovery activities that restore availability and integrity after security events, and (3) run those activities in a repeatable way with retained evidence 1. “Activities” typically include restoration runbooks, backup/restore processes, re-hardening steps, credential resets, environment rebuild procedures, and validation checks before returning to normal operations.
Plain-English interpretation (what CC7.5 is really asking)
Auditors are looking for a closed loop: incident occurs → you contain it → you restore affected capabilities → you validate the restoration → you capture lessons learned and fix gaps. CC7.5 is the recovery segment of that loop.
Recovery is broader than disaster recovery. A security incident can require:
- Restoring corrupted or encrypted data.
- Rebuilding hosts and images to a known-good baseline.
- Rotating keys, tokens, and credentials.
- Re-enabling security tools that were disabled or degraded.
- Returning customer-facing services to a stable state with monitoring and guardrails.
Your control story should show that recovery is engineered, not improvised.
Who it applies to (entity and operational context)
Applies to: Service organizations undergoing SOC 2 reporting where the Security category is in scope 1. Practically, that includes SaaS providers, managed service providers, and any organization hosting or processing customer data as part of a defined service.
Operational scope you should assume auditors will test:
- In-scope production environments and the systems that support them (CI/CD, IAM, logging, monitoring, ticketing).
- Data stores, backups, and restoration paths.
- Security tooling required to detect and contain incidents (EDR, SIEM, IAM controls), because recovery often includes re-establishing these controls.
- Material third-party dependencies that, if disrupted, change your ability to recover (cloud provider primitives, managed database services, incident communications platforms). Treat these as recovery inputs you must account for, even if you do not control them.
What you actually need to do (step-by-step)
1) Define “recover” for your service commitments
Create a short recovery definition per in-scope service:
- What must be restored (service functions, datasets, integrations).
- What “safe to resume” means (validation steps, security checks, monitoring thresholds).
- Who can declare recovery complete (named role, not a team).
Output: Recovery scope statement tied to your SOC 2 system description boundaries 1.
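A recovery scope statement is easier to keep current and to audit if it is machine-readable. The sketch below is one illustrative way to structure it, with a completeness check so missing fields surface before an incident does; the field names and the example service are assumptions, not a standard schema.

```python
# Minimal sketch of a machine-readable recovery scope statement.
# Field names and the example service are illustrative, not a mandated format.
REQUIRED_FIELDS = {"service", "restore_targets", "safe_to_resume_checks", "recovery_approver_role"}

recovery_scope = {
    "service": "customer-api",  # hypothetical in-scope service
    "restore_targets": ["api pods", "orders db", "billing webhook integration"],
    "safe_to_resume_checks": [
        "error rate < 1% for 30 min",
        "row counts match last known-good backup manifest",
        "EDR and log shipping confirmed re-enabled",
    ],
    "recovery_approver_role": "Service Owner, Payments",  # a named role, not a team
}

def validate_scope(scope: dict) -> list:
    """Return the sorted list of missing required fields (empty means complete)."""
    return sorted(REQUIRED_FIELDS - scope.keys())

missing = validate_scope(recovery_scope)
print("scope complete" if not missing else f"missing fields: {missing}")
```

Keeping these definitions in version control alongside runbooks means the "who can declare recovery complete" answer is always one `git log` away during an audit.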
2) Inventory recovery dependencies and single points of failure
Build a recovery dependency map that lists:
- Critical systems and data stores.
- Backup locations and restore mechanisms.
- Golden images / infrastructure-as-code sources.
- IAM components (IdP, break-glass access, key management).
- Third-party services needed for recovery (cloud console access, managed backup tooling, DNS, status page).
Tip from audits: If a third party is required to restore service, auditors will ask where that dependency is documented and what your workaround is if it fails.
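The third-party gap in the tip above can be caught mechanically. This is a hedged sketch of a dependency inventory that flags third-party dependencies with no documented workaround; the entries and field names are illustrative assumptions.

```python
# Illustrative recovery dependency inventory. The entries are hypothetical;
# the point is flagging third-party dependencies with no documented fallback.
dependencies = [
    {"name": "orders db", "third_party": False, "restore_via": "managed snapshot"},
    {"name": "IdP", "third_party": True, "workaround": "break-glass local admin accounts"},
    {"name": "DNS provider", "third_party": True, "workaround": None},  # the gap an auditor will ask about
]

def missing_workarounds(deps: list) -> list:
    """Names of third-party dependencies with no documented workaround."""
    return [d["name"] for d in deps if d.get("third_party") and not d.get("workaround")]

print(missing_workarounds(dependencies))  # -> ['DNS provider']
```

Running a check like this in CI against the inventory file turns "where is that dependency documented?" from an audit scramble into a standing control.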
3) Develop recovery runbooks that are executable
Write runbooks that an on-call team can follow. Minimum contents:
- Preconditions (who can run it, access required, approvals required).
- Step-by-step restoration procedure (commands/console paths where feasible).
- Re-hardening actions (patching, configuration baselines, re-enabling logging/EDR).
- Data integrity validation (checksums, application-level verification, sampling approach).
- Customer-impact decision points (feature flags, read-only mode, degraded operations).
- “Stop” conditions (when to escalate, when to halt restoration to preserve evidence).
Keep runbooks version-controlled. Treat them like code: review, approve, and track changes.
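The data integrity validation step in a runbook can be scripted rather than left as "verify the data looks right." Below is a minimal sketch of checksum verification against a known-good manifest, using only the standard library; the file names are illustrative, and real runbooks would layer application-level checks on top.

```python
import hashlib
import pathlib
import tempfile

def sha256_of(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 so large restores aren't read into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(manifest: dict, restore_dir: pathlib.Path) -> list:
    """Return the names of restored files that are missing or fail the checksum manifest."""
    return [
        name for name, expected in manifest.items()
        if not (restore_dir / name).exists() or sha256_of(restore_dir / name) != expected
    ]

# Round-trip demo: a clean restore passes, a tampered file is caught.
with tempfile.TemporaryDirectory() as tmp:
    d = pathlib.Path(tmp)
    (d / "orders.csv").write_bytes(b"id,total\n1,9.99\n")
    manifest = {"orders.csv": sha256_of(d / "orders.csv")}
    print(verify_restore(manifest, d))   # clean restore -> []
    (d / "orders.csv").write_bytes(b"tampered")
    print(verify_restore(manifest, d))   # integrity failure -> ['orders.csv']
```

The script's output doubles as evidence: attach the verification result to the recovery ticket and the "validation performed" claim is self-documenting.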
4) Implement backup/restore and environment rebuild capabilities
CC7.5 expects capability, not just documentation 1. Confirm you have:
- Backups that support restoration of in-scope data.
- A tested process to rebuild critical compute (images, containers, infrastructure-as-code).
- A credential/key rotation procedure that can be executed during recovery.
- Logging/monitoring restoration steps (because you need telemetry during the recovery window).
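A "capability, not documentation" restore test can be expressed as a repeatable script: take a backup, simulate the loss, restore, and validate before declaring success. The sketch below uses SQLite's built-in online backup API purely as a self-contained stand-in for whatever managed backup tooling you actually run.

```python
import sqlite3

# Hedged sketch of a technical restore test: back up, destroy the primary,
# restore from the backup, and validate before declaring "recovered".
# sqlite3 is a stand-in for your real backup/restore tooling.

def snapshot(src: sqlite3.Connection) -> sqlite3.Connection:
    """Take a point-in-time copy using sqlite3's online backup API."""
    dst = sqlite3.connect(":memory:")
    src.backup(dst)
    return dst

primary = sqlite3.connect(":memory:")
primary.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
primary.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.99), (2, 4.50)])
primary.commit()

backup = snapshot(primary)             # the "backup job"
primary.execute("DROP TABLE orders")   # the incident: data loss on the primary

restored = snapshot(backup)            # the restore path under test
count = restored.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
assert count == 2, "restore validation failed"
print(f"restore validated: {count} rows")  # evidence line for the test ticket
```

The same shape works against real infrastructure: the validation query changes, but the back-up/destroy/restore/verify sequence is what auditors mean by a tested restore.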
5) Assign roles and decision rights (RACI that matches reality)
At minimum define:
- Incident Commander (overall authority).
- Recovery Lead (restoration execution owner).
- Service Owner(s) (validation and business sign-off).
- Security (re-hardening and risk acceptance).
- Communications (customer/internal updates).
- Third-party coordinator (cloud provider escalation, key vendors).
Auditors will test whether people understand these roles by interview and by ticket evidence.
6) Test recovery regularly and after meaningful change
Run at least two categories of exercises:
- Tabletop: Walk through a security incident recovery scenario and record decisions, gaps, and action items.
- Technical restore test: Perform an actual restore or rebuild in a controlled setting and document results, including evidence of validation.
Also test after major architecture changes, backup tooling changes, or identity platform migrations. The point is to prove the activity is implemented and operating 1.
7) Close the loop: lessons learned and control improvements
Every exercise and real incident should produce:
- A retrospective with root cause themes.
- A tracked remediation plan with owners and due dates.
- Runbook updates and training updates.
This is where many teams lose the audit: the gaps are known but not tracked to completion.
8) Operationalize evidence collection (don’t “recreate” later)
Build evidence capture into your workflow:
- Use tickets for every recovery test and real incident.
- Attach screenshots/log exports/restore job IDs.
- Record approvals and final “recovered” declaration in the ticket.
Daydream (or any GRC system you already run) becomes useful here by templating the evidence requests for CC7.5 and keeping the runbook versions, test records, and remediation tracking tied to the control, so audits do not become a scavenger hunt.
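The "recovered" declaration at the end of a ticket can be captured as a structured record rather than prose. This is an illustrative sketch; the field names are assumptions, not an auditor-mandated format.

```python
import json
from datetime import datetime, timezone

# Illustrative evidence record captured at test/incident close-out.
# Field names are assumptions, not a required schema.
def evidence_record(ticket_id: str, artifacts: list, approver: str) -> str:
    record = {
        "ticket": ticket_id,
        "control": "CC7.5",
        "artifacts": artifacts,              # restore job IDs, log exports, screenshots
        "recovered_declared_by": approver,   # the named role with sign-off authority
        "closed_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, indent=2)

print(evidence_record(
    "INC-1042",                                   # hypothetical ticket ID
    ["restore-job-7781", "validation-log.txt"],   # hypothetical artifact references
    "Service Owner, Payments",
))
```

Emitting one of these per exercise or incident gives the audit a consistent, machine-searchable trail instead of a folder of screenshots.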
Required evidence and artifacts to retain
Auditors typically ask for both design proof and operating proof. Keep:
- Recovery policy/standard describing recovery objectives, roles, and triggers.
- Incident response plan section that covers recovery and restoration steps (aligned to CC7.5) 1.
- System/service recovery runbooks (version history, approvals).
- Backup and restore configuration evidence (job configuration exports, retention settings, access controls).
- Restore test records (ticket, test steps, outcomes, validation evidence, issues found).
- Tabletop exercise minutes (scenario, attendees, decisions, action items).
- Real incident evidence (timeline, containment steps, recovery steps, validation checks, closure criteria).
- Post-incident reviews and remediation tracking (Jira/ServiceNow items, approvals, closure evidence).
- Training/communications for on-call responders about recovery responsibilities.
Common exam/audit questions and hangups
Expect questions like:
- “Show us a recovery runbook for a critical service and the last time you executed it.”
- “How do you validate data integrity after restore?”
- “Who has authority to declare recovery complete, and where is that recorded?”
- “What happens if your primary backup mechanism is unavailable?”
- “How do third parties impact your recovery, and what are your escalation paths?”
Frequent hangup: teams show a DR plan that covers natural disasters but not security-specific restoration (credential rotation, re-imaging, re-establishing security controls).
Frequent implementation mistakes (and how to avoid them)
- Backups exist; restores are unproven. Fix: schedule restore tests, capture evidence, and track findings to closure.
- Runbooks are high-level PDFs. Fix: make them executable, owned by service teams, and updated via change control.
- No “recovery complete” criteria. Fix: define and document validation steps and sign-off roles per service.
- Third-party dependencies are ignored. Fix: document dependencies, escalation paths, and fallback modes (degraded operations, alternate regions, alternate providers where feasible).
- Evidence is assembled at audit time. Fix: enforce ticket-based execution and attach evidence artifacts as part of the workflow.
Enforcement context and risk implications
SOC 2 is an attestation framework, not a regulator, so “enforcement” usually shows up as: failed control testing, qualified opinions, delayed reports, customer trust friction, and contractual consequences. CC7.5 gaps also increase operational risk: longer outages, incomplete restoration, secondary compromise during recovery, and inconsistent customer communications. From an assurance standpoint, auditors test whether recovery is both designed and operating 1.
Practical 30/60/90-day execution plan
Days 1–30: Establish the baseline and close obvious gaps
- Confirm in-scope services and map recovery dependencies for each.
- Draft/refresh recovery policy language and role definitions aligned to CC7.5 1.
- Identify existing runbooks; convert the most critical ones into executable step-by-step procedures.
- Set up a single evidence workflow (tickets + attachments) and assign control ownership.
Days 31–60: Prove recovery works at least once
- Run a tabletop focused on a security scenario (credential compromise, ransomware-style encryption, unauthorized production change).
- Execute at least one technical restore/rebuild test for a critical system.
- Document validation checks and “recovered” sign-off steps.
- Create remediation tickets for every gap found; assign owners.
Days 61–90: Operationalize and make it repeatable
- Expand runbooks to remaining critical services and shared infrastructure (IAM, logging, CI/CD).
- Add “recovery readiness” checks to change management for major releases (runbook updates, backup impact review).
- Perform a second exercise that includes third-party failure or degraded mode.
- Review evidence completeness against what auditors request; centralize artifacts in your GRC repository (Daydream can help by aligning tests, evidence, and remediation to CC7.5).
Frequently Asked Questions
Does CC7.5 require a formal disaster recovery (DR) plan?
CC7.5 requires recovery activities for security incidents, which often overlap with DR but are not identical 1. If your DR plan includes security-specific restoration steps and you test it, it can satisfy much of the requirement.
What counts as “activities to recover” in a SaaS environment?
Activities include restoring data, rebuilding infrastructure, rotating credentials/keys, re-establishing logging and monitoring, and validating the application is safe to resume 1. Your runbooks should show these steps, not just “restore from backup.”
How do we handle recovery when a third party is the point of failure (cloud outage, SaaS dependency)?
Document the dependency, your escalation path, and your customer-facing degraded mode or workaround. Auditors look for evidence that you planned for this dependency in recovery procedures, even if you cannot control the third party 1.
What evidence is strongest for auditors: tabletop or real restores?
Real restore/rebuild evidence is usually stronger because it demonstrates operating effectiveness. Tabletop evidence still matters because it proves decision-making, roles, and escalation paths are defined and rehearsed 1.
We had no incidents this period. How do we show CC7.5 is operating?
Use recovery tests and exercises as operating evidence: restore tests, tabletop scenarios, and documented remediation of gaps. Auditors accept testing evidence when real incidents are absent, as long as it is relevant and well-documented 1.
Who should own CC7.5: Security, IT, or Engineering?
Security typically owns the control narrative, but service owners and platform/IT teams must own the runbooks and execution because they restore systems and data. Assign a single accountable owner and document cross-functional responsibilities 1.
Footnotes
1. AICPA, 2017 Trust Services Criteria (TSC), CC7.5.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream