The entity responds to identified security incidents by executing a defined incident response program
To meet the SOC 2 requirement that “the entity responds to identified security incidents by executing a defined incident response program,” you need a written, approved incident response (IR) program and proof you follow it for real incidents. Operationalize it by defining severity, roles, response steps, communications, and evidence capture, then running and documenting incidents and tests end-to-end.
Key takeaways:
- A policy is not enough; auditors look for consistent execution evidence (tickets, timelines, approvals, communications, lessons learned).
- Define what “incident” means in your environment, then map every alert source to a documented triage and escalation path.
- Make the IR program auditable: time-stamped records, decision logs, and post-incident corrective actions tied to owners and completion.
SOC 2 CC7.4 is a response requirement, not a tooling requirement. You can have strong monitoring and still fail if the organization cannot show a repeatable way to triage, contain, eradicate, recover, and learn from security incidents. In practice, this control breaks down when response is informal (“Slack and heroics”), when severity is undefined, or when evidence is scattered across email, chat, and endpoint consoles with no single narrative.
A defined incident response program gives you a consistent playbook for handling events that matter: suspected account takeover, malware, suspicious admin activity, data exposure, third-party compromise, or availability-impacting attacks. For SOC 2, your goal is to prove two things: (1) the program exists and is fit for purpose, and (2) it operated during the audit period, including documentation of what happened, who did what, what decisions were made, and what changed afterward. This page translates the requirement that “the entity responds to identified security incidents by executing a defined incident response program” into concrete steps, artifacts, and an execution plan a CCO or GRC lead can drive quickly.
Regulatory text
Requirement (SOC 2 TSC CC7.4): “The entity responds to identified security incidents by executing a defined incident response program.”
Operator meaning: You must have a documented incident response program and execute it when security incidents are identified. “Execute” is the operative verb: your team needs to demonstrate triage through closure, with appropriate escalation, containment, communications, and post-incident follow-up, backed by retained evidence.
Plain-English interpretation
- You have a written IR program that defines: what counts as an incident, who responds, how incidents are prioritized, what actions responders take, how you communicate internally and externally, and how you track closure and lessons learned.
- When an incident occurs, you follow the program and can prove it with time-stamped records.
- You improve security after incidents via corrective actions and program updates, and you can show those changes were completed.
Who it applies to (entity and operational context)
This applies to any service organization seeking a SOC 2 report where security incidents could affect customer data, systems, or commitments. It is relevant across:
- Cloud/SaaS operations: identity compromise, insecure configuration, data exposure, ransomware, DDoS.
- On-prem or hybrid environments: endpoint malware, lateral movement, privileged account misuse.
- Heavily third-party-dependent environments: incidents originating from or impacting a third party (e.g., managed service providers, hosting, identity providers).
- Regulated customer base: customers often require prompt notification and evidence that your incident handling is disciplined and documented.
Operationally, CC7.4 touches Security/IT, Engineering, Support, Legal/Privacy (as needed), Communications, and the business owner for the impacted service.
What you actually need to do (step-by-step)
Step 1: Define “security incident” and scope it to your systems
Create a short incident definition and examples relevant to your environment. Include:
- Security events that impact confidentiality, integrity, or availability of in-scope systems.
- Near-misses that require coordinated response (for example, confirmed credential stuffing with blocked logins).

Document “in scope” system boundaries aligned to your SOC 2 system description.
Practical tip: If you can’t decide whether something is an incident, define a category like “security event requiring investigation” and document escalation criteria into “incident.”
Step 2: Establish severity levels and escalation rules
Create a severity matrix (example structure you can adapt):
- Severity criteria: customer impact, data sensitivity, privilege level involved, spread, persistence, regulatory notification triggers (if applicable), availability impact.
- Escalation: who must be paged/assigned for each severity level, who approves customer communications, when executives are informed.
Keep it executable: responders should be able to classify within minutes based on observable facts.
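One way to keep classification executable is to encode the matrix as a small rule table responders (or intake tooling) can apply to observable facts. The criteria names, thresholds, and SEV labels below are illustrative assumptions to adapt to your own matrix, not values prescribed by SOC 2:

```python
# Illustrative severity classifier. Criteria names and the SEV1-SEV3
# thresholds are assumptions to adapt, not SOC 2 requirements.
from dataclasses import dataclass


@dataclass
class TriageFacts:
    customer_impact: bool     # any customer-facing effect observed
    sensitive_data: bool      # regulated or confidential data in scope
    privileged_access: bool   # admin/root credentials involved
    spreading: bool           # evidence of lateral movement
    availability_down: bool   # service outage in progress


def classify_severity(facts: TriageFacts) -> str:
    """Return SEV1 (highest) through SEV3 from observable triage facts."""
    if facts.sensitive_data or facts.availability_down or (
        facts.privileged_access and facts.spreading
    ):
        return "SEV1"
    if facts.customer_impact or facts.privileged_access or facts.spreading:
        return "SEV2"
    return "SEV3"


# Example: blocked credential-stuffing attempt, no impact observed
print(classify_severity(TriageFacts(False, False, False, False, False)))  # SEV3
```

Encoding the rules this way also gives you something concrete to review after each incident: if responders keep overriding the table, the matrix (not the responders) needs updating.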
Step 3: Assign roles and authority (RACI)
Minimum roles to document:
- Incident Commander (IC): runs the process, keeps timeline, owns decisions.
- Security/IR lead: investigation and containment owner.
- Infrastructure/App owners: implement fixes, provide logs, deploy patches.
- Comms/Customer lead: status updates, customer-facing messaging.
- Legal/Privacy (as needed): breach assessment, notification guidance.

Define decision rights (e.g., who can take systems offline, rotate keys, disable accounts, block traffic).
Step 4: Write the incident response runbook with required phases
Your IR program should cover, at minimum:
- Detection & intake: where alerts come from (SIEM, EDR, cloud provider alerts, bug bounty, customer report).
- Triage: validate signal, scope affected assets, set severity.
- Containment: isolate hosts, disable accounts, revoke tokens, block IOCs.
- Eradication: remove persistence, patch, rotate credentials/keys, fix misconfigurations.
- Recovery: restore services, confirm normal operations, heightened monitoring.
- Post-incident: lessons learned, root cause, corrective actions, control improvements.
Tie each phase to required documentation (see “evidence” section) so the process produces audit artifacts by design.
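A simple way to make "audit artifacts by design" checkable is a phase-to-evidence map that the ticket template or a closure script enforces. The phase names mirror the list above; the artifact names are hypothetical examples, not a standard:

```python
# Hypothetical phase-to-evidence map; artifact names are illustrative
# examples of what a ticket template might require, not a standard.
REQUIRED_EVIDENCE = {
    "detection": ["alert_reference", "intake_timestamp"],
    "triage": ["severity", "scope_statement"],
    "containment": ["actions_log", "approval_record"],
    "eradication": ["change_tickets", "credential_rotation_record"],
    "recovery": ["service_validation", "monitoring_note"],
    "post_incident": ["lessons_learned", "capa_items"],
}


def missing_evidence(ticket: dict) -> list[str]:
    """List phase artifacts absent from an incident ticket's attachments."""
    attached = set(ticket.get("artifacts", []))
    return [
        f"{phase}:{item}"
        for phase, items in REQUIRED_EVIDENCE.items()
        for item in items
        if item not in attached
    ]


print(missing_evidence({"artifacts": ["alert_reference", "severity"]})[:2])
# → ['detection:intake_timestamp', 'triage:scope_statement']
```

Running a check like this at closure time turns "did we document it?" from a retrospective scramble into a gate the process enforces.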
Step 5: Implement an incident tracking mechanism that preserves evidence
Pick a system of record (ticketing system, IR module, or case management) and require that every incident has:
- A unique case/ticket ID
- Owner (IC)
- Severity
- Start time, end time
- Impact statement
- Actions taken (with timestamps)
- Approvals and communications references
- Closure criteria and sign-off
- Post-incident tasks with due dates and owners
Common audit gap: Chat-only response. If Slack is where work happens, you still need a durable record: paste key decisions and timestamps into the ticket and attach exports where appropriate.
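The mandatory-field list above can be enforced mechanically before a ticket is allowed to close. This is a minimal sketch assuming a dict-shaped incident record; the field names track the bullets above, and the workflow hook is an assumption about your ticketing system:

```python
# Sketch: validate an incident record before allowing closure.
# Field names track the bullet list above; the dict shape is illustrative.
MANDATORY_FIELDS = [
    "case_id", "owner_ic", "severity", "start_time", "end_time",
    "impact_statement", "actions_taken", "approvals_refs",
    "closure_signoff", "post_incident_tasks",
]


def can_close(incident: dict) -> tuple[bool, list[str]]:
    """Return (ok, missing_fields) for a closure attempt."""
    missing = [f for f in MANDATORY_FIELDS if not incident.get(f)]
    return (not missing, missing)


ok, missing = can_close({"case_id": "IR-2024-007", "severity": "SEV2"})
print(ok, missing[:2])  # False ['owner_ic', 'start_time']
```

Most ticketing systems can express the same rule natively (required fields on a "Close" transition); the point is that closure without the evidence set should be impossible, not discouraged.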
Step 6: Pre-stage communications and notification workflows
Even without a legal breach requirement in CC7.4, SOC 2 auditors expect communication planning as part of a defined program. Prepare:
- Internal comms templates (engineering, leadership, support)
- Customer notification decision workflow (who decides, what inputs are needed)
- Status page process (if applicable)
- Third-party coordination steps (cloud provider, MSSP, impacted vendors)
Step 7: Train, test, and prove execution
You need evidence that people know the process and that it works.
- Train responders and on-call staff on intake, escalation, and documentation expectations.
- Run tabletop exercises and record outcomes (scenario, participants, gaps, action items).
- For real incidents, ensure the incident ticket tells a complete story end-to-end.
Step 8: Close the loop with corrective actions (CAPA)
For each material incident, require:
- Root cause analysis proportionate to severity
- Corrective actions tied to control improvements (e.g., add detection rule, tighten IAM, rotate secrets management pattern, improve logging)
- Verification of completion and effectiveness (e.g., screenshots, change tickets, PR links)
Required evidence and artifacts to retain
Auditors typically ask for evidence that is both designed (program exists) and operating (program executed). Retain:
Program design artifacts
- Incident Response Policy and/or Incident Response Plan (approved, versioned)
- Severity classification matrix and escalation chart
- Roles/RACI and on-call roster or assignment method
- Communication plan and templates
- Tooling/process description for incident intake and tracking
Operating effectiveness artifacts (for sampled incidents)
- Incident tickets/cases with timestamps and actions
- Supporting logs or exports (SIEM alerts, EDR detections, cloud audit logs) referenced by the ticket
- Decision log (who approved containment steps, customer comms, downtime decisions)
- Evidence of containment/eradication actions (change tickets, configuration diffs, PRs)
- Post-incident report, lessons learned, and corrective action tracking through closure
- Tabletop exercise records and resulting remediation items
Retention note: Keep evidence in a tamper-resistant or access-controlled repository, and limit edit rights to preserve integrity.
Common exam/audit questions and hangups
Use these as a readiness checklist:
- “Show me your incident response program.” They will look for an approved document, not a draft.
- “How do you define an incident versus an event?” Vague definitions trigger sampling expansion.
- “Walk me through one incident from detection to closure.” Auditors want a coherent narrative with timestamps.
- “Who was the Incident Commander and how was severity assigned?” Missing ownership is a frequent finding.
- “Where are your lessons learned and what changed afterward?” Post-incident actions that never close are a red flag.
- “How do you ensure third-party-related incidents are handled?” Expect questions if critical services are outsourced.
Frequent implementation mistakes and how to avoid them
| Mistake | Why it fails CC7.4 | Fix |
|---|---|---|
| “We have an IR policy” but no real incident records | CC7.4 requires execution evidence | Require every incident to have a ticket with a minimum evidence checklist |
| No severity criteria | Leads to inconsistent escalation and weak governance | Publish a severity matrix and require classification at intake |
| Actions live in chat; tickets are empty | No durable audit trail | Make the ticket the system of record; copy key decisions with timestamps |
| No defined communications approvals | Risk of inconsistent customer messaging | Define who approves external comms and what triggers review |
| Post-incident tasks not tracked to completion | No demonstrated improvement loop | Track CAPA items in the same system as other security work and report closure |
Enforcement context and risk implications
SOC 2 is an attestation framework; public “enforcement” is not typically issued by AICPA for individual control failures. Your practical risk is commercial and contractual: customers and procurement teams treat weak incident response as a high-risk signal, and auditors may report exceptions that delay or qualify your SOC 2 opinion if execution evidence is missing. CC7.4 failures also increase downstream risk in breach response because you cannot prove timeliness, accountability, or completeness of actions taken.
Practical 30/60/90-day execution plan
Day 0–30: Define and publish the program
- Draft and approve an IR policy/plan, including incident definition and scope aligned to your SOC 2 boundary.
- Create severity matrix, escalation rules, and RACI (name roles, not just teams).
- Select the incident system of record and define mandatory fields and closure criteria.
- Build a lightweight evidence checklist responders must attach or reference per incident.
Deliverables: approved IR plan; severity matrix; RACI; incident ticket template; evidence checklist.
Day 31–60: Make it operational
- Train responders, on-call engineers, and support leads on intake, escalation, and documentation.
- Run at least one tabletop exercise and record gaps and action items.
- Tune intake sources: make sure alerts and reports reliably open/route into your case workflow.
- Start monthly reporting: incident counts by severity, time-to-triage notes (qualitative if you don’t track metrics yet), open CAPA items.
Deliverables: training attendance records; tabletop report; updated playbooks; first monthly IR summary.
Day 61–90: Prove consistency and close the loop
- Validate evidence quality by doing an internal “mock audit” on recent incidents (pick a few and test if the story is complete).
- Require post-incident reviews for defined severities and track CAPA completion.
- Update the IR plan based on tabletop and real incidents; document version changes and approvals.
- If you use third parties in response (MSSP, forensics), document engagement procedures and evidence handoffs.
Deliverables: mock-audit checklist and results; closed CAPAs; updated IR plan version; incident samples ready for auditors.
How Daydream fits (practically)
Teams commonly struggle with “evidence drift,” where response happens across tools and the audit trail is assembled late. Daydream can help by mapping CC7.4 to a concrete evidence list, standardizing the incident ticket template, and maintaining an audit-ready binder of incidents, post-incident reviews, and tabletop outputs so you can answer auditor requests quickly without reconstructing timelines.
Frequently Asked Questions
Do we need a dedicated security team to satisfy this requirement?
No. You need defined roles and a repeatable program that the organization follows. In smaller companies, roles can be part-time, but the responsibilities and authority still need to be explicit and evidenced in incident records.
What counts as “executing” the incident response program?
Execution means the incident followed your documented steps in a traceable way: triage, severity assignment, containment/eradication/recovery actions, communications decisions, and closure with lessons learned as required. Auditors look for a complete, time-stamped narrative in a system of record.
If we had no incidents this period, can we still pass CC7.4?
You still need the defined program, training, and testing evidence (like tabletop exercises) to show readiness. Auditors may also sample “security events” or drills to assess whether the program would operate as written.
Can we use Slack as our primary incident process?
You can coordinate in Slack, but you still need a durable case record with key decisions, timestamps, and artifacts. Export or link relevant threads and summarize them in the incident ticket so the audit trail survives chat retention limits.
How do we handle incidents caused by a third party?
Your IR program should include coordination steps: how you engage the third party, what evidence you request, how you assess impact to your system, and who owns customer communications. Document the handoffs and decisions in your incident case, not only in the third party’s portal.
What is the minimum post-incident documentation we should require?
For any incident that meets your defined threshold, require an impact statement, root cause (as known), actions taken with timestamps, and corrective actions assigned to owners with tracked completion. Keep it consistent so sampled incidents don’t vary in quality.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream