TSC-CC7.4 Guidance

TSC-CC7.4 requires you to respond to identified security incidents by executing a defined incident response (IR) program. To operationalize it fast, publish an approved IR plan, assign on-call roles and escalation paths, run incidents through a consistent workflow (detect → triage → contain → eradicate → recover → learn), and retain evidence that the program ran as designed during the audit period.

Key takeaways:

  • Your auditor will look for a defined IR program plus proof it was executed on real events and tested.
  • Evidence beats intent: tickets, timelines, approvals, communications, and post-incident reviews matter more than policy text.
  • Build repeatability: severity definitions, decision criteria, and an audit-ready incident record for every material event.

TSC-CC7.4 is one of the fastest ways SOC 2 auditors distinguish “we have a plan” from “we can run the plan under pressure.” The criterion is simple on paper: respond to identified security incidents by executing a defined incident response program. In practice, teams fail this requirement for predictable reasons: the IR plan is generic, roles are unclear, incident records are inconsistent, or evidence is scattered across chat, email, and tools with no defensible timeline.

This page is written for a Compliance Officer, CCO, or GRC lead who needs to translate TSC-CC7.4 into operational controls that engineering, security, IT, and support teams will actually follow. The goal is to make incident handling consistent and auditable without slowing response. You’ll find step-by-step implementation guidance, the minimum evidence set auditors request, common hangups during SOC 2 fieldwork, and a pragmatic execution plan you can run as a project.

Source of record for this requirement: AICPA Trust Services Criteria 2017, TSC-CC7.4.[1]

Regulatory text

Requirement (excerpt): “The entity responds to identified security incidents by executing a defined incident response program.”[1]

What the operator must do

You need an incident response program that is (1) defined, (2) in scope for the systems covered by your SOC 2 boundary, and (3) executed when incidents are identified. “Defined” means documented and approved. “Executed” means your incident workflow is actually used, with timestamps and accountability, for real incidents and for tests (tabletops or simulations) that demonstrate readiness.

A practical reading for audit: if an incident happens, your team can show who did what, when, under which procedure, what decisions were made, what communications occurred, and what improvements were captured afterward.

Plain-English interpretation of the requirement

TSC-CC7.4 expects you to run incident response like an operational process, not an ad hoc scramble. When security events are detected, you classify them, coordinate response, contain impact, recover services, and document results. Auditors usually test this by sampling incidents during the audit period and tracing them back to your IR program artifacts (policy, plan, runbooks, tickets, communications, and post-incident reviews).
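The stage-by-stage workflow above can be expressed as a small state machine so tickets cannot skip required steps. A minimal Python sketch; the stage names mirror the detect → triage → contain → eradicate → recover → learn flow, and everything else (the early-close path for false positives, the function name) is an illustrative assumption, not something TSC-CC7.4 prescribes:

```python
from enum import Enum

class Stage(Enum):
    DETECTED = "detected"
    TRIAGED = "triaged"
    CONTAINED = "contained"
    ERADICATED = "eradicated"
    RECOVERED = "recovered"
    CLOSED = "closed"  # closed only after the post-incident review

# Allowed forward transitions; anything else is rejected so the
# ticket timeline always reflects the documented workflow.
ALLOWED = {
    Stage.DETECTED: {Stage.TRIAGED},
    Stage.TRIAGED: {Stage.CONTAINED, Stage.CLOSED},  # false positives may close at triage
    Stage.CONTAINED: {Stage.ERADICATED},
    Stage.ERADICATED: {Stage.RECOVERED},
    Stage.RECOVERED: {Stage.CLOSED},
    Stage.CLOSED: set(),
}

def advance(current: Stage, target: Stage) -> Stage:
    """Move an incident to the next stage, enforcing the defined order."""
    if target not in ALLOWED[current]:
        raise ValueError(f"Illegal transition: {current.value} -> {target.value}")
    return target
```

Encoding the order this way gives auditors exactly the trace they sample for: every stage change is a deliberate, timestamped transition rather than a free-text status edit.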

Who it applies to (entity and operational context)

Applies to: Any organization undergoing a SOC 2 audit that includes the Common Criteria.[1]

Operationally, it applies to:

  • Security/IR team (or whoever acts as IR lead in smaller orgs)
  • IT operations and cloud/platform engineering
  • Application engineering (for code-level containment and fixes)
  • Customer support and success (customer-facing comms and triage signals)
  • Legal/privacy (as needed for contractual or regulatory notifications)
  • Third parties that monitor or host your systems (MSSPs, cloud providers, SaaS platforms) when they are part of your detection and response chain

Boundary note: Your IR program must cover the in-scope systems and data defined in your SOC 2 description. If you outsource monitoring or incident handling steps to a third party, your program still needs clear ownership, handoffs, and evidence you oversaw the response.

What you actually need to do (step-by-step)

1) Define the incident response program (documents that matter)

Create or refresh these artifacts and get them approved:

  • Incident Response Policy: purpose, scope, definitions, governance, and authority to act (who can isolate systems, revoke credentials, block traffic).
  • Incident Response Plan / Playbook: the workflow stages and required actions per stage.
  • Severity model: criteria for Sev levels (impact, data sensitivity, system criticality, exploitability), including escalation triggers.
  • Roles & responsibilities (RACI): Incident Commander, Communications Lead, Scribe, Forensics/Technical Lead, and Executive Sponsor.
  • Communication procedures: internal channels, stakeholder lists, customer communications path, and how you handle privileged information.

Keep the documents short enough that responders will follow them during an event.
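The severity model in particular benefits from living as data next to the playbook, so classification and paging are deterministic rather than judgment calls made mid-incident. A minimal sketch, assuming made-up level names, criteria, and paging targets; tune all of these to your own matrix:

```python
# Illustrative severity matrix: each level lists the conditions that trigger
# it and who must be paged. The criteria strings and roles are assumptions.
SEVERITY_MATRIX = {
    "SEV1": {"criteria": {"confirmed data exposure", "production-wide outage"},
             "page": ["incident_commander", "executive_sponsor"]},
    "SEV2": {"criteria": {"single-service compromise", "active exploitation"},
             "page": ["incident_commander"]},
    "SEV3": {"criteria": {"suspicious activity, no confirmed impact"},
             "page": ["on_call_engineer"]},
}

def classify(observed: set[str]) -> str:
    """Return the highest severity whose criteria match the observations."""
    for level in ("SEV1", "SEV2", "SEV3"):
        if observed & SEVERITY_MATRIX[level]["criteria"]:
            return level
    return "SEV3"  # default low, with rationale documented in the ticket
```

Because the matrix is data, the same file can be rendered into the playbook and referenced in the ticket's "severity rationale" field, which closes the "inconsistent application" gap auditors probe.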

2) Implement an end-to-end incident workflow in your tools

Pick one system of record for incidents (common choices: Jira, ServiceNow, a dedicated IR platform, or a ticketing system). Configure it so every security incident record includes:

  • Unique incident ID
  • Detection source (SIEM alert, user report, third-party notice)
  • Triage notes and classification (incident vs. non-incident)
  • Severity and rationale
  • Timeline fields (detected, acknowledged, contained, resolved)
  • Containment and eradication actions (what changed, by whom)
  • Recovery validation steps
  • Customer impact and communications log (if applicable)
  • Root cause summary and corrective actions
  • Approvals for major actions (where required)

This is the fastest way to satisfy “executing a defined program” because it bakes the program directly into day-to-day operations.

3) Build monitoring, review, and escalation into operations

TSC-CC7.4 is triggered by “identified” incidents. That means you need dependable pathways for identification and escalation:

  • Define what counts as a security incident vs. a reliability incident vs. a false positive.
  • Create an intake channel (SOC queue, email alias, ticket type, or chatbot form) for employee reports.
  • Ensure on-call coverage and escalation rules exist so incidents don’t sit unacknowledged.
  • Add management review for material incidents (security leadership sign-off on closure, or a weekly incident review meeting).

4) Train the people who will execute the program

Auditors may interview staff. Train for:

  • How to declare an incident
  • Who can escalate and assign severity levels
  • How to use the incident ticket template
  • What evidence must be captured during response
  • When to involve privacy/legal and customer comms

Use short role-based training (incident commander vs. engineer vs. support).

5) Test the program and track corrective actions

Run incident response exercises and record outcomes. Testing should prove:

  • People know their roles
  • Escalation and communications work
  • The workflow produces complete incident records
  • Lessons learned become tracked remediation items and are followed to closure

6) Operationalize third-party dependencies

If a third party provides monitoring, alerting, hosting, or managed detection:

  • Define handoffs (who opens the incident ticket, who is Incident Commander).
  • Require incident notifications and timelines contractually where possible.
  • Preserve third-party communications (tickets, emails, portal exports) as audit evidence.

Where Daydream fits (earned mention)

Most TSC-CC7.4 failures are evidence failures: the response happened, but you can’t reconstruct it cleanly. Daydream can act as the compliance layer that maps your incident records to SOC 2 expectations, standardizes the evidence checklist per incident, and keeps artifacts (tickets, chat transcripts, postmortems, approvals) tied to the control for audit retrieval.

Required evidence and artifacts to retain

Keep evidence in an audit-ready folder or GRC system, organized by audit period:

Program definition

  • Approved IR policy and IR plan/playbook (with version history and approval)
  • Severity matrix and escalation procedures
  • IR roles/RACI and on-call documentation

Execution evidence (for sampled incidents)

  • Incident tickets with full timeline and actions
  • Alerts/logs referenced in triage (screenshots or exports as needed)
  • Containment/eradication change records (change tickets, PRs, configuration diffs)
  • Internal comms (incident channel transcript export or summary with timestamps)
  • External comms (customer notifications if applicable, third-party notices)
  • Post-incident review/postmortem and corrective action tracking

Testing evidence

  • Tabletop or simulation records: scenario, participants, outcomes, action items, closure proof

Governance

  • Periodic review meeting notes for incidents and trends
  • Metrics dashboards if you maintain them (keep qualitative if you cannot substantiate numbers)

Common exam/audit questions and hangups

Auditors commonly probe these areas:

  • “Show me your IR program.” They expect a defined plan, not a slide deck.
  • “Walk me through an incident from start to finish.” They look for adherence to your documented workflow.
  • “How do you decide severity?” Missing criteria or inconsistent application creates findings.
  • “How do you know incidents are identified?” If detection is informal, auditors question completeness.
  • “Where is the evidence?” Scattered evidence across tools without a single incident record causes delays and exceptions.
  • “How do third parties fit in?” If monitoring is outsourced, they still expect your oversight and documented handoffs.

Frequent implementation mistakes and how to avoid them

  • IR plan exists but nobody uses it. Why it fails: “defined” without “executed.” Fix: put the workflow into the ticket template and require it for closure.
  • Incidents handled in chat only. Why it fails: no durable audit trail. Fix: require an incident ticket and link the chat export to it.
  • No consistent severity model. Why it fails: inconsistent escalation and comms. Fix: publish severity criteria and require rationale in the ticket.
  • Postmortems done “sometimes.” Why it fails: no evidence of a learning loop. Fix: define when a PIR is required and track action items to closure.
  • Third-party incidents not captured. Why it fails: blind spots in the incident universe. Fix: log third-party notices as incidents (or “security events”) with a documented assessment.
  • Testing is informal. Why it fails: weak readiness evidence. Fix: document exercises and retain artifacts like any other control test.

Enforcement context and risk implications

SOC 2 is an audit framework, not a regulatory enforcement regime.[1] The practical risk is commercial and operational: inability to demonstrate controlled incident response can drive SOC 2 exceptions, delay reports, trigger customer contract issues, and increase the impact of incidents due to slower containment and unclear decision-making. Treat TSC-CC7.4 as a maturity floor for responding under pressure with proof.

Practical 30/60/90-day execution plan

First 30 days: establish the minimum audit-ready IR program

  • Confirm SOC 2 boundary and incident scope (systems, data, environments).
  • Publish IR policy + IR plan/playbook with approval.
  • Define severity model and escalation paths.
  • Implement an incident ticket type/template and make it the system of record.
  • Stand up an incident comms channel and define the scribe process (who captures the timeline).
  • Identify third-party monitoring/hosting dependencies and document handoffs.

Days 31–60: make it operational and repeatable

  • Train responders and stakeholders (security, IT, engineering, support).
  • Run a tabletop exercise; produce a post-incident report and remediation tickets.
  • Start a weekly or biweekly incident review (even if there are few incidents).
  • Add closure gates: no incident ticket closes without severity rationale, timeline, and corrective actions (as applicable).
  • Validate evidence capture: pick one recent incident and ensure you can reconstruct it end-to-end.
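The closure gates described above reduce to a simple pre-close check. A hypothetical sketch; the gate conditions and field names are assumptions drawn from the checklist earlier on this page:

```python
def can_close(ticket: dict) -> tuple[bool, list[str]]:
    """Closure gate: an incident ticket may close only when the
    audit-relevant fields are present. Returns (ok, blocking reasons)."""
    reasons = []
    if not ticket.get("severity_rationale"):
        reasons.append("missing severity rationale")
    if not all(ticket.get(t) for t in ("detected_at", "acknowledged_at",
                                       "contained_at", "resolved_at")):
        reasons.append("incomplete timeline")
    # Assumption: SEV1/SEV2 are the "material" tiers requiring corrective actions.
    if ticket.get("severity") in ("SEV1", "SEV2") and not ticket.get("corrective_actions"):
        reasons.append("no corrective actions for a material incident")
    return (not reasons, reasons)
```

Wiring this into the ticket workflow means every closed incident in the audit period is, by construction, a complete sample for the auditor.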

Days 61–90: prove effectiveness for SOC 2 fieldwork

  • Run a second exercise or simulation with a different scenario.
  • Review all incident records in the audit period for completeness and consistency.
  • Sample-check third-party notices and ensure they are assessed and logged.
  • Package evidence for likely auditor sampling: program docs, training proof, incident samples, exercise artifacts, corrective action tracking.
  • In Daydream (or your GRC system), map artifacts to the control and create a ready-to-export evidence set per incident.

Frequently Asked Questions

What counts as an “identified security incident” for TSC-CC7.4?

Use your documented definition. In practice, treat anything that could compromise confidentiality, integrity, or availability of in-scope systems as incident candidates, then document the triage decision and outcome in the system of record.[1]

Do we need a formal postmortem for every incident?

Your program should define when a post-incident review is required (for example, based on severity or customer impact). Auditors mainly want to see that material incidents produce documented lessons learned and tracked corrective actions.[1]

We use a third party for managed detection and response. Are we covered?

You still need a defined program and evidence you executed it. Document handoffs, ensure incidents are logged in your system of record, and retain third-party tickets/notifications linked to your incident record.[1]

Can we treat reliability outages as security incidents?

You can, but be consistent. If you mix reliability and security incident processes, define classification rules so security incidents are not lost inside generic outage handling.[1]

What will auditors sample to test TSC-CC7.4?

Typically incident tickets from the audit period plus evidence of an exercise/test. Prepare to show the full timeline, decisions, communications, and post-incident actions for each sampled item.[1]

How do we show “execution” if we had no security incidents during the period?

Run and document a tabletop or simulation and show monitoring/escalation processes that would identify incidents. Make sure the exercise follows the same workflow and evidence requirements as real incidents.[1]

Footnotes

  1. AICPA Trust Services Criteria 2017

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream