The entity evaluates security events to determine whether they could or have resulted in failures
To meet the TSC-CC7.3 requirement ("the entity evaluates security events to determine whether they could or have resulted in failures"), you must run a documented, repeatable triage and analysis process that reviews detected security events, determines likely impact to your systems and service commitments, and records whether the event caused (or could have caused) a control or system failure. Evidence matters as much as the analysis.
Key takeaways:
- Define what “security event” and “failure” mean in your environment, then map events to failure modes and service commitments.
- Operationalize with an event evaluation workflow: intake → enrichment → classification → impact analysis → escalation → closure.
- Retain audit-ready artifacts: alerts, triage notes, impact determinations, incident linkage, and proof of timely review.
SOC 2 auditors expect more than “we have a SIEM.” TSC-CC7.3 is about proving that your organization reviews security signals and makes a defensible determination about business and control impact. A security event can be minor (blocked malware) or ambiguous (suspicious admin login). The requirement is satisfied when you can show that each meaningful event is evaluated for whether it could have resulted in a failure or did result in one, and that your team escalates accordingly.
For operators, the hard part is consistency: the same kinds of alerts should be handled the same way, across shifts and analysts, with the same minimum documentation. The second hard part is scoping “failure.” In SOC 2 terms, failure is not only an outage; it can be a failure of a security control (for example, unauthorized access that bypassed intended access restrictions) or a failure to meet a stated service commitment in your SOC 2 description.
This page gives requirement-level implementation guidance you can put into production quickly: roles, workflow steps, decision criteria, evidence to retain, and a 30/60/90 execution plan that auditors can test.
Regulatory text
Requirement (TSC-CC7.3): “The entity evaluates security events to determine whether they could or have resulted in failures.” 1
What the operator must do:
You need a defined process where security events are (1) reviewed, (2) analyzed for potential/actual impact, and (3) concluded with a documented determination about whether a failure occurred or was plausible. The determination must drive action: escalation to incident response, problem management, customer impact workflows, or control remediation.
Plain-English interpretation of the requirement
- Security events are signals that something security-relevant happened: alerts, detections, suspicious activity, policy violations, or anomalies from tools or people.
- Evaluate means more than “acknowledge.” It includes enrichment (context), classification (what is it), and analysis (so what).
- Failures are conditions where your system, controls, or commitments did not perform as intended. In practice, auditors look for alignment to:
- Your availability/security commitments described in your SOC 2 scope narrative
- Material control breakdowns (for example, access control not enforced)
- System failures that degrade confidentiality, integrity, or availability
A clean way to operationalize this: every security event that meets your triage threshold must end with a recorded impact decision:
1) no plausible failure, 2) could have resulted in failure (near miss), or 3) resulted in failure (incident/problem).
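The three dispositions above work best as a controlled vocabulary rather than free text. A minimal Python sketch (names are illustrative, not from the requirement text) shows how the recorded decision can drive escalation mechanically:

```python
from enum import Enum

class FailureAssessment(Enum):
    """Controlled vocabulary for the required impact decision (illustrative values)."""
    NO_PLAUSIBLE_FAILURE = "no_plausible_failure"          # explain why in the rationale field
    COULD_HAVE_FAILED = "could_have_resulted_in_failure"   # near miss: note the safeguard that held
    FAILURE_OCCURRED = "resulted_in_failure"               # must link an incident/problem record

def requires_incident(assessment: FailureAssessment) -> bool:
    # Only a confirmed failure forces an incident record; near misses
    # route to problem management or control improvement instead.
    return assessment is FailureAssessment.FAILURE_OCCURRED
```

Storing the enum value (not prose) in your ticketing system makes auditor sampling and internal metrics straightforward.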
Who it applies to (entity and operational context)
This applies to service organizations undergoing a SOC 2 examination, across production systems and supporting environments in scope. Practically, it applies to teams and workflows that touch detection, response, and operations, including:
- Security Operations (SOC), on-call security, and incident response
- IT operations / SRE / platform teams (especially if they receive security alerts)
- Application owners who validate business impact
- GRC/Compliance teams who define evidence requirements and run control testing
It also applies to security events originating from third parties that can impact your services (for example, a cloud provider alert about exposed keys). Even if the third party caused the event, you still must evaluate whether your system experienced a failure condition and document your decision.
What you actually need to do (step-by-step)
1) Define event sources and minimum triage threshold
Create an inventory of “security event inputs,” such as:
- SIEM detections, EDR alerts, IDS/WAF findings
- Cloud security posture alerts, IAM anomaly alerts
- Ticketed reports from employees or customers
- Third-party notifications that affect your environment
Set a rule for which items require evaluation. Avoid “everything is evaluated” unless you can prove it operationally. A common approach is: evaluate all events that are high/medium severity (as defined by your internal rubric) and any event that touches in-scope production data or privileged access.
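The common threshold rule described above can be expressed as a single predicate, which keeps the triage decision testable and consistent across analysts. This is a sketch under the assumptions stated in the text (severity rubric plus production-data and privileged-access flags), not a prescribed implementation:

```python
def requires_evaluation(severity: str,
                        touches_prod_data: bool,
                        privileged_access: bool) -> bool:
    """Triage threshold sketch: evaluate all high/medium-severity events,
    plus any event touching in-scope production data or privileged access."""
    return (severity in {"high", "medium"}
            or touches_prod_data
            or privileged_access)
```

Encoding the rule this way also gives you something to point at when an auditor asks why a specific low-severity alert was (or was not) evaluated.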
Artifact: Event source list + triage threshold statement (in your SOP/runbook).
2) Create a documented event evaluation workflow (your “triage to determination” path)
Your workflow should be explicit enough that two analysts reach similar outcomes. Minimum stages:
- Intake: alert/event is received and logged
- Enrichment: add context (asset owner, environment, identity, timeframe, related alerts)
- Classification: categorize (malware, suspicious login, policy violation, data exfil suspicion, misconfiguration)
- Impact analysis: determine whether a failure could have occurred or did occur
- Escalation/containment: if warranted, open an incident and start response
- Closure: record final disposition, rationale, and links to evidence
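The six stages above can be modeled as a simple state machine that refuses to close a case without the audit-critical fields. This is a minimal sketch (field and stage names are assumptions mirroring the workflow in this page):

```python
from dataclasses import dataclass
from typing import Optional

STAGES = ["intake", "enrichment", "classification",
          "impact_analysis", "escalation", "closure"]

@dataclass
class EventCase:
    alert_id: str
    stage: str = "intake"
    classification: Optional[str] = None
    failure_assessment: Optional[str] = None
    rationale: Optional[str] = None

    def advance(self) -> None:
        """Move to the next workflow stage, blocking closure unless a
        failure assessment and rationale have been recorded."""
        nxt = STAGES[STAGES.index(self.stage) + 1]
        if nxt == "closure" and not (self.failure_assessment and self.rationale):
            raise ValueError("cannot close without a failure assessment and rationale")
        self.stage = nxt
```

The point of the guard is the auditability requirement: two analysts cannot reach closure by different paths, because the system enforces the minimum documentation.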
Tip for auditability: Require a “failure assessment” field in your ticketing or SIEM case management system with controlled values:
- No failure possible (explain why)
- Could have resulted in failure (near miss; explain safeguard that prevented it)
- Failure occurred (link incident/problem record)
3) Define “failure” in operational terms (map events to failure modes)
Build a short decision matrix that maps security event types to potential failures. Example structure:
| Event type | “Failure could have occurred” indicators | “Failure occurred” indicators | Required escalation |
|---|---|---|---|
| Suspicious privileged login | Privileged account targeted; MFA challenge occurred; access blocked | Privileged access gained without authorization; access policy bypass | Incident + access review |
| Malware detected on server | Quarantined; no execution; no persistence | Execution confirmed; lateral movement suspected | Incident + forensic steps |
| Cloud storage exposure alert | Exposure detected but blocked by policy; no access logs | Public access confirmed; sensitive data accessed | Incident + customer impact review |
Keep this matrix aligned with your environment and commitments. Auditors will accept reasonable definitions if they are documented and consistently applied.
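One way to keep the matrix consistently applied is to encode it in the tooling itself. The sketch below is a hypothetical encoding of the example table (event types, indicator strings, and escalation actions are illustrative only):

```python
# Hypothetical encoding of the failure decision matrix; align the keys
# and indicators with your own environment and commitments.
FAILURE_MATRIX = {
    "suspicious_privileged_login": {
        "failure_occurred_if": {"unauthorized privileged access", "access policy bypass"},
        "escalation": "incident + access review",
    },
    "malware_on_server": {
        "failure_occurred_if": {"execution confirmed", "lateral movement suspected"},
        "escalation": "incident + forensic steps",
    },
    "cloud_storage_exposure": {
        "failure_occurred_if": {"public access confirmed", "sensitive data accessed"},
        "escalation": "incident + customer impact review",
    },
}

def escalation_for(event_type: str, observed: set) -> "str | None":
    """Return the required escalation if any 'failure occurred' indicator
    was observed; None means near miss or no failure (record rationale)."""
    entry = FAILURE_MATRIX[event_type]
    if observed & entry["failure_occurred_if"]:
        return entry["escalation"]
    return None
```

Because the matrix lives in one place, updating a definition (for example, what counts as confirmed exposure) updates every analyst's decision at once.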
4) Assign roles and escalation paths
Document who does what:
- Tier 1 triage: acknowledges and enriches, applies initial classification
- Tier 2/IR lead: confirms failure determination and starts incident response
- System owner: validates service impact and assists remediation
- GRC: ensures evidence is captured, samples are reviewable, control operates as written
Define how you handle after-hours alerts (on-call rotation, paging thresholds) and how decisions get approved (for example, IR lead sign-off on “failure occurred”).
5) Make the evaluation measurable (for internal control operation)
Auditors test operating effectiveness by sampling. You need consistency:
- A standard ticket template (fields required before closure)
- A checklist for analysts to complete (what logs to check, what context to capture)
- A review/QA step for a subset of events (manager or IR lead review)
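For the QA step, a reproducible random sample of closed events is easy to defend during testing. A small sketch (sample rate and seeding approach are assumptions, not a SOC 2 mandate):

```python
import random

def qa_sample(closed_event_ids: list, rate: float = 0.1, seed: int = None) -> list:
    """Pick a subset of closed events for manager/IR-lead QA review.
    A fixed seed makes the selection reproducible for the audit trail."""
    rng = random.Random(seed)
    k = max(1, round(len(closed_event_ids) * rate))
    return sorted(rng.sample(closed_event_ids, k))
```

Recording the seed and the resulting sample alongside the QA notes lets you show exactly how the reviewed subset was chosen.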
If you struggle to produce coherent evidence across tools (SIEM, ticketing, cloud consoles), Daydream can help by turning event evaluation into a control workflow with required fields, evidence capture prompts, and an audit-ready export aligned to SOC 2 testing.
Required evidence and artifacts to retain
Auditors typically ask for evidence that shows the event happened, was evaluated, and resulted in a documented determination. Retain:
- Event records: SIEM/EDR alert screenshots or exports, including timestamps and identifiers
- Case/ticket showing:
- Severity/priority and who handled it
- Enrichment notes (asset, user, environment, related alerts)
- Failure assessment (could have / did result in failure) with rationale
- Escalation decision (incident opened or not) and why
- Closure approval/review (if required)
- Linked evidence: relevant logs, query outputs, IAM history, endpoint telemetry
- Incident records for events determined to be failures (incident timeline, containment, RCA)
- Runbooks/SOPs for event evaluation and definitions (including the failure matrix)
- Training/role assignments showing staff are prepared to execute the process
Keep evidence in a way that supports sampling: auditors will request a set of events over the audit period and expect you to produce complete packets quickly.
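A quick way to pressure-test your own sampling readiness is a packet completeness check. The field names below are an illustrative checklist drawn from the artifact list above, not a required schema:

```python
# Illustrative minimum fields for one sampled event's evidence packet.
REQUIRED_PACKET_FIELDS = {"alert_record", "enrichment_notes",
                          "failure_assessment", "rationale", "closure_review"}

def packet_gaps(packet: dict) -> set:
    """Return the missing or empty fields for one event packet;
    an empty set means the packet is audit-ready under this checklist."""
    present = {k for k, v in packet.items() if v}
    return REQUIRED_PACKET_FIELDS - present
```

Running this across a period's closed events before the auditor does tells you where evidence is scattered or incomplete.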
Common exam/audit questions and hangups
Expect questions like:
- “Show me how an alert becomes an evaluated event, and where the failure determination is recorded.”
- “What do you consider a ‘failure’ for this control? Is it only outages, or control failures too?”
- “Provide a sample of events and demonstrate consistent triage and rationale.”
- “How do you ensure events from third parties (cloud providers, MSSPs) are still evaluated for your system impact?”
- “What happens when an analyst closes an event incorrectly? Is there a QA process?”
Hangups that slow audits:
- Evidence split across tools with no linking identifier
- Missing rationale (“closed as benign” with no explanation)
- No documented definition of failure, so decisions look subjective
Frequent implementation mistakes and how to avoid them
- Treating "event evaluation" as "incident response only." Fix: Evaluate events before they become incidents. Track near misses explicitly.
- No controlled vocabulary for outcomes. Fix: Force standardized dispositions and a required failure assessment field.
- Closing tickets without proof of review. Fix: Require minimum artifacts (log references, screenshots, queries) for specific event categories.
- Inconsistent severity model. Fix: Publish a severity rubric tied to business impact and in-scope systems. Train analysts and test for consistency.
- Ignoring third-party originated events. Fix: Create an intake path for third-party advisories and notifications, and document your impact determination.
Enforcement context and risk implications
SOC 2 is an attestation framework, not a regulator with direct fine schedules in this requirement text. Your practical risk is audit qualification, exceptions, and trust impact if you cannot show consistent evaluation and documented determinations 1. Operationally, weak event evaluation increases the chance that early indicators of compromise are misclassified as noise and never escalated.
Practical 30/60/90-day execution plan
First 30 days: define, standardize, and start capturing evidence
- Draft the event evaluation SOP: sources, thresholds, workflow stages, required fields.
- Define “failure” for your environment and publish the failure decision matrix.
- Update ticket templates/SIEM case fields to include “failure assessment” and rationale.
- Run a tabletop: process 5–10 historical events and see if the workflow produces audit-ready packets.
Days 31–60: operationalize across teams and add QA
- Train SOC/on-call staff and system owners on the workflow and definitions.
- Implement an escalation path and decision authority (who can declare “failure occurred”).
- Start a light QA review for a subset of closed events; document corrections and coaching.
- Confirm evidence retention: where screenshots/log exports live, how long they’re retained, and how they’re linked to tickets.
Days 61–90: prove operating effectiveness and close gaps
- Perform an internal sampling exercise: select events across the period and test whether each has complete evidence and a defensible failure determination.
- Tune thresholds to avoid alert overload that prevents real evaluation.
- Update your SOC 2 control narrative to match reality (tools used, roles, workflow, evidence).
- If evidence is still scattered, implement a control operations layer (for example, Daydream) to standardize collection, enforce required fields, and speed auditor requests.
Frequently Asked Questions
What counts as a “security event” for TSC-CC7.3?
Any security-relevant signal that meets your documented triage threshold counts, including tool-generated alerts and human reports. Define sources and thresholds in your SOP so auditors can see what you evaluate and why.
Do we have to evaluate every single alert from our tools?
No, but you must clearly define which events require evaluation and show the process runs consistently. If you claim “all alerts are evaluated,” auditors can sample anywhere and expect complete evidence each time.
How do we define “failure” without over-scoping it?
Tie failure to your in-scope system behavior and control objectives: unauthorized access, loss of confidentiality, integrity issues, or service commitments not met. Use a decision matrix so analysts apply the definition consistently.
What evidence is “enough” to prove evaluation happened?
You need the original alert/event record plus triage notes that show enrichment, analysis, and a recorded failure determination with rationale. Tickets closed as “benign” still need defensible notes and supporting log references.
Can we satisfy this control if an MSSP does triage for us?
Yes, if you can produce evidence that evaluation occurred and that your organization owns the failure determination and escalation decisions for your in-scope services. Contractual reporting and shared ticket access are common solutions.
How should we handle near misses?
Track them explicitly as “could have resulted in failure,” document what prevented impact (control, configuration, or user action), and route systemic near misses to problem management or control improvement.
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream