Incident Escalation
To meet the incident escalation requirement, you need documented, approved escalation procedures for cybersecurity incidents and proof that teams actually follow them in real incidents and exercises. Your procedures must route the right incidents to the right technical responders and management based on severity, impact, and timing expectations, then leave an auditable trail. 1
Key takeaways:
- Write and approve a clear escalation procedure tied to severity tiers, roles, and time-to-escalate expectations. 1
- Operationalize escalation inside your incident workflow (ticketing/SIEM/on-call), not as a PDF that sits unused. 1
- Retain artifacts that show escalation happened (timestamps, notifications, decision logs, approvals, post-incident reviews). 1
“Incident Escalation” is a control that fails quietly until you have a real event, a regulator asks for evidence, or a customer due diligence review tests your operating maturity. C2M2’s requirement is short but specific: escalation procedures must be established and followed. 1 That means you need two things: (1) a defined standard (who escalates what, to whom, how fast, and through which channels) and (2) operating proof (records showing your teams used that standard in day-to-day work).
For a CCO, Compliance Officer, or GRC lead, the fastest way to operationalize this is to treat escalation as a workflow with decision points and immutable audit hooks. Your escalation procedure should connect detection and triage to: incident commander assignment, executive notification, legal/privacy involvement, business owner engagement, third-party coordination, and external reporting triggers (when applicable to your other obligations). C2M2 is a maturity model rather than a regulation, but it is often used to benchmark maturity in critical infrastructure and energy-sector environments, and your auditors and customers will expect the basics to work under pressure. 1
Regulatory text
Requirement (C2M2 v2.1 RESPONSE-2.C, MIL1): “Escalation procedures for cybersecurity incidents are established and followed.” 1
What this means in plain English
You must:
- Define escalation rules for cybersecurity incidents (criteria, severity levels, responsible roles, and communication paths). 1
- Execute those rules consistently during real incidents and testing, so escalation is timely and reaches appropriate technical and management personnel. 1
- Prove it happened with artifacts: version-controlled procedures plus operating records (tickets, logs, call trees, notifications, and post-incident documentation). 1
If you can’t show both “established” and “followed,” you will look immature during assessments and may fail internal control testing or external diligence reviews. 1
Who it applies to
Entities
This applies to organizations using C2M2 to assess cybersecurity maturity, commonly in energy-sector and critical infrastructure contexts. 1
Operational context (where escalation breaks most often)
- 24/7 operations (SOC, NOC, OT operations) where incidents occur outside business hours.
- Operational technology (OT) environments where safety and availability constraints change escalation thresholds.
- Hybrid responsibilities (IT owns detection, operations owns recovery, legal owns notification decisions).
- Third-party dependencies (MSSP, cloud provider, OEM, or incident response retainer) where escalation crosses organizational boundaries and contracts.
What you actually need to do (step-by-step)
The goal is a repeatable escalation path from “signal” to “decision-maker,” with evidence at each handoff.
Step 1: Define your escalation triggers and severity tiers
Build a severity model that a triage analyst can apply quickly. Keep it simple and tied to impact:
- Confidentiality impact (data exposure risk)
- Integrity impact (unauthorized changes)
- Availability impact (service/OT downtime)
- Safety/reliability impact (especially OT)
Document trigger examples that force escalation:
- Confirmed malware execution on critical assets
- Privileged credential compromise
- Lateral movement indications
- OT network segmentation breach
- Third-party breach affecting your environment
Your written procedure must show how severity maps to escalation recipients and urgency. 1
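One way to make the severity model unambiguous is to express it as code that triage analysts (or a SOAR playbook) can apply directly. The sketch below is illustrative only: the tier names, recipients, and time thresholds are placeholder assumptions, not values prescribed by C2M2.

```python
# Hypothetical severity model: coarse impact flags map to a tier, and each
# tier carries its escalation recipients and time-to-escalate expectation.
# All names and thresholds here are illustrative, not C2M2-prescribed.
TIERS = {
    "SEV1": {"notify": ["incident_commander", "ciso", "business_owner"],
             "escalate_within_minutes": 15},
    "SEV2": {"notify": ["ir_lead", "business_owner"],
             "escalate_within_minutes": 60},
    "SEV3": {"notify": ["soc_lead"],
             "escalate_within_minutes": 240},
}

def classify(confidentiality: bool, integrity: bool,
             availability: bool, safety: bool) -> str:
    """Return a severity tier from coarse impact flags.

    Safety impact (typical in OT) always forces the top tier, as does
    combined integrity + availability impact.
    """
    if safety or (availability and integrity):
        return "SEV1"
    if confidentiality or integrity or availability:
        return "SEV2"
    return "SEV3"
```

Encoding the model this way also gives you a single artifact to version-control and re-approve when thresholds change.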
Step 2: Assign escalation roles (name roles, not just teams)
In the procedure, define:
- Incident commander (role, not person; include backup)
- SOC lead / IR lead
- Business service owner (the accountable owner of impacted service/plant/process)
- Executive notification group (CISO, CIO, COO; add CCO/GC triggers where needed)
- Legal/privacy escalation triggers (suspected regulated data exposure, law enforcement contact, litigation hold)
- Comms (internal comms, PR) when reputational risk is plausible
- Third-party contact points (MSSP, cloud, key suppliers) with after-hours methods
The C2M2 requirement is satisfied only when procedures exist and are followed; unclear ownership is a common reason teams “follow something else” during an incident. 1
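"Roles, not people" can be enforced mechanically: escalation targets resolve to whoever currently holds the role, with backups tried in order. The registry and names below are placeholder assumptions for illustration.

```python
# Hypothetical role registry: each escalation role lists a primary and
# backups, resolved at escalation time. Names are placeholders.
ON_CALL = {
    "incident_commander": ["a.rivera", "j.chen"],   # primary first, then backup
    "ir_lead":            ["m.okafor", "s.patel"],
    "legal_privacy":      ["counsel_oncall"],
}

def resolve(role: str, unavailable: frozenset = frozenset()) -> str:
    """Return the first available person for a role, or raise if nobody is
    reachable (which itself should trigger a fallback-channel escalation)."""
    for person in ON_CALL.get(role, []):
        if person not in unavailable:
            return person
    raise LookupError(f"no available contact for role {role!r}")
```

The point of the `LookupError` is that "nobody answered" is an explicit, auditable event rather than a silent dead end.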
Step 3: Define the escalation path and channels (primary + fallback)
Your procedure should list:
- Primary channels: incident ticketing system, paging/on-call tool, dedicated incident bridge, secure chat channel
- Fallback channels: phone tree, SMS, alternate emails, secondary bridge provider
- Rules for what gets documented where (e.g., the ticket is the system of record)
Add a rule: every escalation action must generate a timestamped artifact (ticket update, page sent log, email, meeting minutes). That is how you prove “followed.” 1
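The "every escalation generates a timestamped artifact" rule can be enforced at the point of action: one helper that both notifies and writes the record, so there is no path to escalate without leaving evidence. Field names here are illustrative assumptions.

```python
# Sketch: every escalation action appends a timestamped entry to the
# incident ticket (the system of record). Field names are illustrative.
from datetime import datetime, timezone

def record_escalation(ticket: dict, action: str,
                      recipient: str, channel: str) -> dict:
    """Append an escalation event to the ticket's escalation log."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "action": action,        # e.g. "paged", "exec_notified"
        "recipient": recipient,  # role or resolved person
        "channel": channel,      # e.g. "paging_tool", "phone_fallback"
    }
    ticket.setdefault("escalation_log", []).append(entry)
    return entry
```

Because the log lives on the ticket, reconstructing the escalation timeline for an assessor is a single export rather than a search across chat, email, and paging tools.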
Step 4: Embed escalation into the incident workflow (don’t rely on memory)
Operationalize in tools:
- Ticket templates with required fields: severity, impacted assets, who was paged, time paged, acknowledgement time, current decision owner
- SIEM/SOAR rules that open tickets and page on defined detections
- On-call schedules aligned to roles in the procedure
- Runbooks that include an explicit “Escalate to X” checkpoint
If your procedure is separate from the tools analysts use, escalation will drift and you will struggle to evidence consistent execution. 1
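A ticket template's required fields can be validated at creation or closure so incomplete escalation records are caught immediately, not during an audit. The field set below is a sketch based on the template fields listed above.

```python
# Sketch: required-fields check for escalation tickets. The field set
# mirrors the ticket template described above and is illustrative.
REQUIRED_FIELDS = {
    "severity", "impacted_assets", "paged_who",
    "time_paged", "time_acknowledged", "decision_owner",
}

def missing_fields(ticket: dict) -> set:
    """Return the required escalation fields absent from a ticket."""
    return REQUIRED_FIELDS - ticket.keys()
```

Wiring this check into the ticketing workflow (e.g. blocking closure while `missing_fields` is non-empty) is what turns the procedure from a document into an enforced control.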
Step 5: Train and test, then correct the procedure
Use:
- Tabletop exercises (include execs and legal)
- Technical simulations (phishing, ransomware, OT segmentation loss)
- After-action reviews that produce procedure updates and re-approvals
Testing is how you prove the procedure is “followable,” not just “written.” Retain artifacts from the tests as operating evidence. 1
Step 6: Govern the document (approval, review cadence, version history)
Minimum governance expectations implied by “established”:
- Named procedure owner
- Defined review cadence
- Formal approval history
- Version control and change log
This is explicitly aligned to recommended best practices for the requirement: publish approved procedures with ownership and review cadence, then retain version history and operating artifacts. 1
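The governance minimums above can be checked automatically against the procedure's metadata record, so an overdue review or missing approval surfaces before an assessor finds it. The record shape is an illustrative assumption.

```python
# Sketch: check a procedure's governance metadata for the minimums implied
# by "established". The record shape is illustrative, not a standard.
from datetime import date

def governance_gaps(doc: dict, today: date) -> list:
    """Return a list of governance gaps (empty list means compliant)."""
    gaps = []
    if not doc.get("owner"):
        gaps.append("no named owner")
    if not doc.get("approvals"):
        gaps.append("no approval record")
    if not doc.get("versions"):
        gaps.append("no version history")
    next_review = doc.get("next_review")
    if next_review is None or next_review < today:
        gaps.append("review overdue or unscheduled")
    return gaps
```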
Required evidence and artifacts to retain
Auditors and assessors will ask for both design evidence (the written standard) and operating evidence (proof it was used).
Design evidence (policy/procedure level)
- Incident escalation procedure (current version) with:
- Severity definitions and escalation triggers
- RACI or role responsibilities
- Escalation matrix (severity → who gets notified → expected timeframe)
- Communication channels and fallback methods
- Approval record (e-signature, meeting minutes, change ticket)
- Version history / change log and review cadence evidence 1
Operating evidence (execution level)
- Incident tickets showing escalation steps (timestamps + recipients)
- Paging/on-call logs or notification exports
- Incident bridge records (invite logs, attendance, minutes)
- Decision logs (who declared severity, who approved containment steps)
- Post-incident review reports mapping “what happened” to “what the procedure required”
- Exercise records (tabletop agenda, attendance, outcomes, corrective actions) 1
Tip for operators: decide your “system of record” early (usually the incident ticket). Force every escalation event to land there.
Common exam/audit questions and hangups
Expect questions like:
- “Show me your incident escalation procedure and who approved it.” 1
- “How do you decide severity, and who can declare a Severity 1 incident?”
- “Provide evidence of escalation for your last incident of each severity level.”
- “How do you ensure after-hours escalation works?”
- “How do third parties escalate incidents to you, and how do you escalate to them?”
- “What happens if the incident commander is unavailable?”
Hangups that slow assessments:
- Escalation “policy” exists, but it lacks a usable matrix (people, not roles; no triggers; no channels).
- Evidence is scattered across chat, email, and tickets with no single timeline.
- Teams bypass the process during real incidents because it is too complex.
Frequent implementation mistakes (and how to avoid them)
- PDF-only escalation plans. Fix: wire escalation actions into ticket workflows and paging tools so evidence is automatic. 1
- Escalation tied to individuals. Fix: role-based on-call schedules with backups.
- No third-party escalation clauses. Fix: add contractual notification paths and test them with key third parties.
- Severity defined by feelings. Fix: use impact-based criteria and pre-agreed triggers.
- No proof of “followed.” Fix: require that every page/email/bridge has a ticket reference and that the ticket captures timestamps and recipients. 1
Enforcement context and risk implications
No public enforcement cases were provided for this specific C2M2 requirement in the supplied sources. Practically, the risk shows up as:
- Failed audits or customer assessments because you cannot demonstrate consistent execution. 1
- Slower containment and higher business impact because the right decision-makers were not pulled in early.
- Missed downstream obligations (internal reporting, contractual notices, or regulatory notifications) because escalation did not reach legal/compliance quickly enough.
C2M2 is a maturity model, but the expectation to both define and follow escalation procedures aligns with what regulators and customers commonly test: governance, repeatability, and evidence.
A practical 30/60/90-day execution plan
Use this as an implementation sprint plan. Adjust scope to the business unit or environment you assessed under C2M2. 1
First 30 days: Get to “established”
- Draft or refresh the incident escalation procedure with a severity model and escalation matrix. 1
- Assign owners for incident commander, SOC/IR lead, business owners, legal/privacy escalation, and exec notifications.
- Define primary and fallback channels; validate contact lists and on-call coverage.
- Put the procedure into formal approval with a documented review cadence and change log. 1
Deliverable: approved escalation procedure + escalation matrix + contact/on-call mapping.
By 60 days: Get to “followed in tooling”
- Implement ticket templates and required escalation fields.
- Configure paging/on-call routing and ensure it matches the roles in the procedure.
- Create at least one runbook per top incident type (ransomware, privileged access compromise, OT outage) with explicit escalation checkpoints.
- Start an evidence folder structure (or GRC control record) that captures design evidence and operating artifacts. 1
Deliverable: escalation workflow in daily tools + an evidence package structure.
By 90 days: Prove it works and tighten the loop
- Run a tabletop exercise with executives and legal; capture minutes and corrective actions.
- Run a technical simulation or review a recent incident and map the timeline to the escalation procedure.
- Update the procedure based on lessons learned; re-approve and retain version history. 1
- Define simple pass/fail metrics that are meaningful without statistical analysis, such as "every Severity 1 incident has a documented exec notification decision in the ticket."
Deliverable: exercise artifacts + post-incident review artifacts + updated, re-approved procedure.
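A pass/fail metric of this kind can be computed directly from ticket exports. The sketch below checks the example metric above; the ticket field names are illustrative assumptions.

```python
# Sketch: compute "every Severity 1 incident has a documented exec
# notification decision" over a set of tickets. Field names are illustrative.
def sev1_exec_notification_metric(tickets: list) -> tuple:
    """Return (compliant, total) over Severity 1 tickets."""
    sev1 = [t for t in tickets if t.get("severity") == "SEV1"]
    compliant = sum(1 for t in sev1 if t.get("exec_notification_decision"))
    return compliant, len(sev1)
```

A non-zero gap between `compliant` and `total` is itself an audit finding you can remediate before an assessor raises it.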
Where Daydream fits (practical, not theoretical)
If you manage multiple environments and third parties, incident escalation evidence becomes a collection problem: you need the procedure, the approvals, and the operating artifacts tied together. Daydream is useful as a control record system to keep the escalation procedure, version history, approvals, and incident evidence in one place so audits and customer diligence don’t turn into a scavenger hunt. 1
Frequently Asked Questions
Do we need a separate escalation procedure for OT vs IT?
If OT has different impact criteria (safety, reliability, availability), write OT-specific severity triggers and recipients, even if the document is shared. The key is that escalation is established and followed for the scoped environment you assess. 1
What counts as proof that escalation procedures are “followed”?
Time-stamped operating artifacts: incident tickets, paging logs, notification emails, bridge notes, and post-incident reviews that show who was notified and when. Keep enough evidence to reconstruct the escalation timeline. 1
Can we meet the requirement with an incident response plan only?
Only if the plan contains explicit escalation procedures (triggers, roles, channels) and you can show operating evidence that teams used them. A high-level plan without execution artifacts usually fails the “followed” test. 1
How do we handle escalation when a third party detects the incident first (MSSP/cloud)?
Define inbound escalation paths in contracts and runbooks: who they contact, through which channel, and what minimum information they must provide. Retain the notification and the ticket record that shows you processed it according to your procedure. 1
Who should own the escalation procedure—security, IT, or GRC?
Security/IR should own the operational workflow, with GRC ensuring governance (approval, review cadence, evidence retention) and legal/privacy owning notification decision triggers. Ownership is less important than clarity and proof that the procedure is established and followed. 1
What if teams use chat during incidents and forget to update tickets?
Make the incident ticket the system of record and add a required step: every material escalation action (page, exec notification, containment approval) gets recorded in the ticket with timestamps. If needed, assign a scribe role for major incidents. 1
Footnotes
1. Cybersecurity Capability Maturity Model (C2M2) Version 2.1, U.S. Department of Energy.
Authoritative Sources
- Cybersecurity Capability Maturity Model (C2M2) Version 2.1, U.S. Department of Energy.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream