Event Detection
To meet the C2M2 “Event Detection” requirement, you need operating detection coverage across your scoped IT/OT environment and a proven path to report detected cybersecurity events to the right responders for triage and action. In practice, that means defined event sources, tuned alert thresholds, monitored queues, and retained evidence that people reviewed, escalated, and resolved events. 1
Key takeaways:
- Detection is only “met” if events are both detected and reported to personnel who can respond. 1
- Your audit burden is evidence-heavy: show coverage, thresholds, retention, and review/escalation records. 1
- The fastest path is to standardize: a detection source register, alert routing rules, a review cadence, and ticketed follow-up.
“Event detection” sounds straightforward until you try to prove it. C2M2’s RESPONSE-1.A (MIL1) requirement is short, but it creates a measurable expectation: cybersecurity events must be detected, and the organization must report them so they can be analyzed and acted on. 1
For a Compliance Officer, CCO, or GRC lead, the operational goal is not to design a perfect SOC. The goal is to make detection verifiable inside the scope you’ve defined for your C2M2 assessment (business unit, function, or OT environment), and to make reporting repeatable so events don’t die in a console. 1
This page translates the requirement into implementation tasks you can assign to Security Operations, OT operations, IT, and third parties, then validate with artifacts. If you already have a SIEM, EDR, OT monitoring, and an incident process, the work is usually about tightening the seams: coverage mapping, alert thresholds, ownership, and evidence that humans actually reviewed alerts and took follow-up action.
Requirement: Event Detection (C2M2 RESPONSE-1.A, MIL1)
Control intent: Cybersecurity events are detected and reported. 1
Plain-English interpretation
You must be able to (1) detect cybersecurity-relevant events across the systems in your C2M2 scope and (2) report those events to appropriate personnel (or functions) for analysis and response. “Reported” means routed to a monitored channel with clear ownership, not just written to a log file. 1
Who it applies to
This applies to organizations using C2M2 within a defined scope, commonly:
- Energy sector organizations and critical infrastructure operators, including environments with OT/ICS and supporting IT. 1
- Operational contexts where detection must span mixed estates: corporate identity, endpoints, servers, network controls, OT sensors, cloud services, and key third-party connections that can affect the scoped environment. 1
Practical scoping rule: if an asset is “in scope” for C2M2, its security events must either flow into your detection capability or have an equivalent monitored detection mechanism with evidence of review.
Regulatory text
Excerpt (C2M2 RESPONSE-1.A, MIL1): “Cybersecurity events are detected and reported.” 1
What the operator must do
- Establish detection capabilities that generate cybersecurity event signals for in-scope systems. 1
- Ensure those signals are reported to appropriate personnel (for example, SOC, incident response on-call, OT control room, or managed security provider) with defined routing and escalation. 1
What you actually need to do (step-by-step)
Use this as an implementation checklist you can assign and track.
Step 1: Lock the scope and detection ownership
- Confirm the C2M2 assessment scope (sites, networks, OT zones, cloud tenants, critical applications).
- Name an Event Detection Owner (often SecOps) and a Reporting Owner (often SOC lead or Incident Commander role).
- Document coverage boundaries: what is monitored by internal teams vs an MSSP vs an OT provider.
Deliverable: “Event Detection Scope & Ownership” one-pager tied to your C2M2 scope statement. 1
Step 2: Build an event detection register (sources, events, thresholds, retention)
Create a register that answers, for every in-scope system category:
- Event source (e.g., EDR, firewall, identity provider, OT monitoring, application logs)
- Event types you rely on (auth failures, privilege changes, malware alerts, network anomalies, configuration changes, OT protocol anomalies)
- Thresholds / rules (what triggers an alert vs informational log)
- Log/alert retention setting and where evidence is stored
- Responsible team for monitoring and first response
This aligns directly to the recommended control: “Document the systems, events, thresholds, and retention settings that support event detection.” 1
Tip for OT: include historian access events, engineering workstation activity, remote access into OT zones, and vendor support session logs if they are in scope.
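If you keep the register as structured data rather than a spreadsheet, you can lint it for gaps before an assessment. The sketch below is illustrative only: the field names, categories, and sample values are assumptions, not C2M2-prescribed fields.

```python
from dataclasses import dataclass

@dataclass
class DetectionSource:
    """One row of the event detection register (illustrative fields)."""
    system_category: str      # e.g., "Endpoints", "OT zone 1", "Cloud tenant"
    source: str               # e.g., "EDR", "firewall", "identity provider"
    event_types: list[str]    # event types relied on for detection
    alert_threshold: str      # what triggers an alert vs. an informational log
    retention_days: int       # configured log/alert retention
    evidence_location: str    # where exported alerts and review evidence live
    monitoring_owner: str     # team responsible for monitoring and first response

def register_gaps(register: list[DetectionSource]) -> list[str]:
    """Flag rows that would not survive an assessor's 'show me' questions."""
    gaps = []
    for row in register:
        if not row.event_types:
            gaps.append(f"{row.system_category}/{row.source}: no event types defined")
        if row.retention_days <= 0:
            gaps.append(f"{row.system_category}/{row.source}: retention not documented")
        if not row.monitoring_owner:
            gaps.append(f"{row.system_category}/{row.source}: no monitoring owner")
    return gaps

# Example row
edr = DetectionSource(
    system_category="Endpoints",
    source="EDR",
    event_types=["malware alert", "privilege escalation", "suspicious process"],
    alert_threshold="any high/critical detection",
    retention_days=365,
    evidence_location="SIEM index: edr-alerts",
    monitoring_owner="SOC",
)
print(register_gaps([edr]))  # [] when the row is complete
```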
Step 3: Define “reported” in operational terms (routing + monitored queues)
Reporting must be deterministic:
- Alerts land in a monitored queue (SIEM console, ticketing queue, on-call paging system, OT control room workflow).
- Each alert type has an owner and an escalation path.
- A severity mapping (e.g., low/medium/high/critical) defines what gets paged vs what is queued.
Minimum expectation you can defend: a documented workflow that shows how an alert becomes a tracked work item for triage.
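One way to make routing deterministic is to encode the severity mapping and alert ownership as data, then derive every routing decision from it. A minimal sketch, assuming a simple four-level severity model; the queue names, alert types, and roles are placeholders for whatever paging and ticketing tools you actually run.

```python
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

# Severity mapping: what gets paged vs. what waits in a monitored queue.
ROUTING = {
    Severity.LOW: {"destination": "soc-review-queue", "page_oncall": False},
    Severity.MEDIUM: {"destination": "soc-review-queue", "page_oncall": False},
    Severity.HIGH: {"destination": "soc-triage-queue", "page_oncall": True},
    Severity.CRITICAL: {"destination": "incident-bridge", "page_oncall": True},
}

# Each alert type has an owner and an escalation path.
ALERT_OWNERS = {
    "privileged_group_change": {"owner": "SOC", "escalate_to": "Incident Commander"},
    "ot_protocol_anomaly": {"owner": "OT control room", "escalate_to": "OT security lead"},
}

def route_alert(alert_type: str, severity: Severity) -> dict:
    """Return a deterministic routing decision for an alert."""
    owner = ALERT_OWNERS.get(alert_type, {"owner": "SOC", "escalate_to": "SOC lead"})
    return {**ROUTING[severity], **owner, "alert_type": alert_type}

print(route_alert("privileged_group_change", Severity.HIGH))
```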
Step 4: Implement review and follow-up discipline (and evidence)
Operate the process so you can prove it later:
- Review alerts routinely (shift-based or daily, depending on your operating model).
- Create a ticket for meaningful events, link the ticket to the originating alert, and record disposition (true positive, false positive, benign, needs investigation).
- Escalate per criteria (lateral movement indicators, privileged account anomalies, OT safety impact).
This aligns to the second recommended control: “Keep review evidence, follow-up tickets, and escalation records that show logged events are actively monitored and resolved.” 1
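The evidence trail is easier to produce later if every review leaves a small structured record linking the alert to its ticket and disposition. A minimal sketch, assuming you export or log these records yourself; the field names and the `triage_evidence.jsonl` file are illustrative, not a required format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class TriageRecord:
    """Evidence that a detected event was reviewed and dispositioned."""
    alert_id: str
    ticket_id: str
    reviewed_by: str
    reviewed_at: str
    disposition: str   # "true positive" | "false positive" | "benign" | "needs investigation"
    escalated: bool
    notes: str

record = TriageRecord(
    alert_id="SIEM-2024-000123",
    ticket_id="SEC-4567",
    reviewed_by="analyst.on.shift",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
    disposition="false positive",
    escalated=False,
    notes="Scheduled admin activity; matches change ticket CHG-889.",
)

# Append to an evidence log that can be exported for assessors.
with open("triage_evidence.jsonl", "a") as fh:
    fh.write(json.dumps(asdict(record)) + "\n")
```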
Step 5: Test the reporting path (tabletop + technical validation)
You do not need a full red team to validate MIL1-level capability. You do need proof the path works.
- Generate controlled events (e.g., test malware alert, test privileged group change alert, test remote access connection event).
- Confirm: alert created → routed → acknowledged → ticket created → closed with notes.
Output: a small set of test records with screenshots/exported alert details and linked tickets.
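If you want to make the path check repeatable, a short script that walks the chain and reports any missing link works well. This is a sketch under stated assumptions: `fetch_alert` and `fetch_ticket` are stand-ins for exports or API calls from your own SIEM and ticketing tools, not a real vendor API.

```python
# Placeholder lookups -- swap in exports or API calls from your own SIEM and
# ticketing tools; nothing here assumes a specific vendor or product API.
def fetch_alert(test_marker: str) -> dict:
    # Stubbed sample of what a controlled test event should produce.
    return {"id": "SIEM-TEST-001", "routed_to": "soc-triage-queue",
            "acknowledged_by": "analyst.on.shift"}

def fetch_ticket(alert_id: str) -> dict:
    return {"id": "SEC-9001", "status": "closed",
            "closure_notes": "Controlled test event; path validated."}

def validate_reporting_path(test_marker: str) -> list[str]:
    """Check the chain: alert created -> routed -> acknowledged -> ticketed -> closed."""
    failures = []
    alert = fetch_alert(test_marker)
    if not alert:
        return ["no alert generated for the test event"]
    if not alert.get("routed_to"):
        failures.append("alert was not routed to a monitored queue")
    if not alert.get("acknowledged_by"):
        failures.append("alert was never acknowledged")
    ticket = fetch_ticket(alert["id"])
    if not ticket:
        failures.append("no ticket linked to the alert")
    elif ticket.get("status") != "closed" or not ticket.get("closure_notes"):
        failures.append("ticket not closed with notes")
    return failures

print(validate_reporting_path("C2M2-DETECTION-TEST-2024-Q3") or "reporting path validated")
```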
Step 6: Close coverage gaps with a risk-based backlog
Common gaps:
- OT assets producing logs that never reach monitoring.
- Cloud audit logs disabled or not routed.
- Third-party remote access logs not captured.
- Noisy thresholds driving alert fatigue, so alerts go unreviewed.
Track gaps in a backlog with owners and target states (enable logging, forward to SIEM, tune thresholds, add suppression rules, update runbooks).
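A small structured backlog keeps these gaps visible and assignable. The sketch below is illustrative; the fields and sample entry are assumptions you would tailor to your own tracking tool.

```python
from dataclasses import dataclass

@dataclass
class CoverageGap:
    """One item in the risk-based detection coverage backlog."""
    gap: str            # e.g., "OT historian logs not forwarded to monitoring"
    owner: str
    target_state: str   # e.g., "forward to SIEM", "tune thresholds"
    risk_note: str
    due: str            # target date or quarter

backlog = [
    CoverageGap(
        gap="Third-party remote access sessions into OT zone not logged",
        owner="OT operations",
        target_state="Enable session logging and forward to monitored queue",
        risk_note="Vendor remote access is a common initial-access path",
        due="2025-Q1",
    ),
]

# Simple triage view: anything without an owner sorts to the top of the list.
for item in sorted(backlog, key=lambda g: (bool(g.owner), g.due)):
    print(f"[{item.due}] {item.gap} -> {item.target_state} (owner: {item.owner or 'UNASSIGNED'})")
```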
Required evidence and artifacts to retain
Auditors and assessors usually want “show me” evidence across design and operation. Keep:
- Event Detection Register: systems, event types, thresholds, retention, owners. 1
- Architecture/data flow diagram for event collection and reporting (high level is fine).
- Alert routing rules (SIEM notification rules, pager/on-call configs, mailbox routing, MSSP SOP extracts).
- Runbooks for triage and escalation.
- Review evidence: shift logs, daily review sign-offs, queue screenshots, exported alert lists with analyst notes. 1
- Tickets and escalations tied to alerts: incident records, case notes, root cause notes where applicable. 1
- Retention configuration evidence: system settings screenshots, log storage policies, SIEM retention settings. 1
If you manage third parties (MSSP, OT monitoring provider), retain:
- Contract/SOW language on monitoring and notification expectations
- Monthly service reports or alert summaries
- Escalation emails/call logs for significant events
Common exam/audit questions and hangups
Expect questions like:
- “Which in-scope systems generate security events, and where do they go?” 1
- “Show me your thresholds and how you tune them.” 1
- “Who reviews alerts, and how do you prove review happened?” 1
- “Show an example from the last period: alert → triage → escalation → closure.” 1
- “What happens after hours?” 1
- “What about OT events? What about third-party remote support sessions?” 1
Hangups that slow audits:
- No single inventory of detection sources and retention.
- Alerts exist, but no evidence of routine review.
- Events are detected in a tool, but not “reported” to responders (no routing/acknowledgment trail). 1
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails the requirement | How to avoid it |
|---|---|---|
| Treating raw logs as “detection” | Logs without monitoring and alerting are not reliably detected or reported | Define event types and thresholds that produce actionable alerts; document review ownership 1 |
| Relying on an MSSP without evidence | You still need proof of monitoring and escalation | Contract for reporting, keep monthly reports, retain escalation records and linked tickets 1 |
| No retention alignment | You cannot show historical operation during testing | Document and confirm retention settings for each key source 1 |
| Alert fatigue leading to non-review | Undermines “detected and reported” | Tune thresholds, suppress known benign patterns, track tuning changes with approvals |
| OT visibility treated as optional | For critical infrastructure, OT blind spots become major response gaps | Include OT event sources in the register; define OT-specific escalation paths |
Risk implications (why CCOs care)
The C2M2 guidance flags a practical risk: incomplete detection or non-review allows suspicious activity and control failures to go undetected, and it leaves you without operating evidence during testing, audits, customer diligence, or regulator review. 1
Translate that into CCO language:
- Operational risk: delayed containment, larger outage blast radius, more costly recovery.
- Compliance risk: inability to demonstrate control operation even if tools exist.
- Third-party risk: if a service provider detects events but you cannot prove notification and follow-up, accountability fails during an incident.
Practical 30/60/90-day execution plan
No sourced timelines are provided for C2M2, so treat this as an execution pattern you can tailor to your environment.
First 30 days (stabilize and make it auditable)
- Confirm scope and owners for detection and reporting. 1
- Build the Event Detection Register (systems, events, thresholds, retention). 1
- Document reporting paths and on-call/escalation coverage.
- Start retaining review evidence (queue screenshots/exports + sign-off + ticket links). 1
Days 31–60 (prove operation and close obvious gaps)
- Run controlled tests to validate alert routing and acknowledgment.
- Create or tighten runbooks for high-risk alert categories.
- Identify top coverage gaps (OT logging, cloud audit logs, third-party remote access) and assign remediation owners.
- Standardize ticket fields to capture alert ID, triage outcome, escalation decision, closure notes.
Days 61–90 (reduce noise, strengthen reporting reliability)
- Tune thresholds and suppression rules; track tuning changes.
- Implement a lightweight metrics pack for governance (counts of reviewed alerts, escalations, aging tickets) that reports what the evidence shows rather than asserted performance claims (see the sketch after this list).
- Perform a mini “audit packet” dry run: pick sample alerts and compile evidence end-to-end.
- If you use Daydream for third-party risk and control evidence management, map event detection responsibilities across third parties (MSSP, OT vendors, cloud providers) and store escalation evidence alongside the third-party record for faster audits.
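The metrics pack can be generated straight from the review evidence you already retain. A minimal sketch, assuming you keep the JSONL triage evidence log illustrated in Step 4; the file name and field names are the same illustrative assumptions used there.

```python
import json
from collections import Counter
from datetime import datetime, timezone

def governance_metrics(evidence_path: str, aging_days: int = 14) -> dict:
    """Summarize review activity from the triage evidence log (see Step 4)."""
    dispositions = Counter()
    escalations = 0
    aging = 0
    now = datetime.now(timezone.utc)
    with open(evidence_path) as fh:
        for line in fh:
            rec = json.loads(line)
            dispositions[rec["disposition"]] += 1
            escalations += int(rec.get("escalated", False))
            reviewed = datetime.fromisoformat(rec["reviewed_at"])
            still_open = rec["disposition"] == "needs investigation"
            if still_open and (now - reviewed).days > aging_days:
                aging += 1
    return {
        "alerts_reviewed": sum(dispositions.values()),
        "dispositions": dict(dispositions),
        "escalations": escalations,
        "aging_open_items": aging,
    }

print(governance_metrics("triage_evidence.jsonl"))
```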
Frequently Asked Questions
What qualifies as a “cybersecurity event” for this requirement?
Treat any security-relevant signal that could indicate compromise, misuse, policy violation, or control failure as an event. The key is consistency: define the event types you detect and the thresholds that trigger reporting. 1
Does “reported” mean I must notify regulators or external parties?
Not in this requirement. “Reported” here means routed internally to appropriate personnel for analysis and response, with evidence that someone received and handled it. 1
We have a SIEM. Are we automatically compliant?
No. You need documented coverage (sources, thresholds, retention) and operational proof of monitoring, follow-up tickets, and escalations. Tools without review evidence routinely fail audits. 1
How do I handle event detection when an MSSP monitors for us?
Keep the detection source list and show the MSSP’s monitoring and escalation records, plus your internal tickets demonstrating triage and closure. Contract language should define notification timelines and event categories. 1
What’s the minimum evidence sample an auditor will accept?
Expect to provide a representative set of alerts with linked tickets and escalation records, plus documentation of thresholds and retention settings. You should be able to trace alert → report/notification → response action. 1
Our OT environment is partially air-gapped. How do we meet detection and reporting?
Use OT-appropriate monitoring within the OT boundary and define how events are reported to the responders who can act (OT operations and security). Document the limitations, compensating monitoring, and the reporting workflow you actually run. 1
Footnotes
1. Cybersecurity Capability Maturity Model (C2M2), Version 2.1.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream