Log Review and Analysis
To meet the C2M2 “Log Review and Analysis” requirement, you must define which systems generate security-relevant logs, centralize or otherwise make them reviewable, and perform documented review and analysis that can identify cybersecurity events and trigger timely follow-up. The fastest path is to standardize log sources, alert thresholds, review cadence, and ticketed escalation with retained evidence.
Key takeaways:
- Scope your log sources first, then standardize events, thresholds, and retention across that scope.
- “Review and analyze” must produce operating evidence: alerts reviewed, decisions made, and follow-up tracked to closure.
- Auditors look for coverage gaps (missing sources), inconsistency (ad hoc reviews), and lack of proof (no tickets, no escalation trail).
“Log review and analysis” is a deceptively small requirement that becomes an audit focal point because it is where detection meets proof. C2M2 SITUATION-1.B (MIL1) asks for a baseline capability: logs are reviewed and analyzed to identify cybersecurity events. That means you need more than log collection. You need a repeatable operating routine that turns log data into security-relevant decisions (triage, escalation, containment, remediation), and you need evidence that the routine actually ran.
This page is written for a Compliance Officer, CCO, or GRC lead who has to operationalize the requirement quickly across an enterprise or an operational technology (OT) environment. The practical goal is to (1) define the log sources in scope, (2) define what “review” looks like (who, how often, and against what criteria), (3) define what “analysis” produces (alerts, cases, tickets), and (4) retain artifacts that show follow-through.
C2M2 is a maturity model used heavily in critical infrastructure contexts. If you are adopting C2M2 for a business unit or environment, your “log review and analysis” control needs to be clear enough for an assessor to test without reading between the lines. 1
Regulatory text
Excerpt (C2M2 v2.1 SITUATION-1.B): “Logs are reviewed and analyzed to identify cybersecurity events.” 1
Plain-English interpretation
You must:
- Have logs that matter (security-relevant system, network, identity, and application/OT logs in scope).
- Review them (a defined person/team performs routine checks and triage).
- Analyze them (you correlate, baseline, or otherwise interpret signals to identify potential cybersecurity events, not just store raw data).
- Act on findings (documented follow-up and escalation when the logs indicate suspicious activity or control failure).
The operational test is simple: if an assessor picks a time window and asks “show me how you would have detected a plausible security event,” you can produce log evidence, review records, and resulting tickets/escalations.
Who it applies to
Entity scope
- Energy sector organizations and critical infrastructure operators using C2M2 to assess cybersecurity maturity in a defined scope. 1
- Any organization that has adopted C2M2 for a scoped business unit, function, or OT environment and is being measured against SITUATION-1.B. 1
Operational context where this becomes “real”
- OT/ICS environments where logging is uneven (legacy devices, limited telemetry, segmented networks).
- Hybrid enterprises where logs exist across cloud, identity providers, endpoint tools, network devices, and SaaS platforms.
- Third-party connected systems (MSPs, OEM remote access, vendor-managed equipment) where log ownership is split and you still need detection and evidence.
What you actually need to do (step-by-step)
1) Define the logging scope you will defend
Create a “log coverage register” for the environment in scope. Include:
- Systems that support critical functions (OT historians, SCADA, engineering workstations; or critical business apps).
- Identity systems (directory services, SSO, privileged access where applicable).
- Security tooling (EDR, firewall, VPN, email security, vulnerability tools if they generate event logs).
- Key third-party connections that could introduce risk (remote access gateways, vendor support channels).
Output: A list you can hand to an assessor that answers “what produces logs, and where do those logs go?”
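The register can start as structured data rather than a spreadsheet, which also lets you check coverage programmatically. A minimal sketch (field names and sample systems are illustrative, not a C2M2-prescribed schema):

```python
from dataclasses import dataclass

# Hypothetical register fields; adapt to your environment.
@dataclass
class LogSource:
    system: str       # what produces logs
    owner: str        # accountable reviewer/team
    log_type: str     # e.g. auth, network, application, OT command
    destination: str  # where the logs go (SIEM, tool console, syslog)
    onboarded: bool   # is the source actually being collected?

register = [
    LogSource("SCADA historian", "OT security", "application", "siem", True),
    LogSource("VPN gateway", "IT ops", "remote access", "siem", True),
    LogSource("Vendor jump host", "Third-party mgmt", "remote access", "vendor console", False),
]

# Answer the assessor question directly: what is in scope, and where are the gaps?
gaps = [s.system for s in register if not s.onboarded]
```

Anything in `gaps` belongs in your exceptions register (step 6) until it is onboarded or formally compensated.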
2) Standardize what events you review and why
For each log source, define:
- Event types of interest (authentication failures, privilege changes, remote logins, policy changes, malware detections, blocked network connections, OT command changes if available).
- Minimum fields required to make the logs useful (timestamp, user/service account, host/device, action, outcome).
- Thresholds and rules that turn raw logs into reviewable items (alert conditions, anomaly flags, correlation rules).
This aligns directly to the control expectation to document “systems, events, thresholds, and retention settings.” 1
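A threshold rule can be as simple as "N failures per account within a window." A sketch of one such rule, assuming events arrive time-sorted as `(timestamp, account, outcome)` tuples (the threshold and window values are illustrative, not prescribed by C2M2):

```python
from collections import defaultdict
from datetime import datetime, timedelta

THRESHOLD = 5                    # failures before we raise a reviewable item
WINDOW = timedelta(minutes=10)   # sliding window for counting failures

def failed_login_alerts(events, threshold=THRESHOLD, window=WINDOW):
    """Return accounts whose failed logins hit the threshold within the window.

    events: iterable of (timestamp, account, outcome), assumed time-sorted.
    """
    recent = defaultdict(list)
    alerted = set()
    for ts, account, outcome in events:
        if outcome != "failure":
            continue
        # Keep only failures still inside the sliding window.
        recent[account] = [t for t in recent[account] if ts - t <= window] + [ts]
        if len(recent[account]) >= threshold:
            alerted.add(account)
    return alerted

base = datetime(2024, 1, 1, 9, 0)
events = [(base + timedelta(minutes=i), "svc-hmi", "failure") for i in range(6)]
alerts = failed_login_alerts(events)
```

The point is not the code itself but that the logic (threshold, window, fields consumed) is written down somewhere an assessor can read, matching the documented "systems, events, thresholds" expectation.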
3) Make logs reviewable (centralize or federate, but be explicit)
You do not need a specific technology to satisfy the requirement, but you do need an operating model:
- Centralized model: logs forwarded to a SIEM/log platform where alerts and searches occur.
- Federated model: logs remain in tool consoles, but you define who reviews which console, how evidence is captured, and how findings roll up into a common case/ticket workflow.
What matters is that review is consistent, auditable, and produces outcomes.
4) Establish a review routine with named roles
Document:
- Reviewers (SOC, IT ops, OT security, or an outsourced provider).
- Review triggers (scheduled reviews plus event-driven reviews from alerts).
- Decision tree for triage (benign/expected, needs investigation, escalate to incident).
- Escalation path (who gets paged/notified, when Legal/Privacy/Operations are involved, who can declare an incident).
Keep it simple. If the routine is too complex, it will drift into “best effort.”
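The triage decision tree above can be encoded so every reviewer applies the same dispositions. A minimal sketch, with illustrative rule logic that you would replace with your own alert-catalog criteria:

```python
# Three dispositions, mirroring the decision tree in the procedure.
# The conditions below are placeholders; tune them to your alert catalog.
def triage(alert: dict) -> str:
    if alert.get("matches_known_benign"):
        return "benign/expected"
    if alert.get("severity") in ("high", "critical") or alert.get("asset_critical"):
        return "escalate to incident"
    return "needs investigation"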
5) Require ticketed follow-up and closure
Your log review must create an evidence trail:
- Every meaningful alert or suspicious pattern becomes a case/ticket.
- The ticket shows triage notes, actions taken, and closure rationale.
- Escalations are captured (handoff to incident response, OT operations, third-party support, etc.).
This aligns to the best-practice expectation to “keep review evidence, follow-up tickets, and escalation records.” 1
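One way to enforce the evidence trail is a closure gate: a ticket cannot close until the minimum fields are populated. A sketch, with illustrative field names:

```python
# Minimum evidence a ticket should carry before closure; field names are
# illustrative, not a prescribed schema.
REQUIRED_FOR_CLOSURE = ("alert_id", "triage_notes", "actions_taken", "closure_rationale")

def can_close(ticket: dict) -> bool:
    """A ticket is closable only when its evidence trail is complete."""
    return all(ticket.get(field) for field in REQUIRED_FOR_CLOSURE)
```

The `alert_id` field is what ties the ticket back to the log signal, which directly addresses the "tickets exist but do not tie back to log signals" finding described below.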
6) Define retention and protect log integrity
Write down:
- Retention settings per log source or platform (how long logs are kept and in what tier).
- Access controls (who can read vs. administer vs. delete).
- Time sync approach (so timelines during investigation make sense).
- Exception handling where a device cannot log (compensating controls and a plan).
C2M2’s excerpt does not prescribe a retention duration; your job is to pick a defensible period for your risk and document it.
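Once you have documented a retention period per source type, you can verify configured values against it on a schedule. A sketch (the standard values are placeholders, not recommendations):

```python
# Documented retention standard per source type, in days.
# These numbers are illustrative; pick periods defensible for your risk.
STANDARD_DAYS = {"auth": 365, "network": 90, "ot": 365}

def retention_findings(configured: dict) -> dict:
    """configured: {source_type: retention_days} pulled from tool exports.

    Returns the source types configured below the documented standard.
    """
    return {t: d for t, d in configured.items() if d < STANDARD_DAYS.get(t, 0)}
```

Running this against periodic SIEM/tool configuration exports produces exactly the "retention is configured as stated" artifact in the evidence table below.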
7) Test the control like an auditor will
Run a tabletop-style evidence test:
- Pick a recent time window.
- Pull sample alerts/events across multiple sources.
- Show review notes, ticket creation, escalation, and closure.
- Confirm that the log source is in the coverage register and the thresholds/events are documented.
If you cannot produce evidence on demand, treat it as a control failure even if the SOC says they “look at dashboards.”
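The drill itself can be scripted: for each sampled alert, check that its source is in the coverage register and that it traces to a ticket. A sketch with illustrative data shapes:

```python
# Evidence test: every sampled alert must come from a registered source
# and trace to a ticket. Data shapes here are illustrative.
def drill(alerts, tickets_by_alert, register_systems):
    """Return (alert_id, failure_reason) pairs; an empty list means the drill passed."""
    failures = []
    for alert in alerts:
        if alert["source"] not in register_systems:
            failures.append((alert["id"], "source not in coverage register"))
        elif alert["id"] not in tickets_by_alert:
            failures.append((alert["id"], "no ticket"))
    return failures
```

Any non-empty result is a finding to remediate before an assessor runs the same test for real.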
Required evidence and artifacts to retain
Use this as your audit-ready checklist:
| Artifact | What it proves | Minimum contents |
|---|---|---|
| Log coverage register (system inventory for logs) | You know what is in scope and monitored | System/tool, owner, log type, collection method, location, status |
| Logging & monitoring standard / procedure | Review and analysis is defined and repeatable | Roles, cadence/triggers, event types, thresholds, escalation steps |
| Use-case/alert catalog | Analysis criteria exist | Rule name, description, severity, data sources, threshold/logic, tuning notes |
| Review evidence | Reviews occurred | Daily/weekly screenshots, exported reports, analyst notes, review sign-offs |
| Tickets/cases with closure | Findings were handled | Ticket ID, timestamps, triage, actions, escalation, resolution |
| Escalation records | Incident pathway works | Pager/notification logs, emails, IR bridge notes (as appropriate) |
| Retention/config exports | Retention is configured as stated | SIEM retention policy, tool settings, storage tiers |
| Exceptions register | Gaps are known and controlled | System, reason for gap, compensating control, remediation plan |
Daydream (or any GRC system) fits naturally here by mapping log sources to control statements, attaching repeatable evidence, and tying tickets/escalations back to the requirement so you can answer assessors without scrambling.
Common exam/audit questions and hangups
Expect variations of:
- “Which log sources are in scope, and which are excluded? Show approval for exclusions.”
- “Show me your documented thresholds and the rationale for what generates an alert.”
- “Who reviews logs, and how do you know reviews happened on the required cadence?”
- “Pick three alerts. Show end-to-end handling and closure.”
- “How do you prevent tampering or deletion of logs by admins?”
- “How do you cover third-party remote access and vendor-managed assets?”
Hangups that create findings:
- Logs exist, but no proof of review beyond verbal statements.
- Review happens, but no documented criteria (ad hoc “we look for weird stuff”).
- Tickets exist, but do not tie back to log signals (missing correlation between alert and action).
Frequent implementation mistakes and how to avoid them
- Mistake: Treating “log collection” as compliance. Fix: Require reviewer evidence and ticketed follow-up as part of “done.”
- Mistake: Monitoring only security tools, not core control planes. Fix: Ensure identity, privileged activity, remote access, and key configuration changes are in the event list.
- Mistake: Unbounded alerting that burns out reviewers. Fix: Maintain an alert catalog with tuning notes and periodic review of noisy rules.
- Mistake: No ownership in OT. Fix: Assign explicit OT log review responsibilities (even if a centralized SOC provides support) and document the handoffs.
- Mistake: Third-party blind spots. Fix: Contractually require logs or security event reporting from critical third parties, and document how you ingest or review it.
Enforcement context and risk implications
No public enforcement cases were provided in the supplied source material for this C2M2 requirement. Practically, the risk is operational: incomplete log review lets suspicious activity and control failures go undetected, and you may lack operating evidence during internal control testing, audits, customer diligence, or regulator review. 1
Practical 30/60/90-day execution plan
First 30 days (stabilize and document)
- Build the log coverage register for the scoped environment.
- Publish a one-page procedure: who reviews, what they review, how findings become tickets, and escalation paths.
- Define an initial event/alert list for each major source, with owners.
- Start retaining review evidence in a single repository mapped to the control (Daydream works well for this).
By 60 days (make it repeatable and testable)
- Expand coverage to the highest-risk sources not yet onboarded (identity, remote access, admin actions).
- Create a consistent ticket taxonomy (alert type, severity, disposition, escalation yes/no).
- Tune alert thresholds to reduce noise, and document tuning decisions in the alert catalog.
- Run an internal “audit drill”: sample alerts, trace to tickets, validate closure evidence.
By 90 days (prove operation and close gaps)
- Formalize exceptions and compensating controls for non-logging assets.
- Add integrity controls (admin access restrictions, retention verification exports).
- Establish a standing monthly control check: coverage review, alert catalog review, evidence completeness.
- Prepare an assessor pack: coverage register, procedure, alert catalog, sample evidence set, and a short narrative of how log review ties to incident response.
Frequently Asked Questions
Do we need a SIEM to meet the log review and analysis requirement?
No specific technology is required by the excerpt, but you must make logs reviewable and show documented review and follow-up. A SIEM often makes evidence, correlation, and retention easier to demonstrate.
What counts as “review evidence” that auditors accept?
Evidence should show what was reviewed, by whom, and what decisions were made. Screenshots or exported reports paired with analyst notes and linked tickets are typically stronger than a calendar entry that says “checked logs.”
How do we handle OT devices that cannot generate useful logs?
Document the gap in an exceptions register, add compensating controls (network monitoring, access restrictions, physical controls), and define a remediation plan. Assessors generally accept constraints when you show ownership and risk handling.
How do we include third parties in log review?
Identify third-party connections that could affect the scoped environment, then define whether you ingest their logs, receive security event reports, or review access logs on your side. Capture this in the coverage register and the escalation workflow.
How often do logs need to be reviewed?
The C2M2 excerpt does not specify a cadence, so set one based on your risk and operational reality, then follow it consistently. The key audit issue is inconsistency or an undocumented cadence.
What’s the fastest way to operationalize this for an assessment?
Start with a narrow, defensible scope and produce an assessor-ready evidence pack: coverage register, documented thresholds/events, recent review artifacts, and tickets tied to those events. Then expand coverage iteratively without changing the basic operating routine.
What you actually need to do
Use the cited implementation guidance when translating the requirement into day-to-day operating steps. 2
Footnotes
Authoritative Sources
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream