Logging and Monitoring

To meet the C2M2 “Logging and Monitoring” requirement, you must log IT and OT assets in a way that matches their risk to delivering the function, then actively monitor those logs and retain evidence that reviews and follow-up actions occurred (Cybersecurity Capability Maturity Model v2.1). Operationalize this by tiering assets by criticality, defining minimum log events per tier, centralizing collection where feasible, and proving routine review through tickets and escalations.

Key takeaways:

  • Scope logging across both IT and OT, then scale depth and frequency based on delivery risk (Cybersecurity Capability Maturity Model v2.1).
  • “Logging” alone is insufficient; you need monitoring and proof of review, triage, and escalation.
  • Your audit win condition is documentation: what is logged, why, retention, alert thresholds, and review evidence.

“Logging and Monitoring” sounds straightforward until you are asked a basic exam question: “Show me which OT assets are logged, what events you capture, how long you keep them, and where you prove someone reviewed the alerts.” C2M2 frames the requirement in a risk-proportionate way: log IT and OT assets commensurate with the risk to the delivery of the function (Cybersecurity Capability Maturity Model v2.1). That gives you flexibility, but it also raises expectations that you can justify your choices.

For a CCO, Compliance Officer, or GRC lead, the fastest path is to treat this as a requirement to (1) define logging coverage standards by risk tier, (2) implement collection and monitoring workflows that operate consistently, and (3) retain evidence that demonstrates operation over time. This page translates the requirement into a practical build plan: what to log, how to document it, how to set monitoring ownership, and what artifacts to keep so audits do not devolve into screenshots and guesswork.

Regulatory text

Requirement (excerpt): “Logging is performed on IT and OT assets that is commensurate with the risk to the delivery of the function.” (Cybersecurity Capability Maturity Model v2.1)

Operator interpretation:
You must identify the IT and OT assets that support the scoped function and implement logging coverage that increases as the asset’s impact on delivery increases (Cybersecurity Capability Maturity Model v2.1). You also need operating proof that logs are monitored and acted on, not just generated, because gaps in review create detection blind spots and leave you without evidence during audits or diligence (Cybersecurity Capability Maturity Model v2.1).

Plain-English requirement (what it really means)

  • If an asset can disrupt delivery, it needs logs. The more it can disrupt delivery, the more detailed, reliable, and reviewable the logs must be.
  • OT is explicitly in scope. You cannot limit logging to enterprise IT if OT supports the function.
  • Monitoring is part of the expectation. If nobody reviews, triages, and escalates, logging becomes a paper control and fails in practice.

Who it applies to (entity and operational context)

This requirement applies when your organization has adopted C2M2 for a defined scope (a business unit, function, or OT environment) and is assessing maturity for that scope (Cybersecurity Capability Maturity Model v2.1). Typical in-scope operators include energy sector organizations and other critical infrastructure operators using C2M2 as a structured capability assessment (Cybersecurity Capability Maturity Model v2.1).

Operationally, it applies to:

  • IT systems supporting the function (identity, endpoints, servers, network infrastructure, cloud control planes where relevant).
  • OT systems supporting the function (control systems, OT endpoints, historians, engineering workstations, remote access paths, and network boundary devices).
  • Third parties with access paths into IT/OT (managed service providers, OEM support channels, remote maintenance) because they generate authentication, remote access, and change activity you need to detect and investigate.

What you actually need to do (step-by-step)

Step 1: Define scope and “delivery risk” in a way you can defend

  1. List the function-in-scope and the systems required to deliver it (IT and OT).
  2. Assign an asset criticality tier based on delivery impact, not just asset type. A practical approach is:
    • Tier 1: Directly affects delivery or safety; compromise disrupts the function quickly.
    • Tier 2: Supports Tier 1; compromise degrades delivery or detection.
    • Tier 3: Peripheral; compromise has limited delivery impact.
  3. Document the rationale for tiering. Auditors accept flexibility when the rationale is consistent.

Artifact: Asset inventory subset for the scoped function, with tiering and business rationale.
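The tiered inventory above can be kept as structured data rather than a spreadsheet, which makes the "every asset has a documented rationale" rule mechanically checkable. This is a minimal sketch with hypothetical asset names and field names; it is not a C2M2-prescribed schema.

```python
from dataclasses import dataclass

# Hypothetical asset record; field names are illustrative, not from C2M2.
@dataclass
class Asset:
    name: str
    kind: str        # "IT" or "OT"
    tier: int        # 1 = directly affects delivery/safety, 3 = peripheral
    rationale: str   # documented tiering justification for auditors

inventory = [
    Asset("plc-line-1", "OT", 1, "Directly controls production; compromise halts delivery"),
    Asset("historian-01", "OT", 2, "Supports Tier 1 visibility; compromise degrades detection"),
    Asset("print-server", "IT", 3, "Peripheral; limited delivery impact"),
]

# Every asset must carry a rationale so the tiering is defensible in an audit.
missing = [a.name for a in inventory if not a.rationale.strip()]
assert not missing, f"Assets missing tiering rationale: {missing}"

tier1 = [a.name for a in inventory if a.tier == 1]
print(tier1)  # ['plc-line-1']
```

Running a check like this on every inventory update turns "document the rationale" from a policy statement into a gate that fails loudly when someone adds an asset without one.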

Step 2: Set minimum log coverage standards per tier

Create a short “logging standard” that maps tier → required log sources → required event types → retention → review expectations.

Baseline event categories to define (adapt per environment):

  • Identity and access: authentication successes/failures, privilege changes, account lifecycle, remote access sessions.
  • System and endpoint: process execution (where feasible), service changes, malware detections, local admin changes.
  • Network and boundary: firewall allows/denies, VPN/remote access logs, IDS/IPS alerts, DNS where in scope.
  • OT-specific signals (where available): controller/engineering workstation authentication, configuration downloads, logic changes, remote programming sessions, historian access, and OT boundary traffic.

Artifact: Logging & Monitoring Control Matrix (system, event types, thresholds, owner, retention, forwarding destination).
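The tier-to-events mapping in the logging standard can also live as data, so coverage gaps per source are computed rather than eyeballed. The event categories, retention values, and review cadences below are illustrative assumptions, not C2M2 requirements.

```python
# Hypothetical logging standard: tier -> minimum event categories, retention, review cadence.
LOGGING_STANDARD = {
    1: {"events": {"auth", "priv_change", "remote_access", "config_change"},
        "retention_days": 365, "review": "daily"},
    2: {"events": {"auth", "remote_access"},
        "retention_days": 180, "review": "weekly"},
    3: {"events": {"auth"},
        "retention_days": 90, "review": "monthly"},
}

def required_events(tier: int) -> set:
    """Minimum event categories a source at this tier must emit."""
    return LOGGING_STANDARD[tier]["events"]

def coverage_gaps(tier: int, observed: set) -> set:
    """Event categories the standard requires but the source is not producing."""
    return required_events(tier) - observed

# A Tier 1 OT source that only emits auth events has visible, reportable gaps.
print(sorted(coverage_gaps(1, {"auth"})))  # ['config_change', 'priv_change', 'remote_access']
```

The same structure doubles as the control matrix artifact: exporting it per source gives auditors the system-to-events-to-retention trace in one place.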

Step 3: Implement collection and normalization (centralize where feasible)

  1. Confirm each Tier 1 and Tier 2 source is actually producing logs. This sounds basic; it is where many programs fail.
  2. Forward logs to a central repository (SIEM, log management platform, or managed SOC tooling) where correlation and retention are controlled.
  3. Protect log integrity: restrict admin access, segregate duties where possible, and ensure time synchronization across sources so investigations can reconstruct sequences.

Evidence to retain: configuration exports, forwarding settings, screenshots only as a last resort, and access control lists for the logging platform.
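Step 3's "confirm each source is actually producing logs" can be automated as a silence check against the central repository: if a source's newest event is older than a tier-based threshold, flag it. The thresholds and source names below are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical allowed silence per tier before a source is flagged as stale.
MAX_SILENCE = {1: timedelta(hours=1), 2: timedelta(hours=6), 3: timedelta(days=1)}

def stale_sources(last_seen: dict, tiers: dict, now: datetime) -> list:
    """Sources whose most recent event exceeds the allowed silence for their tier."""
    return sorted(
        src for src, ts in last_seen.items()
        if now - ts > MAX_SILENCE[tiers[src]]
    )

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
last_seen = {
    "plc-line-1": now - timedelta(minutes=10),  # healthy Tier 1 source
    "historian-01": now - timedelta(hours=12),  # silent too long for Tier 2
}
tiers = {"plc-line-1": 1, "historian-01": 2}
print(stale_sources(last_seen, tiers, now))  # ['historian-01']
```

Note that this check only works if clocks are synchronized across sources, which is the same time-sync property investigations depend on.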

Step 4: Define monitoring ownership, triage workflow, and escalation rules

Logging passes audits only when you can show operation. Build a simple, testable workflow:

  1. Assign an accountable owner for monitoring (SOC, IT security, OT security, or an outsourced provider).
  2. Define alert thresholds and priorities aligned to tiers (Tier 1 gets tighter thresholds and faster triage).
  3. Write an escalation path that includes OT operations when alerts affect OT assets.
  4. Create a ticketing linkage so alerts generate cases, cases generate actions, and actions close with documented resolution.

C2M2-oriented best practice explicitly expects you to keep review evidence, follow-up tickets, and escalation records (Cybersecurity Capability Maturity Model v2.1).

Artifacts: monitoring runbook, on-call/escalation roster, sample alert-to-ticket records, incident records when applicable.
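The alert-to-ticket linkage in step 4 hinges on one rule: a case cannot close without a documented resolution, because that resolution text is the audit evidence. A minimal sketch of that rule, with hypothetical class and field names:

```python
from dataclasses import dataclass

# Hypothetical ticket record tied to an alert; names are illustrative.
@dataclass
class Ticket:
    alert_id: str
    tier: int
    status: str = "open"
    resolution: str = ""

def close_ticket(ticket: Ticket, resolution: str) -> Ticket:
    # Refuse silent closure: the documented resolution is the audit evidence.
    if not resolution.strip():
        raise ValueError("closure requires a documented resolution")
    ticket.status, ticket.resolution = "closed", resolution
    return ticket

t = Ticket(alert_id="ALERT-1042", tier=1)
close_ticket(t, "Failed logins traced to expired service account; credentials rotated")
print(t.status)  # closed
```

Most ticketing platforms can enforce the same constraint with a required field on the close transition; the point is that closure without a resolution note should be impossible, not merely discouraged.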

Step 5: Set retention and review cadences you can consistently execute

C2M2 does not mandate a specific retention duration, so set a retention period that matches your risk and operational needs, then enforce it consistently (Cybersecurity Capability Maturity Model v2.1). For auditors, consistency beats aspirational settings.

Define:

  • Retention by tier and log type (hot vs. archived if applicable).
  • Review expectations (daily for high-risk alert queues, periodic review for lower tiers, and periodic health checks for log source coverage).

Artifacts: retention configuration, written retention standard, review checklists or dashboard exports showing reviews occurred.
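Because auditors test consistency between the written retention standard and what is actually configured, a periodic drift check pays off. This sketch compares configured retention per source against an assumed tier standard; the day counts are illustrative, not mandated by C2M2.

```python
# Hypothetical written standard: minimum retention days per tier.
STANDARD_RETENTION_DAYS = {1: 365, 2: 180, 3: 90}

def retention_violations(configured: dict, tiers: dict) -> dict:
    """Sources whose configured retention falls short of the standard for their tier.

    Returns {source: (configured_days, required_days)} for each shortfall.
    """
    return {
        src: (days, STANDARD_RETENTION_DAYS[tiers[src]])
        for src, days in configured.items()
        if days < STANDARD_RETENTION_DAYS[tiers[src]]
    }

configured = {"plc-line-1": 365, "historian-01": 90}
tiers = {"plc-line-1": 1, "historian-01": 2}
print(retention_violations(configured, tiers))  # {'historian-01': (90, 180)}
```

Any shortfall either gets fixed or lands in the exceptions register with a remediation date, which keeps the "consistency beats aspiration" story intact.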

Step 6: Prove the control works (control testing and coverage validation)

Build a lightweight test plan:

  • Coverage test: pick a sample of Tier 1 assets and prove logs arrive centrally.
  • Detection test: simulate a benign event (e.g., failed logins, privilege change in test environment) and show alerting plus ticketing.
  • Process test: confirm triage notes, escalation, and closure quality.

This directly addresses the stated risk: incomplete logging or unreviewed logs let suspicious activity and control failures go undetected and leave you without operating evidence for audits and diligence (Cybersecurity Capability Maturity Model v2.1).

Artifacts: test results, evidence pack, exceptions register for gaps with remediation dates.
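The coverage test in step 6 can be sketched as a reproducible sample: pick Tier 1 assets with a fixed seed (so the evidence pack shows exactly which assets were tested) and check each against the central log index. Asset names and the index lookup are assumptions for illustration.

```python
import random

# Hypothetical coverage test: sample Tier 1 assets, confirm each appears in the central log index.
def coverage_test(tier1_assets: list, central_index: set, sample_size: int, seed: int = 0) -> dict:
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible in the evidence pack
    sample = rng.sample(tier1_assets, min(sample_size, len(tier1_assets)))
    return {asset: asset in central_index for asset in sample}

tier1 = ["plc-line-1", "eng-ws-01", "hmi-02"]
central = {"plc-line-1", "hmi-02"}          # eng-ws-01 is not forwarding logs
results = coverage_test(tier1, central, sample_size=3)
failed = sorted(a for a, ok in results.items() if not ok)
print(failed)  # ['eng-ws-01']
```

Each failure becomes either a remediation ticket or an exceptions-register entry, which is exactly the operating evidence the stated risk calls for.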

Required evidence and artifacts to retain (audit-ready)

Keep these in a single “Logging & Monitoring” evidence folder per scope:

  • Logging & Monitoring Standard (what must be logged by tier; retention; review expectations).
  • Asset-to-logging mapping (inventory subset + tier + logging status).
  • Log source configurations (forwarders, agents, syslog settings, SIEM connectors).
  • Alert catalog and thresholds (what alerts exist, severity mapping, ownership).
  • Monitoring runbook (triage steps, enrichment, escalation contacts).
  • Review evidence (dashboard exports, analyst notes, sign-offs where used).
  • Tickets and escalations demonstrating follow-up and closure (Cybersecurity Capability Maturity Model v2.1).
  • Exception register (known gaps, compensating controls, remediation tracking).

Common exam/audit questions and hangups

Auditors and assessors typically press on:

  • “Show me OT coverage.” They will ask which OT assets are logged and how you monitor them.
  • “How did you decide what’s commensurate with risk?” They want a tiering method and rationale, not “we log everything we can.”
  • “Prove monitoring happens.” Expect requests for dated alert reviews, tickets, and escalation evidence (Cybersecurity Capability Maturity Model v2.1).
  • “What happens when logging fails?” You need a health-check process and an owner for broken log sources.
  • “Can you reconstruct a timeline?” Time sync and retention become visible during investigations and tabletop exercises.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Treating “logging enabled” as compliance.
    Fix: require evidence of review plus ticket closure records (Cybersecurity Capability Maturity Model v2.1).

  2. Mistake: IT-only logging scope.
    Fix: explicitly map OT assets and OT boundary devices into the logging standard.

  3. Mistake: No defensible rationale for gaps.
    Fix: keep an exceptions register with risk acceptance or remediation commitments.

  4. Mistake: Collecting too much low-value noise first.
    Fix: start with Tier 1/Tier 2 sources and a small, high-fidelity alert set. Expand after triage stabilizes.

  5. Mistake: Logs stored but not protected.
    Fix: restrict admin access, monitor changes to logging configurations, and document access controls.

Enforcement context and risk implications

C2M2 is a voluntary maturity model, so there is no direct enforcement action tied to this requirement. Practically, the risk is operational and evidentiary: incomplete logging or lack of monitoring can allow suspicious activity and control failures to persist undetected, and it can leave you unable to prove control operation during internal testing, audits, customer diligence, or regulator review (Cybersecurity Capability Maturity Model v2.1).

Practical 30/60/90-day execution plan

Use this as a phased rollout that aligns to “commensurate with risk” rather than boiling the ocean.

First 30 days (stabilize scope + minimum viable visibility)

  • Confirm the scoped function, in-scope IT/OT asset list, and tiering.
  • Publish a one-page logging standard by tier (events, retention intent, owners).
  • Centralize Tier 1 log sources where feasible; document gaps and owners.
  • Stand up an initial alert queue and ticket workflow; capture evidence of reviews.

Days 31–60 (operationalize monitoring and prove it)

  • Expand to Tier 2 sources; tighten identity/remote access coverage.
  • Finalize runbooks, escalation paths (including OT operations), and on-call coverage.
  • Start routine log health checks and exception tracking.
  • Run a basic control test: coverage, alerting, triage, closure artifacts.

Days 61–90 (harden, reduce noise, improve auditability)

  • Tune alerts to reduce false positives and improve triage quality.
  • Add integrity controls (access restrictions, change monitoring for logging configs).
  • Improve investigation readiness (time sync validation, retention confirmation).
  • Package an audit-ready evidence kit in Daydream so assessors can trace: asset → log source → alert → ticket → closure without ad hoc screenshot requests.

Where Daydream fits: Daydream is most effective as your system of record for the logging control matrix, exception register, and evidence pack, plus the linkage between monitoring reviews and follow-up tickets. That reduces scramble during C2M2 assessments because your rationale and operating evidence stay tied to the scoped function.

Frequently Asked Questions

How do I prove “commensurate with risk” without a numeric standard?

Use a tiering model tied to delivery impact and document it per asset class. Then show that higher tiers have broader event coverage, stronger monitoring, and clearer escalation than lower tiers (Cybersecurity Capability Maturity Model v2.1).

Do I need a SIEM to satisfy this requirement?

The requirement is about performing logging and monitoring proportionate to risk, not buying a specific tool (Cybersecurity Capability Maturity Model v2.1). A SIEM or managed SOC often makes evidence and correlation easier, but you can meet the requirement with other centralized log and review workflows if they are consistent and documented.

What OT logs should I prioritize if I can’t collect everything?

Start with the OT access paths and change surfaces: remote access, authentication to engineering workstations, and configuration or logic change events where your tooling supports them. Document constraints and track gaps in an exceptions register.

What evidence is most persuasive to auditors?

A control matrix showing what is logged, where it goes, retention settings, and who reviews it, plus dated review evidence and tickets that show triage and escalation (Cybersecurity Capability Maturity Model v2.1). Auditors want traceability, not screenshots in isolation.

How do I handle third-party remote support into OT environments?

Treat third-party access paths as Tier 1 or Tier 2 log sources because they can affect delivery. Log session starts/stops, authentication, privileged actions, and change activity, then ensure alerts route to owners who can coordinate with OT operations.

What if log sources are unreliable or intermittently offline?

Track log source health as part of monitoring and open tickets when collection fails. Keep recurring issues in your exceptions register with remediation actions so you can show active management rather than silent gaps.


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream