Configuration Monitoring
Configuration Monitoring means you continuously check systems against approved configuration baselines, detect deviations quickly, and drive each deviation to a documented resolution (remediation, approved exception, or risk acceptance). To operationalize it, define the baselines in scope, instrument monitoring to detect drift, set triage and escalation rules, and retain evidence that people reviewed alerts and closed tickets. (Cybersecurity Capability Maturity Model v2.1)
Key takeaways:
- You need both technical detection of configuration drift and proof of operational follow-through (tickets, approvals, closures).
- “Baseline” must be explicit and scoped; otherwise monitoring becomes noise and exceptions become invisible.
- Auditors will test operating effectiveness: they will sample deviations and ask you to show review, decisioning, and closure evidence.
If you have adopted C2M2 for a defined scope, configuration monitoring is one of the fastest ways to prove you can maintain control over that scope in day-to-day operations. The requirement is simple on paper: monitor configuration baselines for deviations, and address deviations. In practice, teams fail here for two predictable reasons: they do not define an auditable baseline (so “deviation” is subjective), or they detect drift but cannot prove anyone reviewed and resolved it (so monitoring looks decorative).
For a Compliance Officer, CCO, or GRC lead, the goal is to turn this into a repeatable control that survives sampling. That means (1) scoping which assets and configuration domains matter most, (2) specifying what “good” looks like (the baseline), (3) implementing monitoring that reliably detects drift, and (4) running a workflow that closes the loop with documented outcomes.
This page gives requirement-level implementation guidance aligned to C2M2 v2.1 ASSET-2.B, with a bias toward evidence, auditability, and fast operational rollout. (Cybersecurity Capability Maturity Model v2.1)
Regulatory text
Requirement (C2M2 v2.1 ASSET-2.B, MIL2): “Configuration baselines are monitored for deviations, and deviations are addressed.” (Cybersecurity Capability Maturity Model v2.1)
Operator interpretation:
You must (a) establish or adopt configuration baselines for in-scope assets, (b) monitor those assets for baseline drift (unauthorized or unplanned changes), and (c) take action when drift occurs. “Addressed” means each deviation has a recorded disposition: reverted/remediated, formally approved as an exception, or accepted as risk with an accountable approver.
What “monitoring” must include to be defensible:
- Detection coverage (what assets/config domains are monitored, what is excluded, and why)
- Timely review (who reviews signals, how often, and how they escalate)
- Closure evidence (tickets, change records, exception approvals, and post-fix verification)
Source context: C2M2 is a maturity model used by energy sector and critical infrastructure organizations to assess cybersecurity capabilities within a defined scope. (Cybersecurity Capability Maturity Model v2.1; DOE C2M2 program)
Plain-English requirement
Keep a known-good configuration for important systems, watch for anything that deviates from it, and fix or formally approve every deviation. If you cannot show the baseline, the detection, and the follow-through, you should assume you will fail an assessment interview or evidence review for this capability.
Who it applies to (entity and operational context)
Entity types: Energy sector organizations and other critical infrastructure operators using C2M2 to assess capability maturity. (Cybersecurity Capability Maturity Model v2.1)
Operational context (scope matters):
- Applies to the business unit, function, IT/OT environment, or program scope where you have adopted C2M2 and are measuring maturity. (Cybersecurity Capability Maturity Model v2.1)
- Most teams implement configuration monitoring across a tiered asset set:
  - Tier 1: OT/ICS systems that impact safety or reliability, SCADA-related systems, identity systems, core network infrastructure
  - Tier 2: Server and endpoint fleets, cloud control planes, security tooling infrastructure
  - Tier 3: Lower criticality systems where monitoring is sampled or periodic
What you actually need to do (step-by-step)
Step 1 — Define what “baseline” means in your environment
Create a baseline specification for each in-scope asset class or platform. You do not need a separate baseline document per host if you can define baselines as code, templates, or policy profiles.
Minimum baseline content (make it auditable):
- Asset class / platform (Windows servers, Linux servers, network devices, Kubernetes, cloud accounts, OT endpoints)
- Configuration domains you care about (examples below)
- Baseline source of truth (golden image, CIS-aligned profile, Group Policy, IaC repo, MDM profile, network template)
- Ownership (system owner + security owner)
- Change authority (who can approve baseline changes)
Common configuration domains to baseline:
- Identity and access settings (local admins, service accounts, MFA enforcement where applicable)
- Logging and telemetry settings (audit logging enabled, forwarding configured)
- Network exposure (firewall rules, open management ports, remote access tooling)
- Patch/update configuration (auto-update settings, maintenance windows)
- Security tooling posture (EDR present, tamper protection enabled)
- OT-specific: remote engineering access settings, vendor remote support pathways, protocol gateways (define within your OT governance model)
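The baseline content above can be captured as structured data rather than a per-host document. The following is a minimal sketch of what an auditable baseline specification might look like; the class name, field names, and example values are illustrative assumptions, not a C2M2-mandated schema.

```python
from dataclasses import dataclass, field

@dataclass
class BaselineSpec:
    """One baseline record per asset class, with ownership and change authority."""
    asset_class: str        # e.g., "linux-servers"
    source_of_truth: str    # golden image, CIS-aligned profile, GPO, IaC repo, MDM profile
    system_owner: str
    security_owner: str
    change_authority: str   # who approves baseline changes
    config_domains: dict = field(default_factory=dict)

# Hypothetical example for a Linux server fleet.
linux_baseline = BaselineSpec(
    asset_class="linux-servers",
    source_of_truth="git://iac-repo/baselines/linux.yaml",  # illustrative repo path
    system_owner="platform-team",
    security_owner="secops",
    change_authority="change-advisory-board",
    config_domains={
        "logging": {"auditd_enabled": True, "forwarding_configured": True},
        "access": {"root_ssh_login": False, "mfa_required": True},
        "security_tooling": {"edr_installed": True, "tamper_protection": True},
    },
)
```

Keeping the specification in version control gives you the baseline-change approval trail (pull requests and reviews) for free.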
Step 2 — Instrument monitoring to detect drift (not just “changes”)
Monitoring must detect deviations from baseline, not merely record that “something changed.”
Implementation options (choose what fits your stack):
- Config management / desired state tools (enforce and report drift)
- Endpoint management / MDM (policy compliance reporting)
- Cloud security posture management (control-plane configuration drift)
- Network configuration monitoring (device config diffs)
- File integrity monitoring for high-value configuration files (where appropriate)
Operational rule: map each monitored signal to a baseline requirement. If you cannot describe the baseline control the alert corresponds to, the alert will become noise and won’t survive evidence review.
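The operational rule above can be made concrete: every deviation record should carry the baseline requirement it maps to. Below is a minimal drift-detection sketch under assumed dictionary shapes for the baseline and the observed snapshot; real tools will have their own schemas.

```python
def detect_drift(baseline: dict, observed: dict, asset_id: str) -> list[dict]:
    """Compare an observed configuration snapshot against the baseline and
    emit one deviation record per mismatched setting."""
    deviations = []
    for domain, settings in baseline.items():
        for key, expected in settings.items():
            actual = observed.get(domain, {}).get(key)
            if actual != expected:
                deviations.append({
                    "asset": asset_id,
                    # Traceability: every alert names the baseline control it maps to.
                    "baseline_requirement": f"{domain}.{key}",
                    "expected": expected,
                    "actual": actual,
                })
    return deviations

baseline = {"logging": {"auditd_enabled": True}, "access": {"root_ssh_login": False}}
observed = {"logging": {"auditd_enabled": False}, "access": {"root_ssh_login": False}}
found = detect_drift(baseline, observed, "srv-001")
# one deviation: logging.auditd_enabled expected True, actual False
```

A signal that cannot be expressed as a `baseline_requirement` key is exactly the kind of alert that turns into noise.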
Step 3 — Define deviation severity and required response paths
Create a simple decision matrix so responders and auditors see consistent handling.
| Deviation type | Example | Default handling | Evidence you must produce |
|---|---|---|---|
| Unauthorized change | Admin group modified outside change window | Treat as incident or high-priority ticket | Alert record, ticket, investigation notes, closure verification |
| Unplanned drift | Agent removed, logging disabled | Remediate and verify | Ticket, before/after state, validation output |
| Planned change not reflected in baseline | New approved port for service | Update baseline via change control | Change record, baseline update PR, approval |
| Business exception | Legacy OT constraint | Document exception with compensating controls | Exception request, risk acceptance, expiration/review |
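The decision matrix above can also live as a lookup table so that responders and automation apply the same handling. The category keys and evidence labels below are illustrative restatements of the table, not fixed terminology.

```python
# Default handling and required evidence per deviation type.
HANDLING = {
    "unauthorized_change": {
        "default_action": "open_incident",
        "evidence": ["alert_record", "ticket", "investigation_notes", "closure_verification"],
    },
    "unplanned_drift": {
        "default_action": "remediate_and_verify",
        "evidence": ["ticket", "before_after_state", "validation_output"],
    },
    "planned_change": {
        "default_action": "update_baseline_via_change_control",
        "evidence": ["change_record", "baseline_update_pr", "approval"],
    },
    "business_exception": {
        "default_action": "document_exception",
        "evidence": ["exception_request", "risk_acceptance", "review_expiration"],
    },
}

def route(deviation_type: str) -> dict:
    # Unknown types fail closed: treat them as unauthorized until triaged.
    return HANDLING.get(deviation_type, HANDLING["unauthorized_change"])
```

Failing closed on unknown categories is a design choice worth keeping: it forces triage rather than silently downgrading a deviation.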
Step 4 — Connect monitoring to change management (close the “false positive” loop)
A large share of deviations are legitimate changes. Your process must reconcile monitoring with authorized change activity.
Practical workflow:
- Alert triggers deviation record (in SIEM/SOAR, ticketing system, or GRC workflow).
- Analyst checks for a matching approved change record.
- If authorized: update baseline if needed, or mark as expected with trace to approval.
- If unauthorized: remediate, investigate, and escalate per incident criteria.
- Verify end state matches baseline (or approved exception).
- Close ticket with links to evidence.
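The reconciliation step in the workflow above can be sketched as a single function: match the deviation against approved change records and return a disposition with a traceable link. The `change_records` shape is an assumption for illustration.

```python
def reconcile(deviation: dict, change_records: list[dict]) -> dict:
    """Return a disposition for a deviation: 'expected' if an approved change
    record covers it, otherwise 'unauthorized'."""
    for change in change_records:
        if (change["asset"] == deviation["asset"]
                and change["status"] == "approved"
                and deviation["baseline_requirement"] in change["expected_changes"]):
            return {
                "disposition": "expected",
                "change_ref": change["id"],  # traceable link to the approval
                "next_step": "update_baseline_if_new_normal",
            }
    # No matching approval: remediate, investigate, escalate per incident criteria.
    return {"disposition": "unauthorized", "next_step": "remediate_investigate_escalate"}

changes = [{"id": "CHG-1042", "asset": "srv-001", "status": "approved",
            "expected_changes": ["network.open_ports"]}]
dev = {"asset": "srv-001", "baseline_requirement": "network.open_ports"}
result = reconcile(dev, changes)
```

Whatever tooling you use, the output auditors want is the same: an explicit disposition plus the change reference (or the absence of one) written into the ticket.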
Step 5 — Run a review cadence and keep “proof of review”
C2M2 maturity assessments and audits often turn on operating evidence. You need more than tool screenshots.
Minimum operating evidence pattern (recommended):
- A queue of deviations (alerts or reports)
- A named reviewer or on-call function
- A documented review action (comment, disposition, ticket link)
- A closed-loop outcome for each sampled deviation
This aligns with recommended practices: document the systems, events, thresholds, and retention settings, and keep review evidence plus follow-up tickets and escalation records. (Cybersecurity Capability Maturity Model v2.1)
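One way to enforce the evidence pattern above is a closure gate: a deviation record can only be closed when the fields auditors sample for are present. This is a minimal sketch; the field names are illustrative, not prescribed by C2M2.

```python
# Fields that must be populated before a deviation record may be closed.
REQUIRED_FOR_CLOSURE = ("reviewer", "disposition", "ticket_link", "closed_at")

def can_close(record: dict) -> bool:
    """True only if every closure-evidence field is present and non-empty."""
    return all(record.get(field) for field in REQUIRED_FOR_CLOSURE)

complete = {"id": "DEV-20240301-007", "reviewer": "a.analyst",
            "disposition": "remediated", "ticket_link": "TICK-5512",
            "closed_at": "2024-03-02T14:05:00Z"}
incomplete = {"id": "DEV-20240301-008", "reviewer": "a.analyst"}
```

Implemented as a required-fields rule in your ticketing system, this guarantees every sampled deviation shows a named reviewer, a disposition, and a closed-loop outcome.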
Required evidence and artifacts to retain
Store evidence in a way that supports sampling (by date, asset, and deviation type).
Control design artifacts
- Configuration monitoring standard or procedure (scope, roles, what is monitored)
- Baseline definitions (templates, policies, golden images, IaC repos, or baseline documents)
- Monitoring architecture map (what tools cover what asset classes)
- Alert catalog / rule mapping (event → baseline requirement → severity)
Operating effectiveness artifacts
- Deviation logs or compliance reports (exportable)
- Tickets for deviations with timestamps, owner, actions taken, and closure notes
- Change records linked to “expected” deviations
- Exception/risk acceptance records with approver, rationale, compensating controls, and review trigger
- Evidence of review (queue screenshots with metadata, meeting notes, or attestation records)
Retention
Set retention to match your broader security logging and GRC evidence approach; document the retention setting in the monitoring procedure so you can explain it during an assessment. (Cybersecurity Capability Maturity Model v2.1)
Common exam/audit questions and hangups
Expect these lines of inquiry:
- “Show me your baselines for the assets in scope. Where are they defined and who approves changes?”
- “How do you know a change was unauthorized versus planned?”
- “Pick three recent deviations. Walk me from detection to closure.”
- “What happens when deviations repeat? Do you trend or trigger problem management?”
- “How do you handle OT constraints or vendor-managed systems?”
Hangups that cause findings:
- Baselines exist but are not mapped to monitoring signals.
- Monitoring exists but there is no consistent triage workflow or closure evidence.
- Exceptions exist in email or chat, not in a controlled register.
Frequent implementation mistakes (and how to avoid them)
- Monitoring everything, then ignoring it.
  Fix: start with critical assets and a narrow set of high-signal deviations. Expand coverage after the workflow works.
- No baseline governance.
  Fix: require baseline change approvals through the same mechanism you use for production change control.
- Treating “planned change” as “no action required.”
  Fix: planned changes still require reconciliation: update the baseline or document why the deviation is expected.
- Relying on screenshots instead of records.
  Fix: make tickets and exports the system of record. Screenshots support, but they do not scale for sampling.
- Third-party managed systems with unclear responsibility.
  Fix: document shared responsibility. If a third party administers a system, require configuration monitoring data, reporting, or attestation as part of the contract and governance process.
Enforcement context and risk implications
No C2M2-specific public enforcement cases are provided in the source material. (Cybersecurity Capability Maturity Model v2.1; DOE C2M2 program)
Risk still matters operationally: incomplete monitoring or lack of review allows suspicious activity and control failures to go undetected, and leaves you without operating evidence for internal control testing, audits, customer diligence, or regulator review. (Cybersecurity Capability Maturity Model v2.1)
Practical 30/60/90-day execution plan
First 30 days (stand up the control)
- Confirm C2M2 assessment scope and produce an in-scope asset list by class (IT, cloud, OT as applicable). (Cybersecurity Capability Maturity Model v2.1)
- Define baseline owners and baseline change approval path.
- Publish a “baseline minimums” standard for each asset class (even if initial baselines are thin).
- Identify monitoring tool coverage and gaps; document systems, events, thresholds, and retention settings. (Cybersecurity Capability Maturity Model v2.1)
- Create the deviation workflow in your ticketing system: categories, severities, required fields, and closure criteria.
Next 60 days (make it operate)
- Turn on drift/compliance reporting for priority asset classes.
- Start weekly (or operationally appropriate) deviation review and require ticket linkage for each material deviation.
- Implement reconciliation with change management: responders must link a change record or treat as unauthorized.
- Stand up an exception register for “can’t comply” systems (common in OT and legacy environments) with approvals and review triggers.
By 90 days (make it auditable and scalable)
- Run an internal sampling exercise: select deviations and prove end-to-end closure evidence.
- Add trend reporting for repeat deviations and “top drift causes” to drive engineering fixes.
- Expand coverage to additional asset classes and refine severity thresholds.
- If you use Daydream, centralize baseline definitions, deviation tickets, and exceptions so you can answer sampling requests fast without chasing evidence across tools.
Frequently Asked Questions
Do we need a separate configuration baseline document for every system?
No. Auditors care that the baseline is explicit and controlled. You can define baselines through templates, configuration-as-code, MDM profiles, or standardized build documents, as long as you can show what applies to a given asset and who approves changes.
What counts as “addressing” a deviation?
“Addressed” means you recorded a disposition and acted on it: remediation back to baseline, an approved baseline update through change control, or a documented exception/risk acceptance with an accountable approver. (Cybersecurity Capability Maturity Model v2.1)
How do we handle planned changes that trigger monitoring alerts?
Require responders to link the approved change record to the deviation ticket. If the change is now the new normal, update the baseline through controlled approval so the same deviation does not recur.
What if a third party manages the system and we can’t instrument monitoring ourselves?
Document shared responsibility and require reporting or attestation that covers baseline drift and deviation handling. Track received reports, meetings, and follow-up tickets as your operating evidence.
How should we treat OT/ICS constraints where baselines can’t be enforced automatically?
Define OT-appropriate baselines (including allowed variances), monitor what you can safely monitor, and manage unavoidable deviations through time-bound exceptions with compensating controls and explicit ownership.
What evidence is most persuasive in an assessment?
A tight chain: baseline definition → monitoring output showing a deviation → ticket with triage and decision → proof of remediation or approved exception → verification of end state. Keep this evidence searchable by date, asset, and deviation type. (Cybersecurity Capability Maturity Model v2.1)
Authoritative Sources
- Cybersecurity Capability Maturity Model (C2M2) Version 2.1
- U.S. Department of Energy C2M2 program
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream