Asset, change, and configuration management

The asset, change, and configuration management requirement means you must maintain an accurate asset inventory across the lifecycle, define secure configuration baselines, and control changes through documented approval, testing, and rollback. To operationalize it fast, standardize how assets are identified, how “baseline” is defined, and how changes are requested, approved, implemented, and evidenced.

Key takeaways:

  • You need a lifecycle asset inventory tied to ownership, criticality, and environment (IT/OT/cloud).
  • You need hardened configuration baselines plus drift detection and exception handling.
  • You need a change process that produces audit-ready evidence: request, approval, implementation, validation, and rollback.

Compliance teams usually fail this requirement for one reason: the organization “does” asset management and change management, but cannot prove it end-to-end for the systems that matter most. C2M2’s expectation is operational discipline: assets are known, their configurations are controlled, and changes are deliberate rather than accidental.

For a CCO or GRC lead, the fastest path is to treat this as a single control system with three linked registers: (1) an authoritative asset inventory, (2) configuration baselines (including secure settings and approved versions), and (3) a change record that explains any deviation from baseline. If you can reliably answer “what do we have, how is it supposed to be configured, and who approved changes,” you can pass most examinations and reduce real incident risk.

This page translates the asset, change, and configuration management requirement into a practical implementation package: scope, steps, artifacts, audit questions, common mistakes, and a 30/60/90-day execution plan. Source: DOE Cybersecurity Capability Maturity Model (C2M2) (DOE C2M2).

Regulatory text

C2M2 requirement (excerpt): “Maintain asset lifecycle and secure configuration management discipline.” 1

What the operator must do:
You must run a repeatable, documented process that (a) identifies and tracks assets through acquisition, deployment, operation, and retirement, (b) defines secure configuration baselines for in-scope technologies, and (c) governs changes so that only authorized, tested, and recorded modifications reach production. Evidence matters: you should be able to show records that connect an asset to its baseline and to approved change events over time.

Plain-English interpretation (what examiners expect you to prove)

Examiners and assessors will not accept “we have a ticketing system” or “IT handles that.” They will look for objective proof that:

  • Asset lifecycle is managed: you can enumerate in-scope assets and show onboarding and offboarding/retirement.
  • Configurations are controlled: you maintain baselines for critical technology stacks and can detect and address drift.
  • Changes are governed: you can show approvals, testing/validation, separation of duties where feasible, and the ability to roll back.

A practical test: pick one critical system and ask for the list of components, their approved versions and key settings, and the last several changes with approvers and validation results. If that package is hard to assemble, your program is not operational.
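The practical test above can be sketched as a join across the three registers. This is a minimal illustration, not a prescribed implementation; the field names and data shapes are assumptions for the example.

```python
# Sketch of the "one critical system" test: pull the evidence packet
# (asset record, baseline, recent changes) from three registers.
# Field names and data shapes are illustrative, not mandated by C2M2.
def evidence_packet(asset_id, inventory, baselines, changes, n_changes=5):
    """Return the audit packet for one asset, or None if the asset
    is not even in the inventory (the test fails immediately)."""
    asset = inventory.get(asset_id)
    if asset is None:
        return None
    return {
        "asset": asset,
        "baseline": baselines.get(asset.get("baseline_id")),
        "recent_changes": [c for c in changes
                           if c.get("asset_id") == asset_id][-n_changes:],
    }

inventory = {"SCADA-01": {"asset_id": "SCADA-01", "owner": "ot-team",
                          "baseline_id": "BL-7"}}
baselines = {"BL-7": {"version": "1.2", "approved_by": "ot-lead"}}
changes = [
    {"id": "CHG-101", "asset_id": "SCADA-01", "approver": "ot-lead"},
    {"id": "CHG-102", "asset_id": "HMI-03", "approver": "it-lead"},
]

packet = evidence_packet("SCADA-01", inventory, baselines, changes)
print(packet["baseline"]["version"])  # the approved baseline version
```

If this lookup cannot be scripted (or at least assembled by hand in under an hour), the registers are not linked tightly enough to survive an examination.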

Who it applies to (entity and operational context)

C2M2 is commonly applied in energy sector organizations and critical infrastructure operators 1. Operationally, this requirement applies wherever the organization depends on technology that could affect:

  • reliability or safety (OT/ICS, field devices, substations, plant networks),
  • business operations (enterprise IT, identity, endpoints),
  • regulated data or system integrity (billing, customer systems, market operations),
  • third-party managed environments (SaaS, managed service providers, OEM remote support).

Scope guidance for serious operators: prioritize systems that are (1) externally accessible, (2) safety/reliability-impacting, (3) privileged-path systems (identity, jump hosts), and (4) high-change velocity (cloud, CI/CD, endpoint fleets).

What you actually need to do (step-by-step)

Step 1: Set scope and ownership

  1. Define “in-scope asset classes.” Include IT, OT, cloud resources, network gear, applications, and key third-party connections.
  2. Assign control ownership. Name an accountable owner for asset inventory, configuration baselines, and change governance (often split across IT/OT, with GRC overseeing).
  3. Define “production” and environments. Audits get stuck when “prod” is ambiguous.

Deliverable: a one-page standard defining scope, roles, and system boundaries.

Step 2: Build an authoritative asset inventory (with lifecycle)

  1. Choose a system of record (CMDB, asset database, or tightly governed spreadsheet as an interim measure).
  2. Define required asset fields (minimum set):
    • unique asset ID, name, type/class, environment (prod/non-prod), owner, support group,
    • location (logical/physical), criticality tier, data/system impact,
    • connectivity attributes (internet-facing, remote access paths),
    • lifecycle state (planned, active, retired) and dates.
  3. Ingest from discovery sources where possible (network discovery, cloud inventory, endpoint management, OT asset tools).
  4. Create onboarding/offboarding triggers. Tie inventory updates to procurement, provisioning, and decommission workflows.
  5. Reconcile and resolve conflicts (duplicates, stale records). Track exceptions explicitly.

Control check: you can produce an export of in-scope assets and defend why it’s complete.
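The control check above can be partially automated with a minimum-field completeness check over the inventory export. The sketch below assumes a list-of-records export; the field names mirror the Step 2 list but are otherwise illustrative.

```python
# Sketch of a minimum-field completeness check over an inventory export.
# REQUIRED_FIELDS mirrors the Step 2 field list; names are illustrative.
REQUIRED_FIELDS = ["asset_id", "name", "asset_class", "environment",
                   "owner", "criticality", "lifecycle_state"]

def incomplete_records(assets):
    """Return (asset_id, missing_fields) for records failing the check."""
    gaps = []
    for record in assets:
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing:
            gaps.append((record.get("asset_id", "<no id>"), missing))
    return gaps

inventory = [
    {"asset_id": "SRV-001", "name": "jump-host-1", "asset_class": "server",
     "environment": "prod", "owner": "infra", "criticality": "high",
     "lifecycle_state": "active"},
    {"asset_id": "SRV-002", "name": "legacy-db", "asset_class": "server",
     "environment": "prod", "owner": "", "criticality": "high",
     "lifecycle_state": "active"},
]

print(incomplete_records(inventory))  # SRV-002 has no owner
```

A check like this run on a schedule turns "defend why it's complete" from an argument into a trend line.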

Step 3: Define secure configuration baselines

  1. Select baseline types: operating systems, network devices, IAM configurations, OT controller configurations, critical applications, and golden images.
  2. Document baseline content: include approved versions, security-relevant settings, required agents/logging, encryption settings where applicable, and approved services/ports.
  3. Establish an exception process: how teams request deviations, who approves, and how compensating controls are documented.
  4. Implement drift detection: periodic checks against baseline (tool-based where possible), plus a workflow to remediate drift or accept it as an approved exception.

Practical tip: start with the “highest blast radius” baselines: identity systems, remote access/jump infrastructure, and perimeter devices.
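Where no drift-detection tool is in place yet, the check in item 4 can be sketched as a simple diff of observed settings against the versioned baseline, with approved exceptions filtered out. The settings shown are examples, not a hardening standard.

```python
# Sketch of tool-agnostic drift detection: diff observed settings
# against a baseline, ignoring approved exceptions. Setting names
# are examples only, not a recommended hardening standard.
def detect_drift(baseline, observed, approved_exceptions=()):
    """Return settings deviating from baseline without an exception."""
    drift = {}
    for setting, expected in baseline.items():
        actual = observed.get(setting)
        if actual != expected and setting not in approved_exceptions:
            drift[setting] = {"expected": expected, "actual": actual}
    return drift

baseline = {"ssh_protocol": 2, "password_auth": False, "log_forwarding": True}
observed = {"ssh_protocol": 2, "password_auth": True, "log_forwarding": True}

drift = detect_drift(baseline, observed)
print(drift)  # password_auth deviates with no approved exception
```

The important design choice is that exceptions suppress findings only when they are recorded; anything else surfaces as drift to remediate.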

Step 4: Implement change management that produces evidence

  1. Standardize change types: standard (pre-approved), normal (reviewed), emergency (post-implementation review).
  2. Define change record minimums:
    • what is changing and why (risk/impact),
    • affected assets (must reference inventory IDs),
    • security impact assessment (brief but explicit),
    • approvals (including security/OT where relevant),
    • test/validation plan and results,
    • rollback plan and confirmation.
  3. Enforce implementation discipline: no change without a record, except defined emergency process with tight retrospective review.
  4. Tie changes to configuration baselines: a change updates the baseline or creates an approved exception; otherwise you create permanent “unknown drift.”

Evidence goal: a traceable chain from asset → baseline → approved change → validation.
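The record minimums and the baseline-linkage rule above can be expressed as a pre-implementation gate on the change ticket. The field names below are hypothetical; map them to whatever your change tool actually calls them.

```python
# Sketch of the Step 4 minimums as a pre-implementation gate: a change
# must reference a known asset ID, carry approval/validation/rollback
# fields, and link to a baseline update or an approved exception.
# Field names are assumptions for illustration.
def change_record_issues(change, inventory_ids):
    issues = []
    if change.get("asset_id") not in inventory_ids:
        issues.append("asset_id not in inventory")
    for field in ("reason", "approver", "test_plan", "rollback_plan"):
        if not change.get(field):
            issues.append("missing " + field)
    # Without a baseline update or approved exception, the change
    # becomes permanent "unknown drift".
    if not change.get("baseline_version") and not change.get("exception_id"):
        issues.append("no baseline update or approved exception")
    return issues

change = {"asset_id": "FW-01", "reason": "open port for vendor",
          "approver": "sec-lead", "test_plan": "ruleset dry run",
          "rollback_plan": "restore saved config"}

print(change_record_issues(change, inventory_ids={"FW-01", "SRV-001"}))
# flags the missing baseline/exception linkage
```

Running a gate like this at ticket submission, rather than at audit time, is what makes the asset → baseline → change chain cheap to evidence.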

Step 5: Add governance, reporting, and escalation

  1. Define KPIs that support oversight and can be computed from your registers: inventory completeness trend, aged exceptions, unauthorized change findings, drift backlog, emergency change review status.
  2. Run a monthly control review with IT/OT/security leadership: top exceptions, repeated drift, change failures, and systemic root causes.
  3. Escalate chronic noncompliance through risk acceptance or corrective action plans.
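Two of the KPIs listed in item 1 can be computed directly from the registers themselves. The 90-day threshold and field names below are illustrative choices, not requirements.

```python
# Sketch of two Step 5 KPIs: inventory completeness and aged exceptions.
# The 90-day review threshold and field names are illustrative choices.
from datetime import date

def completeness_pct(assets, required_fields):
    """Percentage of records with every required field populated."""
    if not assets:
        return 0.0
    complete = sum(1 for a in assets
                   if all(a.get(f) for f in required_fields))
    return round(100.0 * complete / len(assets), 1)

def aged_exception_ids(exceptions, today, max_age_days=90):
    """Exceptions open longer than the review threshold."""
    return [e["id"] for e in exceptions
            if (today - e["opened"]).days > max_age_days]

assets = [{"asset_id": "A1", "owner": "it"},
          {"asset_id": "A2", "owner": ""}]
exceptions = [{"id": "EX-1", "opened": date(2025, 1, 10)},
              {"id": "EX-2", "opened": date(2025, 5, 20)}]

print(completeness_pct(assets, ["asset_id", "owner"]))         # 50.0
print(aged_exception_ids(exceptions, today=date(2025, 6, 1)))  # ['EX-1']
```

KPIs derived from the registers keep the monthly review honest: the number reported is the number the data supports.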

Step 6: Extend to third parties (without boiling the ocean)

Where third parties manage assets or changes (MSP, OEM, SaaS, colocation), require:

  • asset and configuration visibility appropriate to your risk,
  • change notification/approval rights for high-impact systems,
  • evidence on request (change logs, patch status, configuration standards).

This is where teams often discover they cannot prove control operation for outsourced environments. Put it in contracts and operational runbooks.

Required evidence and artifacts to retain (audit-ready)

Use this list as your minimum evidence set.

  • Asset inventory export (in-scope). Proves: you know what you run. Good: fields populated, ownership clear, lifecycle states current.
  • Asset onboarding/offboarding records. Proves: lifecycle control. Good: tickets/workflows show adds, moves, retires.
  • Configuration baseline standards. Proves: secure configuration discipline. Good: baselines versioned, dated, approved by accountable owners.
  • Drift/scan reports or attestations. Proves: baseline enforcement. Good: findings tracked to remediation or approved exception.
  • Exception register. Proves: controlled deviations. Good: business justification, compensating controls, expiry/review dates.
  • Change tickets (sample set). Proves: authorized change governance. Good: approvals, testing, validation, and rollback evidence included.
  • Emergency change log + PIRs. Proves: controlled urgency. Good: clear criteria, timely retrospective review, corrective actions.
  • Access control for change tools. Proves: separation of duties. Good: admin rights limited and audit logs enabled.

Common exam/audit questions and hangups

Expect these questions, and prepare standard evidence bundles.

  1. “How do you know the inventory is complete?”
    Hangup: discovery sources don’t reconcile. Prepare your reconciliation method and exception handling.

  2. “Show me the baseline for this critical system and the last changes.”
    Hangup: baseline exists in a wiki, changes exist in a ticket tool, neither references the other. Fix with asset IDs and baseline versioning.

  3. “How do you prevent unauthorized changes?”
    Hangup: approvals happen in chat. Move approvals into the system of record or capture approvals as immutable records.

  4. “How do you handle emergency changes?”
    Hangup: emergency becomes a loophole. Define criteria, require retrospective approval and root cause.

  5. “How do third parties fit into this control?”
    Hangup: “they manage it.” You still need oversight and evidence access, aligned to risk.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: CMDB as a graveyard. Records go stale.
    Avoidance: tie inventory updates to provisioning and decommission workflows; assign record owners and set internal SLAs for keeping entries current.

  • Mistake: Baselines without enforcement. A PDF standard does not control reality.
    Avoidance: implement drift checks and create an exception register with expirations and reviews.

  • Mistake: Change records that omit security impact. Teams treat changes as pure availability work.
    Avoidance: add a short mandatory security impact section and route high-risk changes to security/OT review.

  • Mistake: No linkage between asset, baseline, and change. Audits fail on traceability.
    Avoidance: require asset IDs on change tickets and baseline identifiers in implementation notes.

  • Mistake: Shadow change paths. Engineers change configs directly without records.
    Avoidance: restrict privileged access; monitor admin actions; require post-facto reconciliation when direct access is necessary.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not list enforcement actions.

Operational risk is still straightforward: weak asset inventory and uncontrolled change are repeat drivers of outages, security incidents, and inability to recover quickly. From a compliance standpoint, the most common “finding” pattern is insufficient evidence: the organization cannot demonstrate disciplined control operation across the lifecycle for in-scope systems 1.

Practical 30/60/90-day execution plan

First 30 days: make the program auditable for a narrow scope

  • Define in-scope asset classes and name accountable owners.
  • Stand up an authoritative inventory (even if interim) and populate it for the most critical systems.
  • Publish baseline templates for your top technology stacks (identity, remote access, perimeter, core OT network segments).
  • Update change ticket templates to require asset ID, approvals, testing/validation, and rollback.

Success condition: you can produce a complete evidence packet for a small set of critical assets.

Days 31–60: connect the systems and start enforcement

  • Integrate inventory identifiers into change tooling (required field).
  • Start drift checks for at least one baseline category and track findings to closure or exception.
  • Create an exception register with approvals and review dates.
  • Formalize emergency change criteria and post-implementation review workflow.

Success condition: drift and exceptions are visible and governed, not hidden in emails.

Days 61–90: expand scope and harden governance

  • Expand inventory coverage to additional environments (cloud accounts, endpoint fleets, OT assets where feasible).
  • Add reporting for governance meetings: aged exceptions, repeat drift, emergency change trends.
  • Put third-party requirements into contracts or operating procedures for managed environments.
  • Run an internal audit-style walkthrough: pick random assets and reconstruct their baseline and change history.

Success condition: traceability works at scale, and leadership sees clear risk decisions.

How Daydream fits (without creating tool dependency)

If you manage multiple frameworks and need consistent evidence packages, Daydream can help you map this asset, change, and configuration management requirement to your internal controls, assign ownership, and track the artifacts auditors ask for. The operational work still lives in your CMDB, change tooling, and configuration systems; Daydream helps you keep the compliance narrative and evidence index tight.

Frequently Asked Questions

What counts as an “asset” for this asset, change, and configuration management requirement?

Treat any technology component that supports delivery of a critical service as an asset, including IT, OT, cloud resources, network devices, applications, and critical third-party connections. If it can be configured or changed and it affects risk, include it.

We have an inventory, but it’s incomplete. Is that automatically noncompliant?

The requirement expects lifecycle discipline, so gaps create compliance exposure. Start by scoping critical assets first, document known gaps as exceptions with a remediation plan, and show progress through governance records.

Do we need a formal CMDB tool to meet the requirement?

No specific tool is mandated in the provided C2M2 excerpt 1. You do need an authoritative system of record with ownership, lifecycle status, and traceability to changes and baselines.

How do we handle emergency changes without failing audits?

Define what qualifies as emergency, require a minimal pre-approval when feasible, and always complete a documented post-implementation review. Auditors accept emergency paths when they are controlled and not used as a default workflow.

How should we treat configuration “drift” that operations insists is necessary?

Convert drift into an approved exception with justification, compensating controls, and a review date, or update the baseline if the new state is the intended standard. Leaving drift unmanaged creates both audit and incident response problems.

Our third party manages key systems. How do we show evidence?

Require evidence access in contracts and runbooks: change logs, baseline statements, and confirmation of approvals for high-impact changes. If you cannot obtain evidence, record that as a third-party risk issue and drive contractual or operational remediation.

Footnotes

  1. DOE C2M2

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream