CM-6(1): Automated Management, Application, and Verification
CM-6(1): Automated Management, Application, and Verification requires you to use automation to manage approved configuration settings, push (apply) those settings to in-scope systems, and continuously verify the systems remain compliant with the approved baseline. Operationally, you need a defined baseline per system class, automated enforcement, automated drift detection, and audit-ready evidence of both deployment and verification. 1
Key takeaways:
- Automation must cover three actions: manage the baseline, apply it to systems, and verify compliance on an ongoing basis. 1
- “Evidence” is the control: you need logs, reports, and change records that prove settings were enforced and drift was detected and handled.
- Scope clarity matters: define which configuration settings and which assets are governed, then map them to tools and owners.
CM-6(1) is the difference between “we have a hardening standard” and “we can prove every in-scope system is hardened and stays hardened.” The requirement focuses on automation because manual configuration reviews fail at scale: people miss settings, systems drift after patches, and exceptions pile up without a reliable record. Your job as a Compliance Officer, CCO, or GRC lead is to convert this into an assessable operating model that security engineering can run day-to-day.
This page is written to help you operationalize the CM-6(1) (Automated Management, Application, and Verification) requirement quickly: what to put in scope, how to define configuration baselines, what “apply” and “verify” look like in practice, and what evidence auditors usually ask for. It also calls out the most common failure mode for CM-6(1): having tools in place but not being able to show a clean chain of custody from approved baseline → automated enforcement → automated verification results → remediation or approved exception.
Primary source references are NIST SP 800-53 Rev. 5 and the OSCAL control catalog. 2
Regulatory text
Requirement (excerpt): “Manage, apply, and verify configuration settings for [Assignment: organization-defined system components] using [Assignment: organization-defined automated mechanisms].” 1
Operator interpretation (what you must do)
To satisfy CM-6(1), you must:
- Manage configuration settings: maintain approved baselines (by platform/system type), including version control and approvals.
- Apply configuration settings: automatically enforce those baselines onto in-scope assets (new builds and existing systems).
- Verify configuration settings: automatically check systems for compliance, identify drift, and record results.
All three elements must be implemented with automated mechanisms you define and can defend during an assessment. 1
Plain-English requirement meaning
CM-6(1) expects a closed loop:
- You define “what good looks like” (baseline).
- Your tooling makes systems match that baseline (enforcement).
- Your tooling proves they still match over time (verification), and you act on drift.
An assessor will look for two things: technical coverage (are the right assets and settings included) and operational integrity (does the program run without heroics, and can you prove it ran).
Who it applies to (entity and operational context)
CM-6(1) commonly applies to:
- Federal information systems and the agencies operating them. 3
- Contractor systems handling federal data, including environments supporting federal contracts where NIST SP 800-53 controls are flowed down or used as the security baseline. 3
Operational contexts where CM-6(1) is typically assessed:
- Enterprise endpoint fleets (workstations, privileged admin workstations).
- Server infrastructure (Windows/Linux).
- Cloud workloads (IaaS instances, managed Kubernetes nodes where configurable).
- Network/security devices with centrally managed configuration (where supported).
- “Golden image” build pipelines (CI/CD and infrastructure-as-code).
What you actually need to do (step-by-step)
Step 1: Define scope and ownership (make it assessable)
Create a one-page scoping statement:
- Asset classes in scope (e.g., Windows servers, RHEL servers, macOS endpoints, container hosts).
- Environments in scope (prod, non-prod, regulated enclaves).
- Control owner (Security Engineering) and compliance owner (GRC).
- Exception owner (risk acceptance authority).
Evidence target: a CM-6(1) control statement that names the automation tools used for enforcement and for verification, plus the scope boundaries. 1
Step 2: Define baseline configuration settings (the “managed” part)
For each asset class:
- Start with an approved baseline (CIS benchmark mapping, DISA STIG mapping, or internal hardening standard). Tailor it to your environment and document any deviations from the source benchmark.
- Translate the baseline into machine-enforceable rules (policy definitions, configuration profiles, IaC modules).
- Establish a change workflow: proposed change → security review → approval → versioned release notes.
Practical requirement test: if an engineer asks “what is the approved SSH setting” or “what is the approved local admin policy,” you should have one authoritative, versioned answer.
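The “one authoritative, versioned answer” test can be sketched in code. This is a minimal illustration, not a recommended data model: the setting names, values, and version metadata below are hypothetical, and in practice the baseline would live in version control rather than inline.

```python
# Hypothetical versioned baseline: one authoritative answer per governed
# setting. Names and values are illustrative, not an official benchmark.
BASELINE = {
    "version": "2.3.0",
    "approved": "2024-05-01",
    "asset_class": "rhel-server",
    "settings": {
        "sshd.PermitRootLogin": "no",
        "sshd.PasswordAuthentication": "no",
        "local_admin.max_members": 2,
    },
}

def approved_setting(name: str):
    """Return the single approved value for a setting, or fail loudly
    if the setting is not governed by the current baseline version."""
    try:
        return BASELINE["settings"][name]
    except KeyError:
        raise KeyError(f"{name} is not governed by baseline {BASELINE['version']}")
```

The point of the sketch is the failure mode: a setting that is not in the baseline raises an error instead of returning a silent default, which is exactly the ambiguity the requirement test is meant to eliminate.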
Step 3: Implement automated application (enforcement)
Pick mechanisms appropriate to each asset class. Examples (not exhaustive):
- Endpoint configuration management (MDM, GPO equivalents, endpoint management suites).
- Server configuration management (desired state configuration, configuration management agents).
- Cloud policy enforcement (policy-as-code, image pipelines with hardened templates, guardrails).
Operationalize enforcement by:
- Enforcing on build (golden images, hardened AMIs/templates).
- Enforcing on change (config management runs after patching).
- Enforcing on join (new device enrollment triggers policy application).
Key design choice: decide whether enforcement is “preventive” (blocks drift) or “detective + corrective” (detects drift and auto-remediates). Either can work if verification and evidence are strong, but preventive enforcement tends to reduce exception volume.
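A “detective + corrective” loop can be sketched as follows. The `apply_setting` callable stands in for whatever your configuration-management agent actually exposes; this is an assumption-laden illustration, not a real tool’s API.

```python
# Minimal detective + corrective sketch. `apply_setting` is a hypothetical
# stand-in for a configuration-management agent's remediation call.
def verify_and_remediate(baseline: dict, current: dict, apply_setting):
    """Compare current settings to the baseline, remediate drift,
    and return the drift findings for the evidence record."""
    drift = []
    for name, expected in baseline.items():
        actual = current.get(name)
        if actual != expected:
            drift.append({"setting": name, "expected": expected, "actual": actual})
            apply_setting(name, expected)  # corrective action
    return drift  # retain alongside the run timestamp and baseline version

# Illustrative run: one drifted setting gets recorded and remediated.
fixed = {}
drift = verify_and_remediate(
    {"sshd.PermitRootLogin": "no", "sshd.PasswordAuthentication": "no"},
    {"sshd.PermitRootLogin": "yes", "sshd.PasswordAuthentication": "no"},
    lambda name, value: fixed.update({name: value}),
)
```

Note that the function returns the drift findings rather than discarding them: in this model, the remediation log is itself a verification evidence artifact.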
Step 4: Implement automated verification (continuous compliance)
Verification means you can automatically answer:
- Which assets are compliant today?
- Which settings are drifting, and on which assets?
- What remediation occurred, and when?
- What exceptions exist, who approved them, and when they expire?
Common verification approaches:
- Continuous configuration monitoring (CCM) scanners and posture management tools.
- Configuration compliance checks from the same enforcement platform (if it produces tamper-resistant reports).
- Cloud security posture management (CSPM) for cloud configuration settings.
Minimum operating expectation: verification produces a recurring report or dashboard export that is retained as evidence and tied back to the baseline version.
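The minimum operating expectation above can be made concrete with a small sketch of a retained verification artifact: per-asset results tied to a baseline version and a timestamp, exported in a machine-readable format. The field names are hypothetical.

```python
import json
from datetime import datetime, timezone

# Sketch of a recurring verification export: results tied back to the
# baseline version so the evidence chain is traceable. Fields are illustrative.
def build_report(baseline_version: str, results: dict) -> str:
    """results maps asset name -> bool (compliant)."""
    report = {
        "control": "CM-6(1)",
        "baseline_version": baseline_version,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "assets_checked": len(results),
        "compliant": sum(1 for ok in results.values() if ok),
        "non_compliant": sorted(a for a, ok in results.items() if not ok),
    }
    return json.dumps(report, indent=2)

report = build_report("2.3.0", {"srv-01": True, "srv-02": False, "srv-03": True})
```

Embedding the baseline version in every export is the design choice that matters: it lets an assessor tie a given compliance result to the exact baseline release it was measured against.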
Step 5: Define exception handling for non-compliant settings
CM-6(1) becomes messy without an exception process. Require:
- A documented reason (technical constraint, business need).
- Compensating controls (if applicable).
- Explicit approval by the right authority.
- Review/renewal triggers.
Keep exceptions time-bounded where possible, but choose a cadence you can run reliably.
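A time-bounded exception register can be sketched as a simple lookup where an expired exception no longer excuses drift. The entry fields and approver are hypothetical examples.

```python
from datetime import date

# Hypothetical exception register entries. An expired exception is treated
# the same as no exception: the asset counts as drifted.
EXCEPTIONS = [
    {
        "asset": "srv-legacy-01",
        "setting": "sshd.PasswordAuthentication",
        "approved_by": "CISO",
        "expires": date(2024, 12, 31),
        "reason": "vendor appliance does not support key-based auth",
    },
]

def active_exception(asset: str, setting: str, today: date) -> bool:
    """True only if a matching, unexpired exception exists for this
    asset/setting pair."""
    return any(
        e["asset"] == asset and e["setting"] == setting and e["expires"] >= today
        for e in EXCEPTIONS
    )
```

Checking expiry at evaluation time, rather than relying on someone remembering to close the entry, is what keeps exceptions from becoming permanent drift.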
Step 6: Prove it works (control testing routine)
Set a repeatable internal test:
- Sample assets per class.
- Show baseline version.
- Show enforcement evidence (policy assignment, last successful run).
- Show verification output (compliance result, drift findings).
- Show remediation or approved exception.
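The sampling routine above can be sketched as a small check that every link in the evidence chain exists for each sampled asset. The evidence record keys here are illustrative placeholders for whatever your repository actually stores.

```python
import random

# Sketch of the internal control test: sample assets and flag any asset
# whose evidence chain has a gap. Record keys are illustrative.
REQUIRED_EVIDENCE = ("baseline_version", "enforcement_run", "verification_result")

def mock_audit(assets: dict, sample_size: int = 3) -> dict:
    """Return {asset: missing evidence} for sampled assets with gaps.
    An asset also fails if it is neither compliant nor covered by an
    approved exception."""
    sample = random.sample(list(assets), min(sample_size, len(assets)))
    gaps = {}
    for asset in sample:
        record = assets[asset]
        missing = [k for k in REQUIRED_EVIDENCE if not record.get(k)]
        resolved = record.get("compliant") or record.get("exception_id")
        if missing or not resolved:
            gaps[asset] = missing or ["remediation_or_exception"]
    return gaps
```

An empty result from a representative sample is the outcome you want to be able to show an assessor; any gap it surfaces is a finding to fix before the real audit does.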
If you use Daydream to manage control operations, this is where it fits naturally: map CM-6(1) to a control owner, document the procedure, and define the recurring evidence artifacts so evidence collection is consistent and audit-ready. 1
Required evidence and artifacts to retain
Keep evidence tied to the three verbs in the requirement: manage, apply, verify. 1
Baseline management (manage)
- Approved baseline documents per asset class (versioned).
- Baseline-to-technical-policy mapping (what setting is enforced by what rule).
- Change approvals (tickets/PR approvals) and release notes.
Automated application (apply)
- Tool configuration showing policy assignments and targeting logic (screenshots or exports).
- Job/run logs proving successful application (last run status, output).
- Provisioning pipeline evidence (hardened images, IaC modules, build logs).
Automated verification (verify)
- Scheduled compliance reports (exports, signed reports, or immutable logs).
- Drift findings and remediation records (tickets, auto-remediation logs).
- Exception register with approvals and scope.
Program governance
- CM-6(1) control narrative (what tools do what; which assets are in scope).
- RACI (who approves baselines, who maintains tooling, who accepts exceptions).
- Metrics definitions (even qualitative) used to track control health.
Common assessor/audit questions and hang-ups
Expect these lines of inquiry:
- “Show me the approved baseline and its version history.” Auditors want to see governance, not a static PDF.
- “How do you know systems stayed configured after patching?” This tests the verification loop.
- “What systems are out of scope and why?” Unclear scope is a frequent finding.
- “Are results tamper-resistant?” If verification output can be edited without detection, be ready to explain access controls and logging.
- “How do you handle exceptions?” “We track them in email” rarely passes.
Frequent implementation mistakes (and how to avoid them)
Mistake 1: “We have a benchmark” but no machine enforcement
Fix: translate the baseline into enforceable policies (configuration profiles, DSC, policy-as-code). Keep the mapping table as evidence.
Mistake 2: Automation exists, but verification is ad hoc
Fix: schedule verification and retain periodic exports. Treat evidence retention as part of the runbook, not an audit scramble.
Mistake 3: Tool sprawl with unclear responsibility
Fix: define a single control owner and a single evidence owner. Multiple tools are fine; fragmented ownership is not.
Mistake 4: Exceptions become permanent drift
Fix: require documented approvals and periodic review. Tie exceptions to asset inventory so exceptions do not silently expand to new systems.
Risk implications (what fails when CM-6(1) is weak)
Weak CM-6(1) increases the chance that insecure defaults, unauthorized changes, or configuration drift create exploitable conditions across many systems at once. From a governance perspective, it also creates assessment risk: you may “be doing security work” but still fail an audit because you cannot show that settings were applied and verified through automation. 1
Practical 30/60/90-day execution plan
First 30 days (foundation)
- Name the CM-6(1) control owner and document scope boundaries.
- Inventory in-scope asset classes and identify the authoritative baseline source for each.
- Pick (or confirm) the automated mechanisms for enforcement and for verification per class. 1
- Build the evidence checklist and retention location (GRC repository, ticketing system exports, immutable logging where available).
By 60 days (implementation)
- Convert the baseline into enforceable policies for the highest-risk asset class first (common starting point: servers or privileged endpoints).
- Turn on automated verification reporting and define how findings become tickets.
- Stand up the exception register with approval workflow.
- Run an internal “mock audit”: show manage/apply/verify evidence end-to-end for a small sample.
By 90 days (operationalization)
- Expand coverage to remaining asset classes and environments.
- Establish a steady-state cadence: baseline change control, enforcement monitoring, verification review, exception review.
- Produce a quarterly (or otherwise regular) CM-6(1) control health package: baseline versions, compliance outputs, top drift causes, open exceptions, and remediation trends.
If you need to accelerate audit readiness, Daydream’s most practical role is evidence orchestration: mapping CM-6(1) to an owner, documenting the runbook, and prompting recurring evidence capture so reports, logs, and approvals arrive in a consistent format. 1
Frequently Asked Questions
Does CM-6(1) require one tool for both enforcement and verification?
No. The text requires automated mechanisms for managing, applying, and verifying settings, but it does not require a single product. You do need to document which tool performs each function and how outputs are retained. 1
What counts as “automated verification” if we already run vulnerability scans?
Vulnerability scanning can support verification if it checks configuration settings and produces repeatable, retainable compliance outputs. If it only detects missing patches or CVEs, pair it with configuration compliance checks that directly measure baseline settings. 1
How do we handle cloud services where we can’t control the underlying OS configuration?
Define the baseline at the layer you control (cloud configuration, identity settings, service-level parameters) and use automated policy checks for those settings. Document the shared responsibility boundary in the CM-6(1) scope statement. 3
Can we meet CM-6(1) with scripts and cron jobs?
You can, if the scripts reliably enforce settings, verify compliance, and generate tamper-resistant logs you retain. Most teams fail here because scripts lack governance, change control, and durable evidence, so treat scripts like production code with approvals and versioning. 1
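One way a homegrown script can produce tamper-evident logs is a hash chain: each entry’s hash covers the previous entry’s hash, so any edit to retained history is detectable. This is a minimal sketch of the idea, not a substitute for write-once storage or centralized logging.

```python
import hashlib
import json

# Sketch of tamper-evident logging for a homegrown enforcement script.
# Each entry's hash covers the previous hash, so edits break the chain.
GENESIS = "0" * 64

def append_entry(log: list, record: dict) -> None:
    """Append a record whose hash chains to the previous entry."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": digest})

def chain_intact(log: list) -> bool:
    """Recompute every hash; any edited record or reordered entry fails."""
    prev = GENESIS
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

A verifiable chain does not make scripts audit-ready on its own; you still need the governance, change control, and retention described above.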
What evidence do auditors ask for most often?
They usually want the approved baseline, proof of automated deployment to in-scope systems, and recurring verification results that show drift detection and response. Keep a clean chain from baseline version to enforcement logs to verification reports. 1
How do third parties fit into CM-6(1)?
If third parties manage or host in-scope systems, contract terms and technical controls should require baseline enforcement and provide verification outputs (or equivalent attestations) you can retain. Treat them as part of system scope, not a documentation footnote. 3
Footnotes
1. NIST SP 800-53 Rev. 5, CM-6(1) control text (OSCAL JSON control catalog).
2. NIST SP 800-53 Rev. 5 and the NIST OSCAL control catalog.
3. NIST SP 800-53 Rev. 5.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream