CM-6(3): Unauthorized Change Detection

The CM-6(3) unauthorized change detection requirement means you must be able to detect changes to system configuration settings that were not approved, and you must do so in a repeatable, auditable way. Operationalize it by defining “authorized,” instrumenting key configuration baselines for monitoring, alerting on drift, investigating exceptions, and keeping evidence that proves detection and response are working. 1

Key takeaways:

  • Treat “unauthorized” as “not approved through your change control,” and document that decision for each system boundary.
  • Monitor for configuration drift across endpoints, servers, network devices, cloud resources, and security tooling; alerting alone is not compliance.
  • Keep assessor-ready artifacts: baselines, change records, drift alerts, investigations, and periodic reviews mapped to a named control owner.

The CM-6(3) unauthorized change detection requirement sits in the Configuration Management (CM) family and focuses on one operational outcome: you can detect configuration changes that bypass your approved process. This is different from general change management hygiene. Auditors will look for proof that (1) you defined what configuration settings matter, (2) you established a baseline, (3) you continuously or routinely detect drift, and (4) you treat unapproved changes as security events with investigation and closure.

For a CCO, GRC lead, or Compliance Officer, the fastest path to implementation is to translate CM-6(3) into a narrow set of “monitoring commitments” per environment (on-prem, cloud, SaaS administration planes) and then tie each commitment to a tool, an owner, an alert workflow, and recurring evidence. Your goal is not to promise perfection; your goal is to show disciplined detection and response, with clear scope and traceability to the systems handling federal data or operating as federal information systems. 1

Regulatory text

Control requirement: NIST SP 800-53 includes control enhancement CM-6(3): Unauthorized Change Detection. The source excerpt provided reads: “NIST SP 800-53 control CM-6.3.” 2

Operator interpretation of what you must do:
You must implement a method to detect changes to configuration settings that were not authorized (for your defined scope), and you must be able to show evidence that detection works and is acted on. The assessor expectation is practical: a baseline exists, changes are compared to that baseline or approved change records, anomalies generate tickets/alerts, and someone investigates and closes the loop. 1

Plain-English interpretation (what this control really asks for)

CM-6(3) is about configuration drift and tampering. If an administrator, attacker, script, third-party support team, or automation pipeline changes a security-relevant setting without approval, you should detect it before it becomes your new normal.

A working definition you can use internally:

  • Authorized change = a change tied to an approved change request (or an approved standard change) with an identifiable implementer and timestamp.
  • Unauthorized change = anything else: console clicks without a ticket, emergency fixes without after-the-fact approval, changes from compromised credentials, or “temporary” tweaks that never got documented.

This control is easiest to pass when you treat it as a closed-loop workflow: baseline → detect → alert → investigate → remediate → document.
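The closed loop above can be sketched as an explicit stage progression. This is a minimal, hypothetical illustration (the stage names and `advance` helper are assumptions, not anything prescribed by NIST SP 800-53); its only point is that every drift event either completes the loop or is visibly stuck at a named stage.

```python
from enum import Enum


class DriftStage(Enum):
    """Stages of the CM-6(3) closed loop, in order."""
    BASELINE = 1
    DETECT = 2
    ALERT = 3
    INVESTIGATE = 4
    REMEDIATE = 5
    DOCUMENT = 6


# Definition order of the enum is the required order of the loop.
ORDER = list(DriftStage)


def advance(stage: DriftStage) -> DriftStage:
    """Move a drift event to the next stage; a closed loop cannot advance."""
    i = ORDER.index(stage)
    if i == len(ORDER) - 1:
        raise ValueError("loop already closed")
    return ORDER[i + 1]
```

Modeling the loop as ordered stages makes it easy to report which events are aging at which stage, which is exactly the evidence reviewers ask for.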

Who it applies to (entity and operational context)

CM-6(3) commonly applies where you have committed to NIST SP 800-53, including:

  • Federal information systems operated by agencies. 1
  • Contractor systems handling federal data, including cloud and managed environments that support federal missions or process federal information. 1

Operationally, scope the control to:

  • Systems in your authorization boundary (or equivalent scope definition) that process, store, or transmit the federal data you are responsible for.
  • Shared services that can change those systems (identity providers, endpoint management, CI/CD, infrastructure-as-code runners, MDM, network management platforms).
  • Third parties with administrative access or “break glass” support paths.

What you actually need to do (step-by-step)

1) Assign ownership and define scope

  1. Name a control owner (often Security Engineering, IT Ops, or Cloud Platform).
  2. Define the in-scope system list and boundaries (prod vs. dev; regulated enclaves; management planes).
  3. Define “configuration settings” categories you will monitor (security baselines first): identity and access, logging, encryption, network exposure, endpoint hardening, critical service settings.

Deliverable: CM-6(3) control statement with scope, owner, and monitored setting categories.

2) Define what “authorized” means in your change process

  1. Identify your system of record for changes (e.g., ITSM tickets, pull requests, change approvals in CI/CD).
  2. Decide what counts as pre-approved “standard change” vs. “normal change” vs. “emergency change.”
  3. Write the mapping: “A change is authorized if it has Ticket/PR ID + approval + implementer identity.”
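The three-part mapping above can be expressed as a single predicate. This is a sketch under stated assumptions: the field names (`ticket_id`, `approved`, `implementer`) are hypothetical, standing in for whatever your system of record actually calls them.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChangeRecord:
    ticket_id: Optional[str]    # ITSM ticket or PR ID (illustrative field name)
    approved: bool              # approval recorded in the system of record
    implementer: Optional[str]  # identifiable account that made the change


def is_authorized(change: ChangeRecord) -> bool:
    """A change is authorized only if all three elements are present."""
    return bool(change.ticket_id) and change.approved and bool(change.implementer)
```

The value of writing the rule down this way is that it forces a yes/no answer: a change with a ticket but no approval, or an approval with no identifiable implementer, is unauthorized by definition.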

Common examiner hangup: “Emergency changes.” Auditors won’t accept “emergency” as a permanent exception category. Require retrospective approval and a post-incident review.

3) Establish configuration baselines

  1. Create a baseline per platform type (Windows, Linux, network devices, Kubernetes, cloud accounts, SaaS admin settings).
  2. Store baselines in a controlled repository (versioned, access-controlled).
  3. Tie baselines to a standard (your internal standard, CIS benchmarks if you use them, or policy requirements). If you reference external benchmarks, keep the mapping in your baseline doc.

Evidence: baseline documents, repo commit history, approvals for baseline changes.
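One lightweight way to make baseline versions verifiable is to content-address them: derive a version identifier from the baseline's canonicalized content, so any edit to the baseline produces a new, traceable version. A minimal sketch, assuming the baseline is representable as a flat mapping of setting name to expected value (the `baseline_version` helper is illustrative, not a standard):

```python
import hashlib
import json


def baseline_version(settings: dict) -> str:
    """Derive a stable version ID from baseline content.

    Sorting keys canonicalizes the JSON, so the same settings always
    produce the same ID regardless of insertion order.
    """
    canonical = json.dumps(settings, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]
```

A repo commit hash serves the same purpose if baselines live in version control; the point is that baseline changes are themselves controlled, identifiable changes.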

4) Implement unauthorized change detection (technical)

Pick detection methods that fit each layer. Most programs use a mix:

  • File integrity / configuration state monitoring for servers and endpoints (critical config files, registry keys, security agents).
  • Cloud configuration monitoring for IaaS/PaaS (security groups, IAM policy changes, storage public access, encryption toggles).
  • Network configuration monitoring for firewall/router/switch configuration changes.
  • Directory and privileged access monitoring for identity changes (admin role assignments, MFA disablement).
  • Infrastructure-as-code drift detection where IaC is the source of truth.

Minimum expectation to operationalize:

  • A defined list of monitored items (what you watch).
  • A collection mechanism (how you watch).
  • Alert routing (who gets paged or ticketed).
  • An investigation path (what happens next).
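The four minimum elements above can be tied together in a few lines. This is a hedged sketch of the file-integrity flavor of detection only (the monitored paths, `collect`, and `route_alerts` names are assumptions for illustration, not a product API): a watch list, a hashing collector, and alert routing that opens one ticket per drifted item.

```python
import hashlib
import pathlib

# What you watch: example paths only, not a prescribed list.
MONITORED = ["/etc/ssh/sshd_config", "/etc/sudoers"]


def collect(paths):
    """How you watch: snapshot a SHA-256 hash of each monitored file."""
    snapshot = {}
    for p in paths:
        path = pathlib.Path(p)
        if path.exists():
            snapshot[p] = hashlib.sha256(path.read_bytes()).hexdigest()
    return snapshot


def route_alerts(baseline, current, open_ticket):
    """Who gets ticketed: one ticket per item that differs from baseline."""
    for path, expected in baseline.items():
        if current.get(path) != expected:
            open_ticket(f"Drift on {path}: investigate per runbook")
```

In practice `open_ticket` would be your ITSM or SIEM integration; the investigation path is whatever works that queue.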

5) Wire alerts to triage and investigation

  1. Send drift alerts to a queue that is actually worked (SOC, SecOps, IT Ops), not a dead email inbox.
  2. Create a runbook with decision points:
    • Does the change match an approved ticket/PR?
    • If yes, close as authorized and link to the approval.
    • If no, open an incident (or security case), capture who/what changed it, and decide whether to roll back, approve retrospectively, or contain a compromise.
  3. Require closure notes: root cause, corrective action, and whether the baseline needs updating.

Practical tip: Most teams fail here. They can detect drift, but they cannot prove consistent triage and closure.

6) Make it auditable (recurring review)

  1. Hold a periodic review of:
    • top unauthorized change causes,
    • recurring exceptions,
    • systems with missing telemetry,
    • aging investigation tickets.
  2. Update baselines and monitoring rules as environments change.

Evidence: meeting notes, ticket metrics (qualitative is fine), corrective action tracking.

Required evidence and artifacts to retain

Keep artifacts that prove design and operation:

Governance

  • CM-6(3) control narrative: scope, owner, definitions, tooling summary.
  • Change management policy/procedure defining authorized vs. unauthorized changes.

Baselines

  • System configuration baseline documents. 2
  • Baseline approval records and version history (repo logs are fine).

Detection operation

  • Monitoring rule/config exports (what settings are watched).
  • Sample drift alerts (screenshots or SIEM exports) with timestamps.
  • Ticket/incident records showing triage, linkage to change requests, and closure notes.
  • Exception register for approved deviations (with expiry/owner).

Assurance

  • Periodic review notes and action items.
  • Evidence that third parties with admin access are included in logging/alerting scope.

Common exam/audit questions and hangups

  • “Show me how you detect an unauthorized change to a critical setting.” Expect to walk through one end-to-end example.
  • “Which configuration settings are monitored, and why those?” Auditors want rationale, not a massive unscoped list.
  • “How do you determine whether the change was authorized?” You need a clear linkage to tickets/PRs.
  • “What happens if the change was made by a third party?” You must show the same detection and investigation pathway applies.
  • “How do you prevent alert fatigue?” They will accept tuning if you can show coverage for high-risk settings.

Frequent implementation mistakes (and how to avoid them)

  1. Monitoring without a baseline. Drift requires a reference point. Fix: publish baselines per platform and treat baseline changes as controlled changes.
  2. Email-only alerting. Inboxes do not pass audits. Fix: route to ITSM/SIEM/SOC queue with ownership and SLAs you define.
  3. No linkage to authorization records. “We approved it verbally” fails. Fix: require ticket/PR IDs in closure notes.
  4. Ignoring the management plane. Cloud console and identity changes are common failure points. Fix: monitor IAM/role changes and cloud configuration events explicitly.
  5. Third-party admin access outside the workflow. Fix: contractually require change records, log access, and include their actions in your detection pipeline.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so treat this as a framework compliance and auditability risk rather than a cited enforcement trend. The operational risk is straightforward: unauthorized configuration changes can disable security controls, expose data, or create persistence. For regulated federal environments, inability to demonstrate CM-6(3) operation commonly translates into assessment findings, delayed authorizations, corrective action plans, and increased oversight. 1

Practical 30/60/90-day execution plan

First 30 days (stabilize scope and evidence)

  • Assign the CM-6(3) control owner and define in-scope systems and “security-relevant” configuration categories.
  • Document “authorized vs. unauthorized” change criteria and identify your system of record for approvals.
  • Publish initial baselines for your highest-risk platforms (identity, cloud accounts, endpoints/servers).
  • Stand up an evidence folder structure and start collecting sample alerts/tickets.

Days 31–60 (instrument detection and workflow)

  • Configure drift detection for your priority settings and route alerts to a worked queue.
  • Write and train the triage runbook; require ticket linkage for closures.
  • Add third-party administrator actions to scope (logging + alerting + investigation path).
  • Run a tabletop test: simulate one unauthorized change and capture end-to-end evidence.

Days 61–90 (prove repeatability and close gaps)

  • Expand coverage to remaining platforms and “tier 2” settings (network devices, Kubernetes, SaaS admin settings).
  • Tune alert noise without removing coverage for critical settings; document tuning decisions.
  • Implement periodic review and corrective actions tracking.
  • Package an assessor-ready CM-6(3) evidence set: baseline versions, monitoring configuration exports, and multiple closed investigations.

How Daydream fits (without changing your operating model)

Daydream is helpful when CM-6(3) breaks down at the evidence layer: mapping ownership, documenting procedures, and producing recurring artifacts that match what assessors ask for. Use it to assign a control owner, define your CM-6(3) procedure, and standardize the evidence checklist so every review cycle produces the same minimum set of artifacts.

Frequently Asked Questions

What counts as an “unauthorized change” if we allow emergency changes?

Treat emergency changes as authorized only if they follow your emergency workflow and receive retrospective approval with documentation. If a change has no traceable approval record, handle it as unauthorized and investigate.

Do we need real-time detection to meet CM-6(3)?

NIST SP 800-53 does not require a specific detection frequency in the provided excerpt. Set a detection approach that matches your risk and prove it operates consistently with alerts, tickets, and reviews. 1

How do we handle drift caused by automation tools?

Put automation identities and pipelines inside your authorization model. Require change records (PRs or tickets) for pipeline-driven changes and tune detection rules to recognize approved deployments while still flagging out-of-band console changes.
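One common way to implement that tuning is deployment-aware filtering: a change is treated as in-band only if it came from a known pipeline identity during an approved deployment window. A hedged sketch (the identity name, window model, and `is_out_of_band` helper are all illustrative assumptions):

```python
from datetime import datetime

# Illustrative pipeline service account; console users are never in this set.
PIPELINE_IDENTITIES = {"svc-deploy-bot"}


def is_out_of_band(actor: str, change_time: datetime, deploy_windows) -> bool:
    """Flag a change unless a pipeline identity made it during an approved window.

    deploy_windows: list of (start, end) datetime pairs tied to approved deploys.
    """
    if actor not in PIPELINE_IDENTITIES:
        return True  # console or human change: always review
    return not any(start <= change_time <= end for start, end in deploy_windows)
```

This keeps approved deployments quiet without suppressing the alert you most need: the same setting changed by hand outside the pipeline.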

Are SaaS configuration changes in scope?

If the SaaS platform is part of the system boundary or can affect security of federal data, treat key admin settings as in scope. Monitor for changes to identity settings, logging, sharing controls, and privileged roles.

What evidence is most persuasive to auditors?

Closed-loop examples: a drift alert, the linked investigation ticket, proof of authorization (or incident handling), and the remediation or baseline update. Provide at least a few examples across different system types to show repeatability.

How do we include third parties with admin access?

Require contractual and technical controls: named accounts, logged admin actions, and a requirement to use your change process (or provide equivalent records). Then show that third-party changes generate the same drift alerts and investigation tickets as internal changes.

Footnotes

  1. NIST SP 800-53 Rev. 5

  2. NIST SP 800-53 Rev. 5 OSCAL JSON

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream