CM-3(5): Automated Security Response

CM-3(5) requires you to automatically trigger pre-defined security responses when a system’s baseline configuration changes in an unauthorized way. To operationalize it fast, define “baseline” per asset class, detect unauthorized drift continuously, and configure automated actions (contain, revert, alert, ticket, and/or isolate) with approvals, logging, and exception handling. 1

Key takeaways:

  • You need automated detection of unauthorized baseline configuration changes plus automated response actions, not just alerts. 1
  • Define the response set per system type and risk, then prove it runs with logs, playbooks, and test results. 1
  • Auditors will focus on “unauthorized,” “baseline,” and whether the response is truly automatic and consistently evidenced. 1

CM-3(5), Automated Security Response, is a configuration management enhancement in NIST SP 800-53 Rev. 5. It closes a common gap: teams often detect configuration drift but rely on humans to respond, leaving a window where attackers can persist, security controls can be disabled, or compliance baselines can silently decay. CM-3(5) pushes you to predefine what “unauthorized change” means and to implement automated responses that execute when that condition occurs. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat CM-3(5) as an operating requirement across three layers: (1) governance definitions (baseline, authorization, exceptions), (2) technical enforcement (monitoring and response automation), and (3) audit-ready evidence (proof the automation exists, is in scope, and works). Your goal is not perfect immutability; your goal is repeatable detection and automatic, policy-aligned response for the systems where baseline integrity matters most. 1

Regulatory text

Requirement (excerpt): “Implement the following security responses automatically if baseline configurations are changed in an unauthorized manner: {{ insert: param, cm-03.05_odp }}.” 1

What the operator must do:

  1. Define baseline configurations for in-scope systems (what “good” looks like).
  2. Detect unauthorized changes to those baselines (what “bad drift” looks like, and who can approve changes).
  3. Automatically execute security responses when unauthorized drift occurs (actions must run without waiting for a person). 1

Because the excerpt references an organization-defined parameter (“cm-03.05_odp”), you must explicitly document the responses you chose and where they apply. Assessors will expect that this placeholder is filled by your organization’s defined response actions, thresholds, and scope. 1

Plain-English interpretation

CM-3(5) means: if someone changes a security-relevant configuration away from your approved baseline without authorization, your systems should automatically react in a way that reduces risk right away.

A practical interpretation that stands up in assessments:

  • “Baseline configuration” = approved, versioned configuration state for a system, component, image, or policy set (OS hardening, endpoint policy, firewall rules, IAM settings, Kubernetes admission policy, etc.).
  • “Unauthorized manner” = change is made outside the approved change process, by an unapproved identity, outside an approved window, without required peer review, or in violation of policy (for example, disabling logging).
  • “Automated security response” = system-initiated action such as revert/rollback, isolate/quarantine, revoke access, block a rule change, force redeploy from gold image, open an incident, and page on-call—executed automatically based on detection logic. 1

Who it applies to

CM-3(5) is most relevant for:

  • Federal information systems and environments assessed against NIST SP 800-53. 1
  • Contractor systems handling federal data where NIST 800-53 controls are contractually required (for example, via an authorization boundary, system security plan, or agency control overlay). 1

Operationally, you should prioritize:

  • High-integrity systems (identity, logging pipelines, key management, CI/CD, network security tooling).
  • Systems with frequent drift risk (cloud infrastructure, containers, endpoints).
  • Systems where manual response is too slow or inconsistent to meet risk tolerance.

What you actually need to do (step-by-step)

1) Set scope and control ownership

  • Assign a control owner (often Security Engineering or Platform Engineering) and a GRC owner responsible for evidence and assessment mapping.
  • Define the system scope: list which environments and asset classes are covered (production first, then pre-prod, then endpoints).
  • Record which teams can approve baseline changes and through which workflow (ticketing + code review + change calendar).

Deliverable: CM-3(5) control implementation statement with scope, owners, and referenced procedures. 1

2) Define “baseline” in a way you can enforce

Pick a baseline mechanism per asset class:

  • Servers/VMs: hardened image + config management state.
  • Cloud: infrastructure-as-code (IaC) plus policy-as-code guardrails.
  • Kubernetes: declarative manifests + admission controls.
  • Endpoints: MDM/EDR policy profiles and enforced security settings.
  • Network/security tools: versioned configuration exports and rule sets.

Rules of thumb that auditors like:

  • Baselines are versioned (commit hash, release tag, or signed artifact).
  • Baselines are approved (change record, pull request approval, CAB where required).
  • Baselines are retrievable (you can show the exact baseline that was in effect).

Deliverable: baseline standard + baseline inventory mapping systems to baseline sources.
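To make “versioned and retrievable” concrete, a minimal sketch of a baseline fingerprint: hashing a set of config files into a single deterministic identifier you can record alongside the approval ticket, much like a commit hash or image digest. The file names, contents, and the `CHG-1234` ticket reference are all hypothetical illustrations, not part of the control text.

```python
import hashlib
import json

def baseline_fingerprint(files: dict[str, str]) -> str:
    """Compute a deterministic fingerprint over a baseline's config files.

    `files` maps relative path -> file content. Sorting paths makes the
    hash stable regardless of insertion order, so the same baseline
    always yields the same identifier.
    """
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())
        h.update(b"\x00")  # separator so path/content boundaries are unambiguous
        h.update(files[path].encode())
        h.update(b"\x00")
    return h.hexdigest()

# A tiny illustrative baseline: two hardened config files.
baseline = {
    "sshd_config": "PermitRootLogin no\nPasswordAuthentication no\n",
    "audit.rules": "-w /etc/passwd -p wa -k identity\n",
}

record = {
    "baseline_id": baseline_fingerprint(baseline),
    "approved_by": "change-ticket CHG-1234",  # hypothetical approval reference
}
print(json.dumps(record, indent=2))
```

In practice the fingerprint usually comes for free from your VCS or artifact registry; the point is that the exact baseline in effect at any time is a single retrievable identifier.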

3) Define “unauthorized change” precisely

Write measurable criteria. Examples you can implement:

  • Change made by an identity not in an approved admin group.
  • Change made outside an approved pipeline (console changes vs IaC pipeline).
  • Change lacks required approval evidence (no linked ticket / PR).
  • Change touches protected settings (logging disabled, EDR tamper protection off, firewall opened broadly).

Deliverable: unauthorized change criteria matrix by system type (what triggers, what is exempt, and why). 1
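The criteria above are measurable precisely because they can be evaluated mechanically. A minimal sketch, assuming hypothetical event fields (`actor`, `source`, `ticket`, `setting`) that real deployments would pull from cloud audit logs or configuration management telemetry:

```python
# Hypothetical reference data; real values come from your IdP and policy docs.
APPROVED_ADMINS = {"alice", "bob"}
PROTECTED_SETTINGS = {"logging.enabled", "edr.tamper_protection"}

def classify_change(event: dict) -> list[str]:
    """Return the list of 'unauthorized' criteria a change event trips.

    An empty list means the change looks authorized under these rules.
    """
    reasons = []
    if event.get("actor") not in APPROVED_ADMINS:
        reasons.append("actor not in approved admin group")
    if event.get("source") != "pipeline":
        reasons.append("change made outside the approved pipeline")
    if not event.get("ticket"):
        reasons.append("no linked change ticket")
    if event.get("setting") in PROTECTED_SETTINGS:
        reasons.append("protected setting modified")
    return reasons

event = {"actor": "mallory", "source": "console",
         "setting": "logging.enabled", "ticket": None}
for reason in classify_change(event):
    print("UNAUTHORIZED:", reason)
```

Returning the full reason list, rather than a boolean, is deliberate: each triggered criterion becomes a field in the evidence record an assessor reviews.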

4) Implement automated detection of drift

Common detection patterns:

  • Continuous compliance rules in cloud security posture management for cloud resources.
  • Host configuration drift via configuration management reporting.
  • File integrity monitoring for sensitive configs.
  • Policy enforcement (deny/alert) via admission controllers, org policies, or CI/CD checks.

Minimum expectation: your detection is frequent enough to be meaningful for your risk model, and you can show that it covers the stated scope.

Deliverable: monitoring design showing signals, sources, and coverage.
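File integrity monitoring, the third pattern above, reduces to comparing current file hashes against a baseline manifest. A minimal sketch under that assumption (the file names and contents are illustrative only):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def detect_drift(manifest: dict[str, str], current: dict[str, bytes]) -> list[dict]:
    """Compare current file contents against a baseline hash manifest.

    Returns one drift event per modified, deleted, or added file; the
    events feed whatever automated response logic you define.
    """
    events = []
    for path, expected in manifest.items():
        if path not in current:
            events.append({"path": path, "kind": "deleted"})
        elif sha256(current[path]) != expected:
            events.append({"path": path, "kind": "modified"})
    for path in current.keys() - manifest.keys():
        events.append({"path": path, "kind": "added"})
    return events

# Baseline manifest: path -> approved content hash.
good = b"PermitRootLogin no\n"
manifest = {"sshd_config": sha256(good)}

# Simulated current state: the file was tampered with.
events = detect_drift(manifest, {"sshd_config": b"PermitRootLogin yes\n"})
print(events)
```

Commercial FIM tools add kernel-level watches and tamper resistance, but the evidence they produce is structurally the same: a diff between observed state and an approved manifest.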

5) Implement automated response actions (the core of CM-3(5))

This is where teams fail. CM-3(5) requires “security responses” to run automatically when unauthorized change occurs. 1

Build a response catalog with “trigger → automated actions → escalation.” Examples:

  • Auto-revert: roll back to last known good baseline (IaC apply, redeploy, config management enforcement).
  • Auto-contain: isolate host, block network egress, remove from load balancer, disable account/session.
  • Auto-preserve evidence: snapshot instance, export config diff, preserve logs for investigation.
  • Auto-notify: create incident ticket with config diff attached; page on-call; notify system owner.
  • Auto-enforce gates: block deployments or deny policy changes until approved.

Document which responses apply to which classes of drift. Your organization-defined parameter can (and should) list multiple response types.

Deliverable: automated response playbooks + technical runbooks + mapping to triggers. 1
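The “trigger → automated actions → escalation” catalog can be sketched as a simple dispatcher. The drift classes, action names, and ordering below are hypothetical; the one real constraint from CM-3(5) is that the actions run without waiting for a person, so the sketch appends each executed action to an audit log as it runs:

```python
# Hypothetical response catalog: drift class -> ordered automated actions.
RESPONSE_CATALOG = {
    "low":      ["notify"],
    "medium":   ["notify", "revert"],
    "critical": ["preserve_evidence", "isolate", "revert", "page_oncall"],
}

def respond(drift_class: str, audit_log: list[str]) -> None:
    """Execute the pre-defined response set for a drift class.

    Each action is appended to audit_log as it runs, so the run itself
    produces the evidence that the response executed automatically.
    Unknown classes fall back to notification rather than silence.
    """
    for action in RESPONSE_CATALOG.get(drift_class, ["notify"]):
        audit_log.append(f"{drift_class}:{action}")  # stand-in for real SOAR calls

log: list[str] = []
respond("critical", log)
print(log)
```

Note the tiering: low-risk drift only notifies, while critical drift preserves evidence before containment and rollback, which keeps the forensic trail intact.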

6) Add exception handling without gutting the control

You will need exceptions (break-glass, emergency changes, vendor support access). Make them auditable:

  • Expiring exceptions with approvals.
  • Logging of break-glass use.
  • Post-incident review requirement when a break-glass path is used.
  • Compensating controls (extra monitoring, stricter alerts).

Deliverable: exception register entries linked to business justification and compensating controls.
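The “expiring exceptions” point is worth making mechanical: if exceptions carry an expiry and the check enforces it, lapsed break-glass entries fall back to normal enforcement automatically. A minimal sketch with a hypothetical register entry and approval reference:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical exception register: identity -> expiry and approval reference.
EXCEPTIONS = {
    "breakglass-admin": {
        "expires": datetime.now(timezone.utc) + timedelta(hours=4),
        "approval": "INC-987",  # illustrative incident/approval link
    }
}

def is_exempt(actor: str, now: datetime) -> bool:
    """An actor is exempt only while its exception is unexpired.

    Expired entries are not deleted; they stay in the register as an
    audit trail but stop suppressing automated responses.
    """
    entry = EXCEPTIONS.get(actor)
    return entry is not None and now < entry["expires"]
```

A detection pipeline would call `is_exempt` before dispatching responses, and separately log every use of a break-glass identity for the required post-incident review.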

7) Test and prove the automation works

Assessors will ask: “Show me.” Run controlled tests:

  • Attempt an unauthorized change in a non-production replica.
  • Capture evidence that the response executed automatically (revert occurred, host quarantined, ticket created).
  • Record lessons learned and adjust response thresholds to reduce noise.

Deliverable: test plan + test results + screenshots/log exports + remediation notes.
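The controlled test itself can be automated so that each run emits the evidence package directly. A minimal sketch, assuming a hypothetical `respond` callable that stands in for your real workflow engine:

```python
def simulate_unauthorized_change(respond) -> dict:
    """Drive a responder callable with a synthetic drift event and return
    the evidence record an assessor would review: trigger, actions, ticket.
    """
    event = {"setting": "logging.enabled", "actor": "test-user"}  # synthetic trigger
    actions = respond(event)  # must execute without human input
    return {
        "event": event,
        "actions": actions,
        "ticket_created": "notify" in actions,
    }

# Stand-in responder for the non-production test; production wiring would
# invoke the real workflow and return the actions it actually executed.
result = simulate_unauthorized_change(lambda event: ["notify", "revert"])
print("evidence record:", result)
```

Running this on a schedule against a non-production replica gives you a dated, repeatable test record instead of a one-off screenshot.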

Required evidence and artifacts to retain

Keep evidence that proves design and operation:

Governance

  • CM-3(5) control narrative: scope, owners, definitions, and the organization-defined responses. 1
  • Baseline configuration standard and approval workflow documentation.
  • Unauthorized change criteria and exception procedure.

Technical

  • Tool configuration exports (policy rules, detection rules, response workflows).
  • Sample config drift events with timestamps and diffs.
  • Automation execution logs (SOAR run logs, pipeline logs, quarantine actions, rollback logs).
  • Evidence of access controls around baseline change mechanisms.

Operational

  • Incident/ticket records created by automation with required fields populated.
  • Test records showing unauthorized change triggers response.
  • Metrics dashboards (qualitative trends are fine; avoid unsourced numeric claims).

A practical approach in Daydream: map CM-3(5) to a single control owner, list your implemented response workflows, and schedule recurring evidence pulls (for example, quarterly exports of response run logs and drift events) so audits stop being a scavenger hunt.

Common exam/audit questions and hangups

Expect these questions:

  1. What is your baseline, exactly, and where is it stored? Provide artifact references (repo, signed images, configuration profiles).
  2. How do you determine “unauthorized”? Show criteria and at least one example event.
  3. Which responses are automatic vs manual? If humans must approve every action, you likely do not meet CM-3(5). 1
  4. How do you prevent console drift in cloud? Be ready to show guardrails and detection.
  5. How do you handle emergencies? Show break-glass plus after-action review.

Hangup to avoid: “We get an alert and then someone reverts.” That is detection plus manual response; CM-3(5) is explicitly about automated responses. 1

Frequent implementation mistakes and how to avoid them

  • Mistake: Baseline is a PDF hardening guide. Fix: tie baseline to machine-enforceable state (gold images, IaC, policy-as-code).
  • Mistake: Only alerting, no automated action. Fix: implement at least one automatic containment or rollback action for high-risk drift scenarios. 1
  • Mistake: Overbroad automation that breaks production. Fix: start with tiered responses (notify for low-risk, auto-revert for medium-risk, auto-isolate for critical).
  • Mistake: Exceptions become the default. Fix: time-bound exceptions, require justification, and review them on a recurring cadence.
  • Mistake: No proof of operation. Fix: keep response run logs and test evidence in a centralized audit folder with a predictable naming scheme.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so you should treat CM-3(5) as an assessment and authorization readiness issue rather than an enforcement-citation checklist.

Risk implications to communicate internally:

  • Unauthorized baseline drift can disable security tooling, open network paths, or weaken identity controls.
  • Automated response reduces dwell time for misconfiguration and helps prove consistent execution during assessments. 1

A practical 30/60/90-day execution plan

First 30 days (establish definitions and pick pilot systems)

  • Assign owners; confirm system scope and boundaries.
  • Define baseline sources for the pilot (for example, IaC repo + gold image process).
  • Define unauthorized-change criteria for the pilot.
  • Choose response actions for the pilot (auto-ticket + auto-revert for selected settings; containment for critical settings).
  • Draft CM-3(5) implementation statement and evidence list aligned to the excerpt. 1

By 60 days (build automation and produce first evidence)

  • Implement drift detection rules and validate signal quality.
  • Implement automated workflows (rollback/quarantine/ticket) and route to on-call.
  • Stand up exception handling (break-glass logging and approvals).
  • Run a tabletop and a controlled technical test; capture logs and tickets as evidence.

By 90 days (expand coverage and operationalize as BAU)

  • Expand from pilot to additional systems (prioritize identity, logging, and internet-facing services).
  • Add preventive guardrails where possible (deny policy changes, pipeline-only changes).
  • Set recurring evidence pulls and quarterly control self-tests.
  • In Daydream, lock the control mapping, attach procedures, and schedule evidence collection reminders so the control stays assessment-ready.

Frequently Asked Questions

Does CM-3(5) require auto-reverting every unauthorized change?

No. It requires that you implement defined security responses automatically when unauthorized baseline changes occur. Your response set can include rollback, isolation, access revocation, evidence capture, and incident creation, based on risk. 1

What counts as a “baseline configuration” for cloud services?

A baseline can be your IaC-defined desired state plus the approved security policies that govern the cloud environment. The key is that it is approved, versioned, and enforceable with evidence you can produce. 1

If we open a ticket automatically but a human fixes it, is that compliant?

Ticket creation alone is usually weak for CM-3(5) because the requirement calls for automated security responses, not only notification. Pair auto-ticketing with at least one automatic risk-reducing action for defined scenarios. 1

How do we handle emergency changes without triggering containment?

Use a break-glass process with explicit approvals, tight access controls, and full logging, then document it as an exception path with compensating monitoring. Your automation can treat break-glass identities differently while still recording the event.

How do we prove the response is “automatic” to an auditor?

Show the workflow configuration (rules and actions), then show event evidence where an unauthorized change triggered the action without manual initiation, including timestamps and system logs. Keep a test record that recreates the scenario. 1

What’s the minimum evidence set we should store centrally?

Keep your CM-3(5) narrative, baseline definitions, unauthorized-change criteria, automation run logs, and at least one test result package. Add a small set of real events (sanitized if needed) showing trigger, action, and ticket/incident linkage.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream