CM-3(7): Review System Changes
To meet the CM-3(7) Review System Changes requirement, you must independently review system changes on a defined schedule and upon defined trigger events to confirm that only authorized changes occurred and to detect unauthorized modifications quickly. Operationalize it by setting review cadences, trigger events, reviewers, data sources, and a repeatable exception workflow tied to change and configuration management. 1
Key takeaways:
- Define what gets reviewed, when, and by whom, then document it as an operating procedure.
- Prove operation with repeatable evidence: change tickets, diffs, approvals, review logs, and exception handling records.
- Treat findings as incidents or control failures: investigate, contain, remediate, and prevent recurrence.
CM-3(7) sits inside the NIST configuration management family and addresses a problem auditors see constantly: teams have a change process on paper, but nobody performs a systematic, independent review that would catch changes that bypassed that process. The control enhancement is narrow and operational: you are expected to review changes to the system at defined intervals or upon defined conditions to determine whether unauthorized changes occurred. 1
For a CCO or GRC lead, the fastest path to implementation is to translate CM-3(7) into three concrete decisions: (1) which system components are in scope for review (applications, infrastructure, IAM, network, SaaS configuration), (2) what events trigger an out-of-cycle review (security incidents, privileged access anomalies, emergency changes), and (3) what evidence will satisfy an assessor without requiring heroics every audit cycle.
This page gives requirement-level guidance you can hand to operations: clear review steps, who owns what, what artifacts to retain, what exam questions to expect, and how to avoid the common failure mode of “we could do the review” rather than “we did the review, here’s the proof.”
Requirement: CM-3(7) Review System Changes
CM-3(7) requires you to review changes to the system on a defined basis or when defined conditions occur, to determine whether unauthorized changes have occurred. The practical objective is detection: you are validating that your actual environment matches your authorized change record. 1
Plain-English interpretation
You need a repeatable control where someone (or a team) periodically compares:
- What changed in production (or other in-scope environments), and
- What was approved and implemented through your formal change process,
…and then documents whether there were any exceptions. If exceptions exist, you investigate and remediate.
This is not the same as “we approve changes.” CM-3(7) is the back-end check that confirms the approval process was followed and that no one made out-of-band changes.
Regulatory text
“Review changes to the system {{ insert: param, cm-03.07_odp.01 }} or when {{ insert: param, cm-03.07_odp.02 }} to determine whether unauthorized changes have occurred.” 1
Those placeholders are organization-defined parameters (ODPs); your organization must define:
- the review frequency / schedule (the “at intervals” part), and
- the triggering conditions for reviews (the “when” part).
Assessors will look for (a) your defined intervals and triggers, and (b) evidence you performed reviews at those intervals and when triggers occurred. 1
Who it applies to
Entities
- Federal information systems.
- Contractor systems handling federal data (common in NIST-aligned programs). 1
Operational contexts where CM-3(7) is “make or break”
- Environments with multiple administrators, DevOps pipelines, and frequent releases.
- Hybrid infrastructure (on-prem plus cloud) where changes can occur in consoles outside IaC.
- High-privilege IAM ecosystems where policy changes are easy to make and hard to notice without review.
- Systems with third-party administrators or managed service providers (MSPs): you still own the control outcome even when a third party executes changes.
What you actually need to do (step-by-step)
Step 1: Assign clear ownership and separation of duties
Define these roles in your control narrative:
- Control owner (accountable): typically Security/GRC or IT Risk.
- Change process owner (responsible): ITSM/DevOps lead.
- Independent reviewer (performs review): someone not approving every change they review; in smaller shops, independence can be “different person than implementer” for sampled items.
Write down how independence is achieved in your org (team separation, peer review rotation, or management review). CM-3(7) is easier to defend when the reviewer is not the primary implementer. 1
Step 2: Define scope precisely (systems, components, environments)
Create an in-scope inventory for “system changes,” such as:
- Application releases and configuration toggles
- Infrastructure changes (compute, storage, network)
- Cloud control-plane changes (security groups, routing, resource policies)
- Identity and access changes (roles, policies, privileged group membership)
- Security tooling configuration (EDR policies, SIEM rules, DLP policies)
Be explicit about which environments are covered (production is the minimum most auditors care about; include staging if it hosts sensitive data).
Step 3: Define review intervals and trigger events (your ODPs)
Document:
- Interval-based reviews: the cadence you commit to (choose what you can sustain and evidence consistently).
- Trigger-based reviews: conditions that force a review outside the normal cadence.
Common trigger categories you can define without overpromising:
- Emergency changes / break-glass activity
- Suspected security incident affecting system integrity
- Detection of privileged access anomalies
- Tooling gaps (e.g., configuration drift alerts, CI/CD control failures)
Write the triggers in operational terms: “If X happens, Security reviews Y logs/config states and reconciles to Z approvals.”
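As a sketch, the interval and trigger ODPs can be captured as structured data so the runbook and any supporting tooling read from one source of truth. All class and field names here are illustrative, not from NIST:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewODP:
    """CM-3(7) organization-defined parameters (illustrative names)."""
    interval_days: int                                   # cm-03.07_odp.01: review cadence
    trigger_events: list = field(default_factory=list)   # cm-03.07_odp.02: out-of-cycle conditions

    def is_trigger(self, event: str) -> bool:
        """Return True when an observed event forces an out-of-cycle review."""
        return event in self.trigger_events

# Example parameter set; choose values you can sustain and evidence consistently.
odp = ReviewODP(
    interval_days=30,
    trigger_events=[
        "emergency_change",
        "suspected_incident",
        "privileged_access_anomaly",
        "drift_alert",
    ],
)

print(odp.is_trigger("emergency_change"))  # True
```

Keeping the ODPs as data (rather than prose buried in a policy PDF) makes it trivial to show an assessor exactly what you committed to.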
Step 4: Build the reconciliation procedure (the “compare actual vs authorized” method)
Your review needs a repeatable data path:
A. Pull the authorized change record
- ITSM change tickets, approvals, CAB notes
- Pull requests / merge approvals for IaC and application code
- Approved standard changes list (if you use them)
B. Pull the “what actually changed” record
Pick sources that reflect reality, for example:
- Source control diffs for IaC and app repos
- CI/CD deployment logs and release records
- Cloud audit logs for console/API changes
- Configuration drift outputs (CIS benchmark tools, MDM baselines, endpoint policies)
- System configuration snapshots (gold images, baseline configs)
C. Reconcile
For each review period (or trigger event), document:
- Set of changes observed
- Matching authorization artifacts (ticket ID, PR link, approval)
- Exceptions (changes without authorization, approvals without implementation, or mismatches)
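The A–C reconciliation above is, at its core, a set comparison. A minimal sketch, assuming `observed` comes from audit logs or deployment records and `authorized` from an ITSM export, with `change_id` as a hypothetical correlation field:

```python
def reconcile(observed: list[dict], authorized: list[dict]) -> dict:
    """Map observed changes to authorization records and surface exceptions.

    observed:   entries from cloud audit logs or deployment logs, each
                carrying a change_id (hypothetical field name).
    authorized: ITSM ticket export, each carrying the change_id it approved.
    """
    approved_ids = {a["change_id"] for a in authorized}
    observed_ids = {o["change_id"] for o in observed}
    return {
        "matched": sorted(observed_ids & approved_ids),
        # Changes in the environment with no authorization artifact:
        "unauthorized": sorted(observed_ids - approved_ids),
        # Approvals with no corresponding implementation (also worth review):
        "unimplemented": sorted(approved_ids - observed_ids),
    }

result = reconcile(
    observed=[{"change_id": "CHG-101"}, {"change_id": "CHG-999"}],
    authorized=[{"change_id": "CHG-101"}, {"change_id": "CHG-102"}],
)
# result["unauthorized"] == ["CHG-999"]; result["unimplemented"] == ["CHG-102"]
```

In practice the hard part is correlation: console changes rarely carry a ticket ID, so you may need to join on actor, timestamp window, and resource before a comparison like this becomes meaningful.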
Step 5: Define what counts as “unauthorized” and what you do when found
Document decision rules that reviewers can apply consistently:
- Unauthorized change: no ticket/approval; change by an unapproved identity; change outside an approved window; deviation from approved config baseline.
- Required response: open an incident or control exception; roll back or re-authorize through retrospective approval (only if your policy allows); perform root cause analysis.
Make the workflow tight: who gets paged, who approves remediation, and how you prevent recurrence (access restriction, pipeline guardrails, additional monitoring).
Step 6: Sample intelligently if you can’t review everything
If change volume is high, define a sampling approach that is defensible:
- Always review all privileged/IAM and security-control changes.
- Review all emergency changes.
- Sample routine application releases.
Document the sampling rules and keep the population list so you can prove the sample came from a complete set.
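The sampling rules above can be made reproducible (and therefore defensible) with a fixed seed, so you can show an assessor that the sample was drawn mechanically from the full population. Field names and risk categories are illustrative:

```python
import random

def select_sample(changes: list[dict], sample_rate: float = 0.1,
                  seed: int = 1) -> list[dict]:
    """Defensible sampling: all high-risk changes, a random slice of the rest.

    Each change dict is assumed to carry 'id' and 'category' fields
    (hypothetical names). Retain the seed, the rate, and the full
    population list as evidence alongside the sample.
    """
    high_risk = {"iam", "security_control", "emergency"}
    must_review = [c for c in changes if c["category"] in high_risk]
    routine = [c for c in changes if c["category"] not in high_risk]
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    k = max(1, round(len(routine) * sample_rate)) if routine else 0
    return must_review + rng.sample(routine, k)

# Usage: 100% of IAM/security/emergency changes, ~10% of routine releases.
sample = select_sample(
    [{"id": "CHG-1", "category": "iam"}, {"id": "CHG-2", "category": "routine"}]
)
```

The key property is that the high-risk strata are never sampled away: they are included in full before any randomness applies.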
Step 7: Operationalize with a recurring runbook and calendar
Turn CM-3(7) into a scheduled operational task with named owners:
- Calendar event and ticket template for each review cycle
- Checklist that enumerates sources to check and reconciliation steps
- Standard output format (review log)
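One possible shape for the standard review-log output, sketched as a record builder; every field name here is illustrative and should be aligned with your GRC system:

```python
from datetime import date

def review_log_entry(system: str, reviewer: str, period: str,
                     changes_reviewed: int, exceptions: list[str]) -> dict:
    """Build one review-log record for the recurring evidence packet.

    'exceptions' holds ticket IDs for any unauthorized changes found,
    so each entry is traceable to its investigation and closure.
    """
    return {
        "control": "CM-3(7)",
        "system": system,
        "review_period": period,
        "reviewer": reviewer,
        "date_performed": date.today().isoformat(),
        "changes_reviewed": changes_reviewed,
        "exceptions": exceptions,
        "attestation": f"Reviewed by {reviewer}; exceptions tracked to closure.",
    }
```

A fixed record shape is what lets consecutive review cycles look comparable to an assessor, which is most of what "operating effectiveness" means in practice.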
If you use Daydream, this is where it fits naturally: map CM-3(7) to a control owner, a written procedure, and a recurring evidence set so reviews produce consistent artifacts that are audit-ready without rework. 1
Required evidence and artifacts to retain
Auditors typically want to see both design and operating effectiveness proof. Keep:
Control design artifacts
- Control narrative for CM-3(7): scope, intervals, triggers, roles, independence statement 1
- Change management policy/procedure references (where approvals and emergency changes are defined)
- System boundary/inventory showing what “system” means in your environment
Operating effectiveness artifacts (recurring)
- Review logs (dated, signed/attested, reviewer name/role)
- Change population lists for the review period (export from ITSM, CI/CD, cloud audit logs)
- Reconciliation worksheet or ticket: change → authorization mapping
- Evidence of exception handling: incident tickets, rollback records, retrospective approvals (if allowed), root cause notes, closure approvals
- Access/audit log excerpts showing the underlying events (retain enough to prove the reviewer actually checked, not just asserted)
Keep evidence in a consistent folder structure with naming conventions aligned to the review period and system name.
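A naming convention only helps if it is applied mechanically. A tiny sketch of one possible scheme (the path layout is an assumption, not a standard):

```python
def evidence_path(system: str, period: str, artifact: str) -> str:
    """Build a consistent evidence path for a CM-3(7) review packet.

    Illustrative convention: evidence/<system>/<period>/cm-3-7/<artifact>.
    """
    def slug(s: str) -> str:
        return s.strip().lower().replace(" ", "-")
    return f"evidence/{slug(system)}/{period}/cm-3-7/{slug(artifact)}"

print(evidence_path("Payments Prod", "2024-Q2", "Review Log"))
# evidence/payments-prod/2024-Q2/cm-3-7/review-log
```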
Common exam/audit questions and hangups
Assessors tend to probe the same weak points:
- “Show me your defined intervals and triggers.” They will look for explicit definitions, not implied practice. 1
- “Who performs the review, and how are they independent?” If the implementer reviews their own changes, explain compensating checks (peer review, management sign-off, automated enforcement with human verification).
- “How do you know the change list is complete?” If you only review ITSM tickets, you miss console changes and drift. Bring audit logs into scope.
- “Walk me through an exception.” Have at least one example showing detection → investigation → remediation → closure.
- “Do you review emergency changes?” Emergency change processes are a common bypass channel; your triggers should cover them.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | Fix |
|---|---|---|
| Reviews exist only as a policy statement | No operating evidence; fails control effectiveness | Create a recurring review ticket and attach exports, mappings, and sign-off each cycle |
| Only reviewing approvals, not reality | You confirm paperwork, not system integrity | Reconcile against audit logs, drift outputs, and deployment logs |
| “Everything is a standard change” | Removes meaningful authorization and review | Define what qualifies as standard; require review focus on privileged/security changes |
| Console changes by admins aren’t captured | Unauthorized changes slip through | Require audit-log review and restrict console permissions where feasible |
| Exceptions are “fixed” verbally | No traceability | Track exceptions in a ticketing system with documented closure and approvals |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so you should treat CM-3(7) as a baseline control expectation rather than a case-driven mandate. 1
Risk-wise, CM-3(7) is a practical safeguard against:
- Insider or compromised-admin activity that modifies configurations
- Shadow changes during outages (“quick fix” modifications that never get recorded)
- Integrity failures where systems drift away from approved baselines
The business impact shows up as outages, security control weakening, and audit findings that cascade into broader control failures (change control, access control, incident response).
Practical 30/60/90-day execution plan
First 30 days (stand up the minimum viable review)
- Name the control owner, reviewers, and escalation path.
- Define scope for one high-value system boundary (start where federal data or regulated workloads live). 1
- Write the CM-3(7) procedure: interval + triggers + data sources + reconciliation steps.
- Run your first review cycle and produce a complete evidence packet, even if manual.
Next 60 days (expand coverage and reduce manual work)
- Add trigger-based reviews for emergency changes and integrity-related incidents.
- Integrate additional “actual change” sources (cloud audit logs, CI/CD logs, drift checks).
- Formalize sampling rules if volume is high; document population extraction.
- Build a standard exception workflow with ownership and closure criteria.
By 90 days (stabilize, prove repeatability, prepare for assessment)
- Demonstrate multiple completed review cycles with consistent artifacts.
- Test independence and escalation: pick a reviewer rotation and document it.
- Run a tabletop for an unauthorized change scenario: confirm detection path and ticketing.
- Use Daydream (or your GRC system) to map CM-3(7) to the control owner, procedure, and recurring evidence artifacts so audit requests become a simple export instead of a scramble. 1
Frequently Asked Questions
What counts as a “system change” for CM-3(7)?
Treat any modification that can affect confidentiality, integrity, or availability as a system change: code releases, infrastructure edits, cloud policy changes, and IAM/security configuration changes. Document your scoped categories so reviewers apply consistent rules. 1
Do we have to review every single change?
CM-3(7) requires review at defined intervals or triggers, but it does not prescribe “review every change.” If you sample, document the sampling method and always include high-risk change types like privileged access and security control changes.
Can an automated tool satisfy the review requirement?
Automation can generate the change and drift data, but you still need a recorded review decision that evaluates authorization and documents exceptions. Keep the tool outputs plus a reviewer attestation or ticket showing reconciliation.
How do we handle emergency changes without blocking operations?
Define emergency changes as a trigger event for an out-of-cycle review, then require retrospective documentation and approval within your process. The review should confirm the emergency change was necessary, authorized under the emergency process, and either rolled back or formally baselined.
What evidence is most persuasive to an auditor?
A dated review log that maps actual changes (audit logs/diffs) to approvals (tickets/PRs), with one or two closed-loop exceptions, usually satisfies the intent. Consistency matters more than polished formatting.
We outsource administration to a third party. Who performs the review?
You can have the third party produce change reports, but you should assign an internal reviewer (or independent function) to validate authorization and handle exceptions. Keep the contract/SOW language that requires timely change reporting and audit-log access.
Footnotes
1. NIST SP 800-53 Rev. 5 (OSCAL JSON), control enhancement CM-3(7): Review System Changes.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream