CM-7(1): Periodic Review

To meet the CM-7(1) Periodic Review requirement, run a recurring, documented review of each in-scope system to identify unnecessary or nonsecure functions, ports, protocols, software, and services, then disable, remove, or formally justify what remains. Operationalize it by setting a review cadence, assigning owners, baselining “approved” items, and retaining evidence that changes were made or risks were accepted. 1

Key takeaways:

  • CM-7(1) is a repeatable control: define a cadence, scope, method, and required outputs, then prove you executed.
  • The core deliverable is an “approved vs. observed” comparison for services/ports/protocols/software, with remediation or risk acceptance.
  • Auditors look for operational evidence (scan outputs, tickets, change records), not a policy statement.

CM-7(1) sits under the NIST SP 800-53 Configuration Management family and is commonly mapped to “attack surface reduction.” The requirement is simple to read and easy to fail in practice: teams can list “allowed ports,” yet never prove they reviewed real configurations and removed what was unnecessary. CM-7(1) closes that gap by forcing periodic review of what your systems actually expose and run, then driving action.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat CM-7(1) like an operational routine with defined inputs (inventories and scan data), a decision rule (approved baseline vs. exceptions), and outputs (tickets, changes, and formal approvals). The control also becomes far easier to defend when you connect it to your change management process and your vulnerability/asset tooling: reviews produce remediation work, and closures prove the control runs.

This page gives requirement-level implementation guidance you can hand to control owners and assessors: who is in scope, what to do step-by-step, what evidence to keep, and where audits typically get stuck. 2

Regulatory text

Excerpt (CM-7(1)): “Review the system {{ insert: param, cm-07.01_odp.01 }} to identify unnecessary and/or nonsecure functions, ports, protocols, software, and services; and” 1

What the operator must do

You must (1) perform a recurring review of each in-scope system, (2) compare observed functions/ports/protocols/software/services to what is necessary and secure, and (3) take action: remove/disable items that are unnecessary or nonsecure, or document a justified exception with approval and compensating controls where appropriate. The “periodic” aspect is not optional; auditors expect a defined cadence and evidence of repeated execution. 1

Plain-English interpretation (what CM-7(1) really means)

CM-7(1) requires you to continuously reduce and govern your system’s attack surface by checking what is actually enabled and installed. You are proving two things:

  1. You know what is running and exposed (services, ports, protocols, installed software, and system functions).
  2. You actively remove or control what you don’t need (or formally accept risk when business needs require an exception).

A policy that says “we follow least functionality” supports CM-7, but CM-7(1) demands operational proof that you reviewed and acted. 2

Who it applies to (entity and operational context)

CM-7(1) is most directly applicable where NIST SP 800-53 is required or contractually imposed, including:

  • Federal information systems.
  • Contractor systems handling federal data. 1

Operationally, it applies to:

  • Production systems (the highest audit scrutiny).
  • Infrastructure components (firewalls, load balancers, Kubernetes nodes, hypervisors, IAM-facing services) because they often expose ports/protocols.
  • Gold images and baselines (VM templates, container base images) since unnecessary packages become systemic.
  • Third-party delivered components you operate (appliances, managed platforms where you still control configuration). Even if a third party manages a layer, you still need clarity on what you can review, what they attest to, and what evidence you retain.

What you actually need to do (step-by-step)

Use this as a practical runbook. Assign a control owner (often Infrastructure/SecOps) and a compliance owner (GRC) who verifies evidence quality.

Step 1: Define scope and the review “unit”

Decide what you review as a unit, such as:

  • A system boundary in your SSP/ATO package
  • An application stack (app + database + supporting services)
  • A platform cluster (Kubernetes cluster, VPC/VNet landing zone)

Write down which environments are included (prod first; add lower environments if contract requires). Tie scope to your asset inventory so you can prove completeness.

Output: CM-7(1) scope statement; in-scope asset list or system inventory extract.

Step 2: Establish an “approved baseline” for functionality

Create an allowlist-style baseline for each system type:

  • Approved network exposure: inbound/outbound ports, protocols, allowed source ranges, approved TLS versions if you manage them.
  • Approved services/daemons: what must run, and what must be disabled.
  • Approved software: packages, agents, libraries where you can govern; plus explicit prohibited items if you have them.
  • Approved functions: admin interfaces, debug endpoints, management planes, remote access methods.

Keep it pragmatic: start with what you can measure and enforce.

Output: Baseline standard (configuration standard), per system class; “approved ports/services/software” matrix.
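Expressing the Step 2 baseline as data, not prose, makes the Step 4 comparison mechanical. The sketch below is illustrative only; the system class name, port set, and helper function are hypothetical, and a real baseline would cover protocols, software, and functions per your own standard.

```python
# Hypothetical "approved baseline" for one system class, expressed as data
# so it can be diffed against observed state. All values are examples.
APPROVED_BASELINE = {
    "web-server": {
        "ports": {22, 443},                   # inbound TCP ports allowed
        "protocols": {"TLSv1.2", "TLSv1.3"},  # approved TLS versions
        "services": {"nginx", "sshd"},        # daemons permitted to run
        "prohibited_software": {"telnetd", "ftpd"},
    },
}

def is_port_approved(system_class: str, port: int) -> bool:
    """Return True if the port appears in the approved baseline for the class."""
    baseline = APPROVED_BASELINE.get(system_class)
    return baseline is not None and port in baseline["ports"]
```

Keeping the allowlist in version control also gives you a dated artifact showing when the baseline itself changed.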

Step 3: Collect observed data (prove reality, not intent)

Choose data sources you can reproduce:

  • Host configuration data (EDR, MDM, configuration management, OS queries)
  • Network exposure data (cloud security groups, firewall rules, network scanning results)
  • Software inventory data (package manager inventory, SBOM where available, endpoint inventory)

The audit goal is traceability: an assessor should see “observed state” evidence for the review period.

Output: Timestamped exports/screenshots/reports showing observed ports/services/protocols/software.
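As one example of a reproducible data source, listening ports can be captured from `ss -tln` output and normalized into a comparable set. This is a minimal sketch assuming that command's typical column layout; the sample output and function name are hypothetical.

```python
# Sample of `ss -tln`-style output; in practice you would capture and
# timestamp the real command output as the observed-state artifact.
SAMPLE_SS_OUTPUT = """\
State   Recv-Q  Send-Q  Local Address:Port   Peer Address:Port
LISTEN  0       128     0.0.0.0:22           0.0.0.0:*
LISTEN  0       511     0.0.0.0:443          0.0.0.0:*
LISTEN  0       128     127.0.0.1:5432       0.0.0.0:*
"""

def listening_ports(ss_output: str) -> set[int]:
    """Extract listening TCP ports from `ss -tln`-style output."""
    ports = set()
    for line in ss_output.splitlines():
        fields = line.split()
        if not fields or fields[0] != "LISTEN":
            continue
        # "Local Address:Port" is the 4th column; the port follows the last colon.
        local_addr = fields[3]
        ports.add(int(local_addr.rsplit(":", 1)[1]))
    return ports
```

The same normalization idea applies to firewall rule exports and software inventories: parse once, then diff sets against the baseline.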

Step 4: Compare observed vs. approved and classify findings

For each deviation, tag it as:

  • Unnecessary (remove/disable)
  • Nonsecure (remove/disable or harden; if you keep it, document why and how you reduce risk)
  • Approved exception (business justification + risk acceptance + compensating controls + expiry/re-review date)

Avoid hand-waving. The finding record should say what changed, who approved, and how you confirmed remediation.

Output: CM-7(1) review worksheet; findings register; exception log entries.
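The Step 4 decision rule can be sketched as set arithmetic. This simplified example classifies ports only; the function name is hypothetical, and real reviews also classify services, protocols, and software, with documented criteria for “nonsecure.”

```python
def classify_findings(observed: set[int], approved: set[int],
                      exceptions: set[int]) -> dict[str, set[int]]:
    """Tag observed-but-not-approved ports per the review decision rule.

    Sketch only: ports tagged "remove_or_justify" still need a human
    decision between unnecessary, nonsecure, and a new exception request.
    """
    deviations = observed - approved
    return {
        "approved_exception": deviations & exceptions,   # documented, time-boxed
        "remove_or_justify": deviations - exceptions,    # feed into tickets
        "missing_expected": approved - observed,         # approved but not running
    }
```

Each entry in the resulting findings register should then carry its disposition, approver, and validation evidence.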

Step 5: Drive remediation through change management

Make remediation auditable:

  • Open tickets for each item to be removed/disabled/hardened.
  • Implement through your change process (normal or emergency change as appropriate).
  • Validate closure with a re-scan or re-check of configuration state.

This is where CM-7(1) commonly fails: teams identify issues but cannot prove closure.

Output: Tickets, change records, approvals, implementation logs, post-change validation evidence.

Step 6: Management review and sign-off

Have the system owner (or delegated authority) review results:

  • Confirm that unnecessary items were removed or scheduled with dates.
  • Approve exceptions formally.
  • Record acceptance of residual risk where applicable.

Output: Review meeting notes or electronic approval; attestation record.

Step 7: Set recurrence and automate the evidence stream

The CM-7(1) excerpt requires “periodic” review without prescribing a frequency. You should define a cadence based on system criticality and change rate, then document it in your procedure so auditors see a consistent pattern.

Output: Documented cadence; calendar invites; automated report schedules; recurring tasks.
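A documented cadence is testable in code as well as in calendars. The sketch below flags systems whose last review predates the cadence window; the function name and sample data are hypothetical.

```python
from datetime import date, timedelta

def overdue_reviews(last_reviewed: dict[str, date], cadence_days: int,
                    today: date) -> set[str]:
    """Return systems whose last CM-7(1) review is older than the cadence."""
    cutoff = today - timedelta(days=cadence_days)
    return {system for system, reviewed in last_reviewed.items()
            if reviewed < cutoff}
```

Running a check like this each cycle, and keeping its output, doubles as evidence that you monitor your own cadence.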

Required evidence and artifacts to retain

Auditors typically want to see a chain from requirement → procedure → execution → remediation.

Minimum evidence set:

  • CM-7(1) procedure: scope, cadence, roles, data sources, comparison method, exception handling.
  • System inventory / scope evidence: list of in-scope systems reviewed for the period.
  • Approved baseline: ports/protocols/services/software allowlist per system type or boundary.
  • Observed state evidence: scan outputs, configuration exports, software inventory snapshots, dated.
  • Review record: findings list with classification and disposition.
  • Remediation proof: tickets, change approvals, implementation records, validation re-check results.
  • Exception register: justification, approver, compensating controls, and re-review triggers.

If you run this in Daydream, structure the control so evidence artifacts are recurring by design: one owner, one procedure, and a predictable evidence bundle per review cycle.

Common exam/audit questions and hangups

Expect these, and pre-answer them in your evidence package:

  1. “What’s your review frequency and why?”
    Have a documented cadence tied to risk/criticality, plus proof you met it.

  2. “How do you know you reviewed all in-scope systems?”
    Show scope mapping to inventory and a completeness check (systems reviewed vs. inventory).

  3. “What counts as ‘nonsecure’?”
    Define criteria: unsupported services, insecure protocols, unnecessary admin interfaces, weak cipher configurations you control, or services with no business owner.

  4. “Show me one example from detection through closure.”
    Provide an end-to-end sample: observed evidence → finding → ticket → change → validation.

  5. “How do exceptions expire?”
    If exceptions have no end date or re-review trigger, auditors treat them as permanent waivers.
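The completeness question (number 2 above) can be answered with a simple diff between the in-scope inventory and the systems that produced a review record this cycle. This is a sketch with hypothetical names; coverage gaps and scope drift are reported separately.

```python
def completeness_check(inventory: set[str],
                       reviewed: set[str]) -> tuple[set[str], set[str], float]:
    """Compare systems reviewed this cycle against the in-scope inventory.

    Returns (missing, unexpected, coverage):
      missing    - in-scope systems with no review record (coverage gap)
      unexpected - reviewed systems not in inventory (possible scope drift)
      coverage   - fraction of the inventory that was reviewed
    """
    missing = inventory - reviewed
    unexpected = reviewed - inventory
    coverage = len(inventory & reviewed) / len(inventory) if inventory else 1.0
    return missing, unexpected, coverage
```

Saving this output per cycle gives the assessor the “systems reviewed vs. inventory” proof directly.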

Frequent implementation mistakes (and how to avoid them)

Mistake | Why it fails | Fix
Relying on a “secure configuration policy” alone | CM-7(1) requires periodic review evidence | Add a recurring review workflow with dated outputs and sign-off 1
No approved baseline | You can’t prove something is “unnecessary” without a necessity standard | Create a minimal allowlist per system class; iterate
Findings tracked in chat/email | No audit trail, no closure proof | Use tickets and change records; link to evidence artifacts
Exceptions with no owner | Risk acceptance becomes orphaned | Require a named approver and re-review trigger in the exception log
Reviews run only by SecOps | Business necessity decisions get missed | Require system owner participation for exception approvals

Enforcement context and risk implications

No public enforcement cases are tied to this requirement in the cited sources, so you should treat CM-7(1) primarily as an assessment and contractual compliance risk driver in NIST-aligned programs. The operational risk is straightforward: unnecessary services and exposed ports expand attack surface and increase the chance that a misconfiguration or vulnerable component becomes exploitable. 2

Practical 30/60/90-day execution plan

Use a staged plan with clear deliverables instead of time-based promises of “full automation.”

First 30 days (stand up the control)

  • Assign control owner and approver(s) for exceptions.
  • Define scope and review unit; produce the in-scope inventory extract.
  • Draft baseline for one high-value system class (often internet-facing or identity-adjacent).
  • Run a pilot review on a small set of systems; capture observed state evidence and findings.
  • Define the evidence bundle template (what gets saved each cycle).

Days 31–60 (operationalize and connect to change)

  • Expand baseline coverage across remaining system classes in scope.
  • Formalize exception workflow: justification, approval, compensating controls, re-review trigger.
  • Ensure each finding produces a ticket and, when needed, a change record.
  • Start recurring review scheduling and add management sign-off.

Days 61–90 (scale and harden)

  • Automate collection where practical (scheduled exports, standard reports).
  • Implement completeness checks against inventory each cycle.
  • Produce an “audit-ready” packet: procedure + last review bundle + one end-to-end closure example.
  • Add quality checks: evidence timestamps, reviewer identity, and clear closure validation.

Frequently Asked Questions

What qualifies as a “periodic” review for CM-7(1)?

The excerpt requires periodic review but does not specify an exact frequency, so you must define and document a cadence that matches your risk and change rate. Auditors then test whether you followed your own cadence with dated evidence. 1

Do we need to review every port and service on every host?

You need coverage that is defensible for the system boundary you define, using data sources that reflect real configurations and exposure. Many teams start with internet-facing assets and standardized images, then expand as inventories and tooling mature. 2

We use managed cloud services. How do we satisfy CM-7(1) if we can’t see the underlying OS?

Review what you can control and observe: service configuration, network access rules, enabled endpoints, and integrations. For what you cannot observe directly, retain third-party attestations or service documentation as supporting evidence and document the boundary in your procedure. 2

Is a vulnerability scan enough evidence of CM-7(1)?

A scan can support “observed state,” but CM-7(1) also expects a review decision and action: unnecessary/nonsecure items identified, removed/disabled, or formally excepted. Keep the scan output plus the findings disposition and remediation records. 1

How should we document exceptions for services we must keep (for example, legacy protocols)?

Record the business justification, compensating controls, named approver, and a required re-review trigger so exceptions do not become permanent. Link the exception to the specific system and configuration item so it is testable. 2

What’s the fastest way to become audit-ready for CM-7(1)?

Produce one complete review cycle for a meaningful slice of scope: baseline, observed evidence, findings, tickets/changes, and validation proof. Daydream helps by mapping CM-7(1) to an owner, a written procedure, and recurring evidence artifacts so each cycle produces a consistent audit packet. 1

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream