CM-14: Signed Components
CM-14: Signed Components requires you to block installation of specified software/firmware/components unless your systems verify the component is digitally signed and the signing certificate is on an organization-approved trust list. Operationalize it by defining the in-scope component types, approving acceptable publishers/certificates, enforcing signature checks in endpoint/server tooling, and retaining logs and exception approvals as evidence. 1
Key takeaways:
- Define exactly which “components” are in scope, then enforce “no valid signature, no install.”
- Maintain an approved certificate/publisher trust list and a process to add/remove trust quickly.
- Evidence is mostly technical: policy, baselines, enforcement settings, install/deny logs, and documented exceptions.
The CM-14: Signed Components requirement is one of the fastest ways to reduce supply-chain risk in day-to-day operations: it shifts your posture from “detect bad software after the fact” to “stop untrusted code from landing.” CM-14’s text is short, but implementation decisions are not. You must decide what counts as a “component” for your environment (applications, packages, drivers, container images, scripts, browser extensions, firmware updates), what “recognized and approved” means for certificates and publishers, and where enforcement happens (endpoints, servers, CI/CD, golden images, MDM, and privileged install workflows).
From an assessor’s perspective, CM-14 fails in predictable ways: the policy exists but isn’t enforced, signature verification is enabled on some fleets but not others, exceptions are informal, and teams cannot show the trust list and how it is governed. This page is written for a Compliance Officer, CCO, or GRC lead who needs requirement-level implementation guidance you can hand to IT/SecOps and then audit with confidence against NIST SP 800-53 control expectations. 2
Regulatory text
“Prevent the installation of {{ insert: param, cm-14_prm_1 }} without verification that the component has been digitally signed using a certificate that is recognized and approved by the organization.” 1
Operator translation: you must (1) identify the component types you will control, (2) require a digital signature check before install, (3) only accept signatures that chain to certificates/publishers your organization has approved, and (4) make the control preventative (blocking), not advisory. 1
Plain-English interpretation (what CM-14 really demands)
CM-14 is a “gate” control in configuration management. It expects technical enforcement that stops untrusted or tampered components from being installed. “Signed” is necessary but not sufficient: CM-14 also requires that the signing certificate be recognized and approved by your organization, which implies a governed trust decision (who you trust, why, and how that trust is maintained). 1
The practical outcome you want: if a user or admin tries to install software (or another defined component) that is unsigned, signed by an unknown publisher, or signed with an unapproved certificate chain, the install is blocked and recorded, with a controlled exception path for business-critical cases.
Who it applies to (entity and operational context)
Entity scope: organizations implementing NIST SP 800-53 controls, including federal information systems and contractor systems handling federal data. 2
Operational scope (where CM-14 shows up):
- Endpoints: user laptops/desktops where installers, browser extensions, and drivers appear.
- Servers and admin hosts: where agents, packages, and configuration tooling run.
- Build and deployment paths: golden images, package repositories, CI/CD dependencies, and artifacts promoted to production.
- Privileged install workflows: remote support tools, admin “break glass,” and software distribution platforms.
If you have a mixed environment, you do not need one tool. You do need one consistent rule: only approved signatures can cross the install boundary.
What you actually need to do (step-by-step)
1) Define “component” for your CM-14 scope
CM-14’s parameter (“{{ insert: param, cm-14_prm_1 }}”) is where many programs stumble: teams enforce signatures for “apps” but forget drivers, agents, and scripts. Create an explicit list of component categories you will block when unsigned or unapproved, such as:
- OS applications and packages (MSI/EXE, macOS PKG, Linux RPM/DEB)
- Kernel extensions and drivers
- Privileged agents (EDR, backup, monitoring, remote access)
- Container images and orchestrator add-ons (if applicable)
- Firmware updates (where your process controls installation)
Write this list into your CM/secure configuration standard so assessors can see the intended enforcement boundary. 1
2) Establish your “recognized and approved” trust model
You need a documented trust decision, not “whatever the OS trusts.” Build and govern:
- Approved publishers / signing identities (e.g., specific software publishers you allow)
- Approved certificate authorities or chains (internal PKI and/or external trust anchors you accept)
- Approval criteria (who reviews, what evidence is required, and when trust is revoked)
Operationally, treat this like an allowlist with change control. Tie changes to a ticket, include justification, and record the approver.
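As a minimal sketch of the governed-allowlist idea (the thumbprint, publisher name, ticket ID, and field names below are all hypothetical, not a standard schema), each trust decision can carry its approval metadata so every entry answers “who approved this, under what ticket, and until when”:

```python
from datetime import date

# Hypothetical trust-list entries; replace with your PKI/publisher identities.
APPROVED_SIGNERS = {
    # signing-cert thumbprint -> governance metadata recorded at approval time
    "A1B2C3D4E5F6": {
        "publisher": "Example Software GmbH",  # illustrative publisher
        "approved_by": "change-advisory-board",
        "ticket": "CHG-1042",                  # illustrative change ticket
        "expires": date(2026, 6, 30),          # trust is time-bound, not permanent
    },
}

def is_trusted(thumbprint: str, on: date) -> bool:
    """True only if the signing cert is on the approved list and not expired."""
    entry = APPROVED_SIGNERS.get(thumbprint.upper())
    return entry is not None and on <= entry["expires"]

print(is_trusted("a1b2c3d4e5f6", date(2026, 1, 1)))  # approved and current: True
print(is_trusted("a1b2c3d4e5f6", date(2027, 1, 1)))  # approval lapsed: False
print(is_trusted("ffffffffffff", date(2026, 1, 1)))  # unknown signer: False
```

The point of the structure, whatever tool actually stores it, is that removal is as auditable as addition: an expired entry fails closed rather than lingering as implicit trust.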
3) Implement technical enforcement points (block, don’t just alert)
Pick enforcement mechanisms that can prevent installs. Common patterns (choose what fits your platform):
- Application allowlisting / application control to require signed executables and restrict by publisher/certificate.
- MDM/UEM controls to restrict untrusted packages on managed endpoints.
- Privileged access workflows where installation occurs only through managed software distribution or elevated sessions with guardrails.
- Repository controls for packages and images, where only signed artifacts can be promoted.
Your enforcement must be hard enough that “unsigned but installed anyway” is rare and explainable via an exception record.
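Whichever enforcement mechanism you pick, the decision it must encode is the same two-part test. A minimal sketch (hypothetical publisher list and event schema; your app-control tool implements this natively) shows that CM-14 combines both conditions — the signature must verify and the signer must be organization-approved — and that every decision should leave an evidence record:

```python
from datetime import datetime, timezone

APPROVED_PUBLISHERS = {"Example Software GmbH"}  # hypothetical allowlist

def evaluate_install(component, signature_valid, publisher):
    """Block unless the signature verifies AND the publisher is approved;
    return (action, event) so every decision is logged as evidence."""
    if not signature_valid:
        reason = "unsigned-or-invalid-signature"
    elif publisher not in APPROVED_PUBLISHERS:
        reason = "publisher-not-approved"
    else:
        reason = None
    action = "allow" if reason is None else "block"
    event = {  # retained as operational evidence
        "ts": datetime.now(timezone.utc).isoformat(),
        "component": component,
        "publisher": publisher,
        "action": action,
        "reason": reason,
    }
    return action, event

action, event = evaluate_install("driver.sys", True, "Unknown LLC")
print(action, event["reason"])  # block publisher-not-approved
```

Note the second branch: a valid signature from an unapproved publisher is still a block, which is exactly the distinction assessors probe.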
4) Build an exception process that won’t collapse under pressure
Unsigned components happen: niche hardware drivers, legacy line-of-business tools, emergency hotfixes. CM-14 does not forbid exceptions, but you must control them:
- Require a time-bound exception with business owner sign-off.
- Require compensating controls (hash allowlist, isolated host, additional monitoring) when signature trust cannot be established.
- Track exceptions centrally and review them regularly for removal.
Assessors will look for a pattern: exceptions that never expire are a sign the control is not functioning.
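The “never expires” failure mode is easy to detect mechanically. A sketch of the aging check (register fields are illustrative; your GRC tool likely has an equivalent report) that flags exceptions past their end date:

```python
from datetime import date

def expired_exceptions(register, today):
    """Return IDs of exceptions past their end date -- candidates for
    removal, or for formal re-approval with fresh sign-off."""
    return [e["id"] for e in register if e["expires"] < today]

register = [  # hypothetical register rows
    {"id": "EXC-7",  "component": "legacy-hr-tool",  "expires": date(2025, 1, 31)},
    {"id": "EXC-12", "component": "scanner-driver",  "expires": date(2026, 1, 31)},
]
print(expired_exceptions(register, date(2025, 6, 1)))  # ['EXC-7']
```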
5) Operational monitoring and continuous assurance
Turn enforcement into a measurable program:
- Alert on blocked install attempts and trend by host/group.
- Review trust-list changes and confirm they follow your approval workflow.
- Sample endpoints/servers to confirm the policy is applied (not just configured in a template).
If you can’t show ongoing operation, CM-14 will be assessed as a “paper control.”
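Trending is the cheapest of these to automate. A sketch (illustrative log-export schema) that aggregates blocked install attempts by host group so spikes and outliers surface in periodic review:

```python
from collections import Counter

def blocked_by_group(events):
    """Count blocked install attempts per host group for trend review."""
    return Counter(e["group"] for e in events if e["action"] == "block")

events = [  # hypothetical export from your enforcement tooling
    {"host": "lap-01", "group": "endpoints", "action": "block"},
    {"host": "srv-09", "group": "servers",   "action": "allow"},
    {"host": "lap-02", "group": "endpoints", "action": "block"},
]
print(blocked_by_group(events).most_common())  # [('endpoints', 2)]
```

A group that suddenly stops producing block events is as interesting as one that spikes: it may mean enforcement silently fell off that fleet.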
6) Map ownership and recurring evidence (make it assessable)
CM-14 commonly fails because no one owns the full chain from policy to endpoints. Assign:
- Control owner: accountable for policy, scope, and audit response.
- Technical owner(s): endpoint engineering, server engineering, IAM/PAM, DevOps as needed.
- Evidence owner: the person who can pull logs/config screenshots/exports on demand.
Daydream (or any GRC system you use) becomes useful here when it maps CM-14 to a named owner, a repeatable procedure, and a defined evidence set so you can produce consistent artifacts every cycle instead of rebuilding proof during audits. 1
Required evidence and artifacts to retain
Keep evidence that proves design (what you intended) and operation (what happened).
Design artifacts
- CM-14 policy/standard stating in-scope components and the “block if not signed by approved cert” rule. 1
- Approved trust list documentation: approved publishers/cert chains, and the approval workflow.
- Configuration baselines for endpoints/servers showing signature enforcement settings.
Operational artifacts
- System-generated logs of allowed/blocked installations (sampled across fleets).
- Reports showing coverage (which hosts are under enforcement policy vs not).
- Change records for trust list additions/removals (tickets with approver).
- Exception register with approvals, rationale, compensating controls, and closure evidence.
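The coverage report in particular is worth generating rather than asserting. A sketch (hypothetical hostnames) that diffs the asset inventory against hosts confirmed under the enforcement policy, producing both a percentage and the gap list assessors will ask about:

```python
def coverage(inventory, enforced_hosts):
    """Report percent of inventoried hosts under enforcement, plus the gap list."""
    gaps = sorted(set(inventory) - set(enforced_hosts))
    pct = 100 * (len(set(inventory)) - len(gaps)) / len(set(inventory))
    return pct, gaps

pct, gaps = coverage(["lap-01", "lap-02", "srv-09"], ["lap-01", "srv-09"])
print(f"{pct:.0f}% covered; gaps: {gaps}")
```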
Common exam/audit questions and hangups
Expect these and pre-answer them in your evidence pack:
- “What components are in scope for CM-14 in your environment?” Have the list ready and consistent with tooling coverage.
- “Show me the enforcement configuration.” Auditors will want proof from the control plane (policy) and from endpoints (effective state).
- “What certificates are ‘recognized and approved’?” Produce the trust list and a recent change ticket.
- “How do you handle unsigned but required software?” Show exception workflow and a few closed exceptions.
- “How do you know this is working continuously?” Provide blocked install events, trend reports, and periodic review outputs.
Frequent implementation mistakes (and how to avoid them)
- Mistake: relying on “signed” without verifying approval. A valid signature from an unapproved publisher still fails the “recognized and approved” expectation. Maintain a curated trust list with governance. 1
- Mistake: scoping too narrowly (only user apps). Include drivers, agents, and admin-installed components in your scope statement, then enforce accordingly.
- Mistake: monitoring-only mode. CM-14 says “prevent the installation,” so you need blocking in your control design. 1
- Mistake: exception-by-email. Informal approvals are hard to defend. Route exceptions through a tracked system with approver identity, time bounds, and compensating controls.
- Mistake: inconsistent enforcement across fleets. A common audit finding is “implemented on endpoints but not servers” (or vice versa). Document justified carve-outs and a rollout plan.
Enforcement context and risk implications
No public enforcement cases were provided in the source data for CM-14, so you should treat this as a control-assurance and supply-chain integrity expectation rather than a citation-driven penalty topic. The risk is practical: unsigned or unapproved components increase exposure to trojanized installers, malicious updates, and lateral movement via rogue admin tools. CM-14 reduces the chance that unauthorized code becomes “normal” system state. 2
Practical 30/60/90-day execution plan
Exact timelines vary, but these phases map to how teams actually implement CM-14 without stalling.
First 30 days (foundation and scoping)
- Name the CM-14 control owner and technical owners.
- Define the in-scope “component” categories for your environment and record them in a standard. 1
- Inventory current install paths (software center, manual installs, CI/CD artifacts, admin tools).
- Draft the trust model: what “recognized and approved” means, who approves, and how trust is revoked.
Days 31–60 (enforcement pilot + evidence design)
- Choose enforcement points per platform (endpoint app control/MDM/server controls/build pipeline checks).
- Pilot blocking in a controlled group; tune for business software that breaks.
- Stand up the exception workflow and exception register.
- Define your evidence pack: baseline configs, log sources, and recurring exports.
Days 61–90 (scale + operational cadence)
- Roll enforcement to broader fleets with documented carve-outs.
- Implement recurring review: trust list changes, exception aging, and blocked install trends.
- Run an internal “mini-audit”: sample hosts, reproduce evidence, and confirm you can answer common audit questions quickly.
- Register CM-14 in Daydream (or your GRC tool) with owners, procedure, and evidence tasks so collection is repeatable. 1
Frequently Asked Questions
What counts as a “component” under the CM-14: Signed Components requirement?
CM-14 leaves the component type as a parameter, so you must define what you will control (applications, drivers, agents, packages, images). Write the scope down and align it to where you can actually prevent installation. 1
Do we need to block all unsigned software immediately?
The requirement outcome is prevention, but most teams phase rollout to avoid business disruption. Use a pilot, add a formal exception path, then expand coverage while keeping the end state as “block unless approved.” 1
Is “signed by a trusted OS root store” enough to meet the “recognized and approved” requirement?
Not by itself. CM-14 expects the organization to recognize and approve the certificate, which usually means an explicit trust list and a process to manage it, even if you start from OS trust anchors. 1
How should we handle internally developed software?
Sign internal releases with your managed code-signing process and treat your internal CA/signing certs as part of the approved trust list. Keep approval and certificate lifecycle evidence the same way you do for third-party publishers. 1
What evidence do auditors ask for most often?
They usually want proof of enforcement (policy settings and effective state) plus operational proof (blocked/allowed install logs and exception approvals). Keep a ready evidence pack that maps directly to CM-14’s “prevent without verification” language. 1
Where does CM-14 fit with other NIST 800-53 controls?
CM-14 sits in Configuration Management and supports broader integrity and change-control goals by stopping unapproved code from being introduced. Document those relationships in your control narrative, but keep CM-14 evidence focused on signature verification and prevention. 2
Footnotes
1. NIST SP 800-53 Rev. 5, CM-14: Signed Components (OSCAL JSON).
2. NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream