CM-11(1): Alerts for Unauthorized Installations
CM-11(1) requires you to generate and route alerts when software is installed without authorization on systems in scope, so your team can quickly investigate, contain, and remediate. To operationalize it, define “authorized” software, instrument endpoints and servers to detect installs, send alerts to a monitored queue, and retain evidence that alerts fire and are handled.[1]
Key takeaways:
- Define what “unauthorized installation” means in your environment, then enforce it through allowlisting and approved software workflows.
- Generate alerts from endpoint and system telemetry, route them to an owned queue, and require documented triage and closure.
- Keep audit-ready artifacts: configuration, alert samples, tickets, and periodic testing results mapped to a control owner and procedure.
The CM-11(1) “Alerts for Unauthorized Installations” requirement is a detection-and-response expectation inside configuration management. It is less about writing a policy and more about proving that you can catch software installs that bypass your approved process. Assessors will look for two things: (1) a clear rule for what counts as “authorized,” and (2) evidence that alerts are generated, received, investigated, and closed consistently.
This requirement often fails in practice for a simple reason: teams rely on preventive controls (admin rights restrictions, allowlisting, packaging) but never validate detection coverage across all endpoints, servers, and ephemeral workloads. Another common gap is operational ownership. Alerts that land in an unmonitored inbox or a noisy SIEM dashboard do not count as an operational control.
This page gives requirement-level implementation guidance you can execute quickly: who owns the control, what systems are in scope, what configurations you need, how to route and handle alerts, and what evidence to keep. Where helpful, it also points to how teams track control design and recurring evidence in Daydream to stay assessment-ready without turning this into a monthly fire drill.
Regulatory text
Control reference: CM-11(1) “Alerts for Unauthorized Installations.”
Provided excerpt: “NIST SP 800-53 control CM-11.1.”[1]
Operator interpretation: You must implement automated or system-supported alerting that triggers when software is installed without authorization, and you must operationalize those alerts so they drive action. This is not satisfied by a statement that “users are prohibited from installing software.” You need detection plus monitored alert handling that produces evidence.[2]
Plain-English interpretation (what the requirement is really asking)
You need a reliable way to know when a machine in scope gets new software that didn’t come through your approved path. Then you need to notify the right team fast enough that the install can be investigated (benign vs malicious), contained (remove software, isolate host), and corrected (close the access/process gap).
Think of CM-11(1) as the “tripwire” behind your software control strategy:
- Prevention reduces unauthorized installs.
- Alerts prove you can detect failures of prevention and respond.
Who it applies to (entity and operational context)
Applies to:
- Federal information systems and contractor systems handling federal data where NIST SP 800-53 is the governing control set.[2]
Operational contexts where CM-11(1) is commonly assessed:
- End-user endpoints (managed laptops/desktops)
- Windows/Linux servers (including domain controllers, jump hosts, admin workstations)
- VDI / shared workstations
- Cloud workloads where an “install” can occur through package managers, golden image changes, or container build pipelines
- Third-party managed devices if they are in your system boundary and your contract makes you responsible for monitoring and response
What you actually need to do (step-by-step)
1) Define “authorized installation” in enforceable terms
Create a short standard (one page is fine) that answers:
- Authorized sources: software center/MDM, managed package repo, golden images, CI/CD pipeline, approved scripts.
- Authorized actors: IT admins, endpoint management service accounts, build pipeline identities.
- Authorized change paths: ticketed request + approval, emergency change process, pre-approved software catalog.
- Prohibited paths: local admin ad-hoc installs, unsigned installers, downloads from the internet, unmanaged package managers.
Decision point you must settle early: are you allowlisting by publisher, hash, product, or package name? Pick one approach per platform and document it.
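If you allowlist by installer hash, the enforcement check is simple to sketch. A minimal illustration, assuming a hash-keyed allowlist (the `APPROVED_HASHES` set and its contents are placeholders; in practice the allowlist would come from your software catalog or endpoint management tool):

```python
import hashlib

# Hypothetical allowlist keyed by SHA-256 installer hash.
# The single entry below is the SHA-256 of an empty file, used purely
# as a placeholder so the sketch is self-contained.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def installer_hash(path: str) -> str:
    """Compute the SHA-256 digest of an installer file, chunked for large files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def is_authorized(path: str) -> bool:
    """An install is authorized only if the installer's hash is allowlisted."""
    return installer_hash(path) in APPROVED_HASHES
```

Hash-based allowlisting is the strictest of the four approaches; publisher- or product-based allowlisting trades precision for lower maintenance, which is why the decision should be made per platform and documented.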
2) Assign a single operational owner and an escalation path
Auditors will ask “who gets paged?” and “who closes the loop?” Set:
- Control owner: typically Endpoint Engineering or Security Operations (shared is fine, but name one accountable owner).
- Alert recipient: SOC queue (SIEM/SOAR), endpoint management queue, or a dedicated security mailbox that is actively monitored.
- Escalation: IR lead for suspected malware, IT ops lead for standard policy violations.
Practical tip: write a short RACI for (a) tuning, (b) triage, (c) remediation, (d) exception approvals.
3) Instrument detection on each major platform in scope
You need telemetry that can detect software installation events. Your implementation can vary by environment, but the assessment expectation is consistent: detection exists and is enabled broadly.
A workable pattern:
- Endpoints: endpoint security/EDR agent and/or OS logging that captures install activity.
- Servers: same as endpoints, plus tighter change windows and higher severity routing.
- Cloud images/containers: detect drift from approved images, and detect unauthorized changes in build artifacts.
Define minimum coverage rules (qualitative is fine if you can’t quantify): “all corporate-managed endpoints,” “all production servers,” “all privileged admin workstations,” and document exclusions with compensating monitoring.
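On Linux hosts, one concrete telemetry source is the package manager log. A minimal sketch for Debian-family systems, extracting fresh-install events from a dpkg log so they can be forwarded to a SIEM (the equivalent on other platforms would be MSI event logs on Windows or EDR telemetry on macOS):

```python
import re

# dpkg logs fresh installs as lines like:
#   2024-01-15 10:00:00 install htop:amd64 <none> 3.0.5-7
# The "<none>" field marks a new install rather than an upgrade.
INSTALL_LINE = re.compile(
    r"^(?P<ts>\S+ \S+) install (?P<pkg>\S+):(?P<arch>\S+) <none> (?P<version>\S+)$"
)

def parse_install_events(log_text: str):
    """Yield (timestamp, package, version) for each fresh install in the log."""
    for line in log_text.splitlines():
        m = INSTALL_LINE.match(line)
        if m:
            yield m.group("ts"), m.group("pkg"), m.group("version")
```

Each yielded event would then be checked against the approved catalog and the identity that ran the install before deciding whether to alert.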
4) Build alerts that are specific enough to action
Poor alerts fail audits because they are noisy and ignored. Good alerts have:
- Hostname, user, process, installer name/path, and time
- Software name/version (where available)
- Source (download URL, repo, package manager, installer hash) if your tools capture it
- Context: is the software on the approved list? did it come from the approved deployment tool?
Severity guidance:
- High severity: installs on privileged hosts, security tools being installed/removed, unsigned installers.
- Medium: unknown software on standard endpoints.
- Low: installs that match approved packages but were executed outside the normal tool (often indicates a process gap).
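The severity rules above can be expressed as a small routing function. A sketch, with the caveat that the field names (`host_is_privileged`, `signed`, `on_allowlist`, `via_approved_tool`) are illustrative and must be mapped to whatever your alert pipeline actually emits:

```python
def alert_severity(host_is_privileged: bool, signed: bool,
                   on_allowlist: bool, via_approved_tool: bool) -> str:
    """Map install-event context to an alert severity per the guidance above."""
    if host_is_privileged or not signed:
        # Installs on privileged hosts or unsigned installers: high severity.
        return "high"
    if on_allowlist and not via_approved_tool:
        # Approved package installed outside the deployment tool:
        # usually a process gap, not an intrusion.
        return "low"
    if not on_allowlist:
        # Unknown software on a standard endpoint.
        return "medium"
    # Allowlisted and delivered via the approved tool: informational only.
    return "info"
```

Encoding the rules this way also gives you something concrete to show an assessor when they ask how severity is assigned.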
5) Route alerts to a monitored system and require ticketed handling
Operationalize with a workflow:
- Alert triggers.
- Alert creates a case/ticket automatically (preferred) or is manually logged.
- Triage within your defined response process:
- Validate if authorized (check approved software list and change/ticket).
- If unauthorized, determine risk (malware suspicion, persistence, lateral movement potential).
- Response:
- Remove/quarantine software.
- Isolate host if needed.
- Revoke admin privileges if misuse occurred.
- Create a follow-up task to address root cause (packaging, allowlist update, training).
- Closure requires documented disposition and evidence.
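The workflow above hinges on one enforcement point: a ticket cannot close without a documented disposition. A minimal in-memory sketch of that rule (the `Ticket` class and required fields are hypothetical; a real implementation would live in your ticketing or SOAR platform):

```python
from dataclasses import dataclass, field
import uuid

# Fields your triage playbook requires before a case may be closed.
REQUIRED_CLOSE_FIELDS = ("disposition", "remediation")

@dataclass
class Ticket:
    alert: dict
    id: str = field(default_factory=lambda: uuid.uuid4().hex)
    status: str = "open"
    fields: dict = field(default_factory=dict)

    def close(self, **closure_fields):
        """Refuse closure unless the documented disposition fields are present."""
        missing = [f for f in REQUIRED_CLOSE_FIELDS if f not in closure_fields]
        if missing:
            raise ValueError(f"cannot close ticket, missing: {missing}")
        self.fields.update(closure_fields)
        self.status = "closed"

def open_ticket_from_alert(alert: dict) -> Ticket:
    """Every unauthorized-install alert becomes a case automatically."""
    return Ticket(alert=alert)
```

The point of the guard is evidentiary: every closed case carries the fields an assessor will ask to see.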
If you use Daydream, this is where it fits naturally: map CM-11(1) to a named owner, the triage procedure, and the recurring evidence set (alert samples, ticket exports, tuning reviews) so you can answer assessor requests without rebuilding the story each time.
6) Test the control on a recurring basis
You need to prove the alerts fire. Build a simple test script/runbook:
- Attempt an install that should be blocked or flagged in a test group.
- Confirm the alert triggers and arrives in the queue.
- Confirm a ticket is created and can be closed with the right fields.
Keep the test lightweight and repeatable. The goal is evidence of ongoing operation, not a one-time setup screenshot.
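The runbook above can be framed as a small harness: trigger a marked test install, then poll the alert queue until the marker appears or a timeout passes. A sketch, where `trigger_test_install` and `poll_alert_queue` are stand-ins for your real tooling (for example, dropping a benign marked installer on a test host and querying the SIEM API):

```python
import time

def run_control_test(trigger_test_install, poll_alert_queue,
                     timeout_s: float = 300, interval_s: float = 1) -> dict:
    """Trigger a test install and confirm the matching alert reaches the queue."""
    marker = trigger_test_install()  # assumed to return a unique marker string
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        for alert in poll_alert_queue():
            if marker in alert.get("installer", ""):
                # Record enough detail to serve as the control test evidence.
                return {"result": "pass", "alert": alert, "marker": marker}
        time.sleep(interval_s)
    return {"result": "fail", "marker": marker}
```

The returned dict doubles as the test record: date, marker, and the alert that fired (or the failure to investigate).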
Required evidence and artifacts to retain
Keep artifacts that show design + operation:
Design evidence
- Approved software policy/standard defining “authorized installation”
- Software allowlist/approved catalog (or reference to the authoritative system)
- Alert logic documentation (what constitutes unauthorized, severity rules, routing destinations)
- RACI / ownership and escalation path
Operational evidence
- Screenshots or exports showing alert rules enabled and routing configured
- Sample alerts (sanitized) showing relevant fields (host, user, software, timestamp)
- Ticket/case records for a small set of alerts showing triage, disposition, and remediation
- Control test record (date, test steps, expected outcome, actual outcome, remediation if failed)
- Exception register for approved deviations (with expiry and compensating controls)
Common audit questions and hangups
Assessors commonly probe:
- “Show me how an unauthorized install is detected on endpoints and on servers.”
- “Where do alerts go, and who is responsible for triage?”
- “How do you define authorized software? Where is the source of truth?”
- “Show evidence of a real alert and the ticket trail to closure.”
- “How do you handle developers who need package managers or admin rights?”
Hangups that trigger findings:
- Alerts exist but no one can show triage records.
- Coverage gaps (servers monitored, endpoints not; corporate devices monitored, VDI not).
- “Authorized” is undefined or purely informal (“we just know what’s normal”).
Frequent implementation mistakes (and how to avoid them)
- Relying on policy-only controls. Fix: add technical detection with alert routing and ticketed handling.
- Noisy alerts that get tuned out. Fix: enrich alerts with allowlist context; suppress known-good installers executed by approved tools; maintain a tuning log.
- No ownership. Fix: name a single accountable owner and an on-call or daily review rotation; document escalation.
- Exceptions become permanent. Fix: require expiry dates and periodic review; tie exceptions to compensating controls (extra monitoring, restricted scope).
- Cloud and CI/CD blind spots. Fix: treat golden images and build pipelines as software installation paths; detect drift and unauthorized changes in artifacts.
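Drift detection for the cloud/CI-CD path can be as simple as diffing the packages on a running host against the manifest of the golden image it was built from. A sketch, assuming a `{package: version}` manifest format (adapt to whatever your image pipeline produces):

```python
def detect_drift(golden: dict, running: dict) -> dict:
    """Diff a running host's package set against its golden image manifest."""
    # Packages present on the host but absent from the image: possible
    # unauthorized installs.
    added = {p: v for p, v in running.items() if p not in golden}
    # Version changes without an image rebuild: out-of-band updates.
    changed = {p: (golden[p], v) for p, v in running.items()
               if p in golden and golden[p] != v}
    # Packages removed from the host (e.g. a security agent uninstalled).
    removed = {p: v for p, v in golden.items() if p not in running}
    return {"added": added, "changed": changed, "removed": removed}
```

Anything in `added` or `changed` that didn't come through the build pipeline is an alert candidate under the same severity rules as endpoint installs.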
Enforcement context and risk implications
No public enforcement cases were provided in the source material for this requirement, so this page does not list specific actions or penalties.
Risk-wise, unauthorized installations are a common precursor to malware execution, persistence mechanisms, and unapproved remote access tooling. From a governance angle, repeated unauthorized installs often indicate excessive local admin rights, weak change control, or unmanaged endpoints. CM-11(1) gives you an auditable way to detect and correct those breakdowns.[2]
Practical execution plan (30/60/90-day plan)
Use this as an execution sequence, not a calendar promise.
First 30 days (Immediate: define + stand up alerting)
- Confirm scope: endpoints, servers, privileged workstations, VDI, key cloud workloads.
- Publish a crisp definition of “authorized installation” and identify the source of truth for approved software.
- Enable install-detection telemetry on one platform end-to-end (often endpoints first).
- Route alerts to a monitored queue and create a minimum triage playbook (fields required to close a ticket).
By 60 days (Near-term: expand coverage + reduce noise)
- Expand detection to remaining platforms (servers, VDI, admin workstations).
- Add enrichment: correlate alerts against approved catalog and approved deployment tools.
- Implement exception workflow with expiry and approvals.
- Run a control test and retain evidence; fix gaps uncovered by the test.
By 90 days (Ongoing: make it durable and audit-ready)
- Establish steady-state operations: recurring review of alert volume, tuning decisions, and closure quality.
- Add reporting that shows the control is operating (sample cases, trend notes, exception status).
- Map CM-11(1) in your GRC system (including Daydream) to the owner, procedure, and recurring evidence artifacts so audits become evidence retrieval, not archaeology.
Frequently Asked Questions
What counts as an “unauthorized installation” for CM-11(1)?
Any software installed outside your approved sources, approved actors, or approved change paths should be treated as unauthorized. Write this definition so it can be enforced technically and tested in a control runbook.[2]
Do I need to block installations, or just alert?
CM-11(1) is specifically about alerting on unauthorized installations, not blocking them. Blocking may exist elsewhere in your control set, but you still need alerts and an operational response trail.[1]
How do we handle developers who need admin rights or package managers?
Use a documented exception path with a defined scope (devices/users) and compensating monitoring, and ensure installs still generate alerts for review. Pair it with an approved internal repository or managed packaging flow to reduce ad-hoc installs.
What evidence is strongest for auditors?
A small set of real alert-to-ticket examples with clear disposition and remediation is usually stronger than screenshots alone. Add a periodic control test record that proves alerts still fire after tooling changes.
Can our third-party MSP handle these alerts?
Yes, if your contract and operating procedures make alert monitoring, triage, and response explicit, and you can obtain the underlying evidence (alerts, cases, closure notes). You still own accountability for the control outcome.
How do we keep this from turning into constant false positives?
Start with higher-risk hosts and common unauthorized tools, then tune by adding allowlist context and suppressing known-good deployment methods. Keep a tuning log so you can explain why you changed rules during an assessment.
Footnotes
[1] NIST SP 800-53 Rev. 5 OSCAL JSON, control CM-11(1).
[2] NIST SP 800-53 Rev. 5.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream