User-Installed Software

To meet the user-installed software requirement, you must define who can install software, under what conditions, and how you will technically block, approve, and verify installations across your environment. NIST SP 800-53 Rev. 5 CM-11 expects a written policy, enforced controls (not just guidance), and routine compliance monitoring with evidence. 1

Key takeaways:

  • Write a clear “user-installed software” policy that names allowed/blocked software and required approvals. 1
  • Enforce the policy with technical controls (least privilege, allowlisting, endpoint management, and logging), not training alone. 1
  • Monitor compliance on a defined cadence, document exceptions, and keep artifacts that prove enforcement and review. 1

“User-installed software” looks simple until you get to the edge cases: browser extensions, developer tooling, package managers, SaaS desktop agents, and “portable” executables that don’t require admin rights. CM-11 forces you to stop treating these as ad hoc decisions and make them governable: define policy, enforce it, and monitor compliance. 1

For a Compliance Officer, CCO, or GRC lead, the operational goal is straightforward: reduce unvetted code entering your environment while keeping the business functional. That means aligning three moving parts: (1) rules users can understand (what’s permitted and what isn’t), (2) technical guardrails that make the rules real, and (3) a repeatable review motion that produces audit-ready evidence.

This page translates CM-11 into requirement-level implementation steps you can hand to IT, security engineering, and endpoint teams. It also calls out common audit friction points: auditors typically ask you to prove enforcement and show monitoring results over time, not just present a policy document.

Regulatory text

NIST SP 800-53 Rev. 5 CM-11 requires you to: (1) establish organization-defined policies governing the installation of software by users; (2) enforce those policies through organization-defined methods; and (3) monitor compliance at an organization-defined frequency. 1

What the operator must do:

  • Define the rules: who can install, what they can install, where they can install it, and how exceptions work.
  • Enforce the rules with technical mechanisms that prevent or constrain noncompliant installs.
  • Monitor routinely and keep evidence that monitoring occurred and issues were handled. 1

Plain-English interpretation (what CM-11 is really asking)

Users installing software is a controlled activity, not a personal preference. You need an explicit stance on:

  • Authority: which roles (if any) can install software without prior approval.
  • Scope: endpoints (workstations/laptops), servers, VDI, managed mobile devices, and privileged admin workstations.
  • Software types: installed applications, browser extensions, scripts, packages (e.g., Python/npm), drivers, and “portable” apps.
  • Decisioning: what “approved” means (security review, licensing review, data handling review), and how fast approvals happen.

Auditors interpret CM-11 as “show me the controls that stop unapproved software from being installed” plus “show me you check that the controls work.” Policy-only implementations routinely fail in practice because they don’t reduce risk or produce technical evidence. 1

Who it applies to

Entities: Cloud Service Providers and Federal Agencies operating against FedRAMP Moderate expectations for this control. 1

Operational context (where CM-11 shows up in real life):

  • Corporate-managed endpoints used by employees and contractors (including third parties with issued devices).
  • Administrative workstations used to manage production cloud infrastructure.
  • Build systems and CI runners where developers may add tooling.
  • Support environments (jump boxes, bastions, VDI) where “quick installs” are common.
  • Kiosk/shared devices and call center images where software drift causes instability.

What you actually need to do (step-by-step)

1) Set policy boundaries (write what you will enforce)

Create a “User-Installed Software Policy” (often a subsection of Configuration Management or Endpoint Security) with these required decisions:

  • Default rule: Standard users cannot install software without approval, unless explicitly allowed.
  • Approved software sources: company app catalog, managed app store, signed packages from approved repositories, or IT-installed software only.
  • Prohibited categories: remote access tools, unauthorized password managers, crypto miners, P2P clients, unsigned drivers, and unapproved browser extensions.
  • Approval workflow: security review inputs (publisher, signature, patching cadence, data access), license review, and business justification.
  • Exceptions: time-bound, owner, compensating controls, and removal criteria.
  • Monitoring cadence: define how often you check compliance and what triggers an out-of-cycle check (new critical vulnerability, new malware campaign, audit finding). 1

Deliverable: a policy that is specific enough to map to controls and logs.
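One way to make the policy "specific enough to map to controls and logs" is to keep the core decisions in a machine-readable stub alongside the written document. The sketch below is illustrative only; every field name, source, and category is a hypothetical example, not something CM-11 mandates:

```python
# Illustrative machine-readable stub of the policy decisions above.
# All names and values are hypothetical examples, not CM-11 requirements.
POLICY = {
    "default_rule": "deny",  # standard users cannot install without approval
    "approved_sources": {"company_catalog", "managed_app_store"},
    "prohibited_categories": {"remote_access", "p2p", "crypto_miner"},
    "monitoring_cadence_days": 30,
}

def install_permitted(source: str, category: str, approved: bool) -> bool:
    """Apply the default rule: deny unless the source is approved,
    the category is not prohibited, and an approval exists."""
    if category in POLICY["prohibited_categories"]:
        return False
    if source not in POLICY["approved_sources"]:
        return False
    return approved or POLICY["default_rule"] == "allow"

print(install_permitted("company_catalog", "productivity", approved=True))  # True
print(install_permitted("vendor_site", "productivity", approved=True))      # False
print(install_permitted("company_catalog", "p2p", approved=True))           # False
```

A stub like this is not enforcement by itself; its value is that the written policy, the endpoint configuration, and the monitoring reports can all be checked against the same explicit decisions.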

2) Define the enforcement methods (technical controls list)

Pick a small set of enforcement mechanisms and document them as “methods” under CM-11:

  • Least privilege on endpoints: remove local admin rights by default; use privileged elevation tools for approved installs.
  • Application allowlisting / execution control: only approved binaries/scripts run; block unknown publishers and unsigned code where feasible.
  • Endpoint management (MDM/EDR configuration): restrict install sources, require signed installers, control browser extension install, prevent tamper.
  • Software distribution: approved apps delivered via managed catalog; users request apps through ticketing.
  • Server/build environment controls: lock down package repositories, pin dependencies, restrict outbound downloads, control who can modify base images.
  • Logging: collect install events, execution events, and policy blocks centrally for investigation and audit evidence.

Document “what enforces what.” Example mapping:

  • “No local admin” enforces “users can’t install system-wide software.”
  • “Extension allowlist” enforces “only approved extensions in managed browsers.”
  • “App allowlisting” enforces “portable executables cannot run.” 1
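The mapping above stays auditable if you hold it as data and routinely check that every policy rule has a documented enforcement method and logging source. A minimal Python sketch, with hypothetical rule names, log sources, and owners:

```python
# Illustrative "what enforces what" mapping; all entries are examples.
ENFORCEMENT_MAP = [
    # (policy rule, enforcement method, logging source, control owner)
    ("No system-wide installs by standard users", "No local admin",
     "endpoint install events", "Endpoint team"),
    ("Only approved extensions in managed browsers", "Extension allowlist",
     "browser policy logs", "Endpoint team"),
    ("Portable executables cannot run", "App allowlisting",
     "execution-control block events", "Security engineering"),
]

def unmapped_rules(policy_rules: list[str]) -> list[str]:
    """Flag policy rules with no documented enforcement method and log source."""
    covered = {rule for rule, method, log, owner in ENFORCEMENT_MAP
               if method and log}
    return [r for r in policy_rules if r not in covered]

print(unmapped_rules(["Portable executables cannot run", "No unsigned drivers"]))
# → ['No unsigned drivers']
```

Running a coverage check like this before each policy revision catches rules that exist only on paper, which is exactly the gap auditors probe.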

3) Build an intake and approval path that users will follow

If approvals take too long, users route around you (portable apps, personal devices, shadow IT). Define:

  • Request form fields: app name/version, publisher, use case, required permissions, data access needs, and urgency.
  • Risk review checklist: signature validation, update mechanism, admin requirement, telemetry, and data storage behavior.
  • Decision outcomes: approve and publish to catalog, approve with constraints, deny with alternatives, or grant a time-bound exception.

Practical tip: keep a “pre-approved” list for common roles (finance, engineering, support) to reduce repetitive reviews, but tie it to your allowlisting/catalog so it stays enforceable.
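A lightweight way to keep the request form usable is to validate required fields before a request enters review, so reviewers never chase missing basics. The field names below mirror the bullet above and are illustrative:

```python
# Illustrative intake completeness check; field names are examples that
# mirror the request-form bullet above, not a mandated schema.
REQUIRED_FIELDS = {"app_name", "version", "publisher", "use_case",
                   "data_access", "urgency"}

def incomplete_fields(request: dict) -> list[str]:
    """Return missing fields so the requester can fix them before review."""
    return sorted(REQUIRED_FIELDS - request.keys())

print(incomplete_fields({"app_name": "DrawTool", "version": "2.1",
                         "publisher": "Acme"}))
# → ['data_access', 'urgency', 'use_case']
```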

4) Monitor compliance on a defined frequency (and prove it)

Monitoring must be operational, not aspirational. Implement:

  • Automated reporting: inventory installed software and extensions; identify drift from baseline; flag prohibited categories.
  • Alerting: blocks or unauthorized install attempts generate tickets for follow-up.
  • Periodic review: compliance owner reviews reports, signs off, and tracks remediation to closure.

What auditors want to see: dated reports, exceptions, and remediation evidence that show the monitoring loop works. 1
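The automated-reporting step above can be sketched as a simple inventory diff against the approved catalog and prohibited list, with a dated header so each run doubles as evidence. Application names and data shapes below are assumptions for illustration:

```python
from datetime import date

# Illustrative drift check: compare installed-software inventory per host
# against the approved catalog and prohibited list. All names are examples.
APPROVED = {"7-Zip", "VS Code"}
PROHIBITED = {"AnyRemote"}  # hypothetical prohibited remote-access tool

def compliance_report(inventory: dict[str, set[str]]) -> dict:
    """Return per-host unapproved/prohibited findings with a report date."""
    findings = {}
    for host, apps in inventory.items():
        drift = apps - APPROVED
        findings[host] = {
            "unapproved": sorted(drift - PROHIBITED),
            "prohibited": sorted(drift & PROHIBITED),
        }
    return {"report_date": date.today().isoformat(), "findings": findings}

report = compliance_report({"host-01": {"7-Zip", "AnyRemote", "Notepad++"}})
print(report["findings"]["host-01"])
# → {'unapproved': ['Notepad++'], 'prohibited': ['AnyRemote']}
```

In practice the inventory would come from your MDM/EDR export; the point of the sketch is the loop shape: dated report, categorized findings, tickets for anything prohibited.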

5) Handle exceptions and removals cleanly

Define an exception process with:

  • Named business owner and security approver
  • Compensating controls (extra logging, network segmentation, limited user account)
  • Time limit and re-approval conditions
  • Removal plan (uninstall date, replacement tool)

Also define an uninstall/removal workflow for prohibited or vulnerable software that appears in inventory.
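The exception fields above lend themselves to a register you can query, so expired entries surface automatically instead of becoming permanent. A minimal sketch with hypothetical entries:

```python
from datetime import date

# Illustrative exception-register entry using the fields listed above;
# names and dates are hypothetical examples.
EXCEPTIONS = [
    {"app": "LegacyTool", "owner": "Finance", "approver": "Security",
     "expires": date(2025, 6, 30), "compensating_controls": ["extra logging"]},
]

def expired_exceptions(register: list[dict], today: date) -> list[str]:
    """Return apps whose exception has lapsed and needs removal or re-approval."""
    return [e["app"] for e in register if e["expires"] < today]

print(expired_exceptions(EXCEPTIONS, date(2025, 7, 1)))  # → ['LegacyTool']
```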

6) Make it measurable (control objectives and success criteria)

Translate CM-11 into testable objectives:

  • “Unauthorized installers are blocked on managed endpoints.”
  • “Approved software is installed through managed distribution.”
  • “Software inventory is reviewed and exceptions are tracked.”

This is where tools like Daydream help: you can map CM-11 to your specific enforcement mechanisms, attach policy, tickets, and reports as evidence, and keep the monitoring cadence and sign-offs in one audit-ready workspace.

Required evidence and artifacts to retain

Keep artifacts that prove each clause of CM-11 (policy, enforcement, monitoring). Common evidence set:

  • Policy document covering user-installed software rules, approvals, exceptions, and monitoring frequency. 1
  • Technical configuration evidence: screenshots/exports of MDM policies, allowlisting rules, browser extension restrictions, privilege management settings.
  • Approved software catalog (or list) with owner and review history.
  • Request/approval records: tickets with approvals, risk notes, and deployment actions.
  • Exception register: scope, justification, compensating controls, approval, expiry, closure.
  • Monitoring outputs: software inventory reports, blocked install events, alerts, periodic review sign-off, and remediation tracking.
  • Training/communications (optional but useful): user guidance on how to request software and what is prohibited.

Common audit questions and hangups

Expect these questions:

  • “Show me how a standard user is prevented from installing software.” (They want technical proof.)
  • “How do you control browser extensions and portable executables?”
  • “Where is your approved software list, and how is it kept current?”
  • “How often do you review installed software, and who signs off?”
  • “Show exceptions and prove they expire or get re-approved.” 1

Hangups that stall audits:

  • Policy says “users can’t install,” but local admin rights are widespread.
  • Monitoring exists, but no evidence of review and remediation.
  • Teams treat dev tooling as “out of scope,” even on corporate endpoints.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Writing a policy with no enforcement mapping.
    Fix: add a table in the policy: rule → enforcement method → logging source → control owner.
  • Mistake: Ignoring non-admin installs (portable apps, per-user installs).
    Fix: use application control and executable/script controls, not only admin restrictions.
  • Mistake: No defined “frequency” for monitoring.
    Fix: set an explicit cadence and record each review occurrence with sign-off. 1
  • Mistake: Exceptions become permanent.
    Fix: require expiry, re-approval triggers, and periodic exception review.
  • Mistake: Treating SaaS add-ons and extensions as “not software.”
    Fix: include browser extensions, plugins, and agents in scope and enforcement.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for CM-11, so you should plan for assessment and authorization scrutiny rather than case-law style precedent. The risk is practical: user-installed software is a common entry point for malware, data exfiltration tools, and license violations; it also undermines system baselines and incident response because you lose inventory integrity. CM-11 is designed to make your environment predictable and defensible under assessment. 1

Practical 30/60/90-day execution plan

First 30 days (Immediate stabilization)

  • Assign control ownership: endpoint owner, policy owner, monitoring owner.
  • Draft the policy with explicit allow/deny rules, approval flow, exception requirements, and monitoring frequency. 1
  • Identify enforcement gaps: local admin prevalence, unmanaged browsers, lack of inventory.
  • Stand up “minimum viable enforcement” on high-risk populations (admins, production access users): remove admin rights where feasible; restrict install sources; enable install/event logging.

Days 31–60 (Enforcement rollout + intake workflow)

  • Launch the software request workflow and publish the “approved catalog.”
  • Implement application control for common bypass paths (portable executables, scripts) where feasible.
  • Add browser extension controls for managed browsers.
  • Start the monitoring loop: generate inventory reports, review them, open remediation tickets, and document closure. 1

Days 61–90 (Audit-ready operations)

  • Expand enforcement to remaining endpoint groups with documented exceptions.
  • Formalize the exception register and periodic exception review.
  • Run an internal audit-style test: attempt an unauthorized install, verify it is blocked/logged, and verify the incident/ticket flow.
  • Centralize artifacts and sign-offs (policy versions, configs, reports, tickets) in a system of record such as Daydream to speed assessments and reduce evidence gaps.
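The internal audit-style test in the list above can be made repeatable by scripting the expected outcomes: an unauthorized install attempt should produce both a block and a follow-up ticket. The function below is a hypothetical simulation of that flow, not a real endpoint API:

```python
# Illustrative audit-style check: simulate an unauthorized install attempt
# and assert both expected outcomes (block event and ticket). The event
# shape is hypothetical; a real test would query your EDR/ticketing exports.
def simulate_install_attempt(app: str, user_is_admin: bool,
                             allowlisted: bool) -> dict:
    blocked = not (user_is_admin or allowlisted)
    return {"app": app, "blocked": blocked, "ticket_opened": blocked}

result = simulate_install_attempt("portable-tool.exe",
                                  user_is_admin=False, allowlisted=False)
assert result["blocked"] and result["ticket_opened"]
print("unauthorized install blocked and ticketed")
```

Keeping the test and its dated output with your other artifacts gives assessors direct proof that the enforcement and monitoring loop works end to end.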

Frequently Asked Questions

Does “user-installed software” include browser extensions and plugins?

Treat it as in scope if users can add executable capability or data access through it. CM-11 is about user-driven software installation and policy enforcement, and extensions are a common path to introduce unreviewed code. 1

Can we allow developers to install tools freely on their laptops?

You can, but only if your policy explicitly allows it and you still enforce boundaries (approved sources, logging, and monitoring). Most teams do better with a pre-approved developer tool catalog plus controlled elevation for installs. 1

What counts as “enforcement” under CM-11?

Enforcement means technical controls that prevent or constrain noncompliant installs, plus evidence that those controls are active. Training and policy acknowledgment help, but they do not replace enforcement mechanisms. 1

How do we handle urgent “need it today” software requests?

Create an expedited path with minimum required checks (publisher validation, signature, basic risk review) and require post-install review within your normal process. Record the approval and any time-bound constraints as an exception if needed. 1

What evidence is most likely to satisfy an assessor quickly?

A clear policy, screenshots/exports of endpoint restrictions, an approved software catalog, and a dated monitoring report with sign-off and remediation tickets. Assessor confidence rises when policy statements directly match technical configurations and logs. 1

How do we prove “monitoring frequency” without creating busywork?

Automate reporting and alerting, then keep a lightweight review record (who reviewed, when, what exceptions or tickets resulted). The goal is a repeatable loop with artifacts, not manual spreadsheet churn. 1

Footnotes

  1. NIST Special Publication 800-53 Revision 5

