SC-18(2): Acquisition, Development, and Use

To meet the SC-18(2) (Acquisition, Development, and Use) requirement, you must verify that every instance of mobile code you acquire, build, or run in your system conforms to your organization’s defined criteria (the SC-18(2) “organization-defined parameter”). Put a concrete verification gate in procurement and CI/CD, and retain evidence that the gate ran and passed. 1

Key takeaways:

  • Define what “acceptable mobile code” means for your environment, then enforce it at intake and release.
  • Treat “verify” as an auditable control gate, not a policy statement.
  • Keep run-level evidence (approvals, scan results, allowlists, exceptions) tied to deployments.

SC-18 in NIST SP 800-53 focuses on mobile code, which is broadly understood in security programs as code that can be delivered, executed, or updated dynamically (for example: scripts, browser-executed code, signed applets, macros, plugins, extensions, and other interpretable or downloaded code). SC-18(2) is the “make it operational” enhancement: it requires you to verify that mobile code you acquire, develop, or use meets your organization-defined requirements before it is deployed in the system. 1

For a CCO or GRC lead, the fastest way to operationalize SC-18(2) is to stop treating it as a security-only control. It is a cross-functional requirement spanning procurement, application security, platform engineering, endpoint management, and change management. Your job is to make the verification step explicit, measurable, and repeatable, then make sure artifacts are retained so an assessor can trace “this mobile code in production” back to “this verification happened under defined rules.”

This page gives requirement-level guidance you can hand to control owners and auditors without rewriting it into abstract policy language.

Regulatory text

Control requirement (excerpt): “Verify that the acquisition, development, and use of mobile code to be deployed in the system meets {{ insert: param, sc-18.02_odp }}.” 2

Operator interpretation of the text:

  • “Verify” means you need an actual check (automated, manual, or hybrid) that can fail, block, or require documented exception handling. A policy sentence alone does not verify anything.
  • “Acquisition, development, and use” means the requirement applies across the lifecycle: third-party intake, internal build pipelines, and runtime enablement/configuration.
  • “To be deployed in the system” means scope includes production, staging, and any environment where the code could affect confidentiality, integrity, or availability of system data/services.
  • “Meets {{ sc-18.02_odp }}” means you must define your criteria (the organization-defined parameter) in a way that is testable and produces evidence. 2

Plain-English requirement statement

Before mobile code is allowed into your environment (procured, merged/released, or enabled to run), you must check it against defined security and operational rules and keep proof that the check occurred.


Who it applies to

Entity scope (typical):

  • Federal information systems and programs implementing NIST SP 800-53 controls. 3
  • Contractor systems handling federal data where NIST SP 800-53 is flowed down via contract, ATO boundary, or security requirements. 3

Operational scope (where SC-18(2) shows up in real life):

  • Third-party software intake: JavaScript libraries, SDKs, plugins, browser extensions, macro-enabled documents, packaged scripts, CI/CD actions, infrastructure modules.
  • Internal development: scripts committed to repos, client-side code shipped to browsers, administrative scripts run by SRE/IT, automation runbooks.
  • Runtime enablement: endpoint settings allowing macros, browser policies allowing extensions, application settings allowing plugins, server settings allowing dynamic module loading.

Control owners you will need:

  • Application Security / Product Security (verification rules, toolchain)
  • Engineering Platform / DevOps (CI/CD gates)
  • Procurement / Third-party risk (acquisition checks)
  • IT / Endpoint Management (runtime restrictions for macro/script execution)
  • Change Management (exceptions, approvals, emergency changes)

What you actually need to do (step-by-step)

Step 1: Define “mobile code” for your boundary

Write a short scope statement that lists what you treat as mobile code in your environment. Keep it pragmatic and tied to actual technologies you run (for example: “client-side JavaScript shipped to browsers,” “PowerShell scripts executed on servers,” “Office macros,” “browser extensions”).

Artifact: Mobile Code Scope Statement (1–2 pages) approved by the control owner.

Step 2: Define the SC-18(2) criteria (your organization-defined parameter)

Your “{{ sc-18.02_odp }}” must be testable. Examples of criteria that work operationally:

  • Allowed sources (approved repositories, signed packages, approved marketplaces)
  • Integrity requirements (signature validation, checksum verification, provenance attestations)
  • Security checks required before deployment (static analysis, dependency scanning, malware scanning)
  • Configuration rules (macros disabled by default; only signed macros allowed by exception)
  • Logging/monitoring requirements when code executes (where feasible)
  • Exception process requirements (who can approve, what compensating controls apply)

Design tip: Write criteria as “deployment gate conditions” rather than “best practices.” The gate must output pass/fail plus evidence.

Artifacts:

  • Mobile Code Standard (the criteria)
  • Exception Standard (approval authority, expiry, compensating controls)
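As an illustration of “criteria as deployment gate conditions,” the sketch below evaluates a package record against hypothetical criteria (the registry names, check names, and record shape are all invented for this example) and emits a pass/fail verdict plus an evidence record:

```python
import json
from datetime import datetime, timezone

# Illustrative criteria: approved sources and checks required before deployment.
APPROVED_REGISTRIES = {"registry.internal.example", "npm-proxy.internal.example"}
REQUIRED_CHECKS = {"dependency_scan", "malware_scan", "signature_valid"}

def evaluate_gate(package: dict) -> dict:
    """Return a pass/fail verdict plus a retainable evidence record for one gate run."""
    failures = []
    if package.get("source") not in APPROVED_REGISTRIES:
        failures.append(f"source not on allowlist: {package.get('source')}")
    passed = {name for name, ok in package.get("checks", {}).items() if ok}
    missing = REQUIRED_CHECKS - passed
    if missing:
        failures.append(f"required checks not passed: {sorted(missing)}")
    return {
        "package": package.get("name"),
        "verdict": "pass" if not failures else "fail",
        "failures": failures,
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
    }

record = evaluate_gate({
    "name": "example-widget",
    "source": "registry.internal.example",
    "checks": {"dependency_scan": True, "malware_scan": True, "signature_valid": True},
})
print(json.dumps(record, indent=2))  # retain this JSON as run-level evidence
```

The point is the shape of the output: a verdict the pipeline can act on, plus a timestamped record you can retain as run-level evidence.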

Step 3: Implement acquisition verification (third-party intake)

Put a verification checkpoint into procurement and intake so that downloaded or purchased mobile code does not bypass review.

Minimum operational pattern:

  1. Third-party requests are routed through a defined intake workflow.
  2. Intake requires attestations or proof aligned to your criteria (source, integrity, security testing).
  3. Security review outcome is recorded (approve/deny/approve with conditions).
  4. Approved items are placed on an allowlist (repo, artifact registry, endpoint policy).

Evidence you want: ticket history, approval, scan outputs, allowlist entry.
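For the integrity portion of intake, one minimal sketch is to compare a downloaded artifact’s digest against the value recorded at approval (hashing via the standard library; in practice the approved hash comes from the intake ticket, and the bytes here are fabricated):

```python
import hashlib

def verify_artifact(data: bytes, approved_sha256: str) -> bool:
    """Compare the artifact's SHA-256 digest with the hash recorded at intake approval."""
    return hashlib.sha256(data).hexdigest() == approved_sha256.lower()

payload = b"example plugin contents"
approved = hashlib.sha256(payload).hexdigest()  # value stored in the intake ticket

assert verify_artifact(payload, approved)            # matches the approved record
assert not verify_artifact(b"tampered bytes", approved)  # modified artifact is rejected
```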

Step 4: Implement development verification (CI/CD gate)

For internally built mobile code (and dependencies pulled during build), make the pipeline enforce the criteria.

Minimum operational pattern:

  1. CI pipeline runs required checks on commits/PRs affecting mobile code.
  2. Release pipeline blocks deployment if checks fail.
  3. Artifact repository only accepts signed/verified builds (if that’s in your criteria).
  4. Merge requires code-owner approval for high-risk mobile code paths (if defined).

Evidence you want: CI job logs, scan results, signed build attestations, PR approvals.
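A hedged sketch of a blocking release gate, assuming a scanner that emits a JSON report with a `findings` list (the report shape and severity names are assumptions; adapt them to whatever your actual tool produces):

```python
import json

# Severities that must block deployment per the defined criteria (illustrative).
BLOCKING_SEVERITIES = {"critical", "high"}

def gate(report_json: str) -> int:
    """Return a process exit code: 0 allows the deploy, 1 blocks it."""
    findings = json.loads(report_json).get("findings", [])
    blocking = [f for f in findings if f.get("severity") in BLOCKING_SEVERITIES]
    for f in blocking:
        print(f"BLOCK: {f['id']} ({f['severity']})")
    return 1 if blocking else 0

report = json.dumps({"findings": [{"id": "CVE-2024-0001", "severity": "high"}]})
exit_code = gate(report)
# In CI, end the step with sys.exit(exit_code) so a nonzero return fails the job.
```

Wired into CI this way, a nonzero return fails the step and blocks the release, which is the blocking behavior an assessor will ask you to demonstrate.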

Step 5: Implement “use” verification (runtime controls)

“Use” is where teams fail audits. You need proof that production systems only execute mobile code that meets your criteria.

Minimum operational pattern:

  • Endpoint management policies prevent unapproved macros/extensions/scripts.
  • Server hardening prevents ad hoc script execution outside approved paths.
  • Application settings restrict plugins/extensions to allowlisted packages.
  • Monitoring alerts on execution of unauthorized interpreters or unsigned scripts (where feasible).

Evidence you want: configuration baselines, policy screenshots/exports, change records, and periodic compliance reports.
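As one way to turn endpoint exports into the periodic compliance reports mentioned above, a small sketch (the allowlist and inventory are invented; real input would come from your MDM/EDR export):

```python
# Illustrative allowlist of approved browser extensions.
ALLOWED_EXTENSIONS = {"uBlockOrigin", "CorpSSO"}

def compliance_report(installed_by_host: dict) -> dict:
    """Map each non-compliant host to its extensions that are not on the allowlist."""
    return {
        host: sorted(set(exts) - ALLOWED_EXTENSIONS)
        for host, exts in installed_by_host.items()
        if set(exts) - ALLOWED_EXTENSIONS
    }

inventory = {
    "laptop-01": ["uBlockOrigin", "CorpSSO"],
    "laptop-02": ["uBlockOrigin", "RandomCouponFinder"],
}
print(compliance_report(inventory))  # {'laptop-02': ['RandomCouponFinder']}
```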

Step 6: Build an exception path that auditors can follow

Mobile code exceptions happen (legacy macros, urgent hotfix scripts, vendor-required plugins). Make the exception path structured:

  • documented risk acceptance,
  • defined compensating controls,
  • time-bounded approval and review,
  • removal plan.

Evidence you want: exception ticket, approver, expiry date, follow-up closure.
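A minimal sketch of making expiry auditable, assuming each exception record carries an ISO-format expiry date (the IDs and dates are illustrative):

```python
from datetime import date

def expired_exceptions(exceptions: list, today: date) -> list:
    """Return IDs of exceptions whose approval has lapsed and needs review or closure."""
    return [e["id"] for e in exceptions if date.fromisoformat(e["expires"]) < today]

register = [
    {"id": "EXC-101", "expires": "2024-01-31"},
    {"id": "EXC-102", "expires": "2026-12-31"},
]
print(expired_exceptions(register, date(2025, 6, 1)))  # ['EXC-101']
```

Running this on a schedule (and keeping its output) is one cheap way to show assessors that exceptions are time-bounded in practice, not just in policy.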

Step 7: Create an evidence map for assessment readiness

Map SC-18(2) to:

  • a control owner,
  • the systems/components in scope,
  • the verification gates,
  • the recurring evidence you will produce each cycle.

Daydream is useful here as a system of record to assign ownership, document the procedure, and schedule evidence collection so SC-18(2) does not turn into a last-minute evidence chase.


Required evidence and artifacts to retain

Keep artifacts in a way that supports traceability from “running code” back to “verification performed.”

Evidence type, what “good” looks like, and where it usually lives:

  • Mobile code criteria (the SC-18(2) parameter). Good: clear pass/fail conditions tied to the technologies you run. Lives in: policy/standards repository, GRC tool.
  • Intake approvals for acquired mobile code. Good: approval plus conditions plus provenance checks. Lives in: procurement system, TPRM workflow, tickets.
  • CI/CD gate outputs. Good: logs showing checks ran and passed, with blocking behavior on failure. Lives in: CI tool logs, artifact registry metadata.
  • Allowlist / approved sources. Good: explicit list of permitted repos, registries, extensions. Lives in: config management, endpoint tool, docs.
  • Runtime enforcement configs. Good: baselines showing macros/extensions/scripts restricted. Lives in: MDM/EDR exports, server config baselines.
  • Exceptions. Good: risk acceptance, compensating controls, expiry, closure. Lives in: GRC tool, ticketing system.
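If the register lives outside a GRC tool, even a small structured file beats scattered documents. A hypothetical sketch that renders an evidence register (what, where, owner, frequency) as CSV for an audit packet; the owners and frequencies are illustrative:

```python
import csv
import io

# Hypothetical SC-18(2) evidence register: what, where, who, how often.
REGISTER = [
    {"evidence": "Mobile code criteria", "location": "Policy repo / GRC tool",
     "owner": "AppSec", "frequency": "annual review"},
    {"evidence": "CI/CD gate outputs", "location": "CI tool logs",
     "owner": "Platform/DevOps", "frequency": "per release"},
    {"evidence": "Runtime enforcement configs", "location": "MDM/EDR exports",
     "owner": "IT/Endpoint", "frequency": "quarterly"},
    {"evidence": "Exceptions", "location": "Ticketing system",
     "owner": "Change Management", "frequency": "monthly review"},
]

def to_csv(rows: list) -> str:
    """Render the register as CSV so it can be attached to an audit packet."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["evidence", "location", "owner", "frequency"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(REGISTER))
```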

Common assessment and audit questions and hangups

  1. “What is your organization-defined parameter for SC-18(2), and where is it documented?”
    Hangup: teams describe tools but cannot show the written criteria.

  2. “Show me evidence that mobile code X was verified before deployment.”
    Hangup: no traceability between a production artifact and CI logs/approvals.

  3. “How do you control mobile code acquired by developers outside procurement?”
    Hangup: direct downloads bypass intake.

  4. “How do you enforce restrictions for macros, scripts, or extensions on endpoints?”
    Hangup: policy exists but endpoint configuration is inconsistent or undocumented.

  5. “What’s your exception process? Show me some examples.”
    Hangup: exceptions are informal (Slack approvals) with no expiry.


Frequent implementation mistakes (and how to avoid them)

  • Mistake: Defining criteria that cannot be tested.
    Fix: rewrite criteria as gate checks (for example: “packages must come from approved registry X” rather than “packages should be reputable”).

  • Mistake: CI checks exist but do not block deployment.
    Fix: enforce hard fails for defined severities or require documented override with approval.

  • Mistake: Treating “use” as out of scope.
    Fix: implement endpoint/server policy controls and keep exports as evidence.

  • Mistake: No inventory of where mobile code exists.
    Fix: start with a pragmatic inventory: key repos, artifact registries, endpoint extension catalogs, and macro-enabled document locations.

  • Mistake: Evidence stored in too many places with no map.
    Fix: maintain a single SC-18(2) evidence register (what, where, owner, frequency). Daydream can hold the control narrative and evidence pointers so audits become retrieval work, not archaeology.


Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific enforcement actions.

Operationally, SC-18(2) maps to a common failure mode: unvetted scriptable content or dynamically executed code becomes an entry point for malicious activity, data exposure, or operational disruption. The compliance risk is straightforward: if you cannot show verification happened under defined rules, assessors will record a control design or operating effectiveness gap.


Practical 30/60/90-day execution plan

First 30 days: Define and assign

  • Name the SC-18(2) control owner and backup.
  • Draft the mobile code scope statement for your system boundary.
  • Write the organization-defined parameter (testable criteria) and get sign-off.
  • Identify where verification should occur: procurement intake, CI/CD, endpoint/server controls.
  • Stand up a simple evidence map (even a table) listing artifacts, owners, and storage locations.

Days 31–60: Build the gates

  • Add or tighten CI/CD checks for mobile code paths and third-party dependencies.
  • Add an acquisition checklist to third-party intake for mobile code components.
  • Implement allowlisting for approved sources (registries, repos, extension stores).
  • Publish an exception workflow with approval authority and documentation requirements.
  • Run a small pilot on one critical application and one endpoint group; fix friction points.

Days 61–90: Prove it operates

  • Expand gates to remaining in-scope apps and endpoints.
  • Produce an “audit packet” for one representative deployment: intake (if acquired), build logs, approvals, runtime config evidence, and any exceptions.
  • Schedule recurring evidence pulls (CI logs exports, policy compliance reports, exception reviews).
  • Capture lessons learned in the procedure so the next quarter’s evidence is consistent.

Frequently Asked Questions

What counts as “mobile code” for SC-18(2) in practice?

Define it based on what can execute dynamically in your environment, such as scripts, client-side code, macros, plugins, or extensions. Your definition must be consistent across acquisition, development, and runtime enforcement.

Does SC-18(2) require a specific security tool (SAST, SCA, EDR)?

No. It requires verification against your defined criteria. Tools are acceptable if they produce pass/fail results and evidence you can retain. 2

How do we prove “verification” to an auditor?

Provide (1) the written criteria and (2) run-level evidence showing the checks ran and passed for the mobile code deployed. CI job logs, approval tickets, and configuration exports are typical artifacts.

We buy a SaaS product that includes scripts and browser code. Is that “acquisition” under SC-18(2)?

If that code executes within your system boundary (for example, extensions, embedded scripts, or downloaded agents), treat it as acquired mobile code and route it through intake and allowlisting.

What if engineering needs to ship a hotfix script quickly?

Use a documented exception path with an approver, compensating controls, and a planned removal or normalization into the standard pipeline. Retain the exception record as evidence.

How does this relate to third-party risk management?

SC-18(2) forces you to treat third-party mobile code as a controlled intake item with verification evidence. Your TPRM workflow should collect provenance and security assurances that map directly to your SC-18(2) criteria.


Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON; NIST SP 800-53 Rev. 5

  2. NIST SP 800-53 Rev. 5 OSCAL JSON

  3. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream