SC-18(1): Identify Unacceptable Code and Take Corrective Actions
SC-18(1): Identify Unacceptable Code and Take Corrective Actions requires you to define what “unacceptable code” is for your environment, implement detection to find it in systems and components, and execute documented corrective actions when it is discovered. Operationalize it by publishing a clear code policy, wiring scanning and monitoring to that policy, and keeping evidence that findings were triaged, remediated, and prevented from recurring. 1
Key takeaways:
- You must explicitly define “unacceptable code” for your system boundary, not rely on tool defaults. 2
- Detection is incomplete without corrective action workflows that drive removal, containment, and prevention. 1
- Assessment success depends on durable evidence: definitions, detection coverage, tickets, and closure proof. 2
SC-18 sits in the System and Communications Protection (SC) family and addresses “mobile code” risk. The enhancement SC-18(1) narrows the operational expectation to two outcomes: (1) you identify code you consider unacceptable, and (2) you take corrective actions when it appears. 1
For a Compliance Officer, CCO, or GRC lead, the fastest way to make this real is to translate “unacceptable code” into concrete, testable rules that engineering and security can enforce. Examples include unauthorized scripts executing in browsers, unapproved macros, unsigned or untrusted binaries, prohibited interpreter runtimes on servers, or third-party embedded code that bypasses your standard build pipeline. Your exact list will differ by mission, threat model, data types, and system architecture, but auditors will expect you to have made the list, implemented detection against it, and shown consistent remediation outcomes. 2
This page gives requirement-level implementation guidance you can assign to an owner, turn into procedures, and evidence for assessments across federal systems and contractor environments handling federal data. 1
Regulatory text
Control excerpt: “Identify {{ insert: param, sc-18.01_odp.01 }} and take {{ insert: param, sc-18.01_odp.02 }}.” 2
Operator meaning:
- “Identify … unacceptable code” means you define what code is prohibited or restricted in your environment and implement mechanisms to detect it in scope systems (endpoints, servers, cloud workloads, containers, CI/CD artifacts, and common ingress points like email/web downloads). 1
- “Take … corrective actions” means you do more than alert. You have a repeatable workflow that contains the risk, removes or disables the code, addresses root cause, and prevents recurrence (policy, configuration, allowlisting, pipeline controls). 2
Plain-English interpretation (what an assessor is looking for)
Assessors typically probe three questions:
- Definition: What is “unacceptable code” here, and who approved that definition? 1
- Coverage: Where do you look for it, how do you know you’re looking everywhere that matters, and how often do you detect it? 2
- Follow-through: Show me the last few findings and how you corrected them, including closure evidence and prevention actions. 1
Who it applies to
Entity scope
- Federal information systems implementing NIST SP 800-53 controls. 1
- Contractor systems handling federal data where NIST SP 800-53 is flowed down contractually or via program requirements. 1
Operational context (where the requirement “shows up”)
SC-18(1) becomes material anywhere code can be introduced outside your governed software delivery path, including:
- End-user computing (email attachments, macros, browser extensions, downloaded executables).
- Administrative tooling (PowerShell, Bash, Python, remote management scripts).
- Web-facing systems (injected scripts, unauthorized client-side code).
- Containers and images (unapproved packages, embedded miners, unknown startup scripts).
- Third-party components embedded into products (SDKs, browser plugins, scripts) that did not pass your intake checks.
What you actually need to do (step-by-step)
Step 1: Assign ownership and define the system boundary
- Name a control owner (often Security Engineering or IT Security) and a GRC owner accountable for evidence readiness.
- Document the in-scope environments: endpoints, servers, cloud accounts, CI/CD, SaaS where code executes (for example, office macro environments or integration platforms).
Deliverable: SC-18(1) control implementation statement mapped to owners, tools, and evidence cadence. 2
Step 2: Define “unacceptable code” as enforceable policy
Create a short, specific standard that includes:
- Prohibited code types (examples you tailor): unsigned executables, unapproved scripting engines, unauthorized browser extensions, macros from the internet, code running from temp directories, unauthorized remote admin tools.
- Restricted code: allowed only with documented exception (business justification, time-bound approval, compensating controls).
- Trust rules: signed code requirements, approved publishers, allowlisted hashes, approved repositories, approved container registries.
- Third-party code intake: what must be scanned or reviewed before it can execute in production.
Keep the policy testable: if violations of a rule can't be detected, the rule will fail in practice. 1
Deliverable: “Unacceptable Code Standard” with version control and approval record.
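One way to keep the standard testable is to express it as data rather than prose, so each rule can be checked for a mapped detection mechanism. A minimal sketch in Python; the category names, field names, and values here are illustrative examples to tailor, not text from the control:

```python
# Illustrative machine-readable "Unacceptable Code Standard".
# Category IDs, names, and detection mechanisms are example values.
UNACCEPTABLE_CODE_STANDARD = {
    "version": "1.0",
    "categories": [
        {
            "id": "UC-01",
            "name": "Unsigned executables",
            "disposition": "prohibited",            # prohibited | restricted
            "detectable_by": ["app_allowlisting"],  # must be non-empty
        },
        {
            "id": "UC-02",
            "name": "Macros from the internet",
            "disposition": "prohibited",
            "detectable_by": ["macro_blocking_policy"],
        },
        {
            "id": "UC-03",
            "name": "Unapproved remote admin tools",
            "disposition": "restricted",            # allowed only via exception
            "detectable_by": ["edr_rule"],
        },
    ],
}

def validate_standard(standard: dict) -> list[str]:
    """Return problems that would make a rule untestable in practice."""
    problems = []
    for cat in standard["categories"]:
        if not cat.get("detectable_by"):
            problems.append(f"{cat['id']}: no detection mechanism mapped")
        if cat.get("disposition") not in ("prohibited", "restricted"):
            problems.append(f"{cat['id']}: unknown disposition")
    return problems
```

A rule with an empty `detectable_by` list fails validation, which turns "keep the policy testable" into an automated check you can run whenever the standard changes.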
Step 3: Implement detection mapped to the definition
Build a detection matrix that maps each unacceptable-code category to at least one detection mechanism and one monitoring output.
Practical detection options (choose what fits your stack):
- Endpoint protections: application control/allowlisting, EDR detections for scripting abuse, blocking unapproved interpreters.
- Email/web controls: attachment sandboxing, macro blocking policies, download restrictions.
- Server/workload controls: file integrity monitoring, privileged command auditing, runtime allowlisting.
- CI/CD controls: SAST, dependency scanning, artifact signing verification, image scanning, policy-as-code gates.
- Cloud controls: serverless function policy checks, container admission controls, registry scanning and signature enforcement.
Deliverable: “Unacceptable Code Detection Coverage” table (see below) and evidence exports showing it runs. 2
Detection coverage table (template):
| Unacceptable code category | Where it can appear | Detection control/tool | Alert/report name | Owner | Corrective action playbook |
|---|---|---|---|---|---|
| Unapproved scripts | Endpoints, admin hosts | EDR rule + script control | Weekly scripting exceptions report | SecOps | Contain host, remove script, block hash/path |
| Unsigned binaries | Servers, endpoints | App allowlisting | Execution block logs | IT Security | Validate need, approve exception or uninstall |
| Unapproved container packages | Build pipeline, runtime | Image scan + admission policy | Registry scan findings | DevSecOps | Rebuild image, pin deps, block deploy |
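The coverage table above can double as data for an automated gap check against your Step 1 boundary inventory. A sketch, assuming illustrative surface names and table rows:

```python
# Each row mirrors a line of the detection coverage table above.
COVERAGE = [
    {"category": "Unapproved scripts", "surfaces": {"endpoints", "admin_hosts"},
     "detection": "EDR rule + script control", "owner": "SecOps"},
    {"category": "Unsigned binaries", "surfaces": {"servers", "endpoints"},
     "detection": "App allowlisting", "owner": "IT Security"},
    {"category": "Unapproved container packages", "surfaces": {"build_pipeline", "runtime"},
     "detection": "Image scan + admission policy", "owner": "DevSecOps"},
]

# In-scope execution surfaces from the boundary definition (Step 1).
IN_SCOPE_SURFACES = {"endpoints", "admin_hosts", "servers",
                     "build_pipeline", "runtime", "cloud_workloads"}

def coverage_gaps(coverage, in_scope):
    """Return in-scope surfaces that no detection row covers."""
    covered = set().union(*(row["surfaces"] for row in coverage))
    return sorted(in_scope - covered)
```

Running the check on this example would flag `cloud_workloads` as uncovered, which is exactly the kind of scope gap assessors sample for.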
Step 4: Define corrective actions as a closed-loop workflow
Write a short playbook that answers:
- Triage: severity criteria and assignment rules (SecOps, IT, Engineering).
- Containment: isolate host, block execution, revoke credentials if compromise suspected.
- Eradication: remove code, uninstall extension, revert image, restore clean baseline.
- Recovery: validate system integrity, monitor for reappearance.
- Root cause and prevention: update allowlists/deny rules, harden configuration, add CI/CD gate, revise third-party intake rules, update user policy/training when applicable.
Make “corrective action” measurable: a ticket opened, actions logged, closure criteria met, and a verification step completed. 1
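The closed-loop workflow can be enforced as a closure gate: a finding is only closeable once every playbook stage has logged evidence and an independent verification has occurred. A minimal sketch; the stage names come from the playbook above, while the ticket fields are illustrative:

```python
from dataclasses import dataclass, field

# Stages from the corrective action playbook; all must be evidenced.
STAGES = ["triage", "containment", "eradication", "recovery", "prevention"]

@dataclass
class Finding:
    ticket_id: str
    completed: dict = field(default_factory=dict)  # stage -> evidence note
    verified: bool = False                         # independent re-check done

    def record(self, stage: str, evidence: str) -> None:
        """Log evidence for one workflow stage."""
        assert stage in STAGES, f"unknown stage: {stage}"
        self.completed[stage] = evidence

    def can_close(self) -> bool:
        """Closure criteria: every stage evidenced AND verification done."""
        return all(s in self.completed for s in STAGES) and self.verified
```

In use, a ticket with all five stages recorded but no verification still cannot close, which gives assessors the "closure evidence plus verification step" trail they ask for.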
Step 5: Operationalize exceptions without losing control
Unacceptable code programs fail when teams create informal bypasses. Set up:
- A time-bound exception process with compensating controls.
- A required business justification and technical rationale.
- A required expiration and review.
- A detection tag so exceptions still show up in reporting.
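Time-boxing is only real if expiry is checked automatically. A small sketch of an exception-register sweep; entry fields and IDs are illustrative:

```python
from datetime import date

# Illustrative exception register entries.
EXCEPTIONS = [
    {"id": "EX-7", "category": "Unapproved remote admin tools",
     "justification": "vendor support session", "approved_by": "CISO",
     "expires": date(2025, 3, 31)},
    {"id": "EX-9", "category": "Unsigned binaries",
     "justification": "legacy app pending replacement", "approved_by": "CISO",
     "expires": date(2026, 1, 15)},
]

def expired_exceptions(register, today):
    """Exceptions past expiry: each must be re-approved or revoked."""
    return [e["id"] for e in register if e["expires"] < today]
```

Run this on a schedule and feed the output into the same reporting that carries detections, so expired exceptions stay visible rather than quietly becoming permanent bypasses.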
Step 6: Evidence, metrics, and continuous improvement
Track a small set of operational metrics that show the control runs:
- Volume of detections (by category, environment).
- Time to triage and remediate (trends matter more than single values).
- Recurrence by root cause (policy gap vs. misconfiguration vs. user behavior vs. third-party introduction).
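The remediation-time metric is easy to compute from closed tickets. A sketch, assuming an illustrative ticket export of (category, opened, closed) tuples:

```python
from datetime import datetime
from statistics import median

# Illustrative closed tickets: (category, opened, closed).
TICKETS = [
    ("Unapproved scripts", datetime(2025, 1, 2), datetime(2025, 1, 4)),
    ("Unapproved scripts", datetime(2025, 1, 10), datetime(2025, 1, 11)),
    ("Unsigned binaries", datetime(2025, 1, 5), datetime(2025, 1, 12)),
]

def median_days_to_remediate(tickets):
    """Median remediation time in days, per unacceptable-code category."""
    by_cat = {}
    for cat, opened, closed in tickets:
        by_cat.setdefault(cat, []).append((closed - opened).days)
    return {cat: median(days) for cat, days in by_cat.items()}
```

Plotting this per category over successive months gives the trend view the text recommends: trends matter more than any single value.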
A GRC system such as Daydream helps most here by tying your SC-18(1) definition, detection sources, and corrective-action tickets into a single evidence trail you can hand to an assessor without rebuilding the story each cycle. 2
Required evidence and artifacts to retain
Minimum set that typically satisfies assessment testing:
- Unacceptable Code Standard (approved, versioned). 1
- SC-18(1) procedure/playbook for triage and corrective action. 2
- Detection coverage matrix mapping categories to systems/tools.
- Tool configurations / policies (screenshots, exported policies, or config-as-code commits).
- Sample alerts/findings showing identification of unacceptable code.
- Tickets or incident records showing corrective actions taken, with timestamps, assignees, and closure evidence.
- Exception register for any allowed unacceptable-code cases, with approvals and expirations.
Common exam/audit questions and hangups
- “Show me your definition of unacceptable code and who approved it.” Expect a hangup if your definition is only “malware.” 1
- “How do you know your detection covers cloud workloads and CI/CD artifacts, not just laptops?” Expect sampling across environments. 2
- “Walk through the last finding end-to-end.” Expect to show containment/removal plus prevention. 1
- “How do exceptions work?” Expect to show a controlled process, not informal Slack approvals.
Frequent implementation mistakes and how to avoid them
- Tool-first, definition-later. Teams enable an EDR rule and call it done. Fix: publish the unacceptable code standard first, then map tools to it. 1
- No corrective action proof. Alerts exist, but remediation isn’t tracked. Fix: require tickets for every confirmed detection, with closure criteria and verification steps. 2
- Scope gaps. Endpoints are covered; servers, containers, and build systems are not. Fix: create the detection coverage table and have owners attest to coverage.
- Exceptions that never expire. Fix: time-box exceptions and review them; keep them visible in reporting.
- Over-broad policy. “No scripts allowed” is not enforceable for admins and developers. Fix: define restricted zones, signing requirements, and approved paths.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement actions.
Risk-wise, SC-18(1) is often assessed through the lens of real compromise paths: unauthorized scripts, embedded third-party code, and ungoverned execution environments. If you cannot show detection plus corrective action, an assessor can reasonably conclude the control is not implemented or not operating effectively. 1
Practical 30/60/90-day execution plan
First 30 days (foundation)
- Assign control owner(s) and publish the SC-18(1) implementation statement mapped to evidence artifacts. 2
- Draft and approve the Unacceptable Code Standard (prohibited + restricted + exception process).
- Inventory primary execution surfaces: endpoints, servers, cloud workloads, CI/CD, email/web download paths.
Days 31–60 (detection + workflow)
- Build the detection coverage matrix and identify gaps by environment.
- Configure or tune detection sources to produce reportable outputs tied to unacceptable-code categories.
- Stand up the corrective action playbook and ticket workflow, including assignment and closure criteria.
- Run a tabletop using a realistic scenario (for example, unapproved macro execution or an unsigned binary on a server) and capture lessons learned.
Days 61–90 (evidence hardening + prevention)
- Collect a clean evidence package: policy, configs, sample findings, and closed tickets.
- Add prevention controls where recurring findings indicate root causes (CI/CD gates, allowlisting, signature enforcement, hardened endpoint policies).
- Operationalize monthly reporting to management: trends, exceptions, and systemic fixes.
- In Daydream, map SC-18(1) to the control owner, procedure, and recurring evidence artifacts so the next assessment cycle is mostly refresh, not rebuild. 2
Frequently Asked Questions
What counts as “unacceptable code” under SC-18(1)?
The control expects you to define it for your environment, then enforce that definition through detection and remediation. Document prohibited and restricted code types in a standard that engineering and security can test. 1
Do we have to block all mobile code and scripting?
No. SC-18(1) is about identifying what you deem unacceptable and taking corrective actions when it appears. Many organizations allow scripts with restrictions such as signing, approved repositories, and monitored execution paths. 1
Is an EDR alert enough to satisfy “identify unacceptable code”?
Only if the alert reliably maps to your documented unacceptable-code definition and you can show coverage for in-scope systems. You also need evidence of corrective actions, not just detection. 2
What evidence should we provide to an assessor?
Provide the unacceptable code standard, detection coverage mapping, tool configurations, and a sample set of findings with tickets that show containment/removal and closure verification. Keep exception approvals and expirations. 1
How do we handle third-party software or embedded components?
Treat third-party code as an intake pathway: require scanning and approval before it can execute in production, and define what is unacceptable (for example, unsigned components or unapproved update mechanisms). Then show detection and corrective action when something slips through. 1
Who should own SC-18(1), security or engineering?
Security typically owns the policy, detection, and response workflow, while engineering owns fixes in build pipelines and applications. Make ownership explicit in your control implementation statement and evidence plan. 2
Footnotes
1. NIST SP 800-53 Rev. 5.
2. NIST SP 800-53 Rev. 5 OSCAL JSON.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream