SC-35: External Malicious Code Identification

SC-35 requires you to deploy and operate security components that proactively detect network-delivered malicious code and access to malicious websites, then prove those components are consistently enforced across your environment. To operationalize it fast, standardize where web/DNS/email/edge traffic is inspected, block or quarantine high-confidence threats, and retain repeatable evidence of coverage, tuning, and response. 1

Key takeaways:

  • You need preventive detection at the network boundary and user egress, not just endpoint antivirus. 1
  • Auditors will focus on coverage and proof: where inspection happens, what is blocked, and how exceptions are governed.
  • Treat “malicious websites” as a control objective across DNS, web proxy/SWG, and firewall policy, with monitoring and incident linkage.

The SC-35 (External Malicious Code Identification) requirement is operational: you must include system components that actively look for malicious code delivered over the network and prevent or identify user access to malicious websites. 1 For a CCO or GRC lead, the fastest path is to translate this into a small set of enforceable control points (DNS, web egress, email, and network perimeter), assign clear ownership to SecOps, and define what “proactively seek to identify” means in your environment: continuous inspection, automated blocking/quarantine where feasible, and human triage tied to incident handling.

This requirement tends to fail in practice for predictable reasons: partial coverage (only corporate laptops, not servers or VDI), bypass paths (direct-to-internet from cloud workloads), and weak evidence (a policy statement without screenshots/config exports, logs, or test results). SC-35 is “medium” severity in many control baselines because it directly reduces exposure to common infection and phishing delivery paths without requiring perfect user behavior.

Use this page as a requirement-level checklist: interpret the text, decide your inspection architecture, implement in steps, and assemble an evidence pack you can hand to an assessor without rebuilding it during the audit.

Regulatory text

Requirement (excerpt): “Include system components that proactively seek to identify network-based malicious code or malicious websites.” 1

Operator meaning: You must implement technical controls that inspect relevant network traffic and destinations to detect malicious payloads and known-bad (or high-risk) web destinations, then take action (block, quarantine, alert, or isolate) according to defined rules. This is not satisfied by a document-only policy, and it is not limited to endpoint scanning. The control expects capability in the system design: components are present, enabled, and monitored. 1

Plain-English interpretation

SC-35 is about network-delivered threats and malicious destinations. In plain terms:

  • You run controls that watch traffic crossing key boundaries (ingress/egress) and user browsing paths.
  • Those controls compare content/behavior/destinations against threat intelligence and detection logic.
  • The controls generate actionable outputs (block/quarantine/alert) and your team reviews and responds.

Think of SC-35 as the “don’t let the internet hand you malware” control, enforced with network and web security components, not just endpoints. 1

Who it applies to

SC-35 applies broadly where NIST SP 800-53 is in scope, including:

  • Federal information systems and the environments that process, store, or transmit their data. 1
  • Contractor systems handling federal data, including cloud-hosted and hybrid architectures. 1

Operationally, apply it to:

  • Corporate user egress (managed endpoints, BYOD where permitted, VDI).
  • Email and collaboration ingress (attachments, URLs).
  • Data center and cloud workload egress (servers reaching out to the internet).
  • Remote access paths (VPN, ZTNA, split tunneling decisions).
  • Third-party connectivity points (B2B VPNs, inbound APIs where content is accepted).

What you actually need to do (step-by-step)

1) Set scope and control points (design decision you can defend)

Create an inventory of “internet touchpoints” and pick your enforcement layers:

  • DNS layer: resolver logging + blocking for malicious domains.
  • Web layer: secure web gateway (SWG) / proxy or equivalent cloud web filtering for URL/category/reputation controls.
  • Network edge: next-gen firewall/IPS with malware inspection where appropriate.
  • Email security: attachment detonation/sandboxing and URL rewriting/protection.
  • Cloud egress: cloud firewall/proxy/DNS policies for workloads and containers.

Deliverable: a one-page architecture diagram showing where each inspection happens and what traffic is covered.

2) Define “proactive” outcomes as measurable behaviors

Write a short SC-35 control procedure that states:

  • What is automatically blocked vs allowed with alerting.
  • How quickly detections are triaged (tie to your incident process, even if you do not specify hard SLAs).
  • How you handle false positives and urgent business exceptions.
  • Which logs are retained and where (SIEM/data lake).

Keep it concrete: “All corporate DNS queries resolve through approved resolvers with threat blocking enabled” is testable.
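
A testable statement like that can be spot-checked with a small script. Here is a minimal sketch, assuming resolv.conf-style configuration text; the internal resolver IPs are hypothetical placeholders, not a recommendation:

```python
# Minimal sketch: verify that a host's configured resolvers are all on the
# approved list. Resolver IPs and the config sample are illustrative only.

APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}  # hypothetical internal resolvers

def configured_resolvers(resolv_conf_text: str) -> set[str]:
    """Parse nameserver lines from resolv.conf-style text."""
    resolvers = set()
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            resolvers.add(parts[1])
    return resolvers

def resolver_compliant(resolv_conf_text: str) -> bool:
    """True only if at least one resolver is configured and all are approved."""
    found = configured_resolvers(resolv_conf_text)
    return bool(found) and found <= APPROVED_RESOLVERS

sample = "search corp.example\nnameserver 10.0.0.53\nnameserver 8.8.8.8\n"
print(resolver_compliant(sample))  # False: 8.8.8.8 is not an approved resolver
```

Running a check like this across a device sample, and keeping the output, doubles as coverage evidence for the pack described below.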

3) Implement baseline protections (start with destination control)

Fastest risk reduction usually comes from malicious destination blocking:

  • Turn on DNS threat blocking and log all queries.
  • Enforce web filtering for managed endpoints and remote users.
  • Disable or tightly govern direct-to-internet egress from workloads; route through controlled egress.

Make bypass paths an explicit backlog item. If split tunneling exists, document compensating controls.
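
One way to make bypass paths auditable is to scan route tables for default routes that skip controlled egress. A minimal sketch follows; the route-table shape and appliance IDs are simplified assumptions, not a real cloud provider API:

```python
# Sketch: flag route tables whose default route (0.0.0.0/0) bypasses controlled
# egress. Route dicts are simplified stand-ins for real cloud route tables.

CONTROLLED_EGRESS_TARGETS = {"fw-egress-1", "natfw-inspection"}  # hypothetical IDs

def direct_egress_findings(route_tables: list[dict]) -> list[str]:
    """Return names of route tables with an uncontrolled default route."""
    findings = []
    for table in route_tables:
        for route in table["routes"]:
            if route["dest"] == "0.0.0.0/0" and route["target"] not in CONTROLLED_EGRESS_TARGETS:
                findings.append(table["name"])
                break  # one uncontrolled default route is enough to flag the table
    return findings

tables = [
    {"name": "rtb-app", "routes": [{"dest": "0.0.0.0/0", "target": "fw-egress-1"}]},
    {"name": "rtb-legacy", "routes": [{"dest": "0.0.0.0/0", "target": "igw-123"}]},
]
print(direct_egress_findings(tables))  # ['rtb-legacy']
```

The same pattern works against exported route tables from any cloud account, which makes the backlog of bypass paths a generated list rather than a guess.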

4) Add payload inspection where it fits your environment

For “network-based malicious code,” destination blocking is not always enough. Add inspection methods aligned to your architecture:

  • NGFW/IPS malware scanning for north-south traffic where decryption and throughput allow.
  • Email attachment sandboxing/detonation.
  • Download scanning at SWG/proxy.
  • For encrypted traffic, decide where TLS inspection is permitted and document exclusions (privacy, regulated data types, technical constraints).

5) Operationalize tuning, exceptions, and continuous improvement

Examiners expect you to run the control, not just enable it once.

  • Establish a detection tuning cadence (review top blocked domains, false positives, missed detections).
  • Create an exception workflow: requester, business justification, expiry, approval, and periodic re-validation.
  • Link detections to incident handling: what constitutes an incident, what gets escalated, and how containment happens.
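
The exception workflow above is straightforward to automate at the review step. Here is a minimal sketch that flags expired or soon-to-expire exceptions; the field names and the 14-day review window are assumptions, not a prescribed standard:

```python
# Sketch: flag exceptions that are expired or expiring soon so they get
# re-validated. Record fields and the review window are assumptions.
from datetime import date, timedelta

def exceptions_needing_review(exceptions: list[dict], today: date,
                              window_days: int = 14) -> list[str]:
    """Return exception IDs whose expiry has passed or falls inside the window."""
    cutoff = today + timedelta(days=window_days)
    return [e["id"] for e in exceptions if e["expires"] <= cutoff]

open_exceptions = [
    {"id": "EXC-101", "expires": date(2024, 6, 1)},    # already expired
    {"id": "EXC-102", "expires": date(2024, 7, 1)},    # inside the window
    {"id": "EXC-103", "expires": date(2024, 12, 31)},  # still valid
]
print(exceptions_needing_review(open_exceptions, today=date(2024, 6, 20)))
# ['EXC-101', 'EXC-102']
```

Running this on a schedule and filing the output as tickets gives you the “periodic re-validation” evidence auditors ask for.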

6) Map ownership and recurring evidence (audit-readiness)

Assign:

  • Control owner: usually Security Engineering or SecOps.
  • Evidence owner: GRC can collect, but engineering must produce exports/screenshots/log samples.
  • System owners: app/cloud teams for workload egress routing and firewall policies.

A simple control mapping with “who provides what evidence and when” prevents the common SC-35 failure: missing implementation proof. 1

Where Daydream fits naturally: Use Daydream to map SC-35 to a named owner, a step-by-step implementation procedure, and a recurring evidence checklist so the control stays testable across quarters and system changes. 1

Required evidence and artifacts to retain

Keep an “SC-35 evidence pack” that answers: coverage, configuration, operations, and results.

Minimum artifacts (practical set):

  • Control narrative / procedure describing proactive identification for malicious code and websites, scope, and responsibilities. 1
  • Architecture / data flow diagram showing DNS, web proxy/SWG, email security, edge firewall/IPS placement.
  • Configuration evidence (exports or screenshots):
    • DNS threat blocking enabled; resolver enforcement settings.
    • SWG/proxy policies for URL reputation/category, download scanning, TLS inspection settings if applicable.
    • Firewall/IPS profiles and malware inspection policies.
    • Email security policies for URL/attachment scanning.
  • Coverage evidence:
    • Endpoint/network routing proof that traffic is forced through controls (sample device configs, VPN/ZTNA policy, egress route tables, cloud firewall route associations).
  • Log evidence:
    • Sample blocked events (malicious domains/URLs, malware detections) and where they land (SIEM queries or log archive locations).
  • Operational evidence:
    • Exception tickets with expiry and approvals.
    • Tuning/change records for policy updates.
    • Incident records tied to SC-35 detections (when they occur).
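
Log evidence is easier to present when it is summarized on a schedule rather than pulled raw during the audit. A minimal sketch that turns blocked events into a top-destinations summary; the record shape is an assumption, not a vendor log format:

```python
# Sketch: summarize blocked-event logs into the top-N blocked destinations for
# the evidence pack. The event record shape is an assumed, generic format.
from collections import Counter

def top_blocked(events: list[dict], n: int = 3) -> list[tuple[str, int]]:
    """Count 'block' actions per destination and return the n most frequent."""
    counts = Counter(e["dest"] for e in events if e["action"] == "block")
    return counts.most_common(n)

events = [
    {"dest": "bad.example", "action": "block"},
    {"dest": "bad.example", "action": "block"},
    {"dest": "evil.example", "action": "block"},
    {"dest": "ok.example", "action": "allow"},
]
print(top_blocked(events))  # [('bad.example', 2), ('evil.example', 1)]
```

The same summary feeds the tuning cadence: the top blocked destinations each quarter are exactly the list you review for false positives and allow-list candidates.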

Common exam/audit questions and hangups

Auditors tend to ask questions that expose coverage gaps:

  1. “Where is this implemented?”
    Be ready to show exact enforcement points for users, servers, and cloud workloads.

  2. “How do you know traffic can’t bypass it?”
    Expect scrutiny on split tunneling, local DNS settings, direct egress from cloud subnets, and unmanaged devices.

  3. “Show me evidence it’s proactive and ongoing.”
    They will look for logs, detections, tuning records, and alert triage, not a static policy.

  4. “How do you handle false positives and business needs?”
    If you cannot show time-bound exceptions with review, you will get findings for uncontrolled bypass.

Frequent implementation mistakes and how to avoid them

  • Relying only on endpoint AV/EDR. Why it fails: SC-35 explicitly targets network-based malicious code and malicious websites at system components, not only endpoints. 1 How to avoid it: keep EDR, but add DNS/SWG/email/edge controls and document the architecture.
  • “We have a firewall” with no malware/URL features enabled. Why it fails: presence is not operation. How to avoid it: export the security profiles and show active policies plus logs.
  • Partial coverage (only HQ network). Why it fails: remote users and cloud workloads still browse and beacon. How to avoid it: force DNS and web controls for remote devices; route workload egress through controlled points.
  • No exception governance. Why it fails: teams create informal bypasses. How to avoid it: require approvals, expiry dates, and periodic review; log all allow-listing.
  • Evidence gathered ad hoc during audit. Why it fails: produces gaps and inconsistencies. How to avoid it: create a recurring evidence schedule owned by named roles; track in Daydream. 1

Enforcement context and risk implications

No public enforcement cases were provided in the supplied sources for SC-35, so treat this as a baseline control expectation rather than a control with a single headline case. Operational risk is straightforward: if you do not identify malicious code and websites at the network layer, phishing links, drive-by downloads, and command-and-control callbacks have fewer barriers. That raises the likelihood that incidents become outages, data exposure, or reportable events, depending on your environment.

Practical 30/60/90-day execution plan

First 30 days (stabilize scope and minimum viable enforcement)

  • Assign SC-35 control owner and evidence owner; publish a short control procedure. 1
  • Document traffic paths: user web, DNS, email ingress, cloud egress.
  • Turn on or verify DNS threat blocking and logging at approved resolvers.
  • Stand up an evidence folder and begin capturing config exports and sample logs.

By 60 days (close bypass paths and connect to operations)

  • Enforce web filtering/SWG policies for managed endpoints, including remote users.
  • Implement an exception workflow with approvals and expirations.
  • Route cloud workload egress through controlled DNS/web/edge where feasible; document any technical constraints and compensating controls.
  • Send key detections to SIEM and confirm SecOps triage ownership.

By 90 days (harden, tune, and make it audit-repeatable)

  • Expand payload inspection: email detonation, download scanning, IPS malware profiles, and TLS inspection decisions documented.
  • Run a tabletop or controlled test (safe test URLs/domains appropriate to your tools) and retain the results as evidence of operation.
  • Establish recurring reviews: top blocks, false positives, allow-lists, and change tracking.
  • Convert the evidence pack into a recurring checklist (Daydream can track owners and collection cadence). 1

Frequently Asked Questions

Does SC-35 require TLS/SSL decryption?

SC-35 requires proactive identification of network-based malicious code and malicious websites, but the text does not mandate a specific method. 1 If you choose not to decrypt, document where visibility is limited and what compensating controls you apply.

Is DNS filtering alone sufficient for SC-35?

DNS filtering addresses “malicious websites” for domain-level access, but SC-35 also covers network-based malicious code. 1 Many programs pair DNS controls with SWG/email scanning and, where feasible, perimeter or egress malware inspection.

How do we show “proactively seek to identify” to an auditor?

Provide configuration evidence that detection and blocking features are enabled, plus logs showing detections over time and tickets/incidents that demonstrate triage. 1 A control narrative alone rarely passes.

What about unmanaged devices or partners on our network?

Treat them as a separate access class and enforce controls at shared choke points (guest networks, DNS resolvers, web gateways) where you have authority. Document any accepted exposure and the boundary controls you still apply. 1

How should we handle allow-listing business-critical sites that get blocked?

Use a time-bound exception with documented justification, approving authority, and an expiry date, then review renewals with security context. Keep the ticket and the policy change record in your evidence pack.

How do we operationalize SC-35 across multiple cloud accounts and regions?

Standardize patterns (central DNS, egress controls, logging to a central SIEM) and require application teams to inherit them through guardrails. Your evidence should show both the standard and spot-check proof from representative accounts.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream