SC-39(1): Hardware Separation
SC-39(1) requires you to use hardware-based separation to isolate processes, so that a compromise or failure in one workload cannot directly access or interfere with another. To operationalize it quickly: identify where software isolation is insufficient, choose hardware separation mechanisms (dedicated hosts, strong VM isolation, hardware enclaves), document the design decision, and retain implementation evidence tied to system scope.
Key takeaways:
- Hardware separation is about isolating processes using the platform, not only OS permissions or containers.
- Your “done” state includes both technical implementation and assessor-ready evidence mapped to owners and recurring artifacts.
- Scope matters: apply it where mixed-trust workloads, sensitive processing, or hostile multi-tenancy make software-only isolation a weak bet.
SC-39(1), Hardware Separation, is a control enhancement that pushes you beyond logical boundaries and into platform-enforced isolation. The regulatory text is short, but the operational decisions are not: you must decide which workloads need stronger isolation, which hardware mechanisms you will rely on, and how you will prove it to an assessor without hand-waving.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat SC-39(1) as a scoping and evidence problem as much as a technical one. Your job is to (1) define when hardware separation is required in your environment, (2) ensure engineering has an approved pattern to meet it, and (3) make the control assessable with repeatable artifacts.
This page focuses on quick operationalization: applicability, step-by-step implementation, evidence to retain, common audit questions, and common missteps. It assumes you are implementing NIST SP 800-53 Rev. 5 in a federal information system or a contractor system that handles federal data.
Regulatory text
Requirement (verbatim): “Implement hardware separation mechanisms to facilitate process isolation.”
What the operator must do: put an architectural boundary in place where the separation is enforced by hardware (or hardware-assisted mechanisms), so processes in one isolation domain cannot read, write, or interfere with processes in another domain through shared compute resources. This generally means choosing a compute pattern where isolation is enforced by the hypervisor/CPU/firmware boundary (or dedicated physical hardware), not relying only on OS user permissions, container boundaries, or application-level checks.
Plain-English interpretation of the SC-39(1) hardware separation requirement
You need stronger isolation than “we run different services on the same box and trust Linux permissions.” SC-39(1) expects you to separate processes using hardware-enforced boundaries so one workload can’t break out and touch another workload’s memory, CPU state, or execution environment.
Think in terms of isolation domains:
- Domain A: regulated/sensitive process (e.g., handles federal data)
- Domain B: lower-trust process (e.g., dev tools, batch jobs, third-party agents)
SC-39(1) pushes you to avoid co-residency of these domains without a hardware-backed boundary.
Who it applies to (entity and operational context)
Entities
- Federal information systems implementing NIST SP 800-53 controls.
- Contractor systems handling federal data (including systems in cloud environments) where NIST SP 800-53 is imposed by contract, overlay, or authorization boundary.
Operational contexts where assessors expect you to discuss SC-39(1)
- Mixed-trust multi-tenancy: sensitive workloads share compute with less-trusted workloads (internal or third party).
- Privileged tooling on shared hosts: EDR agents, monitoring collectors, CI/CD runners, backup agents.
- High-risk processing: authentication/identity services, key management functions, data processing pipelines with regulated data.
- Cloud designs that assume “namespace separation” or “container isolation” is enough without demonstrating hardware-backed boundaries.
What you actually need to do (step-by-step)
1) Assign ownership and write the “when required” rule
- Control owner: usually Infrastructure/SRE or Cloud Platform Engineering, with Security Architecture as approver and GRC as evidence owner.
- Decision rule: define when you must use hardware separation. Keep it simple enough that engineering can apply it consistently.
- Example rule format: “Workloads of trust level X may not share a compute host with trust level Y unless the separation mechanism is hardware-enforced and approved.”
Deliverable: SC-39(1) control implementation statement with owner, scope, and decision rule.
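A decision rule like the example above is easiest to apply consistently when it is encoded rather than prose-only. A minimal sketch in Python; the trust levels and pairing policy below are illustrative assumptions, not values defined by SC-39(1):

```python
# Sketch: the co-residency decision rule as a pure function.
# Trust levels and the pairing policy are illustrative examples.

ALLOWED_CO_RESIDENCY = {
    # (trust level, trust level) in sorted order -> allowed on the same host?
    ("internal", "internal"): True,
    ("regulated", "regulated"): True,
    ("third-party", "third-party"): True,
    ("internal", "regulated"): False,      # requires hardware separation
    ("internal", "third-party"): False,
    ("regulated", "third-party"): False,
}

def may_share_host(trust_a: str, trust_b: str, hardware_separated: bool = False) -> bool:
    """Return True if two workloads may share a compute host.

    Mixed-trust pairs are allowed only when an approved hardware
    separation mechanism is in place (hardware_separated=True).
    """
    key = tuple(sorted((trust_a, trust_b)))
    allowed = ALLOWED_CO_RESIDENCY.get(key, False)
    return allowed or hardware_separated
```

The point of the table-plus-function shape is that GRC owns the table (the policy) while engineering calls the function from provisioning tooling, so the rule is applied the same way everywhere.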
2) Identify in-scope systems and isolation domains
Create (or reuse) a system inventory and draw boundaries:
- Which systems/processes handle federal data?
- Which hosts/clusters run both regulated and non-regulated workloads?
- Which execution environments are shared (bare metal, hypervisor, Kubernetes nodes, serverless runtime)?
Technique that works in practice: produce a one-page “co-residency map” listing each compute pool and the workload trust levels allowed on it.
Deliverable: scoped list of compute pools and their allowed workload classes.
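The co-residency map can start as nothing more than a script over your inventory. A hedged sketch, where the inventory format (host pool plus trust label per workload) is a hypothetical example you would feed from your CMDB, cloud tags, or cluster API:

```python
from collections import defaultdict

# Illustrative inventory; in practice this comes from your CMDB or cloud APIs.
inventory = [
    {"workload": "auth-service", "host_pool": "pool-a", "trust": "regulated"},
    {"workload": "ci-runner",    "host_pool": "pool-a", "trust": "third-party"},
    {"workload": "batch-etl",    "host_pool": "pool-b", "trust": "internal"},
]

def co_residency_map(inventory):
    """Group workload trust levels by compute pool and flag mixed-trust pools."""
    pools = defaultdict(set)
    for item in inventory:
        pools[item["host_pool"]].add(item["trust"])
    return {
        pool: {"trust_levels": sorted(levels), "mixed_trust": len(levels) > 1}
        for pool, levels in pools.items()
    }

report = co_residency_map(inventory)
# pool-a mixes regulated and third-party workloads -> review candidate
```

Pools flagged `mixed_trust` are exactly the rows that belong on the one-page map with a disposition: separate, approve with hardware separation, or record an exception.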
3) Choose the hardware separation mechanism(s) you will standardize on
Pick approved patterns, not one-off solutions. Common patterns that typically satisfy the intent:
- Dedicated physical hardware for the sensitive workload (no co-residency).
- Virtualization with strong isolation (hypervisor-enforced boundaries), where you can show the regulated workload runs in a separate VM/host group and does not share the same OS instance with lower-trust processes.
- Hardware-backed secure enclaves / TEEs where applicable to isolate specific processes (only if you can operationalize key management, attestation, and monitoring).
Your standard should specify:
- Which pattern is approved for which workload category.
- How you prevent drift (policy-as-code, provisioning controls, cluster admission rules).
- Who can grant exceptions and how they are time-bounded.
Deliverable: an “Approved Hardware Separation Patterns” standard owned by platform/security architecture.
4) Enforce separation in provisioning and scheduling (not in a wiki)
Auditors will look for enforcement, not guidance. Implement controls that make noncompliant placement hard:
- Separate accounts/subscriptions/projects per trust domain (where it supports isolation goals).
- Separate clusters/node pools for regulated workloads.
- Host affinity/anti-affinity rules.
- Restrictions on running privileged or third-party agents on sensitive compute pools.
- Change control gates for adding new workloads to sensitive pools.
Deliverable: technical guardrails (configurations, IaC modules, admission policies) plus change records showing they’re in use.
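One concrete shape for such a guardrail is an admission-style check run at deploy time (for example, from a CI gate or an admission webhook). A minimal sketch; the pool allowlist and names are hypothetical policy, not part of the control text:

```python
# Sketch: a deploy-time placement gate. The pool allowlist is an
# illustrative policy; real enforcement would live in an admission
# webhook, CI gate, or provisioning pipeline.

POOL_ALLOWED_TRUST = {
    "regulated-pool": {"regulated"},            # dedicated pool, no mixing
    "general-pool":   {"internal", "third-party"},
}

class PlacementDenied(Exception):
    """Raised when a workload's trust level is not allowed on the target pool."""

def admit(workload: str, trust: str, target_pool: str) -> None:
    """Reject placement unless the workload's trust level is allowed on the pool."""
    allowed = POOL_ALLOWED_TRUST.get(target_pool, set())
    if trust not in allowed:
        raise PlacementDenied(
            f"{workload} (trust={trust}) may not be scheduled on {target_pool}; "
            f"allowed trust levels: {sorted(allowed)}"
        )
```

Failing closed (unknown pools allow nothing) matters here: a new compute pool should have to be added to the policy before anything can land on it.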
5) Validate isolation and document the rationale
Perform a practical validation:
- Confirm the sensitive process executes only in approved isolation domains.
- Confirm lower-trust processes cannot be scheduled onto the same underlying compute where prohibited.
- Confirm the admin access model doesn’t collapse domains (e.g., the same human or automation having unrestricted host access across all domains without compensating controls).
Document:
- What is separated from what.
- What mechanism enforces it.
- Known limitations and how you mitigate them.
Deliverable: SC-39(1) validation memo or test record tied to each in-scope system.
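The first two validation checks also lend themselves to a repeatable detective test rather than a one-time review. A minimal sketch, where the placement data structure and domain names are illustrative assumptions (in practice fed from your scheduler or cloud inventory API):

```python
# Sketch: detective check that regulated workloads run only in approved
# isolation domains. Placement data and domain names are illustrative.

APPROVED_DOMAINS = {"dedicated-hosts", "regulated-vm-group"}

def validate_isolation(placements, approved_domains=APPROVED_DOMAINS):
    """Return regulated workloads running outside approved isolation domains."""
    violations = [
        workload
        for workload, info in placements.items()
        if info["trust"] == "regulated" and info["domain"] not in approved_domains
    ]
    return sorted(violations)

placements = {
    "auth-service": {"trust": "regulated", "domain": "regulated-vm-group"},
    "legacy-app":   {"trust": "regulated", "domain": "shared-pool"},
    "batch-etl":    {"trust": "internal",  "domain": "shared-pool"},
}
# validate_isolation(placements) flags legacy-app as a violation
```

Running a check like this on a schedule turns the validation memo from a point-in-time artifact into evidence the boundary is still holding.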
6) Operationalize recurring evidence (assessment readiness)
SC-39(1) often fails in audits because teams can’t show repeatable proof. Build an evidence cadence:
- periodic exports of cluster/node pool configuration
- screenshots or config outputs showing dedicated hosts / VM placement rules
- exception register with approvals and expiry
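The exception register stays honest when expiry is checked mechanically as part of the evidence cadence. A minimal sketch; the register schema (field names below) is an illustrative assumption:

```python
from datetime import date

# Sketch: flag expired and soon-to-expire co-residency exceptions.
# The register fields are illustrative, not a prescribed schema.
def review_exceptions(register, today, warn_days=30):
    """Partition exceptions into expired vs. expiring-soon for control testing."""
    expired, expiring = [], []
    for exc in register:
        remaining = (exc["expires"] - today).days
        if remaining < 0:
            expired.append(exc["id"])
        elif remaining <= warn_days:
            expiring.append(exc["id"])
    return {"expired": expired, "expiring_soon": expiring}

register = [
    {"id": "EXC-001", "approver": "CISO", "expires": date(2024, 1, 31)},
    {"id": "EXC-002", "approver": "CISO", "expires": date(2024, 6, 30)},
]
result = review_exceptions(register, today=date(2024, 6, 10))
```

The output feeds directly into the recurring control test: expired entries are findings, expiring entries get re-approved or remediated before the next cycle.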
Daydream can help by mapping SC-39(1) to a named control owner, a short implementation procedure, and a recurring evidence checklist so your team produces the same artifacts every time without rebuilding the story for each assessment.
Required evidence and artifacts to retain
Keep artifacts that prove design, implementation, and operation:
Design
- SC-39(1) control statement (scope, owner, decision rule).
- Architecture diagrams showing isolation domains and allowed data flows.
- Approved patterns standard (dedicated host, VM isolation, enclave pattern) with approval history.
Implementation
- Infrastructure-as-code modules or configuration baselines that create separate compute pools.
- Cloud/virtualization configuration exports showing host groups, dedicated instances, node pools, scheduling constraints.
- Change tickets/PRs that introduced guardrails.
Operation
- Periodic configuration snapshots demonstrating continued separation.
- Exception register (who approved, why, compensating controls, expiration).
- Access control lists for hypervisor/host admin roles relevant to separation.
Common exam/audit questions and hangups
Expect questions like:
- “Show me where the hardware separation is enforced, not described.”
- “Which workloads share the same physical hosts? Prove your answer.”
- “Are containers considered hardware separation in your environment? If yes, show the hardware-backed boundary that enforces isolation.”
- “How do you prevent a new service from being scheduled onto the regulated node pool?”
- “Who can administer the hosts/hypervisor, and does that break isolation expectations?”
Hangups that cause findings:
- No written decision rule, so scoping is inconsistent.
- Evidence is screenshots from months ago with no recurring proof.
- “We use Kubernetes namespaces” presented as isolation without showing host-level separation.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating OS permissions as “hardware separation.” Avoidance: require a hardware-backed boundary for mixed-trust workloads (separate hosts or VMs with enforced placement rules).
- Mistake: One compliant enclave inside a noncompliant compute pool. Avoidance: separate the pool first, then add workload-specific protections. Document why the boundary meets the intent for process isolation.
- Mistake: No guardrails; only diagrams. Avoidance: implement scheduling/provisioning constraints that block noncompliant co-residency by default.
- Mistake: Exceptions become permanent. Avoidance: keep an exception register with expiration and a named approver; review exceptions as part of your control testing.
- Mistake: Third-party agents installed everywhere. Avoidance: define which third-party software may run on regulated hosts, require security review, and isolate collection where feasible.
Risk implications (why assessors care)
SC-39(1) reduces the blast radius of process compromise. Without hardware separation, a single host-level weakness, misconfiguration, or privileged process can become a pathway into regulated workloads. In regulated environments, this becomes an authorization and boundary integrity issue: you cannot credibly claim isolation if lower-trust code executes in the same execution environment without a strong boundary.
Practical execution plan (30/60/90-day plan)
Because this requirement set does not provide time-bound mandates, treat the plan as phased execution you can adapt to your system lifecycle and release constraints.
First 30 days (Immediate)
- Assign SC-39(1) control owner and approver; write the decision rule and scope statement.
- Build the co-residency map for in-scope systems (where workloads run today).
- Choose standard patterns (dedicated hosts, hypervisor-enforced isolation) and document them.
- Identify the highest-risk shared pools and open remediation work items.
By 60 days (Near-term)
- Implement technical guardrails for at least one priority environment (prod first if feasible): separate pools, placement constraints, restricted admin access.
- Stand up an exception workflow and register.
- Produce the first evidence package: configs, diagrams, and validation notes tied to system boundaries.
By 90 days (Operationalize)
- Expand guardrails across remaining in-scope environments.
- Add recurring evidence capture to your control testing program (pull configs, review exceptions, confirm placement constraints still active).
- Run an internal assessment dry-run: have someone not involved in implementation try to “prove SC-39(1)” using only retained artifacts.
Frequently Asked Questions
Does virtualization count for SC-39(1) hardware separation?
It can, if the separation is enforced by the hypervisor/CPU boundary and you can prove regulated processes run in a distinct isolation domain with enforced placement controls. If your proof relies on “same host, different container,” expect pushback under SC-39(1).
Are Kubernetes namespaces or network policies enough?
Usually not for SC-39(1) because they are logical controls inside a shared OS and shared worker nodes. Use separate node pools or dedicated hosts for regulated workloads, then layer namespaces and network controls on top.
What’s the minimum evidence an auditor will accept?
A written implementation statement, diagrams showing isolation domains, and system-generated configuration evidence that the separation is enforced in production. Add an exception register if you allow any mixed-trust co-residency.
How do we handle third-party agents (monitoring/EDR) on isolated hosts?
Treat them as workloads that may cross trust boundaries. Maintain an allowlist for regulated compute pools, require review before installation, and document the rationale and access model for any agent that runs with elevated privileges.
What if we can’t separate all workloads due to legacy constraints?
Document a time-bound exception with compensating controls and a migration plan to an approved hardware separation pattern. Keep the exception explicit, approved, and easy to audit.
How should we describe SC-39(1) in the SSP or control narrative?
Write a short statement that names the isolation domains, the hardware separation mechanism, and how you enforce placement and prevent drift. Then list the exact artifacts you will provide during assessment.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream