SC-39: Process Isolation
SC-39 requires you to keep each executing system process in a separate execution domain so one process cannot directly read, write, or interfere with another process’s memory or resources. To operationalize it fast, define where process isolation is required, implement OS/container/runtime isolation patterns, and retain configuration and test evidence that proves isolation works in production. 1
Key takeaways:
- Scope SC-39 to systems where process-to-process interference would create security impact, then standardize isolation patterns by platform.
- Your “pass” depends on enforceable technical controls (not policy text) plus repeatable evidence: configs, hardening baselines, and test results.
- Auditors will probe exceptions (shared service accounts, privileged processes, “break-glass,” legacy hosts) more than your target-state design.
SC-39's process isolation requirement is one of those controls that looks simple in one sentence and becomes messy in real operations. Most environments run thousands of processes across endpoints, servers, containers, and managed platforms. The compliance job is not to isolate “everything equally.” The job is to prove you maintain separate execution domains in the places where a process compromise could laterally affect other workloads, tenants, or sensitive data.
For a CCO, GRC lead, or security compliance operator, the fastest path is to translate SC-39 into a small set of approved engineering patterns and a small set of artifacts you can collect repeatedly. This page gives you requirement-level guidance: how to define scope, what to implement on Windows/Linux/container platforms, how to handle exceptions, and what evidence an assessor will accept.
If you need to coordinate across infrastructure, application engineering, and third parties, treat SC-39 as a “platform control” with measurable deployment coverage and a clear owner. Daydream can help by mapping SC-39 to an owner, a concrete procedure, and recurring evidence artifacts so you do not rebuild the same proof package each assessment cycle. 1
Regulatory text
Requirement (SC-39): “Maintain a separate execution domain for each executing system process.” 1
Operator meaning (what you must do):
- Ensure the runtime environment enforces boundaries so one process cannot directly access another process’s memory space, file descriptors, IPC mechanisms, or privileged OS objects outside explicit, controlled interfaces.
- Prefer isolation that is enforced by the OS kernel, hypervisor, or container runtime configuration, not by application conventions alone.
- Document the mechanism used per platform and prove it is enabled and operating as intended in production. 1
Plain-English interpretation
SC-39 is about preventing “process breaks process.” If a low-privileged or internet-facing process is compromised, process isolation reduces the chance the attacker can:
- read secrets from another process’s memory,
- tamper with a higher-privilege process,
- or escape into adjacent workloads running on the same host.
In practice, SC-39 is satisfied by standard OS process isolation plus hardening that prevents common isolation bypasses (overbroad privileges, unsafe IPC, shared identities, debug permissions, permissive container settings). Your proof should show that isolation is designed, consistently implemented, and monitored for drift. 1
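The core mechanism behind all of this is kernel-enforced address-space separation: each process gets its own memory, so a write in one process is invisible to another. A minimal illustration using only the Python standard library (the variable name is just for the demo):

```python
import multiprocessing as mp

counter = 0  # lives in this process's private address space

def bump():
    # Runs in a separate process with its own memory: this modifies
    # the child's copy of `counter`, not the parent's.
    global counter
    counter += 1

if __name__ == "__main__":
    p = mp.Process(target=bump)
    p.start()
    p.join()
    # The parent's counter is untouched; the two processes are
    # separate execution domains.
    print(counter)  # → 0
```

This is the baseline the kernel gives you for free. The hardening described below exists to close the bypass paths (debug rights, shared identities, permissive container modes) around that baseline.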
Who it applies to
Entities
- Federal information systems and contractor systems that handle federal data where NIST SP 800-53 is the controlling framework. 1
Operational context (where assessors focus)
- Multi-tenant or shared-host environments (container clusters, shared VMs, shared jump hosts).
- Systems processing sensitive or regulated data where lateral movement between processes changes impact.
- High-privilege service hosts (directory services, IAM components, CI/CD runners).
- Third-party hosted platforms where you still configure runtime isolation (Kubernetes, PaaS buildpacks, managed container services). You cannot outsource the requirement; you can only inherit parts of it.
What you actually need to do (step-by-step)
1) Assign ownership and define “execution domain” for your environment
- Control owner: usually Infrastructure/SRE for host and container runtime isolation; AppSec for application runtime controls; GRC for evidence packaging.
- Definition statement: write a one-page control implementation statement describing what constitutes an execution domain for:
- OS processes (kernel-enforced address space separation, user separation, permissions),
- containers (namespaces/cgroups plus runtime profiles),
- VMs/hypervisors (guest OS boundary),
- serverless (provider runtime boundary, plus your config constraints).
- Success criteria: “Production workloads run with least privilege; inter-process access is only through approved channels; exceptions are documented and time-bound.”
Deliverable: SC-39 implementation standard (1–2 pages) that is technical and testable.
2) Inventory scope and tier systems by isolation sensitivity
Create a simple table for in-scope systems:
| System/workload | Platform | Data sensitivity | Multi-tenant? | Isolation mechanism | Owner | Exceptions |
|---|---|---|---|---|---|---|
Focus first on:
- internet-exposed services,
- hosts that run multiple critical services together,
- container platforms,
- shared admin tooling.
Deliverable: SC-39 system scope register mapped to your system inventory/CMDB.
3) Standardize approved isolation patterns by platform
Pick patterns engineers can follow without debates.
Linux/Unix hosts
- Enforce least privilege: distinct service accounts per service, no shared “app” account for unrelated processes.
- Restrict process inspection: disable/limit ptrace where appropriate, control `/proc` exposure, restrict debugging tools to admin groups.
- Systemd hardening (where applicable): use unit-level controls such as `NoNewPrivileges`, `PrivateTmp`, `ProtectSystem`, `ProtectHome`, `RestrictAddressFamilies`, and `CapabilityBoundingSet` for services that do not need broad capabilities.
- Mandatory Access Control (where feasible): SELinux/AppArmor profiles for high-risk services.
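As a concrete sketch, those unit-level controls could be applied as a drop-in override for a hypothetical service; the service name, path, and allowed capabilities below are placeholders, and each directive should be validated against what the service actually needs before rollout:

```ini
# /etc/systemd/system/myapp.service.d/hardening.conf  (hypothetical path)
[Service]
# Prevent the service and its children from gaining new privileges
NoNewPrivileges=yes
# Give the service a private /tmp, invisible to other processes
PrivateTmp=yes
# Mount most of the file system hierarchy read-only for this service
ProtectSystem=strict
# Hide user home directories from the service
ProtectHome=yes
# Limit socket families to what the service actually uses
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
# Drop all capabilities except those explicitly required (example only)
CapabilityBoundingSet=CAP_NET_BIND_SERVICE
```

Baking a template like this into golden images makes the setting auditable: the evidence is the unit file plus `systemctl show` output, not a screenshot.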
Windows hosts
- Separate services by identity and privilege: distinct service accounts; avoid running multiple services as LocalSystem unless justified.
- Apply OS protections that reduce cross-process tampering where applicable (for example, protected process light in supported scenarios, exploit protection policies), and restrict debug rights.
Containers (Kubernetes or similar)
- Require non-root containers; drop Linux capabilities by default; add only what is required.
- Use runtime confinement (seccomp/AppArmor) and disallow privileged containers except by explicit exception.
- Enforce namespace isolation defaults; limit hostPath mounts; block host networking unless justified.
- Apply NetworkPolicies to prevent unexpected east-west access; while network policy is not “process isolation,” auditors often treat it as supporting control evidence for lateral-movement reduction.
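A workload that satisfies these defaults might look like the following sketch; the names and image are placeholders, and the seccomp and capability settings assume your cluster's admission policy is configured to require them:

```yaml
# Hypothetical pod spec illustrating the container baseline above.
apiVersion: v1
kind: Pod
metadata:
  name: example-app            # placeholder name
spec:
  hostNetwork: false           # no host networking without an exception
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        runAsNonRoot: true               # required: non-root container
        allowPrivilegeEscalation: false
        privileged: false                # privileged mode only via exception
        capabilities:
          drop: ["ALL"]                  # drop everything, add back as needed
        seccompProfile:
          type: RuntimeDefault           # runtime confinement baseline
      volumeMounts: []                   # no hostPath mounts by default
```

Keeping a manifest like this in the evidence binder, next to the admission policy that enforces it, covers both the “design” and “enforcement” questions an assessor will ask.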
Application runtime
- Run separate processes for separate trust zones (for example, do not run admin tasks inside the same long-lived process that handles untrusted input unless you have a strong reason).
- Prefer memory-safe isolation boundaries (separate processes) for plugins/extensions that handle third-party code.
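For plugin-style code, the simplest isolation boundary is a separate OS process: the plugin cannot read the host process's memory, and the host can bound its runtime. A hedged sketch, where the plugin code is a stand-in for real third-party code:

```python
import subprocess
import sys

# Run a hypothetical third-party "plugin" in its own process. Its address
# space is separate from ours, and we can cap how long it may run.
plugin_code = "print('plugin result')"  # stand-in for untrusted plugin code

result = subprocess.run(
    [sys.executable, "-c", plugin_code],
    capture_output=True,
    text=True,
    timeout=10,  # bound the plugin's runtime
)
print(result.stdout.strip())  # → plugin result
```

A production version would also drop privileges, restrict the environment, and apply resource limits, but the process boundary itself is what SC-39 is asking for.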
Deliverable: Platform baseline requirements (host, container, runtime) with “must/should/exception” language.
4) Implement enforcement: guardrails, not guidance
Assessors look for enforceable controls:
- Policy-as-code: admission controls for Kubernetes (disallow privileged, require runAsNonRoot, require seccomp profile).
- Golden images: hardened AMIs/images with systemd hardening and baseline settings baked in.
- Configuration management: drift detection and remediation for runtime isolation settings.
- CI/CD checks: block insecure container settings and unsafe runtime flags.
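As an illustration, a CI/CD check along these lines can be a few dozen lines of code. This sketch uses assumed policy rules against a parsed pod spec; in production you would express the same rules in an admission controller or policy engine rather than hand-rolled code:

```python
def find_violations(pod_spec: dict) -> list[str]:
    """Flag container settings that would violate an SC-39 baseline (assumed rules)."""
    violations = []
    for container in pod_spec.get("containers", []):
        name = container.get("name", "<unnamed>")
        sc = container.get("securityContext", {})
        if sc.get("privileged"):
            violations.append(f"{name}: privileged container not allowed")
        if not sc.get("runAsNonRoot"):
            violations.append(f"{name}: must set runAsNonRoot: true")
        caps = sc.get("capabilities", {})
        if caps.get("drop") != ["ALL"]:
            violations.append(f"{name}: must drop ALL capabilities by default")
    if pod_spec.get("hostNetwork"):
        violations.append("pod: hostNetwork requires a documented exception")
    return violations

pod = {
    "hostNetwork": False,
    "containers": [{
        "name": "web",
        "securityContext": {
            "runAsNonRoot": True,
            "capabilities": {"drop": ["ALL"]},
        },
    }],
}
print(find_violations(pod))  # → []
```

The audit-relevant artifact is not the script itself but the record that noncompliant deployments were actually blocked by it.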
Deliverable: Evidence that guardrails exist and block noncompliant deployments (config + sample blocked deployment record if available).
5) Test that isolation works (and keep the test repeatable)
You do not need exotic research-grade testing; you need repeatable checks that show boundaries are in place:
- Host checks: verify service accounts, privileges, debug rights, and hardening settings.
- Container checks: verify pod security settings, capabilities, privileged flags, seccomp/AppArmor status.
- Validation exercises: attempt a controlled “breakout” pattern (for example, disallowed host mounts) in a non-production environment to prove guardrails stop it.
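For host checks, even a small script beats screenshots because it is repeatable. This Linux-only sketch (it assumes a procfs mounted at `/proc`) reads a process's UID straight from the kernel's view, which a real check would run across all service PIDs to flag anything running as root without an exception:

```python
import os

def process_uid(pid: int) -> int:
    """Return the real UID of a process by reading /proc/<pid>/status (Linux only)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("Uid:"):
                # Line format: "Uid:  real  effective  saved  fs"
                return int(line.split()[1])
    raise ValueError(f"no Uid line for pid {pid}")

# Demo: check our own process; a real check would iterate over service PIDs
# and flag any that run as UID 0 without a documented exception.
uid = process_uid(os.getpid())
print(uid == os.getuid())  # → True
```

Saving the script and its timestamped output per run gives you the “last-run results” deliverable with no extra effort.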
Deliverable: Test procedure + last-run results + issue tickets for failures.
6) Manage exceptions explicitly
Common valid exceptions: legacy apps requiring shared accounts, vendor appliances, privileged monitoring agents.
- Require: business justification, compensating controls, approval, and a time-bound remediation plan.
- Track exceptions centrally and review them on a set cadence you already use for risk acceptances.
Deliverable: SC-39 exception register entries with approvals and expiration dates.
Required evidence and artifacts to retain
Keep artifacts that show design, implementation, and operating effectiveness:
Design / governance
- SC-39 control statement and platform standards (what isolation means in your environment). 1
- RACI showing control owner(s) and approvers.
Technical configuration evidence
- Kubernetes policies (admission controller constraints, Pod Security configuration, seccomp/AppArmor policy references).
- Sample workload manifests showing compliant settings (non-root, dropped capabilities, no privileged).
- Host hardening baselines (systemd unit templates, SELinux/AppArmor status evidence, Windows service identity configuration approach).
- Golden image build pipeline outputs showing isolation settings baked in.
Operational evidence
- Recent compliance scan results (container posture, host posture) and ticketing evidence for remediation.
- Exception register and approvals.
- Change management records for baseline updates.
Daydream-ready packaging
- A single SC-39 evidence checklist mapped to each platform baseline and a monthly/quarterly evidence pull. Daydream is useful here because it turns “collect screenshots and hope” into a defined evidence routine with owners and due dates. 1
Common exam/audit questions and hangups
- “Define execution domain in your architecture.” Expect to explain process vs container vs VM boundaries in plain language.
- “Show me enforcement.” Auditors prefer preventative guardrails over detective reports.
- “How do you stop privileged containers or shared service accounts?” If you allow them, show exception governance.
- “What about managed services?” Show your configuration constraints and inherited responsibility notes.
- “Prove it’s consistent.” One system example is not enough; show coverage reporting across fleets.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating default OS separation as “done,” with no hardening. Fix: document your minimum hardening posture for high-risk services and enforce it via images/config management.
- Mistake: Allowing broad debug/inspection rights in production. Fix: restrict debugging permissions to a small admin group and require break-glass workflow.
- Mistake: Privileged pods are allowed in Kubernetes “temporarily” and never go away. Fix: enforce admission policies, then manage exceptions with expirations.
- Mistake: One shared service account runs unrelated services. Fix: separate identities; it is one of the fastest wins for isolation and audit clarity.
- Mistake: Evidence is ad hoc. Fix: create a recurring evidence calendar and a single control binder for SC-39.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for SC-39. Treat this as an assessment and breach-impact control: poor process isolation increases the blast radius of a single-process compromise and complicates incident containment. In federal and contractor environments, SC-39 gaps frequently surface as assessment findings because they are visible in configurations, images, and Kubernetes policies. 1
Practical 30/60/90-day execution plan
First 30 days (stabilize scope and standards)
- Name the SC-39 control owner(s) and publish a one-page control implementation statement. 1
- Inventory in-scope platforms: hosts, clusters, and managed runtimes; identify high-risk shared hosts.
- Draft platform baselines (Linux/Windows/container) with clear “required” settings and an exception process.
- Stand up an SC-39 evidence checklist in Daydream so every artifact has an owner and refresh cadence. 1
Days 31–60 (enforce guardrails and start evidence collection)
- Implement preventative controls where the blast radius is highest (container admission policies; golden image updates for key server tiers).
- Separate service identities for the most critical services; remove unnecessary privileges and capabilities.
- Run first compliance scans; open tickets for deviations; document compensating controls for exceptions.
- Collect first evidence set: configs, baselines, scan outputs, and exception records.
Days 61–90 (prove operating effectiveness and close gaps)
- Expand enforcement coverage across remaining tiers and clusters.
- Add repeatable tests (pre-prod “policy block” validation, drift detection reports).
- Review and reduce exceptions; require time-bound remediation plans.
- Prepare your assessor packet: control statement, scope, baselines, enforcement configs, and latest operating evidence.
Frequently Asked Questions
Does SC-39 require one VM per process?
No. SC-39 requires separate execution domains for processes, which is commonly provided by OS process separation and strengthened through hardening and least privilege. Use VMs or separate hosts where your risk model requires stronger isolation. 1
Are containers “separate execution domains” by themselves?
Containers can provide process isolation through kernel features, but you must configure and enforce them safely (non-root, least privileges, confinement profiles). Auditors will expect proof that insecure container modes are blocked or tightly governed. 1
What’s the fastest evidence to collect for an audit?
Start with your platform baseline documents, the enforcement configurations (golden image settings or admission policies), and a current scan/report showing compliant settings across a representative set of production assets. Pair that with an exception register. 1
How do we handle third-party hosted platforms where we can’t see the underlying OS?
Document what isolation controls are inherited from the provider and what you still configure (tenant isolation settings, runtime configuration constraints, identity/privilege separation). Keep contracts or provider attestations in your third-party due diligence file if available. 1
Our legacy app needs to run as root/admin. Is that automatically noncompliant?
It is a high-friction exception. Record a formal exception with compensating controls (host dedicated to that app, tighter network controls, monitoring, restricted admin access) and a remediation plan to reduce privilege over time. 1
How should a GRC team track SC-39 without living in engineering tools?
Track SC-39 as a platform control with a clear owner, a standard procedure, and recurring evidence pulls (baselines, policy configs, scan exports, exception review). Daydream works well for assigning ownership and keeping the evidence set current for assessments. 1
Footnotes
1. NIST SP 800-53 Rev. 5 OSCAL JSON.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream