Process Isolation

To meet the FedRAMP Moderate process isolation requirement, you must run each system process in a separate execution domain so a compromise or fault in one process cannot directly read, write, or control another process’s memory or privileges. In practice, you implement this with OS/hypervisor/container isolation, least privilege, and hard boundaries validated by configuration evidence and testing. 1

Key takeaways:

  • You need enforceable technical boundaries between processes, not just “separate apps” or separate teams. 1
  • The control lives in engineering choices (OS, containers, virtualization, sandboxing) and in repeatable configuration management evidence. 1
  • Auditors will look for isolation-by-design plus proof: hardened builds, runtime settings, and tests that show containment. 1

Process isolation is a requirement-level control that prevents one running process from interfering with another. Under FedRAMP Moderate, SC-39 pushes you to design execution so that routine software failures and security incidents stay contained. “Separate execution domain” is the key phrase: it means processes get boundaries enforced by the platform, not by convention or application code alone. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing SC-39 is to translate it into specific engineering patterns and the evidence package an assessor expects: what isolation mechanisms exist (VM boundaries, container boundaries, OS user/process separation, sandboxing), where they are required (multi-tenant planes, management interfaces, sensitive workloads), and how you prove they are consistently deployed (golden images, baseline configs, automated checks, and exceptions). 1

This page gives requirement-level implementation guidance you can hand to engineering and verify through artifacts. It focuses on what to implement, what to collect, and what typically fails during review.

Regulatory text

Requirement (SC-39): “Maintain a separate execution domain for each executing system process.” 1

Operator meaning: You must ensure each process executes with isolation controls that prevent it from directly accessing or modifying the execution state of other processes (memory, CPU context, privileged resources) except through explicitly controlled interfaces. You meet the requirement through technical enforcement at the operating system, hypervisor, and runtime layers, backed by configuration baselines and verification. 1

Plain-English interpretation (what “process isolation” really requires)

Process isolation means compromise containment. If an attacker gets code execution in Process A, your architecture should prevent them from:

  • Reading Process B’s memory or secrets directly
  • Injecting code into Process B
  • Escalating to Process B’s privileges by default
  • Using shared execution context as a shortcut to system-level control

In practical terms, you implement separation using one or more of these mechanisms:

  • OS process separation (separate address spaces, permissions, user IDs, kernel controls)
  • Virtualization (separate VMs for stronger isolation)
  • Container isolation (namespaces, cgroups, seccomp/AppArmor/SELinux, and strict capability sets)
  • Sandboxing / MAC (mandatory access controls and policy-based execution limits)

A common compliance trap: running microservices in separate containers but giving them broad host access, shared privileges, or shared secrets in a way that collapses the boundary. The question to ask is: “If this process is hostile, what can it touch?”

Who it applies to (entity and operational context)

SC-39 applies to:

  • Cloud Service Providers operating a FedRAMP Moderate system boundary 1
  • Federal Agencies deploying or operating systems under the same baseline expectations 1

Operationally, prioritize process isolation wherever one of these is true:

  • Multi-tenant components: anything serving multiple customers/agencies in the same runtime or host class
  • Privileged control planes: management APIs, admin consoles, orchestration, CI/CD runners
  • Sensitive data processors: services that handle secrets, tokens, encryption material, regulated data types
  • Third-party code execution surfaces: plugins, extensions, embedded scripting, document parsers, image converters

What you actually need to do (step-by-step)

1) Define “execution domain” for your environment

Create a short internal standard that maps SC-39 to your compute stack:

  • For VMs: execution domain = VM boundary + guest OS process separation
  • For containers: execution domain = container boundary plus enforced kernel controls (no privileged containers; minimal capabilities)
  • For serverless: execution domain = provider runtime boundary plus your IAM/runtime constraints (document what is and is not under your control)

Deliverable: “Process Isolation Standard” (one page) that engineering can follow and auditors can read.
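To make the standard unambiguous, it can help to encode the mapping itself as data so engineering and auditors read the same definitions. The following is a minimal, hypothetical sketch; the model names and control lists are illustrative, not prescribed by SC-39.

```python
# Hypothetical sketch of a Process Isolation Standard encoded as data.
# The hosting models and control names are illustrative examples only.
ISOLATION_STANDARD = {
    "vm": {
        "execution_domain": "VM boundary + guest OS process separation",
        "required_controls": ["hypervisor isolation", "per-service OS users"],
    },
    "container": {
        "execution_domain": "container boundary + enforced kernel controls",
        "required_controls": [
            "no privileged containers",
            "minimal capabilities",
            "seccomp/AppArmor/SELinux profile",
        ],
    },
    "serverless": {
        "execution_domain": "provider runtime boundary + IAM/runtime constraints",
        "required_controls": [
            "per-function identity",
            "documented shared responsibility",
        ],
    },
}

def required_controls(hosting_model: str) -> list[str]:
    """Return the minimum controls for a hosting model, or raise if unmapped."""
    try:
        return ISOLATION_STANDARD[hosting_model]["required_controls"]
    except KeyError:
        raise ValueError(f"hosting model '{hosting_model}' is not in the standard")
```

A structure like this doubles as evidence: the same file that engineers consult can be exported into the audit package.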

2) Inventory process-hosting patterns in your system boundary

You need an accurate view of where processes run:

  • VM fleets, node pools, container clusters
  • Shared hosts (bastions, CI runners, jump boxes)
  • Data plane services vs. control plane services
  • Any legacy or “pet servers” inside the authorization boundary

Deliverable: workload inventory with hosting model and isolation mechanism per workload.

3) Set minimum isolation baselines per hosting model

Write enforceable requirements. Examples of what you typically specify:

  • No shared OS user for unrelated services (prevents lateral access via filesystem and runtime permissions)
  • Least privilege for service accounts (per-process identity; no shared high-privilege identities)
  • Container hardening: disallow privileged containers, restrict Linux capabilities, require read-only root filesystem where feasible, block hostPath mounts except by approved exception, enforce seccomp/AppArmor/SELinux profiles where supported
  • Network segmentation that supports isolation: processes should only communicate over defined ports and identities (this complements execution-domain boundaries)

Deliverable: hardened baseline configurations and policy-as-code rules (where available).
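The container-hardening rules above lend themselves to policy-as-code. Here is a minimal sketch of such a check; the field names mirror a simplified Kubernetes pod spec, but this is an illustration, not a real admission controller.

```python
# Illustrative policy-as-code check for the container-hardening baseline.
# Operates on a simplified dict shaped like a Kubernetes pod spec.
def check_pod_spec(spec: dict) -> list[str]:
    """Return a list of baseline violations found in the pod spec."""
    violations = []
    for c in spec.get("containers", []):
        sc = c.get("securityContext", {})
        if sc.get("privileged", False):
            violations.append(f"{c['name']}: privileged container")
        added_caps = sc.get("capabilities", {}).get("add", [])
        if added_caps:
            violations.append(f"{c['name']}: added capabilities {added_caps}")
        if not sc.get("readOnlyRootFilesystem", False):
            violations.append(f"{c['name']}: writable root filesystem")
    for v in spec.get("volumes", []):
        if "hostPath" in v:
            violations.append(f"volume {v.get('name', '?')}: hostPath mount")
    return violations
```

In practice the same rules would live in an admission controller or CI gate; the value of a small function like this is that its output doubles as audit evidence.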

4) Engineer out cross-process secret sharing

A frequent boundary collapse is secret sprawl:

  • Move secrets to a centralized secret manager
  • Issue per-service credentials
  • Rotate credentials and scope them to the smallest set of resources
  • Avoid putting shared long-lived secrets in environment variables across multiple services unless there is a documented need and compensating controls

Deliverable: secret inventory and per-process/service identity model.
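Secret sprawl is easy to detect once you have the inventory. A sketch, assuming your secret manager's audit data can produce a mapping of secret name to consuming services (the names below are hypothetical):

```python
# Sketch: flag secrets consumed by more than one service, given an inventory
# mapping secret name -> set of consuming services. Assumes this mapping can
# be exported from your secret manager's audit data; names are illustrative.
def find_shared_secrets(inventory: dict[str, set[str]]) -> dict[str, set[str]]:
    """Return the secrets whose consumer set spans multiple services."""
    return {secret: consumers
            for secret, consumers in inventory.items()
            if len(consumers) > 1}
```

Each flagged secret is either a candidate for per-service credentials or a documented exception with compensating controls.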

5) Validate isolation with testing, not assumptions

You need a verification step that proves the boundary is real:

  • Configuration checks: confirm container securityContext, capabilities, privilege settings, sandbox flags, SELinux/AppArmor status
  • Runtime checks: verify processes cannot access each other’s namespaces, filesystems, or memory
  • Adversarial thinking: attempt container breakout paths in a controlled test plan where feasible

Deliverable: test results and continuous compliance checks tied to change management.
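One concrete runtime check on Linux is namespace comparison: two processes that share a namespace share part of their execution context. The sketch below keeps the logic pure so it runs anywhere; on a live host the input values would come from `os.readlink(f"/proc/{pid}/ns/{ns}")` for each namespace type.

```python
# Sketch of a runtime isolation check: two processes share execution context
# if their Linux namespace identifiers match. Values are passed in (rather
# than read from /proc) so the comparison logic is testable on any platform.
NAMESPACES = ("pid", "net", "mnt", "ipc", "uts", "user")

def shared_namespaces(ns_a: dict[str, str], ns_b: dict[str, str]) -> list[str]:
    """Return the namespace types the two processes have in common."""
    return [ns for ns in NAMESPACES
            if ns in ns_a and ns in ns_b and ns_a[ns] == ns_b[ns]]
```

For properly isolated containers you would expect this list to be empty (or to contain only namespaces you deliberately share and have documented).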

6) Put exceptions on rails

Some workloads will require elevated privileges (agents, kernel modules, backup tooling). Treat these as exceptions:

  • Document the business/technical necessity
  • Add compensating controls (stronger host isolation, dedicated nodes/VMs, tighter monitoring)
  • Time-bound the exception and require re-approval on significant change

Deliverable: exception register with approvals and compensating controls.
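The register itself can be as simple as a structured record per exception, so that time-bounding is enforced mechanically rather than by memory. A hypothetical sketch; the fields mirror the requirements above:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical exception-register entry: time-bound, with compensating
# controls and an approver, so expired exceptions surface automatically.
@dataclass
class IsolationException:
    workload: str
    justification: str
    compensating_controls: list[str]
    expires: date
    approved_by: str

    def is_expired(self, today: date) -> bool:
        """True once the exception has passed its expiry date."""
        return today > self.expires
```

A periodic job that flags every entry where `is_expired(...)` is true turns "time-bound the exception" from a policy statement into an operational check.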

Required evidence and artifacts to retain

Auditors typically want to see that isolation is designed, implemented, and repeatable. Keep:

  • Process Isolation Standard mapped to SC-39 1
  • Architecture diagrams showing execution boundaries (VMs, clusters, node pools, control plane vs. data plane)
  • Secure configuration baselines for OS images, Kubernetes/container runtime settings, and any sandbox controls
  • Policy-as-code outputs (CI checks, admission controller policies, infrastructure scan results) showing enforcement
  • Workload inventory with hosting model and isolation approach per service
  • Change records for baseline updates and exceptions
  • Test evidence: hardening validation, runtime checks, and results from internal assessments focused on isolation failures

If you use Daydream to manage third-party risk and due diligence, store these artifacts as control evidence and link them to system components and third parties that operate parts of your stack (managed Kubernetes, CI/CD providers, security agents). That creates a single audit trail for “who runs what” and “how isolation is enforced” across internal and third-party operated layers.

Common exam/audit questions and hangups

Expect questions like:

  • “Define ‘execution domain’ for your system. Where is it enforced?” 1
  • “Show me how you prevent one container/process from gaining host-level privileges.”
  • “Which workloads run with elevated privileges, and what compensating controls exist?”
  • “How do you stop secrets from being shared across processes or namespaces?”
  • “Prove this is continuously enforced, not a one-time build setting.”

Hangups that slow audits:

  • Relying on narrative descriptions without config evidence
  • Saying “Kubernetes isolates containers” without showing your pod security settings and runtime constraints
  • Exceptions that exist in practice but are missing from a formal register

Frequent implementation mistakes (and how to avoid them)

  1. Privileged containers as the default
  • Fix: enforce policy that blocks privileged mode and dangerous capabilities; require exceptions with dedicated isolation.
  2. Shared service accounts across multiple processes
  • Fix: implement per-service identity and scope permissions to the minimum required resources.
  3. Mixing control plane and data plane on the same hosts
  • Fix: separate node pools/VMs for admin/control functions; apply stronger isolation to management components.
  4. Host filesystem mounts that bypass container boundaries
  • Fix: restrict hostPath mounts; use managed volumes with access controls; log and review any required host mounts.
  5. No verification loop
  • Fix: add continuous checks in CI/CD and periodic runtime validation; treat drift as an incident or a change failure.
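The "no verification loop" fix above amounts to drift detection: compare a host's or cluster's observed settings to its approved baseline and fail loudly on any difference. A minimal sketch, with illustrative setting names:

```python
# Sketch of a drift check for the verification loop: compare observed
# configuration against the approved baseline. Setting names are examples.
def detect_drift(baseline: dict, observed: dict) -> dict:
    """Return {setting: (expected, actual)} for every baseline key that drifted."""
    return {k: (v, observed.get(k))
            for k, v in baseline.items()
            if observed.get(k) != v}
```

Wiring this into CI/CD or a scheduled job, and treating a non-empty result as an incident or change failure, is what makes the control "continuously enforced" rather than a one-time build setting.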

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement actions.

From a risk perspective, process isolation failures create direct paths to lateral movement, privilege escalation, and cross-tenant exposure. In FedRAMP environments, those outcomes translate into authorization risk and heightened scrutiny during assessment because the control is foundational to containment. 1

Practical 30/60/90-day execution plan

Use this plan to move from “we think we isolate” to “we can prove it.”

First 30 days (baseline and visibility)

  • Publish a one-page Process Isolation Standard mapped to SC-39. 1
  • Inventory workloads and classify them by hosting model and sensitivity.
  • Identify where isolation is weakest: privileged workloads, shared hosts, shared service accounts, shared secrets.
  • Start an exceptions register immediately, even if incomplete.

By 60 days (enforcement and evidence)

  • Implement policy controls that prevent common boundary breaks (privileged containers, excessive capabilities, unsafe mounts).
  • Create hardened golden images/baselines for hosts and clusters.
  • Separate high-risk workloads onto dedicated infrastructure where needed (for example, control plane tooling).
  • Produce an evidence pack: diagrams, configs, and sample runtime proofs.

By 90 days (validation and steady-state operations)

  • Add continuous compliance checks in CI/CD and deployment gates.
  • Run isolation-focused validation tests and document results.
  • Operationalize exception lifecycle: approval, compensating controls, re-review on change.
  • Centralize evidence collection (including third parties) in a system like Daydream so audits pull from one place.

Frequently Asked Questions

Does “separate execution domain” mean every process must have its own VM?

No. SC-39 requires a separate execution domain for each executing system process, but the mechanism can be OS isolation, containers with enforced runtime controls, or virtualization, as long as the boundary is real and validated. 1

Are containers automatically compliant with the process isolation requirement?

Not automatically. Containers can support isolation, but privileged mode, broad Linux capabilities, unsafe host mounts, and weak secret handling often collapse boundaries and create cross-process risk.

How do I scope this control in a Kubernetes environment?

Define what “execution domain” means for pods/containers in your standard, then enforce it with cluster policies, hardened node configuration, and workload securityContext requirements. Keep evidence of the policies and their enforcement results.

What evidence is most persuasive to an assessor for SC-39?

Configuration baselines plus proof of enforcement. Show hardened runtime settings, policy checks that block unsafe deployments, an exception register for the few elevated workloads, and test results that demonstrate containment.

How should we handle third-party managed components (managed Kubernetes, CI runners, agents)?

Treat them as part of the system boundary decision and document the shared responsibility model in your evidence pack. Keep third-party assurances and your own validation steps together so you can show end-to-end isolation across handoffs.

What’s the fastest way to find isolation gaps?

Start with a targeted review for privileged execution, shared identities, shared secrets, and host-level access patterns. Those issues usually reveal where “separate execution domain” exists in theory but not in operational reality.

Footnotes

  1. NIST Special Publication 800-53 Revision 5

