SI-14: Non-persistence

The SI-14 Non-persistence requirement means you must run defined system components as non-persistent (ephemeral) instances that start from a known-good baseline and are terminated according to your defined triggers, so malware and unauthorized changes do not survive across sessions. Operationalize SI-14 by scoping which components must be ephemeral, enforcing rebuild-on-start and destroy-on-stop in your platform, and retaining evidence that the pattern is enforced.

Key takeaways:

  • Define what is non-persistent (which components) and when it terminates (events/conditions), then implement those rules in your orchestration stack.
  • “Known state” requires a controlled golden image/baseline plus guardrails that prevent drift (patching, config, secrets, logging).
  • Audits fail more from missing evidence and inconsistent scoping than from tooling gaps; design your evidence trail up front.

SI-14 is a build-and-destroy control. Assessors will look for proof that certain system elements do not accumulate changes over time, because those changes can hide persistence mechanisms (malware, backdoors, unauthorized admin tools) and complicate incident response. The control’s wording is short, but implementation touches architecture, operations, and evidence.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat SI-14 like a requirement you can “productize” internally: define the non-persistent population, define the known-good baseline, define termination triggers, enforce the pattern technically, and run it as an operational standard with recurring evidence.

This page focuses on requirement-level execution: what SI-14 expects, who owns it, how to implement it in real environments (cloud, containers, VDI, server fleets), what artifacts to retain, and where audits get stuck. It also gives a practical execution plan you can hand to an infrastructure lead and then test during control walkthroughs.

Regulatory text

Requirement (verbatim): “Implement non-persistent {{ insert: param, si-14_odp.01 }} that are initiated in a known state and terminated {{ insert: param, si-14_odp.02 }}.” 1

Operator meaning: You must choose the applicable system elements (the parameterized object) and implement them so they (1) start in a known state and (2) are terminated per defined conditions. Your job is to remove ambiguity: specify the object (workloads, endpoints, VDI sessions, build agents, containers, etc.) and specify termination logic (on logout, after job completion, on reboot, after detected compromise, after inactivity, at session end, etc.). 2

Plain-English interpretation (what SI-14 really requires)

SI-14 expects an ephemeral-by-design pattern for selected components:

  • Initiated in a known state: every time the component starts, it is created from an approved baseline (golden image, immutable container image, standardized VM template, or equivalent). Any required configuration is applied from controlled code (IaC, config management) and not from ad hoc admin changes.
  • Terminated by rule: the component is destroyed or reset in a predictable way, so it cannot carry forward unauthorized changes. Termination can be event-driven (job ends) or administrative (re-image on reboot), but it must be defined and consistently applied.

Where teams get it wrong: they treat “non-persistent” as a documentation claim. Assessors look for technical enforcement and evidence that drift does not persist.

Who SI-14 applies to (entity + operational context)

SI-14 is typically in scope for:

  • Federal information systems and contractor systems handling federal data that adopt NIST SP 800-53 controls. 2
  • Environments where persistence risk is high or operational rebuild patterns already exist:
    • Virtual desktop infrastructure (VDI) pools
    • Containerized workloads and Kubernetes
    • CI/CD build runners and ephemeral test environments
    • Auto-scaling groups and immutable infrastructure patterns
    • High-risk admin workstations or jump hosts

Third-party angle: if a third party hosts, manages, or operates the components you designate as non-persistent, SI-14 becomes a shared responsibility requirement. Your contracts and oversight should ensure the third party’s platform behavior matches your SI-14 scoping and termination triggers.

What you actually need to do (step-by-step)

1) Scope the “non-persistent” population (make the parameters real)

Create a short scoping statement that answers:

  • Which components must be non-persistent? (examples: “internet-facing web tier,” “CI runners,” “VDI sessions for privileged admins”)
  • Which are explicitly out of scope, and why? (examples: databases, stateful message queues, forensic jump box)

Deliverable: an SI-14 scope table you can show an assessor.

Suggested scope table fields

  • Component type: Kubernetes worker node / CI runner / VDI session
  • System boundary: "Prod Payment Platform"
  • Non-persistent mechanism: immutable image + redeploy
  • Known state source: golden AMI / signed container image
  • Termination trigger: job complete / logout / node rotation
  • Owner: Infrastructure / Platform Engineering
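One way to keep the scope table current is to store it as structured data and check completeness automatically. A minimal sketch, assuming an illustrative field schema (the field names and example entries below are hypothetical, not taken from the control text):

```python
# Sketch: represent the SI-14 scope table as data and flag entries that are
# missing a baseline source, termination trigger, or owner. All field names
# and values here are illustrative assumptions.

REQUIRED_FIELDS = {"component_type", "system_boundary", "mechanism",
                   "known_state_source", "termination_trigger", "owner"}

scope_table = [
    {
        "component_type": "CI runner",
        "system_boundary": "Prod Payment Platform",
        "mechanism": "Ephemeral runner per job",
        "known_state_source": "Signed container image",
        "termination_trigger": "Job complete",
        "owner": "Platform Engineering",
    },
]

def incomplete_entries(table):
    """Return entries missing any required scope field (absent or empty)."""
    return [e for e in table
            if any(not e.get(f) for f in REQUIRED_FIELDS)]

assert incomplete_entries(scope_table) == []
```

A check like this can run in CI against the scope file, so an assessor-ready table never drifts out of date silently.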

2) Define “known state” as a controlled baseline

Your “known state” should be something you can defend:

  • Golden images / templates with controlled change management
  • Immutable container images built in CI with provenance controls
  • Configuration-as-code for runtime config (do not rely on manual changes)
  • Secrets injected at runtime (avoid baking credentials into images)

Also define what cannot be part of known state (for example, local user-created files that persist, unmanaged packages installed manually, local admin tooling that survives reboots).
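"Known state" becomes checkable when each baseline is pinned to a cryptographic digest that is verified before instantiation. A hedged sketch using the standard library; the `approved_baselines` manifest is a stand-in for whatever your registry or change process actually records:

```python
import hashlib

# Sketch: allow instantiation only when the artifact hashes to the approved
# baseline digest. The manifest below is an illustrative assumption, not a
# real registry format.
approved_baselines = {
    "ci-runner": hashlib.sha256(b"known-good image bytes").hexdigest(),
}

def is_known_state(component, artifact_bytes):
    """True only if the artifact matches the approved baseline digest."""
    expected = approved_baselines.get(component)
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    return expected is not None and expected == actual

assert is_known_state("ci-runner", b"known-good image bytes")
assert not is_known_state("ci-runner", b"drifted image bytes")
```

In practice the same idea is delivered by signed container images or pinned AMI IDs; the point is that "known state" is a verifiable artifact, not a claim.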

3) Implement termination rules that actually remove persistence

Pick termination that matches the technology:

  • VDI: non-persistent desktops that reset on logoff; user data redirected to managed storage.
  • Containers/Kubernetes: workloads run from immutable images; pods are replaced rather than patched in place; node groups rotated.
  • VM fleets: immutable infrastructure where patching produces a new image and instances are replaced; restrict in-place changes.
  • CI/CD runners: ephemeral runners created per job and destroyed after job completion; any caching is controlled so it cannot reintroduce persistence.

Write termination triggers down as an operational standard (what event causes destruction/reset, and what happens to logs/artifacts).
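The termination standard can be expressed as a small decision rule: terminate on an explicit lifecycle event, or when a maximum lifetime elapses. A sketch under assumed trigger names and lifetimes (both are illustrative):

```python
from datetime import datetime, timedelta

# Sketch: per-class termination rules. Trigger names and lifetimes are
# illustrative assumptions, not values from the control text.
MAX_LIFETIME = {
    "ci_runner": timedelta(hours=2),
    "vdi_session": timedelta(hours=12),
}

def should_terminate(component_class, started_at, now, event=None):
    """Terminate on an explicit event or when the max lifetime elapses."""
    if event in ("job_complete", "logoff", "compromise_detected"):
        return True
    limit = MAX_LIFETIME.get(component_class)
    return limit is not None and now - started_at >= limit
```

Encoding the triggers this way makes the operational standard testable: the same table that drives automation is the one you show an assessor.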

4) Add guardrails to prevent drift and “hidden persistence”

Non-persistent patterns fail if operators can still create persistence paths. Common guardrails:

  • Access controls: restrict interactive admin access to non-persistent instances; require ticketed break-glass for exceptions.
  • File system controls: prevent writing to locations that survive resets (or ensure they are wiped).
  • Configuration controls: block manual package installs; enforce desired state.
  • Logging: ensure logs are exported off-host before termination so evidence and forensic telemetry survive.
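The logging guardrail in particular benefits from a hard gate: destruction waits for confirmation that logs left the host. A minimal sketch, where `export_confirmations` stands in for your log pipeline's delivery acknowledgements (an assumption, not a real API):

```python
# Sketch: a pre-termination gate that refuses to destroy an instance until
# its off-host log export is acknowledged. The confirmation store is an
# illustrative stand-in for a real log pipeline.
export_confirmations = {"i-0abc": True, "i-0def": False}

def safe_to_terminate(instance_id):
    """Allow destruction only once off-host log export is acknowledged."""
    return export_confirmations.get(instance_id, False)

assert safe_to_terminate("i-0abc")
assert not safe_to_terminate("i-0def")
```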

5) Operationalize: exceptions, break-glass, and monitoring

You need an exceptions path for legitimate cases (debugging, incident response, vendor support):

  • Time-bound exception approvals
  • Compensating controls (extra monitoring, EDR, tighter egress)
  • Explicit reversion steps to return to non-persistent mode
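Time-bound approval is the property that makes exceptions defensible, and it is easy to enforce mechanically. A sketch with an assumed exception-record shape (the fields are illustrative):

```python
from datetime import datetime

# Sketch: an exception is valid only inside its approved window and only if
# an approver is recorded. The record fields are illustrative assumptions.
def exception_active(exc, now):
    """True only for an approved, currently in-window exception."""
    return bool(exc.get("approver")) and exc["start"] <= now < exc["end"]

exc = {"approver": "CISO delegate",
       "start": datetime(2024, 1, 1), "end": datetime(2024, 1, 8)}
assert exception_active(exc, datetime(2024, 1, 3))
assert not exception_active(exc, datetime(2024, 2, 1))
```

Running this check on a schedule surfaces expired exceptions that were never reverted, which is exactly the drift an assessor will probe.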

Monitoring should detect drift signals that indicate persistence:

  • Instances running beyond expected lifecycle
  • Unauthorized writes to protected paths
  • Manual admin sessions on fleets intended to be immutable
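The first signal above (instances outliving their expected lifecycle) can be computed directly from an inventory export. A sketch, assuming an illustrative inventory shape and a 24-hour expected lifetime:

```python
from datetime import datetime, timedelta

# Sketch: flag instances running beyond the expected lifecycle as
# persistence-risk candidates. Inventory fields and the 24h default
# are illustrative assumptions.
def overdue_instances(inventory, now, max_age=timedelta(hours=24)):
    """Return IDs of instances older than the expected lifecycle."""
    return [i["id"] for i in inventory if now - i["launched"] > max_age]

inventory = [
    {"id": "i-fresh", "launched": datetime(2024, 5, 1, 12, 0)},
    {"id": "i-stale", "launched": datetime(2024, 4, 1, 12, 0)},
]
assert overdue_instances(inventory, datetime(2024, 5, 1, 13, 0)) == ["i-stale"]
```

The output doubles as evidence: a recurring report showing zero (or investigated) overdue instances is a strong operating-effectiveness artifact.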

6) Build the audit story and recurring evidence cycle

SI-14 audits are often evidence-driven. Decide what evidence you will produce on a recurring basis:

  • Current scope list
  • Baseline definitions (images, templates, signed artifacts)
  • Platform configs enforcing replace/destroy patterns
  • Termination logs/events proving the pattern operates
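Recurring evidence is easier to sustain if each lifecycle event is normalized into one record format at collection time. A sketch with a hypothetical record schema (the field names are assumptions, not a standard):

```python
import json
from datetime import datetime, timezone

# Sketch: normalize a lifecycle event into an evidence record suitable for
# export to a GRC system. The schema is an illustrative assumption.
def evidence_record(component, baseline_ref, termination_event):
    """Serialize one SI-14 evidence record as JSON."""
    return json.dumps({
        "control": "SI-14",
        "component": component,
        "known_state_source": baseline_ref,
        "termination_event": termination_event,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }, sort_keys=True)

rec = json.loads(evidence_record("ci-runner", "signed-image-ref", "job_complete"))
assert rec["control"] == "SI-14"
```

Emitting these records from the platform itself (rather than assembling screenshots at audit time) keeps the package current between assessments.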

Daydream fit (practical, not theoretical): teams use Daydream to map SI-14 to a named control owner, document the implementation procedure, and schedule recurring evidence pulls so the audit package stays current without rebuilding it every assessment.

Required evidence and artifacts to retain

Keep artifacts that prove both “known state” and “termination”:

Governance artifacts

  • SI-14 control statement (scope + intent)
  • RACI / control owner assignment
  • Exception procedure and approval records

Technical artifacts

  • Golden image build pipeline records (change history, approvals)
  • Image/template inventory and versioning
  • IaC/config management code references and change logs
  • Orchestration policies enforcing replacement (autoscaling/rotation settings, Kubernetes deployment strategy, VDI pool config)
  • Termination evidence (job logs for ephemeral runners, session logoff reset logs, instance termination events)
  • Log export configuration proving logs persist off-host

Assessment-ready artifacts

  • Screenshots or exports of platform settings (where appropriate)
  • Sample lifecycle evidence from representative components
  • A short narrative: “start from baseline → run → export logs → terminate/reset”

Common exam/audit questions and hangups

Expect these in walkthroughs:

  • “What exactly is non-persistent here?” If you cannot list component types and boundaries, you will lose time.
  • “Show me the ‘known state.’” Auditors want the approved baseline source, how it is updated, and who approves changes.
  • “How do you prove termination happens?” Bring lifecycle event logs, orchestration configuration, and a sample trail.
  • “Where do logs go when the instance is destroyed?” If telemetry dies with the instance, your story is weak.
  • “What stops admins from making one-off changes that persist?” You need technical restrictions and monitoring.

Frequent implementation mistakes and how to avoid them

  • Mistake: declaring non-persistence without scoping. Fix: publish a scope table and keep it current as architecture changes.
  • Mistake: “golden image” exists, but servers are patched in place. Fix: require rebuild-and-replace for the scoped components; treat in-place patching as an exception.
  • Mistake: termination exists, but data persists locally. Fix: redirect user data to managed storage; wipe ephemeral disks; validate no persistence paths remain.
  • Mistake: evidence is ad hoc. Fix: define a recurring evidence set and automate collection where possible.
  • Mistake: third party runs the platform, but you have no oversight. Fix: contract for the pattern (ephemeral sessions, reset behavior), and collect independent evidence (reports, configs, attestations).

Risk implications (why operators care)

SI-14 reduces the dwell time advantage attackers get from persistence and reduces configuration drift that undermines patching and hardening. In incident response, non-persistent components simplify containment because you can replace rather than remediate in place. From a compliance standpoint, the highest risk is a “paper control” where architecture and operations contradict the control statement.

Practical 30/60/90-day execution plan

First 30 days (design + scoping)

  • Assign SI-14 control owner and approvers.
  • Build the scope table: in-scope components, out-of-scope rationale, owners.
  • Define “known state” sources for each component class (image/template/artifact).
  • Define termination triggers per component class and document exceptions.

Next 60 days (implement + pilot evidence)

  • Implement/validate technical enforcement for one high-impact in-scope class (example: CI runners or VDI pool).
  • Add guardrails: restrict admin access, enforce immutable builds, ensure log export.
  • Run a pilot evidence collection: collect lifecycle proofs, baseline references, and settings exports.
  • Test exception workflow end-to-end.

Next 90 days (scale + audit readiness)

  • Expand enforcement to remaining scoped components.
  • Automate recurring evidence pulls where feasible (inventory, image versions, termination events).
  • Perform an internal control walkthrough with Infrastructure and Security Ops.
  • Package artifacts in your GRC system (many teams operationalize this with Daydream as the system of record for control procedures and evidence cadence).

Frequently Asked Questions

What counts as “non-persistent” for SI-14?

Non-persistent means the component does not retain unauthorized or unapproved changes across restarts or sessions. In practice, it is rebuilt from an approved baseline and then destroyed/reset according to defined triggers. 1

Does SI-14 require every system to be ephemeral?

No. SI-14 is scoped by your defined parameters and system design. You should apply it where non-persistence reduces risk and is technically feasible, and document exclusions with rationale.

How do we prove “initiated in a known state”?

Show the baseline source (golden image/template or immutable image tag), the approval/change process for that baseline, and evidence that deployments instantiate only from that baseline. Assessors expect traceability from baseline definition to runtime instantiation.

What evidence best proves “terminated”?

Provide platform lifecycle events (instance termination logs, job completion destruction records, VDI reset logs) plus configuration showing the termination rule is enforced. Pair configuration evidence with an example event trail from a representative workload.

Can we keep user data if desktops are non-persistent?

Yes, if user data is stored outside the non-persistent component (redirected folders, managed storage, enterprise file services) and the session image resets cleanly. Document where data lives and how you prevent local persistence.

How does SI-14 interact with third parties that host our workloads?

Treat it as shared responsibility: you define which components must be non-persistent, and the third party must provide configuration and evidence that sessions/workloads start from a baseline and terminate by rule. Put those expectations in contracts and validate through periodic reviews.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream