SI-21: Information Refresh

To meet the SI-21 (Information Refresh) requirement, you must define what “information” needs to stay current (for example, cached threat intel, directory data, vulnerability feeds, allowlists/blocklists), set a refresh trigger (scheduled or on-demand), and delete that information when it’s no longer needed. Document ownership and refresh logic, and retain evidence that refresh and deletion happen as designed.

Key takeaways:

  • Treat SI-21 as a control over stale security-relevant data that can drive wrong decisions in detection, response, and access control.
  • You need explicit refresh rules (what, how often/when, where, and by whom) plus deletion/retention rules for the same data.
  • Auditors look for repeatable operation and evidence: logs, job runs, configurations, and data lifecycle records.

SI-21 sits in the NIST SP 800-53 System and Information Integrity (SI) family and targets a common operational failure: security tooling and system decisions based on information that is old, incomplete, or retained longer than necessary. If your environment relies on cached or periodically pulled data (threat indicators, asset inventory snapshots, patch/vuln metadata, DNS reputations, certificate revocation status, identity attributes, EDR signatures, or internal “known bad” lists), SI-21 requires that you keep it refreshed on a defined cadence or generate it on demand. It also requires you to delete it when it’s no longer needed.

For a CCO, GRC lead, or Compliance Officer, the fastest path to operationalizing SI-21 is to: (1) enumerate the “information” in scope, (2) assign a control owner per information type, (3) implement refresh triggers and monitoring, and (4) prove it with clean, recurring artifacts. The highest-risk gap is not technical complexity; it’s ambiguity about what counts as “information,” inconsistent refresh across teams, and weak evidence that refresh and deletion are actually happening.

Regulatory text

NIST SI-21 states: “Refresh {{ insert: param, si-21_odp.01 }} at {{ insert: param, si-21_odp.02 }} or generate the information on demand and delete the information when no longer needed.” 1

Operator interpretation:

  • You must identify specific categories of information that need to stay current (the organization defines the parameterized items).
  • You must define the refresh condition (time-based frequency, event-based trigger, or on-demand generation).
  • You must delete the information when it’s no longer needed (tie this to retention, data minimization, and operational purpose).

NIST publishes SI-21 in SP 800-53 Rev. 5 as a baseline security control for federal systems and many federal-adjacent environments. 2

Plain-English interpretation of the requirement

SI-21 is a data freshness and lifecycle control for security-relevant information that systems and operators depend on. If data goes stale, you can:

  • miss detections (out-of-date indicators or signatures),
  • block the wrong things (stale allowlists/blocklists),
  • make incorrect access decisions (stale identity attributes, group membership, device posture),
  • waste time during incidents (stale asset inventory, stale ownership/contacts).

SI-21 also prevents “security data hoarding” by requiring deletion when the information no longer serves a defined purpose.

Who it applies to (entity and operational context)

Entity scope

  • Federal information systems implementing NIST SP 800-53 controls. 2
  • Contractor systems handling federal data where NIST 800-53 is required by contract, ATO boundary, or control inheritance model. 2

Operational scope (where SI-21 shows up in real environments)

SI-21 typically applies where you have any of the following:

  • Scheduled “pull” jobs (threat intel feeds, vulnerability plugin updates, domain reputation downloads).
  • Cached query results (directory lookups, device posture assertions, authorization attributes).
  • Replicated datasets (SIEM enrichment tables, CMDB exports, SOAR lookup tables).
  • Locally stored “reference data” used for security decisions (known-bad IP lists, phishing URL lists, certificate trust stores, YARA rules, EDR content, WAF rulesets).

If you have multiple tenants or enclaves (separate business units, subsidiaries, or segmented networks), treat SI-21 as per-boundary: information refresh must be consistent within each assessed system boundary.

What you actually need to do (step-by-step)

Step 1: Define “information in scope” (make it concrete)

Create an SI-21 Information Register. For each entry, capture:

  • Information name (example: “Threat intel indicators for SIEM enrichment”)
  • Where it lives (system, database, bucket, endpoint, tool)
  • What it is used for (detection, prevention, response, triage, access decisioning)
  • Source of truth (vendor feed, internal system, third party, manual curation)
  • Downstream dependencies (SIEM rules, SOAR playbooks, firewall objects)

Practical scoping tip: start with information that changes frequently and influences automated actions (block, quarantine, deny access, alert triage).
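As a sketch, one register entry can be captured as structured data so the register stays queryable rather than buried in a document. The field names and example values below are illustrative, not prescribed by SI-21:

```python
from dataclasses import dataclass, field

# Hypothetical shape for one SI-21 Information Register entry.
# Field names mirror the capture list above; none are mandated by NIST.
@dataclass
class RegisterEntry:
    name: str                 # e.g. "Threat intel indicators for SIEM enrichment"
    location: str             # system, database, bucket, endpoint, or tool
    purpose: str              # detection, prevention, response, triage, access decisioning
    source_of_truth: str      # vendor feed, internal system, third party, manual curation
    downstream: list[str] = field(default_factory=list)  # SIEM rules, SOAR playbooks, firewall objects

# Example entry (values are invented for illustration)
entry = RegisterEntry(
    name="Threat intel indicators for SIEM enrichment",
    location="siem.enrichment.ti_indicators",
    purpose="detection",
    source_of_truth="vendor feed",
    downstream=["SIEM correlation rules", "SOAR enrichment playbook"],
)
```

Keeping the register as data makes it easy to diff between assessment cycles and to export as evidence.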

Step 2: Set the refresh method and trigger (scheduled vs on-demand)

For each information type, choose one:

  • Scheduled refresh: define the refresh interval and the mechanism (job, pipeline, agent update, API sync).
  • On-demand generation: define what “on demand” means (runtime query, just-in-time fetch) and how you prevent caching from becoming stale.
  • Hybrid: scheduled baseline plus on-demand validation for high-impact decisions.

Write it down as an operational rule: “System X refreshes dataset Y via method Z; failures alert team A through workflow B.”
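The operational rule can also live as data rather than prose, so it renders consistently and can be checked programmatically. Dataset names, intervals, and alert targets below are hypothetical:

```python
# Illustrative refresh-rule table; every key and value here is an invented
# example, not a required schema.
refresh_rules = {
    "ti_indicators": {
        "method": "scheduled",          # scheduled | on_demand | hybrid
        "interval_hours": 4,
        "mechanism": "API sync job",
        "on_failure_alert": "secops-oncall",
    },
    "device_posture": {
        "method": "on_demand",
        "mechanism": "runtime query, no cache",
        "on_failure_alert": "iam-team",
    },
}

def rule_sentence(dataset: str) -> str:
    """Render one rule as the operational statement recommended above."""
    r = refresh_rules[dataset]
    return (f"Dataset '{dataset}' refreshes via {r['mechanism']} "
            f"({r['method']}); failures alert {r['on_failure_alert']}.")
```

A rendered rule doubles as design evidence: the same table drives both the runbook text and the monitoring configuration.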

Step 3: Define “freshness” acceptance criteria

Auditors will ask how you know data is “refreshed,” not merely “job ran.” For each dataset, define at least:

  • What a successful refresh means (new version ingested, checksum updated, last-updated timestamp advanced, record count sanity check).
  • What “stale” means (time since last successful refresh exceeds defined threshold; last-updated value missing).
  • What happens when stale is detected (block automation, fail open/closed, escalate, revert to safe defaults).

Step 4: Implement monitoring and failure handling

Build operational safeguards:

  • Alert on missed refresh, partial refresh, or failed refresh.
  • Track refresh success rates in the same monitoring stack you use for other controls.
  • Define manual backstop steps for when automated refresh fails (who runs it, how to validate, how to document).
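The failure-handling safeguards above can be sketched as a small decision function; the statuses, messages, and escalation behavior are hypothetical examples:

```python
def handle_refresh_result(dataset: str, status: str,
                          records_ingested: int) -> list[str]:
    """Return the follow-up actions for one refresh run (alerting, ticketing)."""
    actions = []
    if status != "success":
        actions.append(f"alert: refresh of {dataset} {status}")
        actions.append(f"open ticket for {dataset}")
    elif records_ingested == 0:
        # "Job ran" is not "data refreshed": a zero-record success is partial.
        actions.append(f"alert: partial refresh of {dataset} (0 records)")
    return actions
```

Note the partial-refresh branch: it encodes the auditor's distinction between the job running and the dataset actually updating.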

Step 5: Implement deletion rules (“delete when no longer needed”)

For each information type, document:

  • Purpose-bound retention rule (retain only as long as needed for that purpose).
  • Deletion mechanism (TTL, lifecycle policy, scheduled purge job, archival then deletion).
  • Exceptions (legal hold, active incident, contract requirement). Keep exceptions explicit and time-bound.

Deletion is part of SI-21. Treat it as a lifecycle control tied to your retention schedule and system configuration, not a vague policy statement.
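A purpose-bound TTL purge might look like the following sketch; the record shape, the 90-day TTL, and the legal-hold flag are all illustrative assumptions:

```python
from datetime import datetime, timedelta, timezone

def select_for_deletion(records: list[dict], ttl: timedelta,
                        now: datetime) -> list[dict]:
    """Select records past TTL, honoring explicit, time-bound exceptions."""
    to_delete = []
    for rec in records:
        if rec.get("legal_hold"):
            continue  # explicit exception: retain, but review and time-bound it
        if now - rec["created"] > ttl:
            to_delete.append(rec)
    return to_delete

# Example run with invented records and a 90-day retention rule
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
records = [
    {"id": 1, "created": now - timedelta(days=100)},                      # past TTL
    {"id": 2, "created": now - timedelta(days=100), "legal_hold": True},  # held
    {"id": 3, "created": now - timedelta(days=10)},                       # fresh
]
```

Whatever the mechanism (TTL, lifecycle policy, purge job), the exception path should be explicit in code or configuration, so the export itself evidences how holds are handled.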

Step 6: Assign ownership and build a repeatable evidence package

Map each information type to:

  • Control owner (role, not person)
  • Operator(s)
  • Backup owner
  • Evidence producer (who exports logs, screenshots, configs)
  • Evidence frequency (how often you capture and store the artifacts)

This is where tools like Daydream fit naturally: you can map SI-21 to an owner, link the implementation procedure, and schedule recurring evidence collection so refresh and deletion don’t become “tribal knowledge.”

Required evidence and artifacts to retain

Store evidence that proves design and operation. A strong SI-21 packet usually includes:

Design evidence (what is supposed to happen)

  • SI-21 Information Register (list of in-scope information)
  • Data flow diagram or architecture notes for each major dataset (source → processing → storage → consumers)
  • Documented refresh rules (cadence/trigger, method, acceptance criteria)
  • Retention/deletion standard mapped to each dataset

Operational evidence (what actually happened)

  • Job run logs (scheduler history, pipeline runs, agent update logs)
  • “Last refreshed” system records (timestamps, version metadata, feed update receipts)
  • Monitoring alerts and tickets for failed refreshes (including closure notes)
  • Configuration exports (lifecycle policies, TTL settings, cron/scheduler configs)
  • Sample validation results (spot checks showing old vs new version replaced)

Evidence quality rule: favor machine-generated records over screenshots. Screenshots are fine as supplements, but they age poorly.
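One way to keep evidence machine-generated is to emit a structured record per refresh run; the field names below illustrate what a packet might capture and are not a required format:

```python
import json
from datetime import datetime, timezone

def evidence_record(dataset: str, version_before: str, version_after: str,
                    checksum: str, job_id: str) -> str:
    """Emit one auditor-friendly JSON record for a completed refresh run."""
    record = {
        "dataset": dataset,
        "refreshed_at": datetime.now(timezone.utc).isoformat(),
        "version_before": version_before,
        "version_after": version_after,  # proves the dataset changed, not just that the job ran
        "checksum": checksum,
        "job_id": job_id,
    }
    return json.dumps(record, indent=2)
```

Records like this can be written alongside job logs and collected on the evidence schedule, replacing screenshots entirely.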

Common exam/audit questions and hangups

Expect these questions from assessors working against NIST SP 800-53:

  1. What information is covered by SI-21 in your system boundary?
    Hangup: teams say “threat intel” but cannot list datasets, locations, or dependencies.

  2. How did you choose refresh frequency vs on-demand generation?
    Hangup: no rationale; refresh schedules exist only because “that’s the default.”

  3. How do you detect and respond to refresh failures?
    Hangup: jobs fail silently; no alerting; no ticket trail.

  4. Show me evidence that refresh happened and that stale data is removed/overwritten.
    Hangup: evidence proves the job ran, not that the dataset updated.

  5. How do you delete the information when no longer needed?
    Hangup: retention is a policy statement with no system enforcement.

Frequent implementation mistakes and how to avoid them

  • Mistake: defining “information” too narrowly (only threat feeds). Why it fails SI-21: the control is broader than threat intel. Instead: include identity attributes, enrichment tables, allowlists/blocklists, vulnerability metadata, and CMDB exports wherever they drive security decisions.
  • Mistake: treating “refresh” as “sync attempted.” Why it fails SI-21: an attempt is not a success. Instead: define success criteria and capture proof (version change, checksum, timestamp advance).
  • Mistake: no defined stale-data behavior. Why it fails SI-21: systems keep acting on stale data. Instead: add fail-safe behavior such as alerting, blocking automation, or degrading safely with documented rationale.
  • Mistake: deletion handled by “we clean it up sometimes.” Why it fails SI-21: not measurable or repeatable. Instead: implement TTL/lifecycle policies and keep configuration exports as evidence.
  • Mistake: no single owner. Why it fails SI-21: everyone assumes someone else manages refresh. Instead: assign an owner per dataset and publish the register.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for SI-21. Practically, SI-21 gaps tend to surface during ATO assessments, continuous monitoring reviews, or incident post-mortems when stale reference data contributed to missed or incorrect decisions. Your risk statement should be operational: stale security data can produce false negatives, false positives, or inappropriate access decisions, and excessive retention can expand breach impact and discovery scope.

Practical 30/60/90-day execution plan

First 30 days (stabilize scope and ownership)

  • Build the SI-21 Information Register for the assessed boundary.
  • Assign owners and backups for each information type.
  • Document refresh triggers and deletion rules at a minimum “policy-to-procedure” level.
  • Identify where evidence will come from (logs, configs, monitoring dashboards, ticketing system).

By 60 days (implement measurable operation)

  • Implement or tighten scheduled refresh/on-demand logic for high-impact datasets.
  • Add monitoring and alerting for refresh failures and staleness conditions.
  • Implement deletion mechanisms (TTL/lifecycle policies/purge jobs) for datasets without enforcement.
  • Run a tabletop test: simulate refresh failure and confirm alert → ticket → remediation → evidence captured.

By 90 days (audit-ready and continuous)

  • Convert procedures into repeatable runbooks with named systems and commands.
  • Establish recurring evidence capture and storage (monthly/quarterly packets, depending on your assessment cycle).
  • Add QA checks: sampling to confirm data freshness and deletion are working.
  • Feed findings into continuous improvement (exceptions, compensating controls, boundary changes).

Frequently Asked Questions

What counts as “information” for SI-21 in practice?

Treat it as security-relevant reference data that changes over time and influences decisions: indicators, signatures, allowlists/blocklists, identity attributes, enrichment tables, vulnerability content, and inventory exports used for security workflows.

Can we satisfy SI-21 with on-demand queries only?

Yes, if you genuinely generate the information on demand and control caching so results do not become stale. You still need evidence that on-demand generation is the default behavior and that cached copies are deleted when no longer needed. 1

How do we prove “refresh” to an auditor?

Show machine-generated evidence that the dataset changed as intended, such as last-updated timestamps, version identifiers, checksums, ingestion receipts, and job logs tied to the dataset location. Pair that with alerts/tickets for failures.

Does SI-21 require a specific refresh frequency?

No. SI-21 is parameterized; you define what to refresh and when based on system needs and risk. Document the rationale and ensure monitoring detects missed refreshes. 1

What does “delete when no longer needed” mean if we have retention obligations?

You can retain information when there is a defined need (legal hold, contractual requirement, incident investigation), but you should still implement purpose-bound retention rules and deletion for everything else. Document exceptions and who approves them.

How should a GRC team operationalize SI-21 without owning the tools?

Own the register, ownership mapping, and evidence plan. Engineering or SecOps owns the jobs and configurations. Use a workflow (for example, Daydream) to assign control owners, standardize runbooks, and collect recurring artifacts on schedule.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
