SI-7(1): Integrity Checks

To meet the SI-7(1) Integrity Checks requirement, you must implement repeatable, evidence-backed integrity checks over the software, firmware, and/or information elements you designate as integrity-critical, then act on failures. Operationalize this by defining scope, selecting technical methods (hashing, signature verification, file integrity monitoring), scheduling checks, alerting on failures, and retaining results. 1

Key takeaways:

  • Define “what gets checked” (integrity-critical objects) and “how often,” then automate wherever possible. 2
  • Prove operation with logs, baselines, and tickets that show detection, triage, and remediation. 3
  • Assign a control owner and recurring evidence set so SI-7(1) stays audit-ready through change. 2

SI-7 is the System and Information Integrity family’s core control for protecting systems from unauthorized changes; SI-7(1) adds a direct operational expectation: perform integrity checks on defined targets. The hard part in audits is rarely the tooling. It’s scope clarity, consistent execution, and evidence that ties results to response.

For most organizations, integrity checking becomes real when you answer three questions in writing: (1) what assets and files are “integrity-relevant” for mission, safety, regulated data, or security boundaries; (2) what method validates integrity (cryptographic signatures, hashes, file integrity monitoring, secure boot attestations, database constraints, etc.); and (3) what happens when an integrity check fails (alert routing, containment, restoration, and lessons learned).

This page is written for a Compliance Officer, CCO, or GRC lead who needs to implement SI-7(1) quickly and defensibly. It gives you requirement-level interpretation, a step-by-step build, evidence to retain, audit traps, and an execution plan you can hand to engineering and still own as a control. 3

Regulatory text

Control statement (excerpt): “Perform an integrity check of {{ insert: param, si-7.1_prm_1 }} {{ insert: param, si-7.1_prm_2 }}.” 2

Operator meaning: NIST expects you to (a) identify the object(s) that require integrity validation (the parameters in the catalog are organization-defined), and (b) actually perform integrity checks on those objects on a defined cadence or trigger. “Perform” implies the check runs in production conditions, not just in a design document. 1

Plain-English interpretation (what SI-7(1) requires)

You need a controlled way to detect unauthorized or unexpected changes to integrity-critical components (code, configurations, firmware, images, packages, system files, sensitive datasets, or security tooling). SI-7(1) is satisfied when:

  • Your scope is explicit (which components are covered and why).
  • Your integrity method is defined (hash/signature/attestation/FIM/DB checks).
  • Checks run on a schedule or event trigger you define.
  • Failures create actionable alerts and tickets, with response evidence.
  • You can show auditors repeatable outputs over time. 3
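
As a concrete illustration, the check itself can be as simple as comparing current file hashes against a stored known-good baseline. The Python sketch below shows the core mechanism; the baseline format (a path-to-digest mapping) is illustrative, not a prescribed layout:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large binaries don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def check_integrity(baseline: dict[str, str]) -> list[str]:
    """Compare current hashes to the known-good baseline.

    Returns a list of findings (missing or modified files);
    an empty list means every object matched its baseline hash.
    """
    findings = []
    for path_str, expected in baseline.items():
        path = Path(path_str)
        if not path.exists():
            findings.append(f"MISSING: {path_str}")
        elif sha256_of(path) != expected:
            findings.append(f"MODIFIED: {path_str}")
    return findings
```

In practice the baseline would be generated from a trusted build and stored where the checked host cannot rewrite it, and the findings list would feed your alerting path rather than a return value.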

Who it applies to (entity and operational context)

Entities: Federal information systems and contractors operating systems that handle federal data, including cloud-hosted environments and managed services where you operate the system boundary. 3

Operational contexts where auditors focus:

  • Golden images and build pipelines (tampering risk between build and deploy).
  • Privileged paths (admin tools, authentication components, endpoint security agents).
  • Configuration-as-code and critical configuration stores (drift and unauthorized edits).
  • Firmware and boot chains (where applicable).
  • Data integrity for regulated/high-impact datasets (where integrity loss is mission-impacting). 3

If a third party operates a layer (SaaS, managed endpoint, managed database), SI-7(1) still lands on you to confirm integrity checks exist in that layer or to implement compensating checks at your boundary (for example, verifying signed artifacts before deployment). 3
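
A compensating boundary check of that kind can be sketched as a deploy-time digest verification. The pinned digest is assumed to come from a channel you trust (for example, a value committed to your deploy config), not from the same path that delivered the artifact:

```python
import hashlib
from pathlib import Path

def verify_artifact(artifact: Path, expected_sha256: str) -> bool:
    """Recompute the artifact's digest and compare it to the pinned value.

    The pinned digest must come from a trusted source you control;
    comparing against a digest shipped alongside the artifact proves nothing.
    """
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected_sha256

# Deploy gate (illustrative): refuse to proceed on mismatch, e.g.
#   if not verify_artifact(artifact_path, pinned_digest):
#       raise SystemExit("integrity check failed: artifact digest mismatch")
```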

What you actually need to do (step-by-step)

Use the steps below as your implementation runbook. Keep each step tied to an artifact you can hand to an assessor.

1) Assign ownership and define the control boundary

  • Name a control owner (often Security Engineering, Platform Engineering, or IT Ops).
  • Document the system boundary: environments, accounts/subscriptions, endpoints, and CI/CD where integrity checks must run.
  • Decide which teams must execute parts of the control (DevOps, IAM, endpoint team). 3

Practical tip: Put SI-7(1) on the same ownership map as your change management and incident response owners; integrity check failures become incidents fast.

2) Define “integrity-critical” objects (your parameter values)

Create an inventory list with rationale. Common categories:

  • OS/system binaries and critical libraries
  • Security tools and agents (EDR, logging forwarders)
  • Infrastructure images (VM/container base images)
  • Application release artifacts (signed packages, SBOM-relevant outputs if you maintain them)
  • Key configuration files (auth, network policy, encryption, logging)
  • Firmware/boot components (where in scope)
  • Sensitive datasets where integrity impacts decisions or reporting 2

Deliverable: Integrity Check Scope Register (table) with columns:

  • Object/category
  • Location/path/repo
  • Environment(s)
  • Integrity method
  • Check trigger (schedule, deploy-time, boot-time)
  • Owner
  • Alert destination
  • Exception process
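
The register works best as structured data, so you can machine-check it for completeness rather than relying on manual review. A minimal sketch, with illustrative row values and the columns listed above as field names:

```python
# One row of the Integrity Check Scope Register as structured data.
# Values are illustrative examples, not prescribed entries.
SCOPE_REGISTER = [
    {
        "object": "EDR agent binaries",
        "location": "/opt/edr/bin",
        "environments": ["prod"],
        "method": "FIM (hash baseline)",
        "trigger": "daily scheduled task",
        "owner": "endpoint-team",
        "alert_destination": "siem:integrity-alerts",
        "exception_process": "GRC exception register, 90-day expiry",
    },
]

REQUIRED_FIELDS = {
    "object", "location", "environments", "method",
    "trigger", "owner", "alert_destination", "exception_process",
}

def validate_register(register: list[dict]) -> list[str]:
    """Flag rows missing any required column so the register stays testable."""
    problems = []
    for i, row in enumerate(register):
        missing = REQUIRED_FIELDS - row.keys()
        if missing:
            problems.append(f"row {i}: missing {sorted(missing)}")
    return problems
```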

3) Choose integrity methods that match the object and threat

Pick methods you can evidence. Examples:

  • Cryptographic signature verification for release artifacts and packages.
  • Hash baselines for static files where expected changes are controlled.
  • File Integrity Monitoring (FIM) for servers/endpoints where drift risk exists.
  • Image provenance checks for containers/VMs (verify digests before deploy).
  • Configuration drift detection for policy/config stores (detect out-of-band changes).
  • Database integrity constraints and validation jobs for datasets where integrity matters. 3

Selection criteria:

  • Can the method detect unauthorized change within your required time?
  • Can you reduce false positives by aligning to approved change windows?
  • Can you export logs and retain them?

4) Define cadence and triggers (what “perform” means in your shop)

Document when checks run:

  • Event-driven: on build, on deploy, on boot, on privileged change, on package install.
  • Scheduled: periodic checks for drift (servers/endpoints/config repos). 3

Avoid vague language like “regularly.” Write: “on every deployment to production,” “at instance startup,” or “daily scheduled task,” then implement it.
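
Written triggers can live as explicit configuration instead of prose. The categories and times below are examples, not requirements; the point is that every object class names an event or a schedule, never "regularly":

```python
# Explicit trigger definitions per object category (illustrative values).
CHECK_TRIGGERS = {
    "release artifacts":   {"event": "on_deploy_to_prod"},
    "golden images":       {"event": "on_build"},
    "server system files": {"schedule": "daily", "time_utc": "02:00"},
    "endpoint agents":     {"schedule": "daily", "time_utc": "03:00"},
}

def is_well_defined(trigger: dict) -> bool:
    """A trigger is testable only if it names a concrete event or schedule."""
    return "event" in trigger or "schedule" in trigger
```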

5) Implement alerting and response workflow for failures

Integrity checks without response are audit bait. Minimum workflow:

  • Alert routes to a monitored queue (SIEM, ticketing, on-call).
  • Triage steps: validate change authorization, check recent change records, confirm scope.
  • Containment: isolate host, block deployment, roll back to known-good artifact, or revoke credentials if tampering suspected.
  • Recovery: restore baseline, redeploy signed artifact, rebuild image from trusted pipeline.
  • Post-incident: update baselines and add detection tuning. 3

Deliverable: Integrity Check Failure Playbook with decision points and escalation.
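
The triage step of the workflow above can be sketched as a comparison against approved change windows. The record shape here (`path`, `start`, `end`, `ticket`) is a hypothetical stand-in for whatever your change-management system exports:

```python
from datetime import datetime, timezone

def triage_alert(changed_path: str, change_windows: list[dict]) -> str:
    """Classify an integrity alert against approved change records.

    A change is treated as authorized only when an approved window
    covers both the specific path and the current time; everything
    else escalates to the incident path.
    """
    now = datetime.now(timezone.utc)
    for w in change_windows:
        if w["path"] == changed_path and w["start"] <= now <= w["end"]:
            return f"AUTHORIZED: matches change {w['ticket']}"
    return "UNAUTHORIZED: open incident ticket and contain"
```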

6) Baseline management and change alignment

Your integrity checks must coexist with legitimate change:

  • Establish “known-good” baselines from trusted builds or approved configurations.
  • Tie baseline updates to change control: a baseline update requires a change ticket or release record.
  • Prevent engineers from “fixing” alerts by disabling checks; require an exception process with expiry. 3
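
The change-control tie-in can be enforced in code: refuse any baseline update that arrives without a change record attached. A minimal sketch:

```python
from typing import Optional

def update_baseline(baseline: dict[str, str], path: str, new_hash: str,
                    change_ticket: Optional[str]) -> dict[str, str]:
    """Accept a baseline change only when a change record is attached.

    Raising instead of silently updating blocks the "bless the
    tampered file" failure mode described above. Returns a new
    mapping so the prior baseline stays intact for evidence.
    """
    if not change_ticket:
        raise PermissionError(
            f"baseline update for {path} requires a change ticket")
    updated = dict(baseline)
    updated[path] = new_hash
    return updated
```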

7) Operationalize evidence collection (so audits are low-drama)

Set recurring evidence capture:

  • Export integrity check logs/reports to a central store.
  • Create an evidence packet per system and per period (whatever your audit cycle requires).
  • Review exceptions and failures in a recurring control meeting (or as part of security operations review). 2
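
Per-run evidence capture can be as simple as writing a time-stamped report for every check run. The JSON layout below is illustrative, not a prescribed format; the point is that each run leaves a durable, dated artifact in the central store:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def write_evidence_packet(system: str, results: list[dict],
                          out_dir: Path) -> Path:
    """Write a time-stamped JSON report so each check run leaves an artifact.

    `results` is assumed to be the per-object outcomes of one run,
    e.g. [{"object": "/opt/edr/bin/agent", "status": "pass"}].
    """
    ts = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    packet = {"system": system, "generated_at": ts, "results": results}
    out = out_dir / f"{system}-integrity-{ts}.json"
    out.write_text(json.dumps(packet, indent=2))
    return out
```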

Where Daydream fits: Many teams fail SI-7(1) on evidence hygiene, not technical controls. Daydream helps you map SI-7(1) to a control owner, a written procedure, and a recurring evidence set so you can demonstrate operation without reconstructing history during an assessment. 2

Required evidence and artifacts to retain

Keep evidence that proves design and operation:

Design / governance

  • Control narrative for SI-7(1) (scope, method, cadence, triggers) 3
  • Integrity Check Scope Register (inventory and parameter values) 2
  • Integrity Check Failure Playbook (triage/contain/recover) 3
  • Exception register with approvals and expirations 3

Operational / technical

  • Tool configuration exports (FIM policies, signature verification settings, CI/CD gates)
  • Sample reports showing checks ran successfully
  • Alert logs for failures (even if benign) plus linked tickets
  • Change tickets showing baseline updates were authorized
  • Access controls for who can modify baselines/configs (supporting context) 3

Common exam/audit questions and hangups

Auditors tend to ask:

  • “What exactly are you integrity-checking?” Provide the scope register with rationale. 2
  • “Show me it ran.” Provide time-stamped logs/reports for multiple periods. 3
  • “What happens on failure?” Show alerts, tickets, and the playbook. 3
  • “How do you handle approved changes without constant false positives?” Show baseline change procedure tied to change control. 3
  • “What about cloud/SaaS layers?” Show boundary decisions and third-party assurances or compensating checks at deploy time. 3

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Integrity scope is “all systems” with no object list. Why it fails: un-testable and impossible to evidence. Fix: publish an integrity-critical list with owners and locations. 2
  • Mistake: Checks run, but nobody monitors results. Why it fails: “perform” implies detection plus an action path. Fix: route to monitored alerting and require tickets for failures. 3
  • Mistake: Baselines updated ad hoc. Why it fails: attackers can “bless” tampered files. Fix: require approvals and tie baseline updates to change records. 3
  • Mistake: Integrity checks exist only in a policy document. Why it fails: assessors will ask for run evidence. Fix: retain logs/reports and samples across periods. 3
  • Mistake: Overreliance on a third party’s statement. Why it fails: boundary responsibility stays with you. Fix: obtain evidence (reports/attestations) or implement boundary checks before deploy. 3

Enforcement context and risk implications

No public enforcement cases were provided for this requirement in the source catalog, so this page does not cite specific actions. Practically, SI-7(1) gaps increase the likelihood that unauthorized changes persist undetected, which can escalate into reportable incidents, service outages, and audit findings that expand assessor sampling across the SI family. 3

A practical 30/60/90-day execution plan

Use this as an operator plan. Adjust sequencing to your environment, but keep the deliverables.

First 30 days (foundation and scope)

  • Appoint SI-7(1) control owner and backups. 3
  • Draft Integrity Check Scope Register for highest-risk systems first (auth, logging, build/deploy, endpoints). 2
  • Decide approved integrity methods per object category; document in the control narrative. 3
  • Define failure workflow and escalation path; publish the playbook. 3
  • Set evidence retention approach (where logs live, who exports them, how often). 3

Days 31–60 (implement and integrate)

  • Implement integrity checks for the scoped objects (CI/CD verification, FIM policies, image digest verification, drift detection). 3
  • Integrate alerts into your ticketing/on-call process; test with a controlled change. 3
  • Implement baseline update controls tied to change management (approval requirement, record linkage). 3
  • Start producing recurring evidence packets (reports + screenshots/exports + tickets). 3

Days 61–90 (prove operation and harden)

  • Run an internal audit-style walkthrough: pick a system, trace integrity scope → check execution → a failure example → ticket closure. 3
  • Tune noise and tighten scope gaps (add missing critical paths; reduce false positives by aligning with change windows). 3
  • Formalize exceptions (expiry, review cadence, compensating controls). 3
  • Use Daydream to keep the control mapped to owner, procedure, and recurring evidence so the packet stays current through org and tooling changes. 2

Frequently Asked Questions

What counts as an “integrity check” for SI-7(1)?

Any defined mechanism that detects unauthorized or unexpected modification of integrity-critical objects, such as signature verification, hashing against a baseline, or file integrity monitoring. You must be able to show it runs and produces auditable results. 3

Do we have to integrity-check every file on every system?

No. SI-7(1) is parameterized; you define the objects that are integrity-critical and document that selection. Auditors will test whether your scope matches your risk and boundary claims. 2

How do we handle frequent application deployments without constant integrity alerts?

Treat deployment artifacts as the unit of integrity, then verify signatures/digests at build and deploy time. Update baselines only through approved release/change records. 3

If a cloud/SaaS provider says they do integrity monitoring, is that enough?

It can support your control, but you still need boundary clarity and evidence. Keep the provider’s documentation or reports, and add compensating checks you control where feasible (for example, verifying signed artifacts before deployment). 3

What evidence is strongest for auditors?

Time-stamped integrity check outputs plus a traced example of detection to response (alert → ticket → remediation). Pair that with your scope register and baseline/change procedure. 3

Who should own SI-7(1) in a modern DevOps org?

Put day-to-day operation with the team running the platforms (endpoint, infrastructure, CI/CD), and keep accountability with a named control owner in security or GRC. The key is one owner who can produce evidence on demand. 3

Footnotes

  1. NIST SP 800-53 Rev. 5; NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5 OSCAL JSON

  3. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream