SI-7(6): Cryptographic Protection
SI-7(6) requires you to use cryptography to detect unauthorized changes to software, firmware, and information by making tampering evident (for example, via digital signatures, authenticated hashes, or signed updates). To operationalize it, define what “integrity-protected” means for each asset class, implement signing and verification in your build/update paths, and retain evidence that verification runs and blocks untrusted changes. 1
Key takeaways:
- Treat this as an engineering control: sign, verify, and fail closed on invalid integrity checks for code, firmware, and critical data. 1
- Scope is broader than “production code”; include update mechanisms, configuration baselines, and integrity of high-value data stores. 1
- Audits often fail on evidence and coverage mapping, not on crypto algorithms; document where checks happen and show logs/reports that they run. 2
SI-7(6): Cryptographic Protection is an integrity requirement, not a generic “encrypt things” statement. Your assessor will expect proof that your organization can detect unauthorized changes to (1) software, (2) firmware, and (3) information using cryptographic mechanisms, and that these mechanisms are embedded in normal operations rather than performed ad hoc. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path is to translate the control into a small set of enforceable “integrity gates”: what must be signed, who can sign it, where verification occurs, and what happens if verification fails. Then you map those gates to systems in scope (federal information systems and contractor systems handling federal data), assign an owner, and collect recurring artifacts. 3
This page is requirement-level implementation guidance aimed at getting you to audit-ready operation: scoping, control design, step-by-step execution, evidence you should retain, and the assessor questions that commonly stall teams.
Regulatory text
Requirement excerpt: “Implement cryptographic mechanisms to detect unauthorized changes to software, firmware, and information.” 1
Operator translation (what you must do):
- Pick cryptographic integrity mechanisms appropriate to the asset: signed code/packages, digitally signed firmware, authenticated hashes/HMAC, or integrity tags anchored in trusted keys. The mechanism must make unauthorized modification detectable. 1
- Place verification at control points where changes could be introduced: CI/CD pipelines, deployment tooling, update services, boot processes, file distribution, and data ingest/replication paths. 2
- Define and enforce failure behavior: if verification fails, the system should block execution/load/use, quarantine the artifact, alert, and trigger incident handling consistent with your integrity monitoring program. 1
This is not satisfied by a policy that “developers should hash releases.” Assessors look for consistent verification and tamper evidence across the covered asset classes. 2
Plain-English interpretation of the requirement
SI-7(6) expects you to prevent “silent” tampering. If an attacker (or a well-meaning admin) modifies a binary, container image, firmware blob, or a protected dataset, your cryptographic controls should detect it reliably, and your operational workflow should react in a defined way. 1
A practical way to frame it for implementation is:
- Software integrity: signed builds or signed artifacts, verified before deploy and before run where feasible. 2
- Firmware integrity: signed firmware updates and verification at boot or update install, with controlled signing keys. 2
- Information integrity: cryptographic checks that detect unauthorized modification to sensitive or mission-critical data (for example, integrity checks for exports, backups, or controlled files). 1
Who it applies to (entity and operational context)
In-scope entities
- Federal information systems implementing NIST SP 800-53 controls. 2
- Contractor systems handling federal data, including cloud and SaaS platforms used to process, store, or transmit federal information. 2
Operational contexts that commonly trigger SI-7(6) scope
- CI/CD pipelines building deployable artifacts for federal environments. 2
- Enterprise software distribution: endpoint management, package repositories, golden images. 2
- Firmware management: network gear, servers, IoT/OT devices, hardware security modules, BIOS/UEFI update processes. 2
- High-integrity data flows: configuration baselines, security telemetry, audit logs, backups, and controlled exports where undetected modification would be a material risk. 1
What you actually need to do (step-by-step)
1) Define “integrity-protected” coverage and owners
- Identify asset classes: software, firmware, information. Create a minimal inventory of what you will protect first (production deployables, update channels, critical data stores). 2
- Assign a control owner and technical implementers (platform engineering, endpoint/IT ops, firmware/hardware team, data platform).
- Write a one-page implementation standard: what must be signed, what keys are used, where verification occurs, and minimum logging. 1
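The one-page standard is easier to keep current if each integrity gate is captured as structured data rather than prose. A minimal sketch of one such record, in Python, with every field name and value purely illustrative (none of these systems or key paths come from the requirement text):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntegrityGate:
    """One enforceable integrity gate from the implementation standard.

    All field values used below are hypothetical placeholders,
    not references to real systems or keys.
    """
    asset_class: str     # "software" | "firmware" | "information"
    what_is_signed: str  # artifact type covered by this gate
    signing_key: str     # key identifier, held in your KMS/HSM
    verify_at: str       # control point where verification is enforced
    on_failure: str      # defined fail-closed behavior
    owner: str           # accountable control owner

# Example gate for production container images (hypothetical values)
container_gate = IntegrityGate(
    asset_class="software",
    what_is_signed="container images promoted to production",
    signing_key="kms://signing/prod-release-key",
    verify_at="CD promotion step plus cluster admission control",
    on_failure="block deploy, quarantine artifact, alert platform on-call",
    owner="platform-engineering",
)
```

A register of these records doubles as the control-to-system mapping assessors ask for later.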
2) Implement signing for software artifacts
- Standardize artifact formats (packages, containers, binaries) and require cryptographic signing at build or release time. 2
- Protect signing keys: restrict access, separate duties (build vs sign approval), and store keys in an approved key management approach aligned to your cryptographic program. 2
- Record provenance: which pipeline/job produced the artifact and which key signed it, so you can show traceability during an assessment. 2
3) Verify integrity before deploy and, where feasible, before execution
- Add a verification gate in CI/CD: deployments must verify signatures/hashes from a trusted root before promotion. 1
- Add runtime or host-level verification where practical (for example, admission control for containers, package manager verification on hosts, signed script enforcement). The key is that tampering becomes detectable at a control point you operate. 2
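The shape of a fail-closed verification gate can be sketched in a few lines. This is a simplified illustration, not a production implementation: it checks an artifact's SHA-256 digest against a trusted manifest, and in practice the manifest itself must be digitally signed and its signature verified before use, since a bare hash list is not tamper-evident.

```python
import hashlib
import hmac

def gate_artifact(name: str, artifact_bytes: bytes, manifest: dict) -> None:
    """Fail-closed gate: raise unless the artifact's SHA-256 digest
    matches the trusted manifest entry for that artifact name."""
    expected = manifest.get(name)
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    # Unknown artifacts and modified artifacts are both blocked;
    # compare_digest avoids timing side channels in the comparison.
    if expected is None or not hmac.compare_digest(actual, expected):
        raise PermissionError(f"integrity check failed for {name}")
```

A pipeline step can translate the raised exception into a non-zero exit code, which blocks promotion and gives you the "enforced, not advisory" evidence assessors look for.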
4) Implement firmware integrity controls
- Require signed firmware images from the manufacturer or your internal firmware build process, and verify signatures during update/install. 2
- Define an exception process for legacy devices that cannot verify signatures; document compensating controls (tighter physical access, restricted update paths, enhanced monitoring), and track a replacement plan. 2
5) Protect “information” with cryptographic tamper detection where it matters
This is where teams under-scope SI-7(6). Pick the information assets where unauthorized modification is high impact, then implement cryptographic checks aligned to the workflow:
- Integrity checks for backups/archives (authenticated hashes stored separately from the data set). 1
- Signed exports/reports where recipients must verify authenticity and integrity. 2
- Integrity protection for configuration baselines distributed to fleets (signed config bundles, verified on endpoints before apply). 2
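The "authenticated hashes stored separately" pattern above can be sketched with a keyed HMAC, which (unlike a plain hash) cannot be recomputed by an attacker who modifies the data but lacks the key. A minimal illustration using the Python standard library; key handling and tag storage are deliberately out of scope here:

```python
import hashlib
import hmac

def tag_backup(data: bytes, key: bytes) -> str:
    """Compute an authenticated hash (HMAC-SHA256) over a backup.

    Store the returned tag separately from the data set, and protect
    the key, so whoever can modify the backup cannot forge the tag.
    """
    return hmac.new(key, data, hashlib.sha256).hexdigest()

def backup_is_intact(data: bytes, key: bytes, stored_tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag_backup(data, key), stored_tag)
```

The design point is the separation: if the tag lives alongside the backup and the key is not protected, an attacker can modify both and the check detects nothing.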
6) Define failure handling, alerting, and logging
- Establish “fail closed” behavior for critical paths: invalid signature or hash should block use and generate an actionable alert to the right team. 1
- Log verification events with enough detail for investigation: artifact identifier, signer identity (key/cert), verification result, host/service, timestamp, and disposition. 2
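The fields listed above map naturally onto a structured log record. A sketch of what one verification event might look like as JSON; the field names are illustrative, not a standard schema:

```python
import json
from datetime import datetime, timezone

def verification_event(artifact: str, signer: str, result: str,
                       host: str, disposition: str) -> str:
    """Build one structured verification log record as a JSON string."""
    record = {
        "event": "integrity_verification",
        "artifact": artifact,        # artifact identifier
        "signer": signer,            # signing key/cert identity
        "result": result,            # "pass" | "fail"
        "host": host,                # host or service running the check
        "disposition": disposition,  # e.g. "deployed" | "blocked"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```

Emitting both pass and fail events in this shape gives you the investigation detail and the recurring "it runs" evidence in one stream.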
7) Make it assessable: map, test, and retain recurring evidence
- Maintain a control-to-system mapping: which systems, pipelines, and tools provide cryptographic integrity checks for each asset class. 2
- Test continuously: add routine checks that verification is enforced (for example, attempt to deploy an unsigned artifact in a non-production environment and confirm it is blocked, with tickets/logs retained). 2
How Daydream fits without slowing you down: teams commonly lose time on evidence sprawl. Daydream can track the SI-7(6) owner, the implementation procedure, and the recurring evidence artifacts in one place so you can answer assessors quickly with consistent artifacts across systems. 1
Required evidence and artifacts to retain
Keep evidence that proves design and operation:
Design artifacts
- SI-7(6) control narrative: scope, cryptographic mechanisms, verification points, failure behavior. 1
- Key management standard for signing keys: access controls, rotation/issuance process, separation of duties. 2
- Architecture diagrams or data flow notes showing where signing and verification occur. 2
Operational artifacts
- CI/CD pipeline configs showing signing and verification steps enabled. 1
- Sample logs of verification (pass and fail), plus alert/ticket evidence of triage for a failed integrity check. 2
- Firmware update procedures and records showing only signed firmware is accepted. 2
- Exception register for assets that cannot support cryptographic verification, with compensating controls and remediation plan. 2
Common audit questions and hangups
Expect these questions from assessors:
- “Show me where you verify signatures before deployment. Is it enforced or advisory?” 2
- “Which systems are covered for software, firmware, and information? How did you decide?” 1
- “Who can sign releases, and how do you protect the signing keys?” 2
- “What happens when verification fails? Show evidence of a failure scenario and response.” 2
- “How do you handle third-party software and updates? Do you verify publisher signatures or your internal signatures?” 2
The hangup that stalls audits: teams can describe controls verbally but cannot produce a clean system-by-system mapping and recurring evidence. 2
Frequent implementation mistakes and how to avoid them
| Mistake | Why it fails SI-7(6) | Fix |
|---|---|---|
| Signing exists, but verification is optional | Tampering may not be detected in real operations | Enforce verification gates and block on failure. 1 |
| Only production binaries are covered | Firmware and “information” remain mutable without detection | Extend to firmware updates and high-integrity data flows. 1 |
| Signing keys stored like normal secrets | Compromise of keys defeats integrity | Restrict access, separate duties, and document issuance and approvals. 2 |
| No exception handling | Legacy systems create silent gaps | Maintain an exception register with compensating controls and a replacement track. 2 |
| Weak evidence discipline | You can’t prove operation | Standardize artifacts: pipeline configs, logs, sample blocked events, and mappings. 2 |
Enforcement context and risk implications
No public enforcement case sources were provided for this requirement in the supplied source catalog, so this page does not list enforcement actions.
Operational risk is still clear: without cryptographic tamper detection, unauthorized changes can persist long enough to corrupt systems, poison updates, or manipulate data relied on for security decisions. That turns integrity events into incident response and reporting problems, especially in federal environments where assessment readiness is contractual and recurring. 2
A practical 30/60/90-day execution plan
If speed matters, use this phased plan as a workback schedule; adjust it to your environment and release cadence.
First 30 days (stabilize scope and minimum viable enforcement)
- Name an SI-7(6) owner and publish a short implementation standard covering software, firmware, and information integrity expectations. 1
- Inventory the highest-risk paths: release pipelines, update distribution, firmware management, and critical data exports/backups. 2
- Implement signing plus a verification gate for one “golden path” application or platform service, and start collecting logs as evidence. 2
Next 60 days (expand coverage and formalize exceptions)
- Expand verification gates across remaining CI/CD pipelines and deployment methods in scope. 1
- Roll out a firmware integrity procedure: only accept signed firmware, document verification steps, and centralize records. 2
- Stand up an exception register and compensating controls process for systems that cannot support signature verification. 2
By 90 days (audit-ready mapping and repeatable evidence)
- Produce a system-by-system mapping of SI-7(6) coverage, with links to pipeline configs, signing key controls, and verification logs. 2
- Run an integrity “failure drill” in a controlled environment and retain proof that unsigned/modified artifacts are blocked and investigated. 2
- Operationalize recurring evidence capture in your GRC workflow (Daydream or your current system) so artifacts are collected consistently each assessment cycle. 1
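The failure drill in the 90-day plan can be as simple as tampering with a known-good artifact and confirming the integrity check flags it. A self-contained sketch (a real drill would run against your actual verification gate, not this stand-in):

```python
import hashlib

def verify(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact hashes to the pinned digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Drill: pin the digest of a known-good artifact, flip one byte,
# and confirm the check detects the change. Retain the run output
# (plus the ticket showing triage) as drill evidence.
good = b"known-good artifact"
pinned = hashlib.sha256(good).hexdigest()
tampered = b"Known-good artifact"  # single-byte modification

assert verify(good, pinned)
assert not verify(tampered, pinned)
```

Running this in a non-production environment, on a schedule, turns "verification is enforced" from a verbal claim into recurring evidence.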
Frequently Asked Questions
Does SI-7(6) require encryption of data at rest?
SI-7(6) is about detecting unauthorized changes via cryptographic integrity mechanisms, not confidentiality encryption. Encryption can be part of an overall cryptographic program, but this requirement focuses on tamper detection for software, firmware, and information. 1
What counts as “cryptographic mechanisms” for integrity?
Digital signatures and authenticated hashes are common mechanisms because they let you verify authenticity and detect modification. The key test is whether an unauthorized change becomes detectable at your verification point. 1
Do we need runtime verification, or is CI/CD verification enough?
Put verification where it meaningfully reduces risk: CI/CD and deploy-time gates are usually the first priority, and runtime controls are additive where feasible. Document your control points and show they are enforced. 2
How do we handle third-party software and updates?
Require verification of publisher signatures where available, or re-sign artifacts in a controlled internal process after intake scanning and approval. Keep records that show what you accepted, how it was verified, and how it was distributed. 2
What’s the simplest way to address the “information” part without boiling the ocean?
Start with information that drives security or mission decisions, then protect distribution and storage with cryptographic integrity checks and separate storage of integrity metadata. Document your selection rationale and expand iteratively. 1
What evidence will an assessor accept if we can’t easily demonstrate a failed verification in production?
Use a controlled test environment to generate a failed verification event, then retain logs and tickets showing the block and triage path. Pair that with production “pass” verification logs and enforcement configurations. 2
Footnotes
1. NIST SP 800-53 Rev. 5 OSCAL JSON
2. NIST SP 800-53 Rev. 5
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream