SI-7(5): Automated Response to Integrity Violations
SI-7(5) requires you to predefine and configure an automated response that triggers as soon as an integrity violation is detected, so the system can contain impact without waiting for human triage. To operationalize it quickly, pick integrity “tripwires” (file, config, code, database), map each to an automated action (block, quarantine, restore, alert), and retain proof the automation executed.
Key takeaways:
- Define what counts as an “integrity violation” for your environment, then bind each detection to an automated response playbook.
- Automate both containment and evidence capture (alerts, tickets, logs, snapshots) so you can prove the response ran.
- Make the control auditable: clear ownership, test results, and recurring evidence artifacts.
The SI-7(5) requirement for automated response to integrity violations is one of the fastest ways to reduce blast radius from tampering, unauthorized changes, and corrupted data. It is also a common audit friction point: teams can show detection (hashing, EDR, FIM, CI/CD checks) but cannot show an automated response that consistently triggers on integrity failures.
Operationally, treat SI-7(5) as a binding between two things you already have: (1) integrity checks that discover violations, and (2) response actions that happen automatically and predictably. “Automatically” is the key word. A Slack message alone rarely satisfies intent unless it reliably initiates containment and evidence capture without manual decision-making.
This page gives requirement-level guidance you can implement: scoping integrity violation types, selecting response actions, implementing automation paths across endpoints, servers, cloud, and CI/CD, and producing assessor-ready evidence. The goal is simple: when integrity violations happen, your environment responds on its own and leaves a trail you can audit.
Regulatory text
NIST SI-7(5) states: “Automatically {{ insert: param, si-07.05_odp.01 }} when integrity violations are discovered.” (Source: NIST SP 800-53 Rev. 5 OSCAL JSON)
Operator meaning: you must configure your system(s) so that discovery of an integrity violation triggers an automatic response you define. The variable text in the control indicates the organization specifies what automated action(s) occur (for example: quarantine, block execution, roll back, restore known-good state, disable accounts, isolate a host, stop a pipeline, or open an incident with required fields). Your job is to (a) pick the automated actions, (b) wire them to integrity detections, and (c) prove it runs.
Plain-English interpretation
An integrity violation is any condition where you no longer trust the correctness or authorized state of software, firmware, configuration, or data. SI-7(5) expects a machine-driven response at detection time, not a best-effort human response.
Think in “if/then” terms:
- IF a protected object’s integrity check fails (hash mismatch, unsigned binary, unauthorized config drift, database tamper signal, IaC policy violation),
- THEN the system automatically executes the predefined action(s) and records evidence.
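The if/then chain above can be sketched in a few lines of Python. This is a minimal illustration, not any specific tool's API: the file path, the baseline hash, and the response action names are hypothetical placeholders.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical baseline of approved SHA-256 hashes for protected objects.
BASELINE = {"/etc/app/config.yaml": "a" * 64}

def check_integrity(path, content):
    """Return a detection event if `content` deviates from baseline, else None."""
    observed = hashlib.sha256(content).hexdigest()
    if observed == BASELINE.get(path):
        return None
    return {
        "event": "integrity_violation",
        "object": path,
        "expected": BASELINE.get(path),
        "observed": observed,
        "detected_at": datetime.now(timezone.utc).isoformat(),
    }

def respond(event):
    """Predefined automatic response: quarantine marker plus an evidence record."""
    actions = [
        {"action": "quarantine", "object": event["object"]},
        {"action": "open_incident", "severity": "high"},
    ]
    # In practice this record would be shipped to immutable storage.
    return json.dumps({"detection": event, "responses": actions})

violation = check_integrity("/etc/app/config.yaml", b"tampered contents")
if violation:
    record = respond(violation)
```

The point of the sketch is the shape, not the tooling: detection returns a structured event, and the response both acts and records evidence in one motion.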
This is not the same as “we get an alert.” Alerts are part of the chain, but the control enhancement is about an automated response to reduce dwell time between detection and containment.
Who it applies to
Entity types (typical):
- Federal information systems
- Contractor systems handling federal data
Operational context where assessors look hardest:
- Systems that process sensitive government data or regulated workloads where tampering creates mission impact.
- CI/CD pipelines and artifact repositories (integrity of code and build outputs).
- Cloud control planes (integrity of IAM policies, security groups, logging configs).
- Endpoint fleets (integrity of binaries, drivers, critical registry keys).
- High-value databases (integrity of records and schema).
Common scoping decision: apply SI-7(5) first to “crown jewels” and shared control-plane components, then expand coverage. Document the scope boundary explicitly so audits don’t turn into a debate about every file on every server.
What you actually need to do (step-by-step)
Step 1: Define integrity violation categories and triggers
Create a short list of integrity conditions you will treat as violations. Keep it operational, not philosophical.
Suggested categories:
- File integrity monitoring (FIM): changes to critical binaries, configs, or scripts outside approved change windows.
- Code integrity: unsigned executables, invalid signatures, unauthorized kernel modules/drivers.
- Configuration drift: security-critical settings changed from baseline (CIS-aligned settings, logging disabled, IAM policy broadened).
- Pipeline integrity: build artifacts fail provenance checks, unexpected dependency changes, tampered container images.
- Data integrity: database tamper alerts, unexpected schema changes, integrity constraints violated in suspicious patterns.
Deliverable: an “Integrity Violation Trigger Matrix” listing each trigger, where it is detected, and its severity.
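One way to make the Trigger Matrix machine-readable is a small data structure that your automation can look up at dispatch time. The trigger IDs, detection sources, and playbook names below are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trigger:
    trigger_id: str
    category: str         # e.g. "fim", "code", "config_drift", "pipeline", "data"
    detection_source: str # where the violation is detected
    severity: str         # "low" | "medium" | "high"
    response: str         # name of the automated playbook bound to this trigger

# Hypothetical matrix entries for a small initial scope.
TRIGGER_MATRIX = [
    Trigger("FIM-001", "fim", "fim-agent", "high", "quarantine_and_ticket"),
    Trigger("CFG-002", "config_drift", "cloud-events", "medium", "revert_and_ticket"),
    Trigger("PIPE-003", "pipeline", "ci-policy-gate", "high", "block_deploy"),
]

def lookup_response(trigger_id):
    """Resolve a trigger to its bound playbook; unmapped triggers fail loudly."""
    for t in TRIGGER_MATRIX:
        if t.trigger_id == trigger_id:
            return t.response
    raise KeyError(f"unmapped trigger: {trigger_id}")
```

Keeping the matrix as data (rather than prose) means the same artifact drives both the automation and the audit deliverable.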
Step 2: Choose automated response actions per trigger (the SI-7(5) “ODP”)
For each trigger, specify what automation does. Use a tiered approach so you don’t accidentally take down production for low-confidence signals.
Example response actions:
- Containment: isolate host from network; disable a compromised service account; block process execution via EDR.
- Quarantine: move file/object to quarantine bucket; stop container; cordon node.
- Recovery: restore known-good file from golden image; roll back config via IaC; redeploy from trusted artifact.
- Workflow: open an incident ticket with required metadata; page on-call; enrich alert with asset owner and last known change.
- Preservation: capture forensic triage package (logs, process list, file hashes, cloud trail) and store immutably.
Deliverable: “Automated Response Playbooks” (one page each) that include trigger, action, systems touched, and rollback conditions.
Step 3: Implement the automation paths
You need reliable plumbing from detection to action. Typical patterns:
- EDR/XDR actioning: detection rule → automatic host isolate / kill process / quarantine file.
- SOAR playbooks: SIEM alert → SOAR workflow → IAM disable, security group update, ticket creation, evidence capture.
- Cloud-native automation: event rule → function/runbook → revert config drift, rotate keys, snapshot volume.
- CI/CD gates: policy-as-code check fails → pipeline auto-stops, blocks deploy, creates incident, tags commit/build.
Design requirement: automation must be deterministic enough to demonstrate it triggers without humans clicking “run.”
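The deterministic-plumbing requirement can be illustrated with a minimal dispatcher: every detection category maps to exactly one runbook by lookup, so the chain from detection to action involves no human decision. The category names and runbook functions here are placeholders, not any SOAR product's interface:

```python
def revert_config_drift(event):
    """Hypothetical runbook: revert a drifted config object via IaC."""
    return {"action": "revert", "object": event["object"]}

def stop_pipeline(event):
    """Hypothetical runbook: halt a build whose artifact failed provenance checks."""
    return {"action": "stop_pipeline", "build": event["object"]}

RUNBOOKS = {
    "config_drift": revert_config_drift,
    "pipeline": stop_pipeline,
}

def dispatch(event):
    """Deterministic detection-to-action lookup; unmapped categories fail loudly."""
    runbook = RUNBOOKS.get(event["category"])
    if runbook is None:
        # Failing loudly is safer than silently doing nothing.
        raise KeyError(f"no runbook for category {event['category']}")
    return runbook(event)
```

Failing loudly on unmapped categories matters for audits: a gap in coverage surfaces as an error you can track, not a response that silently never happened.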
Step 4: Build “audit-mode” evidence capture into the response
SI-7(5) fails in audits when automation exists but leaves no trace you can produce later. Instrument the workflow to emit evidence automatically:
Minimum evidence events per run:
- detection event (timestamp, rule, affected object)
- response action event (what was executed, by what identity, success/failure)
- notification/ticket event (who was notified, incident ID)
- captured artifacts references (log bundle location, snapshot ID)
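Correlating those four evidence events under one run identifier is what makes the bundle assessor-ready. A minimal sketch, with illustrative field names (real bundles would be shipped to immutable storage):

```python
def evidence_bundle(detection, action, ticket, artifacts):
    """Correlate the four minimum evidence events under one run ID."""
    # A simple, reproducible run ID; production systems might use a UUID instead.
    run_id = f"{detection['rule']}-{detection['timestamp']}"
    return {
        "run_id": run_id,
        "detection": detection,    # timestamp, rule, affected object
        "response": action,        # what executed, by what identity, success/failure
        "notification": ticket,    # incident ID, who was notified
        "artifacts": artifacts,    # log bundle location, snapshot ID
    }
```

The single `run_id` is the piece audits most often find missing: each event usually exists somewhere, but nothing ties detection, action, and ticket into one retrievable record.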
Step 5: Assign ownership and align with incident response
Name a control owner (usually Security Operations or Platform Security) and an operations owner for each automation domain (EDR owner, SOAR owner, cloud ops owner, CI/CD owner). Tie SI-7(5) runs into incident management so responders don’t bypass the automation trail.
Step 6: Test and keep test evidence
Run controlled tests that simulate integrity violations in non-production or a lab tenant:
- modify a monitored config file
- attempt to run an unsigned binary
- change a cloud logging setting from baseline
- introduce an unauthorized dependency in a build
Capture the resulting evidence trail end-to-end. Store it with the control artifacts.
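A controlled test can be expressed as a small harness that wires a mutation, a detector, and a responder together and fails if detection never fires. This is a tool-agnostic sketch; the three callables stand in for whatever lab-tenant steps your stack actually uses:

```python
def run_integrity_test(mutate, detect, respond):
    """Simulate a violation end-to-end and return the evidence trail."""
    trail = []
    mutate()                      # e.g. modify a monitored config in the lab tenant
    event = detect()
    # A simulated violation that goes undetected is itself a test failure.
    assert event is not None, "detection did not fire"
    trail.append(("detection", event))
    result = respond(event)
    trail.append(("response", result))
    return trail
```

Store the returned trail with the control artifacts; a recurring run of this harness is exactly the “recent test evidence” assessors ask for.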
Required evidence and artifacts to retain
Keep artifacts that show design, implementation, and operation:
- Control narrative for SI-7(5) mapped to systems in scope
- Integrity Violation Trigger Matrix (triggers, detection source, severity, response)
- Automated Response Playbooks (what happens automatically)
- Configuration evidence (sanitized screenshots/exported configs):
- EDR response policies
- SIEM/SOAR rules and playbook versions
- cloud event rules/runbooks
- CI/CD gate policies
- Execution logs for sample events (detection → action → ticket)
- Test results from tabletop plus technical simulations, with timestamps and outcomes
- Access control evidence showing least-privileged automation identities (service accounts/roles)
- Exception register for systems/triggers not yet automated, with compensating controls and target remediation date (your internal target)
Tip for operators: assessors respond well to a single evidence bundle per “automation chain” rather than dozens of scattered screenshots.
Common exam/audit questions and hangups
What assessors ask most often:
- “Show me an integrity violation and the automatic response that followed. Where is the log?”
- “What integrity signals are in scope? Who decided?”
- “Is the response automatic or does an analyst click a button?”
- “How do you prevent the automation from making outages worse?”
- “How do you know the automation still works after tool upgrades?”
Hangups that slow audits:
- Detections exist, but responses are manual (alerts only).
- Response runs, but evidence is missing or not correlated to the detection.
- Integrity violations are defined too broadly (“any file change”), creating noise and exceptions.
Frequent implementation mistakes (and how to avoid them)
- Automation that only notifies. Fix: include at least one containment or preservation action that runs automatically for high-confidence triggers.
- No baseline definition. Fix: define approved state (golden images, IaC baselines, signed artifacts) so “violation” is measurable.
- Over-aggressive auto-containment. Fix: tier responses by confidence; quarantine and evidence capture can be safer than network isolation for some workloads.
- No ownership for playbooks. Fix: assign a single accountable owner per automation domain and require change control for playbook edits.
- Un-auditable tooling. Fix: ensure automation identities, logs, and tickets are retained and searchable.
Enforcement context and risk implications
Public enforcement case sources were not provided for this requirement in the supplied catalog, so this page does not cite specific cases. Practically, SI-7(5) reduces risk from integrity attacks that can lead to unauthorized changes, fraudulent transactions, data corruption, and persistence mechanisms. Audit risk is also real: if you cannot show automated action and evidence retention, assessors often record a control gap even if your team responds quickly in practice.
Practical 30/60/90-day execution plan
First 30 days: establish scope, triggers, and one working automation chain
- Pick the initial system boundary (crown jewels + control plane components).
- Publish the Integrity Violation Trigger Matrix for that scope.
- Implement one end-to-end chain: detection → automated action → ticket → evidence bundle.
- Decide where evidence lives and who can retrieve it during an assessment.
Days 31–60: expand coverage and harden the evidence trail
- Add additional triggers (config drift, unsigned code, pipeline integrity).
- Standardize playbook templates and naming so logs correlate cleanly.
- Add guardrails: allowlists for approved change windows, rollback steps, and escalation paths.
- Run simulations and store test evidence as recurring artifacts.
Days 61–90: operationalize and make it repeatable
- Add monitoring for “automation failure” (playbook errors, permission failures, API limits).
- Integrate into change management so baseline updates don’t trigger false violations.
- Formalize recurring reviews: rule tuning, exception review, and evidence sampling.
- If you use Daydream for third-party and control readiness workflows, map SI-7(5) to a named control owner, attach procedures, and schedule recurring evidence pulls so audits don’t become screenshot hunts.
Frequently Asked Questions
What counts as an “integrity violation” for SI-7(5)?
Any event where a protected object deviates from an authorized, trusted state, such as unauthorized file/config changes, invalid signatures, or tampered artifacts. Define the categories and scope in your trigger matrix so the term is testable and auditable.
Does an alert to the SOC satisfy “automated response”?
Usually not by itself. Alerts are useful, but SI-7(5) expects an automated action triggered by the violation, such as quarantine, isolation, rollback, or automated evidence capture.
How do we avoid outages from automated containment?
Tier your responses by confidence and criticality. For lower-confidence signals, start with evidence capture plus ticketing; reserve disruptive actions (isolation, service stop) for high-confidence triggers with clear rollback steps.
What evidence is most persuasive to auditors?
A correlated, time-stamped record showing detection and the automated action that executed, plus the resulting ticket/incident and preserved artifacts. Keep at least one complete example per automation chain in scope.
We have multiple tools (EDR, SIEM, SOAR, cloud runbooks). How do we present this as “one control”?
Treat each “detection-to-action” path as an automation chain with a single playbook and evidence bundle. Then roll them up under SI-7(5) with a clear scope statement and an owner.
How often should we test SI-7(5) automations?
Test whenever you change the detection rules, response playbooks, or underlying platforms, and on a recurring schedule your program can sustain. What matters is that you can show recent test evidence and that failures trigger remediation work.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream