SI-4(5): System-generated Alerts
To meet the SI-4(5) (System-generated Alerts) requirement, you must configure your security monitoring tools to automatically generate alerts to defined recipients when specified indicators of compromise (IOCs) or signs of potential compromise occur, then prove those alerts are reliably generated, routed, and acted on. Operationalize it by defining alert triggers, recipients, severity rules, and test evidence. 1
Key takeaways:
- Define exactly which “system-generated indications” trigger alerts, and document the logic and thresholds. 1
- Route alerts to named roles (not tools), with clear on-call coverage, escalation, and response expectations.
- Retain evidence of configuration, test alerts, alert routing, and incident tickets that show alerts drive action.
SI-4(5) is an execution control: auditors are looking for working alerts, not general “we monitor security events” statements. The control enhancement requires you to alert designated recipients when system-generated indications of compromise or potential compromise occur. 1 The two variables you must fill in operationally are (1) who gets alerted and (2) which system-generated conditions must produce an alert.
For a CCO, GRC lead, or security governance owner, the fastest path is to treat SI-4(5) as a small, testable set of detection-and-notification requirements. You should be able to answer: Which telemetry sources produce these alerts (EDR, SIEM, cloud logs, IAM, email security)? Which detection rules are in scope? How do alerts reach humans with authority to act? What happens after the alert fires?
This page gives requirement-level implementation guidance you can hand to a SOC manager or IT operations lead, then collect the evidence you need for assessment readiness. It is written to help you implement and defend the SI-4(5) (System-generated Alerts) requirement under NIST SP 800-53 Rev. 5. 2
Regulatory text
Requirement (excerpt): “Alert [organization-defined personnel or roles] when the following system-generated indications of compromise or potential compromise occur: [organization-defined indications].” 1
What the operator must do: You must (a) define the recipients, (b) define the “system-generated indications” that require alerting, and (c) implement tooling so that when those indications occur, alerts are automatically generated and delivered to those recipients. You also need to demonstrate the alerting works in practice with configuration and test evidence. 1
Plain-English interpretation
SI-4(5) means your systems should not “quietly log” signs of compromise. When a defined compromise signal appears (for example, an EDR high-severity detection, a cloud account takeover indicator, or a privileged account anomaly), the environment must automatically notify the right humans or on-call function through your alerting channels. 1
A practical way to read the enhancement:
- “System-generated” means produced by a technical control or platform (SIEM, EDR, IDS/IPS, cloud security, IAM, email gateway), not a manual observation.
- “Indications of compromise or potential compromise” are your defined detection outcomes that warrant attention, triage, and possible containment.
- “Alert” is an active notification to recipients (paging, ticket creation, email/ChatOps), not passive storage in a log index. 1
Who it applies to
Entity types: Federal information systems, and contractor systems handling federal data, are commonly in scope for NIST SP 800-53 controls, including SI-4(5). 2
Operational context (where it matters most):
- Environments with centralized monitoring (SIEM/SOC) where alerts must reach an on-call responder.
- Distributed IT/security teams where alerts must route by system ownership (cloud team, endpoint team, IAM team).
- Hybrid organizations that rely on third parties for SOC operations; you still own the requirement, even if a managed security service provider (MSSP) receives alerts first.
What you actually need to do (step-by-step)
Step 1: Assign control ownership and define accountable recipients
Define a control owner who can answer “who receives which alerts and why.” Then define recipients as roles with coverage, not named individuals. Examples: “SOC On-Call,” “Cloud Security On-Call,” “Incident Commander,” “IAM Duty Officer.”
Minimum operational decisions to document:
- Primary recipients per alert category (e.g., endpoint, identity, cloud, network).
- Backup/escalation recipients when primary does not acknowledge.
- Hours of coverage and handoff method (business hours vs on-call).
This mapping is the fastest way to make SI-4(5) assessable. 1
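As an illustration, the recipient mapping above can be captured as structured data that both humans and alert-routing automation can consume. The role names, categories, and acknowledgement windows below are hypothetical, not prescribed by the control:

```python
# Hypothetical SI-4(5) recipient mapping: roles with coverage, not individuals.
# Category names, role names, and ack_minutes thresholds are illustrative only.
ALERT_RECIPIENTS = {
    "endpoint": {"primary": "SOC On-Call", "escalation": "Incident Commander", "ack_minutes": 15},
    "identity": {"primary": "IAM Duty Officer", "escalation": "SOC On-Call", "ack_minutes": 15},
    "cloud": {"primary": "Cloud Security On-Call", "escalation": "Incident Commander", "ack_minutes": 15},
}

def escalation_target(category: str, minutes_unacknowledged: int) -> str:
    """Return the role that should currently hold the alert:
    the primary recipient, or the escalation role once the
    acknowledgement window has elapsed with no response."""
    route = ALERT_RECIPIENTS[category]
    if minutes_unacknowledged >= route["ack_minutes"]:
        return route["escalation"]
    return route["primary"]
```

A mapping like this doubles as design evidence: the same file that drives routing can be exported for the assessor.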
Step 2: Define “system-generated indications” (alert triggers) in a scoping list
Create an organization-defined list of indications of compromise or potential compromise that must generate alerts. Keep it tight and defensible.
A workable structure is a table like this:
| Alert category | System source | Example indication (your definition) | Severity | Recipient role |
|---|---|---|---|---|
| Endpoint | EDR | High-confidence malware/ransomware detection | High | SOC On-Call |
| Identity | IAM | Privileged role assigned outside approved workflow | High | IAM On-Call |
| Cloud | CSP logs/CSPM | Root/API key created; unusual geo access for admin | High | Cloud Sec On-Call |
| Network | IDS/Firewall | Critical exploit signature match | High | SOC On-Call |
| Data | DLP/CASB | Sensitive data exfiltration indicator | High | Incident Commander |
You are not required to pick these exact indications, but you must pick some, and they must be meaningful compromise signals for your environment. The audit failure mode is “we didn’t define them” or “they exist but don’t alert anyone.” 1
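One way to keep the scoping list defensible is to hold the alert matrix as machine-checkable data and validate that every indication maps to an implemented rule and a recipient. The rows and rule IDs below are made up for illustration; they mirror the table format above rather than any vendor schema:

```python
# Illustrative alert matrix mirroring the table above; rule IDs are invented.
ALERT_MATRIX = [
    {"category": "Endpoint", "source": "EDR",
     "indication": "High-confidence malware/ransomware detection",
     "severity": "High", "recipient": "SOC On-Call", "rule_id": "edr-malware-high"},
    {"category": "Identity", "source": "IAM",
     "indication": "Privileged role assigned outside approved workflow",
     "severity": "High", "recipient": "IAM On-Call", "rule_id": "iam-priv-anomaly"},
]

def validate_matrix(matrix):
    """Catch the two classic audit failure modes: an indication with
    no implemented rule, or one with no recipient role."""
    problems = []
    for row in matrix:
        for field in ("indication", "rule_id", "recipient"):
            if not row.get(field):
                problems.append(f"{row.get('category', '?')}: missing {field}")
    return problems
```

Running the validator on every change to the matrix gives you a cheap, repeatable check that the defined indications stay assessable.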
Step 3: Implement alert generation in the tools you already run
For each system source, implement alert logic in the native platform or in the SIEM/SOAR.
Operational checklist:
- Telemetry onboarded: Confirm the log/telemetry source is present and current (EDR agents reporting, cloud audit logs enabled, IAM logs flowing).
- Detection rule exists: Identify the rule(s) that represent each indication in your list (vendor detections, custom SIEM correlation, policy violation rule).
- Alert action configured: Configure “create alert / notable event / case” rather than “log only.”
- Notification route set: Configure paging/ticketing destinations (on-call system, case management, ticket queue).
- Dedup/suppression rules: Add suppression for known benign noise, but document exceptions so you do not suppress true compromise indications without approval.
- Severity mapping: Ensure the severity of the alert aligns to the recipient and expected response path.
Your goal is predictable alert behavior that a tester can reproduce. 1
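The checklist above can be partially automated against a rule-config export. As a sketch, assuming a generic export format (the field names below are not any specific SIEM vendor's schema), you can flag rules that are disabled, "log only," or missing a notification route:

```python
# Hypothetical export of SIEM/EDR rule configurations; field names are
# illustrative assumptions, not a real vendor API schema.
rules = [
    {"rule_id": "edr-malware-high", "enabled": True,
     "action": "create_case", "notify": ["pagerduty:soc-oncall"]},
    {"rule_id": "iam-priv-anomaly", "enabled": True,
     "action": "log_only", "notify": []},
]

def rules_missing_alert_actions(rule_configs):
    """Flag rules that detect but never notify anyone --
    the most common SI-4(5) configuration gap."""
    return [r["rule_id"] for r in rule_configs
            if not r["enabled"] or r["action"] == "log_only" or not r["notify"]]
```

A check like this is useful after platform migrations, when alert actions are the setting most often lost.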
Step 4: Tie alerts to response actions (so alerts are not decorative)
SI-4(5) is alerting, but assessors will still ask: “What happens next?” Connect alerts to your incident response process.
Minimum operational linkage:
- Alerts create a ticket/case with required fields (time, source, impacted asset, triage owner).
- Triage SLA expectations exist (you can define internal targets without needing a regulatory citation).
- Escalation criteria exist (e.g., “High severity IOC -> incident declared”).
- Closure requires a disposition (“true positive,” “benign,” “false positive,” “duplicate”) plus notes.
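The case-creation and closure rules above can be sketched as code. This is a minimal illustration, assuming a generic case record rather than a specific ticketing product's schema:

```python
from datetime import datetime, timezone

# Closure dispositions listed in the linkage requirements above.
ALLOWED_DISPOSITIONS = {"true positive", "benign", "false positive", "duplicate"}

def open_case_from_alert(alert: dict) -> dict:
    """Create a case carrying the fields assessors expect to see.
    Field names are illustrative, not a specific ticketing schema."""
    required = ("source", "asset", "triage_owner")
    missing = [f for f in required if not alert.get(f)]
    if missing:
        raise ValueError(f"alert missing required fields: {missing}")
    return {
        "opened_at": datetime.now(timezone.utc).isoformat(),
        "source": alert["source"],
        "asset": alert["asset"],
        "triage_owner": alert["triage_owner"],
        "disposition": None,  # must be set before closure
    }

def close_case(case: dict, disposition: str, notes: str) -> dict:
    """Refuse closure without an approved disposition and notes."""
    if disposition not in ALLOWED_DISPOSITIONS:
        raise ValueError(f"unknown disposition: {disposition}")
    case.update(disposition=disposition, notes=notes)
    return case
```

Enforcing required fields at creation and dispositions at closure is what turns alerts into the operating-effectiveness evidence described later on this page.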
Step 5: Test alerting and keep the proof
Run a controlled test for each major alert category. The test does not need to be risky; it needs to demonstrate the pipeline: detection -> alert -> recipient notification -> case/ticket creation.
Examples of low-risk tests:
- Generate a harmless EICAR test file for endpoint protection if approved by security engineering.
- Create a test IAM policy violation in a non-production account to confirm alerting.
- Trigger a known test rule in the SIEM (synthetic event) to validate routing.
Record what you did, who approved it (if needed), and the evidence outputs.
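A controlled test can be wrapped in a small harness that records each pipeline stage as evidence. The sketch below assumes you supply callables that inject a labeled synthetic event and query each downstream system; the stage names and injection mechanism are assumptions for illustration, not a specific tool's API:

```python
# Sketch of a controlled alerting test harness. It attacks nothing; it
# injects a synthetic event and verifies detection -> alert ->
# notification -> ticket, capturing each stage's output as evidence.
def run_alert_pipeline_test(inject_event, fetch_alert, fetch_notification, fetch_ticket):
    evidence = {"injected": inject_event()}
    evidence["alert"] = fetch_alert(evidence["injected"])
    evidence["notification"] = fetch_notification(evidence["alert"])
    evidence["ticket"] = fetch_ticket(evidence["alert"])
    # The test passes only if every stage produced an artifact.
    evidence["passed"] = all(evidence[k] for k in ("alert", "notification", "ticket"))
    return evidence
```

Persisting the returned `evidence` dict (with timestamps from each system) gives you the test record Step 5 asks you to retain.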
Step 6: Operationalize recurring evidence collection
SI-4(5) frequently fails on evidence, not intent. Build a lightweight recurring evidence set:
- Monthly (or other cadence you choose) snapshot of “top high-severity alerts,” showing alerts exist and are routed.
- A sample of closed cases tied to alerts.
- A change record when alert rules or recipients change.
Daydream can help by mapping SI-4(5) to a named control owner, a written implementation procedure, and a recurring evidence checklist so you are not rebuilding the same packet every assessment cycle. 1
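The recurring evidence set above lends itself to a simple scripted export. As a sketch (the packet structure and field names are illustrative assumptions), a monthly snapshot might assemble top high-severity alerts and a sample of closed cases in one artifact:

```python
from datetime import date

def build_evidence_snapshot(alerts, cases, top_n=10):
    """Assemble a recurring SI-4(5) evidence packet; the structure is
    illustrative, not a mandated format."""
    high = [a for a in alerts if a.get("severity") == "High"]
    closed = [c for c in cases if c.get("disposition")]
    return {
        "snapshot_date": date.today().isoformat(),
        "top_high_severity_alerts": high[:top_n],
        "closed_case_sample": closed[:top_n],
        "counts": {"high_alerts": len(high), "closed_cases": len(closed)},
    }
```

Scheduling this export on the cadence you chose, and storing the output centrally, keeps the packet current without a scramble before each assessment.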
Required evidence and artifacts to retain
Retain artifacts that prove (1) defined scope, (2) working configuration, and (3) operational use.
Design evidence
- SI-4(5) control narrative: recipients, alert sources, defined indications, severity tiers. 1
- Alert matrix/table (indication -> detection rule -> recipient role).
- On-call roster policy or escalation path documentation.
Configuration evidence
- Screenshots or exports of SIEM/EDR/SOAR rules and notification actions.
- Integration proof: ticketing connector config, pager routing, email/ChatOps channel settings.
- Asset inventory list of covered systems (or a scoping statement if limited).
Operating effectiveness evidence
- Test alert records (event ID, timestamp, recipient notification).
- Tickets/cases created from alerts with triage notes and closure dispositions.
- Samples showing escalations occurred when required.
Common exam/audit questions and hangups
Expect these questions, and pre-answer them in your evidence packet:
- “Show me the list of system-generated compromise indications that require alerting.” 1
- “Who receives alerts, and how do you ensure coverage after hours?”
- “Demonstrate an alert firing and the resulting ticket/case.”
- “How do you prevent missed alerts due to misrouting or tool outages?”
- “How do you review alert quality and tune false positives without suppressing real IOCs?”
Hangups that slow teams down:
- Recipients defined as individuals instead of roles.
- Alerts routed to a shared inbox without ownership or escalation.
- Rules exist, but alert actions were never enabled after a platform migration.
Frequent implementation mistakes and how to avoid them
- Mistake: Defining “indications” as vague categories. Fix: Write indications as concrete detection outcomes with severity thresholds (even if the threshold is “vendor severity = High”).
- Mistake: “We have a SIEM” used as proof. Fix: Provide rule-level evidence and a test alert with recipient delivery.
- Mistake: Alert noise causes responders to ignore notifications. Fix: Implement tuning with documented approval, and track dispositions in tickets so you can show improvement without hiding risk.
- Mistake: Alerts go to the MSSP only, with no internal accountability. Fix: Contractually and operationally require forwarding/escalation into your case system and name an internal incident owner.
- Mistake: No evidence retention plan. Fix: Put evidence on a recurring calendar task tied to the SI-4(5) owner; store exports/screenshots centrally with access control.
Enforcement context and risk implications
No public enforcement cases were provided in the supplied source catalog for this requirement, so this page does not cite enforcement actions. Practically, SI-4(5) failures increase the chance that compromise indicators are detected too late or not acted on, which can expand incident scope and complicate incident reporting and contractual obligations.
Practical 30/60/90-day execution plan
First 30 days (establish scope and routing)
- Appoint SI-4(5) control owner and backup.
- Draft the “indications of compromise” list and map each to a system source and recipient role. 1
- Confirm alert delivery paths (paging/ticketing) exist and are owned.
- Identify gaps: sources not onboarded, rules missing, recipients unclear.
Days 31–60 (implement and prove operation)
- Enable or build alert rules for the defined indications across core sources.
- Configure escalation paths and acknowledgement expectations for high-severity alerts.
- Run controlled tests per alert category and collect evidence artifacts.
- Start ticket-based triage workflow tied to alerts.
Days 61–90 (stabilize, tune, and make it assessable)
- Tune noisy rules and document suppression logic and approvals.
- Implement a recurring evidence routine (exports, samples of closed cases).
- Add change management touchpoints for alert logic and recipient routing changes.
- Package everything into an assessor-ready control narrative and evidence index (Daydream can keep this mapped and repeatable). 1
Frequently Asked Questions
What counts as a “system-generated indication of compromise” for SI-4(5)?
It’s an automated detection outcome produced by a system or security tool that suggests compromise or likely compromise, as defined by your organization. Your job is to document the specific indications you chose and ensure they trigger alerts to defined roles. 1
Do alerts have to go to a SIEM, or can they come from EDR and cloud tools directly?
SI-4(5) does not require a specific architecture; it requires alerts to be generated and sent to defined recipients when defined indications occur. Centralizing in a SIEM often simplifies evidence and routing, but direct tool-to-on-call alerting can meet the requirement if it is controlled and provable. 1
Can we define recipients as a shared mailbox?
You can, but it’s a common audit hangup because mailboxes lack clear accountability and escalation. Define role-based recipients with on-call coverage, and use the mailbox only as a secondary notification or archive.
How many indications do we need to define?
NIST leaves this organization-defined, so pick a set that reflects your highest-risk compromise signals across endpoints, identity, network, and cloud. Keep the list manageable and map each indication to an implemented rule and recipient so you can test and show evidence. 1
What evidence is usually sufficient to prove SI-4(5) in an assessment?
Assessors typically accept a control narrative, rule/routing configuration exports, and a small set of alert samples that show detection, notification delivery, and resulting ticket/case handling. The evidence must tie back to your organization-defined indications and recipients. 1
We outsource monitoring to a third party. Are we still on the hook?
Yes. You can delegate operations, but you still must define recipients, ensure alerts occur for defined indications, and retain evidence that alerts are delivered and acted on. Contract terms and shared ticketing/case evidence help close the accountability gap.
Footnotes
1. NIST SP 800-53 Rev. 5, SI-4(5) control text (OSCAL JSON).
2. NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream