Eradication Procedures
Eradication Procedures require you to remove every component of a security incident after containment: malware, compromised accounts, and the vulnerability or weakness that allowed the intrusion. Operationally, this means executing a documented, repeatable eradication playbook that includes verified removal, credential and access resets, vulnerability remediation, and evidence capture to prove the environment is clean.
Key takeaways:
- Eradication starts only after containment, and it must address both the payload (malware/accounts) and the root cause (exploited vulnerability).
- “Delete the malware” is not enough; you need verification steps and re-entry checks before returning systems to production.
- Auditors look for artifacts: tickets, scripts/commands, EDR logs, account actions, patch records, and sign-offs tied to the incident timeline.
“Eradication” is the step that stops an incident from coming back. Containment buys you time by limiting spread; eradication removes what the attacker left behind and closes the door they used to get in. NIST SP 800-61 Rev. 2 makes the expectation explicit: after containment, you must eliminate all components of the incident, including malware, compromised accounts, and exploited vulnerabilities (Computer Security Incident Handling Guide).
For a Compliance Officer, CCO, or GRC lead, the operational challenge is consistency. Teams can respond heroically in a crisis and still fail an exam because they cannot show a defined eradication procedure, clear decision rights, and evidence that eradication actually happened. This requirement page translates the NIST expectation into a short, runnable program: who owns what, which steps must occur every time, which artifacts to retain, and how to avoid the common traps that lead to re-infection, repeated account compromise, and audit findings.
Regulatory text
NIST SP 800-61 Rev. 2, Section 3.3.4 requires organizations to “eliminate all components of the incident including malware, compromised accounts, and exploited vulnerabilities after containment” (Computer Security Incident Handling Guide).
Operator interpretation: you must have a defined eradication process that (1) removes malicious code and persistence mechanisms, (2) neutralizes compromised identities and access paths, (3) fixes or mitigates the exploited weakness, and (4) verifies eradication before recovery and return to normal operations.
Plain-English interpretation (what the requirement means)
Eradication Procedures mean your incident response program cannot stop at “we contained it.” You need a repeatable method to:
- remove attacker tools and footholds (malware, backdoors, persistence, unauthorized scheduled tasks, rogue admin tools);
- neutralize compromised accounts and tokens (password resets, key rotation, session invalidation, MFA rebind, disablement where needed);
- remediate the entry point (patch the exploited vulnerability, fix misconfigurations, tighten firewall rules, close exposed services, correct IAM policy issues);
- confirm, with evidence, that the environment is clean and ready for recovery.
A useful way to think about eradication is three layers you must clear each incident:
- Workload/device layer: endpoints, servers, containers, images.
- Identity/control plane: user accounts, service accounts, API keys, OAuth apps, SSO sessions.
- Weakness layer: vulnerabilities and misconfigurations that enabled access.
Who it applies to
Entity scope: Federal agencies and organizations using NIST SP 800-61 as the incident handling guide or mapped requirement set (Computer Security Incident Handling Guide).
Operational context: Any time you declare an incident and move past containment, eradication procedures apply across:
- Corporate endpoints and servers (on-prem and cloud IaaS)
- SaaS environments (email, collaboration, CRM, ticketing)
- Identity providers and directory services (SSO/IdP, AD/AAD)
- Critical applications and databases
- Third parties where you have administrative access or where their compromise affects you (for example, a managed service provider account used to access your environment)
Trigger: containment is in place (segmentation, isolation, blocks, takedowns) and you are ready to remove artifacts and remediate the exploited weakness.
What you actually need to do (step-by-step)
Below is a practical eradication runbook you can put into an IR playbook and require teams to follow.
1) Establish eradication decision rights and entry criteria
- Entry criteria: containment controls are active; volatile evidence collection requirements are satisfied; eradication actions won’t destroy needed forensic data.
- Decision rights: name who can authorize destructive actions (reimage, wipe, disable accounts, rotate org-wide keys). Put this in the IR plan and escalation matrix.
- Change control stance: define the “incident change window” where emergency changes are permitted but still logged and approved after the fact.
2) Build and maintain a “known-bad” eradication scope
Create a single incident-specific list that drives all eradication work:
- affected hosts, users, service accounts, cloud resources, SaaS tenants
- indicators of compromise (hashes, domains, IPs, registry keys, persistence methods)
- attacker actions (created accounts, privilege changes, mailbox rules, OAuth consents)
- suspected initial access vector (vulnerability, phishing, credential stuffing, exposed key)
This scope list should be versioned inside the incident record, not scattered across chat threads.
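One way to keep the scope list versioned inside the incident record is to hold it as structured data rather than free text. A minimal sketch, assuming a Python-based case tooling; the field names and the incident ID are illustrative, not drawn from NIST SP 800-61:

```python
from dataclasses import dataclass, field

@dataclass
class EradicationScope:
    """Incident-specific 'known-bad' list that drives all eradication work."""
    incident_id: str
    version: int = 1
    hosts: list = field(default_factory=list)             # affected endpoints/servers
    accounts: list = field(default_factory=list)          # users and service accounts
    iocs: dict = field(default_factory=dict)              # hashes, domains, IPs, registry keys
    attacker_actions: list = field(default_factory=list)  # created accounts, mailbox rules, etc.
    initial_access: str = "unknown"                       # suspected entry vector

    def revise(self, **updates) -> "EradicationScope":
        """Every change bumps the version so the scope stays auditable."""
        for key, value in updates.items():
            setattr(self, key, value)
        self.version += 1
        return self

# Hypothetical incident: scope grows as the investigation progresses.
scope = EradicationScope(incident_id="IR-2024-017")
scope.revise(hosts=["srv-web-01"], initial_access="exposed remote management interface")
print(scope.version)  # 2
```

Keeping the scope as a single versioned object means every eradication task can reference one authoritative list instead of chat-thread fragments.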
3) Remove malware and persistence mechanisms
Actions depend on environment, but your procedure should require:
- quarantine or remove malicious binaries and scripts through EDR/SOAR workflows
- identify persistence (startup items, scheduled tasks, services, cronjobs, launch agents, WMI subscriptions, container entrypoints)
- validate system integrity where possible (gold image comparison, file integrity monitoring outputs)
- decide between cleaning (surgical removal) and rebuilding (reimage/redeploy). For many incidents, a rebuild is more reliable than attempting to clean.
Control expectation: you can show that the organization has a standard approach and that it is applied consistently per incident.
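One way to apply the clean-vs-rebuild call consistently is to encode the playbook criteria as an explicit decision function. The thresholds below are illustrative policy choices, not NIST requirements:

```python
def rebuild_required(privileged_compromise: bool,
                     persistence_identified: bool,
                     persistence_fully_enumerated: bool,
                     lateral_movement: bool) -> bool:
    """Return True when reimaging is the safer call per the playbook criteria."""
    if privileged_compromise:
        return True          # attacker had admin/root: assume deep tampering
    if persistence_identified and not persistence_fully_enumerated:
        return True          # unknown persistence makes cleaning unreliable
    if lateral_movement:
        return True          # spread suggests more footholds than were found
    return False             # surgical removal is acceptable

# A workstation with a single, fully enumerated commodity malware sample:
print(rebuild_required(False, True, True, False))   # False -> clean
# A server where the attacker obtained admin rights:
print(rebuild_required(True, False, False, False))  # True -> rebuild
```

Recording the inputs alongside the decision gives you the documented rationale auditors ask for.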
4) Neutralize compromised accounts and access paths
Your procedure should include a minimum set of identity actions:
- disable confirmed compromised accounts quickly; for suspected compromise, force reset and session revoke
- reset passwords and rotate credentials for service accounts involved in the incident
- rotate API keys, access tokens, signing keys, and secrets exposed or likely accessed
- revoke active sessions (SSO, refresh tokens, device sessions) and rebind MFA if compromise is suspected
- remove unauthorized OAuth apps, mailbox forwarding rules, delegated permissions, and newly created admin roles/groups
Practical note: account cleanup is often where incidents recur. Make “session invalidation + credential rotation + privilege review” a checklist item, not a best-effort task.
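The identity checklist above can be enforced rather than left to best effort. A minimal sketch, assuming the IR case tool can track completed action names (the action labels are illustrative):

```python
# Mandatory identity actions per the eradication checklist (illustrative labels).
REQUIRED_IDENTITY_ACTIONS = {
    "disable_or_reset_account",
    "rotate_credentials",
    "revoke_sessions",       # SSO sessions, refresh tokens, device sessions
    "review_privileges",     # new admin roles, delegated permissions, OAuth grants
}

def identity_cleanup_complete(completed: set) -> tuple:
    """The incident cannot advance to verification until every action is logged."""
    missing = REQUIRED_IDENTITY_ACTIONS - completed
    return (not missing, missing)

done, missing = identity_cleanup_complete({"disable_or_reset_account",
                                           "rotate_credentials"})
print(done)             # False
print(sorted(missing))  # ['review_privileges', 'revoke_sessions']
```

A gate like this turns “session invalidation + credential rotation + privilege review” into a hard requirement instead of a reminder.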
5) Fix the exploited vulnerability (or misconfiguration)
Eradication is incomplete if the entry point remains open. Require:
- patching or upgrading the exploited component, or applying compensating controls if patching is not immediately possible
- configuration remediation (close exposed ports, restrict management interfaces, enforce least privilege, tighten conditional access)
- validation that remediation is active (vulnerability scan results, configuration drift checks, WAF rule validation)
6) Verify eradication before recovery
Define explicit verification gates; do not rely on “looks quiet.”
- rerun detection searches for IOCs across endpoints, logs, SIEM, email, and cloud audit logs
- confirm EDR status and policy enforcement on rebuilt/cleaned hosts
- check for re-creation of accounts, reappearance of persistence, new outbound connections to known-bad infrastructure
- confirm vulnerability remediation via rescans or configuration checks
Verification should end with a documented decision: “eradication complete” with approver, time, and supporting evidence references.
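The verification gate can itself be captured as a structured record. A sketch under the assumption that IOC hunt counts, rescan results, and EDR health come from your SIEM, scanner, and EDR respectively; the function and field names are hypothetical:

```python
from datetime import datetime, timezone

def eradication_gate(ioc_hits: int, vuln_rescan_open: int,
                     edr_healthy_hosts: int, total_hosts: int,
                     approver: str) -> dict:
    """Produce a documented go/no-go decision for closing eradication.

    Inputs are assumed to come from post-eradication SIEM searches,
    vulnerability rescans, and EDR policy/health checks.
    """
    passed = (ioc_hits == 0
              and vuln_rescan_open == 0
              and edr_healthy_hosts == total_hosts)
    return {
        "decision": "eradication complete" if passed else "not complete",
        "approver": approver,
        "checked_at": datetime.now(timezone.utc).isoformat(),
        "evidence": {"ioc_hits": ioc_hits,
                     "open_vulns": vuln_rescan_open,
                     "edr_coverage": f"{edr_healthy_hosts}/{total_hosts}"},
    }

record = eradication_gate(ioc_hits=0, vuln_rescan_open=0,
                          edr_healthy_hosts=4, total_hosts=4,
                          approver="IR lead")
print(record["decision"])  # eradication complete
```

Attaching this record to the incident case gives you the approver, time, and evidence references in one artifact.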
7) Capture lessons learned into permanent controls
After eradication, require a short control hardening loop:
- what control failed (patching SLA gap, weak MFA, insufficient logging, excessive privileges)
- what you changed (new detections, revised baselines, added guardrails)
- what must be monitored for re-entry (specific log sources, alerts, watchlists)
This is how you prevent the same incident class from repeating.
Required evidence and artifacts to retain
Auditors and incident reviewers expect proof that eradication happened and was verified. Retain, per incident:
- Incident timeline showing containment → eradication start → eradication complete → recovery
- Eradication checklist/runbook execution record (ticket(s) or IR case notes with tasks completed)
- EDR/SIEM evidence: detection and remediation logs, IOC hunt results, post-eradication negative searches
- Account actions: disablement logs, password reset records, session revocation outputs, privilege changes, OAuth app removal logs
- Vulnerability remediation evidence: patch/change tickets, config change records, vulnerability rescan outputs, exception approvals if compensating controls used
- Reimage/rebuild records: asset IDs, gold image versions, redeploy logs
- Approvals and sign-offs: who authorized destructive actions and who declared “eradication complete”
- Third-party communications if a third party was part of the incident scope (requests, attestations, access revocations)
If you use a platform like Daydream to track third-party risk and due diligence, connect incident eradication artifacts to the affected third party record (for example, “MSP admin account rotated” and “third-party access revoked”), so you can show ongoing oversight without hunting across tools.
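A lightweight way to keep the evidence package auditable is a per-incident manifest that flags missing artifact categories before case closure. The category names mirror the list above; the structure is an illustrative sketch, not a prescribed format:

```python
# Artifact categories every incident case must attach before closure (illustrative).
REQUIRED_EVIDENCE = [
    "timeline", "runbook_execution", "edr_siem", "account_actions",
    "vuln_remediation", "rebuild_records", "signoffs",
]

def evidence_gaps(manifest: dict) -> list:
    """List required artifact categories with no attached reference."""
    return [cat for cat in REQUIRED_EVIDENCE if not manifest.get(cat)]

# Partially populated case: three categories started, one still empty.
manifest = {
    "timeline": ["case-notes: containment -> eradication -> recovery"],
    "runbook_execution": ["ticket references"],
    "edr_siem": [],  # hunt outputs not yet attached
}
print(evidence_gaps(manifest))
```

Running this check at the verification gate prevents the common failure of “we did it, but the case file can’t prove it.”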
Common exam/audit questions and hangups
Expect these questions in audits, assessments, and tabletop reviews:
- “Show me your eradication procedure. Where is it documented, and who owns it?”
- “How do you prove the exploited vulnerability was remediated?”
- “How do you handle eradication in SaaS (email rules, OAuth apps, delegated access)?”
- “How do you ensure compromised sessions are invalidated, not just passwords changed?”
- “What is your verification step before returning systems to production?”
- “How do you coordinate eradication actions with forensics to avoid destroying evidence?”
Common hangup: teams can describe what they did, but cannot produce a clean evidence package tied to the incident record.
Frequent implementation mistakes (and how to avoid them)
- Stopping at malware removal. Fix: make “identity cleanup” and “entry-point remediation” mandatory checklist sections.
- No session revocation. Fix: explicitly require token/session invalidation for IdP, email, VPN, and key SaaS platforms.
- Cleaning systems that should be rebuilt. Fix: define criteria for rebuild (unknown persistence, privileged compromise, widespread lateral movement) and require a documented decision.
- Eradication actions scattered across chat. Fix: run everything through an IR case/ticket with timestamps, owners, and attachments.
- No verification gate. Fix: require a formal “eradication complete” approval with referenced evidence searches and rescans.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Practically, weak eradication increases the chance of reinfection, repeat unauthorized access, extended downtime, and integrity issues. From a governance standpoint, inability to evidence eradication turns a technical failure into a control failure: you cannot demonstrate that incident handling procedures were executed as required (Computer Security Incident Handling Guide).
Practical 30/60/90-day execution plan
First 30 days (stand up the minimum runnable process)
- Publish an eradication runbook checklist aligned to: malware removal, account/access cleanup, vulnerability remediation, verification gate (Computer Security Incident Handling Guide).
- Define decision rights for destructive actions and emergency changes.
- Standardize evidence capture: a single IR case template with required attachments and sign-offs.
Days 31–60 (make it consistent across systems, including SaaS)
- Build platform-specific eradication mini-playbooks: endpoints/servers, cloud, email, IdP, key SaaS.
- Define “credential rotation matrix” (which secrets rotate for which incident types) and pre-stage access to do it quickly.
- Add verification queries/hunts to your SIEM/EDR playbooks and require recorded outputs in the case file.
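The credential rotation matrix can be pre-staged as data so responders never decide under pressure which secrets to rotate. The incident types and secret classes below are illustrative examples of what such a matrix might contain:

```python
# Illustrative rotation matrix: which secret classes rotate per incident type.
ROTATION_MATRIX = {
    "business_email_compromise": ["user_passwords", "oauth_grants", "sso_sessions"],
    "endpoint_malware":          ["user_passwords", "sso_sessions", "cached_vpn_creds"],
    "exposed_api_key":           ["api_keys", "signing_keys", "service_account_secrets"],
    "third_party_compromise":    ["shared_credentials", "integration_tokens", "api_keys"],
}

def secrets_to_rotate(incident_types: list) -> set:
    """Union of secret classes across all incident types in scope."""
    out = set()
    for t in incident_types:
        out.update(ROTATION_MATRIX.get(t, []))
    return out

# An incident with two vectors in scope pulls the union of both rows:
print(sorted(secrets_to_rotate(["exposed_api_key", "third_party_compromise"])))
```

Keeping the matrix under version control alongside the runbook makes it reviewable and ready before the next incident.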
Days 61–90 (stress-test and integrate with governance)
- Run a tabletop that forces hard eradication calls (rebuild vs clean, mass token revocation, third-party access removal) and update the runbook based on gaps.
- Add eradication completion criteria to recovery/change management gates.
- Tie third-party-related incidents to your third-party inventory and oversight workflows (Daydream can serve as the system of record for third-party engagement, ownership, and evidence linking).
Frequently Asked Questions
Do eradication procedures start during containment or after?
NIST frames eradication as the phase after containment, once you have limited spread and can safely remove components without losing needed evidence (Computer Security Incident Handling Guide).
What counts as “all components of the incident”?
At minimum: malware/artifacts, compromised accounts or access tokens, and the exploited vulnerability or weakness that enabled access (Computer Security Incident Handling Guide). Your procedure should also address persistence mechanisms and unauthorized configuration changes.
How do we handle eradication for a SaaS-only incident like business email compromise?
Focus on identity and tenant controls: remove malicious inbox rules and OAuth grants, reset credentials, revoke sessions, review privileged roles, and verify no new persistence appears in audit logs.
Is patching always required to meet the eradication requirement?
You must mitigate the exploited vulnerability. Patching is the cleanest path, but if you cannot patch immediately, document compensating controls and verify they are active, then track patching to completion.
What evidence is most persuasive to auditors?
Time-bound artifacts tied to the incident case: account disable/reset logs, session revocations, EDR remediation logs, and proof of vulnerability remediation plus verification hunts showing no remaining indicators.
How do we operationalize eradication when a third party is involved?
Treat third-party access paths as in-scope components: revoke or rotate shared credentials, restrict integrations, require the third party to confirm their own eradication actions, and retain those communications and attestations in your incident record.
Authoritative Sources
- NIST SP 800-61 Rev. 2, Computer Security Incident Handling Guide, Section 3.3.4 (Eradication and Recovery)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream