ID.RA-04: Potential impacts and likelihoods of threats exploiting vulnerabilities are identified and recorded
To meet the ID.RA-04 requirement ("Potential impacts and likelihoods of threats exploiting vulnerabilities are identified and recorded"), you must run a repeatable risk-analysis workflow that ties real vulnerabilities to plausible threat scenarios, assigns consistent likelihood and impact ratings, and records the results in a system of record that drives remediation and risk acceptance decisions (NIST CSWP 29).
Key takeaways:
- You need a documented method to score likelihood and impact for threat–vulnerability pairings, not just a vulnerability list (NIST CSWP 29).
- Recording means a durable, reviewable record with ownership, dates, rationale, and decision outcomes (NIST CSWP 29).
- Audit readiness depends on traceability: asset → vulnerability → threat scenario → likelihood/impact → treatment decision → evidence (NIST CSF 1.1 to 2.0 Core Transition Changes).
ID.RA-04 is a requirement about decision-quality risk records. Vulnerability scans and penetration tests tell you what is technically weak, but they do not, by themselves, satisfy this control outcome. You must also document what could happen if a threat exploits each meaningful vulnerability, how likely that is in your context, and what the business impact would be.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to standardize a small set of scoring criteria, define who owns the analysis, and create a single system of record (a GRC tool, ticketing platform, or governed spreadsheet) where every material vulnerability gets a consistent likelihood and impact assessment, plus a treatment decision.
NIST CSF 2.0 does not mandate a specific risk model, but examiners and auditors will expect you to show consistent application, management visibility for high-risk items, and evidence that the records are kept current as threats, assets, and exposure change (NIST CSWP 29). This page gives you requirement-level implementation guidance you can put into operation immediately.
Regulatory text
Requirement (excerpt): “Potential impacts and likelihoods of threats exploiting vulnerabilities are identified and recorded” (NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes).
What the operator must do:
1) Identify relevant threats and vulnerabilities for your environment.
2) Determine the potential business/mission impacts if exploitation occurs.
3) Estimate likelihood in a consistent way.
4) Record both impact and likelihood in a durable record that supports prioritization and governance decisions (NIST CSWP 29).
Plain-English interpretation
You are expected to maintain a living set of risk entries that answer, for each meaningful weakness:
- What threat could exploit it? (threat scenario)
- How likely is exploitation here? (likelihood)
- What happens if it’s exploited? (impact)
- What are we doing about it, and who approved the decision? (treatment and accountability)
A clean implementation produces a list of prioritized risk items that security, IT, and the business can act on, with rationale that holds up in an audit trail (NIST CSWP 29).
Who it applies to
Entities: Any organization operating a cybersecurity program and mapping to NIST CSF 2.0 outcomes, including regulated organizations and critical suppliers supporting regulated customers (NIST CSWP 29).
Operational contexts where ID.RA-04 shows up fast:
- Internet-facing systems (cloud workloads, SaaS admin consoles, APIs)
- Identity and access (privileged access paths, SSO misconfigurations)
- Sensitive data environments (customer PII, financial data, health data)
- Operational technology / mission systems where downtime has outsized impact
- Third-party dependencies where vulnerabilities exist in supplier-controlled components but create your business impact
What you actually need to do (step-by-step)
1) Define “system of record” and control ownership
Pick one place where ID.RA-04 records live (GRC platform, risk register, or ticketing tool with controlled fields). Assign:
- Control owner: usually GRC or Security Risk
- Technical contributors: vulnerability management, cloud/platform, appsec
- Business approvers: system owners and risk owners for acceptance
Operational rule: if it is not in the system of record, it does not exist for ID.RA-04.
2) Establish a simple, repeatable scoring method
Create written criteria for:
- Impact dimensions: confidentiality, integrity, availability, safety, financial/operational disruption, legal/regulatory exposure (choose the set that matches your enterprise risk language)
- Likelihood inputs: exposure (internet/internal), exploit maturity, compensating controls, ease of exploitation, asset criticality, authentication required, segmentation, monitoring/detection capability
Keep the scale consistent across teams. Auditors will focus less on the “perfect” model and more on whether the model is defined, consistently applied, and tied to governance (NIST CSWP 29).
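The scoring method above can be captured as a simple lookup so every team derives the same combined rating from the same inputs. This is a minimal sketch assuming a three-level qualitative scale; the level names and the matrix values are illustrative, not prescribed by NIST CSF 2.0, and should be replaced with your enterprise risk language.

```python
# Illustrative qualitative rubric: three levels per dimension.
LEVELS = ("low", "medium", "high")

# Combined risk rating derived from (likelihood, impact) pairs.
# The pairings below are an example policy, not a NIST mandate.
RISK_MATRIX = {
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "medium",
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "critical",
}

def combined_risk(likelihood: str, impact: str) -> str:
    """Look up the combined rating; reject values outside the rubric."""
    if likelihood not in LEVELS or impact not in LEVELS:
        raise ValueError("rating must be one of: " + ", ".join(LEVELS))
    return RISK_MATRIX[(likelihood, impact)]
```

Encoding the rubric as data rather than prose makes "consistently applied" demonstrable: the same function produces the same rating for every team and every record.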
3) Normalize what counts as a “threat exploiting a vulnerability”
Teams often stop at “CVE severity is high.” ID.RA-04 expects a scenario such as:
- “External actor exploits unpatched VPN vulnerability to gain initial access, then escalates privileges via weak admin controls.”
- “Threat actor uses exposed storage bucket permissions to exfiltrate customer files.”
- “Insider misuses excessive privileges to modify financial reporting data.”
Require each recorded item to include a short scenario statement. This prevents low-quality risk entries that cannot drive prioritization.
4) Build the intake from real data sources
Connect inputs that already exist:
- Vulnerability scans (infrastructure, container, endpoint)
- Cloud security posture findings
- Application security findings (SAST/DAST/dependency scanning)
- Pen test results
- Incident and near-miss learnings
- Threat intelligence relevant to your technology stack
Then define a threshold for what becomes a formal ID.RA-04 record (for example: “all high-criticality assets and all externally exposed vulnerabilities above a defined severity”). The threshold can be policy-set; NIST CSF does not prescribe it (NIST CSWP 29).
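A policy-set threshold like the one described can be expressed as a small intake filter. The rule below (any finding on a critical asset, or any externally exposed finding at "high" severity or above) and the field names are assumptions for illustration; substitute your own policy.

```python
# Example ordering for severity comparisons; adjust to your scale.
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def meets_threshold(finding: dict) -> bool:
    """Return True if a raw finding should become a formal ID.RA-04 record."""
    if finding.get("asset_criticality") == "critical":
        return True
    if finding.get("externally_exposed") and \
       SEVERITY_ORDER.get(finding.get("severity"), 0) >= SEVERITY_ORDER["high"]:
        return True
    return False

# Hypothetical scanner output for demonstration.
findings = [
    {"id": "CVE-2024-0001", "severity": "high", "externally_exposed": True},
    {"id": "CVE-2024-0002", "severity": "medium", "externally_exposed": False,
     "asset_criticality": "critical"},
    {"id": "CVE-2024-0003", "severity": "low", "externally_exposed": False},
]

records = [f for f in findings if meets_threshold(f)]
# The first two findings pass the threshold; the third stays in the scanner backlog.
```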
5) Perform the analysis and record it with required fields
Minimum fields to capture per record:
- Asset/system name, owner, environment (prod/non-prod)
- Vulnerability identifier (CVE or finding ID) and detection source/date
- Threat scenario (plain language)
- Likelihood rating + rationale
- Impact rating + rationale (tie to business process/data)
- Existing controls/compensating controls
- Risk treatment decision: remediate, mitigate, transfer, accept
- Target dates (if remediating) and approval (if accepting)
- Link to tickets, change requests, and validation evidence
This is the “identified and recorded” outcome in practice (NIST CSWP 29).
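One way to make the minimum field set enforceable rather than aspirational is a structured record type that rejects entries missing a scenario statement, or acceptances without a named approver. This is a sketch; the field names mirror the list above but are assumptions, not a standard schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class RiskRecord:
    asset: str
    owner: str
    environment: str              # "prod" or "non-prod"
    vulnerability_id: str         # CVE or finding ID
    detected_on: date
    threat_scenario: str          # plain-language scenario statement
    likelihood: str
    likelihood_rationale: str
    impact: str
    impact_rationale: str
    existing_controls: List[str] = field(default_factory=list)
    treatment: str = "remediate"  # remediate | mitigate | transfer | accept
    approver: Optional[str] = None
    linked_tickets: List[str] = field(default_factory=list)

    def __post_init__(self):
        # Reject low-quality entries at creation time.
        if not self.threat_scenario.strip():
            raise ValueError("scenario statement is required")
        if self.treatment == "accept" and not self.approver:
            raise ValueError("risk acceptance requires a named approver")
```

Validating at intake means "reject entries without a scenario" is a property of the system of record, not a manual QA step.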
6) Tie records to action: prioritization and governance
ID.RA-04 fails in audits when risk entries do not drive anything. Add two routines:
- Operational prioritization: High combined risk items feed patch queues, backlog ranking, and emergency change windows.
- Governance cadence: A risk committee (or security steering group) reviews top items, overdue remediation, and risk acceptances.
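The operational-prioritization routine can be as simple as ranking open records by combined rating so the patch queue always starts with the highest risk. The record shape and rating order here are illustrative assumptions.

```python
# Example ordering for combined ratings; align with your rubric.
RATING_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}

# Hypothetical open records from the system of record.
open_records = [
    {"id": "R-101", "combined_risk": "medium"},
    {"id": "R-102", "combined_risk": "critical"},
    {"id": "R-103", "combined_risk": "high"},
]

patch_queue = sorted(
    open_records,
    key=lambda r: RATING_ORDER[r["combined_risk"]],
    reverse=True,
)
# R-102 (critical) lands at the top of the queue.
```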
If you use Daydream, treat ID.RA-04 as a mapped control with an owner, a recurring evidence schedule, and automated collection from scanners and ticketing systems so your likelihood/impact records stay connected to live findings (NIST CSF 1.1 to 2.0 Core Transition Changes; NIST CSWP 29).
7) Validate and refresh
Define triggers to refresh likelihood/impact:
- New exploit activity relevant to your environment
- Major architecture changes (new internet exposure, new IAM model)
- Control changes (WAF added, segmentation modified)
- Asset reclassification (system becomes more critical)
- Scan coverage expands and finds new classes of issues
Track “last reviewed” and “reviewed by” in the system of record to prove the control operates over time.
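A "last reviewed" field is only useful if something checks it. The sketch below flags records whose review window has lapsed; the 180-day window is an example value, not a NIST requirement, and should be set by policy.

```python
from datetime import date, timedelta
from typing import Optional

# Policy-set maximum age before a record must be refreshed (example value).
REVIEW_WINDOW = timedelta(days=180)

def needs_refresh(last_reviewed: date, today: Optional[date] = None) -> bool:
    """True when the record's review window has lapsed."""
    today = today or date.today()
    return today - last_reviewed > REVIEW_WINDOW
```

Running this check on a schedule, alongside the event-based triggers above, turns "kept current" into something you can evidence.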
Required evidence and artifacts to retain
Keep evidence that shows both method and operation:
Design evidence
- Risk assessment procedure covering threat–vulnerability scenarios, likelihood scoring, impact scoring, and recording requirements (NIST CSWP 29)
- Defined rating criteria (rubric) and approval matrix (who can accept what level of risk)
- Data source inventory (scanner tools, appsec tools, TI feeds) and coverage statement
Operating evidence
- Export of the risk register entries showing likelihood/impact fields populated and dated
- Samples of completed analyses with rationale and approvals
- Linked remediation tickets and closure evidence (patch applied, config changed, compensating control implemented)
- Meeting notes or agendas where top risks and acceptances were reviewed
- Exception log for accepted risks, including expiration/review date and sign-off
Common exam/audit questions and hangups
Auditors typically test ID.RA-04 with traceability checks:
- “Show me three high-risk vulnerabilities and how you determined likelihood and impact.”
- “Where is the record stored, and who approves acceptance?”
- “How do you ensure the data is current and not a one-time exercise?”
- “Do you include cloud misconfigurations and identity exposures, or only CVEs?”
- “How do third-party components factor into likelihood and impact?”
Hangups happen when security can show findings but cannot show documented likelihood/impact rationale aligned to business context (NIST CSWP 29).
Frequent implementation mistakes (and how to avoid them)
- Relying on CVSS as "likelihood." Fix: require a likelihood rationale that includes exposure and compensating controls.
- Recording vulnerabilities without threat scenarios. Fix: make "scenario statement" a required field; reject entries without it.
- No decision trail for risk acceptance. Fix: require approver, date, scope, and review trigger for any acceptance.
- Risk register disconnected from remediation workflows. Fix: every record must link to a ticket/change item or an explicit acceptance.
- Stale records after environment changes. Fix: add refresh triggers and track "last reviewed."
Enforcement context and risk implications
NIST CSF is a framework, not a regulator, so “enforcement” usually comes indirectly: customer security reviews, contractual requirements, sector regulators referencing NIST outcomes, and audit findings that cite weak risk analysis and weak documentation (NIST CSWP 29). The risk implication is practical: if you cannot show recorded likelihood and impact, you will struggle to justify prioritization decisions after an incident, defend risk acceptances, or demonstrate reasonable governance to stakeholders.
30/60/90-day execution plan
First 30 days (stand up the control)
- Assign control owner and approvers; define RACI for analysis and sign-off.
- Select the system of record and create required fields (scenario, likelihood, impact, rationale, decision).
- Publish the scoring rubric and a one-page procedure aligned to ID.RA-04 language (NIST CSWP 29).
- Pilot on a limited scope: internet-facing production systems plus identity stack.
By 60 days (operate with consistency)
- Expand intake sources (cloud findings, appsec, pen test outputs).
- Run working sessions with system owners to calibrate impact scoring.
- Establish governance cadence for top risks and overdue remediation.
- Start sampling-based QA: check that rationale is present and approvals are correct.
By 90 days (make it durable and auditable)
- Expand scope to all critical systems and key third-party dependencies.
- Add refresh triggers and document the review process.
- Produce an evidence package: procedure, rubric, risk register export, and traceability samples.
- If using Daydream, map ID.RA-04 to your policy/procedure, assign ownership, and automate recurring evidence pulls from scanners and ticket systems to reduce manual audit prep (NIST CSF 1.1 to 2.0 Core Transition Changes; NIST CSWP 29).
Frequently Asked Questions
Do we have to score every single vulnerability?
No specific threshold is mandated in the requirement text. Define a documented threshold (by asset criticality, exposure, or severity) and prove you apply it consistently, then record likelihood/impact for the items that meet it (NIST CSWP 29).
Can we use CVSS as our likelihood score?
CVSS can be an input, but ID.RA-04 expects likelihood in your environment, which depends on exposure and controls. Keep CVSS, but add a required likelihood rationale tied to your context (NIST CSWP 29).
What does “recorded” mean for auditors?
Recorded means a durable record with dates, ownership, rationale, and decision outcomes, not a chat message or ephemeral dashboard. Auditors will expect you to retrieve historical entries and show how decisions were made (NIST CSWP 29).
How do we handle vulnerabilities in third-party SaaS platforms?
Record the scenario and your business impact, even if remediation is supplier-owned. Your treatment may be “mitigate” (configuration, monitoring, access controls) or “transfer/accept” with documented approval and follow-up actions.
How often should we refresh likelihood and impact?
Refresh on meaningful triggers: new exploit activity, new exposure, major control changes, or asset criticality changes. Document the triggers and capture “last reviewed” in the record to show the process runs continuously (NIST CSWP 29).
What’s the minimum evidence sample size to keep?
Keep enough samples to show the control operates across different asset types (cloud, on-prem, apps) and different decision types (remediate vs accept). Choose a sampling approach your audit team accepts and apply it consistently.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream