ID.RA-05: Threats, vulnerabilities, likelihoods, and impacts are used to understand inherent risk and inform risk response prioritization

ID.RA-05 requires you to assess inherent cyber risk by consistently combining threats, vulnerabilities, likelihood, and impact, then using those results to prioritize risk responses (mitigate, transfer, avoid, accept) across your assets, systems, and third parties. Operationalize it by standardizing scoring criteria, linking scores to a response workflow, and retaining evidence that decisions follow the model. 1

Key takeaways:

  • Use a repeatable method to tie threat + vulnerability + likelihood + impact into an inherent risk rating. 1
  • Make the rating drive action: risk response priority, owners, due dates, and escalation thresholds. 1
  • Keep audit-ready evidence: model definitions, inputs, outputs, approvals, and proof that work tracked to the prioritization. 1

For a CCO or GRC lead, ID.RA-05 is the difference between “we have a risk register” and “we can prove our cybersecurity decisions are rational, repeatable, and tied to business impact.” The requirement is straightforward: your organization must use four ingredients (threats, vulnerabilities, likelihood, and impacts) to understand inherent risk, and then use that understanding to prioritize what you do next. 1

Operators usually stumble in two places. First, they collect threat intel and vulnerability scan results, but never translate them into likelihood and business impact in a consistent way. Second, they produce scores, but the scores do not control work intake, funding decisions, exception handling, or timelines. Auditors and regulators do not need your math to be fancy; they need it to be defined, consistently applied, and connected to decisions. 1

This page gives requirement-level implementation guidance you can execute fast: scope, method, governance, workflow integration, evidence to retain, and a practical execution plan. It also calls out common exam questions and the artifacts that reduce friction during audits and board reporting. 1

Regulatory text

Excerpt (ID.RA-05): “Threats, vulnerabilities, likelihoods, and impacts are used to understand inherent risk and inform risk response prioritization.” 1

What the operator must do:

  1. Define how you identify and describe threats, vulnerabilities, likelihood, and impact for the environments you run (enterprise IT, cloud, product, OT, third parties). 1
  2. Apply those definitions consistently to produce an inherent risk result (before compensating controls). 1
  3. Use that result to prioritize risk response work (what gets fixed first, what gets funding, what gets accepted, and what gets escalated). 1

NIST CSF 2.0 also provides the updated core structure and transition context that supports how organizations map and maintain these outcomes over time. 2

Plain-English interpretation

You need a repeatable way to answer: “Given the threats we face, the vulnerabilities we have, and the likely business impact, what should we do first?” ID.RA-05 expects your prioritization to be grounded in a defined approach, not whoever is loudest, newest, or most technically interesting. 1

“Inherent risk” matters here. If your method mixes in existing controls without making that explicit, you can end up under-prioritizing high-consequence exposures because “we already have a tool for that.” Keep inherent risk clear, then separately document residual risk if you track it. 1

Who it applies to (entity and operational context)

Applies to: any organization running a cybersecurity program that needs to identify and prioritize cyber risk decisions. 1

Operational contexts where auditors expect to see ID.RA-05 working:

  • Enterprise risk management: cyber risks ranked in a risk register with consistent scoring and response decisions. 1
  • Vulnerability management: vulnerability backlog prioritized by likelihood and business impact, not only CVSS. 1
  • Third-party risk management: critical third parties rated for inherent risk based on threat exposure, known weaknesses, and business impact of failure. 1
  • Cloud/product security: changes, exceptions, and design decisions justified with traceable risk analysis and approval paths. 1

What you actually need to do (step-by-step)

Step 1: Set scope and risk unit

Pick the unit you score and track. Common choices:

  • Application/service (best for product and customer-facing systems)
  • Business process (best for ERM alignment)
  • Asset group (best for infrastructure environments)
  • Third party (for supplier/outsourcer concentration risk)

Document your choice and how it maps to your asset inventory and service catalog. 1

Step 2: Define your four inputs (tight definitions)

Create a one-page scoring standard that defines each input with allowed sources.

  • Threats — realistic adversaries/events relevant to the scoped system or third party. Evidence sources: threat intel summaries, incident trends, sector advisories.
  • Vulnerabilities — weaknesses that could be exploited (technical, process, identity, architecture). Evidence sources: scanner outputs, pen test findings, architecture reviews, audit findings.
  • Likelihood — probability that the threat will exploit the vulnerability in your context. Evidence sources: exposure, ease of exploitation, presence of controls (tracked separately), adversary interest.
  • Impact — business consequence if the event occurs. Evidence sources: data classification, revenue/process criticality, legal/regulatory impact, safety/availability needs.

Keep this document stable. Change control it like a policy standard. 1

Step 3: Choose a scoring model you can defend

Use a simple ordinal model that business owners can understand and security teams can apply consistently (for example, low/medium/high for likelihood and impact, then a matrix). Your scoring method matters less than your ability to show it is defined and consistently applied. 1

Minimum operator requirement: document the matrix/table, rating definitions, and how ties are broken (for example, “impact wins” or “likelihood wins” rules). 1
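A minimal sketch of such a matrix in code, assuming three ordinal levels and an "impact wins" tie-break (both are illustrative choices for this sketch, not requirements of ID.RA-05):

```python
# Illustrative 3x3 inherent risk matrix. The level labels and the
# "impact wins" tie-break rule are assumptions; calibrate to your standard.
LEVELS = ["low", "medium", "high"]

MATRIX = {
    # (likelihood, impact) -> inherent risk
    ("low", "low"): "low",
    ("low", "medium"): "medium",
    ("low", "high"): "high",       # "impact wins" tie-break
    ("medium", "low"): "low",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "high",
}

def inherent_risk(likelihood: str, impact: str) -> str:
    """Return the inherent risk rating for a likelihood/impact pair."""
    if likelihood not in LEVELS or impact not in LEVELS:
        raise ValueError("ratings must be one of: " + ", ".join(LEVELS))
    return MATRIX[(likelihood, impact)]
```

Encoding the matrix as an explicit table (rather than arithmetic on scores) makes the tie-break rule visible and easy to show to an auditor alongside the written standard.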

Step 4: Produce an inherent risk rating (and keep it separate)

For each risk item (or each scoped system), record:

  • threat scenario (plain language)
  • mapped vulnerabilities/weaknesses
  • likelihood rating with rationale
  • impact rating with rationale
  • inherent risk result

If you also track residual risk, record the compensating controls and an adjusted residual score as a separate field, with separate rationale. 1
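A minimal sketch of that record as a data structure, keeping inherent and residual risk in separate fields with separate rationale (field names are assumptions; adapt them to your register or GRC schema):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class RiskItem:
    """One scored risk item. Field names are illustrative, not a required schema."""
    threat_scenario: str                  # plain-language threat description
    vulnerabilities: list[str]            # mapped weaknesses
    likelihood: str                       # ordinal rating, e.g. low/medium/high
    likelihood_rationale: str
    impact: str
    impact_rationale: str
    inherent_risk: str                    # result BEFORE compensating controls
    # Residual risk is optional and deliberately kept in separate fields.
    compensating_controls: list[str] = field(default_factory=list)
    residual_risk: Optional[str] = None
    residual_rationale: Optional[str] = None
```

Making the rationale fields mandatory in the schema (rather than free-text notes) is what lets reviewers later test scoring consistency across teams.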

Step 5: Tie the rating to a response prioritization workflow

ID.RA-05 is not complete until the rating changes what happens next. Hardwire it into your workflow:

Risk response decision table (example):

  • High — mitigate plan or formal acceptance by an authorized leader. Mandatory fields: owner, milestones, due dates, exception rationale.
  • Medium — mitigate or transfer with documented rationale. Mandatory fields: planned control(s), target date, tracking ticket.
  • Low — accept or backlog with periodic review. Mandatory fields: review cadence, trigger for re-score.

Define who can accept risk at each tier and what evidence is required for acceptance. 1
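A decision table like the example above can be sketched as a validation rule in the system of record (tier names, response paths, and field names below are assumptions):

```python
# Hypothetical mapping of inherent risk tier to allowed response paths and
# mandatory record fields, mirroring the example decision table.
RESPONSE_RULES = {
    "high": {
        "paths": {"mitigate", "accept"},  # acceptance requires authorized leader
        "mandatory_fields": {"owner", "milestones", "due_date", "exception_rationale"},
    },
    "medium": {
        "paths": {"mitigate", "transfer"},
        "mandatory_fields": {"planned_controls", "target_date", "tracking_ticket"},
    },
    "low": {
        "paths": {"accept", "backlog"},
        "mandatory_fields": {"review_cadence", "rescore_trigger"},
    },
}

def validate_response(inherent_risk: str, path: str, record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record is complete."""
    rules = RESPONSE_RULES[inherent_risk]
    problems = []
    if path not in rules["paths"]:
        problems.append(f"'{path}' is not an allowed response for {inherent_risk} risk")
    for name in sorted(rules["mandatory_fields"] - record.keys()):
        problems.append(f"missing mandatory field: {name}")
    return problems
```

Enforcing the rule at record creation (rather than in periodic review) is what "hardwires" the rating into the workflow.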

Step 6: Assign ownership and cadence (governance)

  • Name a control owner for ID.RA-05 (often GRC) and operational co-owners (vuln mgmt lead, IR lead, TPRM lead).
  • Set review triggers: new critical system, major architecture change, new third party with sensitive access, material incident, or major threat shifts. 1

A practical control that auditors like: explicitly map ID.RA-05 to a policy, procedure, control owner, and recurring evidence collection. That mapping is also a clean way to show completeness across teams. 1

Step 7: Instrument and evidence the process (make it auditable)

Put the workflow in a system of record (GRC tool, ticketing + approvals, or structured register) so you can show:

  • the scoring criteria used at the time
  • the inputs used
  • who approved the response and when
  • how remediation or acceptance was tracked to closure 1

Daydream fits naturally here when you need to keep ID.RA-05 mapped to owners and evidence, then generate auditor-ready output without chasing screenshots across tools.

Required evidence and artifacts to retain

Auditors typically ask for proof of design and proof of operation. Keep both.

Design evidence (static or change-controlled):

  • Risk assessment methodology covering threats, vulnerabilities, likelihood, impact, and inherent risk scoring. 1
  • Risk matrix and rating definitions (including what “impact” means in business terms). 1
  • RACI for risk scoring, risk response decisions, and risk acceptance authority. 1
  • Procedure for prioritization and tracking (vuln backlog, risk register workflow, third-party onboarding). 1

Operational evidence (sampled over time):

  • Completed inherent risk assessments with rationale fields populated. 1
  • Tickets/plans showing prioritized risk responses tied to inherent risk. 1
  • Risk acceptance memos/records with approvals and expiration/review trigger. 1
  • Exception register entries and compensating control narratives (if applicable). 1
  • Periodic reporting pack (top risks, trend, overdue items, accepted risk inventory). 1

Common exam/audit questions and hangups

Questions you should be ready to answer with artifacts:

  • “Show me your definitions for likelihood and impact, and how you ensure consistent scoring across teams.” 1
  • “Pick a high-risk item. Walk me from identification to prioritization to closure, including approvals.” 1
  • “How do third-party risks enter the same prioritization system as internal risks?” 1
  • “How do you ensure inherent risk isn’t understated because you assumed controls are effective?” 1

Hangups that slow audits:

  • Scores with no written rationale.
  • Multiple scoring methods across teams with no mapping.
  • Acceptance approvals that do not match documented authority.
  • Risk items that remain “high” with no plan or stale target dates. 1

Frequent implementation mistakes and how to avoid them

  1. Mistake: Confusing vulnerability severity with risk.
    Fix: require a likelihood and impact rating even for vulnerability-driven items, and record business context (system criticality, data type, exposure). 1

  2. Mistake: Inherent and residual risk collapsed into one score.
    Fix: store inherent and residual separately, even if residual is optional. Document assumptions about control effectiveness. 1

  3. Mistake: Risk scoring happens, but work prioritization ignores it.
    Fix: build a rule that high inherent risk items must have a response decision, owner, and tracked plan; audit that rule monthly. 1

  4. Mistake: No evidence trail for decisions.
    Fix: use a system workflow with immutable timestamps (or controlled approvals) for acceptance, deferrals, and priority overrides. 1
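The monthly audit of the "high inherent risk must have a plan" rule can be sketched as a simple register check (field names are assumptions, not a required schema):

```python
from datetime import date

def audit_high_risks(register: list[dict], today: date) -> list[str]:
    """Flag high inherent risk items with no response decision, no owner,
    no target date, or a stale (past-due) target date."""
    findings = []
    for item in register:
        if item.get("inherent_risk") != "high":
            continue
        rid = item.get("id", "<no id>")
        if not item.get("response_decision"):
            findings.append(f"{rid}: no response decision recorded")
        if not item.get("owner"):
            findings.append(f"{rid}: no owner assigned")
        due = item.get("target_date")
        if due is None:
            findings.append(f"{rid}: no target date")
        elif due < today:
            findings.append(f"{rid}: target date {due} is past due")
    return findings
```

Running this as a scheduled job and filing the output as a ticket gives you both the enforcement and the evidence trail in one step.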

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Practically, ID.RA-05 failures show up as governance breakdowns: inability to justify prioritization, unmanaged high-impact exposures, and inconsistent risk acceptance. That increases regulatory and audit risk because you cannot demonstrate a defensible basis for cybersecurity decisions. 1

Practical 30/60/90-day execution plan

First 30 days (stabilize the method)

  • Publish the scoring standard: definitions, matrix, and required rationale fields. 1
  • Decide the system of record (GRC register, ticketing integration, or Daydream) and enforce required fields.
  • Identify your top scoped areas: critical apps/services and critical third parties, then pilot scoring on a small set to calibrate. 1

Next 60 days (make it drive prioritization)

  • Integrate inherent risk rating into vulnerability management and third-party onboarding workflows. 1
  • Implement the response decision table with clear acceptance authority and expiration/review triggers. 1
  • Start a monthly governance review: top inherent risks, overdue response plans, and accepted risks nearing review. 1

By 90 days (prove operation and audit readiness)

  • Run an internal audit-style sample: select several items across vuln mgmt, third party, and enterprise risks; verify end-to-end traceability from scoring to action. 1
  • Produce a consistent reporting pack for leadership: prioritized risk list, response status, and exceptions. 1
  • Lock recurring evidence collection: scheduled exports, approval logs, and version history of the scoring method (a common gap is missing historical method versions). 1

Frequently Asked Questions

Do we have to use a quantitative model (ALE, dollars) for ID.RA-05?

No specific math model is required by the requirement text; you need a defined, repeatable way to combine threats, vulnerabilities, likelihood, and impact into inherent risk and use it to prioritize responses. Keep the approach understandable and consistently applied. 1

How do we score likelihood without making up numbers?

Use ordinal ratings with written criteria tied to observable conditions (exposure, exploitability, adversary relevance, history of similar events). Require a short rationale so reviewers can test consistency. 1

Does CVSS satisfy the “vulnerabilities, likelihoods, and impacts” part?

CVSS can inform vulnerability severity, but ID.RA-05 also requires likelihood and impact in your business context. Add impact criteria (data, availability needs, customer harm) and likelihood criteria (exposure, exploit path) to reach inherent risk. 1
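One hedged illustration of layering that business context on top of a CVSS score (the thresholds, bands, and data-classification labels are assumptions to calibrate against your own standard, not part of CVSS or ID.RA-05):

```python
SEVERITY_ORDER = ["low", "medium", "high"]

def cvss_band(score: float) -> str:
    """Map a CVSS base score to an ordinal band (illustrative thresholds)."""
    return "high" if score >= 7.0 else "medium" if score >= 4.0 else "low"

def inherent_from_cvss(score: float, internet_exposed: bool, data_class: str) -> str:
    """Layer a likelihood proxy (exposure) and an impact proxy (data
    classification) on top of CVSS severity. A sketch only; the bump rule
    and the conservative max-combine are assumptions."""
    likelihood = cvss_band(score)
    if internet_exposed and likelihood != "high":
        # Direct internet exposure bumps likelihood one band.
        likelihood = SEVERITY_ORDER[SEVERITY_ORDER.index(likelihood) + 1]
    impact = "high" if data_class in {"restricted", "regulated"} else "medium"
    # Conservative combine: take the higher of likelihood and impact.
    return SEVERITY_ORDER[max(SEVERITY_ORDER.index(likelihood),
                              SEVERITY_ORDER.index(impact))]
```

The point of the sketch is the shape, not the numbers: CVSS supplies one input, and exposure plus business impact move the result away from raw severity.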

How should we handle third-party inherent risk under ID.RA-05?

Score third-party scenarios the same way: threat (e.g., ransomware), vulnerabilities (security posture gaps and access paths), likelihood (exposure + history), and impact (business dependency and data). Then use that rating to prioritize due diligence depth, contract controls, and monitoring. 1

What evidence is most persuasive to auditors?

Completed assessments with rationale, a documented scoring method, and examples where the risk score clearly changed priority and timelines. Approval records for acceptance and overrides reduce debate. 1

Who should be allowed to accept high inherent risk?

Assign acceptance authority to a role with business accountability for the impacted service or process, and document the delegation. Auditors look for alignment between authority, impact ownership, and evidence of review. 1

Footnotes

  1. NIST CSWP 29

  2. NIST CSF 1.1 to 2.0 Core Transition Changes

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream