Risk Significance and Likelihood Assessment

Risk significance and likelihood assessment requires you to analyze each identified risk by estimating its potential impact and probability, then deciding how the risk will be managed (accept, mitigate, transfer, avoid) based on defined criteria. To operationalize it, you need a consistent scoring method, documented assumptions, clear risk responses, and governance that ties results to controls and monitoring (COSO IC-IF (2013)).

Key takeaways:

  • You must score both impact (significance) and likelihood for each risk using defined criteria and documented rationale (COSO IC-IF (2013)).
  • The assessment is incomplete until you determine and record a risk response and link it to control activities and owners (COSO IC-IF (2013)).
  • Evidence quality matters: examiners and internal audit will test consistency, traceability, and decision records, not just a heat map.

A “risk register” that lists risks without defensible impact/likelihood ratings does not meet the intent of COSO’s requirement. COSO expects a repeatable process that turns identified risks into management decisions: how big is the risk, how likely is it, and what will you do about it (COSO IC-IF (2013)). For a CCO, GRC lead, or compliance officer, the practical challenge is consistency. Different teams score differently, assumptions stay in people’s heads, and the risk response becomes a vague “monitor” with no owner, control linkage, or follow-through.

This page translates the requirement into an operator-ready workflow you can run across compliance, security, privacy, finance, and third-party risk. The goal is simple: produce risk ratings you can defend, decisions you can evidence, and actions you can track. You will see step-by-step guidance, the artifacts to retain for audit readiness, common examiner questions, and a phased execution plan you can put into motion quickly. Everything maps back to the COSO Internal Control – Integrated Framework, Principle 7 point of focus on analyzing risks for significance and likelihood and determining how the risk should be managed (COSO IC-IF (2013)).

Regulatory text

COSO requirement (excerpt): “Identified risks are analyzed through a process that includes estimating the potential significance of the risk and determining how the risk should be managed.” (COSO IC-IF (2013))

Operator meaning: once you identify a risk, you need a documented method to:

  1. estimate significance (impact) and likelihood, and
  2. select a risk response and show how management will address it (COSO IC-IF (2013)).

This is a decision requirement, not a documentation requirement. Your documentation is the evidence that the decision process exists, is consistently applied, and results in managed risk.

Plain-English interpretation (what the requirement is asking for)

You need a repeatable process that converts risk identification into a prioritized, actionable plan:

  • “Significance” means the plausible impact if the risk occurs (financial, operational, legal/regulatory, customer harm, safety, reputation, strategic objectives).
  • “Likelihood” means the plausible probability of occurrence within a defined time horizon, given current conditions and controls.
  • “Determine how the risk should be managed” means recording a clear response (avoid, mitigate, transfer/share, accept) plus the actions, owners, and controls that make that response real (COSO IC-IF (2013)).

If two different teams assess the same risk and reach dramatically different ratings without reconciling criteria and assumptions, auditors will treat the process as weak even if the spreadsheet looks polished.

Who it applies to (entity and operational context)

COSO applies broadly to organizations that use the Internal Control – Integrated Framework to design, operate, or evaluate internal control, and it is routinely used by internal auditors as an evaluation benchmark (COSO IC-IF (2013)).

Operationally, this requirement applies wherever you run risk assessments, including:

  • Enterprise risk management and compliance risk assessments (regulatory change, conduct risk, AML/sanctions, privacy).
  • Internal controls over financial reporting (ICFR) risk assessment and scoping decisions.
  • Third-party risk management (rating the inherent risk of a third-party relationship, the residual risk after controls, and response decisions such as contract controls or offboarding).
  • Security and technology risk (availability, integrity, confidentiality events and control gaps).
  • Product, model, and change risk (launch decisions, new markets, material system changes).

What you actually need to do (step-by-step)

Step 1: Set the assessment scope and time horizon

Define:

  • Scope: business units, processes, systems, and third parties included.
  • Time horizon: the period your likelihood rating refers to (for example, “within the next planning cycle” or “within the next year”). Pick one and keep it consistent across the register.
  • Risk taxonomy (lightweight): enough categories to aggregate reporting, not so many that teams argue about labels.

Evidence: documented scope statement and risk assessment charter/plan.

Step 2: Define scoring criteria for significance (impact)

Create an impact scale with clear, testable anchors. Avoid vague labels like “High = bad.” Use multiple dimensions if needed, but keep scoring friction low.

Practical approach:

  • Single overall impact score informed by dimensions such as regulatory exposure, customer impact, operational downtime, and financial statement impact.
  • Document impact guidance: what “Low/Medium/High” means in your context, including non-financial impact examples.

Evidence: approved scoring rubric, including definitions and examples.
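As an illustration, an impact rubric with testable anchors can be captured as structured data rather than free-form prose, so scorers reference the same definitions. The levels, dimensions, and dollar thresholds below are assumptions for the sketch, not COSO-prescribed values:

```python
# Illustrative impact rubric with concrete, testable anchors.
# Levels, dimensions, and thresholds are assumptions to adapt, not a standard.
IMPACT_RUBRIC = {
    "Low": {
        "financial": "loss under $50k",
        "regulatory": "no reportable event",
        "customer": "isolated complaints, no harm",
    },
    "Medium": {
        "financial": "loss $50k-$500k",
        "regulatory": "reportable event, no enforcement expected",
        "customer": "service disruption affecting a customer segment",
    },
    "High": {
        "financial": "loss over $500k",
        "regulatory": "likely enforcement action or consent order",
        "customer": "widespread harm or data exposure",
    },
}

def impact_anchor(level: str, dimension: str) -> str:
    """Return the documented anchor for a level/dimension pair."""
    return IMPACT_RUBRIC[level][dimension]
```

Publishing the rubric as data also makes it easy to surface the relevant anchor text inside the scoring form itself, which reduces "High = bad" drift across teams.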

Step 3: Define scoring criteria for likelihood

Likelihood is where teams hand-wave. Reduce subjectivity with prompts:

  • frequency of triggering events,
  • control strength and coverage,
  • exposure level (volume of transactions, number of users, number of third parties),
  • change velocity (new systems, new regulations, M&A),
  • known issues or incidents.

If you can’t quantify likelihood credibly, document it as a structured judgment with stated assumptions and data inputs (incident history, control testing results, audit issues, monitoring metrics).

Evidence: likelihood rubric and “how to score” guidance.
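A structured likelihood judgment can be sketched as a small scoring function over the prompts above. The point values and thresholds here are illustrative assumptions you would calibrate against your own incident and control-testing data:

```python
def likelihood_score(incidents_12m: int, control_strength: str, exposure: str) -> str:
    """Map structured inputs to a Low/Medium/High likelihood rating.

    Thresholds and weights are illustrative; calibrate to your own history.
    """
    points = 0
    # Incident history: repeated events push likelihood up.
    points += 2 if incidents_12m >= 3 else (1 if incidents_12m >= 1 else 0)
    # Weaker controls mean more points (higher likelihood).
    points += {"weak": 2, "partial": 1, "strong": 0}[control_strength]
    # Exposure: transaction volume, user count, third-party count.
    points += {"high": 2, "medium": 1, "low": 0}[exposure]
    if points >= 4:
        return "High"
    if points >= 2:
        return "Medium"
    return "Low"
```

Even when the final rating remains a judgment call, recording the inputs this way gives reviewers something concrete to challenge.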

Step 4: Score inherent risk first, then residual risk

To make ratings actionable, separate:

  • Inherent risk: impact/likelihood without considering current controls.
  • Residual risk: impact/likelihood after considering control design and operating effectiveness.

This forces the organization to articulate what controls exist and whether they work. It also clarifies where risk acceptance is genuine versus accidental.

Evidence: risk register fields for inherent and residual scores, plus control mapping and control performance inputs.
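One way to enforce the inherent/residual separation is to make control mapping a precondition for residual scoring in the register schema itself. The field names below are illustrative, not a prescribed data model:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """Register entry separating inherent from residual scores.

    Field names are illustrative; adapt to your GRC schema.
    """
    risk_id: str
    scenario: str
    inherent_impact: str
    inherent_likelihood: str
    mapped_controls: list = field(default_factory=list)
    residual_impact: str = ""
    residual_likelihood: str = ""

    def residual_scored(self) -> bool:
        # Residual scoring only counts once controls are mapped and both
        # residual dimensions are rated.
        return bool(self.mapped_controls) and bool(self.residual_impact) \
            and bool(self.residual_likelihood)
```

A schema like this makes "accidental acceptance" harder: a risk with no mapped controls simply cannot carry a residual rating.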

Step 5: Document rationale and assumptions for each rating

For each risk, require short, structured rationale:

  • What scenario are we rating?
  • What is the impact driver?
  • Why is likelihood set at this level?
  • What assumptions or data sources were used?
  • What controls were considered for residual scoring?

This is the difference between a defendable assessment and a color-coded opinion.

Evidence: completed risk narratives, linked supporting materials (control test results, incident tickets, audit reports, KRIs).
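The rationale requirement can be enforced mechanically as a save-time check that rejects entries with missing fields, mirroring the five questions above. The field names are illustrative:

```python
# Illustrative required-rationale fields, one per prompt in Step 5.
REQUIRED_RATIONALE = (
    "scenario",             # what scenario are we rating?
    "impact_driver",        # what drives the impact?
    "likelihood_basis",     # why this likelihood level?
    "assumptions",          # assumptions and data sources
    "controls_considered",  # controls used for residual scoring
)

def missing_rationale(record: dict) -> list:
    """Return missing or empty rationale fields; empty list means complete."""
    return [f for f in REQUIRED_RATIONALE if not str(record.get(f, "")).strip()]
```

Wired into the register workflow, a non-empty result blocks the save, which is exactly the "ratings can't be saved without rationale" behavior described later in this page.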

Step 6: Determine the risk response and make it executable

COSO explicitly requires you to determine how the risk should be managed (COSO IC-IF (2013)). Record one primary response:

  • Mitigate: implement/strengthen controls, reduce exposure, add monitoring.
  • Avoid: stop the activity, exit a market, discontinue a product feature.
  • Transfer/share: insurance, outsourcing with contractual risk allocation (still manage retained risk).
  • Accept: explicit acceptance with rationale, within risk appetite/tolerance.

Make every response operational with:

  • a named risk owner,
  • specific actions (control changes, policy updates, training, system fixes),
  • due dates and dependencies,
  • success criteria (what “done” looks like),
  • ongoing monitoring method.

Evidence: risk treatment plans, tickets/projects, approval records for acceptance, and links to updated controls.
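A minimal sketch of the "make it executable" rule: every response needs an owner and actions, and acceptance additionally needs a recorded approver. The rule and signature below are assumptions for illustration, not a standard workflow:

```python
VALID_RESPONSES = {"mitigate", "avoid", "transfer", "accept"}

def response_is_executable(response: str, owner: str, actions: list,
                           approver: str = "") -> bool:
    """Check that a recorded risk response is operational.

    Illustrative rule: owner and actions always required; acceptance
    additionally requires a recorded approver with decision authority.
    """
    if response not in VALID_RESPONSES:
        raise ValueError(f"unknown response: {response}")
    if not owner or not actions:
        return False
    if response == "accept" and not approver:
        return False
    return True
```

The acceptance branch is the one examiners probe hardest: an "accept" with no named approver is the classic evidence gap.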

Step 7: Add governance and challenge to prevent “rubber-stamping”

Implement a review structure:

  • First line proposes ratings and responses.
  • Second line (compliance/GRC) challenges consistency, assumptions, and completeness.
  • Senior leadership reviews top risks and accepts/approves material residual risks.

Standardize “challenge notes” so rating changes and acceptances are traceable.

Evidence: review meeting minutes, sign-offs, documented challenge and resolution.

Step 8: Integrate results into control activities and monitoring

Risk scoring should drive action:

  • map high residual risks to control testing plans,
  • adjust monitoring frequency and depth,
  • inform third-party due diligence depth and contract controls,
  • feed internal audit planning and compliance testing.

If the risk register does not change what gets tested, funded, or fixed, you will struggle to show the process is real.

Evidence: crosswalks from risks to controls, test plans, monitoring dashboards, audit plan inputs.

Required evidence and artifacts to retain

Keep artifacts in a system of record (GRC platform, controlled repository, or workflow tool). Minimum set:

  • Risk assessment methodology (impact/likelihood rubrics, inherent vs residual definitions) (COSO IC-IF (2013)).
  • Risk register with versioning and dated snapshots.
  • Risk narratives with assumptions and supporting references.
  • Control mappings and control performance inputs (test results, issues, exceptions).
  • Risk response decisions: treatment plans, acceptance memos, approvals, and escalation records.
  • Governance artifacts: review schedules, attendance, challenge logs, sign-offs.
  • Change log: what changed since prior cycle and why.

Daydream can help by structuring these fields as required inputs (so ratings can’t be saved without rationale), maintaining audit-ready version history, and tying risks to third-party records, controls, and remediation workflows without chasing spreadsheets.

Common exam/audit questions and hangups

Expect internal audit (and external assurance teams, if applicable) to probe:

  • “Show me the criteria for impact and likelihood. Who approved them?”
  • “How do you ensure consistent scoring across business units?”
  • “What data supports these likelihood ratings?”
  • “Where is inherent vs residual documented, and what controls reduce the risk?”
  • “For accepted risks, who approved acceptance and on what basis?”
  • “Show me a high residual risk and the treatment plan. What is the status?”
  • “How does this assessment change testing plans or monitoring?”

Hangups usually come from missing rationale, inconsistent scoring, and “accepted” risks with no evidence of decision authority.

Frequent implementation mistakes (and how to avoid them)

  1. Heat map without decisioning.
    Fix: require a risk response and owner for every risk, even if the response is accept.

  2. No separation of inherent vs residual.
    Fix: add both scores and require control mapping before residual scoring is final.

  3. Likelihood defined as “gut feel.”
    Fix: mandate a short list of likelihood inputs (incidents, control results, exposure indicators) and capture them in the record.

  4. Risk acceptance treated as a default.
    Fix: define who can accept which level of residual risk, and require a recorded approval.

  5. One-time exercise.
    Fix: tie updates to triggers: major changes, incidents, audit issues, third-party onboarding, and periodic refresh cycles.

Execution plan (30/60/90-day)

First 30 days: stand up the method and governance

  • Draft impact and likelihood rubrics, including inherent vs residual definitions (COSO IC-IF (2013)).
  • Define required data inputs for likelihood and residual scoring (control test results, incidents, monitoring).
  • Establish governance: risk owners, second-line challenge, escalation, and approval authority for acceptance.
  • Configure your system of record (or controlled templates) with mandatory fields: scenario, inherent score, residual score, rationale, response, owner, and status.

By 60 days: run an initial assessment and calibrate

  • Run workshops with key functions (compliance, security, privacy, finance, operations, third-party risk).
  • Score top risks using the new rubric; capture assumptions and sources.
  • Calibrate scoring across teams: pick a sample of risks and reconcile differences.
  • Identify high residual risks lacking treatment plans; create actionable remediation items with owners.

By 90 days: make it operational and auditable

  • Formalize sign-off for risk responses, including acceptance approvals.
  • Integrate risk outputs into monitoring and testing plans (compliance testing, control testing, third-party oversight).
  • Produce management reporting that shows movement: new risks, re-rated risks, treatment progress, accepted risks by approver.
  • Run an internal quality check: trace a sample of risks from identification → scoring → response → control linkage → monitoring evidence.
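The quality-check trace in the last bullet can be automated as a gap check that follows a sampled risk from scoring through response, control linkage, and test evidence. The record shapes below are illustrative:

```python
def trace_risk(risk_id: str, risks: dict, controls: dict, tests: dict) -> list:
    """Trace one sampled risk end to end; return broken links (empty = traceable).

    Record shapes are illustrative stand-ins for a GRC system of record.
    """
    gaps = []
    risk = risks.get(risk_id)
    if risk is None:
        return ["risk not in register"]
    if not risk.get("residual_score"):
        gaps.append("missing residual score")
    if not risk.get("response"):
        gaps.append("missing response")
    linked = risk.get("controls", [])
    if not linked:
        gaps.append("no control linkage")
    for c in linked:
        if c not in controls:
            gaps.append(f"control {c} not in library")
        elif c not in tests:
            gaps.append(f"control {c} has no test evidence")
    return gaps
```

Running this over a sample each cycle produces exactly the kind of traceability evidence auditors ask for.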

Frequently Asked Questions

Do we have to quantify impact and likelihood with numbers?

COSO requires estimation and determination of management response, not a specific numeric scale (COSO IC-IF (2013)). You can use qualitative scales if definitions are tight and consistently applied.

What’s the difference between significance (impact) and likelihood in practice?

Impact is the severity of the outcome if the risk event occurs; likelihood is the probability of occurrence within your defined horizon. Treat them separately so a rare catastrophic event and a frequent minor issue don’t get blurred into the same rating.
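A toy comparison shows why: a single blended number loses the distinction between those two profiles, while keeping the pair preserves it. The functions are illustrative only:

```python
def blended(impact: int, likelihood: int) -> int:
    # A single combined score blurs distinct risk profiles together.
    return impact + likelihood

def separate(impact: int, likelihood: int) -> tuple:
    # An (impact, likelihood) pair keeps them distinguishable.
    return (impact, likelihood)
```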

How do we defend likelihood ratings without hard data?

Use structured judgment with documented inputs: incident history, control test results, exposure drivers, and change factors. The key is documenting assumptions and applying the same prompts across risks.

What is the minimum documentation needed for a risk acceptance decision?

Record the residual rating, the rationale for acceptance, who approved it, and any conditions (time-bound acceptance, compensating monitoring). Keep the approval record and the basis for decision tied to the risk entry (COSO IC-IF (2013)).

How does this requirement apply to third-party risk management?

Treat each third-party relationship risk as an identified risk: score inherent risk (before controls like contractual terms and monitoring), score residual risk (after controls), then document the response (enhanced due diligence, contract controls, ongoing monitoring, or exit).

How often should we redo significance and likelihood assessments?

Reassess on a defined cadence and on triggers such as major business changes, new products, material incidents, control failures, or significant third-party changes. Your cadence matters less than having defined triggers and evidence that reassessment happens.


Authoritative Sources

  • Committee of Sponsoring Organizations of the Treadway Commission (COSO), Internal Control – Integrated Framework (2013), Principle 7.
