Risk Analysis and Prioritization

To meet the C2M2 “Risk Analysis and Prioritization” requirement, you must take every identified cybersecurity risk, analyze it using defined criteria (likelihood, impact, and context), and then rank it so leadership can make consistent treatment decisions and fund the right work first [1]. Operationalize this by standardizing scoring, review/approval, and tracking evidence from identification through remediation.

Key takeaways:

  • You need a documented, repeatable method to score and rank risks, not ad hoc judgments [1].
  • Prioritization must connect to decisions: accept, mitigate, transfer, or avoid, with named owners and due dates.
  • Evidence matters: keep the scoring rationale, decision records, and remediation tracking for audit, customers, and internal control testing.

“Identified cybersecurity risks are analyzed and prioritized” sounds simple until you try to run it across real teams, real assets, and real constraints [1]. Most failures happen in the seams: risks are logged but not scored consistently, scoring happens but doesn’t translate into funded work, or “priorities” change with whoever is in the room.

This page gives requirement-level guidance for implementing C2M2 v2.1 RISK-1.B (MIL1) in a way that holds up during internal audits, customer security reviews, and regulator-facing examinations in critical infrastructure contexts [1]. The focus is operational: a practical method, clear roles, step-by-step workflow, and the artifacts you should retain.

If you run third-party risk, this requirement also applies to cybersecurity risks introduced by third parties (software suppliers, integrators, managed service providers, OEMs), because those risks still land in your environment and must be ranked against internal risks for treatment decisions. The end state is a single, defensible risk priority list that drives action.

Regulatory text

Excerpt (C2M2 v2.1 RISK-1.B, MIL1): “Identified cybersecurity risks are analyzed and prioritized.” [1]

What the operator must do

You must implement a repeatable process that:

  1. Takes risks your organization has already identified (from assessments, incidents, testing, audits, third-party reviews, vulnerability management, OT engineering findings, etc.).
  2. Analyzes each risk using defined criteria (at minimum, likelihood and impact, adjusted for organizational context).
  3. Produces a prioritized output that drives risk treatment decisions and sequencing of remediation work [1].

Plain-English interpretation (what “good” looks like)

  • Analyze means you can explain why a risk matters, how it could occur, what it affects, and what conditions increase or reduce likelihood.
  • Prioritize means you can rank risks against each other using consistent rules, not the loudest stakeholder or most recent incident.
  • Context means you adjust scoring based on where the risk sits: crown-jewel OT assets, safety implications, regulated operations, exposed external services, third-party access paths, or business-critical dependencies [1].

A simple maturity test: pick any risk from your log. Can you show the scoring inputs, who reviewed them, what decision was made, and what happened next? If you can’t, you have activity, not an operating process.

Who it applies to

Entities

This requirement applies to organizations using C2M2 v2.1 to assess and improve cybersecurity capability, commonly in energy and other critical infrastructure environments [2].

Operational context (where it shows up)

  • Enterprise IT risk management: security findings, control gaps, vulnerability backlog, identity issues.
  • Operational technology (OT) / industrial control systems (ICS): engineering deviations, remote access pathways, segmented network exceptions, patch constraints.
  • Third-party risk management: supplier software risk, MSP access, OEM remote support, hosted platforms, critical SaaS dependencies.
  • Projects and change management: new systems, new integrations, cloud migrations, plant upgrades.

Scope matters. Define the scope you are assessing (business unit, environment, or function) and apply the same risk analysis method within that scope [1].

What you actually need to do (step-by-step)

Step 1: Define the risk analysis standard (one page is fine)

Document the minimum scoring model and rules. Keep it short enough that engineers and operators will follow it.

  • Risk statement format: “If [threat/event] exploits [vulnerability/condition], then [impact] to [asset/process].”
  • Scoring dimensions (minimum viable):
    • Likelihood (with defined levels and examples)
    • Impact (with defined levels and examples)
    • Context modifiers (criticality, exposure, safety/regulatory, compensating controls)
  • Decision thresholds: what score triggers escalation, what can be handled within a team backlog, what requires leadership acceptance.
  • Who can score vs. who must approve (see Step 4).

This aligns with the C2M2 expectation that risks are analyzed and prioritized using defined criteria and follow-through [1].
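
To make this concrete, here is a minimal sketch of a qualitative scoring model in Python. The level names, weights, and thresholds are hypothetical placeholders, not values prescribed by C2M2; substitute the definitions from your own one-page standard.

```python
# Minimal qualitative scoring model: a sketch with hypothetical level
# names and thresholds, not C2M2-prescribed values.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3, "expected": 4}
IMPACT = {"minor": 1, "moderate": 2, "major": 3, "severe": 4}

def base_score(likelihood: str, impact: str) -> int:
    """Base priority: likelihood x impact on a 1-16 scale."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def decision_path(score: int) -> str:
    """Map a score to a decision threshold (example thresholds only)."""
    if score >= 12:
        return "escalate: leadership acceptance or funded mitigation"
    if score >= 6:
        return "review: risk triage at next cadence"
    return "backlog: team owner tracks to closure"

print(base_score("likely", "major"))   # 9
print(decision_path(9))                # review: risk triage at next cadence
```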

Step 2: Normalize inputs from multiple sources into one risk register

You cannot prioritize what you can’t compare. Consolidate risks from:

  • vulnerability findings (including OT constraints),
  • penetration tests,
  • control assessments,
  • incident/post-incident reviews,
  • third-party assessments and due diligence,
  • audit issues and regulatory gaps.

Minimum fields to capture:

  • unique risk ID, title, description
  • asset/system and business process impacted
  • owner (named role)
  • source (assessment, incident, third party)
  • current controls/compensating measures
  • likelihood score + rationale
  • impact score + rationale
  • overall priority + rationale
  • decision (accept/mitigate/transfer/avoid) and approver
  • target date and tracking link
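
A register entry shaped like the following sketch captures those fields in one comparable record. The field names and types are illustrative, not a required schema; map them to whatever your GRC tool or ticketing system supports.

```python
# One possible shape for a normalized risk register entry, mirroring the
# minimum fields above. Names and types are illustrative only.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str                  # unique ID, e.g. "RISK-2024-0042"
    title: str
    description: str
    asset: str                    # asset/system and business process impacted
    owner: str                    # named role, not a team alias
    source: str                   # assessment, incident, third party, ...
    current_controls: list[str] = field(default_factory=list)
    likelihood: str = ""
    likelihood_rationale: str = ""
    impact: str = ""
    impact_rationale: str = ""
    priority: int = 0
    priority_rationale: str = ""
    decision: str = ""            # accept / mitigate / transfer / avoid
    approver: str = ""
    target_date: str = ""         # ISO date, e.g. "2025-06-30"
    tracking_link: str = ""       # ticket or project URL
```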

Step 3: Perform analysis consistently (the “why” behind the score)

Require a short rationale for each score so two analysts would land in the same range.

  • Likelihood analysis prompts
    • Is the pathway exposed (internet-facing, vendor remote access, flat OT network segment)?
    • How easy is exploitation given your environment and controls?
    • Has it occurred before (internally) or is it plausible given your threat profile?
  • Impact analysis prompts
    • Confidentiality: sensitive data exposure, customer impact
    • Integrity: process manipulation, incorrect operational setpoints, fraudulent changes
    • Availability: downtime, loss of view/control
    • Safety and reliability implications (common in energy/OT contexts)

Context modifiers should be explicit. Example: “Same vulnerability on a lab system” versus “same vulnerability on a control center gateway” should not rank equally.
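
One way to make modifiers explicit is to encode them as named factors applied on top of the base score, as in this sketch. The factor values are invented for illustration; calibrate them so the lab system and the control center gateway land where your reviewers expect.

```python
# Hypothetical context modifiers layered on the base score, so the same
# vulnerability ranks differently on a lab box vs. a control center
# gateway. Factor values are examples, not prescribed weights.

CONTEXT_MODIFIERS = {
    "crown_jewel_asset": 1.5,     # criticality
    "internet_exposed": 1.4,      # exposure
    "safety_impact": 1.5,         # safety/regulatory consequences
    "compensating_control": 0.7,  # reduces effective likelihood
}

def adjusted_score(base: int, modifiers: list[str]) -> float:
    score = float(base)
    for name in modifiers:
        score *= CONTEXT_MODIFIERS.get(name, 1.0)
    return round(score, 1)

same_vuln = 6  # same CVE, same base score in both environments
print(adjusted_score(same_vuln, []))                                      # lab system: 6.0
print(adjusted_score(same_vuln, ["crown_jewel_asset", "safety_impact"]))  # gateway: 13.5
```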

Step 4: Establish a review and prioritization cadence with clear decision rights

C2M2 MIL1 does not require complex governance, but you need repeatability.

  • Working-level triage: Security/OT security validates the risk statement and scoring.
  • Business/asset owner review: Confirms asset criticality, operational constraints, planned outages, and compensating controls.
  • Risk acceptance authority: Defines who can accept what level of risk and for how long.

If you run third-party risk: require a path for third-party-introduced risks to be scored and ranked alongside internal risks. Otherwise, supplier findings die in a separate tracker and never compete for remediation capacity.
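
Decision rights can be written down as simply as a banded table: which role may accept risk at which priority level, and for how long before re-review. The bands, roles, and durations in this sketch are placeholders for your governance model.

```python
# Sketch of decision rights: acceptance authority by priority band.
# Roles and durations are placeholders, not prescribed values.

ACCEPTANCE_AUTHORITY = [
    # (min_score, max_score, approver_role, max_acceptance_days)
    (12, 16, "CISO or delegated executive", 90),
    (6, 11, "Business unit / asset owner", 180),
    (1, 5, "Team lead", 365),
]

def required_approver(score: int) -> tuple[str, int]:
    for low, high, role, max_days in ACCEPTANCE_AUTHORITY:
        if low <= score <= high:
            return role, max_days
    raise ValueError(f"score {score} outside defined bands")

role, days = required_approver(13)
print(f"Acceptance requires: {role}; re-review within {days} days")
```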

Step 5: Convert priorities into a treatment plan you can execute

Prioritization must produce action. For each high-priority risk:

  • pick a treatment (mitigate, accept, transfer, avoid),
  • assign an accountable owner,
  • set a target date,
  • define success criteria (control implemented, access removed, segmentation exception closed, contract clause updated, third-party compensating control verified).

Track progress in a system that supports evidence export (ticketing, GRC tool, or risk module). Daydream is often the simplest way to connect risk scoring, approvals, and remediation evidence in one place without building spreadsheets that break under audit pressure.
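
If it helps, think of each treatment decision as a small record with verifiable success criteria and a traceability link, as in this sketch; the example values and URL are hypothetical.

```python
# Sketch: converting a prioritized risk into an executable treatment
# record. All values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class TreatmentPlan:
    risk_id: str
    treatment: str         # mitigate / accept / transfer / avoid
    owner: str             # accountable owner, not just informed
    target_date: str
    success_criteria: str  # verifiable end state
    ticket: str            # link back to remediation evidence

plan = TreatmentPlan(
    risk_id="RISK-2024-0042",
    treatment="mitigate",
    owner="OT Network Engineering Lead",
    target_date="2025-03-31",
    success_criteria="Vendor remote access moved behind MFA-protected "
                     "jump host; firewall exception removed and verified",
    ticket="https://tickets.example.com/CHG-1187",
)
```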

Step 6: Retest and re-prioritize when conditions change

Re-score when:

  • the asset becomes more critical,
  • exposure changes (new remote access, new integration),
  • a control changes (MFA rolled out, segmentation improved),
  • a relevant incident occurs,
  • a third party changes service scope or access model.

The point is stability with justified updates, not constant churn.
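
A lightweight way to keep re-scoring disciplined is to enumerate the triggers and check change events against them, as sketched below; the trigger names are illustrative.

```python
# Sketch: explicit re-scoring triggers. Events outside this set do not
# force a re-score, which keeps the register stable. Names illustrative.

RESCORE_TRIGGERS = {
    "asset_criticality_changed",
    "exposure_changed",          # new remote access, new integration
    "control_changed",           # MFA rolled out, segmentation improved
    "relevant_incident",
    "third_party_scope_changed",
}

def rescore_reasons(events: set[str]) -> set[str]:
    """Return the subset of events that justify re-scoring a risk."""
    return events & RESCORE_TRIGGERS

print(rescore_reasons({"control_changed", "routine_patch"}))
# {'control_changed'} -> re-score; 'routine_patch' alone would not trigger
```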

Required evidence and artifacts to retain

Auditors and customer assessors look for proof that the process runs and decisions are documented. Retain:

  • Risk analysis methodology: scoring criteria, definitions, thresholds, context modifiers.
  • Risk register export: including likelihood, impact, overall priority, and rationales.
  • Review artifacts: meeting notes or approvals showing scoring review and prioritization decisions.
  • Risk treatment decisions: acceptance memos/records, remediation plans, transfer decisions (insurance or contractual), avoidance decisions.
  • Remediation tracking: tickets, change records, project plans, and closure evidence mapped back to risk IDs.
  • Exception records: compensating controls, expiration dates, and approver for any accepted/temporarily deferred risk.

These artifacts directly support the recommended controls: document the criteria and decision process, and retain outputs and tracking that show risks were evaluated and addressed [1].

Common exam/audit questions and hangups

Expect variants of:

  • “Show me how you prioritize risks across IT and OT.”
  • “How do you ensure scoring is consistent across assessors and sites?”
  • “Who can accept risk, and where is that documented?”
  • “Pick three top risks. Show end-to-end evidence from identification to closure.”
  • “How do third-party risks enter this process, and how do they get prioritized against internal work?”

Hangups that derail reviews:

  • No documented scoring criteria (everyone “knows” how it works).
  • Scores without rationale (looks arbitrary).
  • Prioritization exists, but remediation tracking is separate and not mapped to risk IDs.
  • Accepted risks with no approver, no expiry, and no compensating controls.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Treating vulnerability severity as risk priority.
    Fix: Map vulnerabilities into risk statements that include asset criticality and exposure, then score likelihood/impact.

  2. Mistake: One-size-fits-all scoring that ignores OT constraints.
    Fix: Add context modifiers for safety, operational uptime constraints, and compensating controls relevant to OT.

  3. Mistake: Risk register becomes a graveyard.
    Fix: Tie top risks to funded work, with owners and status reporting.

  4. Mistake: Third-party risk lives in a separate silo.
    Fix: Create an intake path so third-party findings generate risk entries scored under the same model.

  5. Mistake: Risk acceptance is informal.
    Fix: Require recorded approval, rationale, and an expiration or review trigger.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this specific C2M2 requirement. Treat the risk as practical and defensibility-driven: without documented criteria and follow-up, exposures go unaddressed and your decisions will not hold up under internal control testing, audits, customer diligence, or regulator review [1].

Practical 30/60/90-day execution plan

First 30 days (stand up the minimum viable operating process)

  • Define scope for C2M2 assessment and risk prioritization.
  • Publish a one-page scoring method with definitions and decision rights.
  • Consolidate current risks into a single register with required fields.
  • Run a pilot scoring session on a small set of representative risks (IT, OT, third party).

By 60 days (make it repeatable and decision-driven)

  • Establish a regular review cadence and a standard agenda (new risks, rescoring triggers, top priorities, decisions needed).
  • Implement approval workflow for acceptance and escalation thresholds.
  • Connect prioritized risks to remediation tickets/projects and ensure traceability to closure evidence.

By 90 days (make it auditable and resilient)

  • Run a sampling-based quality check: re-score a subset to test consistency and adjust definitions.
  • Add metrics that show flow: risks identified, analyzed, prioritized, treated, overdue decisions.
  • Prepare an “audit packet” template: methodology, current top risks, and three end-to-end examples with evidence.
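
As one illustration of flow metrics, the sketch below counts register entries per stage and flags overdue decisions; it assumes entries shaped like the earlier RiskEntry example, serialized as dicts.

```python
# Sketch of flow metrics over a register export. Assumes dict entries
# with the hypothetical field names used earlier in this guide.
from datetime import date

def flow_metrics(register: list[dict], today: date) -> dict:
    stages = {"identified": 0, "analyzed": 0, "prioritized": 0, "treated": 0}
    overdue = 0
    for risk in register:
        stages["identified"] += 1
        if risk.get("likelihood_rationale") and risk.get("impact_rationale"):
            stages["analyzed"] += 1
        if risk.get("priority"):
            stages["prioritized"] += 1
        if risk.get("decision"):
            stages["treated"] += 1
        due = risk.get("target_date")
        if due and not risk.get("closed") and date.fromisoformat(due) < today:
            overdue += 1
    return {**stages, "overdue": overdue}

# Example: flow_metrics(register_export, date.today())
```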

Daydream can reduce friction here by keeping the scoring method, approvals, and remediation evidence tied to each risk record so you can answer “show me” requests quickly during audits and customer assessments.

Frequently Asked Questions

Do we need a quantitative risk model to meet this requirement?

No. C2M2 MIL1 expects that risks are analyzed and prioritized, not that you run a mathematically complex model [1]. A clear qualitative model with defined levels and documented rationale is usually sufficient if it drives consistent decisions.

How do we handle risks that come from third-party assessments?

Treat them as first-class risks in the same register, scored with the same criteria. Include context about the third party’s access, integration points, and compensating controls so the priority reflects your real exposure.

Who should own the risk scoring: security, IT/OT, or the business?

Security should define the method and facilitate consistency, but asset/process owners must validate impact and constraints. Risk acceptance should sit with the delegated authority in your governance model, and the approver must be documented.

What evidence is most persuasive to auditors?

A documented scoring methodology plus end-to-end traceability: risk identified, analyzed with rationale, prioritized, decision recorded, and remediation tracked to closure [1]. Auditors also look for consistent decision rights and time-bounded exceptions.

How often should we re-prioritize risks?

Re-prioritize when material conditions change, such as new exposure, new controls, incidents, or changes in asset criticality. Also re-score when third-party service scope or access changes materially.

We have multiple trackers (GRC tool, ticketing system, spreadsheets). Is that a problem?

It becomes a problem when you can’t prove linkage between priority, decision, and remediation status. If you keep multiple systems, enforce a unique risk ID and require tickets/projects to reference it; otherwise, consolidate in a system like Daydream to keep evidence and workflow together.
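
If you do keep multiple systems, the linkage rule is easy to automate: require every remediation ticket to reference a known risk ID and flag orphans, as in this sketch (the ID pattern and ticket fields are assumptions).

```python
# Sketch: enforcing risk-ID linkage across trackers. The ID pattern and
# ticket fields are hypothetical; adapt them to your own conventions.
import re

RISK_ID = re.compile(r"RISK-\d{4}-\d{4}")

def unlinked_tickets(tickets: list[dict], register_ids: set[str]) -> list[str]:
    """Return ticket keys that reference no known risk ID."""
    orphans = []
    for ticket in tickets:
        refs = set(RISK_ID.findall(ticket.get("description", "")))
        if not refs & register_ids:
            orphans.append(ticket["key"])
    return orphans
```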

Implementation guidance

Use the cited DOE implementation guidance when translating the requirement into day-to-day operating steps [3].

Footnotes

  1. Cybersecurity Capability Maturity Model (C2M2) v2.1, U.S. Department of Energy.

  2. Cybersecurity Capability Maturity Model (C2M2) v2.1, DOE C2M2 program.

  3. DOE C2M2 program.
