Risk Tolerance Definition

Define your cybersecurity risk tolerance in writing, get the right leaders to approve it, and communicate it so teams make consistent “accept/mitigate/transfer/avoid” decisions. Under C2M2 v2.1 RISK-1.E, you must be able to show documented criteria, decision authority, and evidence that stakeholders received and use the tolerance. 1

Key takeaways:

  • You need a documented, decision-grade risk tolerance statement tied to business impact, not vague “low/medium/high” language.
  • Communication is part of the requirement; train and enable the people who approve exceptions, sign off risks, and own remediation.
  • Retain proof: approval records, dissemination artifacts, and samples of risk decisions that align to the stated tolerance.

Footnotes

  1. Cybersecurity Capability Maturity Model v2.1

Risk tolerance definition is the control that prevents every risk decision from becoming an argument. If your organization cannot articulate how much cyber risk it is willing to accept (and under what conditions), you will see inconsistent exception approvals, unclear remediation priorities, and “paper” risk registers that do not drive action.

C2M2 v2.1 RISK-1.E states: “Organizational cybersecurity risk tolerance is defined and communicated to relevant stakeholders.” 1 For a Compliance Officer, CCO, or GRC lead, the operational goal is simple: create a repeatable decision standard that business and technical leaders actually use, then retain evidence that proves it.

This page focuses on speed to implementation. You will get: a plain-English interpretation, applicability, a step-by-step build and rollout process, required evidence, common audit questions, typical mistakes, and an execution plan you can run without waiting for a months-long strategy program. Where tools help, Daydream can centralize the artifacts and approvals so you can demonstrate consistency without chasing screenshots.

Regulatory text

Excerpt (requirement): “Organizational cybersecurity risk tolerance is defined and communicated to relevant stakeholders.” 1

Operator interpretation (what you must do):

  1. Define cybersecurity risk tolerance for the scoped organization (what level of cyber risk is acceptable, and what is not).
  2. Document the criteria and decision process used to set that tolerance (inputs, reviewers, and approval authority).
  3. Communicate the tolerance to the people who make or influence risk decisions (so it changes outcomes, not just sits in a policy library). 1

C2M2 is a maturity model frequently used in critical infrastructure environments. This requirement is assessed within the defined scope of your C2M2 adoption (business unit, function, or OT environment). 1

Plain-English interpretation (requirement-level)

A compliant risk tolerance definition answers, in operational terms:

  • What risk outcomes are unacceptable? Example: “Any scenario that can cause loss of control of safety-critical OT functions is not acceptable without compensating controls and executive sign-off.”
  • What risk can be accepted, by whom, and for how long? Example: “Time-bound acceptance for a defined system is allowed if residual risk falls below threshold X, includes documented compensating controls, and is approved by the defined risk acceptance authority.”
  • How will teams decide consistently? This is the heart of the requirement. The same type of risk should get the same disposition across sites, plants, or business units unless a documented exception exists.

The failure mode C2M2 is trying to prevent is practical: if tolerance is not defined with documented criteria and follow-up, significant exposures remain unaddressed, and decisions do not hold up during audits, customer diligence, or regulator review. 1

Who it applies to (entity and operational context)

This requirement applies to:

  • Energy sector organizations and critical infrastructure operators using C2M2 to assess cybersecurity capability and maturity. 1
  • Any scoped environment where cyber risk decisions are made: corporate IT, OT, cloud environments supporting operations, and shared services, if included in your C2M2 assessment scope. 1

Operationally, it touches:

  • Risk acceptance and exception workflows (security exceptions, policy waivers, compensating controls)
  • Security risk assessments (application, infrastructure, OT system, architecture reviews)
  • Third-party risk decisions where cyber risk is introduced through external relationships (as a dependency risk in the risk register even if managed in a separate TPDD program)
  • Remediation prioritization (what gets fixed first, what is tolerated, what requires escalation)

What you actually need to do (step-by-step)

Use this implementation sequence. It aligns to the recommended controls in the source pack: document criteria/inputs/reviewers/decision process, and retain outputs and decisions. 1

Step 1: Set the scope and decision inventory

Create a one-page scope statement:

  • Included environments (IT/OT/business units)
  • Systems and data types in scope
  • Which risk decisions must align (exceptions, risk acceptance, go/no-go releases, remediation priority)

Build an inventory of “risk decisions that matter,” such as:

  • Accepting a vulnerability past SLA
  • Allowing unsupported OS in production
  • Approving remote access paths into OT
  • Accepting a third-party integration with limited assurance evidence

Step 2: Define tolerance in decision-grade terms (not adjectives)

Produce a Cybersecurity Risk Tolerance Standard (separate from a high-level policy). It should include:

A. Risk appetite vs. tolerance

  • Appetite: directional (“generally conservative”)
  • Tolerance: operational limits (“approval required above these thresholds”)

B. Risk dimensions you will judge

Choose dimensions leaders recognize:

  • Safety impact
  • Operational disruption
  • Financial exposure (if you have internal models; do not invent quantified thresholds)
  • Regulatory/legal impact
  • Data sensitivity impact
  • Likelihood and exploitability (for cyber events)

C. Decision thresholds and required actions

Use a table so teams can execute.

Example structure (customize to your environment):

| Residual risk outcome | Default decision | Required approvals | Required compensating controls | Time limits/expiration |
| --- | --- | --- | --- | --- |
| Safety/operational control impact plausible | Avoid or mitigate | Executive risk acceptance authority + OT owner | Multi-layer controls + monitoring | Exception must expire |
| Material business disruption plausible | Mitigate or transfer | Business owner + CISO delegate | Segmentation, backup validation, detection | Exception must expire |
| Limited operational impact and contained | Accept with conditions | System owner + security | Documented rationale | Review cadence |

You are not required by the source text to use this exact format. You are required to define and communicate tolerance in a way that drives consistent decisions. 1
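A decision table like the one above is easiest to apply consistently when it is encoded as data that exception and assessment tooling can look up. A minimal Python sketch, using hypothetical category names, roles, and field labels (none of these identifiers come from C2M2 itself):

```python
# Hypothetical encoding of a tolerance decision table. Category names,
# approver roles, and decisions are illustrative placeholders only.
TOLERANCE_TABLE = {
    "safety_or_operational_control_impact": {
        "default_decision": "avoid_or_mitigate",
        "required_approvals": ["executive_risk_authority", "ot_owner"],
        "exception_must_expire": True,
    },
    "material_business_disruption": {
        "default_decision": "mitigate_or_transfer",
        "required_approvals": ["business_owner", "ciso_delegate"],
        "exception_must_expire": True,
    },
    "limited_and_contained": {
        "default_decision": "accept_with_conditions",
        "required_approvals": ["system_owner", "security"],
        "exception_must_expire": False,
    },
}

def disposition(category: str) -> dict:
    """Look up the default decision and approvals for a risk category."""
    try:
        return TOLERANCE_TABLE[category]
    except KeyError:
        # An unmapped category must be escalated, never silently accepted.
        raise ValueError(f"No tolerance category defined for {category!r}")
```

The point of the lookup is the escalation branch: a risk that does not map to a defined category is treated as out of process, which is exactly the consistency behavior the requirement asks for.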

Step 3: Document criteria, inputs, reviewers, and decision process

Create a short “how we set tolerance” appendix that includes:

  • Inputs: enterprise risk priorities, system criticality, incident history, architecture constraints, OT safety requirements
  • Reviewers: security, OT engineering, legal/compliance, business operations
  • Approvers: defined risk acceptance authority (by role/title, not person)
  • Review frequency and triggers: annual review plus material-change triggers (mergers, new OT lines, major incident)

This maps directly to the recommended control to document criteria, inputs, reviewers, and decision process. 1
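One way to keep the method appendix auditable is to store its inputs, reviewers, approver role, and review triggers as a structured record that a review script can evaluate. A sketch with hypothetical field names (substitute your own roles and triggers):

```python
# Hypothetical structured record for the "how we set tolerance" appendix.
# All role and trigger names below are examples, not prescribed values.
TOLERANCE_METHOD = {
    "inputs": [
        "enterprise_risk_priorities",
        "system_criticality",
        "incident_history",
        "ot_safety_requirements",
    ],
    "reviewers": ["security", "ot_engineering", "legal_compliance", "business_ops"],
    "approver_role": "risk_acceptance_authority",  # role/title, not a person
    "review_frequency_days": 365,
    "material_change_triggers": ["merger", "new_ot_line", "major_incident"],
}

def needs_review(days_since_last_review: int, trigger_fired: bool) -> bool:
    """Annual review, or sooner if a material-change trigger fired."""
    return trigger_fired or days_since_last_review >= TOLERANCE_METHOD["review_frequency_days"]
```

Defining the approver by role rather than person means the record survives staff turnover, which matches the guidance in the list above.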

Step 4: Embed tolerance into your workflows

If tolerance is not embedded, it will not survive contact with urgent projects.

Minimum workflow hooks:

  • Risk assessment templates: include a “tolerance alignment” field (within tolerance / exceeds tolerance / unclear)
  • Exception requests: require mapping to a tolerance category and naming compensating controls
  • Risk register: add fields for “tolerance outcome,” “acceptance authority,” and “expiration”
  • Change management: require risk sign-off when changes introduce new connectivity, remote access, or third-party dependencies

Daydream can help by centralizing these fields, approvals, and linked evidence so you can show the end-to-end record: tolerance statement → communication → risk decision → remediation tracking.

Step 5: Communicate to “relevant stakeholders” with evidence

Stakeholders to include, at minimum:

  • Risk acceptance authorities (executives, plant leadership, system owners)
  • Security leadership and GRC
  • OT engineering/operations leaders (if OT is in scope)
  • Third-party relationship owners where cybersecurity risk is accepted or compensated

Communication methods that produce audit-ready evidence:

  • Controlled document publication with read receipt or attestation
  • Role-based training module completion
  • Leadership brief deck with attendance list
  • Standard operating procedure updates with version control

The requirement includes communication; treat proof of dissemination as part of your control design. 1

Step 6: Prove operation with sampled decisions and remediation follow-through

Retain:

  • Risk assessment outputs
  • Management decisions (accept/mitigate/transfer/avoid)
  • Remediation tracking showing action on out-of-tolerance items

This maps directly to the recommended control to retain assessment outputs, management decisions, and remediation tracking. 1
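Proving operation lends itself to a periodic QA script: sample recent decisions and flag any whose disposition diverges from the expected one for its tolerance category without a documented rationale. A sketch under assumed record shapes (the keys and category names are hypothetical):

```python
# Hypothetical expected dispositions per tolerance category.
EXPECTED = {
    "limited_and_contained": "accept_with_conditions",
    "material_business_disruption": "mitigate_or_transfer",
}

def misaligned(decisions: list[dict]) -> list[str]:
    """Return IDs of sampled decisions that diverge from the expected
    disposition for their category and lack a documented rationale."""
    flagged = []
    for d in decisions:
        expected = EXPECTED.get(d["category"])
        if expected is None:
            continue  # unmapped category: handle via escalation, not QA
        if d["disposition"] != expected and not d.get("exception_rationale"):
            flagged.append(d["id"])
    return flagged
```

The flagged list is itself retainable evidence: it shows you checked decisions against the standard and followed up on the outliers.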

Required evidence and artifacts to retain (audit-ready)

Keep these artifacts in a controlled repository:

  1. Cybersecurity Risk Tolerance Standard (approved, versioned)
  2. Approval record (meeting minutes, e-signature, governance ticket)
  3. Method appendix documenting criteria, inputs, reviewers, and decision process 1
  4. Stakeholder communication evidence (distribution list, attestations, training completion)
  5. Risk decision samples (accepted risks, exception requests, architecture review sign-offs) tied back to tolerance categories
  6. Remediation tracker extracts showing out-of-tolerance risks assigned, tracked, and closed 1
  7. Periodic review record (review notes, updates, rationale for changes)

Common exam/audit questions and hangups

Expect these questions, and prep the evidence before you are asked:

  • “Show me your cybersecurity risk tolerance. Who approved it, and when?”
  • “How do system owners know when a risk must be escalated?”
  • “Give me examples of risks accepted within tolerance vs. outside tolerance.”
  • “Where is the decision process documented (inputs, reviewers, authority)?” 1
  • “How do you ensure tolerance is communicated to relevant stakeholders?” 1
  • “Show remediation tracking for risks that exceeded tolerance.” 1

Hangups that slow reviews:

  • Tolerance exists only as “high/medium/low” with no decision thresholds.
  • There is no defined risk acceptance authority, so acceptance is informal.
  • Exceptions have no expiration, so “temporary” becomes permanent.
  • Communication is assumed, but there is no proof.

Frequent implementation mistakes (and how to avoid them)

  1. Writing tolerance as philosophy, not operating limits.
    Fix: use a table that ties residual risk outcomes to required actions and approvals.

  2. One tolerance statement for both IT and OT without nuance.
    Fix: keep an enterprise umbrella statement, then add OT-specific constraints if OT is in scope.

  3. No link between tolerance and remediation priority.
    Fix: define what “out of tolerance” automatically triggers (ticket severity, escalation path, compensating controls).

  4. Risk acceptance authority is unclear.
    Fix: define approval roles by risk tier and require documented sign-off.

  5. No evidence that decisions align to tolerance.
    Fix: sample decisions quarterly and document a lightweight QA review in GRC.

Enforcement context and risk implications

No public enforcement cases are provided in the source catalog for this requirement, so you should treat this as a maturity and defensibility expectation rather than a penalty-citation exercise.

Practically, weak tolerance definitions create two real risks:

  • Control failure risk: risk decisions look arbitrary under audit because there is no consistent standard.
  • Exposure risk: known issues remain open because nobody can say, credibly, whether they exceed tolerance. 1

Practical execution plan (30/60/90)

Use this as an operator’s rollout plan.

First 30 days (define + approve)

  • Confirm scope for C2M2 assessment and list risk decision points that must align.
  • Draft the Cybersecurity Risk Tolerance Standard with a decision table.
  • Document criteria, inputs, reviewers, and decision authority. 1
  • Obtain formal approval and store the signed artifact.

Days 31–60 (embed + communicate)

  • Update risk assessment and exception templates to require tolerance mapping.
  • Publish the standard in a controlled repository and capture attestation/read receipts.
  • Run stakeholder briefings for approvers and system owners; retain attendance evidence.
  • Configure Daydream (or your GRC system) to link tolerance categories to risk records and exceptions.

Days 61–90 (prove operation)

  • Pull a sample of risk decisions and confirm dispositions align to tolerance.
  • For out-of-tolerance risks, confirm remediation plans exist and are tracked to closure. 1
  • Hold a governance checkpoint: what decisions were blocked/escalated because of tolerance, and what updates are needed.

Frequently Asked Questions

What is the difference between risk appetite and risk tolerance for this requirement?

Appetite is directional guidance; tolerance is the actionable boundary that tells teams what must be escalated, mitigated, or cannot be accepted. C2M2 RISK-1.E expects a defined and communicated tolerance that drives consistent decisions. 1

Who are “relevant stakeholders” in practice?

Include anyone who approves risk acceptance, owns remediation funding, or grants exceptions (executives, system/asset owners, OT ops leaders, security leadership). Prove communication with attestations, training completion, or documented briefings. 1

Can we meet the requirement with a policy statement alone?

Usually no, because policy language rarely creates a consistent decision standard. You need documented criteria and a decision process, plus evidence it was communicated and is used in risk decisions. 1

How do we show auditors that tolerance is “operating,” not shelfware?

Produce a set of recent risk decisions (exceptions, risk acceptances, architecture reviews) that reference the tolerance categories, show the approval authority, and show remediation tracking for items that exceeded tolerance. 1

How should risk tolerance handle third-party risk?

Treat third-party cyber risk as a first-class input to tolerance decisions: define when a third-party relationship requires compensating controls, added monitoring, or executive acceptance. Keep the evidence in the same record set as your other risk decisions to show consistency. 1

What is the minimum evidence set we should retain if we are short on time?

Keep the approved tolerance document, the documented decision process (criteria/inputs/reviewers/authority), dissemination proof, and a small sample of risk decisions plus remediation tracking that tie back to tolerance. 1

Further implementation guidance

Use the cited DOE implementation guidance when translating the requirement into day-to-day operating steps. 2

Footnotes

  1. Cybersecurity Capability Maturity Model v2.1

  2. DOE C2M2 program

