Risk Identification
To meet the C2M2 “Risk Identification” requirement (RISK-1.A, MIL1), you must run a repeatable process that identifies cybersecurity risks within your defined scope, documents how risks are found, and retains outputs showing risks were recorded and routed to decisions. The test is simple: can you prove you systematically identify risks that could affect objectives, operations, or critical functions? 1
Key takeaways:
- Define scope and criteria first; “we identify risks” without documented criteria will not hold up in review.
- Use consistent inputs (asset inventory, threats, vulnerabilities, incidents, third parties) and record results in a risk register.
- Retain evidence of decisions and follow-up, not just the assessment output. 1
“Risk identification” is the front door to every cybersecurity risk program. If you cannot show how risks are identified, everything downstream (risk analysis, prioritization, treatment, monitoring) looks ad hoc, even if teams are doing good work. Under C2M2 v2.1 RISK-1.A (MIL1), the expectation is baseline but concrete: cybersecurity risks to the organization are identified within the scope you are assessing, and the organization can show how it did so. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing this requirement is to treat it as an evidence problem. You need (1) written criteria and a defined intake method for identifying risks, (2) a set of standard inputs that reliably produce risks, (3) a durable record (a risk register) that captures what was identified, and (4) proof that identified risks are routed to accountable owners for evaluation and action. C2M2 is commonly used by critical infrastructure operators, including energy sector organizations, where the business impact of missed risk identification is operational disruption, safety exposure, and regulatory scrutiny after incidents. 1
Requirement: Risk Identification (C2M2 RISK-1.A, MIL1)
C2M2 frames this as a minimum maturity expectation: you can name and evidence cybersecurity risks that could impact organizational objectives, operations, and delivery of critical functions. 1
Plain-English interpretation
You need a repeatable way to find cybersecurity risks in your environment, capture them, and show that leadership can see what was found. This is not limited to “IT security risks.” Your intake must cover operational technology (OT) where relevant, third-party dependencies, and business-process impacts in scope.
What “good” looks like at MIL1
- A defined scope (business unit, function, OT environment) for the C2M2 assessment. 1
- Documented criteria, inputs, reviewers, and decision process for risk identification. 1
- Assessment outputs and follow-up records showing identified risks were evaluated and addressed (at least routed into a decision and tracking mechanism). 1
Regulatory text
Excerpt (C2M2 v2.1 RISK-1.A): “Cybersecurity risks to the organization are identified.” 1
Operator meaning: You must be able to demonstrate, with artifacts, that your organization identifies cybersecurity risks within the defined scope. The practical bar is evidence of a working process: documented criteria plus outputs that show risks were captured and sent into management decision-making. 1
Who it applies to
Entity types
- Energy sector organizations
- Critical infrastructure operators 1
Operational context (what “scope” means in practice)
This applies when your organization has adopted C2M2 for a defined scope and is assessing maturity within that scope (for example, a generation site OT network, a transmission control center, or a corporate IT environment that supports critical operations). 1
If your scope includes third parties (managed service providers, cloud providers, OEMs, contractors), your risk identification inputs must explicitly account for those dependencies because they can materially affect operations.
What you actually need to do (step-by-step)
Use this as a build sheet. Each step maps to evidence an assessor will ask for.
1) Lock the scope and ownership
- Define the C2M2 scope statement: boundaries, key systems, critical functions supported, and what is explicitly out of scope.
- Assign an accountable owner for the risk identification process (often GRC, CISO org, or OT security lead) and name required reviewers (IT, OT, engineering, operations, legal/compliance, and major third-party owners where relevant).
Deliverable: Scope statement + RACI for risk identification.
2) Write your risk identification criteria (the “rules of the road”)
Document the criteria you use to decide what qualifies as a cybersecurity risk worth recording. Keep it simple:
- Risk definition (cause, event, impact framing)
- Impact categories relevant to your organization (operations downtime, safety, regulatory obligations, financial loss, data exposure)
- Minimum threshold for entry into the risk register (for example, any risk tied to critical function disruption, or any risk with a credible threat path)
This is explicitly called out as a recommended control: document criteria, inputs, reviewers, and decision process used for risk identification. 1
Deliverable: Risk identification procedure (or SOP) with criteria and roles.
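The entry-threshold rule above can be expressed as a simple, testable check. This is an illustrative sketch, not part of C2M2: the field names and the threshold logic (critical function disruption OR credible threat path) are assumptions you would replace with your own documented criteria.

```python
from dataclasses import dataclass

@dataclass
class CandidateRisk:
    """A candidate finding evaluated against the register-entry criteria."""
    title: str
    disrupts_critical_function: bool   # tied to critical function disruption?
    has_credible_threat_path: bool     # is there a plausible threat path?

def qualifies_for_register(risk: CandidateRisk) -> bool:
    """Apply the example minimum threshold from the SOP: record any risk
    tied to critical function disruption, or any risk with a credible
    threat path."""
    return risk.disrupts_critical_function or risk.has_credible_threat_path

# Example: a finding with a credible threat path qualifies for the register.
finding = CandidateRisk("Unmonitored OEM remote access", False, True)
print(qualifies_for_register(finding))  # True
```

Encoding the threshold this way makes it easy to show an assessor that the same rule is applied to every finding, regardless of facilitator.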
3) Standardize your risk inputs (so the process is repeatable)
Define the minimum set of inputs you will review on a recurring basis. Typical inputs that stand up well in audits:
- Asset inventory or system list for in-scope environments (IT and/or OT)
- Known vulnerabilities affecting in-scope assets (scanner outputs, advisories, patch exceptions)
- Threat intelligence relevant to your sector and technology stack (high-level is fine if that is all you have at MIL1)
- Incident and near-miss reports (security and operational)
- Change management artifacts (major upgrades, network segmentation changes, new remote access paths)
- Third-party inventory and key dependencies (MSPs, SaaS, OEM remote support, critical contractors)
Deliverable: “Risk identification inputs checklist” with named data sources and owners.
4) Run structured identification sessions (and capture outputs)
At MIL1, you do not need a complex quantitative model. You do need consistency.
- Hold a cross-functional working session (or asynchronous review) where the inputs are reviewed against the criteria.
- Record each identified risk with enough detail that a reviewer can understand it without being in the room:
  - Risk title
  - Scope/system affected
  - Risk statement (threat/event → impact)
  - Primary driver (vulnerability, third-party dependency, process gap, configuration weakness)
  - Risk owner (person/team)
  - Date identified and source input (scanner report, incident, engineering review)
Deliverable: Meeting notes + initial risk list exported into your risk register.
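The minimum record detail listed above can be sketched as a small data structure with a completeness check, so incomplete entries are flagged before they reach the register. The field names and the rejection rule are illustrative assumptions, not prescribed by C2M2.

```python
from dataclasses import dataclass, fields
from datetime import date

@dataclass
class RiskRecord:
    """One identified risk, capturing the minimum session fields."""
    title: str            # risk title
    scope: str            # scope/system affected
    statement: str        # threat/event -> impact
    driver: str           # vulnerability, third-party dependency, process gap...
    owner: str            # accountable person/team
    identified_on: date   # date identified
    source_input: str     # scanner report, incident, engineering review

def missing_fields(record: RiskRecord) -> list[str]:
    """Return the names of empty fields, so incomplete records can be
    rejected before they enter the risk register."""
    return [f.name for f in fields(record) if not getattr(record, f.name)]

# Hypothetical example record: no owner assigned yet, so it is flagged.
risk = RiskRecord(
    title="Flat network between corporate IT and plant OT",
    scope="Generation site OT network",
    statement="Malware pivot from IT -> loss of HMI availability",
    driver="process gap (segmentation)",
    owner="",
    identified_on=date(2024, 5, 1),
    source_input="engineering review",
)
print(missing_fields(risk))  # ['owner']
```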
5) Put every identified risk into a risk register with status
Your risk register is the system of record. It can be a GRC tool, ticketing system, or controlled spreadsheet if access and change control are managed. Minimum fields to pass most reviews:
- Unique ID
- Owner
- Business/operational impact category
- Link to evidence (scanner report, incident ticket, third-party assessment)
- Decision status (pending review, accepted, mitigation planned, remediated)
- Tracking link to remediation work (tickets, project plans)
This aligns with the recommended control to retain assessment outputs, management decisions, and remediation tracking. 1
Deliverable: Risk register extract showing recent identified risks and their disposition.
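If your register lives in a controlled spreadsheet or export, the minimum fields above can be checked mechanically. The snippet below is a hedged sketch: the column names, status values, and example links are assumptions you would align with your own tool.

```python
# Minimum register fields from the list above; names are illustrative.
REQUIRED_FIELDS = {
    "id", "owner", "impact_category",
    "evidence_link", "decision_status", "tracking_link",
}
VALID_STATUSES = {"pending review", "accepted", "mitigation planned", "remediated"}

def validate_register_row(row: dict) -> list[str]:
    """Return problems with one register row (missing fields, bad status)."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - row.keys())]
    status = row.get("decision_status")
    if status is not None and status not in VALID_STATUSES:
        problems.append(f"unknown decision status: {status!r}")
    return problems

# Hypothetical row that passes the check.
row = {
    "id": "RISK-0042",
    "owner": "OT Security Lead",
    "impact_category": "operations downtime",
    "evidence_link": "https://scanner.example/report/77",  # placeholder link
    "decision_status": "mitigation planned",
    "tracking_link": "TICKET-1234",
}
print(validate_register_row(row))  # [] -> row passes
```

Running a check like this before each governance review keeps the register extract audit-ready instead of cleaning it up under deadline.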
6) Prove the “decision and follow-up” loop works
Risk identification alone is rarely the end of the examiner’s questions. They will ask what happened next.
- Establish a management review cadence (risk committee, security steering committee, OT governance).
- Capture decisions: accept, mitigate, transfer, avoid, defer with rationale.
- Tie mitigations to tracking (tickets, POAMs, project milestones) and preserve evidence of closure.
Deliverable: Governance minutes + decision log + remediation tracking report.
7) Operationalize in third-party and change workflows
This is the fastest way to keep risk identification current:
- Third-party onboarding: require security review outputs to feed the risk register (for example, gaps found in due diligence become risks with owners).
- Major change approvals: add a “new cyber risk introduced?” checkpoint with required documentation.
Deliverable: Workflow screenshots/templates showing risk intake embedded in third-party and change processes.
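The change-approval checkpoint can be sketched as a gate that refuses approval until the "new cyber risk introduced?" question is answered and, if the answer is yes, a register entry is linked. Field names and the gating logic are assumptions for illustration.

```python
def change_ready_for_approval(change: dict) -> tuple[bool, str]:
    """Gate a change request on the cyber risk checkpoint.
    Returns (ok, reason)."""
    answer = change.get("new_cyber_risk_introduced")  # True / False / None
    if answer is None:
        return False, "cyber risk checkpoint not answered"
    if answer and not change.get("risk_register_id"):
        return False, "new risk declared but no register entry linked"
    return True, "checkpoint satisfied"

# A change that introduces a new remote access path must link a register entry.
change = {"title": "Enable OEM remote support", "new_cyber_risk_introduced": True}
print(change_ready_for_approval(change))
```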
Required evidence and artifacts to retain
Use this table as your audit evidence index.
| Artifact | What it proves | Minimum retention approach |
|---|---|---|
| Scope statement for C2M2 assessment | You know what environment you’re identifying risks for | Version-controlled document repository |
| Risk identification procedure/SOP | Criteria, inputs, roles, review steps exist | Approved doc with revision history |
| Inputs checklist + sample inputs | Identification is repeatable and based on defined sources | Keep representative samples and links |
| Risk register export | Risks are actually identified and recorded | System report or controlled spreadsheet |
| Workshop notes / intake records | How risks were identified and by whom | Dated notes, attendee list |
| Decision log / governance minutes | Management visibility and disposition | Meeting minutes with decisions |
| Remediation tracking (tickets/POAMs) | Follow-up exists beyond identification | Ticket links, status reports |
The key is coherence: artifacts must connect (input → risk → decision → tracking). 1
Common exam/audit questions and hangups
Questions you should be ready for
- “Show me your documented criteria for identifying cybersecurity risks.” 1
- “What inputs do you review to identify risks? Who owns each input?”
- “Give examples of risks identified in the last cycle and show what happened next.” 1
- “How do you identify risks introduced by third parties that support critical functions?”
- “How do you know your risk identification covers OT and operational impacts, not only IT?”
Hangups that slow teams down
- No clear scope: risks are listed, but the assessor cannot tell what environment they relate to.
- No criteria: teams rely on informal judgment; results vary by facilitator.
- No linkage to action: a risk register exists, but decisions and remediation tracking are missing or scattered.
Frequent implementation mistakes (and how to avoid them)
- Treating vulnerability scans as "risk identification." Fix: Use scan findings as an input, then write risk statements tied to operational impact and ownership.
- One-time risk workshop with no operational hooks. Fix: Embed risk intake into third-party onboarding, incident postmortems, and change management so risks continue to appear.
- Recording risks without accountable owners. Fix: Require a named owner before a risk is marked "identified and logged." If ownership is disputed, assign an interim owner (usually the system/service owner) until governance resolves it.
- Over-documenting narratives and under-documenting decisions. Fix: Keep risk descriptions short, but make decision fields mandatory (accept/mitigate/defer) and link to evidence.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this C2M2 requirement, so you should treat “enforcement context” as indirect: after a cyber event, regulators, customers, and internal auditors often ask for proof that risks were identified and governed. C2M2’s own implementation risk factor is that without documented criteria and follow-up, significant exposures can remain unaddressed and your decisions may not stand up to internal control testing, audits, customer diligence, or regulator review. 1
Practical execution plan (30/60/90)
If speed matters, use this phased plan to get to a defensible MIL1 posture quickly.
First 30 days (stabilize the minimum viable process)
- Publish a scope statement for the C2M2 assessment. 1
- Draft and approve a short risk identification SOP: criteria, inputs, reviewers, decision routing. 1
- Stand up a risk register (tool or controlled spreadsheet) with required fields.
- Run one identification cycle using your inputs checklist and log the risks.
Days 31–60 (prove repeatability and governance)
- Hold a management review to disposition the first set of risks; capture decisions and minutes. 1
- Tie the top risks to remediation tracking (tickets, POAMs, project plans) and link them in the register. 1
- Add third-party and change management intake hooks so new risks enter the same register.
Days 61–90 (harden evidence and reduce single-person dependency)
- Run a second risk identification cycle to demonstrate the process repeats with consistent inputs.
- Test audit readiness: pick a risk at random and trace input → risk record → decision → remediation artifact.
- If you are scaling, consider Daydream to centralize third-party risk intake, evidence collection, and risk register workflows so identification outputs and follow-up stay connected across teams and third parties.
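The audit-readiness spot check in the plan above can be sketched as a script that samples a register entry at random and verifies the chain input → risk record → decision → remediation artifact is unbroken. The data model is a toy assumption; in practice you would run this against an export from your GRC tool or spreadsheet.

```python
import random

# Toy register extract; field names are illustrative.
register = [
    {"id": "RISK-0001", "source_input": "scanner report 77",
     "decision": "mitigate", "remediation_ticket": "TICKET-1234"},
    {"id": "RISK-0002", "source_input": "incident INC-9",
     "decision": "accepted", "remediation_ticket": None},  # accepted: no ticket needed
]

def trace_is_complete(risk: dict) -> bool:
    """Check input -> risk -> decision -> tracking for one register entry."""
    if not risk.get("source_input") or not risk.get("decision"):
        return False
    if risk["decision"] == "mitigate" and not risk.get("remediation_ticket"):
        return False
    return True

# Pick a risk at random, as the 61-90 day plan suggests, and trace it.
sampled = random.choice(register)
print(sampled["id"], "trace complete:", trace_is_complete(sampled))
```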
Frequently Asked Questions
Do we need a formal risk scoring model to satisfy “Risk Identification”?
Not for the baseline requirement. You need evidence that risks are identified and recorded with documented criteria and repeatable inputs. Scoring helps prioritization, but identification is about reliably finding and capturing risks. 1
What’s the minimum evidence that will satisfy an assessor?
A documented risk identification procedure (criteria, inputs, roles) plus a risk register extract showing identified risks and some form of decision or follow-up tracking. The evidence must connect from source inputs to management visibility. 1
How do we cover OT risk identification without boiling the ocean?
Start with the scoped critical functions and the systems that support them, then use a small set of OT-relevant inputs (asset list, remote access paths, patch/exception lists, incidents). Record risks in the same register so governance stays unified.
Do third-party issues belong in the risk register or a separate vendor tracker?
Put them where they can be governed and tracked. If third-party dependencies can impact objectives or critical functions, capture them as cybersecurity risks with owners and disposition, then link out to your third-party due diligence artifacts.
Our teams identify risks informally in meetings. How do we make that “count”?
Convert the informal practice into an SOP and require meeting outputs to be logged as risk register entries with dates, sources, and owners. Keep the process lightweight but consistent. 1
How often must we run risk identification?
C2M2 RISK-1.A does not specify a frequency in the provided text. Set a cadence that matches operational change (and run ad hoc identification after major incidents, major changes, or new third-party dependencies). 1
Footnotes
1. Cybersecurity Capability Maturity Model (C2M2), Version 2.1, U.S. Department of Energy.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream