Actions to address risks and opportunities — General
To meet ISO/IEC 42001 Clause 6.1.1, you must translate your Clause 4 context (internal/external issues and stakeholder requirements) into a documented set of AI risks and opportunities, then decide which ones require action so the AI management system achieves its intended outcomes.[1] This is the bridge between “what matters” and “what you will do about it.”
Key takeaways:
- Start from Clause 4 outputs, not a generic AI risk checklist.[1]
- Define a repeatable method to identify, rate, and decide which AI risks/opportunities require action.[1]
- Keep audit-ready evidence: inputs, decisions, owners, and follow-through into planning and controls.[1]
Clause 6.1.1 is where many AI governance programs become “real” for auditors and operators. Through Clause 4, you have context: what AI systems you run, where they are used, who is affected, what obligations you have, and what outcomes you want. Clause 6.1.1 requires you to convert that context into decisions: the risks and opportunities that must be addressed so your AI management system reaches its intended outcomes.[1]
For a Compliance Officer, CCO, or GRC lead, the operational goal is simple: create a documented, repeatable process that (1) ingests Clause 4 issues and requirements, (2) identifies AI risks and opportunities tied to those inputs, (3) determines which ones need action, and (4) creates traceability into the rest of your management system planning. If you do this well, you reduce “unknown unknowns,” focus resources on the highest-risk AI use cases, and avoid governance theater where policies exist but decisions are not evidenced.
This page gives requirement-level guidance you can implement quickly: who must be involved, what artifacts to produce, how to structure the risk/opportunity register, what auditors ask, and what teams commonly get wrong.
Regulatory text
ISO/IEC 42001:2023 Clause 6.1.1 (excerpt): “The organization shall consider the issues and requirements determined in Clause 4 and determine the risks and opportunities that need to be addressed to ensure the AI management system can achieve its intended outcomes.”[1]
What the operator must do
- Use Clause 4 outputs as mandatory inputs. You are expected to start with the “issues” and “requirements” you already determined.[1]
- Determine risks and opportunities that require action. This is a decision point: not every risk becomes a project, but you must show a method for deciding what gets addressed and why.[1]
- Tie decisions to intended outcomes. You need a line of sight from risks/opportunities to what your AI management system is trying to achieve (e.g., safe deployment, regulatory conformity, trustworthy AI operations).[1]
Plain-English interpretation
Clause 6.1.1 requires a context-driven AI risk and opportunity determination process. If your Clause 4 context says you process sensitive data, operate in regulated markets, or rely on third-party AI components, your risk identification must reflect that. Then you decide which risks/opportunities are significant enough to act on, assign ownership, and feed the results into planning.
This is not a one-time “AI risk assessment workshop.” It is a managed system activity with documented inputs, outputs, and traceability.
Who it applies to (entity and operational context)
Clause 6.1.1 applies to any organization implementing an AI management system, including:
- AI providers building or offering AI systems or AI-enabled services.[1]
- AI users deploying AI systems in business processes (including internal tools).[1]
- Organizations that govern AI across multiple business units, subsidiaries, or product lines.[1]
Operationally, it matters most when:
- AI is used in decisions affecting people (customers, employees, patients, students).
- AI relies on third parties (foundation model providers, data brokers, labeling firms, cloud platforms).
- AI outputs drive regulated actions (lending, insurance, healthcare, privacy, security, safety).
What you actually need to do (step-by-step)
1) Confirm and package the Clause 4 inputs
Collect the specific Clause 4 outputs you will use as inputs:
- Internal/external issues relevant to AI (business model, operating environment, constraints).
- Interested parties and their requirements (regulators, customers, employees, impacted individuals, third parties).
- AI management system scope and intended outcomes.
Operator tip: Put these into a short “Context & Requirements Summary” document and treat it as a controlled input. Auditors want to see that Clause 6.1.1 started from Clause 4, not from someone’s preferred risk template.[1]
2) Define your method for identifying AI risks and opportunities
Document a procedure that answers:
- Where you look for risks/opportunities (AI inventory, incidents, model changes, third-party changes, regulatory changes, customer complaints).
- Who participates (Compliance/Legal, Security, Data Science/ML engineering, Product, HR, Procurement, business owners).
- What “risk” means in your context (harm, legal nonconformity, security failure, operational outage, reputational impact).
- What “opportunity” means (improved controls, better monitoring, reduced bias risk, better documentation, reduced third-party dependency).
Keep it simple but explicit. A lightweight rubric beats an unwritten practice every time.
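To make that concrete, here is a minimal rating rubric sketched in Python. The scales, labels, and thresholds are assumptions chosen for this example; ISO/IEC 42001 does not prescribe any particular scale.

```python
# A minimal likelihood x severity rubric (a sketch assuming a 3x3 scale).
# ISO/IEC 42001 does not prescribe scales or thresholds; document your own.
LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

def rate(likelihood: str, severity: str) -> str:
    """Combine likelihood and severity into a rating band."""
    score = LIKELIHOOD[likelihood] * SEVERITY[severity]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# A likely, moderate-severity harm scenario rates "high" on this scale.
assert rate("likely", "moderate") == "high"
```

Even a rubric this small gives you what an unwritten practice cannot: a consistent, explainable rating an auditor can retrace.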
3) Build an AI risk and opportunity register tied to AI use cases
Create a register that is keyed to AI systems/use cases. Minimum recommended fields (a schema sketch in Python follows this list):
- AI system/use case name, owner, business process, deployment status.
- Clause 4 requirement(s) implicated (privacy obligations, security requirements, customer commitments, internal policies).
- Risk/opportunity statement written in clear language.
- Trigger/event (new model release, new dataset, new geography, third-party change).
- Existing controls.
- Risk rating method and result (your chosen scale).
- Decision: “needs action” vs “accept/monitor,” with rationale.
- Action owner and due event (e.g., “before production release,” “before renewal,” “on material change”).
- Evidence pointer (links to testing, approvals, monitoring dashboards, third-party due diligence).
Third-party angle: If a third party supplies model components, training data, hosting, or monitoring, record third-party dependency risks here and link to third-party due diligence artifacts.
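To make those fields concrete, the sketch below models one register entry as a Python dataclass. Every field name here is an assumption chosen for illustration, not a schema the standard mandates.

```python
from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    """Illustrative register entry; field names are assumptions for this
    sketch, not an ISO/IEC 42001 schema."""
    use_case: str                          # AI system/use case name
    owner: str                             # accountable business owner
    business_process: str
    deployment_status: str                 # e.g., "pilot", "production"
    clause4_requirements: list[str] = field(default_factory=list)
    statement: str = ""                    # plain-language risk/opportunity
    trigger: str = ""                      # e.g., "new model release"
    existing_controls: list[str] = field(default_factory=list)
    rating: str = ""                       # result of your rating method
    determination: str = ""                # "needs action" or "accept/monitor"
    rationale: str = ""                    # why, approved by whom, and when
    action_owner: str = ""
    due_event: str = ""                    # e.g., "before production release"
    evidence_links: list[str] = field(default_factory=list)
    third_party_dependency: str = ""       # pointer to due diligence artifacts
```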
4) Determine which items “need to be addressed”
Clause 6.1.1 is explicit: identify risks and opportunities that need to be addressed.[1] Your determination should include the following (a decision-rule sketch follows below):
- Decision criteria (e.g., risk rating thresholds; “any legal/regulatory requirement gap requires action”; “any high-severity harm scenario requires action”).
- Approval authority (AI governance committee, CCO, product risk council).
- Treatment options (mitigate, avoid, transfer, accept with monitoring).
- For opportunities, a prioritization approach (quick wins vs foundational improvements).
Common hangup: Teams document risks but never document the decision of what will be addressed. Fix this by adding a mandatory “Determination” field with approver and date.
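Here is a minimal sketch of what an explicit, evidenced determination can look like, reusing the illustrative RegisterEntry above. The rules mirror the example criteria in this section and are assumptions, not requirements from the standard.

```python
def determine(entry: RegisterEntry, approver: str, date: str) -> RegisterEntry:
    """Apply documented decision criteria and record who approved the
    determination and when. Rules and thresholds are illustrative."""
    if any("legal" in r.lower() or "regulatory" in r.lower()
           for r in entry.clause4_requirements):
        entry.determination = "needs action"
        entry.rationale = "legal/regulatory requirement gap requires action"
    elif entry.rating == "high":
        entry.determination = "needs action"
        entry.rationale = "rating exceeds documented threshold"
    else:
        entry.determination = "accept/monitor"
        entry.rationale = "below threshold; reassess on next trigger"
    # The mandatory "Determination" evidence: approver and date.
    entry.rationale += f" (approved by {approver}, {date})"
    return entry
```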
5) Turn determinations into planned actions and controls
Clause 6.1.1 is “general,” but it must connect to execution. For each “needs action” item, define:
- The control(s) or activities you will implement (testing, monitoring, access controls, documentation, human oversight, escalation paths).
- Ownership and accountable executive.
- Dependencies (data platform changes, procurement clauses, model monitoring tooling).
- Acceptance criteria (what “done” means: test passed, monitoring live, contract updated).
If you use a GRC tool or a workflow platform like Daydream, configure it so every risk/opportunity record can generate a tracked action with an owner, approval step, and evidence upload. That traceability is usually what closes the audit loop.
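Whatever system of record you use, the point is that each “needs action” determination generates a trackable record like the hypothetical one below. This shape is an illustration, not the API of Daydream or any GRC tool.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedAction:
    """Hypothetical action record generated from a 'needs action' entry."""
    register_entry_id: str
    control_activity: str                  # e.g., "enable output monitoring"
    action_owner: str
    accountable_executive: str
    dependencies: list[str] = field(default_factory=list)
    acceptance_criteria: str = ""          # what "done" means
    approval: str = ""                     # approver and date, once granted
    evidence_link: str = ""                # pointer to retained proof

    def is_closed(self) -> bool:
        # An action closes only with both an approval and retained evidence;
        # that linkage is what closes the audit loop.
        return bool(self.approval) and bool(self.evidence_link)
```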
6) Set review triggers and keep it alive
Document when you refresh the register (a trigger mapping is sketched after this list):
- On AI system onboarding and before production launch.
- On material changes (model update, dataset change, new user population, new geography, third-party substitution).
- After incidents/near misses.
- During periodic management review cycles.
Auditors will test whether your method is operating, not whether it exists.
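These triggers can be encoded as an event-to-obligation mapping so reassessment is event-driven rather than calendar-only. The event names below are assumptions for this sketch.

```python
# Hypothetical trigger-to-review mapping; event names are illustrative.
REVIEW_TRIGGERS = {
    "system_onboarding": "full assessment before production launch",
    "model_update": "reassess affected register entries",
    "dataset_change": "reassess affected register entries",
    "new_user_population": "reassess impacts and requirements",
    "new_geography": "reassess legal/regulatory requirements",
    "third_party_substitution": "rerun third-party due diligence",
    "incident_or_near_miss": "root-cause review and register update",
    "management_review": "periodic full-register refresh",
}

def review_obligation(event: str) -> str:
    """Return the documented review obligation for a lifecycle event."""
    return REVIEW_TRIGGERS.get(event, "no reassessment required; log the event")
```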
Required evidence and artifacts to retain
Keep artifacts that show inputs → analysis → determination → action:
- Clause 4 context/requirements summary used as input.[1]
- Procedure for identifying and determining risks/opportunities (version-controlled).
- AI inventory and scoped list of AI systems/use cases.
- AI risk & opportunity register with decisions, owners, and dates.
- Meeting minutes/approvals from the governance body that determined what “needs action.”
- Action plans and status tracking (tickets, project plans).
- Control evidence: testing reports, monitoring configurations, incident logs, training records.
- Third-party due diligence outputs where third parties are in scope (security reviews, contractual clauses, change notices).
Common exam/audit questions and hangups
Auditors and internal reviewers tend to ask:
- “Show me how Clause 4 outputs feed into your risk determination.” Provide traceable references in the register.[1]
- “How do you decide what needs action?” They want decision criteria, not instincts.
- “Does this cover all AI in scope?” Expect sampling against your AI inventory.
- “Where are opportunities documented?” Many programs only record risks.
- “Show me one risk that resulted in a control change.” They test operational linkage.
Hangups that trigger nonconformities:
- No documented method; only slideware.
- Register exists but has no determinations, owners, or evidence pointers.
- Risks are generic and not tied to specific AI use cases and requirements.
Frequent implementation mistakes and how to avoid them
Mistake: Starting with a generic risk library
Fix: Use the Clause 4 requirements as prompts. For each requirement, ask “what could prevent this outcome?” and “what could improve it?”[1]
Mistake: Treating “opportunities” as optional fluff
Fix: Define opportunity categories that matter operationally (monitoring upgrades, documentation improvements, better third-party controls) and track at least a small set to completion.
Mistake: No integration with third-party risk management
Fix: Add a third-party dependency field and require procurement/legal review when the treatment requires contract terms, audit rights, or change notification.
Mistake: No material-change trigger
Fix: Require reassessment on model, data, population, or supplier changes. Put it into release management and procurement renewal workflows.
Enforcement context and risk implications
No public enforcement cases were available in the sources for this page, so it does not cite enforcement outcomes. Practically, weak risk determination shows up as:
- Untracked AI use cases and shadow deployments.
- Inconsistent approvals and missing accountability.
- Control gaps for third-party AI components.

These increase the likelihood of incidents, customer harm, contract breaches, and audit findings.
A practical 30/60/90-day execution plan
First 30 days (Immediate stabilization)
- Name an owner for the Clause 6.1.1 process (often GRC or the AI governance lead).
- Collect Clause 4 issues/requirements into a controlled “Context & Requirements Summary.”[1]
- Stand up an initial AI inventory (even if incomplete) and identify high-impact use cases.
- Draft the risk/opportunity determination procedure and a register template.
- Run one pilot workshop on a high-impact AI use case and produce a completed register entry with a “needs action” decision.
Next 60 days (Operationalize and connect workflows)
- Expand the register to cover all in-scope AI systems.
- Establish decision governance (who approves determinations, how often, and what evidence is required).
- Integrate third-party inputs: procurement intake questions, due diligence mapping, and contract change triggers.
- Convert “needs action” items into tracked remediation tasks with owners and acceptance criteria.
- Configure a system of record (GRC tool or Daydream workflow) to store evidence and approvals.
By 90 days (Prove it works and prepare for audit sampling)
- Run a second cycle triggered by a real event (model update, renewal, incident review) to show the process is active.
- Perform an internal control check: sample a few AI use cases and verify traceability from Clause 4 requirement → risk/opportunity → determination → action evidence.[1] A scriptable version of this check is sketched after this list.
- Tune rating criteria and decision thresholds based on what you learned.
- Prepare an “audit packet” that contains the procedure, the register, meeting approvals, and example evidence bundles.
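That traceability sample can itself be scripted; here is a minimal sketch against the illustrative RegisterEntry defined earlier. The checks are assumptions; align them with your own procedure.

```python
def traceability_gaps(entry: RegisterEntry) -> list[str]:
    """Flag breaks in the Clause 4 -> risk -> determination -> evidence
    chain. Checks are illustrative; align them with your procedure."""
    gaps = []
    if not entry.clause4_requirements:
        gaps.append("no Clause 4 requirement referenced")
    if not entry.determination:
        gaps.append("no documented determination")
    if entry.determination == "needs action" and not entry.action_owner:
        gaps.append("needs-action item has no owner")
    if entry.determination == "needs action" and not entry.evidence_links:
        gaps.append("no evidence pointer for an actioned item")
    return gaps
```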
Frequently Asked Questions
Do we need a formal risk scoring model to meet Clause 6.1.1?
You need a documented method to determine which risks and opportunities require action.[1] A scoring model helps, but the audit focus is consistency, rationale, and traceability to actions.
How do we handle “opportunities” without creating a wish list?
Define opportunity categories tied to intended outcomes, then require an owner and a decision (do now, do later, or reject with rationale). Track a subset through completion to show the mechanism operates.
Does this requirement apply to third-party AI tools we “just use,” like SaaS with AI features?
Yes. If the AI is in your scope and affects your outcomes, you still need to determine risks and opportunities.[1] Treat the provider as a third-party dependency and capture contract, monitoring, and change-notice risks.
What’s the minimum evidence an auditor will accept?
A controlled procedure, a risk/opportunity register tied to Clause 4 inputs, and examples where determinations led to tracked actions with retained proof.[1] Verbal explanations without artifacts rarely hold up.
Who should approve “risk acceptance” decisions?
Set an approval matrix based on impact and accountability. High-impact AI use cases usually require a business owner plus a control function sign-off (Compliance, Security, or a governance committee).
Can we combine this with our enterprise risk management (ERM) process?
Yes, as long as the ERM process explicitly consumes Clause 4 AI context and produces AI-specific determinations and actions.[1] ERM registers often need added fields for AI system traceability and lifecycle triggers.
Footnotes
[1] ISO/IEC 42001:2023, Artificial intelligence — Management system.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream