AI risk treatment

ISO/IEC 42001 Clause 8.3 requires you to execute your AI risk treatment plan: implement the selected risk controls, track whether they worked, and retain documented evidence of the results. Operationally, that means converting treatment decisions into assigned control actions, verifying implementation and effectiveness, recording residual risk, and keeping auditable records. 1

Key takeaways:

  • Turn “risk treatment decisions” into owned, testable control tasks with deadlines and acceptance criteria.
  • Keep evidence of both implementation (the control exists) and results (the control worked as intended).
  • Close the loop: update residual risk and exceptions, and feed results back into AI risk assessment and change management.

“AI risk treatment” is where AI governance stops being a document set and becomes operational reality. Many programs do solid AI risk assessments, then struggle to prove they actually did something about the risks they found. Clause 8.3 closes that gap by requiring implementation of the AI risk treatment plan and its controls, plus documented information showing results. 1

For a Compliance Officer, CCO, or GRC lead, the practical question is simple: can you show an auditor that every material AI risk has a treatment decision, that the chosen controls were implemented, and that you retained evidence that the controls achieved the intended outcome? This requirement is not asking for “perfect safety.” It is asking for disciplined execution: defined actions, accountable owners, verification, and records.

This page gives requirement-level implementation guidance you can apply immediately: who must comply, how to run the treatment workflow end-to-end, what artifacts to retain, and where audits typically get stuck. All guidance is anchored to the Clause 8.3 obligation to implement the treatment plan and controls and retain documented results. 1

Regulatory text

Clause 8.3 (AI risk treatment): “The organization shall implement the AI risk treatment plan and controls, and retain documented information of the results.” 1

What the operator must do:

  1. Implement the AI risk treatment plan: execute the actions your organization selected to address identified AI risks.
  2. Implement the controls: put the specific risk controls into operation (technical, process, contractual, or governance controls).
  3. Retain documented information of results: keep records showing the outcomes of treatment, such as verification/testing, sign-offs, monitoring outputs, and residual risk decisions. 1

Plain-English interpretation (what examiners want to see)

Clause 8.3 is an execution-and-evidence requirement. You must be able to show a traceable chain from risk to retained record:

  • Risk identified
  • Treatment decision and plan
  • Control implemented
  • Result verified
  • Residual risk accepted/escalated
  • Records retained. 1

If your AI risk register says “model hallucination may cause customer harm,” auditors will ask: What did you do about it, who owns it, what control was implemented (for example, human review for high-impact outputs), how do you know it works, and where is the evidence?
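
One way to make that chain testable is to store it as linked record IDs and check for gaps before an auditor does. The sketch below is illustrative only; the register shape, IDs, and field names are assumptions, not anything Clause 8.3 prescribes.

```python
# Minimal sketch of the traceable chain as linked records. All record
# shapes, IDs, and field names are hypothetical, for illustration only.

CHAIN_FIELDS = [
    "treatment_id",    # risk -> treatment decision and plan
    "control_id",      # treatment -> implemented control
    "result_id",       # control -> verified result
    "acceptance_id",   # result -> residual risk acceptance/escalation
]

risk_register = {
    "RISK-014": {
        "description": "Model hallucination may cause customer harm",
        "treatment_id": "TRT-014",
        "control_id": "CTL-031",   # e.g., human review for high-impact outputs
        "result_id": "VER-077",
        "acceptance_id": None,     # residual risk not yet formally accepted
    },
}

def missing_links(risk_id):
    """Return the chain fields not yet populated for a given risk."""
    record = risk_register[risk_id]
    return [f for f in CHAIN_FIELDS if not record.get(f)]

print(missing_links("RISK-014"))  # ['acceptance_id'] -> the chain breaks here
```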

Who it applies to

Clause 8.3 applies to any organization operating an AI management system under ISO/IEC 42001, including:

  • AI providers building or offering AI systems.
  • AI users deploying AI systems internally or into customer-facing workflows.
  • Organizations using third parties for AI components (models, data, labeling, MLOps platforms) where risk treatment includes contractual and oversight controls. 1

Operational contexts where this becomes real work:

  • Launching a new model or major model update.
  • Deploying a third-party model into a regulated or sensitive process (support, HR, underwriting, security operations).
  • Responding to an incident, drift, or a safety failure.
  • Scaling from pilots to production, where “ad hoc mitigations” must become controlled processes.

What you actually need to do (step-by-step)

Below is a practical workflow that maps cleanly to “implement the plan and controls” and “retain documented results.” 1

Step 1: Convert treatment decisions into an implementable control backlog

For each AI risk treatment item, define:

  • Control objective (what risk outcome you are reducing).
  • Control design (what will be built or changed).
  • Control owner (one accountable person, not a committee).
  • Implementation tasks (tickets, playbooks, engineering work).
  • Acceptance criteria (how you’ll verify completion and effectiveness).
  • Dependencies (data pipeline, vendor contract addendum, model monitoring).

Practical tip: write acceptance criteria so a second person can verify them without interpretation.
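
A minimal sketch of one treatment item as a structured record, using the fields above. Field names and values are illustrative assumptions; map them onto whatever GRC tool or ticketing schema you actually use.

```python
from dataclasses import dataclass, field

@dataclass
class TreatmentItem:
    risk_id: str
    control_objective: str                 # what risk outcome is being reduced
    control_design: str                    # what will be built or changed
    control_owner: str                     # one accountable person
    implementation_tasks: list[str] = field(default_factory=list)
    acceptance_criteria: list[str] = field(default_factory=list)
    dependencies: list[str] = field(default_factory=list)
    due_date: str = ""

item = TreatmentItem(
    risk_id="RISK-014",
    control_objective="Reduce customer harm from hallucinated answers",
    control_design="Human review gate on high-impact support responses",
    control_owner="jane.doe",
    implementation_tasks=["TICKET-2231: add review queue",
                          "TICKET-2232: update SOP"],
    # Written so a second person can verify each criterion without interpretation:
    acceptance_criteria=[
        "100% of responses tagged 'high-impact' appear in the review queue",
        "No high-impact response is sent before a reviewer approval is logged",
    ],
    dependencies=["Response-tagging pipeline in production"],
    due_date="2025-09-30",
)
```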

Step 2: Assign governance actions for risk acceptance and exceptions

Not every risk gets mitigated. Some are transferred, avoided, or accepted. Your treatment plan should produce explicit decisions:

  • Accepted residual risk with rationale and approver.
  • Temporary exceptions with compensating controls and an end condition.
  • Risk transfer items (contractual terms, insurance, third-party attestations) with the actual documents attached.

This is where many programs fail: they implement partial controls but do not document who accepted what residual risk and why.
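
One way to force that documentation is to treat each decision as a record that fails a completeness check until the who, why, and until-when are filled in. A minimal sketch, with assumed field names and an assumed completeness rule:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ResidualRiskDecision:
    risk_id: str
    decision: str                                  # "accept" | "exception" | "transfer"
    rationale: str
    approver: str                                  # a named individual, not a committee
    conditions: str = ""                           # limits set by the approver
    compensating_controls: Optional[list] = None   # required for exceptions
    end_condition: str = ""                        # when a temporary exception expires
    review_trigger: str = ""                       # what forces re-evaluation

def is_audit_ready(d: ResidualRiskDecision) -> bool:
    """Complete only when who/why are captured; exceptions also need
    compensating controls and an end condition."""
    base = bool(d.rationale and d.approver)
    if d.decision == "exception":
        return base and bool(d.compensating_controls) and bool(d.end_condition)
    return base
```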

Step 3: Implement the controls in operations (not just in policy)

Controls can be technical or procedural. Examples of “implemented” evidence:

  • A model output filter is deployed and enforced in production.
  • A human-in-the-loop review step is embedded in the workflow for high-impact outputs.
  • Access controls restrict who can change prompts, system instructions, or model parameters.
  • A third-party AI contract includes audit rights, incident notification, and data handling commitments, and your procurement process enforces those clauses.

Implementation means the control is operating for the defined scope, not merely drafted.
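
As an illustration of that distinction, here is a minimal sketch of the human-in-the-loop example above, enforced in the code path rather than in a policy document. The topic list, function names, and routing rule are hypothetical:

```python
# Hypothetical human-in-the-loop gate: the control runs in production,
# and its audit trail accumulates as a side effect of normal operation.

HIGH_IMPACT_TOPICS = {"refunds", "account_closure", "medical", "legal"}

def requires_review(topic: str) -> bool:
    """Classify an output as high-impact per the documented control scope."""
    return topic in HIGH_IMPACT_TOPICS

def deliver(output: str, topic: str, review_queue: list, audit_log: list) -> str:
    """Route high-impact outputs to human review and log every decision."""
    if requires_review(topic):
        review_queue.append(output)   # held until a reviewer approves
        audit_log.append({"topic": topic, "action": "held_for_review"})
        return "held_for_review"
    audit_log.append({"topic": topic, "action": "auto_released"})
    return "released"

queue, log = [], []
print(deliver("Your account will be closed...", "account_closure", queue, log))
# -> held_for_review; the queue entry and log line both serve as evidence
```

The design point is that “results” evidence is produced automatically by the operating control, not reconstructed later for the auditor.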

Step 4: Verify control operation and record “results”

Clause 8.3 explicitly requires “documented information of the results.” 1

Create a lightweight verification approach matched to the risk:

  • Design verification: confirm the control exists and is configured as intended (screenshots, config exports, code review references).
  • Operational verification: confirm the control is actually used (workflow logs, training completion records tied to role-based assignments, production enforcement evidence).
  • Effectiveness checks: targeted tests tied to the risk (for example, red-team test cases, evaluation runs, sampling of outputs, incident trend review).

Record outcomes in a consistent template: pass/fail, issues found, remediation actions, retest date, and final sign-off.
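
That template can be captured as a consistent record shape, sketched below under assumed field names; the specific IDs, values, and evidence path are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class VerificationRecord:
    control_id: str
    check_type: str                  # "design" | "operational" | "effectiveness"
    outcome: str                     # "pass" | "fail"
    issues_found: list[str] = field(default_factory=list)
    remediation_actions: list[str] = field(default_factory=list)
    retest_date: str = ""            # required when outcome is "fail"
    signed_off_by: str = ""
    evidence_refs: list[str] = field(default_factory=list)  # links to artifacts

record = VerificationRecord(
    control_id="CTL-031",
    check_type="effectiveness",
    outcome="fail",
    issues_found=["2 of 50 sampled high-impact outputs bypassed review"],
    remediation_actions=["TICKET-2310: fix topic-tagging edge case"],
    retest_date="2025-10-15",
    signed_off_by="grc.lead",
    evidence_refs=["evidence/ctl-031/sampling-2025-09.csv"],
)
```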

Step 5: Update residual risk and close the loop

After verification:

  • Update the risk register with residual risk and current control status.
  • Link any open findings to a remediation plan.
  • Feed learnings into change management for future releases (for example, updating standard control baselines for all new AI projects).
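
A minimal sketch of that loop closure, reusing the hypothetical record shapes from the earlier examples:

```python
# Illustrative only: write verification outcomes back into the register
# so residual risk is explicit and no finding floats free of a remediation.

def close_loop(register: dict, risk_id: str, outcome: str,
               residual_risk: str, open_findings: list) -> None:
    """Update a register entry after verification so nothing dangles."""
    entry = register[risk_id]
    entry["control_status"] = outcome        # e.g., "pass" / "fail"
    entry["residual_risk"] = residual_risk   # stated explicitly, never inferred
    entry["open_findings"] = [
        {"issue": issue, "remediation_ticket": ticket}
        for issue, ticket in open_findings
    ]

register = {"RISK-014": {}}
close_loop(register, "RISK-014", outcome="fail", residual_risk="medium",
           open_findings=[("2/50 sampled outputs bypassed review", "TICKET-2310")])
```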

Auditors look for closed-loop governance because it demonstrates the management system is functioning as a system.

Step 6: Set retention expectations and make evidence retrievable

“Retain documented information” is not “save it somewhere.” Keep records in a system that supports:

  • Search by AI system, risk ID, control ID, owner, and date.
  • Immutable logs or at least a clear history of changes and approvals.
  • A standard evidence pack per AI system for audits. 1
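
Those retrieval properties amount to a tagging discipline: every artifact carries the same searchable fields. A minimal sketch, with an in-memory list standing in for whatever repository or GRC tool you actually use:

```python
from datetime import date

# Hypothetical evidence index; field names mirror the search keys above.
evidence_index = [
    {"ai_system": "support-bot", "risk_id": "RISK-014", "control_id": "CTL-031",
     "owner": "jane.doe", "date": date(2025, 9, 12), "artifact": "VER-077"},
    {"ai_system": "support-bot", "risk_id": "RISK-014", "control_id": "CTL-031",
     "owner": "jane.doe", "date": date(2025, 6, 3), "artifact": "VER-052"},
]

def find_evidence(**filters):
    """Return every record matching all given fields, e.g.
    find_evidence(risk_id="RISK-014", control_id="CTL-031")."""
    return [r for r in evidence_index
            if all(r.get(k) == v for k, v in filters.items())]

audit_pack = find_evidence(ai_system="support-bot", risk_id="RISK-014")
print(len(audit_pack))  # 2 -> both verification artifacts surface instantly
```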

If you use Daydream, set up an “AI risk treatment” workspace that ties each risk to its treatment tasks, evidence requests, owner attestations, and residual risk approvals. The value is speed during audits and fewer gaps during handoffs.

Required evidence and artifacts to retain

Maintain an auditable trail that shows implementation and results. Common artifacts:

  • AI risk treatment plan with scope, risks, selected options, and assigned owners. 1
  • Control implementation records: tickets, change requests, pull requests, configuration baselines, SOPs.
  • Verification and test records: checklists, test scripts, evaluation outputs, sign-offs, defect logs.
  • Residual risk acceptance: approval records, rationale, conditions/limits, review trigger.
  • Third-party artifacts (where applicable): contract clauses, DPAs, vendor security/AI documentation, incident notification procedures, service monitoring evidence.
  • Monitoring outputs: drift alerts, abuse monitoring, escalation logs, incident postmortems tied back to the risk/control.

Common exam/audit questions and hangups

Expect these lines of questioning:

  • “Show me one high-risk item from identification through treatment implementation and results.” 1
  • “Who approved residual risk, and what limits did they set?”
  • “How do you know controls operate for all in-scope AI systems, not just the pilot?”
  • “Where is your documented evidence of results, not just a statement that it was done?” 1
  • “How do third-party AI components fit into your treatment plan and control set?”

Hangups usually occur when evidence is scattered across tools and teams, or when “results” are subjective narratives rather than recorded test outcomes.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: treating the plan as a document, not a work queue.
    Fix: represent treatment items as trackable tasks with owners, due dates, and acceptance criteria.

  2. Mistake: no definition of “implemented.”
    Fix: define what counts as implementation per control type (policy-only does not count for technical controls).

  3. Mistake: “results” are a meeting note.
    Fix: require a verification artifact (test output, screenshot, workflow log excerpt, signed checklist). Clause 8.3 requires documented results. 1

  4. Mistake: residual risk acceptance is informal.
    Fix: require documented approval for any accepted residual risk above your threshold, with conditions and review triggers.

  5. Mistake: third-party AI is out of scope.
    Fix: include contract controls, intake due diligence, and ongoing monitoring as treatment actions for third-party AI dependencies.

Enforcement context and risk implications

There are no public enforcement actions specific to this requirement to cite here. Practically, weak AI risk treatment shows up as inconsistent controls across systems, inability to prove control operation, and unapproved risk acceptance. Those gaps increase operational risk (incidents, customer harm, service disruptions) and audit risk (findings for ineffective control execution and missing documented information). 1

Practical execution plan (30/60/90-day)

The goal is fast operationalization without waiting for perfect maturity. Use these phases as a delivery plan; tailor scope to your highest-risk AI systems first. 1

First 30 days: Stand up the treatment execution mechanism

  • Pick the in-scope AI systems list and identify the top risks that require treatment now.
  • Standardize your risk treatment record (fields: risk, decision, controls, owner, evidence, residual risk).
  • Create an evidence checklist for “implemented” and “results” per control category.
  • Set up a single system of record for treatment tracking and artifacts (GRC tool, ticketing plus document repository, or Daydream).

By 60 days: Implement priority controls and prove results

  • Execute treatment tasks for the highest-risk items and collect verification artifacts.
  • Establish a lightweight review meeting cadence to clear blockers and approve residual risk.
  • Build “audit-ready” treatment packs for each priority AI system: plan, implemented controls, results, residual risk approvals.

By 90 days: Industrialize and scale

  • Expand treatment execution to remaining in-scope systems and recurring processes (new model releases, major prompt changes, new third-party AI).
  • Add QA: periodic sampling of controls to confirm evidence quality and consistency.
  • Embed treatment updates into change management so every meaningful AI change triggers treatment review and evidence capture.

Frequently Asked Questions

What counts as “documented information of the results” for AI risk treatment?

Records that show the outcome of implementing controls, such as test results, verification checklists, approval records, monitoring outputs, and remediation evidence. A policy statement alone rarely demonstrates results. 1

Do we need to implement every control in the risk treatment plan immediately?

Clause 8.3 requires implementation of the plan and controls, so you need a managed execution path with accountable owners and evidence. If sequencing is necessary, document priority, interim compensating controls, and residual risk approval. 1

How do we handle residual risk acceptance under Clause 8.3?

Treat residual risk acceptance as part of the treatment outcome: document the decision, rationale, approver, scope limits, and review triggers. Retain the approval record as part of your “results” evidence. 1

How does AI risk treatment apply when we use third-party AI models or platforms?

Your treatment plan should include contractual and oversight controls, plus operational controls around configuration, access, monitoring, and incident response for the integrated service. Keep contracts, due diligence outputs, and ongoing monitoring evidence as treatment results. 1

What’s the fastest way to make this auditable across many AI systems?

Standardize a treatment template and evidence checklist, then require each AI system owner to produce a repeatable “treatment pack.” Tools like Daydream can centralize tasks, evidence requests, and approvals so you can answer auditor sampling quickly. 1

What will an auditor sample to test Clause 8.3?

Usually a set of higher-risk AI systems or use cases, then a trace from risk to treatment to implemented controls to documented results. Missing links, unclear ownership, and weak evidence are common failure points. 1

Footnotes

  1. ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system

