AI risk treatment

ISO/IEC 42001 Clause 6.1.3 requires you to run a defined AI risk treatment process that selects risk treatment options, determines the controls you will implement, checks those controls against Annex A, and then documents the outcome in a Statement of Applicability and an AI risk treatment plan [1].

Key takeaways:

  • You must turn AI risk assessment outputs into specific, approved treatment decisions and controls, not just a risk register.
  • Annex A is your required cross-check: record what you adopted, what you excluded, and why in the Statement of Applicability.
  • Auditors will look for end-to-end traceability: risk → decision → control → owner → timeline → evidence.

“AI risk treatment” is the bridge between identifying AI risks and proving you control them in day-to-day operations. Clause 6.1.3 is explicit: define and apply a process that chooses treatment options, determines necessary controls, compares them to Annex A, and produces two artifacts that auditors expect to read: a Statement of Applicability (SoA) and an AI risk treatment plan [1].

For a Compliance Officer, CCO, or GRC lead, the operational goal is simple: make risk decisions repeatable and provable across AI use cases (internal builds, third-party models, embedded AI features, and business-user “shadow AI”). Your best outcome is a closed loop where every material AI risk has an accountable owner, an approved treatment choice (avoid/modify/share/retain), implemented controls, and evidence that controls are operating.

This page shows exactly what to implement, which teams you need, what artifacts to retain, and the audit questions that expose weak programs. If you want speed, you can run this as a lightweight workflow in Daydream: one intake, one risk-to-control mapping, and automated SoA + plan outputs from the same underlying record set.

Regulatory text

ISO/IEC 42001 Clause 6.1.3 (AI risk treatment) requires the organization to define and apply an AI risk treatment process that:

  1. selects appropriate risk treatment options,
  2. determines the controls necessary,
  3. compares those controls with Annex A controls, and
  4. produces a Statement of Applicability and an AI risk treatment plan [1].

Plain-English interpretation (what the requirement means in practice)

You are required to run a consistent decision process after AI risks are identified. That process must end in two concrete deliverables:

  • Statement of Applicability (SoA): a documented position on which Annex A controls apply to your organization’s AI risks and why (including exclusions and justifications).
  • AI risk treatment plan: the execution plan that assigns owners and actions to implement (or confirm) the chosen controls.

This clause is not satisfied by a generic enterprise risk process unless you can show it explicitly covers AI-specific risks and produces these AI-specific outputs with Annex A cross-referencing [1].

Who it applies to

Clause 6.1.3 applies to organizations implementing an AI management system, including:

  • AI providers building, training, fine-tuning, or integrating models into products and services.
  • AI users deploying AI for business functions (HR screening, customer support, fraud detection, analytics, engineering copilots, content generation).
  • Teams using third parties for AI capabilities (hosted model APIs, embedded AI in SaaS, outsourced data labeling, managed MLOps) because risk treatment still sits with you as the deploying organization.

Operationally, this requirement applies wherever you make decisions about AI: model selection, data sourcing, release gates, human review, monitoring, incident response, change management, and third-party onboarding.

What you actually need to do (step-by-step)

Use the steps below as your minimum viable AI risk treatment workflow. Your aim is traceability and repeatability.

Step 1: Define the AI risk treatment process (write it down)

Create a procedure that answers:

  • Inputs: AI risk assessment results, system context, intended use, affected stakeholders, legal/contractual requirements, third-party due diligence outputs.
  • Decision authority: who can accept risk, approve compensating controls, or block deployment (product owner vs. risk committee vs. CCO).
  • Treatment options: document the set you allow (commonly avoid, modify/mitigate, share/transfer, retain/accept) and when each is permitted.
  • Required outputs: SoA and treatment plan, stored in a controlled repository [1].

Practical control: require that no AI system goes live (or materially changes) without an approved treatment decision record.
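
If the procedure is backed by tooling, the decision record can be a literal data structure. Below is a minimal sketch in Python, assuming a simple in-house record model; the TreatmentDecision fields and the can_go_live gate are illustrative names, not terms from ISO/IEC 42001.

```python
# Minimal sketch of a treatment decision record and the Step 1 go-live
# gate. All names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class TreatmentOption(Enum):
    AVOID = "avoid"
    MITIGATE = "mitigate"   # modify/reduce the risk
    SHARE = "share"         # transfer, e.g. contractually
    ACCEPT = "accept"       # retain, with documented approval


@dataclass
class TreatmentDecision:
    risk_id: str            # links back to the risk register entry
    option: TreatmentOption
    rationale: str
    approver: str           # decision authority per your procedure
    approved: bool = False
    control_ids: list[str] = field(default_factory=list)


def can_go_live(decisions: list[TreatmentDecision]) -> bool:
    """Gate: no go-live without an approved decision for every material risk."""
    return bool(decisions) and all(d.approved for d in decisions)
```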

Step 2: Classify the AI system and scope the treatment decision

Before choosing controls, force a short scoping step:

  • AI system name, owner, purpose, and users.
  • Whether it is internally built or sourced from a third party.
  • Data types used (including sensitive or regulated data categories as defined by your organization).
  • Where outputs go (customer-facing, internal only, automated decision, decision support).

This scoping is what prevents “one-size-fits-none” controls.

Step 3: Select risk treatment options per material risk

For each identified risk, record:

  • Risk statement (cause → event → impact).
  • Inherent risk (before controls) and your method for scoring it (your own rubric is fine; auditors care that it is consistent).
  • Treatment choice (avoid/mitigate/share/accept) with rationale and approval.

Operator tip: risk acceptance is where auditors probe hardest. If you accept a risk, document the business rationale, the approving authority, and the monitoring conditions that would trigger reconsideration.
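
To make that tip operational, here is a hedged sketch of an acceptance check: it screens accepted risks for the three elements auditors probe (rationale, approving authority, and a review trigger). The dict keys assume a simple risk register export and are hypothetical.

```python
# Hypothetical check over risk register rows exported as dicts.
# Field names are assumptions, not terms from the standard.
REQUIRED_ACCEPTANCE_FIELDS = ("rationale", "approver", "review_trigger")


def acceptance_gaps(risks: list[dict]) -> list[str]:
    """List accepted risks missing rationale, approver, or review trigger."""
    gaps = []
    for risk in risks:
        if risk.get("treatment") != "accept":
            continue
        missing = [f for f in REQUIRED_ACCEPTANCE_FIELDS if not risk.get(f)]
        if missing:
            gaps.append(f"{risk['id']}: missing {', '.join(missing)}")
    return gaps


print(acceptance_gaps([
    {"id": "R-012", "treatment": "accept",
     "rationale": "low-stakes internal brainstorming"},
]))  # -> ['R-012: missing approver, review_trigger']
```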

Step 4: Determine the necessary controls and map them to Annex A

Translate each treatment choice into controls that:

  • Prevent or reduce likelihood,
  • Reduce impact,
  • Improve detection and response,
  • Constrain use to intended purpose.

Then perform the required cross-check: compare your controls to Annex A controls. The point is not to copy Annex A verbatim; the point is to show coverage decisions and gaps explicitly [1].

A practical mapping table (keep it simple):

| AI risk | Treatment option | Control(s) selected | Annex A control reference | Apply? | Justification / notes |
| --- | --- | --- | --- | --- | --- |
| Example: hallucinated customer advice | Mitigate | Output constraints, human review, escalation | Mapped Annex A item | Yes | Applies to customer-facing use |
| Example: IP leakage via prompts | Mitigate | Data handling rules, prompt logging limits, DLP | Mapped Annex A item | Yes | Applies where confidential data present |
| Example: low-stakes internal brainstorming | Accept | User training, allowed-use policy | Mapped Annex A item | Partially | Reduced control scope justified by context |

You will populate the “Annex A control reference” with the relevant Annex A control identifiers used in your program documentation. The clause requires comparison to Annex A; your SoA becomes the durable record of those decisions [1].
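
When the mapping table lives in a spreadsheet or GRC export, the cross-check can be scripted before SoA sign-off. A minimal sketch, assuming each row is a dict whose keys mirror the table columns above (the key names are assumptions):

```python
# Sketch of the Step 4 cross-check. Row keys mirror the mapping table
# columns and are assumptions about your export format.
def mapping_gaps(rows: list[dict]) -> dict[str, list[str]]:
    """Flag mapping rows that would not survive an SoA review."""
    gaps = {"no_controls": [], "no_annex_a_ref": [], "no_justification": []}
    for row in rows:
        risk = row.get("risk", "<unnamed risk>")
        if not row.get("controls"):
            gaps["no_controls"].append(risk)
        if not row.get("annex_a_ref"):
            gaps["no_annex_a_ref"].append(risk)
        # Partial or excluded coverage always needs a recorded justification.
        if row.get("apply") in ("No", "Partially") and not row.get("justification"):
            gaps["no_justification"].append(risk)
    return gaps
```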

Step 5: Produce the Statement of Applicability (SoA)

Your SoA should be readable in one sitting. Include:

  • The list of Annex A controls considered,
  • Whether each control is applicable,
  • Whether it is implemented, planned, or not implemented,
  • The justification for inclusion/exclusion,
  • References to where the control is implemented (policy, SOP, technical standard, system config).

Auditor mindset: they use the SoA as an index. If your SoA says a control is implemented, they will ask for operating evidence.
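
Since auditors read the SoA as an index, a flat export is usually enough. A minimal sketch using only the Python standard library; the column names mirror the bullets above, and the example row values are placeholders.

```python
# SoA export sketch. Columns mirror the bullet list above; the example
# row values are placeholders, not real control identifiers.
import csv

SOA_COLUMNS = ["annex_a_control", "applicable", "status",
               "justification", "implementation_reference"]


def write_soa(rows: list[dict], path: str = "soa.csv") -> None:
    """Write one row per Annex A control considered."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=SOA_COLUMNS)
        writer.writeheader()
        writer.writerows(rows)


write_soa([{
    "annex_a_control": "<Annex A identifier>",
    "applicable": "Yes",
    "status": "Implemented",  # Implemented / Planned / Not implemented
    "justification": "Customer-facing output risk",
    "implementation_reference": "SOP-AI-007 (hypothetical), review workflow",
}])
```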

Step 6: Produce the AI risk treatment plan

Your treatment plan is the execution tracker. Minimum fields:

  • Risk and control linkage (unique IDs help)
  • Control owner and supporting teams
  • Implementation actions (what will be built/changed)
  • Target dates and dependencies (use your internal project cadence)
  • Evidence to be produced
  • Residual risk assessment after treatment
  • Approval checkpoints (pre-release gate, periodic review, change-triggered review)

If you run GRC tooling, this is a natural fit for workflow automation. In Daydream, teams typically track each AI system as a record with linked risks, linked controls, and auto-generated SoA and treatment plan outputs from the same dataset, which cuts down “document drift.”
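
The drift-resistant pattern is one linked record per risk-control pair, with both documents derived from it. A sketch under that assumption; the PlanItem structure and soa_view helper are illustrative, not a Daydream API.

```python
# "One record set, two outputs": plan items carry the risk/control
# linkage, and the SoA view is derived rather than edited separately.
# Structure and field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PlanItem:
    risk_id: str        # unique ID linking back to the risk
    control_id: str     # unique ID for the control decision
    annex_a_ref: str
    owner: str
    actions: str
    target_date: str    # align with your internal project cadence
    evidence: str       # what the control owner must save
    residual_risk: str  # post-treatment rating


def soa_view(items: list[PlanItem]) -> list[dict]:
    """Derive SoA rows from plan records instead of maintaining a copy."""
    return [{
        "annex_a_control": item.annex_a_ref,
        "applicable": "Yes",
        "implementation_reference": f"{item.control_id} (owner: {item.owner})",
    } for item in items]
```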

Step 7: Implement, verify, and keep it current

Clause 6.1.3 is not a one-time document exercise. Keep the treatment loop alive by tying it to the following triggers (see the sketch after this list):

  • AI change management (model updates, prompt changes, new data sources),
  • Third-party renewals and material vendor changes,
  • Incidents and near-misses,
  • Periodic control testing.
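
A minimal sketch of that trigger handling, assuming change events arrive with a simple type tag (the tag names are assumptions):

```python
# Change events that should route back through risk treatment. The
# trigger set mirrors the bullets above; event type tags are assumptions.
RETREATMENT_TRIGGERS = {
    "model_update", "prompt_change", "new_data_source",
    "vendor_renewal", "vendor_material_change",
    "incident", "near_miss", "control_test_failure",
}


def needs_retreatment(event_type: str) -> bool:
    """True when a change event should reopen the treatment decision."""
    return event_type in RETREATMENT_TRIGGERS


assert needs_retreatment("new_data_source")
assert not needs_retreatment("ui_copy_change")
```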

Required evidence and artifacts to retain

Keep evidence that proves both design and operation:

Core artifacts (mandatory outputs):

  • AI risk treatment process/procedure (approved, versioned)
  • Statement of Applicability (SoA) [1]
  • AI risk treatment plan [1]

Traceability artifacts (what auditors ask for next):

  • AI risk assessments feeding the treatment decisions
  • Risk acceptance memos/approvals (where applicable)
  • Control mapping workbook (risk → control → Annex A)
  • Policies and SOPs referenced in the SoA
  • Technical evidence: configuration screenshots/exports, logging and monitoring settings, access control lists, evaluation results
  • Third-party artifacts: contracts, DPAs, security reviews, model cards or supplier documentation if provided
  • Control testing records and issue remediation tickets

Common exam/audit questions and hangups

Expect these lines of questioning:

  • “Show me one AI use case end-to-end: risk assessment to SoA to treatment plan to evidence.”
  • “Where do you document exclusions from Annex A controls, and who approved them?”
  • “How do you decide risk acceptance, and what would cause you to revisit the decision?”
  • “How do you ensure third-party AI components are covered by your treatment plan?”
  • “How do you keep the SoA aligned with what engineering actually implemented?”

Hangup to prevent: inconsistent naming. If “Customer Support AI” is called three different things across risk register, SoA, and treatment plan, you will lose time in the audit and increase sampling risk.
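
One cheap safeguard: export the system names each artifact references and diff them before the audit. A sketch, assuming you can pull name sets from the risk register, SoA, and treatment plan:

```python
# Flag system names that do not appear consistently across the risk
# register, SoA, and treatment plan. Inputs are assumed name exports.
def name_mismatches(register: set[str], soa: set[str], plan: set[str]) -> set[str]:
    """Names present in some artifacts but not all three."""
    return (register | soa | plan) - (register & soa & plan)


print(name_mismatches(
    {"Customer Support AI"},
    {"CS Assistant"},         # same system, different label -> flagged
    {"Customer Support AI"},
))  # -> {'Customer Support AI', 'CS Assistant'} (set order may vary)
```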

Frequent implementation mistakes (and how to avoid them)

  1. SoA created as a template, not a decision record.
    Fix: require applicability and justification fields to be completed for every Annex A control considered, with links to evidence.

  2. Treatment plan is a project plan with no risk linkage.
    Fix: every plan item should map back to a specific risk and a specific control decision.

  3. Controls listed at a high level (“monitor the model”).
    Fix: define control statements so a tester can verify them (what is monitored, where, by whom, what triggers action).

  4. Third-party AI is treated as “out of scope.”
    Fix: treat third-party AI like any other third-party risk. Your treatment plan should include contract controls, usage constraints, and monitoring.

  5. Risk acceptance becomes the default.
    Fix: require documented rationale and a review trigger. If the system changes materially, rerun treatment.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement actions. Practically, weak AI risk treatment increases the chance of inconsistent controls across use cases, unapproved risk acceptance, and gaps between documented intent and operational reality. Those gaps are what auditors and certifiers typically surface first.

Practical execution plan (30/60/90-day)

Use this as an operator plan. Adjust sequencing to match your release cycle and governance maturity.

First 30 days: stand up the minimum viable process

  • Publish the AI risk treatment procedure with defined decision roles and required outputs (SoA + treatment plan) [1].
  • Inventory AI systems in scope (include third-party AI features used by the business).
  • Choose your SoA format and build the Annex A comparison workbook structure.
  • Pilot the workflow on one high-impact AI use case and produce the first SoA and treatment plan.

Days 31–60: expand coverage and prove traceability

  • Run risk treatment for additional AI systems, prioritizing customer-facing and decision-impacting use cases.
  • Establish approval gates: no deployment without a recorded treatment decision and assigned controls.
  • Attach evidence expectations to each control in the treatment plan (what the control owner must save).
  • Start lightweight control testing on a sample of implemented controls to confirm they operate as described in the SoA.

Days 61–90: operationalize and stabilize

  • Integrate risk treatment triggers into change management (model updates, new data sources, major prompt changes).
  • Formalize third-party AI controls (contract clauses, due diligence checkpoints, ongoing monitoring expectations) in the treatment plan.
  • Tune the SoA so it acts as an index to real artifacts (policies, SOPs, configs, logs, test results).
  • If you need speed and consistency across teams, implement the workflow in Daydream so the SoA and treatment plan generate from the same controlled records as the risk assessments.

Frequently Asked Questions

Do we have to implement every Annex A control?

Clause 6.1.3 requires you to compare your chosen controls to Annex A and produce a Statement of Applicability that records what applies and why [1]. If you exclude a control, document the justification and approval.

What’s the difference between the Statement of Applicability and the AI risk treatment plan?

The SoA is the control applicability and implementation position against Annex A, with justifications and references [1]. The risk treatment plan is the action-and-ownership plan to implement or improve controls for specific risks [1].

Can we reuse our ISO 27001 risk treatment process?

You can reuse the structure if it explicitly covers AI risks and still produces the AI-specific outputs required here: Annex A comparison, SoA, and AI risk treatment plan [1]. Most teams need at least an AI-specific mapping layer and documented triggers tied to AI system changes.

How detailed does “determine controls necessary” need to be?

Detailed enough that someone can test whether the control exists and operates. If a control statement cannot be verified with evidence (config, logs, tickets, review records), it is too vague for audit purposes.

How should we handle third-party AI models in the SoA and treatment plan?

Treat the third party as part of the AI system boundary: include contractual and operational controls (due diligence, permitted-use constraints, monitoring, incident coordination) in the treatment plan, and reflect applicability decisions in the SoA [1].

What evidence do auditors usually ask for first?

They typically start with one AI use case and ask you to show traceability from risk assessment to treatment decision to SoA and treatment plan, then to operating evidence that the controls are implemented as stated [1].

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream