AI risk assessment

ISO/IEC 42001 Clause 8.2 requires you to run an AI risk assessment on a defined cadence and whenever a significant change to an AI system is proposed or occurs. To operationalize it, you need a trigger-based workflow tied to change management, a repeatable assessment method, documented outcomes (risks, controls, decisions), and retained evidence that each assessment happened and drove action.

Key takeaways:

  • Run AI risk assessments on a planned schedule and on change triggers, not ad hoc.
  • Bind the assessment to your AI lifecycle and change management so “significant change” cannot bypass review.
  • Retain artifacts that prove scope, method, results, decisions, and follow-through.

An “AI risk assessment requirement” sounds simple until you try to execute it across multiple AI use cases, models, teams, and third parties. ISO/IEC 42001:2023 Clause 8.2 is short, but it creates a hard operational expectation: your organization must be able to show (1) you assess AI risks at planned intervals and (2) you assess again when meaningful change happens, not after an incident.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat AI risk assessment as an auditable process embedded in the AI system lifecycle: intake, design, build/buy, deployment, monitoring, and change control. You need clear triggers, defined roles, a standard template, and a decision record that links identified risks to concrete mitigations and approvals.

This page gives requirement-level implementation guidance you can put into a control set quickly: who it applies to, what to do step-by-step, what evidence to retain, what auditors tend to ask, and a practical execution plan. All requirement statements are grounded in ISO/IEC 42001:2023 Clause 8.2. 1

Regulatory text

Requirement (verbatim): “The organization shall perform AI risk assessments at planned intervals or when significant changes are proposed or occur.” 1

What the operator must do

You must implement a repeatable AI risk assessment process that:

  1. Runs on a defined schedule (“planned intervals”).
  2. Runs on demand when change happens (“when significant changes are proposed or occur”).
  3. Produces records demonstrating the assessment occurred and informed decisions.

The clause does not define “planned intervals” or “significant changes.” That is your job. Auditors will look for internal definitions, consistent execution, and evidence that changes cannot bypass assessment.
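Reduced to code, the dual trigger is a single check you can automate against your register. A minimal sketch, where the function and field names are illustrative assumptions rather than anything the standard prescribes:

```python
from datetime import date, timedelta


def assessment_needed(last_assessed: date,
                      interval_days: int,
                      pending_significant_change: bool,
                      today: date | None = None) -> bool:
    """True when either trigger fires: the planned interval has elapsed,
    or a significant change is proposed or has occurred."""
    today = today or date.today()
    interval_elapsed = today >= last_assessed + timedelta(days=interval_days)
    return interval_elapsed or pending_significant_change


# A system assessed 200 days ago on a 180-day cadence is due
# even when no change is pending:
print(assessment_needed(date.today() - timedelta(days=200), 180, False))  # True
```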

Plain-English interpretation of the requirement

Maintain a standing AI risk assessment program. Reassess risks periodically and whenever your AI system meaningfully changes. Treat “risk assessment” as a documented evaluation that identifies AI-specific risks, assigns owners, selects controls, and records deployment decisions (approve, approve with conditions, delay, or retire).

Who it applies to (entity and operational context)

This requirement applies to:

  • AI providers building or supplying AI systems.
  • AI users deploying AI systems in business processes.
  • Organizations operating AI capabilities directly or through third parties. 1

Operationally, it applies wherever your organization:

  • Builds, fine-tunes, or configures models or other AI components.
  • Integrates AI into products, decisioning, customer support, HR, fraud, underwriting, marketing, or internal tooling.
  • Uses third-party AI services (including embedded AI inside broader SaaS tools) where outputs affect operations, customers, or regulated outcomes.

What you actually need to do (step-by-step)

Step 1: Define the assessment scope and inventory boundary

Create an “AI system register” boundary so teams know what must be assessed. At minimum, include:

  • Use case and business owner
  • Model/provider (in-house vs third party)
  • Data sources (training, prompt, retrieval, production inputs)
  • Output type and how it is used (advisory vs automated action)
  • Deployment context (internal-only, customer-facing, regulated workflow)

Practical tip: If your inventory is immature, start with the AI systems that (a) affect external users, (b) influence high-impact decisions, or (c) use sensitive data.
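If you want the register to be machine-readable from day one, a minimal sketch of one register entry might look like the following. Every field name here is an illustrative assumption, not something the standard prescribes:

```python
from dataclasses import dataclass, field
from enum import Enum


class OutputUse(Enum):
    ADVISORY = "advisory"      # a human reviews before action
    AUTOMATED = "automated"    # output drives action directly


@dataclass
class AISystemRegisterEntry:
    """One in-scope AI system; field names are illustrative."""
    system_id: str
    use_case: str
    business_owner: str
    provider: str              # in-house model name or third-party vendor
    is_third_party: bool
    data_sources: list[str] = field(default_factory=list)  # training, prompts, retrieval, production inputs
    output_use: OutputUse = OutputUse.ADVISORY
    deployment_context: str = "internal-only"  # e.g. internal-only, customer-facing, regulated workflow
    risk_tier: str = "unrated"  # set after the first assessment


# Example entry for a customer-facing support assistant
entry = AISystemRegisterEntry(
    system_id="AIS-0001",
    use_case="Customer support answer drafting",
    business_owner="Head of Support",
    provider="third-party LLM API",
    is_third_party=True,
    data_sources=["support KB (retrieval)", "customer messages (prompts)"],
    output_use=OutputUse.ADVISORY,
    deployment_context="customer-facing",
)
```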

Step 2: Set “planned intervals” you can execute

Document a cadence rule that fits your operating model, then apply it consistently. You can define intervals by:

  • Time-based (e.g., periodic review aligned to your control calendar), and/or
  • Event-based (e.g., per release train, per major model update), and/or
  • Risk-tier-based (high-risk systems reviewed more frequently than low-risk systems)

Auditors typically care less about the exact cadence and more about whether you can justify it and show completion records for each in-scope AI system. 1
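A tier-based cadence rule is easy to encode so completion tracking can flag overdue systems automatically. A minimal sketch, assuming illustrative tiers and interval lengths that you would set in your own procedure:

```python
from datetime import date, timedelta

# Illustrative tier-to-cadence mapping; the standard does not prescribe
# intervals, so these numbers are assumptions you set in your procedure.
REVIEW_INTERVAL_DAYS = {
    "high": 90,      # e.g. quarterly for high-risk systems
    "medium": 180,
    "low": 365,
}


def next_assessment_due(last_assessed: date, risk_tier: str) -> date:
    """Return the date the next planned-interval assessment is due."""
    interval = REVIEW_INTERVAL_DAYS.get(risk_tier, 90)  # default to strictest
    return last_assessed + timedelta(days=interval)


def is_overdue(last_assessed: date, risk_tier: str, today: date | None = None) -> bool:
    today = today or date.today()
    return today > next_assessment_due(last_assessed, risk_tier)


print(next_assessment_due(date(2024, 1, 15), "high"))  # 2024-04-14
```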

Step 3: Define “significant change” triggers (make them hard to bypass)

Publish a trigger list and connect it to change management. Treat these as minimum categories; tailor them to your environment:

  • Model changes: new model family, version upgrade, fine-tuning, hyperparameter changes, new safety filters.
  • Data changes: new training data source, major changes to retrieval corpus, new sensitive data classes, new data preprocessing.
  • Purpose/context changes: expanded use case, new customer segment, new geography/language, new regulated workflow.
  • Control changes: removal/relaxation of human review, changes to monitoring, changes to access controls.
  • Third-party changes: new AI provider, material changes in provider terms, hosting location, subcontractors, or service architecture.
  • Performance drift signals: repeated incidents, escalation trends, monitoring alerts that indicate behavior change.

Implementation pattern that works: require an “AI risk assessment required?” field in the change ticket. If “yes,” the release cannot close without an attached assessment ID and approval.
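In code, that gate is a small predicate your change tooling can enforce before ticket closure. A minimal sketch with assumed field names; your ticketing system's schema will differ:

```python
from dataclasses import dataclass


@dataclass
class ChangeTicket:
    """Minimal change-ticket shape; field names are illustrative."""
    ticket_id: str
    ai_risk_assessment_required: bool   # the mandatory field from the pattern above
    assessment_id: str | None = None    # link to the completed assessment record
    assessment_approved: bool = False


def can_close(ticket: ChangeTicket) -> tuple[bool, str]:
    """Release gate: a ticket flagged for assessment cannot close
    without an attached, approved assessment."""
    if not ticket.ai_risk_assessment_required:
        return True, "no assessment required"
    if ticket.assessment_id is None:
        return False, "blocked: assessment ID missing"
    if not ticket.assessment_approved:
        return False, f"blocked: assessment {ticket.assessment_id} not approved"
    return True, "assessment attached and approved"


ok, reason = can_close(ChangeTicket("CHG-1042", ai_risk_assessment_required=True))
print(ok, reason)  # False blocked: assessment ID missing
```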

Step 4: Standardize the AI risk assessment method

Use a template that is consistent and reviewable. Your method should capture:

  • System description: intended use, users, decision points, failure modes.
  • Risk identification: operational, legal/compliance, security, privacy, safety, and reputational risks specific to the AI behavior and context.
  • Impact and likelihood rating: your organization’s scoring model is fine; consistency matters more than sophistication.
  • Existing controls and gaps: what is already in place, what must be added.
  • Residual risk and decision: accept, mitigate, transfer (third party), or avoid/retire.
  • Approvals: named approvers tied to your governance (business owner, risk/compliance, security/privacy as needed).

Keep the assessment “right-sized.” A lightweight assessment that is done every time beats a perfect assessment that gets skipped.
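Captured as a data structure, the template stays consistent and reviewable across teams. A minimal sketch; the scoring scale, field names, and decision values are illustrative assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    ACCEPT = "accept"
    MITIGATE = "mitigate"
    TRANSFER = "transfer"      # e.g. contractual controls on a third party
    AVOID = "avoid/retire"


@dataclass
class Risk:
    description: str
    impact: int                # 1-5 on your internal scale (assumed)
    likelihood: int            # 1-5 on your internal scale (assumed)
    existing_controls: list[str] = field(default_factory=list)
    gaps: list[str] = field(default_factory=list)

    @property
    def score(self) -> int:
        # Simple impact x likelihood; consistency matters more than sophistication
        return self.impact * self.likelihood


@dataclass
class AIRiskAssessment:
    assessment_id: str
    system_id: str             # links back to the AI system register
    system_description: str
    risks: list[Risk]
    decision: Decision
    approvers: list[str]       # named approvers per your governance
    conditions: list[str] = field(default_factory=list)  # go-live conditions


risk = Risk("Prompt injection exfiltrates customer data", impact=4, likelihood=3,
            gaps=["input filtering", "output PII scan"])
print(risk.score)  # 12
```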

Step 5: Link assessment outcomes to action

Auditors will look for follow-through. Convert mitigation items into tracked work:

  • Control implementation tasks (guardrails, monitoring, access limits)
  • Policy/process updates (human review steps, escalation paths)
  • Testing requirements before release (red teaming, bias checks, prompt injection testing, content safety validation)
  • Go-live conditions and metrics to monitor
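A simple way to guarantee follow-through is to generate one tracked work item per identified control gap, flagged as release-blocking where it is a go-live condition. A minimal sketch with assumed names:

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class MitigationTask:
    assessment_id: str
    title: str
    owner: str
    due: date
    blocks_release: bool  # go-live conditions must close before the gate opens


def tasks_from_gaps(assessment_id: str, gaps: list[str], owner: str,
                    due: date, blocks_release: bool = True) -> list[MitigationTask]:
    """Turn each identified control gap into a tracked work item."""
    return [MitigationTask(assessment_id, f"Implement: {gap}", owner, due, blocks_release)
            for gap in gaps]


tasks = tasks_from_gaps(
    "AIRA-2024-007",
    ["prompt injection testing", "human review step for refunds"],
    owner="AI/ML owner",
    due=date(2024, 6, 30),
)
print(len(tasks), "tracked tasks created")  # 2 tracked tasks created
```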

Step 6: Operationalize with governance and tooling

Minimum operating model:

  • RACI: business owner (accountable), AI/ML owner (responsible), compliance/risk (review), security/privacy (review), internal audit (challenge).
  • Workflow: intake → assessment → decision → controls/tasks → release gate → monitoring feedback → reassessment.
  • Repository: single system of record for assessments and evidence.
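The workflow can also be enforced as a small state machine so systems cannot skip stages (for example, jumping from intake straight to release). A minimal sketch; the state names mirror the workflow above, but the transition rules are assumptions to adapt to your tooling:

```python
# Illustrative state machine for the workflow above.
ALLOWED_TRANSITIONS = {
    "intake": {"assessment"},
    "assessment": {"decision"},
    "decision": {"controls_tasks", "retired"},   # avoid/retire exits here
    "controls_tasks": {"release_gate"},
    "release_gate": {"monitoring"},
    "monitoring": {"assessment"},                # feedback loop triggers reassessment
}


def advance(current: str, target: str) -> str:
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target


state = "intake"
for nxt in ("assessment", "decision", "controls_tasks",
            "release_gate", "monitoring", "assessment"):
    state = advance(state, nxt)
print("cycle complete; system back in", state)  # assessment
```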

Where Daydream fits naturally: use Daydream to manage AI system intake, route assessments to the right reviewers, tie reassessments to change events, and keep an audit-ready evidence trail without chasing files across shared drives.

Required evidence and artifacts to retain

Retain evidence that proves both cadence-based and change-triggered assessments occurred:

Core artifacts

  • AI risk assessment procedure (defines planned intervals and significant change triggers)
  • AI system register (scope boundary)
  • Completed assessment records (template outputs)
  • Risk register entries for AI risks with owners and due dates
  • Approval/decision records (who approved, when, conditions)
  • Change tickets linking to assessment IDs (release gating evidence)

Supporting artifacts (as applicable)

  • Test results and validation reports tied to risks identified
  • Monitoring plan and alerts/escalation records
  • Third-party due diligence records for AI providers (if applicable)
  • Incident/problem reports that triggered reassessment

Common exam/audit questions and hangups

Expect questions like:

  • “Show me your definition of ‘significant change’ and where it is enforced.”
  • “Pick one AI system. Prove it was assessed on schedule and again after a material change.”
  • “Where is the evidence that identified risks resulted in controls, not just documentation?”
  • “Who can override an assessment requirement, and how is that exception documented?”
  • “How do you ensure third-party AI changes trigger reassessment?”

Hangup to anticipate: teams may argue an embedded AI feature in a third-party tool is “not an AI system.” Your register boundary must explicitly address embedded/third-party AI where it affects business outcomes.

Frequent implementation mistakes and how to avoid them

  1. No trigger integration with change management.
    Fix: add mandatory fields and release gates so changes cannot close without assessment evidence.

  2. Assessments done once, then forgotten.
    Fix: schedule planned intervals in your GRC calendar and track completion like any other control.

  3. Template is too long, so teams skip it.
    Fix: make a short baseline assessment and add “deep-dive” sections only for higher-risk systems.

  4. No decision record (only risk lists).
    Fix: require a documented risk acceptance or mitigation decision with named approvers.

  5. Third-party AI treated as “outsourced risk.”
    Fix: assess the use case risk in your context even if the model is external; capture dependency risk and required controls.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific regulator actions. Practically, failing Clause 8.2 creates predictable audit findings: unmanaged model change risk, inconsistent governance, and inability to prove control operation. Those gaps increase the chance of incidents (bad outputs, privacy issues, or operational failures) and make it harder to defend decisions after the fact. 1

Practical 30/60/90-day execution plan

First 30 days: Stand up the minimum viable control

  • Publish an AI risk assessment procedure with (a) planned interval rule and (b) significant change triggers. 1
  • Define the AI system register boundary and assign owners for initial entries.
  • Ship a standardized assessment template and approval workflow.
  • Pick one or two pilot AI systems and run assessments end-to-end, including follow-up tasks.

Days 31–60: Bind it to delivery and third-party processes

  • Integrate the trigger into change management (release gate + required assessment link).
  • Add procurement/third-party intake questions so new AI services cannot onboard without assessment.
  • Build a central evidence repository and naming conventions.
  • Train product, engineering, and business owners on what counts as “significant change” in your environment.

Days 61–90: Scale, test, and make it auditable

  • Expand the AI system register to cover all in-scope AI uses (including embedded third-party AI).
  • Run planned-interval assessments for the highest-risk tier systems.
  • Add monitoring feedback loops that explicitly trigger reassessment when needed.
  • Perform a mock audit: sample systems, trace change tickets to assessments, verify closure of mitigation actions.

Frequently Asked Questions

What qualifies as “planned intervals” under ISO/IEC 42001 Clause 8.2?

The clause requires planned intervals but does not prescribe a specific cadence. Define an interval rule you can execute consistently and document why it fits your risk profile. 1

What counts as a “significant change” that triggers a new AI risk assessment?

Treat model, data, purpose/context, control, and third-party dependency changes as candidate triggers. Document your trigger list and enforce it through change management so teams cannot self-select out of reassessment. 1

Do we need to assess third-party AI tools we didn’t build?

Yes, if you deploy or rely on the AI capability in operations, you still own the risk in your context. Your assessment should capture dependency risk and required controls, even when the model is external. 1

How do we prove to an auditor that the requirement is operating?

Show your procedure, your AI system register, completed assessments on a schedule, and change tickets that include assessment links for significant changes. Auditors also look for evidence that mitigations were tracked and completed. 1

Can we reuse an assessment across multiple similar AI use cases?

You can reuse shared components (common model risks, standard control baselines), but you still need a documented assessment for each deployment context. Context drives risk, especially where outputs affect different users or decisions. 1

Who should approve AI risk assessments?

Assign approval to the accountable business owner and require review from risk/compliance, plus security and privacy where the use case touches those domains. Document named approvers and any acceptance of residual risk. 1

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

