Continual improvement

ISO/IEC 42001 Clause 10.1 requires you to continually improve the suitability, adequacy, and effectiveness of your AI management system (AIMS). Operationally, that means you must run a repeatable improvement loop that turns monitoring results, incidents, audits, metrics, and management reviews into tracked corrective actions, updates to controls, and measurable outcomes. 1

Key takeaways:

  • Treat continual improvement as a closed-loop system: inputs (signals) → decisions → actions → verification → updates.
  • Evidence matters more than intent: auditors will look for a backlog, owners, due dates, and proof changes reduced risk.
  • Tie improvements to AI risks and real operations, including third parties and model changes, not just policy refreshes.

Continual improvement under ISO/IEC 42001 is easy to describe but surprisingly hard to sustain in practice. Many programs collect signals (metrics, incidents, user complaints, drift checks, internal audit findings) but cannot prove that those signals consistently drive changes in how AI is designed, deployed, monitored, or governed. Clause 10.1 closes that gap. It expects an operating rhythm that keeps your AI management system “fit for purpose” as models, data, use cases, regulations, and suppliers change. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path is to implement a tight improvement mechanism and connect it to the governance processes you already run: issue management, CAPA, risk treatment plans, change management, internal audit, and management review. Then define what “suitability,” “adequacy,” and “effectiveness” mean for your organization and your AI portfolio, and retain evidence that shows progress over time, not a one-time remediation sprint. This page provides requirement-level guidance you can implement immediately, with artifacts an auditor can follow end-to-end.

Regulatory text

ISO/IEC 42001 Clause 10.1: “The organization shall continually improve the suitability, adequacy and effectiveness of the AI management system.” 1

Operator meaning (what you must do):

  • Suitability: Keep the AIMS aligned to your actual AI use cases, risk profile, business objectives, and operating environment. If your AI footprint changes, the AIMS must change too.
  • Adequacy: Ensure the AIMS includes enough controls, procedures, resourcing, and competence to manage your AI risks.
  • Effectiveness: Prove the AIMS works in practice. Controls must reduce identified risks, and failures must trigger corrective action.

Auditors typically expect a repeatable, documented process that ingests multiple improvement inputs (monitoring, incidents, audits, third-party issues, regulatory changes, KPI/KRI trends), prioritizes them, assigns ownership, tracks completion, and verifies outcomes.

Plain-English interpretation of the continual improvement requirement

You need an always-on system for making your AI governance better over time, backed by evidence. If your monitoring finds bias issues, a supplier changes model behavior, an internal audit finds missing approvals, or a production incident occurs, you must:

  1. record it as an improvement trigger,
  2. decide what to change,
  3. implement the change through controlled processes,
  4. confirm the fix worked,
  5. update the AIMS documentation and training where needed.

This is not satisfied by an annual policy review alone. It is satisfied when you can show a consistent pattern of detecting issues and improving controls and outcomes across the AI lifecycle.
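The five-step loop above can be sketched as a small state machine that refuses to skip steps; the state names and transitions below are one possible design, not something the standard prescribes.

```python
# Illustrative lifecycle for an improvement item. States and transitions
# are assumptions mirroring the five steps above, not ISO/IEC 42001 text.
TRANSITIONS = {
    "logged":      ["triaged"],                 # 1. recorded as a trigger
    "triaged":     ["in-progress", "accepted"], # 2. decide what to change
    "in-progress": ["implemented"],             # 3. controlled implementation
    "implemented": ["verified", "reopened"],    # 4. confirm the fix worked
    "reopened":    ["triaged"],                 #    ...or go back to triage
    "verified":    ["closed"],                  # 5. update docs and training
    "accepted":    ["closed"],                  # risk accepted with rationale
    "closed":      [],
}

def advance(state: str, next_state: str) -> str:
    """Move an improvement item forward, rejecting skipped steps."""
    if next_state not in TRANSITIONS.get(state, []):
        raise ValueError(f"cannot move from {state!r} to {next_state!r}")
    return next_state
```

Forcing every item through "implemented" and "verified" before "closed" is what gives you the audit trail that a completed task also produced an outcome.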

Who it applies to (entity and operational context)

Clause 10.1 applies to any organization operating an AI management system, including:

  • AI providers building or supplying AI systems,
  • AI users deploying AI in business processes,
  • Organizations using AI through third parties (cloud AI services, embedded AI features, outsourced model development). 1

Operationally, it applies wherever your AIMS touches:

  • model development and validation,
  • data sourcing and quality management,
  • deployment and change control,
  • ongoing monitoring (performance, drift, safety, security, fairness where relevant),
  • incident and issue management,
  • third-party risk management for AI suppliers,
  • documentation and training.

If you have multiple business units, you can centralize the improvement mechanism but must still capture improvements at the system/use-case level.

What you actually need to do (step-by-step)

1) Define improvement “inputs” and require capture

Create a documented list of triggers that must be logged into your improvement workflow. Common inputs:

  • monitoring results (model performance regressions, drift, human override rates),
  • AI incidents and near misses,
  • internal audit and control testing findings,
  • risk assessments and risk treatment plan gaps,
  • change requests and post-implementation reviews,
  • third-party notices (model version changes, outages, security advisories),
  • complaints, user feedback, or stakeholder escalations.

Practical control: Require teams to log triggers into a single register (a ticketing system, GRC tool, or controlled spreadsheet) with minimum fields: description, affected AI system/use case, risk impact, source, date, owner, proposed action.
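As a sketch, the minimum fields above could be captured in a record type like the following; the field names and enum values are illustrative choices, not requirements of the standard.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class TriggerSource(Enum):
    # Example trigger sources; extend to match your own input list.
    MONITORING = "monitoring"
    INCIDENT = "incident"
    AUDIT = "audit"
    THIRD_PARTY = "third_party"
    FEEDBACK = "feedback"

@dataclass
class ImprovementTrigger:
    """One row in the improvement register (field names are illustrative)."""
    description: str
    ai_system: str          # affected AI system or use case
    risk_impact: str        # e.g. "high", "medium", "low"
    source: TriggerSource
    logged_on: date
    owner: str
    proposed_action: str
    status: str = "open"

    def __post_init__(self):
        # Enforce the minimum fields: no required text field may be blank.
        for name in ("description", "ai_system", "risk_impact",
                     "owner", "proposed_action"):
            if not getattr(self, name).strip():
                raise ValueError(f"missing required field: {name}")
```

The same constraint can be enforced as required fields in a ticketing or GRC tool; the point is that an incomplete trigger cannot enter the register at all.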

2) Establish a triage and prioritization routine

Set a regular forum (or embed into an existing AI governance committee) to:

  • validate the issue,
  • assign severity/priority criteria,
  • decide the action path: corrective action, preventive action, control redesign, policy update, training, or acceptance with justification.

Keep the criteria simple and auditable: impact on people, customers, compliance obligations, safety, security, operational continuity, and reputational exposure.
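One way to keep those criteria simple and auditable is a deterministic rule that maps them to a priority; the groupings, labels, and escalation notes below are assumptions for illustration, not prescribed severities.

```python
# Hypothetical triage rule: map the impact criteria above to a priority.
# Which criteria land in which tier is a policy choice for your forum.
def triage_priority(impacts: set[str]) -> str:
    """impacts: the subset of triage criteria an issue touches."""
    if impacts & {"safety", "compliance", "people"}:
        return "P1"  # act immediately, escalate to the forum chair
    if impacts & {"customers", "security"}:
        return "P2"  # action within the next triage cycle
    if impacts:
        return "P3"  # e.g. continuity or reputational exposure; backlog
    return "accept-with-justification"
```

A rule this explicit is easy to show an auditor: the same inputs always produce the same priority, and exceptions stand out.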

3) Convert triggers into controlled actions (CAPA + change management)

For each prioritized item:

  • open a corrective/preventive action record,
  • perform root-cause analysis appropriate to the issue,
  • document the chosen fix and why it addresses the cause,
  • route the fix through change management (especially if it touches models, prompts, training data, thresholds, or human-in-the-loop steps),
  • define “done” with a verification method (re-test, monitoring improvement, audit re-check).

4) Verify effectiveness, not just completion

Clause 10.1 emphasizes effectiveness. Build verification into the workflow:

  • define expected outcome (what metric or observation should change),
  • assign who verifies and when,
  • capture the verification result and any follow-up actions.

If the fix didn’t work, reopen the action and revisit the root cause.
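Outcome-based verification can be sketched as a single gate: an action closes only if the expected observation actually changed. The function name and return values are assumptions mirroring common CAPA workflows.

```python
# Illustrative effectiveness gate. "target" is the expected outcome defined
# when the action was opened, e.g. a metric that should recover.
def verify_effectiveness(observed: float, target: float,
                         higher_is_better: bool = True) -> str:
    """Return the action's next state based on the verification result."""
    met = observed >= target if higher_is_better else observed <= target
    if met:
        return "closed-verified"
    # Task was completed, but the outcome did not change: per Clause 10.1's
    # emphasis on effectiveness, the action reopens rather than closes.
    return "reopened"
```

For example, if a retraining action promised to lift accuracy back above 0.90, verification compares the post-change reading against that target rather than checking whether the retraining task was marked done.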

5) Update the AI management system artifacts

When improvements change how work is done, update:

  • policies/standards,
  • procedures and playbooks (incident response, monitoring, validation),
  • risk assessment templates,
  • training materials and role-based guidance,
  • supplier requirements and third-party due diligence checklists.

Track document versions and approvals so you can show the AIMS evolved based on real signals.

6) Report improvement status to top management

Feed trends and significant improvements into management review. The goal: leadership sees whether the AIMS is keeping pace with risk and whether resources are adequate. Keep reporting focused on actionable themes: recurring root causes, control gaps, supplier-driven risks, and bottlenecks.

7) Make third-party AI part of the loop

Include third-party signals as first-class improvement triggers:

  • contract/SLA breaches affecting AI performance,
  • model updates pushed by a provider,
  • data processing changes,
  • subprocessor changes,
  • security advisories.

Where a third party is responsible for a fix, your improvement record should still show your actions: escalation, mitigation, compensating controls, or switching providers.

Where Daydream fits naturally: If you struggle to keep improvement items tied to specific AI systems, owners, and third-party dependencies, Daydream can serve as the system of record for AI-related obligations and action tracking, so improvement evidence stays audit-ready without chasing tickets across tools.

Required evidence and artifacts to retain

Auditors will look for a clear chain from trigger → decision → action → verification → system update. Retain:

  • Continual improvement procedure (how triggers are captured, triaged, actioned, verified)
  • Improvement register/backlog with owners, status, and timestamps
  • CAPA records (root cause, action plan, approvals, completion evidence)
  • Change management records for AI changes (testing, approvals, rollback plans where applicable)
  • Monitoring and metric outputs that generated the trigger and later verified improvement
  • Internal audit reports and remediation tracking
  • Management review minutes/outputs that reference improvement themes and resourcing decisions
  • Updated controlled documents (version history, approvals)
  • Training assignments/completions when improvements require behavior change
  • Third-party communications and remediation evidence for supplier-driven improvements

Common exam/audit questions and hangups

Expect questions like:

  • “Show me the last few improvements and how you proved they were effective.”
  • “Where do monitoring findings go, and who owns them?”
  • “How do you prevent recurring issues?”
  • “How do improvements flow into policy/procedure updates?”
  • “How do you handle improvements when the AI system is provided by a third party?”
  • “What does ‘effective’ mean here, and how do you measure it?”

Hangups that cause findings:

  • improvement items tracked informally (email/Slack) with no audit trail,
  • “done” defined as task completion, not outcome verification,
  • improvements logged but not tied to specific AI systems/use cases,
  • management review exists but does not drive decisions or resourcing.

Frequent implementation mistakes and how to avoid them

  1. Mistake: Treating continual improvement as a document refresh cycle.
    Avoid it: require operational triggers (incidents, monitoring, audit findings) to drive tracked actions.

  2. Mistake: No root cause, only symptoms.
    Avoid it: enforce lightweight root-cause documentation for any repeat or high-impact issue.

  3. Mistake: Fixes bypass change control.
    Avoid it: route model, prompt, data, and threshold changes through the same rigor as other production changes.

  4. Mistake: Metrics without decisions.
    Avoid it: define escalation thresholds or decision rules so metrics produce actions, not dashboards.

  5. Mistake: Third-party AI treated as “out of scope.”
    Avoid it: log supplier-driven events as improvement triggers and document your mitigations and escalations.
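Mistake 4 above ("metrics without decisions") can be countered with explicit decision rules. The metric names and thresholds below are placeholders you would replace with your own KPI/KRI definitions.

```python
# Hypothetical escalation rules turning monitoring metrics into actions.
# Every rule names the action it triggers, so metrics cannot end at a dashboard.
ESCALATION_RULES = [
    # (metric, threshold, comparison, action when breached)
    ("human_override_rate", 0.15, "above", "open improvement trigger"),
    ("drift_score",         0.30, "above", "open improvement trigger"),
    ("accuracy",            0.90, "below", "escalate to triage forum"),
]

def decide(metrics: dict[str, float]) -> list[str]:
    """Return the actions required by the current metric readings."""
    actions = []
    for metric, threshold, comparison, action in ESCALATION_RULES:
        value = metrics.get(metric)
        if value is None:
            continue  # metric not reported this cycle
        breached = value > threshold if comparison == "above" else value < threshold
        if breached:
            actions.append(f"{action}: {metric}={value}")
    return actions
```

The design choice is that a breach always yields a named action, which makes "we saw it and did nothing" impossible to hide in review.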

Risk implications (why auditors care)

Without a functioning improvement loop, your AIMS degrades as AI systems evolve. The practical risk is control drift: what you claim you do (policies) no longer matches what the teams do (operations). That creates:

  • unmanaged model behavior changes,
  • repeated incidents,
  • inconsistent human oversight,
  • weak supplier accountability,
  • inability to demonstrate governance effectiveness to customers and auditors.

Clause 10.1 is often assessed indirectly: auditors sample incidents, changes, and audit findings and then test whether your AIMS converted them into verified improvements.

A practical 30/60/90-day execution plan

First 30 days (stand up the mechanism)

  • Publish a short continual improvement procedure for the AIMS. 1
  • Create the improvement register with minimum required fields and ownership rules.
  • Define trigger sources and require intake from monitoring, incidents, audits, and third-party notices.
  • Set the triage forum cadence and decision rights (RACI).

By 60 days (run real items through end-to-end)

  • Ingest a first wave of triggers and prioritize them using documented criteria.
  • Complete several actions through CAPA and change management, including at least one that updates an AIMS artifact (procedure, training, or supplier requirement).
  • Implement effectiveness verification steps and store evidence with each closed item.
  • Add management review reporting: themes, overdue actions, recurring root causes.

By 90 days (stabilize and prove effectiveness)

  • Demonstrate trend reporting: recurring issues, time-to-close, and verification outcomes.
  • Tighten linkages: each AI system/use case has an owner, monitoring plan, and improvement history.
  • Expand third-party integration: supplier-driven changes and issues flow into the same register with documented mitigations.
  • Prepare an audit-ready “continual improvement packet” that shows the complete chain for multiple samples.

Frequently Asked Questions

What counts as “continual” in continual improvement?

ISO/IEC 42001 Clause 10.1 does not prescribe a fixed cadence; it requires an ongoing ability to improve the AIMS based on operational signals. Your evidence should show repeated cycles of intake, action, and verification over time. 1

Do we need a formal CAPA process, or can we use our existing issue tracker?

You can use an existing issue tracker if it captures root cause (when appropriate), ownership, due dates, approvals, and effectiveness checks. Auditors care about traceability and verification more than the tooling brand.

How do we prove “effectiveness” without sophisticated AI metrics?

Use outcome-based checks tied to the issue: re-test results, reduced recurrence, improved monitoring signals, or successful audit re-testing. The key is to define the verification method before closing the action.

Does continual improvement apply to AI we buy from a third party?

Yes. You still operate an AIMS and must improve it when third-party AI introduces issues or changes. Your records should show escalation, compensating controls, and updates to supplier requirements where needed. 1

What if we decide to accept a risk instead of fixing it?

Document the decision, rationale, approver, and any monitoring or guardrails. Risk acceptance can be a valid outcome, but it must be deliberate and traceable within the AIMS.

What’s the fastest way to get audit-ready for this clause?

Build an improvement register and run several real triggers through the full loop, including verification evidence and at least one resulting update to AIMS documentation or training. Then package those samples so an auditor can follow them end-to-end.

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

