Management review

ISO/IEC 42001 Clause 9.3 requires top management to review your AI management system at planned intervals and take action based on what the review finds. To operationalize it, schedule a recurring management review, define required inputs/outputs, document decisions and follow-ups, and retain evidence that leadership evaluated suitability, adequacy, effectiveness, and alignment to strategy. 1

Key takeaways:

  • Make the management review a top-management agenda item with a defined scope, inputs, outputs, and minutes. 1
  • Track decisions to closure through action items, owners, dates, and evidence, or auditors will treat the review as “ceremonial.” 1
  • Prove alignment to strategic direction by linking review outcomes to AI objectives, risk posture, resourcing, and improvement priorities. 1

Management review is where an AI management system stops being “a set of documents” and becomes governed. Clause 9.3 is short, but the expectation is not. Auditors want evidence that top management periodically steps back, evaluates whether the AI management system still fits the organization, and makes decisions that change priorities, resources, risk acceptance, and improvement work. 1

For a Compliance Officer, CCO, or GRC lead, the fastest way to implement this requirement is to treat management review as a repeatable control: a calendar event, a defined set of inputs, a structured agenda, documented outputs, and a tracked action log. Done right, it also reduces friction across model governance, product, security, privacy, legal, and procurement because there is a single forum where leadership adjudicates tradeoffs and approves directional changes.

This page translates Clause 9.3 into an operator-ready runbook: who must attend, what to review, what decisions must be captured, what evidence to retain, and what auditors commonly challenge.

Regulatory text

Requirement (excerpt): “Top management shall review the AI management system at planned intervals.” 1

What the operator must do: Set a planned cadence for a top-management review of the AI management system, run the review using defined inputs, and document outputs that show leadership evaluated the system’s continuing suitability, adequacy, effectiveness, and alignment with strategic direction. 1

Plain-English interpretation (what this really means)

You need a documented, recurring meeting where top management:

  1. looks at how the AI management system is performing,
  2. decides what needs to change (objectives, controls, resources, priorities, risk acceptance), and
  3. verifies those decisions are executed and re-evaluated in the next cycle. 1

If you cannot show decisions, action items, and follow-through, you effectively cannot show “review” in a management-system sense. Auditors will treat a slide deck with no decisions as weak evidence.

Who it applies to

Entity types: Organizations that provide AI systems, use AI systems, or otherwise operate an AI management system under ISO/IEC 42001. 1

Operational context (where it bites in practice):

  • AI providers: model and product roadmaps, performance drift, incident learnings, customer obligations, and third-party dependencies must feed into management review decisions.
  • AI users (deployers): deployment controls, human oversight effectiveness, monitoring results, and business outcomes must be reviewed and re-approved at the top-management level.
  • Enterprise organizations: management review needs to cut across functions (GRC, security, privacy, legal, procurement, engineering, product) because AI risk does not sit in one silo.

What you actually need to do (step-by-step)

1) Define “planned intervals” as a formal cadence

  • Put management review on the corporate governance calendar.
  • Define the trigger(s) for out-of-cycle reviews (examples: major AI incident, material regulatory change, significant model change, acquisition, or a new high-risk use case).
  • Document the cadence in a procedure so it is not dependent on one person’s memory. 1

Practical tip: If your org already runs management reviews for ISO 27001/9001-style systems, reuse that mechanism and add AI-specific inputs. Keep one management review format, multiple management systems as agenda sections.
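The cadence-plus-triggers logic above can be captured as a small, version-controlled check so the review date never depends on one person's memory. This is a minimal sketch with illustrative values: the 90-day interval and the trigger names are examples, not anything prescribed by ISO/IEC 42001.

```python
from datetime import date, timedelta

# Illustrative cadence; ISO/IEC 42001 only requires "planned intervals".
REVIEW_INTERVAL_DAYS = 90  # e.g. quarterly

# Example out-of-cycle triggers, mirroring the bullets above.
OUT_OF_CYCLE_TRIGGERS = {
    "major_ai_incident",
    "material_regulatory_change",
    "significant_model_change",
    "acquisition",
    "new_high_risk_use_case",
}


def next_review(last_review: date) -> date:
    """Next planned management review date under the defined cadence."""
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS)


def review_required(last_review: date, today: date, events: set[str]) -> bool:
    """True if the planned interval has elapsed or an out-of-cycle trigger fired."""
    return today >= next_review(last_review) or bool(events & OUT_OF_CYCLE_TRIGGERS)
```

The same structure works as a YAML or GRC-tool configuration; the point is that the interval and triggers are written down and checkable, not tribal knowledge.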

2) Identify “top management” and lock attendance expectations

  • Name the roles that qualify as top management for your organization (not just job titles; use role definitions).
  • Set quorum rules (what must be represented for decisions to be valid).
  • Add an executive sponsor who owns final decision-making and removes blockers.

Common operator pattern: CCO/CISO co-sponsor with a Product/Engineering exec. You want decision authority over both risk and delivery.

3) Build a standard agenda with required inputs

Create a template that forces coverage of suitability, adequacy, effectiveness, and strategic alignment. 1

Minimum agenda sections most teams can execute quickly:

  • Status of prior actions: what was decided last time, what is done, what is overdue, and why.
  • AI system inventory and material changes: new systems, retired systems, major updates, new geographies, new customer segments.
  • Risk posture: top AI risks, changes in risk acceptance, emerging issues, control gaps, and residual risk decisions.
  • Monitoring and performance: high-level trends, drift signals, safety or quality indicators, and exceptions.
  • Incidents and near misses: what happened, root causes, corrective actions, and whether changes are needed to controls or oversight.
  • Third-party dependencies: critical third parties supporting data, models, tooling, hosting, evaluation, and labeling; any issues that affect AI risk.
  • Resources and competence: staffing gaps, training needs, tooling, and budget constraints (detailed figures can stay out of the minutes, but document the decisions).
  • Objectives and improvement plan: changes to AI objectives, target state, and prioritized improvement work. 1
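To enforce the template, some teams run a completeness check on the pre-read package before the meeting. A minimal sketch, assuming hypothetical section keys that mirror the agenda sections above:

```python
# Hypothetical section keys; rename to match your own agenda template.
REQUIRED_SECTIONS = [
    "prior_actions",
    "inventory_changes",
    "risk_posture",
    "monitoring",
    "incidents",
    "third_parties",
    "resources",
    "objectives",
]


def missing_inputs(pre_read: dict[str, object]) -> list[str]:
    """Return agenda sections with no submitted pre-read input."""
    return [s for s in REQUIRED_SECTIONS if not pre_read.get(s)]
```

If `missing_inputs` returns anything, the input package is incomplete and the gap (and who owed it) is worth recording in the minutes.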

4) Run the meeting like a decision forum, not a status readout

During the review, force explicit decisions:

  • Accept risk as-is (and define conditions).
  • Require remediation (and define scope).
  • Pause or restrict a use case.
  • Approve changes to policies/standards.
  • Approve changes to monitoring, testing, or human oversight.
  • Approve resourcing or re-prioritization. 1

One move that helps in audits: For each major topic, record “Decision / Rationale / Owner / Due date / Evidence expected.” That ties leadership review to measurable execution.

5) Produce formal outputs and assign follow-ups

Management review must result in outputs you can point to. Keep it simple:

  • Approved meeting minutes
  • Action register (with owners and due dates)
  • Updated AI objectives (if changed)
  • Updated risk decisions (if changed)
  • Updated improvement plan (if changed)
  • Escalations to board or executive committee (if needed)

6) Track actions to closure (this is where audits are won or lost)

  • Maintain a single action log for management review items.
  • Require evidence links for closure (ticket, policy update, training record, monitoring change, third-party remediation proof).
  • Re-open items if evidence does not match the decision. 1
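The closure rules above reduce to two checks that can run automatically over the action log: is an open item past due, and was a "closed" item closed without evidence? A minimal sketch, assuming hypothetical field names for the action-log rows:

```python
from datetime import date


def needs_escalation(action: dict, today: date) -> bool:
    """Flag items that are past due, or marked closed without linked evidence.

    `action` is an illustrative action-log row with keys:
    "status" ("open" or "closed"), "due_date", and optionally "evidence_link".
    """
    overdue = action["status"] == "open" and today > action["due_date"]
    unevidenced = action["status"] == "closed" and not action.get("evidence_link")
    return overdue or unevidenced
```

Items the check flags go back onto the next review's first agenda item, in front of the same top-management group that made the decision.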

Where Daydream fits naturally: If you manage AI governance work across many teams, Daydream can serve as the system of record for management review actions, evidence collection, and audit-ready reporting, so minutes translate into tracked controls and artifacts rather than follow-ups lost in email.

Required evidence and artifacts to retain

Auditors typically want “show me” evidence, not narratives. Retain:

  • Management review procedure (cadence, scope, participants, inputs/outputs). 1
  • Calendar invites or governance calendar entries proving planning.
  • Attendance records and quorum confirmation (or documented exceptions).
  • Agenda and pre-read package.
  • Minutes that capture decisions, not just discussion. 1
  • Action log with owners, due dates, and closure evidence.
  • Evidence of updates triggered by review (revised objectives, updated policies, revised monitoring plan, updated risk register entries). 1

Common exam/audit questions and hangups

Expect variants of:

  • “Define top management for your AI management system. Who attended the last review?”
  • “Show me the last two management reviews and the actions that carried over.”
  • “What inputs are required, and how do you ensure they are complete?”
  • “What decisions were made about AI risk acceptance, resourcing, or strategic direction?”
  • “How do you handle out-of-cycle reviews after major AI changes or incidents?” 1

Hangup auditors focus on: If leadership review is disconnected from operational evidence (tickets, changes, control updates), the review does not demonstrate effectiveness. 1

Frequent implementation mistakes (and how to avoid them)

  1. Treating the review as a presentation.
    Fix: Require documented decisions and action items for every major agenda section.

  2. Inviting senior people but giving them no decision points.
    Fix: Pre-wire decisions. Put “approve / reject / defer” items in the pre-read.

  3. No linkage to strategy.
    Fix: Include a standing section: “Changes in business strategy that affect AI, and changes in AI that affect business strategy.” Record outcomes. 1

  4. No evidence trail.
    Fix: Minutes must reference where supporting artifacts live, and the action log must include closure evidence.

  5. Actions die after the meeting.
    Fix: Put action review as the first agenda item next time. Escalate overdue items to the same top-management group.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for ISO/IEC 42001 Clause 9.3, so you should treat this primarily as an audit and certification risk rather than a direct enforcement driver.

Operationally, weak management review creates predictable failure modes:

  • AI risks become “known issues” with no executive decision trail.
  • Incidents recur because corrective actions are not tracked.
  • Your AI program drifts away from business strategy and tolerances without a formal checkpoint. 1

Practical 30/60/90-day execution plan

First 30 days (stand up the control)

  • Assign an owner for the management review process (often GRC or the AI governance lead).
  • Define top management participants and quorum.
  • Publish the procedure: cadence, required inputs, agenda template, outputs, and action tracking. 1
  • Create templates: minutes, action register, pre-read checklist.
  • Schedule the first session and require pre-reads from control owners (risk, incidents, monitoring, third parties, changes).

Days 31–60 (run the first review and prove follow-through)

  • Run the first management review with formal minutes.
  • Capture decisions and create action items with owners and due dates.
  • Centralize evidence for each action item (tickets, updated documents, monitoring changes).
  • Start an “out-of-cycle review” trigger log so significant events are not missed.

Days 61–90 (stabilize and make it audit-ready)

  • Run a mid-cycle check-in (or leadership readout) focused on action closure.
  • Improve the input package based on what leadership asked for (tighten metrics, add an incident taxonomy, add third-party status).
  • Perform an internal spot-check: pick several action items and verify closure evidence quality.
  • Integrate the action register with your GRC workflow tooling (Daydream or your existing system) so reminders, evidence, and reporting are consistent. 1

Frequently Asked Questions

What counts as “planned intervals” for ISO/IEC 42001 management review?

ISO/IEC 42001 Clause 9.3 requires that reviews occur at planned intervals, meaning you set and document a cadence and follow it. Pick an interval that matches how quickly your AI portfolio and risks change, then make it part of your governance calendar. 1

Does management review have to cover every AI system?

The review must cover the AI management system, so it needs enough breadth to address material risks, changes, and performance across the scope you declared. Many teams summarize the inventory and then deep-dive only the highest-risk or most-changed systems each cycle. 1

Who qualifies as “top management” for the review?

“Top management” should be defined by who has authority to set direction and allocate resources for the AI management system. Document the roles, require consistent attendance, and record substitutes with delegated authority when needed. 1

What’s the minimum evidence auditors expect to see?

Auditors typically expect a repeatable process (procedure), proof the review happened (agenda/minutes/attendance), and proof it drove action (action log with closure evidence). Minutes that include explicit decisions are stronger than narrative summaries. 1

We already do ISO 27001 management reviews. Can we combine them?

Yes, as long as the AI management system is explicitly reviewed and the required AI-related inputs and outputs are captured in the same record. Combined reviews reduce governance overhead, but only if AI does not get squeezed out of the agenda. 1

How do we handle third parties in management review?

Bring forward issues that affect AI risk, such as model providers, data suppliers, evaluators, and hosting services. Track third-party risks and remediation actions in the same action register so leadership decisions are followed through. 1

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

