AI management system

ISO/IEC 42001 Clause 4.4 requires you to stand up an AI Management System (AIMS): a governed set of processes (and how they connect) that you run, maintain, and continually improve across the AI lifecycle. To operationalize it fast, document your end-to-end AI process map, assign accountable owners, implement minimum control points (intake, risk assessment, change/release, monitoring, incidents), and retain evidence that the system runs.[1]

Key takeaways:

  • You need a functioning management system, not a single “AI policy”; auditors look for interconnected processes with owners and records.
  • Start with a process map and a control operating cadence (intake → assess → build/buy → deploy → monitor → improve).
  • Evidence matters: decisions, approvals, monitoring outputs, incident handling, and management review records.

Clause 4.4 is the backbone requirement behind the rest of ISO/IEC 42001: you must establish and operate an AI Management System that ties together the processes you need to manage AI risk and performance, and you must keep it running as the organization changes. The fastest way to make this real is to treat it like other ISO management systems: define scope, define the process architecture, assign accountable process owners, implement a small number of mandatory control gates, then run a repeatable cadence that produces records.

For a Compliance Officer, CCO, or GRC lead, the practical question is: “What must exist so an auditor believes we have an AIMS?” The answer is a set of documented processes that cover your AI lifecycle (including third-party AI), show clear interactions (inputs/outputs/hand-offs), and generate auditable evidence (approvals, logs, reviews, corrective actions). If you already run an ISMS, privacy program, or model risk management program, reuse that structure. Clause 4.4 does not ask for perfection; it asks for a system you can operate and continually improve.[1]

Regulatory text

Requirement (verbatim): “The organization shall establish, implement, maintain and continually improve an AI management system, including the processes needed and their interactions, in accordance with the requirements of this document.”[1]

Operator interpretation: You must (1) define the AIMS processes you need, (2) implement them so they actually run, (3) keep them current as your AI portfolio and org change, and (4) run an improvement loop. “Processes and their interactions” means you can show how work moves from idea to deployment to monitoring, including who approves what and what records get created at each step.[1]

Plain-English interpretation (what auditors are really testing)

Auditors will test whether your AI governance is a system:

  • System: a connected set of repeatable processes with owners, inputs, outputs, and records.
  • Not a system: a policy PDF plus a one-time risk assessment.

Expect evaluation along these lines:

  1. Can you explain, on one page, how AI work enters your org, gets approved, gets built/bought, gets deployed, and gets monitored?
  2. Do those steps have accountable owners and decision criteria?
  3. Do you have evidence that the steps are followed for real AI use cases?
  4. Do you review outcomes and fix issues through corrective actions and management review?[1]

Who it applies to

Entities: Any organization implementing ISO/IEC 42001, including AI providers and AI users.[1]

Operational context (where this bites):

  • You build models internally (ML, GenAI, rules-based “AI” products).
  • You deploy or use third-party AI (SaaS AI features, hosted LLMs, embedded AI in platforms).
  • Business teams use AI tools for decision support (HR, underwriting, fraud, marketing, customer support).
  • You rely on third parties for data labeling, model development, monitoring tools, or AI infrastructure.

Practical scope rule: if AI meaningfully affects customers, employees, financials, safety, or regulated outcomes, treat it as in-scope and route it through the AIMS processes.
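
One way to make that rule mechanical is to encode it as a triage check in your intake tooling. The sketch below is illustrative only: the impact areas mirror the rule above, and every name (such as AIUseCase or is_in_scope) is an assumption, not anything prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass, field

# Impact areas taken from the practical scope rule above; names are illustrative.
MATERIAL_IMPACT_AREAS = {"customers", "employees", "financials", "safety", "regulated_outcomes"}

@dataclass
class AIUseCase:
    name: str
    business_owner: str
    impact_areas: set = field(default_factory=set)

def is_in_scope(use_case: AIUseCase) -> bool:
    """Route the use case through the AIMS processes if it touches any material impact area."""
    return bool(use_case.impact_areas & MATERIAL_IMPACT_AREAS)

# Example: an underwriting assistant affects customers and regulated outcomes,
# so it is routed into intake, risk assessment, and the other AIMS gates.
assistant = AIUseCase(
    name="underwriting-assistant",
    business_owner="head_of_underwriting",
    impact_areas={"customers", "regulated_outcomes"},
)
assert is_in_scope(assistant)
```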

What you actually need to do (step-by-step)

1) Define the AIMS scope and boundaries

  • Write a scope statement: which business units, geographies, products, and AI uses are covered.
  • Include third-party AI used in your processes, not only AI you build.
  • Define exclusions explicitly and justify them.

Deliverable: AIMS scope document approved by an accountable executive.

2) Create a process architecture (the “processes and interactions” map)

Build a single visual or table that shows the end-to-end lifecycle and handoffs. Minimum set most organizations need:

  • AI intake and inventory (capture proposed use, owner, purpose, data, third parties)
  • Risk classification and impact assessment (what could go wrong; who is affected)
  • Data and design controls (data sourcing, quality, privacy/security alignment)
  • Build/buy governance (third-party due diligence, contract requirements, integration review)
  • Change and release management (testing, approvals, rollback plans, sign-off)
  • Monitoring and performance review (drift, misuse, complaints, outcomes)
  • Incident management and escalation (AI-specific triggers, containment, comms)
  • Corrective action and continual improvement (root cause, fixes, lessons learned)
  • Training and competency (role-based requirements)
  • Records and documentation control (retention, versioning)

Deliverable: AIMS process map with owners (RACI works well), inputs/outputs, required records at each step.
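
If you keep the inventory in a workflow tool or a simple script, a record per AI use case might look like the following minimal sketch. The schema is an assumption for illustration (the standard does not prescribe fields, and names like AIInventoryRecord are hypothetical).

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIInventoryRecord:
    """One row of the AI inventory, created at the intake gate (illustrative schema)."""
    system_name: str
    business_owner: str                      # single accountable owner
    purpose: str
    data_sources: list
    third_parties: list                      # vendors, hosted LLMs, embedded AI features
    risk_class: str | None = None            # filled in by the risk gate, not at intake
    intake_date: date = field(default_factory=date.today)
    evidence_links: dict = field(default_factory=dict)  # approvals, test results, monitoring plan

# Example: a third-party chatbot feature enters the inventory at intake,
# before procurement or build work starts.
record = AIInventoryRecord(
    system_name="support-chatbot",
    business_owner="head_of_customer_support",
    purpose="triage and draft replies for support tickets",
    data_sources=["support ticket history"],
    third_parties=["vendor-hosted LLM"],
)
```

The key design point is that every field maps to a record an auditor can ask for later, which is why evidence links live on the same record as the owner.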

3) Assign accountable owners and decision rights

For each process, name:

  • Accountable owner (one named individual)
  • Approvers for key gates (e.g., AI risk lead, security, privacy, legal, product)
  • Escalation path for disputes and exceptions

Common fast pattern: create an AI Governance Committee with authority to approve high-risk deployments and exceptions. Keep charters short and decision criteria explicit.

Deliverables: Role descriptions, governance charter, approval matrix.

4) Implement control gates that force the system to run

If you only do four gates, do these:

  1. Intake gate: nothing proceeds without an inventory record and business owner.
  2. Risk gate: risk classification completed before procurement or build starts.
  3. Release gate: testing evidence + approvals before production deployment.
  4. Post-release gate: monitoring plan active and assigned before “go-live” is considered complete.

Connect gates to tooling your teams already use (ticketing, SDLC, procurement workflow). Manual checklists are acceptable early, but they must produce records.
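
As an illustration of how the four gates can be wired into existing tooling, the sketch below checks a ticket-like record against each gate before deployment. The keys and the `ticket` structure are assumptions for this example, not a real ticketing API.

```python
def gate_findings(ticket: dict) -> list:
    """Return blocking findings; deployment proceeds only when the list is empty.

    `ticket` stands in for a record pulled from whatever workflow tool you already
    use; the keys below are illustrative, not a real API.
    """
    findings = []
    if not ticket.get("inventory_record_id"):
        findings.append("Intake gate: no inventory record or business owner")
    if ticket.get("risk_class") not in {"low", "medium", "high"}:
        findings.append("Risk gate: risk classification not completed")
    if not ticket.get("test_evidence"):
        findings.append("Release gate: no testing evidence attached")
    if not ticket.get("approvals"):
        findings.append("Release gate: required approvals missing")
    if not ticket.get("monitoring_owner"):
        findings.append("Post-release gate: monitoring plan and owner not assigned")
    return findings

# Example: this ticket clears intake, risk, and release, but is blocked
# because no one owns post-release monitoring yet.
print(gate_findings({
    "inventory_record_id": "AI-0042",
    "risk_class": "high",
    "test_evidence": ["bias_eval_summary.pdf"],
    "approvals": ["ai_risk_lead", "security", "privacy"],
}))
```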

5) Operationalize “maintain” (change control for the management system)

Your AIMS itself must be maintained:

  • Version control for policies, standards, procedures
  • Change approval workflow
  • Communications plan so teams know what changed
  • Training updates tied to changes

6) Run “continual improvement” as a closed loop

Define what triggers improvements:

  • Incidents, near misses, customer complaints
  • Monitoring signals (performance degradation, drift, misuse patterns)
  • Audit findings or control failures
  • Material changes in models, data sources, or third parties

Then require:

  • Root cause analysis
  • Corrective action plan with owner and due date
  • Verification that the fix worked
  • Management review decisions captured as minutes/actions[1]
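
The loop is easier to evidence if each trigger becomes a structured corrective-action record that cannot close without verification and a management-review reference. A minimal sketch, with illustrative field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    """One entry in the corrective action log (illustrative schema)."""
    trigger: str                        # incident, monitoring signal, audit finding, material change
    root_cause: str
    action_plan: str
    owner: str
    due_date: date
    verified_effective: bool = False    # set only after checking the fix actually worked
    management_review_ref: str = ""     # link to the minutes/actions that accepted closure

    def is_closed(self) -> bool:
        return self.verified_effective and bool(self.management_review_ref)

action = CorrectiveAction(
    trigger="monitoring: drift alert on pricing model",
    root_cause="training data no longer reflects the current product mix",
    action_plan="retrain on refreshed data; add quarterly drift review",
    owner="pricing_model_owner",
    due_date=date(2026, 3, 31),
)
assert not action.is_closed()  # stays open until verification and review are recorded
```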

Required evidence and artifacts to retain

Use this as your audit-ready checklist:

System definition

  • AIMS scope statement and approvals
  • AI governance charter, committee minutes, decision logs
  • Process map showing interactions and owners
  • Document control procedure (versioning, approvals)

Operating records

  • AI inventory (including third-party AI), with business owners
  • Intake tickets/forms for new AI uses
  • Risk assessments and classification results
  • Third-party due diligence records for AI suppliers (security, privacy, reliability, subcontractors)
  • Release approvals, testing summaries, rollback plans
  • Monitoring plans and monitoring outputs (alerts, reviews, KPIs you choose)
  • Incident records and post-incident reviews
  • Corrective action logs and closure evidence
  • Training completion records for relevant roles[1]

Common exam/audit questions and hangups

“Show me your processes and their interactions.”
Hangup: teams provide policies, not process flows. Fix: maintain a one-page lifecycle map and show 2–3 real examples traced end-to-end.

“How do you know your inventory is complete?”
Hangup: no intake control, shadow AI, and unmanaged third-party features. Fix: tie intake to procurement, SDLC, and security architecture review; add attestations from business units.

“Who can approve exceptions, and where are they documented?”
Hangup: informal approvals in chat. Fix: exception register with rationale, compensating controls, expiry/review trigger.
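
A minimal sketch of such a register entry, assuming illustrative fields; the point is the expiry date, which forces re-review instead of open-ended exceptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExceptionEntry:
    """One row of the exception register (illustrative schema)."""
    ai_system: str
    requested_by: str
    approved_by: str                 # must map to the approval matrix, not a chat thread
    rationale: str
    compensating_controls: list
    expires_on: date                 # triggers re-review; exceptions are never open-ended

    def due_for_review(self, today: date) -> bool:
        return today >= self.expires_on
```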

“How do you demonstrate continual improvement?”
Hangup: no corrective action discipline. Fix: a corrective action log that links to monitoring signals, incidents, and audits, plus evidence of closure.

Frequent implementation mistakes (and how to avoid them)

  1. Treating the AIMS as a policy set.
    Avoidance: design workflows and gates that create operational records.

  2. Ignoring third-party AI.
    Avoidance: add AI-specific questions to third-party onboarding and contract reviews; inventory AI features in existing tools.

  3. No single accountable owner per process.
    Avoidance: assign one accountable owner; multiple “co-owners” usually means nobody owns outcomes.

  4. Over-engineering on day one.
    Avoidance: start with minimum viable gates and expand based on findings and real incidents.

  5. No linkage to existing management systems.
    Avoidance: integrate with ISMS/privacy/SDLC change management instead of duplicating controls; auditors accept reuse if interactions are explicit.[1]

Enforcement context and risk implications

No public enforcement cases were identified for this requirement in the sources reviewed. Practically, the risk is examination failure, certification failure, and unmanaged AI outcomes (errors, bias, privacy/security incidents, and contractual breaches) because no one can prove consistent control. Clause 4.4 is often where auditors anchor their conclusions: if the system does not operate, control-by-control evidence elsewhere looks ad hoc.[1]

Practical 30/60/90-day execution plan

First 30 days (stand up the skeleton)

  • Confirm AIMS scope and executive sponsor.
  • Build AI inventory baseline from procurement, SDLC repos, and business attestations.
  • Draft the AIMS process map with owners and required records.
  • Stand up an AI governance forum and decision log.
  • Implement a simple intake form and risk classification template.

Days 31–60 (make it run)

  • Put control gates into existing workflows (ticketing/procurement/SDLC).
  • Run the process on a small set of live AI use cases and collect evidence.
  • Create an exception process and register.
  • Define monitoring expectations (what gets monitored, by whom, and escalation triggers).
  • Train key roles (product, engineering, procurement, legal, security) on the new gates.

Days 61–90 (prove “maintain and improve”)

  • Perform an internal audit-style walkthrough: trace multiple AI systems end-to-end.
  • Fix gaps with corrective actions and record closures.
  • Hold a management review focused on AIMS performance (findings, exceptions, incidents, monitoring outcomes).
  • Tighten document control and retention so records are consistently retrievable.
  • If you use Daydream, configure your evidence requests and workflows so intake, third-party due diligence, approvals, and monitoring records land in one place with clear ownership and audit trails.

Frequently Asked Questions

Do we need ISO certification to comply with Clause 4.4?

Clause 4.4 is a requirement within ISO/IEC 42001; certification is optional, but auditors will still expect you to meet the requirement if you claim alignment. Treat it as a governance and operating model requirement, not a certification artifact.[1]

What counts as an “AI management system” in practice?

A defined set of AI lifecycle processes with assigned owners, documented interactions, and evidence that those processes operate (intake, risk assessment, release approvals, monitoring, incidents, corrective actions). A policy alone rarely satisfies “implement” and “maintain.”[1]

How do we handle third-party AI tools and embedded AI features?

Put them in the same inventory and intake workflow as internal AI, then route them through third-party due diligence and release/change controls. Your “process interactions” should show procurement and vendor management handoffs explicitly.[1]

What evidence is most persuasive to an auditor for continual improvement?

A corrective action log tied to real triggers (incidents, monitoring results, audit findings) plus proof of closure and management review decisions. Meeting minutes with tracked actions are usually easier to defend than informal emails.[1]

We already have SDLC change management. Can we reuse it?

Yes, reuse is expected if it works, but you must show how SDLC controls connect to AI-specific risks and records (risk classification, monitoring plan, AI incident triggers). Update the process map so the interactions are explicit.[1]

What’s the fastest way to close gaps if we have lots of untracked AI use?

Start with an amnesty-based discovery and a mandatory intake gate going forward, tied to procurement and production deployment. Prioritize review for the highest-impact AI uses, and document temporary exceptions with compensating controls.[1]

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
