Determining the scope of the AI management system

To meet ISO/IEC 42001 Clause 4.3, you must define and document the boundaries of your AI management system (AIMS): which AI systems are in-scope, where they operate (org units, geographies, products, data, and lifecycle stages), and whether your organization acts as an AI provider, an AI user, or both. Your documented scope becomes the audit anchor for governance, risk controls, and evidence.

Key takeaways:

  • Your AIMS scope must explicitly name the AI systems covered and your role(s) (provider/user).
  • “Boundary” means org/process/data/lifecycle limits, plus third-party AI you rely on.
  • A tight scope statement plus an AI system inventory prevents control gaps and audit findings.

Determining the scope of the AI management system is a gating requirement. If you cannot explain “what is covered” and “what is not covered,” every downstream control becomes debatable, and audits turn into arguments about definitions instead of evidence. ISO/IEC 42001 Clause 4.3 forces that clarity by requiring you to set boundaries and applicability, identify the AI systems included, and state your organizational role as an AI provider, AI user, or both, then keep that scope as documented information [1].

For a CCO, compliance officer, or GRC lead, operationalizing Clause 4.3 is about creating a scope statement that is precise enough to test and enforce. You need a defensible method for deciding what qualifies as an “AI system” in your environment, how you handle embedded AI inside third-party products, and how you treat prototypes, shadow AI, and acquisitions. You also need a repeatable way to update scope when systems change.

This page gives requirement-level implementation guidance you can execute quickly: a practical scoping method, the minimum artifacts auditors expect, common sticking points, and a phased plan to get from “unclear inventory” to “documented, governed AIMS scope.”

Regulatory text

ISO/IEC 42001 Clause 4.3 requires that you determine the boundaries and applicability of your AI management system to establish its scope. The scope must specify (1) the AI systems covered and (2) the organization’s roles as provider and/or user, and it must be available as documented information [1].

What the operator must do: produce a written scope statement that can be tested. It must map to a known set of AI systems and clearly state whether you build/offer AI (provider), use AI (user), or do both. The “boundaries and applicability” language means you also need to define what parts of the organization, processes, data, and lifecycle stages the AIMS governs.

Plain-English interpretation

Your AIMS scope is the contract between leadership, engineering, and assurance teams. It answers:

  • Which AI systems are governed by the AIMS? Name them or define an unambiguous inclusion rule tied to an inventory.
  • Where does governance apply? Which business units, products, geographies, and deployment environments are included.
  • What role do you play? Provider, user, or both, per system or per portfolio.
  • What’s excluded and why? Exclusions must be explicit, bounded, and reviewed, not “we’ll get to it later.”

If you cannot trace your controls and risk treatment back to an in-scope list of AI systems, you have a scoping failure, not a documentation issue.

Who it applies to (entity and operational context)

Clause 4.3 applies to any organization implementing ISO/IEC 42001, including:

AI providers

You develop, train, fine-tune, package, sell, or otherwise supply AI systems (including APIs, models, model-enabled applications, or managed AI services). Your scope must cover the AI systems you provide and the supporting lifecycle processes you control.

AI users

You deploy or rely on AI systems to support internal operations or customer-facing activities, even if a third party provides the model or platform. Scope must address third-party AI you use, including embedded AI features in SaaS tools where your organization configures use and bears outcome risk.

Mixed-role organizations (most enterprises)

You may be a provider for one product line and a user for internal functions (fraud, HR screening, customer support). Your scope should state roles by system category or by system.

Operationally, scoping is most critical where:

  • AI affects regulated outcomes (eligibility, pricing, safety, hiring, medical or financial decisions).
  • AI is integrated into production customer journeys.
  • Multiple teams can deploy AI without centralized approval (shadow AI risk).
  • Third-party AI is embedded, and responsibility boundaries are unclear.

What you actually need to do (step-by-step)

Step 1: Establish scoping criteria you can enforce

Define an “AI system” inclusion rule that your organization can apply consistently. Keep it simple and testable, for example:

  • Any system that performs inference or generates outputs using ML or generative methods, where outputs influence decisions, customer experience, or automated actions.
  • Any product feature marketed as “AI,” “ML,” “recommendation,” “prediction,” “assistant,” or “automation,” even if the model is third-party.

Also define scoping triggers:

  • Production deployment
  • Material model change (new model, retrain, fine-tune, parameter shift)
  • New use case for an existing system
  • Acquisition of a product/team with AI
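
The inclusion rule and triggers above can be expressed as a testable predicate. This is an illustrative sketch, not language from the standard; the field names (`uses_ml`, `influences_decisions`, and so on) are assumptions you would map onto your own inventory attributes.

```python
# Illustrative sketch of an enforceable "AI system" inclusion rule.
# Field names are assumptions for this example, not terms defined
# by ISO/IEC 42001.

SCOPING_TRIGGERS = {
    "production_deployment",
    "material_model_change",   # new model, retrain, fine-tune, parameter shift
    "new_use_case",
    "acquisition",
}

def is_in_scope(system: dict) -> bool:
    """Inclusion rule: ML/generative inference whose outputs influence
    decisions, customer experience, or automated actions -- or any
    feature marketed as AI/ML, even if the model is third-party."""
    performs_inference = system.get("uses_ml") or system.get("uses_genai")
    output_matters = (
        system.get("influences_decisions")
        or system.get("affects_customer_experience")
        or system.get("triggers_automated_actions")
    )
    marketed_as_ai = system.get("marketed_as_ai", False)
    return bool((performs_inference and output_matters) or marketed_as_ai)

def requires_scope_review(event: str) -> bool:
    """Return True when an event should re-open the scoping decision."""
    return event in SCOPING_TRIGGERS
```

The value of writing the rule this way is that it can be run against every row of your inventory, which makes “consistently applied” demonstrable rather than aspirational.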

Step 2: Build (or validate) an AI system inventory

Create a single list of candidate AI systems, including:

  • Internal-built models and model-enabled apps
  • Third-party AI services and platforms
  • AI embedded within third-party tools you configure (chatbots, analytics predictions, content generation)
  • Prototypes that are used with real data or real customers

Minimum fields to capture:

  • System name, owner, business unit
  • Purpose/use case and affected stakeholders
  • Deployment environment(s)
  • Data categories used (high-level)
  • Role: provider/user/both
  • Third-party dependencies (model provider, platform, critical subcontractors)

If you already have an application inventory, extend it with AI attributes rather than creating a separate universe.
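
The minimum fields above can be captured as a simple record type. A minimal sketch, assuming Python-based inventory tooling; the class and field names are illustrative, not prescribed by the standard.

```python
# Illustrative AI inventory record carrying the minimum fields listed
# above. Extend your existing application inventory with these
# attributes rather than creating a separate universe.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AISystemRecord:
    name: str
    owner: str
    business_unit: str
    purpose: str                      # use case and affected stakeholders
    environments: List[str]           # e.g. ["prod", "staging"]
    data_categories: List[str]        # high-level only
    role: str                         # "provider", "user", or "both"
    third_party_dependencies: List[str] = field(default_factory=list)

    def validate(self) -> None:
        """Reject records that cannot support a Clause 4.3 scope decision."""
        if self.role not in {"provider", "user", "both"}:
            raise ValueError(f"unknown role: {self.role}")
        if not self.owner:
            raise ValueError(f"{self.name}: every in-scope system needs an owner")
```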

Step 3: Define boundaries (org, process, data, lifecycle)

Convert “inventory + criteria” into explicit boundaries:

Organizational boundaries

  • Which legal entities, subsidiaries, and functions are included.
  • Who is required to follow AIMS policies (employees, contractors).

Process boundaries

  • Which processes are governed: design, procurement, development, validation, deployment, monitoring, incident response, change management.

Data boundaries

  • Which data environments and pipelines are in-scope (training data, prompt logs, inference telemetry).
  • Clarify whether personal or sensitive data use is included in the initial AIMS scope or handled by parallel privacy governance, and document the interface either way.

Lifecycle boundaries

  • Which lifecycle stages are covered for in-scope systems (development through retirement).
  • How you handle research/prototype stages (often the biggest gap).

Step 4: Decide inclusions and exclusions, then document rationale

Make scoping decisions system-by-system or category-by-category:

  • Include systems that are in production, customer-facing, safety/regulated-impact, or materially automated.
  • Conditionally include pilots and proofs of concept once they use real customer/employee data or affect decisions.
  • Exclude only with a clear reason and a clear boundary (e.g., “pure academic research with synthetic data and no deployment path”), and put exclusions on a review cadence tied to your change process.

Auditors will accept exclusions that are explicit and controlled. They will challenge exclusions that look like avoidance.
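
The review cadence for exclusions can be enforced mechanically rather than by memory. A sketch, assuming a 90-day cadence; that number is an example policy choice, not a value from ISO/IEC 42001.

```python
# Illustrative exclusion-register check: exclusions that have not been
# re-reviewed recently should be re-challenged, not silently kept.
from datetime import date, timedelta

REVIEW_CADENCE = timedelta(days=90)  # example policy, not a 42001 value

def exclusion_is_stale(last_reviewed: date, today: date) -> bool:
    """True when the exclusion's rationale is overdue for review."""
    return today - last_reviewed > REVIEW_CADENCE

def overdue_exclusions(register: list, today: date) -> list:
    """Return system names whose exclusion rationale needs re-approval."""
    return [
        entry["system"]
        for entry in register
        if exclusion_is_stale(entry["last_reviewed"], today)
    ]
```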

Step 5: Write the scope statement (documented information)

Your scope statement should be short, specific, and auditable. Include:

  • Covered AI systems (by list, inventory reference, or defined categories tied to inventory)
  • Roles (provider/user/both), ideally by system category
  • Boundary statement (legal entities, business units, geographies, environments)
  • Interfaces to other management systems (security, privacy, quality) if you rely on them for certain controls
  • Exclusions with rationale and review mechanism

A practical format is a one-page “AIMS Scope” document plus an appendix pointing to the AI inventory.

Step 6: Operationalize scope through governance gates

Scope is real only if it changes behavior. Tie it to:

  • Procurement intake for third-party AI (so new AI can’t enter unlogged)
  • SDLC or model lifecycle checkpoints (design review, pre-prod approval, change approval)
  • Exception process (document why out-of-scope activities are permitted, who approved, and for how long)

If you use Daydream for third-party risk management, connect the AI inventory to third-party records so each in-scope AI dependency has an accountable owner, due diligence status, and contract controls mapped to the scope boundaries.
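
The gate logic above can be sketched as a pre-deployment check. This is illustrative only; `inventory` and `approvals` stand in for your real inventory store and change-approval system.

```python
# Illustrative deployment gate: new AI cannot enter production unlogged
# or without a recorded scope/role decision.

class ScopeGateError(Exception):
    """Raised when a deployment fails the AIMS scope gate."""

def deployment_gate(system_name: str, inventory: set, approvals: set) -> None:
    """Block deployment unless the system is registered in the AI
    inventory and has a recorded scope/role approval."""
    if system_name not in inventory:
        raise ScopeGateError(
            f"{system_name}: not in AI inventory; register before deploying"
        )
    if system_name not in approvals:
        raise ScopeGateError(
            f"{system_name}: missing scope/role approval"
        )
```

Wiring a check like this into release management is what turns the scope statement from a document into a control.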

Required evidence and artifacts to retain

Auditors will look for traceability: scope → inventory → controls → records. Keep:

  • AIMS Scope Statement (dated, versioned, approved) [1]
  • AI System Inventory with roles and ownership
  • Scoping methodology (criteria, triggers, inclusion/exclusion logic)
  • Exclusion register with rationale and approval
  • RACI / governance charter showing who decides scope changes
  • Change management records showing how new AI gets added to scope
  • Third-party dependency list for in-scope AI systems (contracts and due diligence references)
  • Management review or steering committee minutes where scope is reviewed/updated
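
The traceability chain above (scope → inventory → controls → records) can be spot-checked with a simple completeness function. A sketch with illustrative data structures:

```python
# Illustrative audit-readiness check: every in-scope system should map
# to an inventory entry, at least one control, and retained evidence.

def traceability_gaps(scope: list, inventory: dict,
                      controls: dict, evidence: dict) -> dict:
    """Return per-system gaps so the scope -> inventory -> controls ->
    records chain can be demonstrated, not just asserted."""
    gaps = {}
    for system in scope:
        missing = []
        if system not in inventory:
            missing.append("inventory entry")
        if not controls.get(system):
            missing.append("mapped controls")
        if not evidence.get(system):
            missing.append("retained records")
        if missing:
            gaps[system] = missing
    return gaps
```

An empty result is the state you want to show an auditor; anything else is your remediation backlog.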

Common exam/audit questions and sticking points

Expect these lines of questioning:

  1. “Show me your scope statement and the list of AI systems it covers.” If your scope references “all AI,” but you can’t produce an inventory, you will struggle.
  2. “Are you a provider or a user?” Many orgs answer at the enterprise level, but the real answer varies by system.
  3. “What about third-party AI embedded in tools?” Auditors want your position: either in-scope with supplier controls, or formally excluded with rationale.
  4. “How do you prevent shadow AI?” You need intake gates (procurement, security reviews, access controls) and attestation mechanisms.
  5. “How do you update scope?” A static scope document is a red flag if your AI footprint changes frequently.

Frequent implementation mistakes and how to avoid them

Mistake 1: Writing a scope statement with no inventory

Avoidance: build inventory first, then write scope. If inventory is incomplete, document the discovery plan and put interim controls on new deployments.

Mistake 2: Scoping only “models we built”

This misses third-party AI and embedded features. Avoidance: include systems you use, not just systems you create, and document provider/user roles per system [1].

Mistake 3: Excluding prototypes without a boundary

“R&D is out of scope” is too broad. Avoidance: define objective exclusion conditions (synthetic data only, no customer exposure, no production path) and require review at gate points.

Mistake 4: Forgetting geographies and legal entities

AI deployments often differ across regions. Avoidance: explicitly name covered legal entities and regions; document why any are excluded.

Mistake 5: Scope that does not connect to decision rights

If no one owns scope change decisions, scope drifts. Avoidance: assign a governing body and a documented change trigger process tied to procurement and release management.

Enforcement context and risk implications

No public enforcement cases are cited for this requirement, so this guidance focuses on audit defensibility and operational risk. Practically, poor scoping creates three predictable exposures:

  • Control gaps: teams ship AI outside governance.
  • Assurance failure: you cannot demonstrate coverage of the systems that matter most.
  • Third-party exposure: you rely on AI suppliers without clear accountability, contract terms, or monitoring expectations.

Treat scope as a risk boundary. If it’s vague, your risk boundary is vague.

Practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Assign a single accountable owner for AIMS scoping (often GRC with AI governance lead).
  • Draft scoping criteria and triggers; get engineering and procurement buy-in.
  • Stand up an initial AI inventory from app catalogs, procurement records, and platform logs.
  • Publish an interim rule: new AI deployments require inventory registration and role assignment.

Next 60 days (Near-term)

  • Finalize boundaries (legal entities, business units, geographies, environments).
  • Classify each inventory entry as provider/user/both and record third-party dependencies.
  • Draft and approve the formal AIMS Scope Statement as documented information [1].
  • Implement governance gates: procurement intake, release/change approvals, and an exception process.

Next 90 days (Operationalize and stabilize)

  • Validate completeness via workshops with product, security, data, and procurement teams.
  • Run a scope review session with leadership; record decisions and updates.
  • Test scope against reality: pick a sample of AI-enabled tools and confirm they appear in inventory and follow controls.
  • If you manage third-party risk in Daydream, link in-scope AI systems to their third-party dependencies so due diligence and ongoing monitoring are provably aligned to the AIMS scope.

Frequently Asked Questions

Do we need to list every model, or can we scope by product or platform?

Clause 4.3 requires specifying the AI systems covered and making scope available as documented information [1]. In practice, you can scope by product/platform category if you maintain an inventory that unambiguously maps models and AI features to those categories.

We only use third-party AI tools. Are we still in scope?

Yes, you are an AI user, and your scope should state that role and identify the AI systems you use [1]. Include embedded AI features in third-party tools if you configure them or rely on their outputs.

Can R&D or innovation labs be out of scope?

They can be, but only with a tight boundary and a documented rationale. Define when R&D becomes in-scope (for example, real data use, external exposure, or a production path) and route it through a gate.

How do we handle shadow AI (teams using AI without approval)?

Treat it as a scoping completeness risk. Add intake controls through procurement and access management, and require business owners to register AI systems before deployment so your scope stays accurate.

What does “provider vs user” mean for internal tools?

If you build and operate an AI tool for internal users, you are still providing an AI system within the organization. Record the role in a way that matches how you control development and operation, and keep it consistent across your inventory [1].

How often should we review and update the scope?

ISO/IEC 42001 Clause 4.3 does not set a fixed review frequency [1]. Review on change triggers (new system, major change, acquisition, new geography) and on a regular governance cadence your organization can sustain.

Footnotes

  [1] ISO/IEC 42001:2023 Artificial intelligence — Management system
