Customers
To meet the ISO/IEC 42001 “Customers” requirement, you must give customers clear, actionable information about your AI system before and during use: intended use, capabilities, limitations, risks, and operating instructions. Operationalize it by standardizing customer-facing disclosures, aligning them to product changes, and retaining evidence that the right information reached the right customer at the right time. (ISO/IEC 42001:2023 Artificial intelligence — Management system)
Key takeaways:
- Build a repeatable customer disclosure package tied to the AI system’s intended use, limits, and safe operating conditions.
- Treat product changes as disclosure triggers; ship updated information with release notes and customer notices.
- Keep proof of delivery and versioning (what you told customers, when, and for which model/system version).
“Customers” in ISO/IEC 42001 Annex A Control A.10.4 is a communication control, not a marketing exercise. Auditors will look for disciplined, consistent disclosure that helps customers operate the AI system safely and as intended, and that sets accurate expectations about performance boundaries. The fastest path is to define “relevant information” for your AI offerings, package it into standard artifacts (docs, UI notices, and contractual attachments), and wire it into your release and change-management workflow so the information stays current.
This requirement matters most when customers rely on your AI outputs to make decisions, integrate your system into production workflows, or configure the system in ways that can shift risk (for example, prompt templates, fine-tuning, thresholds, or automation rules). In those contexts, incomplete or stale disclosures turn into predictable failure modes: customers use the tool outside intended scope, assume unsupported accuracy, or miss operational constraints that drive incidents.
The goal is simple: customers should know what the AI system is for, what it can and cannot do, what risks exist in normal use, and how to operate it responsibly. (ISO/IEC 42001:2023 Artificial intelligence — Management system)
Regulatory text
Control requirement (Annex A, A.10.4): “When providing AI systems to customers, the organization shall communicate relevant information about the AI system.” (ISO/IEC 42001:2023 Artificial intelligence — Management system)
Operator interpretation: You must have a defined set of customer-facing disclosures for each AI system (or materially different AI capability) and a reliable mechanism to deliver and update those disclosures. “Relevant information” should, at minimum, cover the system’s intended use, capabilities, limitations, risks, and operating instructions, because those elements directly affect safe use and customer outcomes. (ISO/IEC 42001:2023 Artificial intelligence — Management system)
Plain-English interpretation (what the requirement really demands)
Customers need enough information to:
- Decide whether to use your AI system for their purpose (intended use and capabilities).
- Avoid predictable misuse (limitations and constraints).
- Operate it safely and correctly (operating instructions, configuration guidance, and guardrails).
- Understand and manage downside risk (known risks, residual risks, and required mitigations or human review).
This is not limited to a PDF manual. You can satisfy the requirement through documentation, in-product notices, onboarding materials, contractual exhibits, and change notifications, as long as the content is complete and you can prove it reached customers.
Who it applies to (entity + operational context)
This applies to any organization providing an AI system to customers, including:
- AI providers shipping SaaS products, APIs, embedded models, or on-prem deployments.
- Organizations that package AI features into broader products where customers rely on AI outputs.
- AI users acting as resellers or service providers when they “provide” an AI system or AI-enabled service to their own customers. (ISO/IEC 42001:2023 Artificial intelligence — Management system)
Operationally, you should treat this as in-scope when:
- Customers can make decisions or take actions based on AI outputs.
- Customers can configure the AI (thresholds, prompts, agents, automation, fine-tunes).
- The AI system’s performance depends on customer-provided data quality or operating conditions.
- You deliver updates that change behavior, limitations, or risk profile.
What you actually need to do (step-by-step)
1) Define “relevant information” for your AI system
Create a required disclosure checklist that product teams must complete per AI system/capability. Keep it tight and auditable:
- Intended use: what tasks and use cases are supported; explicit out-of-scope uses.
- Capabilities: what the system does, the nature of outputs, dependencies (data, integrations).
- Limitations: known failure modes, boundary conditions, non-determinism where applicable, constraints on inputs.
- Risks: foreseeable risks in normal use (for example, harmful outputs, automation errors, data sensitivity issues) and what mitigations customers must apply.
- Operating instructions: setup, configuration guidance, monitoring expectations, human review requirements, and escalation paths for issues.
Practical tip: write these disclosures so a customer can convert them into operating procedures without guessing.
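A minimal sketch (in Python, with illustrative field names that are not prescribed by ISO/IEC 42001) of how the checklist above can be captured as a structured record, so reviewers can verify every required section is filled before release:

```python
# Hypothetical sketch: the disclosure checklist as a structured record that
# product teams fill per AI system and reviewers can check for completeness.
# Field names are illustrative assumptions, not a standard-mandated schema.
from dataclasses import dataclass, fields

@dataclass
class DisclosurePack:
    system_name: str
    intended_use: str = ""            # supported tasks and explicit out-of-scope uses
    capabilities: str = ""            # what the system does, nature of outputs, dependencies
    limitations: str = ""             # known failure modes, boundary conditions, input constraints
    risks: str = ""                   # foreseeable risks and required customer mitigations
    operating_instructions: str = ""  # setup, monitoring, human review, escalation paths

    def missing_sections(self) -> list[str]:
        """Return required sections that are still empty."""
        return [
            f.name for f in fields(self)
            if f.name != "system_name" and not getattr(self, f.name).strip()
        ]

pack = DisclosurePack(
    system_name="Invoice Triage Assistant",
    intended_use="Route supplier invoices to queues; not for payment approval decisions.",
)
print(pack.missing_sections())  # e.g. ['capabilities', 'limitations', 'risks', 'operating_instructions']
```

Treating the checklist as data rather than prose makes the completeness check something a release gate can run automatically.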
2) Package the information into customer-facing artifacts
Standardize a Customer AI Disclosure Pack with consistent sections. Common components:
- Product documentation page(s) and/or PDF exhibit.
- In-app onboarding and context-sensitive help.
- API documentation and integration guidance.
- Contractual attachment (order form exhibit or DPA addendum) referencing the disclosure pack location and update process.
Avoid scattering critical limitations across blogs, sales decks, and slideware. Put the authoritative version in one controlled location and reference it everywhere else.
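One way to enforce that single controlled location is a small registry every channel (docs, in-app help, contract exhibits) resolves against. A hypothetical sketch, with made-up system names and URLs:

```python
# Hypothetical sketch: map each AI system to its one authoritative disclosure
# pack location. Docs, in-app notices, and contract exhibits link here rather
# than restating limitations. System IDs and URLs are illustrative.
AUTHORITATIVE_DISCLOSURES = {
    "invoice-triage-assistant": "https://docs.example.com/ai/invoice-triage/disclosure",
    "support-reply-drafter": "https://docs.example.com/ai/reply-drafter/disclosure",
}

def disclosure_link(system_id: str) -> str:
    """Resolve the controlled disclosure location; fail loudly if unregistered."""
    try:
        return AUTHORITATIVE_DISCLOSURES[system_id]
    except KeyError:
        raise LookupError(f"No authoritative disclosure registered for {system_id!r}") from None

print(disclosure_link("invoice-triage-assistant"))
```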
3) Put disclosures under document control (versioning + ownership)
Assign owners:
- Product owns intended use, capabilities, and operating instructions.
- Risk/Compliance owns risk statements and required mitigations language.
- Legal/Commercial owns customer-notice terms and contract references.
Minimum controls:
- Version number and effective date.
- Mapping to AI system/model version(s).
- Change log summary (what changed and why).
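A hedged sketch of that minimum document-control metadata as a single record, mapping a disclosure pack version to the model/system versions it describes; the schema is an assumption, not a prescribed format:

```python
# Hypothetical sketch: version metadata for one disclosure pack release,
# tying it to the model versions it covers. Values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DisclosureVersion:
    pack_id: str
    version: str                      # version of the disclosure pack itself
    effective_date: date
    applies_to_model_versions: tuple[str, ...]
    change_summary: str               # what changed and why
    approved_by: tuple[str, ...]      # Product, Risk/Compliance, Legal as required

current = DisclosureVersion(
    pack_id="invoice-triage-assistant",
    version="1.3.0",
    effective_date=date(2025, 3, 1),
    applies_to_model_versions=("triage-model-2025-02",),
    change_summary="Added limitation on handwritten invoices; new human-review threshold.",
    approved_by=("Product", "Risk/Compliance"),
)
print(current.version, current.applies_to_model_versions)
```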
4) Wire disclosures into your change-management lifecycle
Define disclosure triggers. Examples that should force review and possible customer notice:
- Model changes (new model, retraining, fine-tune, new safety layer).
- New automation features (agents, auto-actions, workflow execution).
- Material changes to limitations, supported inputs, or operating constraints.
- New risk discovered (post-incident learning, red-team finding, customer complaint trend).
Make release management require a “Customer disclosure impact” check before shipping.
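As an illustration, the triggers above can be encoded as a simple release-gate check; the tag names and messages are assumptions your own change process would define:

```python
# Hypothetical sketch: a "customer disclosure impact" release gate. Trigger
# names mirror the examples above; they are assumptions, not mandated values.
DISCLOSURE_TRIGGERS = {
    "model_change",         # new model, retraining, fine-tune, new safety layer
    "new_automation",       # agents, auto-actions, workflow execution
    "limitation_change",    # supported inputs or operating constraints changed
    "new_risk_identified",  # incident learning, red-team finding, complaint trend
}

def disclosure_impact(change_tags: set[str]) -> str:
    """Classify a release: block until disclosures are reviewed, or pass through."""
    hits = change_tags & DISCLOSURE_TRIGGERS
    if hits:
        return ("REVIEW REQUIRED: update disclosure pack and assess customer notice "
                f"({', '.join(sorted(hits))})")
    return "No disclosure impact recorded; document the rationale in the release record."

print(disclosure_impact({"model_change", "ui_copy_update"}))
```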
5) Deliver the information to customers and prove delivery
Decide how customers will receive the disclosure pack:
- In-product acceptance (checkbox + link to disclosure pack).
- Customer onboarding email with tracked delivery and links.
- Contract signature package including the disclosure exhibit.
- Support portal announcement with access logs (where available).
You need an evidence trail that shows who received what, when, and which version.
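A minimal sketch of a per-delivery evidence record that can answer those three questions; the channel names and fields are illustrative assumptions:

```python
# Hypothetical sketch: one evidence record per delivery event, so audits can
# answer "who received what, when, and which version". Fields are illustrative.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DeliveryRecord:
    customer_account: str
    pack_id: str
    pack_version: str
    channel: str        # e.g. "in_app_acceptance", "onboarding_email", "contract_exhibit"
    delivered_at: datetime
    acknowledged: bool  # True for acceptance/acknowledgement channels; False for notice-only

evidence = DeliveryRecord(
    customer_account="acme-corp",
    pack_id="invoice-triage-assistant",
    pack_version="1.3.0",
    channel="in_app_acceptance",
    delivered_at=datetime.now(timezone.utc),
    acknowledged=True,
)
print(evidence)
```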
6) Train customer-facing teams and support escalation
Sales engineering, onboarding, and support should know:
- How to explain intended use and limitations without improvising.
- When to warn customers a use case is out of scope.
- How to route customer risk questions to the right owner.
A lightweight internal playbook reduces “shadow disclosures” that contradict controlled documentation.
Required evidence and artifacts to retain
Keep artifacts in a system your audit team can search and export:
Core artifacts
- Customer AI Disclosure Pack (current + prior versions) with effective dates.
- Approval record (Product, Risk/Compliance, and any required legal review).
- Release/change records showing disclosure impact assessment.
- Customer communications: onboarding emails, release notes, customer notices, in-app prompts.
Proof of delivery
- Contract exhibit inclusion (signed order form or MSA addendum reference).
- In-app acceptance logs or account-level acknowledgement records.
- Support portal announcement logs or email delivery logs.
Operational backstops
- Support macros and escalation runbooks for AI limitations/risk questions.
- Customer-facing incident communications templates (if applicable to your operating model).
Common exam/audit questions and hangups
Expect auditors to probe these areas:
- “Show me what you tell customers about intended use, limitations, and risks.” (ISO/IEC 42001:2023 Artificial intelligence — Management system)
- “How do you ensure disclosures stay current after model updates?”
- “How do you confirm customers received the updated information?”
- “Where do you document operating instructions for safe use?”
- “Who approves customer-facing risk statements, and where is the evidence?”
Hangup to anticipate: teams often have decent docs, but no linkage to change management and no proof of customer delivery.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Disclosures are marketing copy. Fix: Require concrete limitations, operating constraints, and out-of-scope uses in the disclosure template.
- Mistake: “Relevant information” exists, but it’s fragmented. Fix: Publish a single authoritative disclosure pack and reference it consistently.
- Mistake: No customer notice process for material changes. Fix: Add disclosure impact review as a release gate; define what requires proactive customer notice versus passive documentation update.
- Mistake: No evidence of delivery. Fix: Implement acknowledgement, contract exhibits, or documented onboarding delivery. Keep logs.
- Mistake: Support contradicts the docs. Fix: Train customer-facing teams and provide approved scripts/macros for common questions.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so you should treat risk primarily as certification/audit failure risk and customer harm risk. Control breakdowns commonly lead to:
- Misuse outside intended scope.
- Customer reliance on outputs without appropriate review.
- Disputes over expectations (what the system was supposed to do versus what it can do).
- Incident response friction because customers were never told their operational responsibilities.
For a CCO/GRC lead, the practical implication is that customer communication is a first-line risk control. If you cannot prove disclosure quality and delivery, you will struggle to attribute downstream failures to customer misuse or misconfiguration.
Practical 30/60/90-day execution plan
First 30 days (Immediate stabilization)
- Inventory AI systems/capabilities provided to customers.
- Draft a standard Customer AI Disclosure Pack template (intended use, capabilities, limitations, risks, operating instructions). (ISO/IEC 42001:2023 Artificial intelligence — Management system)
- Identify the authoritative publication channel (docs site, customer portal, in-product).
- Set owners and an approval workflow for customer-facing AI risk statements.
Next 60 days (Operationalize and connect to change)
- Publish disclosure packs for highest-risk customer-facing AI systems first (those driving decisions or automation).
- Add a “customer disclosure impact” checkpoint to release/change workflows.
- Implement a delivery mechanism and logging approach (contract exhibit, in-app acknowledgment, onboarding message).
- Train Sales Engineering and Support with a short “what we can/can’t claim” guide tied to the disclosure packs.
By 90 days (Evidence-ready and scalable)
- Complete disclosure packs for all customer-provided AI systems in scope.
- Run an internal audit: sample customer accounts and prove what disclosures they received and which version (see the sketch after this list).
- Add periodic review triggers tied to model change, new risk discovery, and major feature releases.
- Consider a tooling workflow (for example, Daydream) to track third-party and customer-facing obligations, route approvals, and keep evidence packaged for audits without chasing screenshots and email threads.
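For the internal audit step above, a hypothetical sketch that samples delivery evidence and flags accounts missing an acknowledged, current disclosure version; the record shapes are assumptions consistent with the earlier sketches:

```python
# Hypothetical audit sketch: given delivery-evidence records and the current
# disclosure version per system, list sampled accounts with no acknowledged
# delivery of the current version. Data shapes and values are illustrative.
deliveries = [
    {"account": "acme-corp", "pack_id": "invoice-triage-assistant", "version": "1.3.0", "acknowledged": True},
    {"account": "globex", "pack_id": "invoice-triage-assistant", "version": "1.1.0", "acknowledged": True},
]
current_versions = {"invoice-triage-assistant": "1.3.0"}

def audit_gaps(sampled_accounts: list[str]) -> list[str]:
    """Return findings for accounts without an acknowledged, current disclosure."""
    findings = []
    for account in sampled_accounts:
        for pack_id, current in current_versions.items():
            ok = any(
                d["account"] == account
                and d["pack_id"] == pack_id
                and d["version"] == current
                and d["acknowledged"]
                for d in deliveries
            )
            if not ok:
                findings.append(f"{account}: no acknowledged delivery of {pack_id} v{current}")
    return findings

print(audit_gaps(["acme-corp", "globex"]))  # flags globex, which only acknowledged v1.1.0
```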
Frequently Asked Questions
Do we have to disclose model architecture or training data details to comply?
The requirement is to communicate “relevant information” for safe and intended operation, not to publish proprietary internals. Focus on intended use, limitations, risks, and operating instructions customers need to operate the system responsibly. (ISO/IEC 42001:2023 Artificial intelligence — Management system)
What counts as a “customer” if we provide an API?
API consumers are customers for this control. Your API docs and developer portal should contain the disclosure pack elements, and you should have a way to notify customers when changes materially affect limitations, risks, or operating requirements.
How do we handle frequent model updates without spamming customers?
Classify changes by impact. Minor changes can be handled via updated docs and release notes; material changes to intended use, limitations, or risk should trigger proactive notice using your defined customer-notice channel.
Can we satisfy this requirement with Terms of Service language alone?
Usually not. Legal terms help set expectations, but customers also need practical operating instructions and clear limitation/risk disclosures in usable formats (docs, onboarding, in-product guidance). (ISO/IEC 42001:2023 Artificial intelligence — Management system)
What evidence is most persuasive in an audit?
Version-controlled disclosures plus proof of delivery (signed exhibit reference, in-app acknowledgements, or logged onboarding communications) and a change-management record showing you reassess disclosures when the AI system changes.
We sell through resellers and partners. Who must communicate the information?
If you provide the AI system to customers through a channel, you still need a reliable mechanism to ensure the customer receives the relevant information. Contractually require downstream delivery and keep evidence (partner playbooks, required disclosure links, and partner attestations where practical).
Authoritative Sources
- ISO/IEC 42001:2023 Artificial intelligence — Management system (Annex A, Control A.10.4, Customers)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream