Information for interested parties
“Information for interested parties” means you must identify every transparency and disclosure obligation tied to your AI systems, map each obligation to the right audience (customers, users, regulators, impacted individuals, and third parties), and consistently deliver the required information through controlled channels. Build an obligations register, assign owners, publish standardized notices, and retain proof of delivery and approvals.
Key takeaways:
- Identify “interested parties” and their AI-related information rights before you publish, deploy, or materially change an AI system.
- Centralize obligations in a register, then operationalize them through templates, release gates, and defined communication channels.
- Keep evidence that disclosures were accurate, approved, current, and actually delivered.
Annex A, Control A.8.5 of ISO/IEC 42001 states a deceptively simple requirement: you must know who needs what information about your AI systems, and you must provide it reliably. For a Compliance Officer, CCO, or GRC lead, this is less about writing a policy and more about building a repeatable machine that prevents missed disclosures, inconsistent statements, and “shadow” communications from product teams, sales, or third parties.
In practice, this control becomes your transparency and disclosure operating model. It forces clarity on questions auditors and customers will ask anyway: Which AI systems do we run? Who is affected? What are we required (or committed) to tell them? Where is that information published? Who approves it? How do we ensure it stays current after model updates, data changes, or vendor swaps?
This page gives requirement-level implementation guidance you can execute quickly: scope, roles, a step-by-step build plan, evidence to retain, common audit pitfalls, and a pragmatic execution roadmap. All guidance ties back to the ISO/IEC 42001 requirement to “identify and fulfil” information obligations for interested parties. 1
Regulatory text
Control requirement (excerpt): “The organization shall identify and fulfil its information obligations to interested parties regarding AI systems.” 1
Operator interpretation:
You need a controlled process to (1) identify all AI-related information obligations, and (2) meet them on time, in the right format, through the right channels, with evidence. “Obligations” include legal/regulatory duties, contractual commitments, and internal commitments (for example, public statements or customer assurances). “Interested parties” includes anyone who can reasonably expect information due to impact, use, oversight, or dependency.
Plain-English meaning (what this control is really asking)
You must answer three questions for every AI system you build, buy, or use:
- Who is entitled to information? (interested parties)
- What exactly must we tell them, and when? (obligations and triggers)
- How do we ensure we actually tell them, consistently, every time? (execution and evidence)
If you cannot prove the organization systematically identifies obligations and follows through, you will fail the control even if some disclosures exist ad hoc.
Who it applies to (entity and operational context)
This applies to any organization that provides, deploys, or uses AI systems, including:
- AI providers shipping AI-enabled products, features, APIs, models, or decision engines to customers or the public.
- AI users deploying AI internally (HR screening, fraud detection, customer support bots, coding assistants) where outputs affect individuals, customers, or regulated processes.
- Organizations relying on third-party AI systems (SaaS, embedded models, outsourced decisioning) where your organization still communicates with impacted parties or customers.
Operationally, A.8.5 lands across:
- Product and engineering (UI notices, documentation, change notes)
- Legal and compliance (regulatory interpretation, contractual commitments)
- Privacy, security, and risk (risk disclosures, incident communications)
- Procurement and third-party risk (vendor-provided disclosures, flow-down terms)
- Customer-facing teams (sales claims, customer notices, support scripts)
What you actually need to do (step-by-step)
Step 1: Define “interested parties” for AI disclosures
Create a simple taxonomy you can reuse:
- End users (people interacting with the AI)
- Impacted individuals (people affected by decisions, even if they never used the system)
- Customers and client admins (B2B buyers, enterprise admins)
- Regulators and supervisory bodies (where applicable to your sector/geo)
- Internal stakeholders (risk owners, process owners, workforce councils where relevant)
- Third parties (vendors, subprocessors, integrators, distributors)
Deliverable: Interested Parties Matrix that lists party type, relationship, and contact channel (portal, email, UI, contract notice address, DPA channel).
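As a minimal sketch, the matrix can be kept as structured data so it is queryable rather than buried in a document. The record shape below is illustrative; field names and party types are assumptions, not prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass

# Illustrative record shape for the Interested Parties Matrix;
# field names and enumerations are assumptions, adapt to your tooling.
@dataclass
class InterestedParty:
    party_type: str       # e.g. "end_user", "regulator", "third_party"
    relationship: str     # how the party relates to the AI system
    contact_channel: str  # portal, email, in-product UI, contract notice address

matrix = [
    InterestedParty("end_user", "interacts with support chatbot", "in-product UI"),
    InterestedParty("customer_admin", "B2B enterprise admin", "customer portal"),
    InterestedParty("regulator", "sector supervisory body", "formal correspondence"),
]

# Example query: which channels must carry a routine disclosure update?
routine_channels = {p.contact_channel for p in matrix if p.party_type != "regulator"}
```

Keeping the matrix machine-readable makes it trivial to answer audit questions like "which channels reach impacted individuals?" without re-reading a policy document.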
Step 2: Build an “AI Information Obligations Register”
This is the core artifact auditors expect. For each AI system (or AI feature), record:
- System name, owner, business purpose, deployment context
- Interested party categories impacted
- Obligation statements (what must be disclosed)
- Trigger events (new launch, model update, material logic change, new data source, incident)
- Delivery method (in-product, privacy notice, documentation, contract exhibit, customer letter)
- Approver(s) (Legal/Compliance, Privacy, Product)
- Evidence type (screenshots, published URL, email log, contract version)
Keep it simple enough that teams update it.
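The register fields above can be sketched as a simple schema. This is a hypothetical structure, not a standard format; the point is that each entry carries system, parties, obligation, triggers, delivery, approvers, and evidence in one place.

```python
from dataclasses import dataclass

# Hypothetical register entry mirroring the fields listed above;
# names and values are illustrative examples only.
@dataclass
class ObligationEntry:
    system_name: str
    owner: str
    parties: list          # interested party categories impacted
    obligation: str        # what must be disclosed
    triggers: list         # events that force a review (launch, model update, ...)
    delivery_method: str   # in-product, privacy notice, contract exhibit, ...
    approvers: list        # e.g. ["Legal", "Privacy", "Product"]
    evidence_types: list   # screenshots, published URL, email log, ...

entry = ObligationEntry(
    system_name="resume-screener",
    owner="HR Product Lead",
    parties=["impacted_individuals", "internal_stakeholders"],
    obligation="Disclose automated screening and how to request human review",
    triggers=["new_launch", "model_update", "new_data_source"],
    delivery_method="candidate notice + careers page",
    approvers=["Legal", "Privacy"],
    evidence_types=["published URL", "approval ticket"],
)

# The register itself is then just entries keyed by system.
register = {entry.system_name: entry}
```

A spreadsheet with these columns works at first; the schema matters more than the tool.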
Step 3: Standardize disclosure content with templates and rules
Create controlled templates so teams don’t improvise:
- AI use notice (what the system does, user interaction expectations)
- Decision/assistance disclosure (is it automated decisioning, recommendation, or support?)
- Data-related statements (types of data used, high-level sources, retention pointers)
- Human oversight statement (where humans review, how to request review if applicable)
- Limitations and safe-use guidance (known failure modes, prohibited use cases)
- Contact/escalation path (how questions or complaints are handled)
Operational rule: customer-facing statements about AI should come from controlled templates or approved language blocks.
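One way to enforce that rule technically is a registry of approved language blocks that teams pull by id, so ad-hoc wording never reaches customers. The block below is a sketch under that assumption; ids and statements are invented examples.

```python
# Sketch of an approved-language registry: customer-facing AI statements
# are drawn only from blocks Legal/Compliance has signed off.
# Block ids and wording are hypothetical.
APPROVED_BLOCKS = {
    "ai_use_notice": "This feature uses AI to suggest responses. You can edit or reject any suggestion.",
    "human_oversight": "A human reviews flagged outputs before action is taken.",
}

def get_statement(block_id):
    """Teams fetch statements by id; an unknown id fails loudly rather than improvising."""
    if block_id not in APPROVED_BLOCKS:
        raise KeyError(f"no approved language block: {block_id}")
    return APPROVED_BLOCKS[block_id]
```

The same idea works in a CMS or docs pipeline: a missing block id should fail the build, not fall back to free text.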
Step 4: Put “disclosure gates” into your delivery lifecycle
Add required checks to points where information obligations often get missed:
- Product launch / feature flag enablement: confirm disclosures exist and are published.
- Model updates: confirm whether change triggers updates to disclosures or documentation.
- New third-party AI component: confirm vendor disclosures are consistent with yours, and that your obligations still get met.
- Material incident involving AI: confirm external communications follow pre-approved playbooks.
Make this a release requirement, not a best-effort task.
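A disclosure gate can be expressed as a small check in the release pipeline: the launch is blocked unless every obligation for the system has current, approved evidence. Function and field names below are illustrative, not a reference implementation.

```python
# Hedged sketch of a release-time disclosure gate. Data shapes are
# assumptions: a register mapping system -> obligations, and an evidence
# log keyed by (system, obligation).
def disclosure_gate(system_name, register, evidence_log):
    """Return (passed, reasons) for a release checkpoint."""
    reasons = []
    for entry in register.get(system_name, []):
        ev = evidence_log.get((system_name, entry["obligation"]))
        if ev is None:
            reasons.append(f"no evidence for: {entry['obligation']}")
        elif not ev.get("approved"):
            reasons.append(f"unapproved disclosure for: {entry['obligation']}")
    return (not reasons, reasons)

register = {"support-bot": [{"obligation": "AI use notice in chat UI"}]}
evidence_log = {
    ("support-bot", "AI use notice in chat UI"): {"approved": True, "url": "https://example.com/notice"},
}

passed, reasons = disclosure_gate("support-bot", register, evidence_log)
```

Wiring a check like this into CI or a feature-flag workflow turns "best-effort task" into an enforced release requirement.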
Step 5: Assign owners and escalation paths
Define RACI that works under pressure:
- System Owner (Product/Business): accountable for completeness, funds the work.
- Compliance/Legal: accountable for obligation interpretation and final approval.
- Privacy: accountable for privacy notice alignment and data-use statements.
- Security: accountable for security-related communications and incident alignment.
- Support/Customer Success: accountable for frontline scripts and intake processes.
If you cannot name an owner per AI system, you cannot “fulfil” obligations reliably.
Step 6: Prove delivery and keep it current
Create a lightweight cadence:
- Periodic review tied to your AI change management cycle (not calendar-only).
- Version control for notices and documentation.
- Archive prior versions and effective dates.
- Track which customers/segments received which notice when required by contract or policy.
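The version-control and archiving points above amount to keeping every notice version with its effective date, so you can reconstruct what was published on any day. The structure below is an assumption for illustration, not a standard schema.

```python
from datetime import date

# Illustrative version history for a single notice: archive prior
# versions with effective dates so auditors can see what was in force when.
notice_history = []

def publish_notice(text, effective):
    notice_history.append({
        "version": len(notice_history) + 1,
        "text": text,
        "effective": effective,
    })

publish_notice("This chat is AI-assisted.", date(2024, 1, 15))
publish_notice("This chat is AI-assisted; a human reviews escalations.", date(2024, 6, 1))

def notice_in_force(on_date):
    """Return the notice version that was effective on a given date, or None."""
    candidates = [n for n in notice_history if n["effective"] <= on_date]
    return max(candidates, key=lambda n: n["effective"]) if candidates else None
```

Answering "what did users see last March?" becomes a lookup instead of an archaeology project.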
Tooling note: many teams track this in spreadsheets at first, but it breaks when disclosures proliferate across products and regions. Daydream is a natural place to manage the obligations register, map obligations to controls and evidence, and run review workflows without chasing approvals in email.
Required evidence and artifacts to retain
Keep artifacts that prove identification and fulfilment:
Core artifacts
- Interested Parties Matrix (approved, dated)
- AI Information Obligations Register (current, with version history)
- Disclosure templates and approved language blocks
- Approval records (ticket/workflow approvals, legal sign-off, change logs)
Fulfilment evidence
- Screenshots/PDF captures of in-product notices and public documentation
- Customer communications logs (email sends, portal announcements, release notes)
- Contract exhibits or order form language addressing AI disclosures
- Support scripts/knowledge base articles used to answer AI transparency questions
Change evidence
- Release/change tickets that show disclosure review occurred
- Mapping of triggers to actions (e.g., “model update” required doc update)
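That trigger-to-action mapping can be captured as data, so change tickets derive their disclosure tasks mechanically. Event names and actions below are hypothetical examples.

```python
# Hypothetical trigger-to-action map: each change event names the
# disclosure work it forces. Keys and actions are examples only.
TRIGGER_ACTIONS = {
    "model_update": ["review system docs", "re-check limitations statement"],
    "new_data_source": ["update data-use statement", "notify Privacy"],
    "vendor_swap": ["re-validate vendor disclosures", "update third-party notice"],
    "incident": ["run external comms playbook"],
}

def actions_for(change_events):
    """Flatten and de-duplicate the disclosure actions required by a set of events."""
    return sorted({a for e in change_events for a in TRIGGER_ACTIONS.get(e, [])})
```

Generating the action list from the same map auditors see keeps the evidence ("model update required doc update") consistent with what teams actually did.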
Common exam/audit questions and hangups
Auditors and certification reviewers often probe:
- “Show me your list of AI systems and where each one discloses AI use to users/customers.”
- “How do you determine which parties are ‘interested’ for an internal AI system?”
- “What triggers a disclosure update, and who approves it?”
- “How do you control statements made by Sales, Support, and Marketing?”
- “Show evidence that disclosures stayed current after model changes.”
- “How do third-party AI components affect your disclosures?”
Hangup to expect: teams confuse “we have a privacy policy” with “we fulfil AI information obligations.” A.8.5 is broader than privacy.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating disclosures as one-time launch work.
  Fix: Bind disclosure review to change management triggers (model/data/vendor changes).
- Mistake: No single source of truth.
  Fix: Maintain one obligations register and point all teams to it; don’t let every product team invent its own approach.
- Mistake: Over-reliance on third-party vendor documentation.
  Fix: Ingest vendor statements, but write your own audience-appropriate disclosures. You own customer communications in most delivery models.
- Mistake: Sales/Support “freestyling” AI claims.
  Fix: Use controlled language blocks, training, and a review process for customer-facing materials.
- Mistake: No proof of delivery.
  Fix: Archive published pages, capture screenshots, retain customer notice logs, and store approvals with timestamps.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Even without case law in your file, the risk is straightforward: missed or inconsistent AI disclosures create exposure across contractual disputes, regulatory inquiries, customer trust issues, and incident response. The operational risk is higher when AI systems are updated frequently, rely on multiple third parties, or are embedded across many workflows.
Practical execution plan (30/60/90-day)
First 30 days (Immediate stabilization)
- Inventory AI systems in scope (include third-party AI used in core processes).
- Stand up the Interested Parties Matrix for your top-risk systems.
- Create the first version of the AI Information Obligations Register.
- Freeze unapproved customer-facing AI claims; publish interim guidance for Sales/Support.
By 60 days (Operationalize fulfilment)
- Publish disclosure templates and approved language blocks.
- Implement disclosure gates in product release and change management workflows.
- Assign system owners and approval routing; document RACI.
- Start capturing fulfilment evidence (screenshots, notices, approval logs) in a single repository.
By 90 days (Make it durable)
- Expand obligations register to full AI portfolio.
- Implement periodic review tied to AI changes (model updates, data changes, vendor swaps).
- Run an internal audit-style review: sample systems, test evidence, verify delivery and versioning.
- Consider moving the register, workflows, and evidence collection into Daydream to reduce manual tracking and missed renewals.
Frequently Asked Questions
Do we need to disclose every internal AI use to employees or impacted individuals?
A.8.5 requires you to identify obligations and fulfil them. For internal AI, determine which groups are “interested parties” based on impact and oversight needs, then document what you will communicate and through which channel. Keep the rationale in your obligations register.
What counts as an “information obligation” if no law explicitly tells us what to disclose?
Obligations can come from contracts, public statements, customer commitments, internal policies, and sector expectations you’ve adopted. Treat anything you promise customers about AI behavior, data use, or oversight as an obligation and control it through approvals and versioning.
How do we handle disclosures when we use a third-party AI model inside our product?
Start with the vendor’s documentation, but don’t stop there. Map what the vendor provides against what your customers and users need to understand, then publish your own disclosures and keep vendor changes as triggers for review.
What evidence is strongest in an ISO audit for this control?
Auditors look for a living obligations register, approval trails, and proof that disclosures were delivered and kept current after changes. Screenshots of production UI notices, archived public docs, and change tickets tied to disclosure updates are usually persuasive.
Who should approve AI disclosures: Legal, Compliance, or Product?
Product should own accuracy of how the system works, while Legal/Compliance should approve obligation interpretation and external commitments. Document the approval chain and enforce it through release gates.
How do we prevent Sales from making unsupported AI claims?
Give Sales controlled, pre-approved language blocks and require review for deviations. Pair that with training and a simple escalation path so deals do not stall when customers ask detailed AI questions.
Footnotes
1. ISO/IEC 42001:2023, Annex A, Control A.8.5 (“Information for interested parties”).
Authoritative Sources
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream