Communication

ISO/IEC 42001 Clause 7.4 requires you to define and run a communication process for your AI management system: who must be informed, what must be communicated, when, through which channels, and how you meet AI transparency obligations. Operationalize it by publishing a communication plan, assigning owners, integrating it into AI lifecycle workflows, and retaining evidence that communications happened.

Key takeaways:

  • Define internal and external AI communications as a controlled process, not ad hoc messaging.
  • Build transparency obligations into product, legal, and customer-support workflows.
  • Keep auditable proof: message approvals, release notes, customer notices, training logs, and incident communications.

Clause 7.4 looks deceptively simple: “determine internal and external communications” for the AI management system. In practice, this requirement is where many AI governance programs either become operational or remain a set of documents. Your auditors will look for two things: (1) you made deliberate decisions about communication needs across the AI lifecycle, and (2) those decisions are executed consistently, especially where transparency to users, customers, regulators, and other third parties is required.

For a CCO or GRC lead, the fastest path is to treat communication as a governed control area with triggers and records, similar to incident communications or privacy notices. That means defining audiences (employees, leadership, customers, impacted individuals, regulators, suppliers), message types (AI use disclosure, model changes, limitations, human oversight, incident updates), and “events” that require communication (go-live, material changes, safety issues, complaints, outages, deprecations).

This page gives you requirement-level implementation guidance you can apply immediately: a plain-English interpretation, applicability, step-by-step build instructions, evidence to retain, exam questions, common failure modes, and an execution plan you can run without waiting for a larger program redesign.

Regulatory text

Requirement (excerpt): “The organization shall determine the internal and external communications relevant to the AI management system, including transparency obligations for AI systems.” 1

What the operator must do:
You must (a) decide what AI-management-related information needs to be communicated, (b) decide who receives it internally and externally, (c) define how and when it is communicated, and (d) ensure transparency obligations for AI systems are explicitly covered and executed. Evidence needs to show the decisions were made intentionally and that communications occur in real operations, not only on paper. 1

Plain-English interpretation (what Clause 7.4 is really asking)

Clause 7.4 expects a repeatable communication system around your AI management system (AIMS). “Determine communications” means you can point to a clear set of rules that answer:

  • What gets communicated (topics and message types)
  • To whom (internal and external audiences)
  • When (triggers, timing, and escalation thresholds)
  • How (channels, templates, approvals, accessibility)
  • By whom (owners and backups)
  • How you prove it happened (records and retention)

The “including transparency obligations” phrase is the sharp edge. Your communications inventory must cover AI transparency where you deploy or provide AI: disclosures, limitations, intended use, human-in-the-loop realities, and material changes that affect reliance. 1

Who it applies to (entity and operational context)

This applies to any organization operating an AI management system, including:

  • AI providers delivering AI-enabled products/services to customers or third parties
  • AI users deploying AI internally (HR, fraud, customer support, engineering copilots) or externally (customer-facing AI)
  • Shared contexts where third parties supply models, data, labeling, hosting, or monitoring

Operationally, you should treat Clause 7.4 as applicable across:

  • AI system lifecycle: design, training, testing, deployment, monitoring, change management, retirement
  • Functions: Product, Engineering/ML, Security, Privacy, Legal, Compliance, Risk, Customer Support, Sales/Account teams, Procurement/TPRM, Internal Audit
  • External stakeholders: customers, end users, impacted individuals, regulators (as applicable), and critical third parties

What you actually need to do (step-by-step)

Step 1: Build a communication inventory tied to AI lifecycle events

Create a controlled list of communication “objects” (not a narrative policy). Minimum set:

  • Internal governance communications (AIMS policy updates, roles, training, KPI/KRI reporting)
  • External AI transparency communications (disclosures, limitations, “how it works” summaries where appropriate)
  • Incident and issue communications (safety events, model failures, major defects, data problems)
  • Change communications (model updates, feature changes, deprecations, new uses, new data sources)
  • Third-party communications (supplier requirements, attestations, security/privacy coordination)

Make each object structured: audience, trigger, owner, channel, approval, record.
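A structured communication object can be sketched as a small record type. This is a minimal illustration, not a prescribed schema; the field names simply mirror the minimum attributes above, and all specific values are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch of a structured communication "object"; fields mirror
# the minimum set above (audience, trigger, owner, channel, approval, record).
# All specific values are hypothetical examples.
@dataclass
class CommunicationObject:
    message_type: str       # e.g. "external_transparency", "incident", "change"
    audience: list[str]     # internal/external recipients
    trigger: str            # lifecycle event that mandates the message
    owner: str              # single accountable owner
    channel: str            # delivery channel (release notes, email, UI notice)
    approval: str           # required sign-off (e.g. "Legal", "Compliance")
    record: str             # where the final message and approvals are retained

disclosure = CommunicationObject(
    message_type="external_transparency",
    audience=["customers", "end_users"],
    trigger="ai_system_go_live",
    owner="Product",
    channel="in_product_notice",
    approval="Legal",
    record="grc_tool",
)
```

Keeping these as records rather than prose makes them filterable and testable: you can list every object whose trigger is a given lifecycle event, or every object missing an approval rule.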

Step 2: Define audiences and owners (RACI that an auditor can test)

Create a RACI that maps:

  • Accountable: typically the AIMS owner or AI governance lead
  • Responsible: comms owner(s) by message type (Product, Legal, Security, Support)
  • Consulted: Privacy, Compliance, Risk, Engineering, TPRM
  • Informed: Exec leadership, frontline teams, customers/partners, other third parties as relevant

Avoid “shared accountability.” Pick one accountable owner per communication type.

Step 3: Establish triggers and thresholds for “must communicate” events

Write explicit triggers so teams do not debate in the moment. Examples you can implement immediately:

  • AI system goes live or expands to a new user group
  • Material change to model behavior, intended use, or limitations
  • A monitoring signal indicates harmful output patterns (or credible complaint signals)
  • A data source or third-party component changes
  • A significant user-facing workflow changes from manual to AI-assisted or AI-driven
  • A confirmed defect affects reliability, integrity, or user trust

Your triggers should align to your existing SDLC / MLOps gates: design review, release approval, change control, and incident response.
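A trigger table of this kind can be expressed as a simple lookup so teams do not debate case by case. The event names and message types below are illustrative assumptions, not a canonical list:

```python
# Hypothetical trigger table: maps lifecycle events to the communications
# they mandate. An empty result means "record a no-comms-needed rationale",
# not silence.
MUST_COMMUNICATE = {
    "go_live": ["external_disclosure", "internal_bulletin"],
    "material_model_change": ["material_change_notice", "internal_bulletin"],
    "harmful_output_signal": ["incident_notice"],
    "data_source_change": ["internal_bulletin"],
    "workflow_becomes_ai_driven": ["external_disclosure"],
    "confirmed_defect": ["incident_notice", "internal_bulletin"],
}

def required_communications(event: str) -> list[str]:
    """Return the mandated message types for a lifecycle event."""
    return MUST_COMMUNICATE.get(event, [])

print(required_communications("material_model_change"))
# ['material_change_notice', 'internal_bulletin']
```

Wiring this lookup into your SDLC/MLOps gates means the gate itself, not an individual, decides whether a communication is due.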

Step 4: Build templates that hard-code transparency obligations

Create templates so transparency is consistent and reviewable. At minimum:

  • User-facing AI disclosure notice (what is AI-assisted, what is automated, how to get help)
  • System limitations and appropriate use (what it should not be used for)
  • Human oversight statement (who reviews, what recourse exists)
  • Material change notice (what changed, expected impact, effective date, rollback/opt-out where available)
  • Incident customer notice (what happened, impact, mitigations, next update cadence)

Keep templates version-controlled. Require Legal/Compliance approval where customer reliance is foreseeable.

Step 5: Integrate communications into operational workflows (where programs succeed or fail)

Add mandatory communication checks into:

  • AI intake / use-case approval: “External transparency required? Y/N. If yes, which template?”
  • Release management: “Any customer-facing behavioral change? If yes, attach notice and approvals.”
  • Change management: “Model/data change requires internal bulletin and external notice decision.”
  • Incident response: “Communication lead assigned, stakeholder list confirmed, draft notice attached.”
  • Third-party onboarding: “Supplier comms path and escalation contacts documented.”

If you run Daydream for third-party risk management and due diligence, connect your AI suppliers to the same comms triggers: contract change notices, incident notifications, and model/provider updates should land in a trackable queue with owners and evidence.
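The checkpoint questions above can be enforced as a release gate that blocks until the communication decision and its evidence are attached. This is a minimal sketch; the ticket field names are assumptions, not a real tool's schema:

```python
# Minimal sketch of a release-gate check enforcing the checkpoint questions
# above. A release ticket is represented as a plain dict; field names are
# hypothetical.
def release_gate_errors(ticket: dict) -> list[str]:
    errors = []
    if ticket.get("customer_facing_change") and not ticket.get("notice_attached"):
        errors.append("Customer-facing change requires an attached notice")
    if ticket.get("notice_attached") and not ticket.get("notice_approved_by"):
        errors.append("Attached notice is missing an approval record")
    if "external_transparency_required" not in ticket:
        errors.append("Transparency decision (Y/N) not recorded")
    return errors

ticket = {"customer_facing_change": True, "notice_attached": False}
print(release_gate_errors(ticket))  # two gaps: missing notice, missing decision
```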

Step 6: Define retention and auditability rules

Clause 7.4 is easy to “say” and hard to “prove.” Define:

  • Where final messages live (ticketing system, GRC tool, knowledge base)
  • How approvals are captured (workflow, e-sign, PR/merge approvals)
  • Minimum record set (see next section)
  • How you show coverage across AI systems (a comms matrix per system)
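The per-system coverage idea above can be checked automatically: every AI system should have either sent communications on record or a documented "no comms needed" rationale. A sketch under those assumptions, with hypothetical system names:

```python
# Sketch of a coverage check over a per-system comms matrix. A system with
# neither sent communications nor a recorded rationale is a coverage gap.
# System names and record fields are hypothetical.
def coverage_gaps(matrix: dict[str, dict]) -> list[str]:
    gaps = []
    for system, record in matrix.items():
        if not record.get("communications") and not record.get("no_comms_rationale"):
            gaps.append(system)
    return gaps

matrix = {
    "support_copilot": {"communications": ["disclosure_v2"]},
    "fraud_scoring": {"no_comms_rationale": "internal-only use, assessed by Legal"},
    "resume_screener": {},
}
print(coverage_gaps(matrix))  # ['resume_screener']
```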

Step 7: Test it with two tabletop exercises

Run one exercise for a material model change and one for an AI incident. Validate:

  • People know their roles
  • You can produce a customer-ready notice quickly
  • You can generate an evidence packet without reconstructing history

Required evidence and artifacts to retain

Auditors typically want both design evidence (the plan) and operating evidence (proof it runs). Maintain:

  • Communication policy or standard for the AIMS (scope, principles, transparency)
  • Communication plan / matrix mapping message types to audience/trigger/owner/channel
  • Templates and approved boilerplate (version-controlled)
  • Approval records (Legal/Compliance/Product/Security sign-offs)
  • Training and internal announcements for roles and processes
  • Release notes and change logs that tie to communications sent
  • Incident communication logs (drafts, timestamps, recipients, update cadence)
  • Third-party communications records (notices received/sent, escalation contacts, contractual notification terms)
  • Evidence per AI system that transparency obligations were assessed and addressed (even if the result is “no external notice needed,” with rationale)

Common exam/audit questions and hangups

Expect these questions, and prepare the evidence in a single packet per AI system:

  1. “Show me how you decided what needs to be communicated externally.” They want a documented decision rule and examples.
  2. “Where are transparency obligations defined for this AI system?” They want system-specific mapping to notices, UI text, documentation, or customer terms.
  3. “How do you handle model changes?” They want triggers, approvals, and proof that a real change produced the required communications.
  4. “Who can approve customer-facing AI statements?” They want clear ownership and controls against uncontrolled claims.
  5. “Prove internal teams were informed.” They want training logs, internal bulletins, or release comms tied to system changes.

Hangup to anticipate: teams often provide a policy but cannot produce operating records for actual releases or incidents.

Frequent implementation mistakes and how to avoid them

  • Mistake: Writing a comms policy with no triggers. Fix: convert prose into a trigger-based matrix tied to lifecycle gates.
  • Mistake: Treating “transparency” as a marketing page. Fix: require product/legal/compliance review, and keep statements consistent with how the system actually behaves.
  • Mistake: No record of “no comms needed” decisions. Fix: require a short rationale in the release ticket or change request.
  • Mistake: Support and Sales are uninformed. Fix: add a mandatory internal enablement notice for customer-facing AI changes, with an FAQ snippet.
  • Mistake: Third-party AI components are excluded. Fix: include supplier notifications and incident reporting obligations in the same communication inventory, then track them in your TPRM workflow (Daydream can hold these as recurring obligations with evidence).

Risk implications (why operators treat this as a control, not a memo)

Poor AI communications create predictable failure modes: inconsistent customer statements, unsupported claims about capabilities, missed notifications during incidents, and internal confusion about approved uses. Those become audit findings because they show the management system is not operating as designed. Clause 7.4 also links directly to trust and reliance risk: if users do not understand AI limitations, they will use outputs beyond intended purpose, and your incident response load increases.

Practical execution plan (30/60/90-day)

First 30 days (stand up the control)

  • Name an accountable owner for AIMS communications.
  • Draft the communication matrix (audience, trigger, owner, channel, approval, record).
  • Inventory current customer-facing AI statements (webpages, UI copy, contracts, sales decks, support macros).
  • Publish initial templates for disclosure, limitations, material change notice, and incident notice.

Days 31–60 (wire into workflows)

  • Add comms checkpoints to AI intake, release management, and incident response workflows.
  • Train Product, Support, and Sales on what they can say and where to find approved language.
  • Implement a central evidence repository and retention rules for approvals and sent communications.
  • Run one tabletop for a material change and capture the evidence packet as your “gold standard.”

Days 61–90 (prove operations and tighten coverage)

  • Select a small set of AI systems and produce complete comms traceability for each (decision → approval → message → record).
  • Add third-party AI supplier communication obligations into your TPRM workflow (notification SLAs, escalation contacts, proof of receipt).
  • Perform an internal audit-style test: pick a recent release and prove the required communications happened, or document the gap and remediation.

Frequently Asked Questions

Do we need external communications for every AI system?

No. Clause 7.4 requires you to determine relevant internal and external communications, then execute them. Your evidence should show a consistent decision method, even when the outcome is “internal-only.” 1

What counts as an AI “transparency obligation” under this clause?

ISO/IEC 42001 Clause 7.4 explicitly calls out transparency obligations for AI systems, so you should define what you disclose about AI involvement, limitations, and relevant changes. Keep it system-specific and tied to user impact. 1

How do we operationalize this without creating a bottleneck in Legal?

Use pre-approved templates and controlled language, then route only high-risk or novel communications for Legal/Compliance approval. Keep the trigger rules tight so routine releases do not require bespoke review.

What evidence is strongest in an ISO audit?

Auditors respond well to traceability: a release/change record that shows the trigger decision, approvals, the final message, and proof it was delivered. Pair that with your communication matrix and template version history.

How do we handle communications when a third-party model provider is involved?

Include third-party notifications and escalation paths in your communication inventory, and bind them to contracts and TPRM processes. Retain provider notices and your internal assessment of impact on your AI systems.

Can we meet Clause 7.4 with a policy alone?

A policy helps, but Clause 7.4 expects you to determine and run internal and external communications relevant to the AI management system. You need operating records showing it works in practice. 1

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream