Communication of incidents
ISO/IEC 42001 Annex A Control A.8.4 requires you to establish and implement a plan for communicating AI system incidents, with clear roles, decision criteria, and notification procedures. To operationalize it quickly, define what counts as an AI incident, pre-approve internal and external message paths, and prove you can execute the plan through records from real events and exercises. 1
Key takeaways:
- Define “AI system incident” for your environment, then map it to specific audiences and message triggers. 1
- Assign accountable owners and backups, and set a governed workflow from detection to comms approval to distribution. 1
- Retain evidence that the plan exists, is implemented, and is used: logs, approvals, templates, and post-incident reviews. 1
“Communication of incidents” sounds like a PR problem until you run an AI incident that touches legal, security, product, and customer trust at the same time. ISO/IEC 42001 Annex A Control A.8.4 pushes you to treat AI incident communication as an operational capability: you need a written plan, and you need to be able to execute it under pressure. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path is to avoid debating messaging style and focus on mechanics: who must be told, who decides, what gets said, and how you prove it happened. Your plan should cover internal escalation (so the right people know quickly), external notifications (so customers, affected third parties, or other stakeholders receive accurate information), and governance controls (so you don’t send inconsistent, speculative, or unauthorized statements). 1
This page gives requirement-level implementation guidance you can hand to security, engineering, and communications teams. It is written to help you pass an ISO/IEC 42001 audit and reduce the real risk of delay, confusion, and conflicting statements during AI system incidents. 1
Regulatory text
Requirement (Annex A, Control A.8.4): “The organization shall establish and implement a plan for communicating AI system incidents.” 1
Operator meaning: You must (1) document an AI incident communication plan and (2) run your incident communications according to that plan in practice. Auditors will look for evidence that the plan is real, owned, current, and followed during incidents and drills. 1
Plain-English interpretation
You need a playbook for AI incidents that answers four questions without improvisation:
- What happened (and what do we call it)? A defined taxonomy for AI system incidents in your context.
- Who needs to know? Internal stakeholders plus any external audiences relevant to your AI use.
- Who decides and approves? Named roles, backups, and an approval path.
- How do we communicate consistently? Channels, templates, and a record of what was sent and when. 1
This control is narrower than “general breach notification.” It is about AI system incidents, which may include safety events, harmful outputs, model performance drift that causes real-world impact, data or model integrity issues, prompt injection or model manipulation, and operational failures in AI components. Your plan should define the incident types you care about, then map each type to communication actions. 1
Who it applies to (entity and operational context)
ISO/IEC 42001 applies across AI lifecycle roles, so A.8.4 is relevant to:
- AI providers building or offering AI systems.
- AI users deploying AI systems in business processes.
- Organizations where AI is embedded in products, internal operations, decision support, or customer-facing workflows. 1
Operationally, this requirement bites hardest when:
- Your AI outputs affect customers, employees, or other third parties.
- You depend on third parties for model APIs, training data, model hosting, monitoring, or labeling.
- You run multiple AI systems with different owners, making communications inconsistent unless centralized. 1
What you actually need to do (step-by-step)
Use the following build sequence. It is designed so you can get to “auditable” fast, then refine.
1) Define “AI system incident” and severity tiers
Create a short definition that aligns with your environment and your AI risk assessment process. Then create severity tiers that drive communication actions. Keep it operational: an on-call engineer should be able to classify an event quickly (see the classification sketch after the list below). 1
Minimum outputs
- AI incident definition (one paragraph)
- Severity tiers with examples (table)
- “Non-incidents” list (to reduce noise)
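To make the tiers concrete for on-call responders, here is a minimal Python sketch of a classification aid. The tier names (SEV1–SEV3), incident categories, and decision logic are illustrative assumptions, not terms from ISO/IEC 42001; replace them with definitions from your own AI risk assessment.

```python
# Minimal sketch of an on-call AI incident classification aid. Tier names,
# categories, and thresholds are hypothetical examples, not ISO/IEC 42001
# terms -- define your own against your AI risk assessment.
from dataclasses import dataclass
from enum import Enum


class Severity(Enum):
    SEV1 = "SEV1"  # e.g., harmful output reached customers; external comms likely
    SEV2 = "SEV2"  # e.g., uncontained issue with business impact, no customer reach yet
    SEV3 = "SEV3"  # e.g., contained anomaly; internal notice only


@dataclass
class AIIncident:
    category: str          # e.g., "harmful_output", "model_drift", "prompt_injection"
    customer_impact: bool  # did the event reach customers or third parties?
    contained: bool        # was the event contained before wider impact?


def classify(incident: AIIncident) -> Severity:
    """Return a severity tier an on-call engineer can apply consistently."""
    if incident.customer_impact:
        return Severity.SEV1
    if not incident.contained:
        return Severity.SEV2
    return Severity.SEV3


print(classify(AIIncident("model_drift", customer_impact=False, contained=False)))
# -> Severity.SEV2
```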
2) Identify audiences and required communication paths
List the stakeholder groups that may need notification. Don’t overcomplicate; focus on groups that need to act or are directly affected:
- Internal: AI product owner, Security/IR, Legal, Privacy, Compliance, Customer Support, Executive sponsor, affected business unit.
- External: impacted customers, impacted third parties in your supply chain, relevant partners, and if appropriate, public statements via a controlled channel. 1
Create a simple mapping: incident type + severity → audience(s) → channel(s) → approver(s).
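One way to make that mapping unambiguous is to encode it as a lookup table that responders and tooling share. The sketch below uses hypothetical incident types, tiers, and role names; the design point is that an unmapped combination fails loudly instead of being improvised mid-incident.

```python
# Minimal sketch of the step-2 comms matrix. All incident types, tiers,
# audiences, channels, and approver roles below are hypothetical examples.
COMMS_MATRIX = {
    # (incident_type, severity): pre-approved communication path
    ("harmful_output", "SEV1"): {
        "audiences": ["Security/IR", "Legal", "AI product owner", "impacted customers"],
        "channels": ["incident bridge", "status page", "customer email"],
        "approvers": ["Legal/Compliance", "AI system business owner"],
    },
    ("model_drift", "SEV2"): {
        "audiences": ["AI product owner", "affected business unit"],
        "channels": ["incident ticket", "internal chat"],
        "approvers": ["AI system business owner"],
    },
}


def comms_plan(incident_type: str, severity: str) -> dict:
    """Look up the pre-approved path; fail loudly if the pair is unmapped."""
    try:
        return COMMS_MATRIX[(incident_type, severity)]
    except KeyError:
        raise LookupError(
            f"No comms mapping for ({incident_type}, {severity}); "
            "escalate to the Incident Comms Owner."
        )
```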
3) Assign roles, decision rights, and backups
Your plan should name (by role/title) who does what:
- Incident Comms Owner: accountable for executing the plan.
- Technical Incident Lead: provides facts and technical timeline.
- Approvers: typically Legal/Compliance and the business owner for the AI system.
- Customer-facing lead: Support/Success lead for customer notifications.
- Backup roles: avoid single points of failure. 1
Decision rights matter more than org charts. Write down who can authorize an external notification and who can authorize a public statement.
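Written decision rights can be as small as an action-to-approver table. A minimal sketch, assuming hypothetical role titles:

```python
# Minimal sketch of written decision rights. Role titles are hypothetical;
# the encoded rule: nothing external goes out without an authorized approver.
DECISION_RIGHTS = {
    # action: roles authorized to approve it (primary first, then backups)
    "internal_notice": ["Incident Comms Owner", "Technical Incident Lead"],
    "customer_notification": ["Legal/Compliance", "AI system business owner"],
    "public_statement": ["Chief Compliance Officer", "General Counsel"],
}


def can_approve(role: str, action: str) -> bool:
    """Check whether a role may authorize a given communication action."""
    return role in DECISION_RIGHTS.get(action, [])


assert can_approve("General Counsel", "public_statement")
assert not can_approve("Technical Incident Lead", "customer_notification")
```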
4) Build message templates and minimum content requirements
Pre-write templates for common scenarios. Your goal is consistent, non-speculative communication that can be sent quickly; a template sketch follows the content checklist below.
Template library to maintain
- Internal “AI incident declared” notice (to leadership + responders)
- Customer notification (impact, mitigations, support path)
- Third-party notification (if they need to act or coordinate)
- Public holding statement (if you choose to use public comms) 1
Minimum content checklist
- What is confirmed vs under investigation
- Scope (systems, users, processes)
- Customer/third-party actions (if any)
- Mitigations underway
- Next update cadence and contact path
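A minimal customer-notification sketch covering that checklist, built on Python's string.Template (all field names and wording are illustrative): strict substitution raises an error if any field is left blank, which keeps drafts complete and separates confirmed facts from open questions.

```python
# Minimal sketch of a customer notification template. Fields and wording are
# illustrative; route any real draft through your approvers before sending.
# Template.substitute() raises KeyError if a field is missing, so a draft
# cannot go out with a blank spot.
from string import Template

CUSTOMER_NOTICE = Template(
    "Subject: Notice regarding $system_name\n\n"
    "What we have confirmed: $confirmed_facts\n"
    "Still under investigation: $open_questions\n"
    "Scope: $scope\n"
    "Actions we ask you to take: $customer_actions\n"
    "Mitigations underway: $mitigations\n"
    "Next update: $next_update. Contact: $contact_path\n"
)

draft = CUSTOMER_NOTICE.substitute(
    system_name="Example AI Assistant",
    confirmed_facts="a subset of responses between 09:00 and 10:30 UTC were inaccurate",
    open_questions="root cause of the model regression",
    scope="tenants using the v2 summarization feature",
    customer_actions="review summaries generated in the affected window",
    mitigations="the affected model version has been rolled back",
    next_update="within 24 hours",
    contact_path="support@example.com",
)
print(draft)
```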
5) Integrate with incident response and case management
A.8.4 fails in practice when it’s a standalone Word document. Tie it to your incident workflow (a record-keeping sketch follows this list):
- Add an “AI incident communications” task group in your incident tickets.
- Require comms approvals as ticketed steps with timestamps and approver identity.
- Keep all outbound messages attached to the incident record. 1
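A minimal sketch of what such a ticketed comms record could look like, assuming a hypothetical ticket structure. The fields mirror what auditors ask for: approver identity, timestamp, channel, and a copy of the message itself.

```python
# Minimal sketch of a comms evidence record attached to an incident ticket.
# The ticket structure and field names are hypothetical examples.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class CommsRecord:
    incident_id: str
    message_type: str  # e.g., "customer_notification"
    channel: str       # e.g., "customer email"
    approver: str      # identity of the person who approved release
    body: str          # the outbound message, kept with the record
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def attach_to_ticket(ticket: dict, record: CommsRecord) -> None:
    """Append the comms record to the incident ticket's evidence trail."""
    ticket.setdefault("comms_log", []).append(asdict(record))


ticket = {"id": "INC-1042"}
attach_to_ticket(ticket, CommsRecord(
    incident_id="INC-1042",
    message_type="customer_notification",
    channel="customer email",
    approver="legal-oncall@example.com",
    body="(approved customer notice text)",
))
```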
If you use a GRC system such as Daydream, treat this control like a living requirement: link the plan, templates, training, and incident records directly to the control, then map ownership so evidence collection is continuous rather than an audit-season scramble.
6) Train and run exercises
Implementation means people can execute the plan. Run tabletop scenarios that force communication decisions, not just technical containment. Capture decisions and action items, then update the plan. 1
7) Post-incident review and plan maintenance
After any AI incident (or exercise), perform a structured review:
- What did we communicate, to whom, through which channels, and why?
- Where did approvals or facts bottleneck?
- Did we contradict ourselves across channels?
- What templates, stakeholder lists, or triggers need updating? 1
Required evidence and artifacts to retain
Auditors typically want “plan + proof of execution.” Keep these artifacts ready:
Core documents
- AI system incident communication plan (version-controlled)
- Stakeholder contact lists (role-based; reviewed on a schedule you set)
- Message templates and content checklists 1
Operational proof
- Incident tickets showing comms tasks completed, timestamps, and approvals
- Copies of internal notices, customer notifications, and third-party communications
- Exercise records: scenario, attendee list, outputs, action items
- Post-incident review reports and plan updates 1
Governance linkage
- RACI chart or responsibility matrix for incident communications
- Training records for key roles 1
Common exam/audit questions and hangups
Expect these questions, and pre-build your answers with evidence:
- “Show me the plan. Who owns it, and when was it last updated?” 1
- “How do you define an AI system incident? Give examples of what qualifies.” 1
- “Walk me through your last AI-related incident. Who did you notify and why?” 1
- “How do you prevent unauthorized external communication during incidents?” 1
- “How do third parties fit into your incident communications?” 1
Hangup to anticipate: teams produce a generic security incident comms plan but cannot show AI-specific triggers, owners, or examples. Your plan must explicitly cover AI system incidents. 1
Frequent implementation mistakes and how to avoid them
Mistake 1: No AI-specific classification.
Fix: Define AI incident categories tied to your AI systems and failure modes. 1
Mistake 2: Approval chaos.
Fix: Write explicit decision rights for external messaging and pre-identify approvers and backups. 1
Mistake 3: Contact lists are stale.
Fix: Use role-based distribution lists and embed list review into your operational cadence. 1
Mistake 4: Communications live outside the incident record.
Fix: Require all messages and approvals to be attached to the incident ticket/case. 1
Mistake 5: Templates that speculate.
Fix: Force “confirmed vs unconfirmed” language and require technical lead sign-off on facts. 1
Enforcement context and risk implications
No public enforcement cases are cited in the sources for this requirement, so manage to the standard and your own risk profile rather than trying to “pattern match” regulators. Practically, weak incident communication creates second-order risk: inconsistent statements, delayed customer support actions, contractual disputes with third parties, and loss of executive confidence in AI governance. 1
A practical 30/60/90-day execution plan
Use this as an operator’s rollout plan; tune scope to your AI footprint.
First 30 days (stand up the minimum viable plan)
- Inventory AI systems in scope for incident communications (customer-facing first).
- Draft and approve the AI incident communication plan (owner, triggers, audiences, approvals). 1
- Create an initial stakeholder list and role-based contact groups.
- Build a template pack for internal and external notifications.
- Integrate a comms task checklist into incident tickets.
Days 31–60 (prove implementation and close gaps)
- Run at least one tabletop focused on communication decisions and approvals.
- Validate the “facts pipeline”: how technical details get confirmed and released.
- Add third-party coordination steps for key AI dependencies (model/API providers, hosting, data suppliers).
- Establish a lightweight review cycle for templates and contact lists. 1
Days 61–90 (operational hardening)
- Expand coverage to additional AI systems and business units.
- Train on-call roles and backups using realistic scenarios.
- Set up evidence automation: ensure every incident record captures sent messages and approvals.
- Add post-incident review triggers and track actions to closure in your GRC workflow (including Daydream if you use it). 1
Frequently Asked Questions
What counts as an “AI system incident” for this requirement?
ISO/IEC 42001 A.8.4 does not define the term in the excerpt provided, so you must define it for your environment and document it in the plan. Use a definition that an on-call engineer can apply consistently and that drives clear communication actions. 1
Do we need a separate plan from our security incident response communications plan?
You can extend your existing incident communications plan, but you must explicitly cover AI system incidents, including AI-specific triggers, owners, and templates. Auditors will look for AI-specific implementation rather than a generic security-only document. 1
Who should approve external communications during an AI incident?
Put decision rights in writing and assign role-based approvers (commonly Legal/Compliance plus the AI system owner). The key is clarity and backups so you can act without improvising approval paths mid-incident. 1
How do we handle third-party involvement (model APIs, hosting, data providers)?
Treat third parties as explicit stakeholders in your plan: define when you notify them, what information you share, and how you coordinate joint statements if needed. Keep the communications in the incident record so you can show the plan was followed. 1
What evidence will an auditor expect to see that we “implemented” the plan?
They will typically expect incident records (or exercises) showing communications executed with approvals, timestamps, and copies of what was sent. Plan documents alone usually do not demonstrate implementation. 1
We haven’t had an AI incident yet. How do we show compliance?
Run a tabletop exercise and retain the artifacts: scenario, decisions, drafted communications, and plan updates from lessons learned. That gives you implementation evidence without waiting for a real incident. 1
Footnotes
1. ISO/IEC 42001:2023 Artificial intelligence — Management system