Roles, responsibilities and authorities
ISO/IEC 42001 Clause 5.3 requires top management to assign and communicate clear responsibilities and authorities for every relevant role in your AI management system, so decisions are owned, escalation paths work, and controls do not fail in gaps between teams. Operationalize it by defining an AI governance RACI, documenting decision rights, and proving staff know who does what.
Key takeaways:
- Assign named owners and decision rights across the AI lifecycle, not just a high-level “AI committee.”
- Communicate roles in ways people actually use (job descriptions, onboarding, SOPs, tooling).
- Keep evidence that assignments exist, are current, and are understood (RACI, charters, approvals, training attestations).
“Roles, responsibilities and authorities” is a simple requirement that fails in predictable ways: roles exist on paper, but approval rights are unclear; multiple teams believe someone else owns model risk; production changes happen without a defined authority; incidents stall because escalation is informal. ISO/IEC 42001:2023 Clause 5.3 forces you to remove that ambiguity by making top management accountable for assigning and communicating responsibilities and authorities for relevant AI management system roles.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat this like a governance control with auditable artifacts: define the roles that matter for AI risk and compliance, map them to lifecycle activities (data, development, testing, deployment, monitoring, change, incident response, retirement), assign accountable owners and decision makers, then publish it in the same channels your organization already uses to run work. Your goal is operational clarity: who can approve a new AI use case, who can block release, who owns monitoring, who signs off on risk acceptance, and who is on the hook when a third party model changes.
This page gives requirement-level implementation guidance you can execute quickly and defend in an audit.
Regulatory text
Requirement (verbatim): “Top management shall ensure that the responsibilities and authorities for relevant roles within the AI management system are assigned and communicated within the organization.”[^1]
Operator interpretation (plain English)
Top management must make sure:
- the AI management system has clearly defined roles (not just teams),
- each role has assigned responsibilities (what they must do) and authorities (what they can decide/approve/stop), and
- those assignments are communicated so the organization can follow them in day-to-day work.[^1]
What auditors probe is not your org chart. They probe whether decisions and control steps across the AI lifecycle have an unambiguous owner and decision authority, and whether staff can name them without guessing.
Who it applies to
Entity scope
This requirement applies to organizations implementing an AI management system, including:
- AI providers building and offering AI systems
- AI users deploying or operating AI in business processes
- Organizations using AI internally (including shared services, analytics, and automation teams)
Operational context (where it bites)
You need this most when:
- AI use cases cut across functions (Product, Engineering, Legal, Security, HR, Procurement).
- A third party supplies models, data, or AI-enabled software and you still own outcomes.
- You have multiple model types (rules, ML, GenAI) with different risk owners.
- Releases happen through CI/CD and “authority to deploy” is unclear.
What you actually need to do (step-by-step)
Step 1: Define “relevant roles” for your AI management system
Start from lifecycle activities, then map roles to them. A practical baseline set of roles (adapt names to your org):
- Top management sponsor (exec owner for the AI management system)
- AI governance lead (program owner; often GRC/Compliance)
- Model/product owner (business accountability for a specific AI system)
- Data owner (authorizes data use and sharing)
- Engineering lead / MLOps (build and deploy authority)
- Information security (security controls, threat modeling, access decisions)
- Privacy (privacy risk assessment, data minimization requirements)
- Legal/Regulatory (regulatory interpretation, contract clauses for third parties)
- Risk management / model risk (risk assessment method, sign-off gates)
- QA/Validation (testing authority, release readiness input)
- Procurement / third-party risk (third party due diligence, ongoing monitoring)
- Incident manager (AI incident triage and response coordination)
- Internal audit (independent assurance role)
Output: a role inventory with short purpose statements.
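If you keep this inventory in a tool rather than a spreadsheet, it helps to make the structure explicit. A minimal sketch in Python, where the role names, purposes, and holders are illustrative assumptions, not prescribed by the standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    name: str     # role title as used in your organization
    purpose: str  # one-line purpose statement
    holder: str   # named individual or staffed group, not just a team label

# Illustrative baseline inventory; adapt names and holders to your org.
ROLE_INVENTORY = [
    Role("Top management sponsor", "Executive owner for the AI management system", "VP, Operations"),
    Role("AI governance lead", "Program owner for AI governance", "Head of GRC"),
    Role("Data owner", "Authorizes data use and sharing", "Director, Data Platform"),
]
```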
Step 2: Assign responsibilities and authorities using a RACI-plus model
A basic RACI is necessary but often insufficient. Add decision rights and stop authority. Build a matrix for core AI activities, for example:
- Intake and approval of new AI use cases
- Data sourcing and labeling approval
- Model development and documentation
- Risk assessment and control selection
- Validation/testing sign-off
- Deployment approval and rollback authority
- Monitoring ownership (drift, performance, safety issues)
- Change management (retraining, prompt changes, vendor model updates)
- Third party onboarding and renewal approvals
- Incident declaration, severity assignment, notifications, and closure
- Retirement/decommission approval and data retention actions
For each activity, document:
- Accountable owner (A)
- Responsible executor (R)
- Consulted reviewers (C)
- Informed stakeholders (I)
- Authority: who can approve, who can block, who can accept residual risk, and who can grant exceptions
Practical rule: If a control step is mandatory (e.g., pre-release risk review), it must have a named accountable role and a named authority for go/no-go. If it does not, it will be skipped under delivery pressure.
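That practical rule is checkable. Below is a minimal sketch in Python, with hypothetical activity and role names, of a RACI-plus record plus a validation pass that flags any mandatory control step lacking an accountable owner or a named go/no-go authority:

```python
from dataclasses import dataclass, field

@dataclass
class RaciPlus:
    activity: str
    accountable: str           # exactly one accountable owner (A)
    responsible: list[str]     # executors (R)
    consulted: list[str] = field(default_factory=list)  # reviewers (C)
    informed: list[str] = field(default_factory=list)   # stakeholders (I)
    approve: str | None = None   # who can approve (go)
    block: str | None = None     # who can stop (no-go)
    mandatory: bool = True

def validate(matrix: list[RaciPlus]) -> list[str]:
    """Return findings for mandatory steps missing an owner or go/no-go authority."""
    findings = []
    for row in matrix:
        if row.mandatory and not row.accountable:
            findings.append(f"{row.activity}: no accountable owner")
        if row.mandatory and (row.approve is None or row.block is None):
            findings.append(f"{row.activity}: go/no-go authority not named")
    return findings

# Hypothetical example rows.
matrix = [
    RaciPlus("Pre-release risk review", accountable="AI governance lead",
             responsible=["Risk management"], approve="Risk review board",
             block="Risk review board"),
    RaciPlus("Deployment approval", accountable="Model/product owner",
             responsible=["Engineering lead"], approve="Model/product owner",
             block=None),  # missing stop authority -> flagged
]
print(validate(matrix))  # ['Deployment approval: go/no-go authority not named']
```

Running a check like this on every matrix edit keeps gaps from creeping in after reorganizations.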
Step 3: Formalize governance forums and escalation paths
Define forums where decisions happen, not just where updates are shared:
- AI risk review board (or equivalent): decision scope, quorum, voting/approval method, and override rules.
- Exception/risk acceptance process: who can accept which risks and under what conditions.
- Incident escalation: who declares an AI incident, who leads, who communicates externally.
Document escalation triggers that force action (examples: harmful outputs, regulatory complaint, major vendor model change, unexpected use by a sensitive population). Avoid “email the group alias” as your only escalation plan.
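Escalation triggers can also live as a small machine-readable table so routing does not depend on memory. A minimal sketch, where the trigger names, severities, and escalation targets are illustrative assumptions:

```python
# Illustrative escalation trigger table: each trigger forces a named action
# and routes to a named authority, rather than an "email the alias" fallback.
ESCALATION_TRIGGERS = {
    "harmful_output_detected":   {"severity": "high",   "declare_incident": True,
                                  "escalate_to": "Incident manager"},
    "regulatory_complaint":      {"severity": "high",   "declare_incident": True,
                                  "escalate_to": "Legal/Regulatory"},
    "major_vendor_model_change": {"severity": "medium", "declare_incident": False,
                                  "escalate_to": "AI risk review board"},
}

def route(trigger: str) -> str:
    """Return the named escalation target for a trigger, or fail loudly."""
    try:
        return ESCALATION_TRIGGERS[trigger]["escalate_to"]
    except KeyError:
        raise ValueError(f"No escalation path defined for trigger: {trigger}")
```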
Step 4: Communicate roles in the channels people actually use
Clause 5.3 explicitly requires communication, not just assignment.[^1] Use multiple paths:
- Publish the RACI/decision rights in your GRC system or policy portal.
- Embed responsibilities in SOPs and runbooks (release checklist, incident playbook).
- Update job descriptions for key roles (at least for control owners and approvers).
- Add onboarding content for teams touching AI systems.
- Put approver groups in tooling (ticketing workflows, CI/CD gates, model registry approvals).
A fast test: ask an engineer, product manager, and privacy analyst “Who can approve deployment?” and “Who can stop a release?” If answers differ, your communication is not effective.
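One way to put approver groups in tooling is a deploy-time gate that refuses to proceed without the required sign-offs. A minimal sketch, assuming your workflow tool can export approvals as records (the record shape and required roles here are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Approval:
    role: str       # role that granted approval, e.g. "Model/product owner"
    approver: str   # named individual
    change_id: str

# Roles that must sign off before a release may proceed (illustrative).
REQUIRED_APPROVER_ROLES = {"Model/product owner", "Risk management", "QA/Validation"}

def release_gate(change_id: str, approvals: list[Approval]) -> None:
    """Raise if any required approver role has not signed off on this change."""
    granted = {a.role for a in approvals if a.change_id == change_id}
    missing = REQUIRED_APPROVER_ROLES - granted
    if missing:
        raise PermissionError(
            f"Release blocked for {change_id}: missing sign-off from {sorted(missing)}"
        )
```

Wired into CI/CD before the deploy step, a failed gate is also your stop authority made operational.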
Step 5: Make it durable with change control
Roles drift with reorganizations. Put minimal maintenance rules in place:
- Ownership review trigger: org changes, new AI system, new third party AI, major incident.
- Update mechanism: who edits the RACI, who approves changes, and where version history lives.
- Coverage check: every AI system must map to the same control steps and owners.
If you use Daydream or another GRC workflow tool, treat the RACI and decision rights as controlled documents with tasks, approvals, and attestations so updates do not depend on institutional memory.
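The coverage check in particular lends itself to automation. A minimal sketch, assuming you can export each AI system's RACI rows (system names, control steps, and owner values are illustrative):

```python
# Illustrative: control steps every AI system must map to, per your RACI.
CONTROL_STEPS = ["intake", "risk_assessment", "validation", "deployment",
                 "monitoring", "change", "incident", "retirement"]

# Exported mapping of AI system -> {control step -> accountable owner}.
raci_coverage = {
    "support-chatbot": {step: "owner-set" for step in CONTROL_STEPS},
    "fraud-scoring":   {"intake": "owner-set", "deployment": "owner-set"},  # gaps
}

def coverage_gaps(coverage: dict[str, dict[str, str]]) -> dict[str, list[str]]:
    """Return, per AI system, the control steps with no accountable owner."""
    return {
        system: [s for s in CONTROL_STEPS if not owners.get(s)]
        for system, owners in coverage.items()
    }

for system, gaps in coverage_gaps(raci_coverage).items():
    if gaps:
        print(f"{system}: missing owners for {gaps}")
```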
Required evidence and artifacts to retain
Auditors look for “assigned” and “communicated.” Keep evidence for both.
Assignment artifacts
- AI management system governance charter (scope, roles, forums, decision rights)
- AI RACI-plus matrix (activities × roles, including approval/stop authority)
- Role descriptions for key governance roles (one-page is fine)
- Appointment evidence (management memo, committee roster, tool-based assignment)
- Delegation of authority statements for risk acceptance and exceptions
Communication artifacts
- Published policy/SOP references that point to the roles and approvals
- Onboarding/training materials that explain who does what
- Attestations or acknowledgements for key roles (control owners, approvers)
- Meeting minutes showing decisions made by the defined authorities
- Workflow/tool screenshots: required approvers, ticket templates, change gates
Operational proof (what makes it real)
- A sample of completed AI use case approvals with correct sign-offs
- A sample of release/changes with the right approver path
- An incident record showing escalation and authority worked as designed
Common exam/audit questions and hangups
Expect variations of:
- “Show me who is accountable for the AI management system and how top management ensures assignments.”
- “Who can approve a new AI use case? Who can block it?”
- “Where is the escalation path documented for AI incidents?”
- “How do you ensure third party AI changes are evaluated and approved?”
- “How do you keep role assignments current after reorganizations?”
- “Prove that the organization knows these roles (not just that a document exists).”
Hangups auditors often flag:
- RACI exists but lacks decision rights (no explicit approval/stop authority).
- Committee names exist, but membership and quorum are undefined.
- Roles are team names only (“Security”), not accountable individuals or named role-holders.
- Exception handling is informal; risk acceptance authority is unclear.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating “communicated” as “posted once.” Fix: embed role responsibilities into workflows (tickets, release gates) and onboarding.
- Mistake: Over-centralizing all authority in a single AI committee. Fix: keep centralized risk governance, but delegate operational approvals with clear boundaries and escalation thresholds.
- Mistake: Ignoring third party responsibilities. Fix: assign internal owners for third party AI due diligence, ongoing monitoring, and vendor-change impact assessment. Your third party can perform tasks, but you still need internal accountability.
- Mistake: No explicit stop authority. Fix: name the role(s) that can halt deployment or disable an AI feature, and ensure they can execute the stop operationally (access, runbooks, on-call).
- Mistake: “Everyone is responsible.” Fix: every control step gets one accountable owner. Shared responsibility is fine for execution, not for accountability.
Risk implications (why operators should care)
Unassigned or uncommunicated authorities create predictable failures:
- Control steps get skipped under time pressure.
- Incidents stall while teams debate ownership.
- Risk acceptance happens implicitly, with no accountable executive.
- Third party AI changes reach production without review.

These failures translate into customer harm, contractual breaches, and audit findings. Clause 5.3 is a foundational control because it prevents governance gaps before they become technical or legal problems.[^1]
Practical 30/60/90-day execution plan
If speed matters, here is a pragmatic plan you can run without waiting for a full program rebuild.
First 30 days (get to “clear ownership”)
- Identify AI systems in scope and list the lifecycle activities you must govern.
- Name interim accountable owners for each activity (even if titles are temporary).
- Draft the RACI-plus matrix and decision rights (approve/block/accept exceptions).
- Get top management sign-off on the governance charter and delegation of authority.
- Publish the roles and escalation path in a single authoritative location.
Next 60 days (make it operational)
- Embed approvals into intake, change, and release workflows.
- Stand up the risk review forum with defined quorum and meeting outputs.
- Train key stakeholders (product, engineering, security, privacy, procurement).
- Start collecting operational proof: signed approvals, meeting minutes, exceptions log.
By 90 days (make it durable and auditable)
- Convert interim owners into stable role assignments and back-ups (coverage for absences).
- Add maintenance triggers (org changes, new AI system, major vendor updates).
- Run a tabletop incident exercise using the defined escalation and authority.
- Perform an internal control check: sample recent changes and confirm correct sign-offs.
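The internal control check in the last bullet can be a short script. A minimal sketch, assuming change records with their sign-offs can be exported (change IDs, roles, and field names are hypothetical):

```python
import random

# Hypothetical exported change records: change id -> roles that actually signed off.
change_signoffs = {
    "CHG-101": {"Model/product owner", "Risk management", "QA/Validation"},
    "CHG-102": {"Model/product owner"},  # incomplete sign-off
    "CHG-103": {"Model/product owner", "Risk management", "QA/Validation"},
}

REQUIRED = {"Model/product owner", "Risk management", "QA/Validation"}

def sample_and_check(records: dict[str, set[str]], n: int = 2) -> list[str]:
    """Randomly sample n recent changes and report any missing required sign-offs."""
    sample = random.sample(sorted(records), k=min(n, len(records)))
    return [f"{cid}: missing {sorted(REQUIRED - records[cid])}"
            for cid in sample if REQUIRED - records[cid]]

print(sample_and_check(change_signoffs))  # output varies with the random sample
```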
Frequently Asked Questions
Do we need a separate “AI” org to meet this requirement?
No. You need clearly assigned responsibilities and authorities for relevant roles within the AI management system, even if those roles sit in existing teams.[^1]
What counts as “communicated within the organization”?
Communication is effective when staff can reliably find the current assignments and follow them in workflows. Publishing a static document helps, but auditors will also look for SOP references, training, and tool-based approvals.
Can a third party be assigned a role in our AI management system?
A third party can perform tasks, but you should still assign internal accountability and authority for decisions, approvals, and risk acceptance. Put third party responsibilities in contracts and third-party oversight procedures, then map internal owners in the RACI.
How detailed should the RACI be?
Start with the control steps that drive risk: intake, assessment, approval, deployment, monitoring, change, incidents, and retirement. Expand only after you can show the basics work consistently.
Who should have “stop authority” for an AI release?
Give stop authority to a small set of roles that can act fast (often product owner plus security/risk gatekeepers for defined triggers). Document it, and ensure they can execute the stop via runbooks and access.
We reorganize frequently. How do we keep this from becoming shelfware?
Tie updates to change triggers (org changes, new systems, major incidents) and treat the RACI and governance charter as controlled documents with an owner, approver, and version history.
Footnotes
[^1]: ISO/IEC 42001:2023, Artificial intelligence — Management system, Clause 5.3.