AI impact assessment and risk acceptance

The AI impact assessment and risk acceptance requirement under ISO/IEC 42001 expects you to (1) assess how each AI system could impact people, the business, and stakeholders, and (2) document a formal decision to treat, transfer, avoid, or accept residual risk before deployment and after material changes 1. Operationalize it by standardizing an AI Impact Assessment (AIIA) workflow, clear approval thresholds, and auditable risk acceptance records.

Key takeaways:

  • Run a repeatable AI impact assessment before release, and re-run it after material model, data, or use-case changes 1.
  • Define who can accept risk, at what thresholds, and what compensating controls are required for approval 1.
  • Keep audit-ready artifacts: assessment inputs, risk decisions, approvers, dates, and follow-up actions tied to your AI inventory 1.

AI impact assessments fail in practice for one reason: teams treat them like a one-time narrative write-up instead of a decision system. ISO/IEC 42001 is an AI management system standard; assess-and-accept is the point where your governance turns into a controlled release decision 1. If you cannot show how impacts were identified, scored, mitigated, and either approved or rejected by an authorized risk owner, you will struggle to demonstrate effective control operation.

This page gives requirement-level implementation guidance for a Compliance Officer, CCO, or GRC lead who needs to stand up a working “AI impact assessment and risk acceptance” process quickly. The goal is not perfect documentation. The goal is consistent decisions, traceability, and repeatability across AI systems, including internally developed models and third-party AI capabilities embedded in products and business workflows 1.

You’ll leave with (1) a practical interpretation, (2) a step-by-step operating process, (3) the artifacts to retain for audits and customer due diligence, and (4) a 30/60/90-day plan to get the control running with minimal friction.

Regulatory text

Provided excerpt (non-licensed summary): “Baseline implementation-intent summary derived from publicly available framework overviews; licensed standard text is not reproduced in this record.”
Implementation intent summary: “Assess potential AI impacts and formalize risk acceptance decisions.” 1

What an operator must do:

  1. Assess AI impacts relevant to the system’s purpose, users, affected individuals, and business environment. Impacts include safety, legal/compliance, privacy, security, fairness, transparency, and operational harms.
  2. Decide and record risk treatment: mitigate, transfer, avoid, or accept residual risk.
  3. Ensure the decision is authorized: risk acceptance must be approved by a defined role with appropriate authority, not informally assumed by a project team.
  4. Make it repeatable: the same logic must apply across AI systems and be triggered by material changes to model, data, or use-case 1.

Plain-English interpretation of the requirement

You must be able to answer, for every AI system you operate:

  • “What could go wrong, who could it harm, and how would we detect it?”
  • “What did we do about those risks?”
  • “What risks remain?”
  • “Who explicitly accepted those remaining risks, and under what conditions?” 1

A strong implementation treats the AI Impact Assessment (AIIA) as a release gate and the risk acceptance record as a contract between the business owner and the control functions: it states what is allowed to ship, what must be monitored, and what would trigger rollback or re-approval.

Who it applies to

Entity scope: Organizations that develop AI systems and organizations that operate AI systems, including those built by third parties 1.
Operational scope:

  • Internally built models (ML, LLM, rules + ML hybrids).
  • AI features embedded in products (recommendations, ranking, detection, generation, decision support).
  • Material third-party AI dependencies (hosted models, APIs, AI components in SaaS), where you still own outcomes in your environment.

Common “in scope” moments:

  • New AI system or new AI-powered feature.
  • Expanding to a new user group, geography, or regulated workflow.
  • Switching base model/provider, retraining, changing prompts/guardrails, or changing training data sources.
  • Incident patterns that suggest new impacts.

What you actually need to do (step-by-step)

Step 1: Define your “AI impact assessment” minimum content (template)

Create a standard AIIA form that is short enough to finish, but structured enough to compare across systems. Minimum fields:

  • System identity: name, owner, lifecycle stage, link to AI inventory record.
  • Intended use and prohibited use: who uses it, for what decisions, what it must not be used for.
  • Affected parties: users, customers, employees, non-users impacted by outputs.
  • Impact categories checklist: privacy, security, safety, discrimination/fairness, explainability, IP/content risk, regulatory obligations, business continuity.
  • Risk scenarios: 5–15 concrete “if X then Y harm” scenarios.
  • Controls mapping: existing controls and gaps (technical + procedural).
  • Residual risk statement: what remains after controls.
  • Decision request: approve, approve with conditions, reject, or defer pending remediation 1.

Practical tip: force specificity by requiring each risk scenario to include a harm, a trigger, and a detection/response owner.
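The specificity rule above can be enforced in tooling. Here is a minimal sketch of a risk scenario structure with a completeness check; the field names (`harm`, `trigger`, `detection_owner`) are illustrative choices, not terms from the standard.

```python
from dataclasses import dataclass

# Hypothetical sketch: one way to force scenario specificity in an AIIA form.
# A scenario is reviewable only when it names a harm, a trigger, and an owner.

@dataclass
class RiskScenario:
    description: str       # the "if X then Y harm" statement
    harm: str              # who is harmed and how
    trigger: str           # what condition causes the harm
    detection_owner: str   # who owns the detector and the response

    def is_complete(self) -> bool:
        # Reject scenarios with any empty required field.
        return all([self.description, self.harm, self.trigger, self.detection_owner])

scenario = RiskScenario(
    description="If the model leaks PII in generated text, customers are exposed",
    harm="Privacy harm to end users; contractual breach",
    trigger="Prompt injection or memorized training data",
    detection_owner="Security on-call (PII output scanner alert)",
)
print(scenario.is_complete())  # True
```

A form validator built on a check like this turns the "practical tip" into a hard gate: incomplete scenarios never reach the decision review.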

Step 2: Establish risk scoring and acceptance thresholds

You need consistency more than sophistication. Use a simple matrix:

  • Impact severity (e.g., low/medium/high based on harm to people, legal exposure, or business disruption).
  • Likelihood (based on model behavior history, data quality, attack surface, or control strength).
  • Detectability (how quickly monitoring would catch failures).

Then define who can accept what:

  • Product/Business owner: low residual risk.
  • Functional risk owner (Security/Privacy/Compliance): medium residual risk with documented compensating controls.
  • Executive or risk committee: high residual risk, only with explicit rationale and time-bound conditions 1.

Write this into a one-page Risk Acceptance Authority Matrix. Auditors look for this because it prevents “silent acceptance.”
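The matrix and authority mapping above can be reduced to a few lines of logic. This is a sketch under assumed tier rules (multiply severity by likelihood, bump the tier when detectability is low); calibrate the thresholds and role names to your own authority matrix.

```python
# Illustrative residual-risk tiering and approval-authority lookup.
# The scoring rules and role names are assumptions, not prescribed by ISO/IEC 42001.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def residual_tier(severity: str, likelihood: str, detectability: str) -> str:
    """Combine severity and likelihood; poor detectability raises the tier."""
    score = LEVELS[severity] * LEVELS[likelihood]
    if detectability == "low":   # failures caught slowly are riskier
        score += 2
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

AUTHORITY = {
    "low": "Product/Business owner",
    "medium": "Functional risk owner (Security/Privacy/Compliance)",
    "high": "Executive or risk committee",
}

tier = residual_tier("medium", "medium", "high")
print(tier, "->", AUTHORITY[tier])  # medium -> Functional risk owner (Security/Privacy/Compliance)
```

Encoding the matrix this way keeps decisions consistent and makes "silent acceptance" structurally impossible: the approver is derived from the score, not chosen by the project team.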

Step 3: Make AIIA a release gate in delivery workflows

Your assessment must change outcomes. Embed it into:

  • SDLC / model development lifecycle: required before production deployment.
  • Change management: required for material changes (new data source, new model version, broadened use case).
  • Procurement/TPRM: required before onboarding third-party AI or enabling new AI capabilities in a third-party tool.

Mechanically: add a ticket/work item type (“AIIA”) and block deployment until the approval field is complete and evidence is attached.
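Mechanically, the gate is a small predicate your pipeline evaluates before deployment. The following is a hypothetical sketch against a generic ticket payload; the field names (`type`, `approval`, `evidence_links`) are assumptions to adapt to your ticketing system.

```python
# Hypothetical CI release gate: block deployment unless an AIIA work item
# exists, carries an approval decision, and has evidence attached.

def aiia_gate(ticket: dict) -> tuple[bool, str]:
    if ticket.get("type") != "AIIA":
        return False, "No AIIA work item linked to this release"
    if ticket.get("approval") not in ("approve", "approve_with_conditions"):
        return False, f"AIIA decision is '{ticket.get('approval')}', not an approval"
    if not ticket.get("evidence_links"):
        return False, "AIIA approved but no evidence attached"
    return True, "Release gate passed"

ok, reason = aiia_gate({
    "type": "AIIA",
    "approval": "approve_with_conditions",
    "evidence_links": ["assessment.pdf", "risk-register-entry"],
})
print(ok, reason)  # True Release gate passed
```

Returning a reason string alongside the boolean matters operationally: when the gate blocks a deploy, engineers see exactly which condition failed instead of filing a ticket with GRC.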

Step 4: Run the assessment with the right people in the room

Minimum participants:

  • System owner (accountable)
  • Engineering/ML lead (feasibility and control implementation)
  • Security (threats, abuse cases, logging)
  • Privacy (data protection and data minimization decisions)
  • Legal/Compliance (obligations, customer commitments, policy)
  • If relevant: HR, Safety, Trust, or Customer Support (downstream harm handling)

Keep meetings short. Do pre-work in the template, then hold a decision review.

Step 5: Document the risk decision and conditions (risk acceptance record)

Your risk acceptance record must include:

  • What residual risks are being accepted (explicit list).
  • Why acceptance is justified (business rationale).
  • Conditions of operation (guardrails, monitoring, human-in-the-loop constraints, approved user groups).
  • Required follow-ups (remediation tasks, deadlines, owner).
  • Review trigger events (incident types, drift signals, complaint thresholds, major model/provider change).
  • Approver identity, role, date/time, and scope (what exactly is being approved) 1.

If you use Daydream, this is where teams typically standardize approvals and evidence capture: the AIIA template, the authority matrix, and the signed risk acceptance can sit next to the system inventory and control mapping so nothing ships without traceability.
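The record fields listed above map naturally onto a structured schema. This is a minimal sketch with illustrative names; map them to whatever your system of record actually stores.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Sketch of a risk acceptance record covering the fields listed above.
# Field names are illustrative assumptions, not a prescribed schema.

@dataclass
class RiskAcceptance:
    system_id: str
    accepted_risks: list[str]     # explicit residual risks being accepted
    rationale: str                # business justification for acceptance
    conditions: list[str]         # guardrails, monitoring, approved user groups
    follow_ups: list[str]         # remediation tasks with owners and deadlines
    review_triggers: list[str]    # events that force re-approval
    approver: str
    approver_role: str
    approved_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def is_auditable(self) -> bool:
        # Minimal completeness check: the decision trail must name
        # the risks, the rationale, and an identified approver.
        return bool(self.accepted_risks and self.rationale
                    and self.approver and self.approver_role)

record = RiskAcceptance(
    system_id="ai-support-triage",
    accepted_risks=["Occasional misrouting of low-priority tickets"],
    rationale="Human review covers all high-priority routes",
    conditions=["Human-in-the-loop for priority >= P2", "Weekly drift report"],
    follow_ups=["Add misrouting alert by Q3 (owner: ML lead)"],
    review_triggers=["Misrouting rate > 2%", "Base model/provider change"],
    approver="J. Rivera",
    approver_role="Functional risk owner (Compliance)",
)
print(record.is_auditable())  # True
```

A completeness check like `is_auditable` is exactly what auditors probe for: an approver, a date, and an explicit risk list, not a meeting note that "everyone agreed."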

Step 6: Reassess on change and on learning (continuous improvement)

Define “material change” in plain language and enforce it. Examples:

  • Model version changes with behavior impact.
  • New training data sources or new data categories.
  • New deployment context (new market, new customer segment, new integration).
  • New capability enabling higher-risk uses.

Tie reassessment to change tickets, not calendar reminders, so it happens when risk actually changes 1.
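Wired into change management, the trigger list above becomes a simple tag check on change tickets. The trigger keywords below are assumptions for illustration; maintain the authoritative list in your policy, not in code comments.

```python
# Illustrative material-change check run against change tickets:
# any matching tag flags the ticket for AIIA reassessment.
# Trigger names are assumptions; sync them with your policy definition.

MATERIAL_TRIGGERS = {
    "model_version",       # behavior-impacting model change
    "training_data",       # new data sources or data categories
    "deployment_context",  # new market, customer segment, or integration
    "new_capability",      # enables higher-risk uses
}

def needs_reassessment(change_ticket: dict) -> bool:
    tags = set(change_ticket.get("tags", []))
    return bool(tags & MATERIAL_TRIGGERS)

print(needs_reassessment({"tags": ["model_version", "ui_copy"]}))  # True
print(needs_reassessment({"tags": ["ui_copy"]}))                   # False
```

Because the check runs on every change ticket, reassessment fires when risk actually changes, which is the point of tying it to change management rather than calendar reminders.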

Required evidence and artifacts to retain

Keep these artifacts in a single system of record and link them to the AI inventory entry:

  1. AI Impact Assessment (completed template + supporting exhibits).
  2. Risk register entry (risks, scores, owners, treatments, residual risk).
  3. Risk acceptance approval (signed/recorded decision, authority level, date).
  4. Control evidence: test results, red-team/abuse testing notes (if performed), monitoring configuration, logging specs.
  5. Change management evidence: approvals for material changes, reassessments, rollback decisions.
  6. Post-launch monitoring and incident records tied back to the assessment’s risk scenarios 1.
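The six artifact classes above can be verified mechanically per system, so a sampled audit trace (inventory → AIIA → decision → monitoring) never dead-ends on a missing document. This sketch uses illustrative artifact keys.

```python
# Sketch: verify an "AI system packet" holds every artifact class listed above.
# Artifact keys are illustrative assumptions, not a mandated naming scheme.

REQUIRED_ARTIFACTS = [
    "aiia",                 # completed impact assessment + exhibits
    "risk_register_entry",  # risks, scores, owners, residual risk
    "risk_acceptance",      # signed decision with authority level and date
    "control_evidence",     # tests, red-team notes, monitoring config
    "change_evidence",      # reassessments and rollback decisions
    "monitoring_records",   # post-launch incidents tied to risk scenarios
]

def missing_artifacts(packet: dict) -> list[str]:
    """Return the artifact classes absent or empty in a system's packet."""
    return [k for k in REQUIRED_ARTIFACTS if not packet.get(k)]

packet = {"aiia": "AIIA-042", "risk_register_entry": "RR-7", "risk_acceptance": "RA-9"}
print(missing_artifacts(packet))
# ['control_evidence', 'change_evidence', 'monitoring_records']
```

Running this across the inventory gives you the quarterly governance readout for free: every system either has a complete packet or a named gap list.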

Common exam/audit questions and hangups

Auditors and customer assessors tend to ask:

  • “Show me your AIIA for the last AI feature you shipped. Who approved residual risk?”
  • “How do you define material change, and how do you enforce reassessment?”
  • “Where is your risk acceptance authority documented?”
  • “How do third-party AI services get assessed and approved?”
  • “Give an example where you rejected or delayed deployment due to impact findings.” 1

Hangup: teams provide a generic narrative but cannot show a decision trail (approver, date, conditions, follow-up tasks). Treat that as a control failure, not a documentation gap.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: One template for every system, regardless of risk.
    Fix: tier your AIIA. Low-risk systems get a shorter path; higher-risk systems require deeper scenario analysis and stronger sign-off.

  2. Mistake: Risk accepted by the project team because “everyone agreed.”
    Fix: publish an authority matrix and enforce workflow approvals by role.

  3. Mistake: No link to operational monitoring.
    Fix: require each high/medium risk scenario to name a detector (metric/log/alert) and an owner.

  4. Mistake: Third-party AI treated as “vendor’s problem.”
    Fix: assess the impact in your context. Add contractual and technical controls, then accept/mitigate residual risk explicitly 1.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Practically, weak impact assessment and undocumented risk acceptance increase:

  • Regulatory and contractual exposure when AI causes harm and you cannot show a controlled decision process.
  • Audit findings because you cannot evidence governance operating effectiveness 1.

Treat this as an assurance control: it reduces “unknown unknowns” and makes your decisions defensible.

Practical 30/60/90-day execution plan

First 30 days: Stand up the minimum viable control

  • Create an AI system inventory baseline (even if incomplete) and assign owners.
  • Publish AIIA template and a one-page risk acceptance authority matrix.
  • Add a release gate in your ticketing/CI process: no production without AIIA + approval.
  • Pilot on one active AI system and run a decision review.

Days 31–60: Expand and harden

  • Roll the workflow to all new AI work and to the highest-risk existing systems first.
  • Define “material change” triggers in change management and procurement/TPRM intake.
  • Build an evidence checklist and central repository structure aligned to audits.
  • Train product and engineering leads on how to write risk scenarios that are testable.

Days 61–90: Prove operating effectiveness

  • Run a sample-based internal review: pick a few systems and trace from inventory → AIIA → decision → monitoring evidence.
  • Add quality controls: required fields, approver validation, and time-bound conditions.
  • Create a quarterly governance readout for accepted risks and overdue follow-ups.
  • If using Daydream, configure automation for reassessment triggers and approval routing tied to your authority matrix 1.

Frequently Asked Questions

Do we need an AI impact assessment for every model experiment?

No. Scope it to systems that reach production, influence real decisions, or can affect people or customers. Keep a lightweight intake for experiments, and require a full AIIA as part of the production release gate 1.

What counts as “risk acceptance” in an audit?

A dated decision by an authorized role that explicitly lists residual risks and conditions for operation. Meeting notes help, but auditors usually want a formal record tied to the system and the risk register 1.

How do we handle third-party AI models where we can’t see training data?

Assess impact in your use context: intended use, prohibited use, data you send, output failure modes, monitoring, and contractual controls. Then document residual risk and formal approval before enabling the capability 1.

Can Security or Privacy accept risk on behalf of the business?

They can approve within their delegated authority if your authority matrix grants it. The business/system owner should still be accountable for operating conditions and ongoing monitoring responsibilities 1.

What’s the fastest way to show evidence to an auditor?

Maintain a single “AI system packet” per system: inventory entry, latest AIIA, risk register excerpt, signed acceptance, and monitoring/incident evidence. Link the packet to change tickets for reassessments 1.

We already have a general enterprise risk process. Do we still need an AI-specific one?

You can reuse enterprise risk governance, but you still need AI-specific impact categories, change triggers, and artifacts tied to model/data behavior. Otherwise you will miss AI failure modes and fail to evidence consistent operation 1.

Footnotes

  1. ISO/IEC 42001 overview


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream