AI system impact assessment

ISO/IEC 42001 Clause 8.4 requires you to perform an AI system impact assessment on a planned basis and whenever significant changes occur to the AI system. To operationalize it, define what “significant change” means for your environment, route those changes through a gated impact-assessment workflow, and retain evidence that each assessment was completed, reviewed, and acted on. [1]

Key takeaways:

  • You need a repeatable, scheduled AI impact assessment process plus a “significant change” trigger. [1]
  • “Impact assessment” must be operational: scoped, documented, reviewed, and tied to decisions and mitigations. [1]
  • Auditors will test both coverage (all in-scope systems) and triggering (changes actually caused reassessments). [1]

An “AI system impact assessment requirement” sounds simple until you try to run it in production. ISO/IEC 42001 Clause 8.4 does not ask for a one-time report; it asks for a management-system discipline: impact assessments that happen as planned and whenever significant changes occur. [1]

For a Compliance Officer, CCO, or GRC lead, the fastest path to compliance is to treat AI impact assessment as a gated workflow connected to your AI lifecycle and change management. That means: (1) a clear inventory of in-scope AI systems, (2) a standard assessment template with required reviewers, (3) defined triggers that force reassessment, and (4) evidence that the organization made decisions based on the assessment (approvals, mitigations, monitoring updates, or stop/go outcomes). [1]

This page gives requirement-level implementation guidance you can put into a control statement, a procedure, and an audit-ready evidence plan. It assumes you operate AI as a provider, a user, or both, and that AI components may be built internally or sourced from third parties. [1]

Standard text

Requirement (verbatim): “The organization shall conduct AI system impact assessments as planned and when significant changes occur.” [1]

Operator interpretation (what you must do):

  1. Establish a plan for AI system impact assessments (cadence and scope) so assessments occur predictably, not ad hoc. [1]
  2. Define “significant changes” and ensure they trigger an impact assessment before or during deployment, not months later. [1]
  3. Execute and document each assessment, including outcomes and follow-up actions, so you can prove the process ran and influenced decisions. [1]

Plain-English requirement

You need a repeatable process to evaluate how each AI system can affect people, customers, the business, and other stakeholders, and you must re-run that evaluation on a schedule and whenever the system materially changes. “Materially changes” should be defined so engineering and product teams can recognize it and route work into your assessment workflow. [1]

Who it applies to

In-scope entities

  • Any organization operating an AI management system under ISO/IEC 42001. [1]
  • AI providers (you build/offer AI systems or AI-enabled services). [1]
  • AI users (you deploy or rely on AI from third parties in business processes). [1]

Operational contexts that commonly fall in scope

  • AI used in customer decisions (eligibility, pricing, fraud actions, content moderation).
  • AI used in employee decisions (hiring screens, performance analytics).
  • AI embedded in security, monitoring, or financial controls.
  • AI sourced from third parties where you configure, fine-tune, or set decision thresholds.

If you do not maintain a single source of truth for where AI exists (models, features, agents, decision engines), your first audit issue will be “unknown AI.” The impact assessment process depends on an inventory.

What you actually need to do (step-by-step)

1) Define the AI systems that require an impact assessment

Create a rule that identifies “AI system” for your program and ties it to an inventory entry. Include:

  • System name, owner, business purpose, and deployment context
  • Whether it is provider-built, user-deployed, or third-party supplied
  • Interfaces and dependencies (data sources, downstream decision points)

Practical control: “No production AI system without an inventory record and an impact assessment record.”
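
To make that control enforceable rather than aspirational, you can represent each inventory entry as a structured record and gate deployment on it. A minimal sketch in Python; the field names and the `may_deploy` gate are illustrative assumptions, not fields prescribed by ISO/IEC 42001:

```python
from dataclasses import dataclass, field

# Minimal sketch of an AI system inventory record. Field names are
# illustrative, not prescribed by the standard; adapt them to your program.
@dataclass
class AISystemRecord:
    system_id: str
    name: str
    owner: str
    business_purpose: str
    deployment_context: str      # e.g. "customer-facing pricing decisions"
    sourcing: str                # "provider-built", "user-deployed", or "third-party"
    dependencies: list[str] = field(default_factory=list)
    impact_assessment_id: str | None = None  # None until an assessment exists

def may_deploy(record: AISystemRecord) -> bool:
    """Enforce: no production AI system without an inventory record
    and a linked impact assessment record."""
    return record.impact_assessment_id is not None

# Usage: block the release pipeline when the gate fails.
record = AISystemRecord(
    system_id="ai-0042",
    name="claims-triage-model",
    owner="claims-ops",
    business_purpose="route insurance claims to review queues",
    deployment_context="internal decision support",
    sourcing="provider-built",
)
assert not may_deploy(record)  # no assessment linked yet -> deployment blocked
```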

2) Set the “planned” assessment cadence

Clause 8.4 requires assessments “as planned,” so document your plan and apply it consistently. [1]

Implementation options:

  • Align to your product lifecycle (e.g., assessment at launch and at each major release gate).
  • Align to risk tier (higher-risk systems get more frequent planned reassessments).

What auditors look for: a written plan and evidence that the organization followed it for multiple systems over time.
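
If you take the risk-tier route, the due-date math is simple enough to automate. A minimal sketch; the tier names and intervals are assumptions you would replace with whatever your documented plan specifies:

```python
from datetime import date, timedelta

# Illustrative cadences by risk tier; these intervals are assumptions,
# not values from ISO/IEC 42001. Document your own in the plan.
REASSESSMENT_INTERVAL_DAYS = {
    "high": 180,    # planned reassessment every 6 months
    "medium": 365,  # annually
    "low": 730,     # every two years
}

def next_assessment_due(last_assessed: date, risk_tier: str) -> date:
    """Next planned assessment date under the documented cadence."""
    return last_assessed + timedelta(days=REASSESSMENT_INTERVAL_DAYS[risk_tier])

def is_overdue(last_assessed: date, risk_tier: str, today: date | None = None) -> bool:
    """True when the planned reassessment date has passed."""
    return (today or date.today()) > next_assessment_due(last_assessed, risk_tier)

print(next_assessment_due(date(2025, 1, 15), "high"))  # 2025-07-14
```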

3) Define “significant change” triggers that force reassessment

Write a trigger definition that engineering can apply. Common trigger categories:

  • Model changes: new model, architecture change, re-training, fine-tuning, prompt/agent logic changes that alter behavior.
  • Data changes: new data sources, label definitions, retention changes, feature engineering changes.
  • Use-case changes: new decision type, new user population, new geography, new channel, new downstream automation.
  • Control changes: reduced human review, changed thresholds, changed escalation paths.
  • Third-party changes: provider model updates, API behavior changes, new subprocessors that affect data flow.

Operational tip: Make “significant change” a required field in your change tickets. If “yes,” the ticket cannot close until the impact assessment is attached or linked.
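
A minimal sketch of that gate, assuming hypothetical ticket fields (`significant_change`, `impact_assessment_id`) and using the trigger categories above as an enum:

```python
from enum import Enum

# Trigger categories mirroring the list above; the enum values and ticket
# fields are illustrative, not prescribed by the standard.
class ChangeTrigger(Enum):
    MODEL = "model"              # new model, retraining, prompt/agent logic changes
    DATA = "data"                # new sources, label definitions, feature engineering
    USE_CASE = "use_case"        # new decision type, population, geography, channel
    CONTROL = "control"          # reduced human review, changed thresholds
    THIRD_PARTY = "third_party"  # provider model updates, API behavior changes

def can_close_ticket(ticket: dict) -> tuple[bool, str]:
    """Closure gate: 'significant_change' is a required field, and a 'yes'
    blocks closure until an impact assessment record is linked."""
    if "significant_change" not in ticket:
        return False, "missing required field: significant_change"
    if ticket["significant_change"] and not ticket.get("impact_assessment_id"):
        return False, "significant change flagged: link the impact assessment before closing"
    return True, "ok to close"

ok, reason = can_close_ticket({
    "id": "CHG-1234",
    "significant_change": True,
    "triggers": [ChangeTrigger.MODEL, ChangeTrigger.CONTROL],
})
print(ok, reason)  # False: assessment not yet linked
```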

4) Build a standard AI system impact assessment template

Keep it structured so you can compare assessments across systems. Minimum sections most teams can defend in audit:

  • Scope and context: what the system does, who it affects, where it is used
  • Stakeholder and harm mapping: who could be harmed and how (customers, employees, vulnerable groups, internal users)
  • Impact domains: safety, discrimination/fairness concerns, privacy/data protection, security abuse cases, explainability/contestability needs, business/operational risks
  • Controls and mitigations: human-in-the-loop checks, monitoring, guardrails, access controls, incident response hooks
  • Residual risk and decision: approve, approve with conditions, postpone pending fixes, or reject
  • Owner sign-offs: accountable business owner plus required control functions (GRC, privacy, security, legal as applicable)

This does not need to be academic. It needs to drive decisions and create traceable artifacts.
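
Keeping the template as structured data rather than free-form prose makes assessments comparable and machine-checkable. A sketch with illustrative field names mirroring the sections above; nothing here is mandated by the standard:

```python
from dataclasses import dataclass, field
from typing import Literal

# Decision values are examples; use whatever your procedure defines.
Decision = Literal["approve", "approve_with_conditions", "postpone", "reject"]

@dataclass
class ImpactAssessment:
    assessment_id: str
    system_id: str
    scope_and_context: str
    stakeholder_harm_map: dict[str, list[str]]  # stakeholder -> potential harms
    impact_domains: dict[str, str]              # domain -> finding (safety, fairness, privacy, ...)
    controls_and_mitigations: list[str]
    residual_risk_decision: Decision
    sign_offs: dict[str, str] = field(default_factory=dict)  # role -> approver

    def missing_sections(self) -> list[str]:
        """Return required sections that are still empty, for a completeness check."""
        missing = []
        if not self.scope_and_context:
            missing.append("scope_and_context")
        if not self.stakeholder_harm_map:
            missing.append("stakeholder_harm_map")
        if not self.impact_domains:
            missing.append("impact_domains")
        if not self.sign_offs:
            missing.append("sign_offs")
        return missing
```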

5) Route assessments through a formal review and approval workflow

Define:

  • Required reviewers by risk tier (example: high-impact systems require security + privacy + legal + business owner sign-off).
  • Approval gates tied to deployment (no release without approval).
  • Exceptions process with time-bound compensating controls and leadership approval.

If you use Daydream to manage third-party and AI governance workflows, configure the impact assessment as a standardized intake plus approval task set, with automatic evidence capture (submitted form, reviewer decisions, attachments, and timestamps).
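
A minimal sketch of the reviewer-by-tier gate described above; the tier names and reviewer sets are examples, and yours belong in the procedure:

```python
# Example reviewer requirements by risk tier; adjust to your own tiers.
REQUIRED_REVIEWERS = {
    "high": {"business_owner", "security", "privacy", "legal"},
    "medium": {"business_owner", "security", "privacy"},
    "low": {"business_owner"},
}

def missing_sign_offs(risk_tier: str, sign_offs: dict[str, str]) -> set[str]:
    """Return reviewer roles that have not signed off yet; block
    deployment until this set is empty."""
    return REQUIRED_REVIEWERS[risk_tier] - set(sign_offs)

print(missing_sign_offs("high", {"business_owner": "j.doe", "security": "a.lee"}))
# {'privacy', 'legal'} (set ordering may vary)
```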

6) Connect impact assessment outputs to operational controls

Auditors will test whether the assessment is a paper exercise. Make the outputs actionable (a sketch for turning mitigations into tracked tickets follows this list):

  • Update monitoring requirements (drift, performance, error rates, harmful outputs)
  • Update user disclosures, training, and support scripts
  • Update incident response playbooks (AI-specific failure modes)
  • Create remediation tickets and track closure
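
A sketch of the mitigation-to-ticket step, assuming a hypothetical assessment and ticket shape rather than any specific ticketing API:

```python
# Sketch: convert assessment mitigations into tracked remediation tickets so
# the assessment demonstrably changed something. Record shapes are assumed.
def open_remediation_tickets(assessment: dict) -> list[dict]:
    """One open ticket per mitigation, traceable back to the assessment."""
    return [
        {
            "title": action,
            "source_assessment": assessment["assessment_id"],
            "owner": assessment["system_owner"],
            "status": "open",
        }
        for action in assessment.get("mitigations", [])
    ]

tickets = open_remediation_tickets({
    "assessment_id": "IA-007",
    "system_owner": "claims-ops",
    "mitigations": ["add drift monitor", "update user disclosure"],
})
print(len(tickets))  # 2
```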

7) Reassess after significant changes and record deltas

When the trigger fires, do not rewrite the entire document from scratch. Record:

  • What changed
  • What impacts changed
  • What controls changed
  • Whether approvals were re-obtained

This “delta assessment” pattern reduces friction and improves compliance coverage.
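
A sketch of a delta-assessment record with illustrative fields; the `change_ticket_id` field carries the ticket-to-assessment linkage auditors ask for:

```python
from dataclasses import dataclass, field
from datetime import date

# Sketch of a "delta assessment": capture only what changed instead of
# rewriting the full document. Field names are illustrative assumptions.
@dataclass
class DeltaAssessment:
    base_assessment_id: str   # the full assessment this delta amends
    change_ticket_id: str     # the change event that triggered reassessment
    assessed_on: date
    what_changed: str
    impacts_changed: list[str] = field(default_factory=list)
    controls_changed: list[str] = field(default_factory=list)
    approvals_reobtained: bool = False

delta = DeltaAssessment(
    base_assessment_id="IA-007",
    change_ticket_id="CHG-1234",
    assessed_on=date(2025, 3, 2),
    what_changed="Swapped provider model version; raised auto-decision threshold",
    impacts_changed=["higher false-positive rate for one customer segment"],
    controls_changed=["added weekly fairness monitoring report"],
    approvals_reobtained=True,
)
```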

Required evidence and artifacts to retain

Keep evidence in a way that is easy to produce per system and per change event:

Core artifacts

  • AI system inventory entry (system owner, purpose, deployment context)
  • Impact assessment plan (documented cadence and scope definition) [1]
  • Completed impact assessment reports/forms (initial and reassessments) [1]
  • Review/approval records (sign-offs, meeting minutes, workflow approvals)
  • Change records that show the trigger and the reassessment linkage (ticket ID → assessment ID)
  • Remediation tracking (issues, actions, closure evidence)
  • Exception approvals (if any) and compensating controls

Retention note: keep them long enough to demonstrate history across multiple assessment cycles and changes; align to your internal retention schedule.

Common exam/audit questions and hangups

Expect these, and prepare evidence paths ahead of time:

  1. “Show me your plan.” Where is the documented schedule/cadence and how is it applied? [1]
  2. “What counts as significant change?” Is it defined, communicated, and embedded in change management? [1]
  3. “Prove it triggered.” Pick a recent change and show the assessment occurred because of it. [1]
  4. “Who approves and why?” Is accountability clear, and do approvals match risk?
  5. “What did you do differently because of the assessment?” Show mitigations, monitoring updates, or a stop/go decision.

Auditors get stuck when organizations cannot demonstrate linkage between (a) change events and (b) reassessments.
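
You can run that linkage check yourself before the auditor does. A minimal sketch, assuming hypothetical ticket and assessment shapes:

```python
# Sketch of the linkage check: every significant-change ticket should map
# to a reassessment record. Data shapes are illustrative assumptions.
def unlinked_changes(change_tickets: list[dict], assessments: list[dict]) -> list[str]:
    """Return ticket IDs flagged as significant changes that have no
    corresponding reassessment record; each one is a likely audit finding."""
    assessed = {a["change_ticket_id"] for a in assessments if a.get("change_ticket_id")}
    return [
        t["id"]
        for t in change_tickets
        if t.get("significant_change") and t["id"] not in assessed
    ]

tickets = [
    {"id": "CHG-1", "significant_change": True},
    {"id": "CHG-2", "significant_change": False},
    {"id": "CHG-3", "significant_change": True},
]
assessments = [{"assessment_id": "IA-9", "change_ticket_id": "CHG-1"}]
print(unlinked_changes(tickets, assessments))  # ['CHG-3']
```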

Frequent implementation mistakes (and how to avoid them)

Mistake 1: No inventory, or an inventory that omits “shadow AI.”
Fix: Tie access controls and procurement to registration. If a team wants to deploy AI, they must create an inventory record.

Mistake 2: “Significant change” is vague.
Fix: Define triggers in engineering-friendly categories (model, data, use-case, control, third party). Add examples and non-examples.

Mistake 3: Assessments are written, but not approved.
Fix: Require sign-offs and store them with the assessment record. A PDF without workflow evidence is fragile in audit.

Mistake 4: Assessments don’t change anything.
Fix: Make “actions/mitigations” a required section with ticket references and owners.

Mistake 5: Third-party AI is treated as “out of scope.”
Fix: If you deploy it, it is in scope for impact assessment. Add third-party change notifications as a reassessment trigger.

Enforcement context and risk implications

ISO/IEC 42001 is a voluntary, certifiable standard rather than a regulation with an enforcement regime, so treat Clause 8.4 primarily as a certifiable management-system obligation. Your practical risk is audit failure due to missing evidence, inconsistent execution, or inability to demonstrate reassessments after significant changes. [1]

A practical 30/60/90-day execution plan

First 30 days (Immediate stabilization)

  • Assign ownership: one accountable program owner plus backups.
  • Confirm scope: list AI systems in production and in-flight.
  • Draft “significant change” trigger definitions and add them to change tickets.
  • Build the first version of the impact assessment template and required approvers.

By 60 days (Operationalize and prove repeatability)

  • Run assessments on the highest-impact systems first.
  • Stand up the approval workflow and block releases without an assessment.
  • Train product/engineering on triggers and submission steps.
  • Start a simple evidence binder per system (inventory record + latest assessment + approvals).

By 90 days (Scale and harden)

  • Expand coverage to remaining in-scope AI systems.
  • Add delta assessments for change-triggered reassessments.
  • Add QA checks: periodic sampling to confirm changes triggered reassessments.
  • Integrate with third-party intake so externally sourced AI changes can trigger reassessment.

If you need this to run with minimal manual coordination, configure Daydream so system owners complete a guided assessment, approvers review in one place, and evidence is retained automatically with audit-ready exports.

Frequently Asked Questions

What qualifies as an “AI system impact assessment” under ISO/IEC 42001 Clause 8.4?

ISO/IEC 42001 Clause 8.4 requires you to conduct impact assessments as planned and when significant changes occur, but it does not prescribe a single format. Your assessment should be structured, repeatable, and tied to approvals and mitigations. [1]

How do I define “significant change” without over-triggering assessments?

Define categories that map to real engineering changes (model, data, use-case, controls, third party) and provide examples. Start stricter for high-impact systems, then refine based on what actually created new impacts.

Do we need to reassess if only the prompt changes?

If the prompt or agent logic materially changes behavior, outputs, or downstream decisions, treat it as a significant change and reassess. Document your rule so teams apply it consistently. [1]

Does third-party AI require an impact assessment if the vendor already did one?

Yes, if you deploy or rely on the system in your context, you still need to assess impacts in your specific use-case and operating environment. Vendor documentation can be an input, not a replacement.

Who should approve an AI system impact assessment?

At minimum, the accountable business owner should approve, and you should add required control functions based on risk (commonly security, privacy, legal, and GRC). The key is consistent criteria and recorded approvals.

What evidence will an auditor ask for first?

They will ask for your planned cadence, your trigger definition for significant changes, and examples showing that changes actually caused reassessments. Keep assessment records linked to change tickets and approvals. [1]

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream