AI system impact assessment process

An AI system impact assessment process is a defined, repeatable workflow you run before deploying (and when materially changing) an AI system to identify who could be affected, how they could be harmed, and what controls you will implement to reduce that impact. ISO/IEC 42001 Annex A Control A.5.2 requires you to establish and implement this process and keep evidence that assessments are performed and acted on. 1

Key takeaways:

  • You need a documented, operational process, not a one-off “ethical review.” 1
  • The assessment must cover effects on individuals, groups, and society, and drive concrete mitigations and approvals. 1
  • Auditors will look for scope triggers, consistent execution, decision records, and follow-through artifacts.

Compliance teams get burned on AI governance in a predictable way: the organization has policies, principles, and maybe a model inventory, but no reliable mechanism that forces impact thinking into delivery. Control A.5.2 closes that gap by requiring a process you can run repeatedly, with clear inputs, decision rights, and evidence.

For a CCO or GRC lead, the goal is operational speed: define when an impact assessment is required, who owns it, how it is performed, and what “done” means. Then integrate it into existing gates (procurement, SDLC, change management, product approvals, third-party onboarding) so assessments happen because the business cannot move forward without them.

This page provides requirement-level implementation guidance for building an AI system impact assessment process that auditors can follow end-to-end. It focuses on workflow design, minimum artifacts, and the exam questions you should be able to answer on demand, with practical examples you can adapt to both internally built AI and third-party AI services. 1

Regulatory text

Requirement (verbatim): “The organization shall establish and implement a process for conducting AI system impact assessments.” 1

What the operator must do:
You must (1) define a documented process and (2) put it into use. Auditors will expect to see that the process exists, has owners and triggers, produces consistent outputs, and results in decisions and mitigations that are tracked to completion. The intent is to evaluate potential effects on individuals, groups, and society, not only technical performance. 1

Plain-English interpretation

An “AI system impact assessment process” is your organization’s standard way to answer four questions before deploying or materially changing an AI system:

  1. Who can be affected (direct users, non-users impacted by decisions, employees, customers, vulnerable groups)?
  2. What could go wrong (unfair treatment, privacy intrusion, safety harms, denial of service/benefits, manipulation, over-reliance, reputational harm)?
  3. How likely and severe is it, given your context and controls?
  4. What will you do about it (mitigate, monitor, restrict use, add human review, or stop the use case)?

A practical definition of “implemented” is: your product, engineering, procurement, and business owners cannot move an AI system forward without completing the assessment and recording an approval decision.

Who it applies to (entity and operational context)

This applies to:

  • AI providers building AI systems or models for internal or external use.
  • AI users deploying AI tools (including third-party tools) in business processes.
  • Organizations using AI in any operational context where outputs can affect people or decisions about people. 1

Operational contexts that commonly require an impact assessment process:

  • Customer decisions (eligibility, pricing, fraud actions, content moderation).
  • Employee decisions (screening, performance analytics, workforce monitoring).
  • Safety-relevant automation (health, industrial, physical environments).
  • Customer-facing generative AI (advice, support, recommendations).
  • Third-party AI embedded in products (APIs, scoring engines, identity tools).

What you actually need to do (step-by-step)

1) Define scope and triggers (make it hard to bypass)

Create written triggers that force an assessment. Common triggers include:

  • New AI system deployment (internal build or third-party).
  • Material change to model, training data, intended use, or decision logic.
  • Expansion to new user group, geography, or regulated domain.
  • New integration that changes what the system can infer or decide.

Control objective: if someone asks, “Why didn’t this system get an impact assessment?” you can point to a trigger list and show either (a) the completed assessment or (b) the documented rationale for non-applicability.
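
One way to make the trigger list testable rather than tribal knowledge is to encode it as data. The sketch below is illustrative Python, not anything prescribed by the standard; the trigger names and the `AISystemChange` fields are assumptions you would replace with your own written trigger list.

```python
from dataclasses import dataclass

@dataclass
class AISystemChange:
    """Facts about a proposed AI deployment or change (illustrative fields)."""
    is_new_deployment: bool = False
    model_or_data_changed: bool = False
    intended_use_changed: bool = False
    new_user_group_or_geography: bool = False
    integration_expands_inference: bool = False

# One predicate per written trigger; keep the names aligned with the SOP text
# so the code and the procedure document stay auditable against each other.
TRIGGERS = {
    "new_deployment": lambda c: c.is_new_deployment,
    "material_model_or_data_change": lambda c: c.model_or_data_changed,
    "intended_use_change": lambda c: c.intended_use_changed,
    "new_population_or_region": lambda c: c.new_user_group_or_geography,
    "integration_expands_inference": lambda c: c.integration_expands_inference,
}

def fired_triggers(change: AISystemChange) -> list[str]:
    """Return the triggers that fired. An empty list still requires a
    documented non-applicability rationale before the change proceeds."""
    return [name for name, predicate in TRIGGERS.items() if predicate(change)]

print(fired_triggers(AISystemChange(model_or_data_changed=True)))
# ['material_model_or_data_change']
```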

2) Assign ownership and decision rights (RACI, not vibes)

Set roles in a RACI table:

  • Business owner: accountable for intended use and outcomes.
  • System owner (product/engineering): accountable for implementation and controls.
  • Risk/compliance: responsible for methodology, challenge, and sign-off criteria.
  • Privacy/security: responsible for privacy and security impact inputs.
  • Legal (as needed): advisory for higher-risk uses and external claims.

Define who can approve:

  • Low-impact: business + system owner.
  • Elevated impact: add compliance and privacy/security sign-off.
  • High-impact: require a formal risk acceptance by an executive committee.
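
If your workflow tooling supports it, the approval tiers can be enforced as a lookup rather than a judgment call at release time. A minimal sketch, assuming the three tiers above; the role names are placeholders for your own RACI entries.

```python
from enum import Enum

class ImpactTier(Enum):
    LOW = "low"
    ELEVATED = "elevated"
    HIGH = "high"

# Sign-offs required per tier, mirroring the approval tiers described above.
REQUIRED_APPROVERS = {
    ImpactTier.LOW: {"business_owner", "system_owner"},
    ImpactTier.ELEVATED: {"business_owner", "system_owner",
                          "compliance", "privacy_security"},
    ImpactTier.HIGH: {"business_owner", "system_owner", "compliance",
                      "privacy_security", "executive_committee"},
}

def missing_approvals(tier: ImpactTier, recorded: set[str]) -> set[str]:
    """Sign-offs still outstanding before the decision can be recorded."""
    return REQUIRED_APPROVERS[tier] - recorded

print(missing_approvals(ImpactTier.ELEVATED, {"business_owner", "system_owner"}))
# e.g. {'compliance', 'privacy_security'} (set order varies)
```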

3) Standardize the assessment template (keep it short, but complete)

A workable template typically captures:

A. System profile

  • Purpose, intended users, affected parties.
  • AI type (predictive, generative, decisioning, monitoring).
  • Deployment context (internal tool, customer product, third-party SaaS).

B. Data and inputs

  • Data categories (especially personal or sensitive).
  • Data provenance (internal, customer-provided, third-party).
  • Retention and access boundaries (link to privacy/security artifacts).

C. Impact analysis (the core)

  • Affected parties mapping: individuals, groups, society-level considerations.
  • Harm types: discrimination/unfairness, privacy intrusion, safety, financial harm, denial of access/benefits, deception/manipulation, over-reliance, accessibility barriers.
  • Operational failure modes: hallucinations, drift, prompt injection, automation bias, poor explainability, brittle edge cases.
  • Severity/likelihood rating: use a simple rubric (e.g., Low/Med/High) with definitions.

D. Mitigations and controls

  • Human review points and escalation.
  • Guardrails (allowed uses, disallowed uses, policy checks).
  • Testing and monitoring plan (quality, safety, bias signals relevant to your use).
  • User transparency (disclosures, notices, consent flows where applicable).
  • Incident response hooks (how issues are reported and triaged).

E. Decision and conditions

  • Approve, approve with conditions, or reject.
  • Conditions tied to owners and due dates (tracked to closure).
  • Reassessment cadence trigger (event-based; don’t rely only on time).
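
One way to keep the template structured (and completion rates high) is to define it as a typed record, so every required section becomes a required field. This is a hypothetical schema mirroring sections A-E above, not a prescribed format; the field names and rating scale are illustrative.

```python
from dataclasses import dataclass, field
from typing import Literal

Rating = Literal["Low", "Med", "High"]

@dataclass
class ImpactAssessment:
    # A. System profile
    system_name: str
    purpose: str
    affected_parties: list[str]   # individuals, groups, society-level
    # B. Data and inputs
    data_categories: list[str]
    data_provenance: str
    # C. Impact analysis
    harm_types: list[str]
    failure_modes: list[str]
    severity: Rating
    likelihood: Rating
    # D. Mitigations and controls
    mitigations: list[str]
    # E. Decision and conditions
    decision: Literal["approve", "approve_with_conditions", "reject"]
    conditions: list[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        """Block submission until the core sections are actually filled in."""
        return bool(self.affected_parties and self.harm_types and self.mitigations)
```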

4) Integrate into delivery workflows (where processes become real)

Pick at least two hard gates:

  • SDLC gate: impact assessment required before production release for AI features.
  • Procurement gate: third-party AI cannot be purchased or renewed without an impact assessment (and mapped third-party controls).
  • Change management gate: model/version change requires reassessment if triggers are met.

Practical implementation pattern: add a required field in your ticketing system (“Impact Assessment ID”) and block the release/contract workflow if empty.
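
A minimal sketch of that gating pattern as a required pipeline step; the ID format is an assumption, and a real implementation would also verify the ID against your assessment repository rather than just checking its shape.

```python
import re
import sys

# Illustrative ID format; align with however your repository numbers assessments.
ASSESSMENT_ID = re.compile(r"^AIA-\d{4,}$")

def gate_release(ticket_fields: dict[str, str]) -> None:
    """Fail the pipeline unless a well-formed Impact Assessment ID is present.
    A real check would also confirm the ID exists and that its decision was
    'approve' or 'approve_with_conditions'."""
    value = ticket_fields.get("Impact Assessment ID", "").strip()
    if not ASSESSMENT_ID.match(value):
        sys.exit("Release blocked: missing or malformed Impact Assessment ID.")
    print(f"Gate passed: {value}")

gate_release({"Impact Assessment ID": "AIA-2031"})  # prints "Gate passed: AIA-2031"
```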

5) Run the assessment and document challenge

Require at least one reviewer outside the build team (compliance, privacy, security, or an AI governance group) to provide documented challenge. Auditors look for evidence that the process detects risk, not just rubber-stamps delivery.

6) Track conditions to closure and monitor

An impact assessment that identifies mitigations but doesn’t track completion will fail in practice and in audits. Create a simple tracker:

  • Finding/condition
  • Owner
  • Status
  • Evidence link
  • Closure date
  • Residual risk statement (if accepted)

If you use Daydream to manage third-party risk and control evidence, treat AI impact assessment conditions like control obligations: assign tasks, collect artifacts, and maintain an audit-ready record across internal and third-party systems.
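
A sketch of what each tracker row might look like as a structured record, assuming the fields listed above; the status vocabulary is illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrackedCondition:
    finding: str
    owner: str
    status: str               # e.g. "open", "in_progress", "closed", "risk_accepted"
    evidence_link: str = ""
    closure_date: date | None = None
    residual_risk: str = ""   # required narrative when status == "risk_accepted"

def open_items(tracker: list[TrackedCondition]) -> list[TrackedCondition]:
    """The rows an auditor will walk: anything not closed or formally accepted."""
    return [c for c in tracker if c.status not in ("closed", "risk_accepted")]
```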

Required evidence and artifacts to retain

Keep artifacts in a controlled repository with versioning:

  1. AI impact assessment procedure (process doc with triggers, roles, workflow steps).
  2. Assessment template and scoring rubric definitions.
  3. Completed impact assessments for in-scope AI systems.
  4. Approval records (who approved, when, with what conditions).
  5. Mitigation implementation evidence (test results, monitoring configuration, updated SOPs, UI disclosures).
  6. Exception/risk acceptance records for residual risk.
  7. Inventory linkage (each AI system in inventory points to the latest impact assessment).
  8. Third-party documentation when using external AI (supplier descriptions, deployment constraints, contractual requirements you imposed).

Common exam/audit questions and hang-ups

Auditors and assessors often probe these areas:

  • “Show me your trigger logic.” How do you decide which AI systems require assessment, and how do you prevent bypass?
  • “Prove it’s implemented.” Provide a sample of recent AI releases/procurements and show corresponding assessments and approvals.
  • “Where are individuals and groups considered?” Many teams focus only on model risk; the control expects broader societal/people impacts. 1
  • “What happens after the assessment?” They will follow conditions to closure and ask for evidence that mitigations shipped.
  • “How do you handle third-party AI?” Expect scrutiny if you rely on vendors without documenting your own deployment context impacts.

Frequent implementation mistakes and how to avoid them

| Mistake | Why it fails | Fix |
| --- | --- | --- |
| Template is a long narrative essay | Inconsistent outputs, low completion rates | Use structured fields, drop-down ratings, and required sections |
| No integration into release/procurement | Assessments happen “sometimes” | Add workflow gating in SDLC/procurement tooling |
| Confusing impact assessment with privacy assessment | Misses non-privacy harms (fairness, safety, over-reliance) | Cross-reference privacy, but keep impact scope broader |
| No decision record | Cannot prove governance | Require explicit approve/conditional/reject outcomes |
| Mitigations aren’t tracked | Risks remain theoretical | Track actions like audit issues: owner, status, evidence |
| Third-party AI exempted by default | Risk still exists in your use | Assess your deployment, even if the model is external |

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so you should treat enforcement risk as indirect: failures tend to surface through broader regulatory actions (privacy, discrimination, consumer protection, safety incidents) and through contractual/audit findings when customers request ISO/IEC 42001 alignment. The operational risk is more immediate: unassessed AI systems create uncontrolled decisioning, customer harm, and incident response surprises that are expensive to unwind after deployment.

Practical 30/60/90-day execution plan

First 30 days (stand up the minimum viable process)

  • Name an executive sponsor and a process owner (GRC or AI governance lead).
  • Draft the impact assessment SOP: scope, triggers, RACI, approval tiers. 1
  • Build the template and scoring rubric (Low/Med/High definitions).
  • Identify your initial in-scope AI inventory slice (customer-impacting and employee-impacting first).
  • Pilot the assessment on a small set of AI systems and capture lessons.

By 60 days (make it hard to bypass)

  • Embed the process into SDLC and procurement gates (tickets, checklists, sign-off steps).
  • Train product, engineering, and procurement on how to complete the assessment.
  • Create the mitigation tracker and link it to your issue management process.
  • Establish a review forum (monthly AI risk review or existing risk committee).

By 90 days (scale and evidence for audit)

  • Expand coverage to remaining AI systems and key third parties providing AI capabilities.
  • Run a quality review on completed assessments for consistency and adequacy.
  • Produce an audit pack: SOP, template, sample assessments, approvals, closed mitigations.
  • Add continuous triggers: reassessment on model changes, new data sources, new use cases.

Frequently Asked Questions

Do we need an impact assessment for third-party AI tools like chatbots or scoring APIs?

Yes, if your deployment can affect individuals, groups, or society, even if you did not build the model. Assess the impact in your context, then document any compensating controls and contractual requirements you impose. 1

How is an AI impact assessment different from a DPIA or security risk assessment?

A DPIA focuses on privacy risks, and a security assessment focuses on confidentiality, integrity, and availability. An AI impact assessment must also cover broader harms such as unfair treatment, safety issues, over-reliance, and societal effects tied to the AI system’s use. 1

What counts as “implemented” for ISO/IEC 42001 Control A.5.2?

You can show implementation by demonstrating that the documented process is used consistently: completed assessments exist, approvals are recorded, and mitigations are tracked to completion. A policy without executed assessments is weak evidence. 1

Who should sign off on an AI impact assessment?

The business owner and system owner should always approve, because they own outcomes and controls. Add compliance, privacy, and security sign-off for higher-impact systems based on your defined approval tiers.

How often should we redo an impact assessment?

Use event-based triggers: reassess when the model changes, the intended use changes, new data sources are introduced, or the affected population changes. Document the trigger logic so reassessment is predictable and testable.

What evidence should we show an auditor for a single AI feature release?

Provide the completed impact assessment, the approval decision, the list of required mitigations, and proof those mitigations were implemented (test records, monitoring setup, updated procedures, user-facing disclosures where relevant). Tie it back to the release record in your SDLC tool.

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system
