Nonconformity and corrective action

ISO/IEC 42001 Clause 10.2 requires you to run a closed-loop process for nonconformities: contain and correct the issue, determine root cause, implement corrective actions that prevent recurrence, and verify those actions work. To operationalize it fast, stand up an intake-and-triage workflow, a root-cause method, corrective action tracking with owners and due dates, and effectiveness reviews tied to your AI management system. 1

Key takeaways:

  • Treat every nonconformity as a managed case with containment, root cause, corrective action, and effectiveness verification.
  • Your auditors will look for traceability: issue → cause → action → evidence → effectiveness check.
  • The system must work across AI lifecycle activities, including third parties that build, host, supply data, or monitor your AI.

“Nonconformity and corrective action” is the mechanism that keeps an AI management system credible under pressure. ISO/IEC 42001 Clause 10.2 is short, but it implies disciplined operations: you must respond to nonconformities quickly, prevent recurrence, and prove the fix worked. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path is to define what qualifies as a nonconformity in your AI context, create a single intake path, and run each event as a closed-loop corrective action record. Nonconformities can come from internal audits, model monitoring, incident response, complaints, regulator or customer findings, third-party failures, or engineering discovering a control gap mid-release.

Operationalizing Clause 10.2 means you are building a repeatable “CAPA-like” motion (corrective and preventive action), aligned to your AI risk controls and lifecycle governance. You will need roles, templates, evidence discipline, and a way to validate effectiveness beyond “we deployed a patch.” This page gives requirement-level steps, artifacts, and exam-ready talking points.

Requirement text

Clause requirement (operator view). ISO/IEC 42001 Clause 10.2, in summary: when a nonconformity occurs, the organization must react to it, evaluate the cause(s) and the need for action to eliminate them, implement any action needed, and review the effectiveness of the corrective action taken. 1

What this means in practice

You need a documented, consistently used process that:

  1. Reacts to the nonconformity (containment and correction to reduce impact).
  2. Evaluates causes (root cause analysis, not only symptoms).
  3. Implements corrective action (changes that prevent recurrence, not just a one-off fix).
  4. Reviews effectiveness (evidence that the corrective action worked and stayed working).

All four elements must be demonstrable with records. 1
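The four-element loop can be enforced as a closure gate: a record cannot close until each element carries evidence. A minimal sketch, assuming a hypothetical record schema (the field and element names are illustrative, not wording from the standard):

```python
# Hypothetical closure gate: a nonconformity record cannot close until all
# four Clause 10.2 elements carry evidence. Field and element names are
# illustrative assumptions, not wording from the standard.
from dataclasses import dataclass, field

REQUIRED_ELEMENTS = ("reaction", "root_cause", "corrective_action", "effectiveness_review")

@dataclass
class NonconformityRecord:
    record_id: str
    description: str
    evidence: dict = field(default_factory=dict)  # element -> list of evidence links

    def missing_elements(self):
        """Clause 10.2 elements with no attached evidence yet."""
        return [e for e in REQUIRED_ELEMENTS if not self.evidence.get(e)]

    def can_close(self) -> bool:
        return not self.missing_elements()

rec = NonconformityRecord("NC-001", "Monitoring job silently stopped")
rec.evidence["reaction"] = ["change-ticket CHG-1042"]
rec.evidence["root_cause"] = ["rca/NC-001.md"]
assert not rec.can_close()  # corrective action and effectiveness review still open
```

Whatever tooling you use, the point is the same: closure is computed from evidence, not asserted by the closer.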

Plain-English interpretation (requirement-level)

A nonconformity is a failure to meet a requirement of your AI management system. In AI programs, the “requirement” could be internal (your policy, model governance standard, risk control) or external (a contractual commitment you adopted into your system). Clause 10.2 expects you to treat that failure as a controlled event: you stop the bleeding, figure out why it happened, fix the system condition that allowed it, and confirm the fix worked.

Auditors typically test this clause by sampling a few nonconformities and checking whether your records prove a closed loop, including objective evidence of effectiveness.

Who it applies to (entity + operational context)

Clause 10.2 applies to any organization operating an AI management system under ISO/IEC 42001, including:

  • AI providers building, training, deploying, or operating AI systems.
  • AI users deploying AI in business processes (even if models are sourced externally).
  • Organizations relying on AI-enabled features where governance commitments exist (monitoring, human oversight, change management). 1

Operational contexts where nonconformities commonly arise:

  • Model lifecycle: data selection, training runs, evaluation, release approvals, drift monitoring, retirement.
  • Governance controls: missing approvals, outdated risk assessments, incomplete documentation, failed human oversight steps.
  • Security- and privacy-adjacent: access control gaps in model endpoints, data handling deviations, logging gaps (only if those controls are part of your AI management system requirements).
  • Third parties: a cloud provider outage that breaks monitoring, a data supplier violates agreed quality constraints, a model vendor changes behavior without notice, a labeling firm deviates from instructions.

What you actually need to do (step-by-step)

1) Define and publish “nonconformity” for your AI program

Create a short definition with examples tailored to your controls. Include at least:

  • Failure to follow required AI governance steps (approvals, reviews, monitoring).
  • Failure to meet a documented control requirement (logging, evaluation thresholds you committed to, documentation completeness).
  • Repeated incidents indicating a systemic process defect.

Keep it operational: people should recognize when to open a record.

Output: Nonconformity criteria and examples in your AI management system documentation. 1
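The trigger list above can be made operational as a tiny rule set. A sketch under assumed, illustrative trigger names and event fields (nothing here is prescribed by the standard):

```python
# Hypothetical sketch: encode nonconformity triggers as predicates so anyone
# can check whether an event should open a record. Trigger names and event
# fields are illustrative assumptions, not terms from the standard.
TRIGGERS = {
    "skipped_governance_step": lambda e: e.get("required_approval_missing", False),
    "control_requirement_unmet": lambda e: e.get("eval_score", 1.0) < e.get("eval_threshold", 0.0),
    "repeat_incident": lambda e: e.get("similar_incidents_90d", 0) >= 3,
}

def matched_triggers(event):
    """Return the names of every trigger the event satisfies."""
    return [name for name, pred in TRIGGERS.items() if pred(event)]

assert matched_triggers({"required_approval_missing": True}) == ["skipped_governance_step"]
```

Encoding the criteria this way also gives triage a consistent starting classification instead of a free-text judgment call.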

2) Stand up a single intake + triage workflow

You need one obvious door. Options: ticketing system queue, GRC workflow, or a dedicated mailbox that creates records. Triage should classify:

  • Severity/impact (business + AI risk impact)
  • Scope (which AI system, which business process, which third party)
  • Containment needed (stop feature, roll back model, disable endpoint, increase review)
  • Notification needs (internal leadership, affected business owner, third party manager)

Operator tip: If you already have an incident response process, integrate it. A nonconformity can start as an incident and then convert into a corrective action record after containment.

Output: Triage SOP + assignment rules + case template. 1
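The triage classification above can be sketched as a small routing function. The severity levels, SLA days, and notification rules below are illustrative assumptions, not requirements from Clause 10.2:

```python
# Hypothetical triage sketch: classify an intake item by severity, decide
# whether containment is needed, and pick notification targets. The levels,
# SLA days, and routing rules are illustrative assumptions.
def triage(impact: str, scope_is_production: bool, third_party: bool) -> dict:
    severity = {"high": 1, "medium": 2, "low": 3}[impact]
    notify = []
    if severity == 1:
        notify.append("leadership")           # high-severity items escalate
    if third_party:
        notify.append("third_party_manager")  # vendor path for third-party issues
    return {
        "severity": severity,
        # High-severity production impact warrants an immediate containment review.
        "containment_required": severity == 1 and scope_is_production,
        "notify": notify,
        "sla_days": {1: 1, 2: 5, 3: 15}[severity],
    }

case = triage("high", scope_is_production=True, third_party=True)
assert case["containment_required"]
assert case["notify"] == ["leadership", "third_party_manager"]
```

Hard-coding the routing rules keeps triage decisions consistent across reporters and makes the rationale auditable.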

3) React: containment and correction actions first

Clause language starts with “react.” That is a hint about priorities: reduce harm before you perfect the analysis.

  • Containment examples: pause an automated decision, put a human approval step in place, throttle model traffic, block a risky input class, suspend a third-party data feed.
  • Correction examples: fix a broken monitoring job, restore logging, patch a misconfigured access role.

Evidence expectation: timestamped actions, who approved them, and confirmation they took effect. 1

4) Evaluate causes: run root cause analysis that fits the failure

Pick a root cause method and use it consistently:

  • 5 Whys for straightforward process failures.
  • Fishbone/Ishikawa when multiple contributing factors exist (people/process/technology/data/third party).
  • Barrier analysis when a control failed (what barrier should have stopped it; why didn’t it).

Your record should distinguish:

  • Direct cause (what happened)
  • Contributing factors (why it was possible)
  • Systemic root cause (what in your system needs to change)

Common audit hangup: “Root cause” that restates the symptom (e.g., “monitoring failed because monitoring wasn’t running”). Write the causal chain until it points to a controllable system condition. 1
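One way to catch symptom-restating root causes before closure is a crude word-overlap check on the causal chain record. This is a hypothetical sketch; the heuristic is only a prompt for a human quality review, and the field names are illustrative:

```python
# Hypothetical RCA record sketch: capture the causal chain and flag a
# "root cause" that merely restates the symptom. The word-overlap heuristic
# is a crude illustration; field names are assumptions, not standard terms.
from dataclasses import dataclass

@dataclass
class RootCauseAnalysis:
    symptom: str                # what was observed
    direct_cause: str           # what happened
    contributing_factors: list  # why it was possible
    systemic_root_cause: str    # what in the system needs to change

    def looks_like_restatement(self) -> bool:
        """Flag when the symptom's words are mostly repeated in the root cause."""
        sym = set(self.symptom.lower().split())
        root = set(self.systemic_root_cause.lower().split())
        return len(sym & root) / max(len(sym), 1) > 0.6

bad = RootCauseAnalysis(
    symptom="monitoring failed",
    direct_cause="monitoring job was not running",
    contributing_factors=["no alerting on job status"],
    systemic_root_cause="monitoring failed because monitoring was not running",
)
assert bad.looks_like_restatement()  # symptom restated; push the causal chain deeper
```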

5) Implement corrective action: change the system, not only the output

Corrective actions prevent recurrence. Examples in an AI context:

  • Add a required pre-deployment checklist step and enforce it in CI/CD.
  • Add automated monitoring with alerting and on-call ownership.
  • Update third-party requirements (data quality SLA, change notification clauses).
  • Improve training for model release managers.
  • Adjust access controls or segregation of duties for model registry approvals.

Run corrective actions like projects:

  • Assign an owner.
  • Define acceptance criteria (“done”).
  • Track dependencies and approvals.
  • Record evidence at completion. 1
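Running corrective actions like projects can be sketched as a record with an owner, a due date, and evidence-backed acceptance criteria. Field names here are illustrative assumptions:

```python
# Hypothetical sketch: track a corrective action like a small project, with an
# owner, due date, and evidence-backed acceptance criteria. Field names are
# illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class CorrectiveAction:
    title: str
    owner: str
    due: date
    acceptance_criteria: list                     # definition of "done"
    evidence: dict = field(default_factory=dict)  # criterion -> evidence link

    def is_done(self) -> bool:
        """Done only when every acceptance criterion has attached evidence."""
        return all(c in self.evidence for c in self.acceptance_criteria)

    def is_overdue(self, today: date) -> bool:
        return today > self.due and not self.is_done()

ca = CorrectiveAction(
    title="Enforce pre-deployment checklist in CI/CD",
    owner="release-eng",
    due=date(2025, 6, 30),
    acceptance_criteria=["CI gate blocks unchecked releases", "release managers trained"],
)
assert not ca.is_done()
ca.evidence["CI gate blocks unchecked releases"] = "pipeline-config@main"
ca.evidence["release managers trained"] = "training-log-2025-06"
assert ca.is_done()
```

Tying "done" to per-criterion evidence is what turns the plan into something an auditor can verify, and `is_overdue` gives governance forums the aging view.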

6) Review effectiveness: prove the fix works

Effectiveness is not “implemented.” It is “implemented and verified.” Acceptable approaches include:

  • Re-test the failed control (e.g., run the monitoring job, simulate alert).
  • Sample recent releases to confirm the governance step is followed.
  • Review metrics and logs showing sustained operation.
  • Perform a targeted internal audit on the affected area.

Define effectiveness criteria at the time you create the corrective action, so you are not improvising later.

Output: Effectiveness review record with objective evidence attached or linked. 1
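Defining criteria up front makes closure mechanical rather than improvised. A minimal sketch, assuming hypothetical criteria expressed as predicates over observed evidence:

```python
# Hypothetical sketch: effectiveness criteria are fixed when the corrective
# action is opened, then checked against objective evidence before closure.
# The criteria and observation fields are illustrative assumptions.
def effectiveness_review(criteria, observations):
    """Return (passed, failures); every predefined criterion must hold."""
    failures = [name for name, check in criteria.items() if not check(observations)]
    return (not failures, failures)

# Criteria defined at creation time, e.g. for a restored monitoring control:
criteria = {
    "alert fired in simulation": lambda obs: obs["simulated_alerts"] >= 1,
    "30 days of uninterrupted runs": lambda obs: obs["consecutive_run_days"] >= 30,
}
passed, failures = effectiveness_review(
    criteria, {"simulated_alerts": 1, "consecutive_run_days": 12}
)
assert not passed and failures == ["30 days of uninterrupted runs"]
```

Because the criteria predate the review, a failed check reopens the corrective action instead of being argued away at closure time.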

7) Feed learning back into your AI management system

Close the loop by updating the system artifacts that allowed the nonconformity:

  • Policies/standards
  • Procedures and checklists
  • Training materials
  • Risk assessments and control mappings
  • Third-party requirements and oversight plans (where relevant)

Required evidence and artifacts to retain

Auditors want traceability. Maintain a record set per nonconformity:

  • Nonconformity record: description, date detected, system/process impacted, reporter. Stored in a GRC tool or ticketing system.
  • Triage and severity rationale: why it was prioritized this way and who decided. Stored in the same record.
  • Containment/correction actions: actions, timestamps, approvals, verification. Stored in change tickets, the incident log, and deployment records.
  • Root cause analysis: method used, causal chain, contributing factors. Stored as an attached RCA doc or embedded notes.
  • Corrective action plan: actions, owners, due dates, acceptance criteria. Stored in a CAPA plan or project tracker.
  • Evidence of completion: screenshots, configs, logs, training attendance, contract addenda. Stored as linked artifacts.
  • Effectiveness review: test results, samples reviewed, monitoring evidence. Stored in the audit/testing record.
  • Closure approval: named approver and closure date. Stored in the workflow sign-off.

Clause 10.2 does not dictate tooling. It does require that your process is effective and evidenced. 1

Where Daydream fits naturally: If you struggle to keep evidence linked across tickets, model registries, third-party reviews, and audit trails, Daydream can centralize the nonconformity record, route tasks to engineering and third-party owners, and maintain a clean evidence pack for certification audits.

Common exam/audit questions and hangups

Expect questions like:

  • “Show me your last few nonconformities and walk me from detection to effectiveness review.”
  • “How do you decide whether something is a nonconformity versus an incident versus an improvement?”
  • “How do you ensure corrective actions prevent recurrence?”
  • “Who can close a nonconformity, and what proof do they need?”
  • “How do you handle nonconformities caused by third parties?”

Hangups that trigger findings:

  • No documented root cause method, or inconsistent depth across cases.
  • “Corrective action” recorded as immediate fix only, with no system change.
  • No effectiveness criteria, or effectiveness asserted without evidence. 1

Frequent implementation mistakes (and how to avoid them)

  1. Treating nonconformities as paperwork. Fix: enforce ownership, due dates, and closure criteria; report aging items to governance forums.
  2. Skipping containment because “it’s low risk.” Fix: require an explicit containment decision, even if it is “none needed,” with a documented rationale.
  3. Root cause stops too early. Fix: require at least one process/control-level cause, not only a human error label.
  4. No link to AI lifecycle governance. Fix: tag each nonconformity to an AI system, lifecycle stage, and control area.
  5. Third-party issues treated as “out of scope.” Fix: record the nonconformity and track corrective action through your third-party governance, including contract changes or monitoring adjustments.

Enforcement context and risk implications

ISO/IEC 42001 is a certifiable management system standard rather than a regulation, so there is no regulator enforcing Clause 10.2 directly; the pressure comes from certification and surveillance audits. Operationally, weak corrective action discipline increases the chance of repeat failures in model governance, monitoring, and third-party dependencies. That raises audit risk (repeat findings) and operational risk (recurring incidents with the same root causes). 1

Practical 30/60/90-day execution plan

First 30 days (stand up the mechanics)

  • Publish a definition and examples of AI nonconformities aligned to your AI management system requirements.
  • Choose the system of record (GRC workflow, ticketing queue) and implement an intake form/template.
  • Define roles: reporter, triage lead, corrective action owner, approver/closer.
  • Create required fields: containment actions, RCA method, corrective actions, effectiveness criteria, evidence links.
  • Train engineering, data science, product, and third-party management on “when to open a nonconformity.”

Next 60 days (make it work end-to-end)

  • Run tabletop exercises using past incidents or near misses; open records and drive them to closure.
  • Establish a weekly review (aging, blockers, repeat themes).
  • Add a standard root cause method and a quality check step before closure.
  • Integrate with change management so corrective actions that touch code/config have traceable deployment evidence.

By 90 days (prove repeatability and readiness)

  • Complete a sample-based internal audit of the process: select closed items and verify evidence quality.
  • Analyze trends: recurring control gaps, third-party categories driving issues, lifecycle stages with repeat failures.
  • Update governance artifacts based on learnings (procedures, checklists, third-party requirements).
  • Prepare an “audit pack” export/report showing traceability for a set of nonconformities and corrective actions.
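The trend analysis in the 90-day step can be sketched as a simple tally over closed records. The record fields and the threshold below are illustrative assumptions:

```python
# Hypothetical sketch for the 90-day trend review: tally closed records by
# control area and lifecycle stage to surface repeat themes. The record
# fields and the min_count threshold are illustrative assumptions.
from collections import Counter

def repeat_themes(records, min_count=2):
    """Return (control_area, lifecycle) pairs appearing min_count or more times."""
    counts = Counter((r["control_area"], r["lifecycle"]) for r in records)
    return [theme for theme, n in counts.items() if n >= min_count]

closed = [
    {"control_area": "monitoring", "lifecycle": "operation"},
    {"control_area": "monitoring", "lifecycle": "operation"},
    {"control_area": "release approval", "lifecycle": "deployment"},
]
assert repeat_themes(closed) == [("monitoring", "operation")]
```

Tagging each record with an AI system, lifecycle stage, and control area (step 4 of the mistakes list) is what makes this kind of rollup possible.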

Frequently Asked Questions

What counts as a “nonconformity” under ISO/IEC 42001 Clause 10.2?

Any failure to meet a requirement of your AI management system qualifies. In practice, define clear triggers such as skipped governance approvals, missing monitoring, incomplete documentation, or failure to follow your third-party oversight requirements. 1

Do we need a separate CAPA process for AI, or can we reuse our enterprise CAPA?

You can reuse an enterprise CAPA if it captures containment, root cause, corrective action, and effectiveness review for AI-related failures. Most teams add AI-specific fields such as AI system ID, lifecycle stage, and links to model monitoring evidence. 1

How do we prove “effectiveness” without inventing metrics?

Use objective evidence tied to the failure mode: a re-test, a controlled simulation, a sample review of subsequent releases, or logs showing the control ran as designed. Define effectiveness criteria when you open the corrective action so closure is evidence-based. 1

What if the nonconformity was caused by a third party?

Record it the same way and track containment plus corrective actions through your third-party governance channel. Corrective action might include contract changes, monitoring additions, supplier corrective action requests, or replacing the third party for the impacted function. 1

Can we close a nonconformity once the immediate fix is deployed?

Not if you have not addressed root cause and verified effectiveness. Closure should require evidence of the corrective action and a documented effectiveness check appropriate to the risk and scope. 1

What’s the minimum documentation auditors expect to see?

A record that shows the event, containment/correction, root cause analysis, corrective action plan with ownership, and an effectiveness review backed by evidence links. If any element is missing, auditors often treat the case as not closed-loop. 1

Footnotes

  1. ISO/IEC 42001:2023 Artificial intelligence — Management system
