Internal audit
ISO/IEC 42001 Clause 9.2 requires you to run internal audits on a planned schedule to confirm your AI management system (AIMS) conforms to requirements and is effectively implemented and maintained 1. To operationalize it, establish an audit program, audit criteria and scope, auditor independence, corrective actions, and a repeatable evidence trail.
Key takeaways:
- You need a planned internal audit program tied to AIMS scope and risk, not ad hoc spot checks.
- Audits must test both conformity (meets requirements) and effectiveness (works in practice), with documented results and corrective action follow-through.
- Auditor competence and independence are exam hot buttons; design the program so auditors don’t audit their own work.
“Internal audit” in ISO/IEC 42001 is not a one-off project or a generic corporate audit. It is a recurring control that validates whether your AI management system is built the way you say it is, and whether it operates that way day-to-day. The clause text is short, but the operational expectation is concrete: you plan audits, execute them at the planned intervals, document what you did and what you found, and then prove you fixed what mattered.
For a CCO, GRC lead, or compliance officer, the fastest path is to treat internal audit as a workflow with a stable backbone: a defined audit program, standard workpapers, an evidence repository, and a corrective action process that closes the loop. Your goal is not perfection; your goal is repeatability and defensibility. When a certification auditor or an internal stakeholder asks, “Show me how you know the AIMS is working,” internal audit is the answer you can prove.
If you’re already running internal audits for ISO 27001 or a quality management system, reuse the machinery. The differences are the audit criteria (ISO/IEC 42001 requirements plus your AI policies/controls) and the audit focus (AI lifecycle governance, risk controls, and operational effectiveness across AI use).
Regulatory text
Requirement (verbatim): “The organization shall conduct internal audits at planned intervals.” 1
What the operator must do:
You must define an internal audit cadence and execute audits according to that plan, covering the AI management system within scope. The audits must give you documented assurance that your AIMS conforms to the requirements you committed to and that it is effectively implemented and maintained 1. “Planned intervals” means you can justify the schedule and show you followed it.
Plain-English interpretation
Internal audit is your formal “trust but verify” mechanism for the AI management system. You are expected to:
- Plan what gets audited, when, by whom, and against what criteria.
- Test whether controls exist and whether they actually work in operating conditions.
- Document results in a way a third party can retrace.
- Drive fixes through corrective actions and re-checks.
A useful mental model: policies say what you do; internal audit proves you did it, and shows where you didn’t.
Who it applies to
This requirement applies to any organization implementing ISO/IEC 42001, including:
- AI providers building or supplying AI systems.
- AI users deploying AI systems in business processes.
- Organizations using an AIMS to govern internal and third-party AI use 1.
Operationally, it applies wherever your AIMS scope reaches:
- Product teams developing models, prompt workflows, evaluation pipelines, or model monitoring.
- Business functions using AI for decisions, recommendations, or customer interactions.
- Procurement and third-party management teams overseeing AI-enabled tools and services.
- Security, privacy, legal, and compliance teams owning AI risk controls.
What you actually need to do (step-by-step)
1) Define your internal audit program (the “audit backbone”)
Create an Internal Audit Procedure and an Audit Program that answer:
- Scope: Which parts of the AIMS, sites, business units, and AI use cases are in scope.
- Criteria: ISO/IEC 42001 requirements plus your internal AIMS policies, standards, and control procedures 1.
- Cadence: The planned intervals for full-scope and targeted audits. Pick a schedule you can sustain and justify based on risk and change velocity.
- Methods: Interviews, evidence sampling, walkthroughs, configuration checks, data tracebacks, and control testing.
- Roles and independence: Who can audit what, and who approves results.
Deliverable: a program document you can hand to an auditor without apologizing.
2) Build an audit universe mapped to your AIMS
List auditable units, then map them to AIMS obligations and key risks. Typical entries:
- AI inventory completeness and change management.
- Model onboarding and risk assessment workflow.
- Training/validation data governance checkpoints (as defined by your AIMS).
- Human oversight controls for high-impact uses (as defined by your AIMS).
- Monitoring, incident handling, and post-deployment review.
This prevents the most common failure mode: audits that only review policies and meeting notes.
3) Create standard audit workpapers and test steps
For each audit area, standardize:
- Control objective: What must be true.
- Test procedure: What you will check.
- Sampling approach: What records you will pull (tickets, approvals, logs, assessments).
- Pass/fail logic: What counts as conforming vs nonconforming.
Example test (inventory control):
- Objective: All in-scope AI systems are recorded and have accountable owners.
- Test: Compare procurement list + cloud app catalog + model registry against AI inventory; investigate gaps; verify each entry has an owner and last review date in your records.
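The reconciliation in that test is simple set arithmetic. The sketch below assumes hypothetical system names and export shapes; real inputs would be exports from procurement, the cloud app catalog, and the model registry.

```python
# Hypothetical exports from three independent sources of truth.
procurement = {"chatbot-v2", "resume-screener", "forecast-model"}
cloud_catalog = {"chatbot-v2", "translation-api"}
model_registry = {"forecast-model", "fraud-scorer"}

# The AI inventory under audit: each entry should have an accountable owner.
ai_inventory = {
    "chatbot-v2": {"owner": "j.doe", "last_review": "2025-01-10"},
    "forecast-model": {"owner": "a.smith", "last_review": "2024-11-02"},
    "fraud-scorer": {"owner": None, "last_review": "2024-09-30"},
}

# Union of everything observed in the wild.
observed = procurement | cloud_catalog | model_registry

# Gap 1: systems seen in source exports but missing from the inventory.
missing = sorted(observed - ai_inventory.keys())

# Gap 2: inventory entries with no accountable owner recorded.
unowned = sorted(k for k, v in ai_inventory.items() if not v["owner"])

print(missing)  # ['resume-screener', 'translation-api']
print(unowned)  # ['fraud-scorer']
```

Each gap becomes a line in the workpaper with the export references attached, so the finding can be retraced later.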
4) Ensure auditor competence and independence
Set rules that prevent self-audit:
- An engineer should not audit the controls they designed or operate.
- A product owner should not audit their own model’s governance artifacts.
- If you’re small, use cross-team audits or bring in a trained internal auditor from another function.
Document competence via training records, resumes, or prior audit experience. Independence is less about org charts and more about credible separation of duties.
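The self-audit rule can be enforced mechanically when you assign auditors. A rough sketch, assuming a hypothetical ownership map and assignment list:

```python
# Who designed or operates each control area (illustrative names).
control_owners = {
    "AI inventory": {"j.doe"},
    "Model onboarding": {"a.smith", "b.lee"},
}

# Proposed (auditor, area) assignments for the next audit cycle.
assignments = [
    ("j.doe", "Model onboarding"),    # fine: j.doe does not own this area
    ("a.smith", "Model onboarding"),  # self-audit: a.smith operates this control
]

# Flag any assignment where the auditor owns the area being audited.
conflicts = [
    (auditor, area)
    for auditor, area in assignments
    if auditor in control_owners.get(area, set())
]
print(conflicts)  # [('a.smith', 'Model onboarding')]
```

Running a check like this before each cycle gives you a documented independence review rather than an after-the-fact assertion.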
5) Execute audits and record results in a traceable way
During execution:
- Hold an opening meeting with scope, criteria, and timing.
- Gather evidence and record it in workpapers with clear references (ticket IDs, links, screenshots, export files).
- Classify findings consistently (nonconformity, observation, opportunity for improvement) based on your procedure.
- Hold a closing meeting confirming preliminary results and next steps.
Tip: Write findings so they stand alone. A good finding states the criterion, condition, evidence, and risk.
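A lightweight way to make the four elements stick is to validate finding records before they go in the report. The record shape and severity labels below are illustrative assumptions, not a prescribed schema.

```python
# The four elements every finding must carry to stand alone.
REQUIRED_FIELDS = ("criterion", "condition", "evidence", "risk")

def validate_finding(finding: dict) -> list[str]:
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_FIELDS if not finding.get(f)]

finding = {
    "id": "NC-2025-003",                 # hypothetical finding ID
    "classification": "nonconformity",
    "criterion": "Internal Audit Procedure s.4: all AI systems have owners",
    "condition": "2 of 15 sampled inventory entries had no owner recorded",
    "evidence": "Inventory export 2025-03-01; tickets AI-1042, AI-1087",
    "risk": "Unowned systems may miss reviews and incident response",
}
print(validate_finding(finding))  # [] — all four elements present
```

A finding that fails this check goes back to the auditor, not into the report.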
6) Drive corrective actions to closure (and re-test)
For each nonconformity:
- Assign an owner and due date.
- Require root cause analysis proportionate to severity.
- Document corrective action plan and completion evidence.
- Re-test the control to confirm effectiveness.
If you cannot show closure discipline, your audit program becomes theater. Auditors and certifying bodies usually probe this.
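The closure steps above amount to an exit gate: no criterion satisfied, no closure. A minimal sketch, assuming a hypothetical corrective action record shape:

```python
def can_close(action: dict) -> bool:
    """A corrective action may close only when every exit criterion is met."""
    return all([
        bool(action.get("owner")),
        bool(action.get("root_cause")),
        bool(action.get("plan")),
        bool(action.get("closure_evidence")),
        action.get("retest_result") == "effective",  # re-test must confirm it works
    ])

action = {
    "owner": "a.smith",
    "root_cause": "No mandatory owner field in the intake form",
    "plan": "Make owner a required field; backfill existing entries",
    "closure_evidence": "Change ticket CHG-2211; updated form screenshot",
    "retest_result": "pending",
}
print(can_close(action))  # False — re-test has not confirmed effectiveness
```

Whether the gate lives in a GRC tool or a script, the discipline is the same: "closed on a promise" is impossible by construction.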
7) Report to management and feed management review
Internal audit outputs should inform leadership decisions about resourcing, priorities, and risk acceptance. Your management review agenda should consume audit results as an input, even if only as a summary and trend discussion.
Required evidence and artifacts to retain
Keep artifacts in a controlled repository with retention rules aligned to your compliance program. Minimum set:
- Internal Audit Procedure (roles, independence, methods, reporting, corrective action linkage).
- Audit Program / Schedule showing planned intervals 1.
- Audit plans per audit (scope, criteria, locations/teams, timeline).
- Workpapers (test steps performed, evidence references, sampling notes).
- Audit reports (findings, conclusions, sign-off).
- Corrective action records (owner, root cause, plan, closure evidence, re-test notes).
- Auditor competence records (training, qualifications).
- Communications (opening/closing meeting notes, management reporting).
If you use Daydream to run third-party due diligence or internal control tracking, treat it as your system of record for audit artifacts: centralized evidence, owner assignment, and a clean chain from finding to corrective action to closure.
Common exam/audit questions and hangups
Expect these questions from certification auditors and internal stakeholders:
- “Show the audit program and prove you executed audits at the planned intervals.” 1
- “What criteria did you audit against, and how do you know it covers the AIMS scope?”
- “How did you ensure auditor independence?”
- “Pick one finding. Show the corrective action, closure evidence, and re-test.”
- “How do audits cover third-party AI systems or AI-enabled tools within scope?”
- “How do you decide which areas get audited more deeply?”
Hangup patterns:
- Audits that are only document reviews, with no operational testing.
- Findings with no objective evidence trail.
- Corrective actions “closed” without re-testing.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating internal audit as a calendar event. Avoid it: Define auditable units, test steps, and evidence standards. A scheduled meeting is not an audit.
- Mistake: Auditing policies instead of operations. Avoid it: Require at least one end-to-end trace per audit (example: from AI intake request to risk assessment to approval to monitoring evidence).
- Mistake: No independence in small teams. Avoid it: Use cross-functional auditors, rotate responsibilities, or supplement with trained internal auditors from another domain.
- Mistake: Findings are vague ("needs improvement"). Avoid it: Force every finding to include criterion, evidence, and impact in the write-up.
- Mistake: Corrective actions close on promises. Avoid it: Require closure artifacts and re-test notes as exit criteria.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Practically, weak internal audit increases the chance that AI-related control gaps persist unnoticed, which can cascade into privacy, security, consumer protection, or contractual failures depending on your AI use cases. Internal audit is also a certifiability issue: if you cannot demonstrate planned, executed audits with follow-through, you will struggle to show the AIMS is maintained 1.
A practical 30/60/90-day execution plan
First 30 days (stand up the minimum viable audit program)
- Set AIMS audit scope boundaries and confirm auditable units.
- Draft and approve Internal Audit Procedure (include independence and corrective action linkage).
- Publish an audit program calendar that matches your operating reality.
- Build core templates: audit plan, workpaper, audit report, corrective action record.
- Train internal auditors on the procedure and evidence expectations.
By 60 days (run real audits and produce defensible outputs)
- Execute at least one pilot audit on a high-change area (example: AI inventory + onboarding workflow).
- Produce a complete audit report with referenced evidence.
- Open corrective actions for all nonconformities, assign owners, and track progress.
- Calibrate severity and finding language so different auditors write consistently.
By 90 days (close the loop and institutionalize)
- Re-test closed corrective actions and document effectiveness.
- Report audit results and trends to management and feed them into management review inputs.
- Expand the audit universe and schedule targeted audits for other critical areas (third-party AI, monitoring, incident handling).
- Refine sampling guidance and acceptance criteria based on what you learned in the pilot.
Frequently Asked Questions
What counts as “planned intervals” for ISO/IEC 42001 internal audit?
The standard requires audits at planned intervals but does not specify a fixed cadence; you must define the schedule and show you follow it 1. Set intervals based on AIMS scope, change rate, and risk, then document the rationale.
Do internal audits need to cover third-party AI tools we use?
If those tools fall within your AIMS scope, your audits should test the controls you rely on to govern them (for example, onboarding, monitoring, and risk acceptance). The focus is your management system’s effectiveness, including how you manage dependencies.
Can the compliance team audit the AI program if they helped write the policy?
That’s a common independence problem. If the same individuals designed or operate the control, assign a different auditor or use a cross-functional auditor to avoid self-audit and preserve credibility.
What’s the minimum evidence an external auditor will expect?
Expect to show an audit program/schedule, an executed audit plan, workpapers with referenced evidence, a final report, and corrective action closure records with re-test notes 1. Missing workpapers or weak corrective action evidence is a frequent hangup.
How deep should testing go for an internal audit versus ongoing monitoring?
Internal audit is periodic assurance, so it should test design and operating effectiveness using sampling and traceability. Continuous monitoring can produce signals, but audit must still document the test steps and conclusions.
We’re small. Do we need a formal internal audit department?
No. You need a defined process, competent auditors, independence from the work being audited, documented results, and corrective action follow-through 1. Small teams often meet this through cross-functional audits and strong templates.
Footnotes
1. ISO/IEC 42001:2023 Artificial intelligence — Management system