Competence

To meet the ISO/IEC 42001 competence requirement, you must define the skills and knowledge needed for anyone under your control whose work affects your AI Management System (AIMS), then prove those people are competent through training, education, or experience. Operationalize this by mapping roles to competence criteria, assessing gaps, closing them, and retaining auditable evidence. 1

Key takeaways:

  • Define competence by role, not by course completion, and tie it directly to AIMS-relevant tasks.
  • Run a repeatable competence assessment and remediation cycle (onboarding, role change, annual refresh).
  • Keep evidence that auditors can trace from AIMS risks and controls to named people and their qualification status.

Footnotes

  1. ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system

“Competence” in ISO/IEC 42001 is a staffing control with teeth: if people designing, deploying, monitoring, or governing AI aren’t qualified for their AIMS-impacting tasks, your policies and technical controls will fail in predictable ways. The standard does not prescribe certifications or training hours. It requires that you determine what competence is necessary and ensure the people doing the work under your control meet it through education, training, or experience. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat competence like an auditable access control: only qualified individuals should be able to perform specific AIMS-relevant activities (model development, data labeling, evaluation, red teaming, incident response, monitoring, approval gates). Then you document the rule (competence criteria), enforce the rule (assessment and gating), and prove enforcement (records and management review inputs).

This page gives requirement-level implementation guidance you can execute quickly: role-to-competence mapping, a practical assessment workflow, evidence packs, exam questions, and a phased rollout plan. It’s written for real operators who need something that survives both internal scrutiny and an external certification audit.

Regulatory text

Requirement (excerpt): “The organization shall determine the necessary competence of persons doing work under its control that affects the AIMS performance and effectiveness.” 1

Operator interpretation:
You must (1) identify which people and roles can materially affect AIMS performance and effectiveness, (2) define what “competent” means for those roles, and (3) ensure competence exists and stays current. This is broader than employees. It includes contractors and third parties performing work “under your control” (for example, staff augmentation, outsourced data labeling, external model evaluators, or managed service providers performing monitoring).

Plain-English interpretation (what auditors expect)

Auditors typically test competence as a traceability exercise:

  1. Start from AIMS scope and key processes. Who builds, tests, deploys, monitors, approves, and retires AI systems?
  2. Confirm competence criteria exist for those roles. Criteria must be specific enough to evaluate, not “must be trained.”
  3. Check how you validate competence. Training records help, but experience, supervised practice, work samples, and assessments often matter more.
  4. Verify you act on gaps. If someone is not competent, is there a remediation plan and interim controls (supervision, restricted responsibilities, approvals)?
  5. Confirm records are retained and controlled. You need evidence you can produce on demand. 1

Who it applies to

Entity scope

  • Any organization operating an AIMS, including AI providers and AI users. 1

Operational scope (the people “under your control”)

Include anyone whose work can affect AIMS performance/effectiveness, such as:

  • AI/ML engineers, data scientists, and applied researchers
  • Data engineering and data governance teams
  • Product owners for AI features and model-enabled decisioning
  • Model risk management, validation, QA, and red team functions
  • Security engineers supporting AI infrastructure and monitoring
  • Privacy, legal, compliance, and policy owners for AI controls
  • Incident response, on-call, and operations teams for AI services
  • Third parties: contractors, consultants, outsourced labeling teams, external evaluators, and managed service providers performing AIMS tasks

A practical boundary: if a person can change model behavior, data inputs, evaluation outcomes, deployment configuration, monitoring thresholds, or approvals, they affect AIMS effectiveness and belong in your competence program.

What you actually need to do (step-by-step)

Step 1: Build an “AIMS work map”

Create a simple inventory of AIMS-relevant activities across the lifecycle:

  • Design and requirements
  • Data sourcing, labeling, and quality controls
  • Model development and training
  • Evaluation, testing, and safety review
  • Deployment and change management
  • Monitoring, drift detection, and issue triage
  • Incident response and post-incident review
  • Decommissioning and documentation

Output: an AIMS Activities Register that lists each activity, the accountable role, and the teams involved.

Step 2: Identify competence-critical roles

From the work map, select roles where mistakes create high impact (for example, unsafe releases, biased outcomes, privacy violations, undetected drift, weak incident handling). Keep the list tight enough to operate, broad enough to cover real control points.

Output: a Competence Scope List (roles + rationale tied to AIMS activities).

Step 3: Define competence criteria per role (make it assessable)

For each scoped role, define criteria in three buckets:

  • Knowledge (what they must understand)
  • Skills (what they must be able to do)
  • Judgment/authority (what decisions they can make unsupervised)

Write criteria so you can validate them. Example structure:

Role | AIMS-impacting tasks | Competence criteria | How verified | Minimum evidence
---- | -------------------- | ------------------- | ------------ | ----------------
Model evaluator | Runs evaluation plan and interprets results | Can execute approved evaluation protocol; can explain limitations and failure modes | Practical test + peer review | Evaluation work sample + reviewer sign-off
Data label QA lead | Approves labeling guidelines and QA | Can design QA sampling plan; can detect systemic label error | Review of prior work + supervised trial | QA plan + trial results

Output: a Role Competence Matrix.
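Keeping the matrix as structured data (rather than a slide or spreadsheet) makes it diff-able, reviewable, and scriptable. A minimal Python sketch, where the field names and the completeness check are illustrative choices, not anything the standard prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class CompetenceCriterion:
    """One assessable criterion for a scoped role (field names are illustrative)."""
    description: str   # what the person must know or be able to do
    verification: str  # how competence is validated (test, work sample, peer review)
    min_evidence: str  # artifact retained as proof

@dataclass
class RoleCompetence:
    role: str
    aims_tasks: list[str]
    criteria: list[CompetenceCriterion] = field(default_factory=list)

# Example row mirroring the table above
matrix = [
    RoleCompetence(
        role="Model evaluator",
        aims_tasks=["Runs evaluation plan and interprets results"],
        criteria=[CompetenceCriterion(
            description="Can execute approved evaluation protocol; can explain "
                        "limitations and failure modes",
            verification="Practical test + peer review",
            min_evidence="Evaluation work sample + reviewer sign-off",
        )],
    ),
]

def incomplete_roles(rows: list[RoleCompetence]) -> list[str]:
    """Flag rows that are not assessable: no criteria, or a criterion
    with no stated verification method."""
    return [r.role for r in rows
            if not r.criteria or any(not c.verification for c in r.criteria)]
```

A check like `incomplete_roles` can run in CI over the matrix file, so a role never enters scope with "must be trained" as its only, unverifiable criterion.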

Step 4: Assess current competence (baseline)

Pick an assessment method that matches the risk of the role:

  • Low/medium risk: manager attestation + training completion + work samples
  • Higher risk: structured interview + practical exercise + documented sign-off

Avoid “checkbox training only” for roles that require judgment (for example, approving deployment, interpreting safety results, or deciding incident severity).

Output: Competence Assessments for each in-scope individual, including pass/gap determination.

Step 5: Close gaps with targeted actions and interim controls

For each gap, record:

  • Remediation action (training, mentoring, supervised practice, role restriction, external hire)
  • Deadline and owner
  • Interim risk control (additional review, dual approval, restricted production access)

Output: a Competence Remediation Log tied to individuals and roles.

Step 6: Gate sensitive activities to competent personnel

This is where competence becomes operational:

  • Require named competent approvers for release gates (evaluation approval, deployment approval, monitoring threshold changes).
  • Tie competence status to access/permissions where possible.
  • For third parties, bake competence requirements into SOWs and onboarding.

If you use Daydream for third-party risk workflows, treat competence as a vendor/third-party onboarding requirement: collect role qualifications, validate who performs AIMS tasks, and track renewals alongside other due diligence artifacts.

Output: Workflow gates (in ticketing/SDLC) and contractual controls (for third parties).
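The gating rule in Step 6 can be sketched as a simple check inside a release workflow. This assumes a competence register keyed by person and gated activity; all names, identifiers, and the register shape are hypothetical:

```python
# Hypothetical competence register: person -> activities they are qualified
# to approve. Status should be derived from assessment records, not hand-edited.
register = {
    "a.ng": {"evaluation_approval", "deployment_approval"},
    "j.doe": {"evaluation_approval"},
}

def can_approve(person: str, activity: str, register=register) -> bool:
    """True only if the named person holds current competence for this gate."""
    return activity in register.get(person, set())

def approve_release(approver: str, activity: str = "deployment_approval") -> str:
    """Block the gate unless a competent approver is named."""
    if not can_approve(approver, activity):
        raise PermissionError(
            f"{approver} is not a qualified approver for {activity}; "
            "route to a competent approver or apply interim controls")
    return f"{activity} approved by {approver}"
```

In practice the same rule is usually enforced through existing mechanisms (a protected reviewer group in the ticketing system, a CODEOWNERS-style approver list, or an IAM role) rather than custom code; the point is that the gate reads competence status from assessment records.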

Step 7: Keep competence current

Add triggers:

  • New hire onboarding into in-scope roles
  • Role change or promotion into approval authority
  • Major AIMS process change (new evaluation standard, new monitoring approach)
  • Material incident or near miss (post-incident skill correction)

Output: Competence Review Procedure embedded in HR onboarding and change management.

Required evidence and artifacts to retain

Keep artifacts in a form that supports traceability from AIMS process → role → person → proof:

  1. AIMS Activities Register (what work affects AIMS effectiveness)
  2. Role Competence Matrix (criteria + verification method)
  3. Competence Assessments 1
  4. Training records (internal or external), where relevant
  5. Work samples / practical evaluations (sanitized if needed)
  6. Approval gate assignments (who can approve what, based on competence)
  7. Remediation Log (gaps, actions, closure evidence)
  8. Third-party competence evidence (SOW language, resumes/qualifications, attestations, named personnel lists)
  9. Management review inputs summarizing competence health for AIMS (high-level, not HR-sensitive detail) 1
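Before an audit, the traceability chain above can be walked programmatically: every AIMS activity should resolve to a role, at least one named person, and at least one piece of evidence. A hypothetical sketch over illustrative records:

```python
# Hypothetical records linking AIMS process -> role -> person -> proof.
activities = {"Evaluation": "Model evaluator"}                 # activity -> role
assignments = {"Model evaluator": ["j.doe"]}                   # role -> people
evidence = {"j.doe": ["eval-work-sample-2024-03",              # person -> proof
                      "reviewer-signoff-2024-03"]}

def trace_gaps(activities: dict, assignments: dict, evidence: dict) -> list[str]:
    """Return activities whose chain to evidence is broken at any link."""
    gaps = []
    for activity, role in activities.items():
        people = assignments.get(role, [])
        if not people or any(not evidence.get(p) for p in people):
            gaps.append(activity)
    return gaps
```

An empty result means every scoped activity can be traced end to end; any activity it returns is exactly the kind of dead end an auditor will find first.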

Common exam/audit questions and hangups

Expect these lines of questioning:

  • “Show me how you determined competence for people whose work affects AIMS effectiveness.” 1
  • “Which roles are in scope, and why?”
  • “How do you verify competence beyond training attendance?”
  • “How do you handle contractors and third parties doing AIMS work under your control?”
  • “What happens when someone is not competent for an assigned task?”
  • “How do you ensure competence stays current after process changes or incidents?”

Typical hangup: teams can describe training but cannot explain how training maps to specific AIMS tasks and approval authority.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Treating competence as an HR training catalog.
    Fix: Start from AIMS activities and control points, then define competence by task.

  2. Mistake: No assessment, only completion certificates.
    Fix: Add role-appropriate verification (work sample review, practical test, peer sign-off).

  3. Mistake: Ignoring third parties.
    Fix: For any outsourced AIMS task, require named personnel, qualification evidence, and a right to replace unqualified staff.

  4. Mistake: No gating.
    Fix: Tie competence status to approval gates and, where feasible, system permissions.

  5. Mistake: Stale competence.
    Fix: Add triggers tied to onboarding, role changes, AIMS change management, and incidents.

Enforcement context and risk implications

ISO/IEC 42001 is a certifiable management system standard rather than a regulation, so there is no public enforcement record specific to the competence requirement. Practically, competence failures show up as control failures: incorrect evaluation results accepted, unsafe changes pushed without understanding, monitoring signals missed, or incidents mishandled. Certification auditors will often treat these as systemic because they undermine AIMS effectiveness. 1

Practical execution plan (phased)

Immediate phase

  • Define AIMS scope boundaries and list AIMS-relevant activities.
  • Identify competence-critical roles and owners.
  • Draft the Role Competence Matrix for the highest-impact roles first.
  • Decide acceptable evidence types and where records will live.

Near-term phase

  • Run baseline assessments for in-scope individuals.
  • Open remediation items and set interim controls for any high-risk gaps.
  • Add competence checks into onboarding and role-change workflows.
  • Update third-party SOW templates to require competence evidence and named personnel.

Ongoing phase

  • Embed competence gating into SDLC and operational change control.
  • Review competence status after material incidents and major process changes.
  • Report competence program health in AIMS management review inputs. 1

Frequently Asked Questions

Does ISO/IEC 42001 require specific AI certifications?

No specific certifications are mandated. You must determine what competence is necessary for AIMS-impacting work and prove people meet it through education, training, or experience. 1

Who counts as “persons doing work under its control”?

Employees, contractors, and third parties can all be under your control if you direct their work or rely on their outputs in your AIMS processes. Treat outsourced labeling, evaluation, and monitoring as in scope when they affect AIMS effectiveness. 1

Is training completion enough evidence?

Often no. For roles involving judgment or approvals, auditors expect additional verification such as work samples, practical assessments, or documented sign-offs by a competent reviewer. 1

How do we handle competence for fast-changing AI tools and methods?

Define competence around tasks and controls (evaluation, monitoring, incident handling), then refresh criteria when your AIMS processes change. Use post-incident reviews and change management as triggers to reassess competence. 1

How should we document competence without creating sensitive HR records?

Keep the evidence focused on role requirements and verification outcomes (competent/gap/remediation) rather than personal data. Store detailed HR materials in HR systems and reference them from the AIMS competence record when needed. 1

What’s the most audit-friendly way to show competence quickly?

Provide a Role Competence Matrix, a list of in-scope personnel mapped to roles, and a sample set of assessment records that show gap remediation and gating for high-impact approvals. This demonstrates determination, verification, and control in one trace. 1

