AI policy

ISO/IEC 42001 Clause 5.2 requires top management to establish an AI policy that fits your organization’s purpose, sets the framework for AI objectives, and commits to responsible AI, legal compliance, and continual improvement of the AI management system 1. To operationalize it quickly, you need an approved policy, clear scope, mapped obligations, assigned ownership, and evidence that the policy drives objectives and governance.

Key takeaways:

  • The AI policy is a top-management control: approval, direction, and accountability must be explicit.
  • Your policy must do three jobs: set a framework for AI objectives, commit to responsible AI and legal requirements, and commit to continual improvement 1.
  • Auditors will look for proof the policy is implemented, not just published.

An “AI policy” under ISO/IEC 42001 is not a generic ethics statement. Clause 5.2 ties it directly to management-system operation: top management must establish a policy that matches the organization’s purpose, drives AI objectives, commits the organization to responsible AI and applicable legal requirements, and commits to continual improvement of the AI management system 1.

For a CCO, GRC lead, or compliance operator, the fastest path is to treat the AI policy as a governance anchor: it sets non-negotiable commitments, defines the scope of AI covered, assigns ownership, and creates the “why and how” that your procedures, risk assessments, training, and monitoring must align to.

This page translates the requirement into operator steps you can execute: what the policy must contain, how to route it through top management, how to connect it to objectives and controls, and what evidence you must retain to satisfy an ISO-style audit. It also highlights common failure modes, such as publishing a policy that never influences decision-making, procurement, model risk review, or incident management.

Clause text

Clause requirement (condensed excerpt): “Top management shall establish an AI policy that is appropriate to the purpose of the organization, provides a framework for setting AI objectives, includes commitments to responsible AI, applicable legal requirements, and continual improvement of the AI management system.” 1

What the operator must do (in one line)

You must produce a top-management-approved AI policy that (1) fits your organization’s mission and context, (2) guides AI objectives, and (3) makes explicit commitments to responsible AI, legal compliance, and continual improvement of the AI management system 1.

Plain-English interpretation (what auditors expect)

Auditors generally treat this requirement as “tone from the top” plus operational linkage. They will test whether:

  • Top management established the policy. Not just “reviewed by Legal,” but approved and owned at the executive level.
  • The policy is tailored. It should reflect your business model and how you actually develop, buy, deploy, or rely on AI.
  • The policy drives objectives. Your AI objectives should clearly trace back to the policy’s commitments.
  • The policy is enforceable. Teams can tell what’s allowed, what’s prohibited, and who decides when there’s a conflict.
  • Commitments are explicit. Responsible AI, compliance with applicable legal requirements, and continual improvement must appear as commitments, not implied values 1.

Who it applies to (entity and operational context)

Clause 5.2 applies to any organization implementing an AI management system under ISO/IEC 42001, including:

  • AI providers (building models, model-enabled products, or AI services)
  • AI users (deploying third-party AI tools or embedding AI into internal workflows)
  • Organizations with AI materially affecting operations, customers, regulated outcomes, or risk posture

Operationally, it applies wherever AI is present in:

  • Product development and customer-facing features
  • Underwriting, credit, fraud, pricing, claims, or eligibility decisions
  • HR/recruiting tooling
  • Security operations (detection/response automation)
  • Marketing personalization and analytics
  • Third-party software with AI features enabled by default

What you actually need to do (step-by-step)

Treat this as a controlled document with executive ownership and downstream implementation hooks.

Step 1: Define policy scope in business terms

Write a scope paragraph that answers:

  • What you mean by “AI” for your organization (include AI-enabled tools, not only in-house models).
  • Where it applies: products, internal use, and third-party AI.
  • Any explicit exclusions (and who approves exceptions).

Operator tip: If your scope is vague, every later control becomes disputable.
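
If you manage scope as a controlled record (in a GRC tool or a repository), keeping it structured makes the boundaries testable. A minimal Python sketch; every field name and value here is an illustrative assumption, not something the standard prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AIPolicyScope:
    """Illustrative scope record for an AI policy; all fields are assumptions."""
    ai_definition: str                  # what "AI" means for this organization
    in_scope: list[str] = field(default_factory=list)    # where the policy applies
    exclusions: list[str] = field(default_factory=list)  # explicit carve-outs
    exception_approver: str = ""        # who approves deviations from scope

scope = AIPolicyScope(
    ai_definition="In-house models plus AI-enabled third-party tools and features",
    in_scope=["customer-facing products", "internal workflows", "third-party AI services"],
    exclusions=["deterministic rule-based automation with no learned components"],
    exception_approver="AI Governance Committee",
)
```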

Step 2: Draft the mandatory commitments (use “shall” language sparingly, but be clear)

Your policy must include commitments to:

  • Responsible AI (what “responsible” means in your context: governance, oversight, human review where needed, safety, accountability)
  • Applicable legal requirements (commitment to identify and comply with relevant legal obligations before and during AI use)
  • Continual improvement of the AI management system (commitment to monitor performance, findings, incidents, and improve controls)

All three are required elements in the clause 1.
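
Because these commitments are pass/fail audit items, some teams add a naive completeness check to their policy review workflow. A hedged sketch in Python; the phrase lists are illustrative, and no keyword scan replaces human review:

```python
# Illustrative phrase lists; tune these to your policy's actual wording.
REQUIRED_COMMITMENTS = {
    "responsible_ai": ["responsible ai"],
    "legal_requirements": ["applicable legal requirements", "legal obligations"],
    "continual_improvement": ["continual improvement"],
}

def missing_commitments(policy_text: str) -> list[str]:
    """Return the mandatory commitment topics that never appear in the text."""
    text = policy_text.lower()
    return [
        topic for topic, phrases in REQUIRED_COMMITMENTS.items()
        if not any(phrase in text for phrase in phrases)
    ]

draft = "We commit to responsible AI and continual improvement of the AIMS."
print(missing_commitments(draft))  # -> ['legal_requirements']
```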

Step 3: Provide a framework for setting AI objectives

Translate the policy into objective categories your teams can measure and manage. Examples of objective “buckets” (customize to your environment):

  • Governance and accountability (ownership, approvals, oversight cadence)
  • Risk management and controls (risk assessments, testing gates, monitoring)
  • Third-party AI management (due diligence, contract requirements, ongoing oversight)
  • Incident readiness (issue intake, escalation, remediation expectations)
  • Training and awareness (role-based enablement)

Your policy doesn’t need to list all objectives, but it must clearly define how objectives are set and aligned to the policy 1.
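
A policy-to-objectives crosswalk can be a plain mapping kept under version control. The sketch below uses invented objective statements; substitute your own measurable targets:

```python
# Illustrative crosswalk: policy commitment -> AI objectives that implement it.
CROSSWALK = {
    "responsible_ai": [
        "All high-risk AI use cases pass human-oversight review before launch",
    ],
    "legal_requirements": [
        "Legal review is completed for every new AI use case at intake",
    ],
    "continual_improvement": [
        "Every AI incident or audit finding gets a corrective action within 30 days",
    ],
}

def untraced_objectives(objectives: set[str]) -> set[str]:
    """Flag objectives that do not trace back to any policy commitment."""
    mapped = {obj for objs in CROSSWALK.values() for obj in objs}
    return objectives - mapped
```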

Step 4: Align policy language to your organization’s purpose

Add 3–6 sentences that connect AI use to business purpose and risk tolerance. For example:

  • “We use AI to improve customer outcomes while maintaining safety, accountability, and compliance obligations.”
  • “We do not deploy AI in ways that materially harm customers or bypass required review controls.”

This is where auditors assess “appropriate to the purpose of the organization” 1.

Step 5: Assign governance ownership and decision rights

At minimum, name:

  • Policy owner (often CCO, CIO, CRO, or Head of Risk)
  • Executive sponsor (a member of top management)
  • The committee or forum that resolves AI-risk decisions and exceptions
  • Required functions in review (Legal/Compliance, Security, Privacy, Product, Data/ML)

Keep it crisp: who approves new AI use cases, who can block, and who accepts residual risk.
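
Writing decision rights as an explicit structure makes them easy to audit and hard to dispute. A minimal sketch; the roles and decision names are placeholders for your own governance model:

```python
# Illustrative decision-rights register: decision -> (final approver, veto holders).
DECISION_RIGHTS = {
    "approve_new_ai_use_case": ("AI Governance Committee", ["Legal/Compliance", "Security"]),
    "accept_residual_risk": ("Executive Sponsor", []),
    "grant_policy_exception": ("Policy Owner", ["AI Governance Committee"]),
}

def who_decides(decision: str) -> str:
    """Render the approver and any veto holders for a given decision."""
    approver, vetoes = DECISION_RIGHTS[decision]
    veto_note = f" (veto: {', '.join(vetoes)})" if vetoes else ""
    return f"{approver}{veto_note}"

print(who_decides("approve_new_ai_use_case"))
# -> AI Governance Committee (veto: Legal/Compliance, Security)
```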

Step 6: Operationalize via procedures, not more policy

Create or update “binding” downstream artifacts that implement policy commitments, such as:

  • AI intake and use-case approval workflow
  • AI risk assessment template and minimum control gates
  • Third-party due diligence requirements for AI tooling
  • Testing and monitoring requirements (pre-deployment and post-deployment)
  • Incident and issue management process for AI failures
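
The intake workflow is where these artifacts become enforceable. One way to keep gates honest is a simple stage machine that refuses to skip steps; a sketch under assumed stage names:

```python
from enum import Enum

class Stage(Enum):
    INTAKE = "intake"
    RISK_ASSESSMENT = "risk_assessment"
    APPROVAL = "approval"
    MONITORING = "monitoring"

# Each stage may only advance to the next one; gates cannot be skipped.
NEXT_STAGE = {
    Stage.INTAKE: Stage.RISK_ASSESSMENT,
    Stage.RISK_ASSESSMENT: Stage.APPROVAL,
    Stage.APPROVAL: Stage.MONITORING,
}

def advance(current: Stage, gate_passed: bool) -> Stage:
    """Move a use case forward only when its gate evidence is complete."""
    if not gate_passed:
        return current  # stay put until the required evidence exists
    return NEXT_STAGE.get(current, current)  # MONITORING is terminal

stage = advance(Stage.INTAKE, gate_passed=True)  # -> Stage.RISK_ASSESSMENT
```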

Where Daydream fits: If you need to operationalize quickly across many third parties, Daydream can centralize AI third-party due diligence questionnaires, contract obligation tracking, and evidence collection so your AI policy commitments show up in procurement and ongoing monitoring workflows.

Step 7: Get top-management approval and communicate

Route for approval in a way you can prove (minutes, signatures, governance portal record). Then publish and communicate to relevant roles (engineering, product, procurement, legal, customer ops). Your communication method matters less than your evidence that it happened and that staff can find the policy.

Step 8: Build the “continual improvement” feedback loop

Define how you will update the AI management system based on:

  • Internal audit findings
  • Incidents and near misses
  • Model/tool performance monitoring issues
  • Regulatory change intake (where applicable)

You don’t need a long narrative. You need a repeatable mechanism consistent with the policy’s commitment 1.
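
If your issue tracking is structured, that mechanism can be as small as a trigger rule converting defined events into review deadlines. A sketch with an assumed 30-day SLA; both the trigger set and the deadline are illustrative:

```python
from datetime import date, timedelta

# Illustrative triggers: any of these events forces an AIMS review entry.
REVIEW_TRIGGERS = {"internal_audit_finding", "incident", "monitoring_issue", "regulatory_change"}

def next_review_due(event_type: str, event_date: date) -> date | None:
    """Return a review deadline if the event type is a defined trigger."""
    if event_type not in REVIEW_TRIGGERS:
        return None
    return event_date + timedelta(days=30)  # assumed SLA; tune to your program

print(next_review_due("incident", date(2024, 5, 1)))  # -> 2024-05-31
```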

Required evidence and artifacts to retain

Auditors commonly request a tight evidence pack. Keep these items ready:

  • AI Policy (version-controlled, current, and accessible)
  • Approval evidence: top management sign-off, meeting minutes, or governance system approval record
  • Policy-to-objectives mapping: a one-page crosswalk showing how AI objectives align to policy commitments
  • Communication evidence: training snippet, email notice, intranet post, onboarding link, acknowledgment records where used
  • Operational procedures that implement the policy (intake, risk assessment, third-party due diligence, monitoring, incident handling)
  • Exception register: requests, approvals/denials, rationale, compensating controls, and review status (see the structured sketch after this list)
  • Continual improvement artifacts: corrective actions, lessons learned, management review inputs/outputs relevant to AI
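
The exception register flagged above is the artifact auditors sample most often. A minimal sketch of one entry as structured data; every field name here is an illustrative assumption:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ExceptionEntry:
    """Illustrative AI policy exception record; fields are assumptions."""
    request: str
    requested_by: str
    decision: str                       # "approved" or "denied"
    rationale: str
    compensating_controls: list[str] = field(default_factory=list)
    review_date: date | None = None     # when the exception is re-examined

entry = ExceptionEntry(
    request="Pilot a vendor chatbot for 60 days ahead of full due diligence",
    requested_by="Product",
    decision="approved",
    rationale="Internal pilot only; no customer data processed",
    compensating_controls=["No production data", "Weekly usage review"],
    review_date=date(2025, 9, 1),
)
```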

Common exam/audit questions and hangups

Expect questions like:

  • “Show me where top management established the AI policy.” (They will reject “Compliance drafted it” without exec establishment.)
  • “How is this policy appropriate to your purpose?” (They’ll test tailoring and relevance to actual AI use.)
  • “Where does the policy require compliance with applicable legal requirements?” (They want explicit commitment.)
  • “Show how AI objectives are set from the policy.” (They want linkage, not parallel documents.)
  • “How do you know the policy is followed in procurement and third-party onboarding?” (They’ll sample third-party AI tools.)
  • “How do you improve the AI management system?” (They want proof of feedback loops, not aspirational statements.)

Frequent implementation mistakes (and how to avoid them)

  1. Publishing an “AI ethics” poster instead of an operable policy.
    Fix: include scope, commitments, and decision rights tied to your management system 1.

  2. No top-management establishment evidence.
    Fix: capture approval records and meeting minutes that show top management action.

  3. Policy doesn’t cover third-party AI.
    Fix: explicitly apply the policy to AI acquired as software, platform features, and outsourced services. Connect it to third-party due diligence.

  4. “Comply with laws” with no mechanism.
    Fix: tie the policy to a legal-requirements intake process for AI use cases and vendors, with required review points.

  5. No continual-improvement hook.
    Fix: require periodic review triggered by incidents, audit findings, or meaningful changes to AI systems 1.

Enforcement context and risk implications

ISO/IEC 42001 is a management-system standard, not a regulator. Your risk is not “fines for violating Clause 5.2,” but downstream exposure:

  • Failed certifications or surveillance audits
  • Weak governance that leads to legal, customer, or contractual issues when AI causes harm or noncompliance
  • Third-party AI failures that you cannot evidence you governed or monitored

Treat the AI policy as a litigation-readiness and customer-trust artifact: it becomes discoverable in disputes and is often requested in enterprise security/compliance reviews.

Practical execution plan (30/60/90)

Exact day counts are organization-dependent; treat this as a phased execution plan.

Phase 1 (immediate): Establish and approve the policy

  • Inventory AI usage (internal, product, and third-party).
  • Draft policy with mandatory commitments and scope 1.
  • Identify owner, exec sponsor, and governance forum.
  • Secure top-management approval and store evidence.

Phase 2 (near-term): Connect policy to objectives and workflows

  • Define AI objectives categories and document the linkage to policy.
  • Implement intake/approval for new AI use cases.
  • Add third-party AI due diligence requirements to procurement.
  • Publish internally with clear “where to ask” channels.

Phase 3 (ongoing): Prove implementation and improve

  • Start an exception register and track resolutions.
  • Test one or two use cases end-to-end (intake → risk review → approval → monitoring).
  • Collect evidence for audit: approvals, communications, assessments, remediation actions.
  • Run management review inputs for AI and document improvement actions 1.

Frequently Asked Questions

Does ISO/IEC 42001 require a separate “Responsible AI” policy?

Clause 5.2 requires an AI policy that includes a commitment to responsible AI 1. You can meet the requirement with one integrated AI policy if it is explicit and operational.

Who counts as “top management” for approval?

Use the definition your organization already applies for management-system governance. In practice, auditors expect an executive-level approver with authority to set direction and allocate resources, plus evidence of that approval.

We only buy AI tools; we don’t build models. Does this still apply?

Yes, if you are implementing an AI management system, the AI policy must govern AI use in your organization, including third-party AI 1. Your procedures should emphasize procurement, due diligence, and monitoring rather than model development controls.

What’s the minimum content that must be in the AI policy?

The policy must be appropriate to your purpose, provide a framework for AI objectives, and include commitments to responsible AI, applicable legal requirements, and continual improvement 1. If any of those elements are missing, expect an audit finding.

How do we prove “continual improvement” without overbuilding a program?

Document a repeatable mechanism: issue intake, corrective actions, and management review outputs tied to AI governance. Keep evidence that you acted on findings or incidents and updated controls or guidance accordingly 1.

What evidence is most commonly missing in audits?

Approval evidence from top management and a clear mapping between the policy and AI objectives are frequent gaps. Auditors also look for proof the policy changed behavior in procurement, onboarding, and exception handling.

Footnotes

  1. ISO/IEC 42001:2023 Information technology — Artificial intelligence — Management system



Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream