AI policy
ISO/IEC 42001 Annex A, Control A.2.2 requires you to document an AI policy that governs how your organization develops and/or uses AI systems. To operationalize it fast, publish a board/exec-approved policy that sets scope, roles, risk controls, third-party rules, and compliance gates, then prove it is implemented through procedures, training, and governance records. 1
Key takeaways:
- Your “AI policy” must cover both development and use of AI systems, not just engineering activity. 1
- Auditors will test existence, approval, communication, and implementation evidence, not the elegance of the prose.
- The fastest path is a policy + mapped procedures + intake/approval workflow + artifact retention.
An “AI policy requirement” sounds simple, but ISO/IEC 42001 auditors and internal stakeholders will treat it as the anchor for your entire AI management system. Annex A, Control A.2.2 sets a clear expectation: your organization must document a policy that governs either the development of AI systems, the use of AI systems, or both. 1
For a Compliance Officer, CCO, or GRC lead, the practical challenge is translating a single sentence into an operational control that stands up to audit: defining what counts as “AI,” establishing decision rights, setting minimum risk controls, handling third-party AI, and ensuring employees know what they can and cannot do. The policy also needs to be implementable. A policy that bans “all AI” is rarely credible; a policy that allows “anything” is not defensible.
This page gives requirement-level implementation guidance: who needs the policy, what to put in it, how to roll it out with real governance, and what evidence to retain so you can pass an ISO 42001 audit (or an internal assessment) without turning the policy into a book-length manifesto.
Regulatory text
Requirement (verbatim excerpt): “The organization shall document a policy for the development or use of AI systems.” 1
What the operator must do: You must produce a documented, organization-level policy that governs how AI systems are developed and/or used. Auditors will expect the policy to be formally approved, communicated, kept current, and implemented through supporting procedures and records. 1
Plain-English interpretation
You need a written rulebook for AI in your business. It must:
- State what AI is in your environment (what is “in scope”).
- Set who can approve AI use and under what conditions.
- Define minimum requirements for risk, security, privacy, and oversight.
- Cover internal AI development and external AI use (including third-party tools) if either occurs in your organization. 1
A policy is not the same as a procedure. The policy sets guardrails and governance expectations. Procedures and workflows prove people follow them.
Who it applies to
Entity scope
This applies to any organization that develops AI systems, uses AI systems, or both, including AI providers and AI users. 1
Operational scope (where this shows up in practice)
You should treat the policy as applicable wherever AI can affect:
- Products and services (customer-facing AI, decisioning, recommendations, support chat).
- Internal operations (HR screening tools, productivity copilots, code assistants, fraud analytics).
- Data processing (model training, prompt inputs, RAG pipelines, fine-tuning, synthetic data generation).
- Third-party relationships (SaaS with embedded AI, outsourced model development, external data/model providers).
If your business units say “we don’t do AI,” your scoping exercise should still cover shadow AI (employees using public tools) and embedded AI (features inside third-party software).
What you actually need to do (step-by-step)
1) Set policy ownership and approval path
- Assign a single accountable owner (often GRC, Privacy, or Security) and define co-owners (Legal, Product, Engineering, Data).
- Decide approval authority (exec sponsor, risk committee, or board-level oversight depending on your governance model).
- Define the review trigger events: major incident, new AI use case category, regulatory change, or material changes to data processing.
Output: AI Policy document with owner, approver, effective date, and review cadence.
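If you track policy metadata in a GRC tool or script, a record like the following keeps the review cadence auditable. This is a minimal Python sketch; the field names and the annual default are assumptions, not ISO/IEC 42001 requirements.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class PolicyRecord:
    """Hypothetical metadata for the AI Policy document record."""
    owner: str                      # single accountable owner (e.g., GRC lead)
    approver: str                   # exec sponsor, risk committee, or board
    effective_date: date
    review_cadence_days: int = 365  # annual review is an assumed default
    review_triggers: list[str] = field(default_factory=lambda: [
        "major incident",
        "new AI use case category",
        "regulatory change",
        "material change to data processing",
    ])

    def next_scheduled_review(self) -> date:
        # Trigger events can still force an earlier review at any time.
        return self.effective_date + timedelta(days=self.review_cadence_days)
```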
2) Define scope and key terms so the policy is enforceable
Minimum definitions to include:
- “AI system” (use your internal definition consistently across procurement, product, and security reviews).
- “High-impact” or “material risk” AI use cases (even if you do not quantify risk, describe categories).
- “Third-party AI” and “embedded AI” (AI capabilities inside tools you buy).
Practical tip: If you cannot define “AI system” cleanly, define “AI use” operationally: model-based automation or generation that influences decisions, content, or actions.
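To make that operational definition testable, you can encode it as a simple scoping check. A minimal sketch, assuming the three criteria from the tip above; the parameter names are illustrative.

```python
def is_in_scope(model_based: bool,
                influences_decisions: bool,
                influences_content_or_actions: bool) -> bool:
    """Operational scoping check: model-based automation or generation
    that influences decisions, content, or actions counts as 'AI use'."""
    return model_based and (influences_decisions or influences_content_or_actions)
```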
3) Establish governance: roles, decision rights, escalation
Your policy should assign:
- First line: business owner of the AI use case (accountable for outcomes).
- Second line: independent review (risk/compliance/privacy/security).
- Third line: internal audit (periodic assurance).
Add a simple escalation standard: any use case that is high-risk, novel, or customer-impacting must go to a named committee or exec sponsor.
Evidence: RACI chart, committee charter, meeting minutes, decision logs.
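The escalation standard is easy to encode so intake tooling applies it consistently. A minimal sketch; the committee name and flag names are assumptions for your own governance model.

```python
def escalation_target(high_risk: bool, novel: bool, customer_impacting: bool) -> str:
    """Route high-risk, novel, or customer-impacting use cases upward."""
    if high_risk or novel or customer_impacting:
        return "AI Risk Committee"   # illustrative name for the committee/exec sponsor
    return "second-line review"      # risk/compliance/privacy/security
```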
4) Build a mandatory AI intake and approval workflow
A policy without an intake gate fails in real organizations. Require that any new AI use case (or material change) goes through an intake that captures:
- Purpose and users (internal vs external).
- Data types used (including sensitive data).
- Human oversight model (who reviews outputs and when).
- Testing expectations (quality, safety, security checks).
- Third-party involvement (models, platforms, data providers).
Keep the intake lightweight so teams comply. If it’s painful, they will route around it.
Evidence: Intake form, ticketing workflow, approvals, exception records.
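If you implement the intake in a ticketing system or a lightweight script, the required capture areas map directly to a record type. A minimal Python sketch assuming the five fields above; adapt the schema to your own tooling.

```python
from dataclasses import dataclass

@dataclass
class AIIntakeRecord:
    """One intake submission; fields mirror the capture areas above."""
    purpose: str
    users: str                # "internal" or "external"
    data_types: list[str]     # include sensitive-data categories
    human_oversight: str      # who reviews outputs and when
    testing_plan: str         # quality, safety, security checks
    third_parties: list[str]  # models, platforms, data providers;
                              # use ["none"] to record an explicit answer

def missing_fields(record: AIIntakeRecord) -> list[str]:
    """Flag empty fields so the gate can reject incomplete submissions."""
    return [name for name, value in vars(record).items() if not value]
```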
5) Set minimum control requirements the policy mandates
Your AI policy should require (at a minimum) that AI systems are not deployed/used until the organization addresses:
- Risk assessment: documented evaluation proportionate to the use case.
- Data governance: approved data sources, handling rules, retention, and access controls.
- Security controls: secure development practices and security review for AI components where relevant.
- Privacy and legal review: where personal data or regulated data is involved.
- Human oversight: defined reviewer roles, intervention rights, and monitoring triggers.
- Third-party controls: due diligence before using third-party AI services; contractual and security expectations.
Do not overfit to a single team’s process. Instead, reference existing enterprise processes (SDLC, procurement, privacy impact assessments) and state how AI must enter them.
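A deployment gate can enforce these minimums mechanically: go-live is blocked until each control area has a recorded sign-off, including a documented “not applicable” decision where a control is irrelevant. The control names below mirror the policy bullets; the data shape is an assumption.

```python
REQUIRED_CONTROLS = [
    "risk_assessment",
    "data_governance",
    "security_review",
    "privacy_legal_review",
    "human_oversight",
    "third_party_due_diligence",
]

def blocked_controls(signoffs: dict[str, bool]) -> list[str]:
    """Controls still missing sign-off; an empty list means clear to deploy.
    Record 'not applicable' decisions as sign-offs so they stay documented."""
    return [c for c in REQUIRED_CONTROLS if not signoffs.get(c, False)]
```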
Where Daydream fits naturally: If your AI systems depend on third parties (model providers, AI SaaS, data brokers), Daydream can help standardize third-party due diligence evidence collection and map it back to your AI policy requirements without chasing emails across teams.
6) Publish acceptable use rules for employees (shadow AI control)
Include clear “allowed / not allowed / needs approval” guidance, such as:
- Allowed: low-risk drafting with non-sensitive inputs.
- Not allowed: entering customer personal data into unapproved tools.
- Needs approval: deploying AI outputs into customer communications, decisioning, or production code.
Add a requirement to use only approved AI tools and approved configurations (enterprise accounts, logging, retention settings).
Evidence: Acceptable Use addendum, approved tools list, communications, attestations.
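The allowed / not allowed / needs-approval guidance translates naturally into a lookup your tooling or training materials can reuse. The categories below are illustrative; note the deliberate default of “needs approval” for anything unlisted.

```python
ACCEPTABLE_USE = {
    "low_risk_drafting_nonsensitive": "allowed",
    "customer_personal_data_in_unapproved_tool": "not allowed",
    "ai_output_in_customer_communications": "needs approval",
    "ai_output_in_decisioning": "needs approval",
    "ai_output_in_production_code": "needs approval",
}

def check_use(category: str) -> str:
    # Unlisted categories default to "needs approval" so new uses are
    # escalated rather than silently assumed safe.
    return ACCEPTABLE_USE.get(category, "needs approval")
```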
7) Operationalize with training and communications
- Train high-risk user groups first: engineering, product, HR, customer support, procurement.
- Include scenario-based examples aligned to your policy intake categories.
- Require attestations where appropriate (employees acknowledge the AI acceptable use rules).
Evidence: training content, completion records, acknowledgments.
8) Monitor compliance and manage exceptions
Your policy should define:
- How exceptions are requested, approved, time-limited, and tracked.
- What monitoring exists (periodic access reviews to AI tools, sampling of AI-assisted decisions where feasible, review of intake register completeness).
- Incident reporting triggers (harmful outputs, data leakage, model misuse, policy violations).
Evidence: exception register, monitoring reports, incident tickets, post-incident reviews.
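Time-limited exceptions are straightforward to track and report on if each entry carries an expiry date. A minimal sketch; field names are assumptions, and in practice the register lives in your GRC system or ticketing tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExceptionEntry:
    use_case: str
    approver: str
    granted: date
    expires: date    # exceptions must be time-limited

def expired_exceptions(register: list[ExceptionEntry],
                       today: date) -> list[ExceptionEntry]:
    """Surface lapsed exceptions for the periodic monitoring report."""
    return [e for e in register if e.expires < today]
```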
Required evidence and artifacts to retain
Auditors generally want to see a chain from policy to practice. Maintain:
- Approved AI Policy (version history, approval record, effective date). 1
- AI system/use-case inventory or register (even a basic list is better than none).
- AI intake records and approval decisions (including risk acceptance where applicable).
- Third-party due diligence packets for AI-related third parties (security/privacy questionnaires, contracts, SOC reports if available, DPIAs where applicable).
- Training and acknowledgment records for relevant staff.
- Exception register and time-bound approvals.
- Governance records: committee minutes, risk decisions, escalations.
- Incident and issue management records tied to AI events.
Common exam/audit questions and hangups
Expect questions like:
- “Show me the AI policy and who approved it.” 1
- “How do you ensure teams follow it for new AI use cases?”
- “How do you define ‘AI system’ and scope what is covered?”
- “How do you handle third-party AI tools and embedded AI in SaaS?”
- “Show evidence of training and communication.”
- “How are exceptions handled, and what happens when policy is violated?”
Hangups:
- A policy exists, but no intake workflow exists.
- The policy says “do a risk assessment,” but there is no template or record of any assessments.
- Shadow AI is acknowledged but unmanaged (no approved tools list, no enforcement).
Frequent implementation mistakes (and how to avoid them)
- Writing a policy that reads like principles only.
  Fix: Add “must” statements that point to real gates: intake, approvals, and minimum controls.
- Scoping only custom-built models.
  Fix: Explicitly include external AI services and embedded AI features in third-party tools.
- Banning AI broadly, then ignoring reality.
  Fix: Publish acceptable use with approved tools and a clear path to request approval for new tools.
- No exception mechanism.
  Fix: Add an exception process with documented approvals and time limits.
- No evidence strategy.
  Fix: Decide upfront what artifacts you will retain and where they live (GRC system, ticketing tool, document repository).
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement actions.
Operationally, weak AI policy controls tend to show up as:
- Unapproved third-party AI processing sensitive data.
- Inconsistent risk decisions across business units.
- Inability to prove oversight after an incident.
- Audit findings for “policy not implemented” even when a policy document exists.
A practical 30/60/90-day execution plan
First 30 days (foundation and scope)
- Name policy owner and approvers; set governance forum.
- Run a fast discovery: where AI is used, including third-party tools and embedded AI.
- Draft the AI policy outline and align it to existing enterprise processes (SDLC, procurement, privacy).
- Stand up an interim AI intake form and approval path (even if manual).
By 60 days (publish and implement)
- Finalize and approve the AI policy; publish it in your policy repository.
- Roll out acceptable use rules and an approved tools list.
- Train priority teams; collect attestations where appropriate.
- Start an AI use-case register fed from intake tickets.
By 90 days (evidence, monitoring, and audit readiness)
- Convert the interim intake into a durable workflow (ticketing + required fields + routing).
- Implement exception tracking and periodic review.
- Produce a first monitoring report: intake volume, exceptions, third-party AI reviews completed, open risks.
- Run an internal audit-style tabletop: pick a live AI use case and trace policy → intake → approvals → controls → monitoring evidence.
Frequently Asked Questions
Do we need an AI policy if we only “use” third-party AI tools and don’t build models?
Yes. The requirement covers “development or use of AI systems,” so third-party AI use still needs a documented policy. 1
What counts as “documented” for the AI policy?
A written policy with version control, formal approval, and accessibility to the workforce meets the baseline expectation. Auditors also expect evidence that the policy is implemented through workflows and records. 1
Can our AI policy be a section inside an existing information security or acceptable use policy?
It can, as long as it clearly governs AI development/use and is discoverable, approved, and enforceable. In practice, many teams still publish a standalone AI policy and cross-reference related policies for security, privacy, and procurement.
How do we control “shadow AI” without blocking productivity?
Publish clear acceptable use rules, provide approved enterprise AI tools, and require approval for higher-risk uses. Pair that with training and periodic checks (for example, procurement reviews and access controls) so the policy is real.
What evidence will an auditor ask for beyond the policy document?
Common requests include AI intake tickets, approval records, training completion, third-party due diligence artifacts, exception logs, and governance meeting minutes that show oversight decisions.
How do we handle AI features embedded in third-party SaaS tools?
Treat embedded AI as in scope for your policy. Require procurement and security/privacy reviews to identify AI features, understand data flows, and document approval before enabling them for sensitive workflows.
Footnotes
1. ISO/IEC 42001:2023, Artificial intelligence — Management system.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream