Resource documentation
To meet ISO/IEC 42001 Annex A Control A.4.2, you must maintain a living record of every material resource your AI systems require—data, compute, tools, and people—and keep it accurate enough that an auditor (or an internal reviewer) can verify the system can be operated, monitored, and changed under controlled conditions. Treat it as an operational dependency register tied to each AI system, not a one-time document.
Key takeaways:
- Document resources by AI system: datasets, infrastructure, software/tooling, third-party services, and human roles.
- Tie each resource to ownership, access controls, change management, and continuity plans.
- Store evidence that your documentation is used in practice (approvals, tickets, inventories, vendor records).
“Resource documentation” sounds basic until you try to operate it across multiple AI systems, environments, and third parties. Annex A Control A.4.2 is short, but it drives a lot of downstream audit work: if you cannot name the resources an AI system depends on, you cannot convincingly manage change, security, reliability, supplier risk, or even safe decommissioning.
For a Compliance Officer, CCO, or GRC lead, the fastest way to operationalize this requirement is to build a single, reviewable “AI System Resource Register” and make it the dependency source of truth. It should map each AI system to: (1) data resources (training, fine-tuning, evaluation, inference inputs), (2) computational resources (on-prem, cloud accounts, GPUs/CPUs, storage, networking), (3) tooling (ML platforms, MLOps pipelines, model registries, monitoring), (4) third-party services (APIs, hosted models, annotators), and (5) human resources (roles, skills, staffing, and who is on-call).
If you already maintain CMDB, data catalogs, IAM, and third-party inventories, you are not starting from zero. The work is to connect them to each AI system, assign accountable owners, and prove the register is kept current through change management.
Regulatory text
Requirement (excerpt): “The organization shall identify and document the resources required for AI systems.” 1
What the operator must do: Maintain documented information that identifies the resources each AI system needs to be developed, deployed, operated, monitored, and maintained. The documentation must be specific enough to support governance and repeatable operations, and it must be kept current as resources change. 1
Plain-English interpretation (what “resource documentation requirement” means in practice)
You need a structured inventory of AI system dependencies. If the system needs a dataset, a cloud project, a feature store, an annotation team, an API subscription, or a specific engineer role to operate safely, it belongs in the record.
A good test: if losing the resource would stop the system from working, degrade outputs, or increase risk, it is “required” and must be documented. Another test: if a change to the resource could change model behavior (for example, training data refresh, inference prompt template updates, model version change), it must be documented with enough detail to manage the change under your control framework. 1
Who it applies to (entities and operational context)
This control applies to:
- AI providers building or offering AI systems.
- AI users deploying AI systems internally or embedding AI services into products.
- Any organization operating AI systems, including those heavily reliant on third parties for models, data, platforms, or staffing. 1
Operationally, it applies across the AI lifecycle:
- Build: training pipelines, evaluation harnesses, labeling operations, model registry.
- Buy/consume: hosted model APIs, managed ML platforms, third-party datasets.
- Run: inference infrastructure, prompt/config management, monitoring, incident response.
- Change: model upgrades, dataset updates, vendor changes, access changes.
- Retire: decommissioning resources, revoking access, data retention and deletion steps.
What you actually need to do (step-by-step)
1) Define the “AI system” boundary you will document
Pick a consistent unit: “customer support assistant,” “fraud model v3,” “resume screening model,” etc. Record what is in scope (model(s), prompts/config, pipelines, interfaces) and what is out of scope (general corporate IT not uniquely required).
Output: AI system list with unique identifiers and owners.
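The inventory can start as structured records with a stable identifier and a named owner. A minimal sketch in Python; the class and field names are illustrative, not prescribed by the standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AISystem:
    """One in-scope AI system: the unit everything else is documented against."""
    system_id: str       # unique, stable identifier
    name: str            # e.g. "customer support assistant"
    owner: str           # named accountable person or role, never just "AI team"
    in_scope: tuple      # model(s), prompts/config, pipelines, interfaces
    out_of_scope: tuple  # general corporate IT not uniquely required

inventory = [
    AISystem(
        system_id="ai-001",
        name="customer support assistant",
        owner="jane.doe@example.com",
        in_scope=("support-model-v3", "prompt repo", "escalation pipeline"),
        out_of_scope=("corporate email", "shared file storage"),
    ),
]

# Every system must carry a named owner, not a blank or placeholder.
assert all(s.owner for s in inventory)
```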
2) Create a standard resource taxonomy (keep it audit-friendly)
Use categories that map to real controls and artifacts:
| Category | What to capture | Examples |
|---|---|---|
| Data resources | source, purpose, sensitivity, location, refresh cadence, access path | training dataset, labeled data vendor feed, inference inputs, feature store tables |
| Compute/infrastructure | environment, accounts/subscriptions, regions, runtime, scaling dependencies | cloud project IDs, GPU cluster, Kubernetes namespace, storage bucket |
| Tooling/software | SDLC/MLOps tools, monitoring, model registry, CI/CD | MLflow, feature store, evaluation suite, prompt management repo |
| Third-party services | provider, contract/SOW link, SLA expectations, dependency criticality | hosted LLM API, labeling firm, vector DB SaaS |
| Human resources | accountable owner, required roles/skills, on-call coverage | model owner, data steward, SRE, security approver |
Output: Resource documentation standard and template.
3) Build the AI System Resource Register
For each AI system, document:
- Resource name and type (from the taxonomy)
- Owner (person/team)
- Where it lives (system of record, account, repository, region/environment)
- Access controls (IAM groups, approval path, privileged access)
- Change triggers (what changes require review: dataset refresh, model version bump, vendor change)
- Operational dependency (what breaks if this resource is unavailable)
- Related controls (link to risk assessment, third-party review, monitoring, incident runbooks)
Keep the register in a system that supports approvals and version history (GRC tool, controlled wiki, ticketed CMDB extension). If you use Daydream, implement this as a structured register with required fields, ownership routing, and change attestations so updates do not depend on someone remembering to edit a document.
Output: Published register for all in-scope AI systems.
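As a data structure, each register row carries the fields above plus enough validation to keep entries complete before they are published. A hedged sketch under the taxonomy from step 2 (field names are illustrative):

```python
from dataclasses import dataclass, field

# The taxonomy from step 2, as controlled values.
CATEGORIES = {"data", "compute", "tooling", "third_party", "human"}

@dataclass
class ResourceEntry:
    system_id: str    # which AI system this resource belongs to
    name: str
    category: str     # one of CATEGORIES
    owner: str        # accountable person or team
    location: str     # system of record, account, repo, region/environment
    access_path: str  # IAM group / approval route
    change_triggers: list = field(default_factory=list)
    dependency_note: str = ""          # what breaks if this is unavailable
    related_controls: list = field(default_factory=list)

    def validate(self) -> list:
        """Return completeness problems; an empty list means the row is acceptable."""
        problems = []
        if self.category not in CATEGORIES:
            problems.append(f"unknown category: {self.category}")
        for fld in ("system_id", "name", "owner", "location", "access_path"):
            if not getattr(self, fld):
                problems.append(f"missing required field: {fld}")
        return problems

entry = ResourceEntry(
    system_id="ai-001",
    name="support-tickets-training-set",
    category="data",
    owner="data-steward@example.com",
    location="warehouse: analytics.support_tickets_v2",
    access_path="IAM group ml-data-readers, approval via DATA-ACCESS ticket",
    change_triggers=["quarterly refresh", "schema change"],
)
assert entry.validate() == []
```

A GRC tool or controlled wiki would enforce the same required fields through its form schema; the point is that completeness is checked, not assumed.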
4) Reconcile against existing inventories (to avoid gaps)
Do not trust self-attestation alone. Cross-check:
- CMDB / cloud asset inventory (compute, storage, network)
- Data catalog (datasets, tables, lineage)
- IAM / SSO groups (who can access what)
- Third-party inventory (all external providers supporting the AI system)
- Source code repos (pipelines, prompt configs, evaluation scripts)
Output: Reconciliation log and remediation tickets for missing documentation.
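The cross-check itself is a set comparison per category: resources an authoritative inventory knows about but the register does not (and vice versa) become remediation tickets. A minimal sketch, assuming each source can be reduced to a set of resource identifiers:

```python
def reconcile(register_ids: set, inventory_ids: set):
    """Compare the register against one authoritative inventory (CMDB, data catalog, IAM...)."""
    missing_from_register = inventory_ids - register_ids   # undocumented dependencies
    unverified_in_register = register_ids - inventory_ids  # stale or mistyped entries
    return missing_from_register, unverified_in_register

# Compute entries from the register vs. a CMDB / cloud asset export for the same system.
register_compute = {"gpu-cluster-a", "k8s-ns-support-bot"}
cmdb_export = {"gpu-cluster-a", "k8s-ns-support-bot", "storage-bucket-logs"}

missing, unverified = reconcile(register_compute, cmdb_export)
# "storage-bucket-logs" exists in the CMDB but is undocumented -> open a remediation ticket.
assert missing == {"storage-bucket-logs"}
assert unverified == set()
```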
5) Attach “proof of use” through operating routines
Examiners look for operationalization, not a pretty spreadsheet. Wire the register into:
- Change management: changes to documented resources require a ticket that references the register entry.
- Procurement and third-party due diligence: new AI-related third parties cannot go live without being added.
- Incident response: runbooks reference the resources (who owns the cloud account, which dashboards, which API keys).
- Periodic review: system owners re-attest that listed resources are current and complete.
Output: SOPs and workflow evidence showing the register is used.
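The change-management hook can be enforced mechanically: a change ticket touching an AI system is rejected unless it cites a known register entry. A sketch, assuming tickets carry a list of referenced register IDs (the field name is an assumption):

```python
def change_ticket_ok(ticket: dict, register_ids: set):
    """Gate: a change to a documented resource must reference its register entry."""
    refs = ticket.get("register_refs", [])
    if not refs:
        return False, "no register entry referenced"
    unknown = [r for r in refs if r not in register_ids]
    if unknown:
        return False, f"unknown register entries: {unknown}"
    return True, "ok"

register_ids = {"ai-001/support-tickets-training-set", "ai-001/llm-api-vendor"}

# A data refresh that cites its register entry passes the gate.
ok, reason = change_ticket_ok(
    {"summary": "quarterly training data refresh",
     "register_refs": ["ai-001/support-tickets-training-set"]},
    register_ids,
)
assert ok

# A vendor swap with no register reference is blocked until the register is updated.
ok, reason = change_ticket_ok({"summary": "swap LLM provider"}, register_ids)
assert not ok
```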
Required evidence and artifacts to retain
Keep artifacts that show (1) the documentation exists, (2) it is complete for each AI system, and (3) it is maintained.
Minimum evidence set:
- AI System Resource Register (current version + version history)
- AI system inventory with owners and business purpose
- Resource templates/standard (definitions, required fields)
- Links to underlying systems of record:
- cloud account/project list for the system
- datasets and data lineage entries
- model registry entries and model versions
- monitoring dashboards and alert routes
- third-party contracts/SOWs and due diligence records
- Change management evidence (tickets/approvals referencing resource entries)
- Access control evidence (IAM group mappings, privileged access approvals)
- Review/attestation evidence (owner sign-offs, meeting notes, action items)
Common exam/audit questions and hangups
Expect questions like:
- “Show me, for this AI system, every dataset and where it is stored. Who approves access?”
- “Which third parties are required to operate the system? What happens if they fail?”
- “Where is the model hosted and monitored? Who is on-call?”
- “How do you ensure changes to prompts/configs or data refreshes are controlled?”
- “Prove the documentation is current. What was the last update and why?”
Hangups auditors often find:
- Documentation exists, but it is not tied to specific AI systems.
- The register lists tools but misses data dependencies or third-party services.
- Ownership is unclear, or “AI team” is listed without a named accountable role.
- No evidence that updates are enforced through change control.
Frequent implementation mistakes (and how to avoid them)
- Treating “resources” as only infrastructure. Fix: include data, third-party services, and humans explicitly. If a labeling vendor or prompt engineer is required, document it.
- Listing resources without ownership or access paths. Fix: every resource gets an owner and a pointer to how access is granted (group, ticket type, approver).
- Building a spreadsheet that immediately goes stale. Fix: connect updates to existing workflows (change management, procurement intake, model release gates). Daydream-style routing and attestations help because the register becomes the workflow, not an afterthought.
- Over-scoping to “all corporate IT.” Fix: document what the AI system requires. Reference enterprise standards elsewhere, but keep the register AI-system-specific.
- No linkage to third-party risk management. Fix: for each required external service, include the third-party record and the minimum assurance package (contract, security review status, exit/continuity notes).
Enforcement context and risk implications
No public enforcement cases were provided for this specific ISO control. Treat the risk as practical and audit-driven: weak resource documentation leads to uncontrolled changes, untracked third-party dependencies, and gaps in access governance. Those gaps show up quickly during incidents, model performance drift investigations, and external audits against AI governance commitments. 1
Practical execution plan (30/60/90)
First 30 days (Immediate)
- Assign an accountable owner for the control (usually GRC partnered with AI engineering).
- Define the AI system boundary and publish the AI system inventory.
- Approve the resource taxonomy and the register template.
- Pilot the register on a small set of high-impact AI systems; reconcile against cloud assets, data catalog, and third-party inventory.
- Create a basic SOP: “When a resource changes, update the register via ticket.”
By 60 days (Near-term)
- Expand the register coverage to all in-scope AI systems.
- Add required fields that drive operational controls: access path, change triggers, dependency criticality, and links to monitoring/runbooks.
- Connect procurement intake so new AI-related third parties are captured automatically.
- Add an owner attestation workflow and a review cadence aligned to your existing governance meetings.
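The attestation cadence can also be checked programmatically: flag entries whose last owner sign-off is older than the review interval. A minimal sketch; the 90-day cadence is an assumption for illustration, not an ISO requirement:

```python
from datetime import date, timedelta

def stale_attestations(entries, today, cadence_days=90):
    """Return register entries whose owner attestation is overdue for re-confirmation."""
    cutoff = today - timedelta(days=cadence_days)
    return [e for e in entries if e["last_attested"] < cutoff]

entries = [
    {"name": "gpu-cluster-a", "owner": "sre-lead", "last_attested": date(2024, 1, 10)},
    {"name": "llm-api-vendor", "owner": "vendor-mgr", "last_attested": date(2024, 5, 2)},
]

overdue = stale_attestations(entries, today=date(2024, 6, 1))
# Only the entry last attested in January falls outside the 90-day window.
assert [e["name"] for e in overdue] == ["gpu-cluster-a"]
```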
By 90 days (Operationalized and testable)
- Demonstrate traceability: pick any AI system and produce a complete dependency pack within hours.
- Validate “proof of use”: sample recent changes (model updates, data refreshes, vendor changes) and confirm register updates occurred through change management.
- Run a tabletop exercise: simulate loss of a key resource (data feed or model API) and confirm the register identifies owners, contacts, and recovery steps.
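Both the dependency pack and the tabletop exercise reduce to queries over the register: everything for one system, or everyone affected when one resource fails. A sketch over simplified register rows (field names are illustrative):

```python
def dependency_pack(register, system_id):
    """All documented resources for one AI system, grouped by category."""
    pack = {}
    for row in register:
        if row["system_id"] == system_id:
            pack.setdefault(row["category"], []).append(row["name"])
    return pack

def blast_radius(register, resource_name):
    """Who to call when a resource fails: (system, owner) pairs that depend on it."""
    return [(r["system_id"], r["owner"]) for r in register if r["name"] == resource_name]

register = [
    {"system_id": "ai-001", "name": "llm-api-vendor", "category": "third_party", "owner": "vendor-mgr"},
    {"system_id": "ai-002", "name": "llm-api-vendor", "category": "third_party", "owner": "vendor-mgr"},
    {"system_id": "ai-001", "name": "gpu-cluster-a", "category": "compute", "owner": "sre-lead"},
]

# Dependency pack for one system: grouped by taxonomy category.
assert sorted(dependency_pack(register, "ai-001")) == ["compute", "third_party"]

# Tabletop: losing the hosted model API affects two systems, same accountable owner.
assert blast_radius(register, "llm-api-vendor") == [("ai-001", "vendor-mgr"), ("ai-002", "vendor-mgr")]
```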
- If you use Daydream, convert the register into structured records with required fields, automated reminders, and approval routing so currency does not depend on manual follow-up.
Frequently Asked Questions
What counts as a “resource” for ISO 42001 Control A.4.2?
Anything required to build, run, monitor, or change the AI system under control: data, compute, tooling, third-party services, and the human roles needed to operate it. If losing it would break the system or increase risk, document it. 1
Do we need separate documentation for each model version?
Track resources at the AI system level, then link to model registry entries and versions where behavior-impacting resources change (model artifact, prompt/config, evaluation set). The goal is traceability without duplicating documentation.
We use a hosted LLM API. What do we document if we don’t control the model?
Document the API provider as a required third party, the integration components you control (prompts/config, routing, guardrails), the accounts/keys, monitoring, and the contingency plan if the provider degrades or changes terms. Include the third-party due diligence record.
Can we satisfy this control with our CMDB alone?
Usually not. CMDB covers infrastructure well but misses data resources, ML tooling, and human/operational dependencies. Use CMDB as an input, then complete the AI-specific register.
How detailed should dataset documentation be?
Detailed enough to identify the dataset, where it is stored, who owns it, how access is granted, and what changes could impact model behavior (refreshes, schema changes, sourcing changes). Link to your data catalog entry when you have one.
Who should own the AI system resource register?
GRC should own the standard and oversight, but each AI system needs a named operational owner responsible for accuracy. Engineering, data, and procurement teams must be accountable for updates through their existing workflows.
Footnotes
1. ISO/IEC 42001:2023, Artificial intelligence — Management system.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream