System and computing resources
ISO/IEC 42001 Annex A Control A.4.5 requires you to identify and document the system and computing resources your AI systems use, across on‑prem, cloud, and third-party components. To operationalize it, build and maintain a resource inventory mapped to each AI system, with clear ownership, change control triggers, and audit-ready evidence. 1
Key takeaways:
- Maintain an AI-specific inventory of compute, storage, network, and platform services mapped to each AI system. 1
- Treat third-party infrastructure (cloud, hosted APIs, managed ML platforms) as “computing resources” that must be documented. 1
- Make the inventory operational: owners, update triggers, and retention of screenshots/exports, architecture diagrams, and approvals. 1
Control A.4.5 is an inventory requirement, but auditors will treat it as a reliability and risk-control foundation. If you cannot name the compute, storage, and infrastructure an AI system runs on, you usually cannot prove basic governance outcomes: where the model executes, where training data is processed, which environments are production, who can change capacity, or which third parties are in the path.
For a CCO or GRC lead, the fastest way to implement A.4.5 is to create a “resource-to-AI-system map” that is complete enough for an assessor to follow the chain from an AI use case to the exact platforms and accounts that execute it. Your goal is not to produce perfect infrastructure-as-code documentation. Your goal is a controlled register that stays current and supports downstream controls like access management, incident response, and third-party due diligence.
This page gives requirement-level steps, evidence to retain, audit questions to expect, and a practical execution plan you can run with engineering and security without rewriting your entire CMDB. 1
Regulatory text
Text (excerpt): “The organization shall identify and document the system and computing resources used by AI systems.” 1
What the operator must do: You must (1) identify the computing resources that each AI system depends on, and (2) document them in a form that is controlled, reviewable, and maintainable. “Computing resources” should be read broadly in practice: hardware, virtual machines, Kubernetes clusters, serverless functions, GPUs/TPUs, storage, databases, networking components, and cloud/managed services that run, train, fine-tune, host, or monitor an AI model. 1
Plain-English interpretation
You need a reliable list of “where the AI runs” and “what it depends on,” tied to each AI system. If a system uses a managed model endpoint, your documentation must still capture that the endpoint exists, which account or subscription it sits in, who owns it, and how changes are approved.
This is not a one-time spreadsheet. Auditors expect it to be kept current through defined triggers (new AI system, environment changes, migration, new cloud service, third-party substitution) and to have an accountable owner.
Who it applies to
Entity types: AI providers, AI users, and organizations operating AI systems. 1
Operational context where it matters most:
- Organizations building AI: training pipelines, feature stores, experimentation platforms, model registries, inference services.
- Organizations deploying third-party AI: SaaS AI features, API-based model providers, embedded AI in third-party platforms.
- Hybrid environments: on-prem compute with cloud training, or cloud inference with on-prem data sources.
- Regulated or high-impact processing: where you need clean traceability for incident response, change management, and third-party oversight.
What you actually need to do (step-by-step)
Step 1: Define the scope of “AI system” and “computing resources”
Write a short scoping note (one page is enough) that states:
- What counts as an AI system in your environment (apps with ML models, AI-assisted decision services, LLM-integrated workflows).
- What counts as computing resources (on-prem, cloud, managed services, and third-party dependencies).
This prevents debates later when teams try to exclude “just an API call” or “just a managed endpoint.” 1
Step 2: Build an AI System Resource Inventory (the core artifact)
Create a register with one row per AI system, and a structured sub-section (or linked table) for its resources. Minimum fields that work well in audits:
AI system identifiers
- AI system name and unique ID
- Business owner
- Technical owner (platform/engineering)
- Environment(s): dev/test/prod (or equivalent)
- Criticality tier (your internal scale)
Computing resources (map to the AI system)
- Cloud account/subscription/project IDs (where applicable)
- Compute: instance types, cluster names, serverless services, GPU pools
- Storage: buckets, object stores, volumes, file shares
- Data services: databases, streaming services
- ML services: managed training, model registry, feature store, vector DB, monitoring tools
- Networking: VPC/VNet, subnets, load balancers, egress controls (at least at a high level)
- CI/CD + runtime orchestration: pipelines, container registry, IaC repos (link references)
- Third-party resources: hosted APIs, SaaS platforms, managed endpoints, including the third party name and service component
Keep the inventory readable. Assessors want traceability more than exhaustiveness. 1
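The minimum fields above can be sketched as a single register entry. This is a minimal illustration, not a mandated schema: the field names, IDs, and resource names are invented examples.

```python
# Illustrative AI System Resource Inventory entry. One entry per AI
# system; the "resources" sub-structure maps the system to its
# computing resources. All names and IDs here are hypothetical.
inventory_entry = {
    "ai_system_id": "AIS-0042",              # unique ID
    "name": "Claims triage assistant",
    "business_owner": "Head of Claims",
    "technical_owner": "ML Platform Team",
    "environments": ["dev", "prod"],
    "criticality_tier": "high",
    "resources": {
        "cloud_accounts": ["prod-ml-123456"],  # account/subscription/project IDs
        "compute": ["ml-cluster-prod", "gpu-pool-a100"],
        "storage": ["claims-training-data-bucket"],
        "data_services": ["postgres-claims-replica"],
        "ml_services": ["model-registry-prod", "vector-db-prod"],
        "networking": ["vpc-ml-prod"],
        "cicd": ["repo: infra/claims-triage (IaC)"],
        "third_party": [{"vendor": "ExampleAI", "service": "hosted LLM API"}],
    },
}

def has_minimum_fields(entry):
    """Quick completeness check: an entry should at least name a
    technical owner and one cloud account or third-party dependency."""
    r = entry["resources"]
    return bool(entry.get("technical_owner")) and bool(
        r.get("cloud_accounts") or r.get("third_party")
    )
```

A check like `has_minimum_fields` is the kind of lightweight validation that keeps a spreadsheet-era register from drifting into incompleteness once it lives in a controlled repo.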
Step 3: Tie each resource to a purpose and lifecycle stage
For each major resource category, capture “used for what”:
- Training
- Fine-tuning
- Evaluation/testing
- Inference/serving
- Monitoring and logging
This small addition pays off in reviews because it clarifies why sensitive data might touch a given environment and what “production” actually means for the AI system. 1
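One lightweight way to capture “used for what” is a purpose tag per resource, validated against a fixed set of lifecycle stages. The stage names follow the list above; the resource names are illustrative.

```python
# The allowed lifecycle stages, matching the categories above.
LIFECYCLE_STAGES = {"training", "fine-tuning", "evaluation", "inference", "monitoring"}

# Illustrative mapping of resources (by name) to lifecycle purposes.
resource_purposes = {
    "gpu-pool-a100": {"training", "fine-tuning"},
    "ml-cluster-prod": {"inference"},
    "claims-training-data-bucket": {"training", "evaluation"},
    "metrics-stack-prod": {"monitoring"},
}

def invalid_purposes(purposes):
    """Return any purpose tags that are not recognized lifecycle stages,
    so free-text drift ("prod stuff", "misc") gets caught at entry time."""
    return {p for stages in purposes.values() for p in stages} - LIFECYCLE_STAGES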
Step 4: Assign ownership and update triggers (make it operational)
Document:
- Inventory owner (role, not a person): often AI governance lead + platform owner.
- Update triggers: new AI system onboarding; cloud account change; region change; new managed service; major model version rollout; third-party change.
- Update mechanism: service ticket + approval, or pull request workflow if you store the register in a controlled repo.
- Review cadence: define an internal periodic review (cadence is your choice; auditors care that it exists and is followed).
If you use Daydream for third-party risk workflows, treat third-party AI platforms and managed model providers as first-class dependencies in the inventory and link them to your due diligence record (security review, DPIA where applicable, contract artifacts, SLA/BCP posture). That reduces the “AI inventory vs third-party inventory” mismatch that creates audit gaps.
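The update triggers above can be encoded as a simple gate in the change workflow: given a change event type, decide whether the inventory must be updated before the change closes. The event names are illustrative labels for the triggers listed in this step.

```python
# Illustrative update-trigger gate. Event type names are examples
# corresponding to the triggers listed above, not a fixed taxonomy.
UPDATE_TRIGGERS = {
    "new_ai_system",
    "cloud_account_change",
    "region_change",
    "new_managed_service",
    "major_model_rollout",
    "third_party_change",
}

def inventory_update_required(event_type: str) -> bool:
    """True if this change event must update the AI resource inventory
    before the change ticket can be closed."""
    return event_type in UPDATE_TRIGGERS
```

Wiring a check like this into the ticketing workflow is what turns “defined triggers” from a policy statement into evidence an auditor can test.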
Step 5: Prove completeness (the exam trap)
Auditors commonly probe whether the register is complete. Use at least one reconciliation method:
- Export a list of cloud projects/accounts tagged “AI” and reconcile to your inventory.
- Reconcile deployed model endpoints from your model serving platform.
- Reconcile procurement records for AI-related SaaS and APIs.
Pick one method you can repeat. Document the reconciliation result and follow-ups. 1
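The first reconciliation method above reduces to a set comparison: a cloud-side export of accounts tagged as AI workloads versus the accounts recorded in the inventory. A minimal sketch, with invented account IDs:

```python
def reconcile(tagged_accounts, inventory_accounts):
    """Compare a cloud export of AI-tagged accounts with the accounts
    recorded in the inventory. Returns the follow-ups an auditor will
    ask about: accounts running AI workloads that the inventory misses,
    and inventory entries no longer present in the cloud export."""
    tagged = set(tagged_accounts)
    recorded = set(inventory_accounts)
    return {
        "missing_from_inventory": sorted(tagged - recorded),
        "stale_in_inventory": sorted(recorded - tagged),
    }

# Example run with hypothetical account IDs: one dev account is
# running AI workloads but was never added to the register.
result = reconcile(
    tagged_accounts=["prod-ml-123456", "dev-ml-789012"],
    inventory_accounts=["prod-ml-123456"],
)
```

The output of each run, plus a signoff note on the follow-ups, is exactly the reconciliation evidence to retain.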
Step 6: Control changes and retain evidence
Add a lightweight change control step:
- Change request references the AI system ID and the resource being added/removed.
- Approval recorded by the technical owner and (where required) AI governance.
Retain evidence as exports, screenshots, or signed change tickets.
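A minimal validation of this change-control step might check that each change record carries the AI system ID, the resource affected, and at least one recorded approval. Field names are illustrative:

```python
def change_record_complete(record: dict) -> bool:
    """True if the change record references the AI system ID, the
    resource being added/removed, and a recorded approval (technical
    owner and, where required, AI governance)."""
    return all([
        record.get("ai_system_id"),
        record.get("resource"),
        record.get("action") in {"add", "remove"},
        record.get("approvals"),  # non-empty list of approvers
    ])
```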
Required evidence and artifacts to retain
Keep these in a single audit folder per AI system (or a governed repository):
- AI System Resource Inventory (current version + prior versions or change history). 1
- Architecture diagram showing major compute and data flows (high level is acceptable).
- Cloud/account mapping evidence: account IDs, project IDs, subscription IDs tied to the AI system.
- Configuration exports for key resources (cluster list, model endpoints, storage locations) where practical.
- Third-party dependency list and links to due diligence artifacts (SOC reports if available, contract terms, security questionnaire outcomes).
- Reconciliation evidence (export + signoff notes).
- Change control records for additions/removals of major resources.
Common exam/audit questions and hangups
Expect questions like:
- “Show me all production compute resources used by AI System X, including managed services.” 1
- “Which cloud accounts run AI workloads, and who owns them?”
- “Does the inventory include the third-party model API, the gateway, and the logging stack?”
- “How do you know the inventory is complete?”
- “What triggers an update, and can you show the last update evidence?”
Hangups that slow teams down:
- Disagreement on whether a SaaS AI feature counts as “your AI system.”
- Fragmented ownership across ML, platform, and security teams.
- CMDB is too generic; AI resources aren’t tagged consistently.
Frequent implementation mistakes and how to avoid them
- Listing only servers and forgetting managed services. Fix: add explicit fields for managed ML services, model endpoints, and third-party APIs. 1
- No link between AI systems and cloud accounts. Fix: require account/subscription/project IDs for every AI system entry.
- Inventory exists but never changes. Fix: define update triggers and enforce them through your change workflow.
- Treating third parties as out of scope. Fix: document third-party computing resources as dependencies and connect them to third-party risk records.
- Over-building the inventory. Fix: start with material resources (production and data-processing paths), then iterate.
Enforcement context and risk implications
There are no public enforcement cases associated with this requirement. Practically, weak documentation here increases operational risk: you may miss shadow AI deployments, fail to scope incident response, or overlook third-party processing paths. It also creates knock-on gaps in access reviews, data governance, and change management, because those controls depend on knowing what exists. 1
Practical 30/60/90-day execution plan
First 30 days (Immediate)
- Name owners: governance owner and technical owner per AI system.
- Draft the scope note for “AI system” and “computing resources.” 1
- Stand up the inventory template and populate it for the highest-risk AI systems first (production and externally facing systems).
- Capture initial evidence: architecture diagrams, account mappings, and third-party dependency lists.
By 60 days (Near-term)
- Expand coverage to remaining AI systems (including internal tools that process sensitive data).
- Add lifecycle-stage purpose fields (training vs inference vs monitoring).
- Implement one completeness reconciliation method and record the result.
- Connect third-party dependencies to due diligence records (Daydream can hold the third-party workflow and evidence references in one place).
By 90 days (Operationalize)
- Formalize update triggers in your change process and test them with a real change (new endpoint, new service, migration).
- Run a management review of the inventory: what changed, what’s missing, who is behind on updates.
- Package audit-ready evidence per AI system and confirm retrieval is fast (a common failure is “we have it somewhere” with no indexing). 1
Frequently Asked Questions
Do we need to document compute for a third-party model API we call?
Yes. The requirement is to document computing resources used by the AI system, and third-party services are part of the runtime dependency chain. Record the third party, the service name, and the integration point as a computing resource dependency. 1
Can we point auditors to our CMDB instead of creating a separate AI inventory?
You can, as long as the CMDB can clearly map each AI system to its computing resources and stays current through defined triggers. Many CMDBs lack AI-specific fields (model endpoints, managed ML services), so teams often maintain an AI overlay register. 1
How detailed does “document” need to be?
Detailed enough that a reviewer can trace where the AI system runs and what it depends on, including accounts/projects and major managed services. If an engineer has to guess which subscription or cluster is in scope, the documentation is not sufficient. 1
Does this include developer laptops or local experimentation environments?
Include them if they are part of the AI system lifecycle in a way that affects data processing, model artifacts, or production behavior. If local environments are allowed for sensitive data or model training, document the approved pattern and constraints as part of the resource picture. 1
What evidence should we keep if infrastructure is defined in code?
Keep controlled references: repository links, tagged releases, and change approvals tied to the AI system ID, plus exports/screenshots that show the deployed resources. Auditors typically want proof of what is actually deployed, not only what could be deployed. 1
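An evidence package for an IaC-managed system can be sketched as a manifest that pairs the controlled reference (what could be deployed) with a dated capture of what is actually deployed. All names, tags, and paths below are hypothetical:

```python
# Illustrative evidence manifest for an IaC-managed AI system: the
# controlled IaC reference (repo, tagged release, change approval)
# plus a dated export of the actually deployed resources.
evidence_manifest = {
    "ai_system_id": "AIS-0042",
    "iac_repo": "infra/claims-triage",      # hypothetical repo reference
    "iac_release_tag": "v2.3.1",
    "change_ticket": "CHG-10877",
    "deployed_export": {
        "captured_at": "2025-01-15T10:00:00Z",
        "artifact": "exports/claims-triage-prod-2025-01-15.json",
    },
}

def covers_deployed_state(manifest: dict) -> bool:
    """Evidence should include both the controlled IaC reference and a
    dated export of the deployed resources, since auditors want proof
    of what is running, not only what the code would deploy."""
    return bool(manifest.get("iac_release_tag")) and bool(
        manifest.get("deployed_export", {}).get("captured_at")
    )
```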
Who should own this inventory in a matrixed org?
Put accountability with the AI governance function (or GRC) for completeness and review, and assign technical ownership per AI system to the platform/ML owner for accuracy. Document both roles so updates do not stall between teams. 1
Footnotes
1. ISO/IEC 42001:2023, Artificial intelligence — Management system (Annex A, Control A.4.5).
Authoritative Sources
- ISO/IEC 42001:2023, Artificial intelligence — Management system.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream