ID.AM-07: Inventories of data and corresponding metadata for designated data types are maintained
ID.AM-07 requires you to maintain an inventory of designated data types and the metadata that makes that inventory operational (where the data is, who owns it, how it’s protected, and how it moves). To implement it quickly, define “designated data types,” set a minimum metadata schema, connect discovery to a system of record, and run a recurring reconciliation with evidence.
Key takeaways:
- Scope first: “designated data types” must be explicitly defined and approved, or the inventory will sprawl and fail.
- Metadata is the control: without ownership, location, classification, retention, and sharing details, an inventory is not actionable.
- Operationalize with a system of record, automated feeds where possible, and a recurring attestation/reconciliation cycle.
A data inventory is only defensible if it can answer exam questions fast: What sensitive data do we have, where is it stored, who is accountable, what protections apply, and which third parties receive it? ID.AM-07 focuses on the subset that matters most: “designated data types.” That phrase is your lever to keep the requirement bounded while still covering the data that drives security, privacy, and resilience risk.
For a Compliance Officer, CCO, or GRC lead, the practical goal is to turn “we think we know where the data is” into a maintained, evidenced inventory with clear ownership and repeatable updates. This requirement is usually tested indirectly through incident response failures, incomplete breach scoping, overbroad access, weak retention, unmanaged SaaS sprawl, and third-party data sharing that the organization can’t fully enumerate.
This page gives requirement-level implementation guidance you can run as a control: define scope, establish a metadata standard, assign control ownership, connect discovery and change management, and collect recurring evidence. The intent is audit-ready traceability, not a one-time spreadsheet.
Requirement: ID.AM-07 data inventories and metadata (operator view)
Control outcome: You can produce an accurate, current inventory for designated data types, including the metadata needed to govern those data types across systems, repositories, and third parties.
Why examiners care: If you cannot enumerate sensitive data and its movement, you will struggle to (a) scope incidents, (b) enforce least privilege, (c) meet retention and deletion obligations, (d) manage third-party risk, and (e) demonstrate control effectiveness.
Regulatory text
Framework requirement (excerpt): “Inventories of data and corresponding metadata for designated data types are maintained.” (NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes)
What the operator must do:
- Decide which data types are “designated” (in scope).
- Maintain an inventory of where that data exists.
- Maintain “corresponding metadata” that makes the inventory usable for governance and cybersecurity operations.
- Keep it maintained over time, with an update mechanism and evidence (not a one-off exercise). (NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes)
Plain-English interpretation
Maintain a living map of the data that matters most, plus the minimum set of attributes that lets you control it. A list of systems is not enough. For each designated data type, you should be able to answer:
- Where it lives (systems, repositories, endpoints, cloud services)
- How it’s classified (sensitivity/criticality)
- Who owns it (business owner and technical custodian)
- Who can access it (roles/groups, privileged access paths)
- How it moves (integrations, exports, ETL, APIs, file transfers)
- Where it goes externally (third parties, processors, subprocessors)
- How long it stays (retention and deletion triggers)
Who it applies to (entity + operational context)
Entity scope: Any organization operating a cybersecurity program aligned to NIST CSF 2.0, especially those with regulated data, critical operations, or meaningful third-party dependencies. (NIST CSWP 29)
Operational scope: This requirement is most relevant where you have:
- Multiple data stores (SaaS, IaaS/PaaS, on-prem)
- Data pipelines (BI, analytics, ML, ETL)
- Third-party sharing (support tools, payment processors, marketing platforms, MSSPs)
- M&A, rapid product releases, or decentralized IT
Teams involved: Security/GRC (control design), Data Governance/Privacy (taxonomy and ownership), IT/Cloud Ops (system inventory), App Engineering (data flows), Legal/Records (retention), Procurement/TPRM (third-party data sharing).
What you actually need to do (step-by-step)
Step 1: Define “designated data types” with a tight scope statement
Create a controlled list of data types that the organization commits to inventorying. Keep it defensible:
- Start with categories that drive legal, security, or operational risk (examples: customer PII, employee data, payment data, auth secrets, regulated health data, trade secrets, security telemetry).
- Define each data type with inclusion/exclusion criteria and examples.
- Assign a business owner for each data type (not a generic “Security owns all data” stance).
Deliverable: “Designated Data Types Register” approved by the data governance council, CISO, or equivalent.
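If it helps to make the register concrete, a minimal sketch of one entry as structured data follows. All field names and values here are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class DesignatedDataType:
    """One approved entry in a Designated Data Types Register (illustrative)."""
    name: str                # e.g. "Customer PII"
    business_owner: str      # a named role, not a generic "Security owns all data"
    includes: list[str]      # inclusion criteria / examples
    excludes: list[str]      # explicit exclusions that keep the scope bounded
    approved_by: str         # governance body that signed off
    approved_on: str         # ISO date of approval, for the change log

REGISTER = [
    DesignatedDataType(
        name="Customer PII",
        business_owner="VP Sales Ops",
        includes=["name + email", "billing address", "phone number"],
        excludes=["aggregate, de-identified usage metrics"],
        approved_by="Data Governance Council",
        approved_on="2024-01-15",
    ),
]
```

The point of the structure is the inclusion/exclusion split and the approval fields: they are what makes the scope defensible and the change log auditable.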
Step 2: Set the minimum metadata schema (what fields must exist)
Without a standard schema, “metadata” turns into inconsistent notes. Define required fields per inventory record. A practical minimum set:
| Metadata field | What it answers | Example |
|---|---|---|
| Data type | What is this? | “Customer PII” |
| System/repository | Where is it stored? | “Salesforce” / “S3 bucket X” |
| Authoritative source | Which copy is the system of record? | “CRM is authoritative” |
| Business owner | Who is accountable? | VP Sales Ops |
| Technical owner | Who operates it? | SaaS admin group |
| Classification | How sensitive? | Confidential/Restricted |
| Access model | Who can access and how? | RBAC groups, PAM path |
| Retention | How long kept? | “7 years from close” |
| Encryption/control mapping | How protected? | KMS key, DLP policy |
| External sharing | Which third parties receive it? | Ticketing system, processor |
| Data flows | How does it move? | API integration, nightly ETL |
Deliverable: “Data Inventory Metadata Standard” embedded into your GRC tool, CMDB, or data catalog template.
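One way to enforce "required fields must exist and must not be blank" before a record lands in the system of record is a small validation check. This is a sketch only; the field names mirror the table above but are assumptions about how your schema names them:

```python
# Required fields per inventory record (names are illustrative assumptions).
REQUIRED_FIELDS = {
    "data_type", "system", "authoritative_source",
    "business_owner", "technical_owner", "classification",
    "access_model", "retention", "control_mapping",
    "external_sharing", "data_flows",
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    problems += [f"blank field: {k}" for k, v in sorted(record.items())
                 if k in REQUIRED_FIELDS and not str(v).strip()]
    return problems

# A record missing most required fields fails loudly:
partial = {"data_type": "Customer PII", "system": "Salesforce", "retention": ""}
issues = validate_record(partial)
```

In practice the same rule set would live as required-field validation inside the GRC tool or data catalog; the sketch just shows that the standard is checkable, not aspirational.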
Step 3: Pick a system of record and connect feeds
Decide where the inventory “lives” and how updates occur:
- If you have a data catalog, use it as the system of record for data assets and metadata.
- If you don’t, use a GRC register with enforced required fields and link-outs to CMDB, IAM, and vendor inventories.
- Connect automated discovery where feasible (cloud asset inventory, SaaS app discovery, DLP findings), but do not block on perfect automation.
Control design note: Auditors tend to accept partial automation if ownership, update triggers, and reconciliation are strong.
Step 4: Build the population: systems, stores, and third parties
Work from multiple starting points to avoid blind spots:
- Known systems list (CMDB, cloud accounts/projects/subscriptions, SaaS list)
- Data egress points (email, file transfer, API gateways, iPaaS)
- Third-party inventory (contracts + security reviews + DPAs)
- Incident/breach lessons learned (data you struggled to locate previously)
Deliverable: A populated inventory with traceable sources (links to tickets, scans, contracts, architecture diagrams).
Step 5: Define “maintained” with explicit update triggers
Write down what causes an update, who updates it, and how fast:
- New system onboarding
- New third party receiving a designated data type
- Material schema change (new fields collected, new identifiers)
- New data flow/integration
- Decommissioning or migration
- Reclassification decision (data becomes more sensitive)
- Retention schedule changes
Tie triggers to existing processes (change management, procurement intake, SDLC security review, privacy intake). This is where the control becomes real.
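The trigger list above can be written down as a simple mapping from event to required update and turnaround, which the runbook and intake forms then reference. The SLA values below are placeholders, not recommendations:

```python
# Trigger -> (required inventory update, turnaround SLA in business days).
# Both the update text and the SLA numbers are illustrative placeholders.
UPDATE_TRIGGERS = {
    "new_system_onboarding":     ("create inventory record with full metadata", 10),
    "new_third_party_sharing":   ("add recipient to the external-sharing field", 5),
    "material_schema_change":    ("review data-type mapping and classification", 10),
    "new_data_flow":             ("record flow source, destination, mechanism", 10),
    "decommission_or_migration": ("retire or re-point the inventory record", 5),
    "reclassification":          ("update classification and control mapping", 5),
    "retention_change":          ("update retention field and deletion trigger", 10),
}
```

Embedding this table in change management, procurement intake, and SDLC review checklists is what turns "maintained" from a hope into a workflow.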
Step 6: Add a reconciliation and attestation cycle with evidence
You need a recurring mechanism to catch drift:
- Reconcile discovery outputs (cloud inventory, DLP alerts, SaaS discovery) against the inventory.
- Require owners to attest that records remain accurate and complete for their scope.
- Track exceptions with tickets and due dates.
Evidence principle: You are proving maintenance, not just existence.
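At its core, the reconciliation step reduces to a set comparison between what discovery sees and what the inventory records. A minimal sketch, with illustrative system names:

```python
def reconcile(discovered: set[str], inventoried: set[str]) -> dict[str, set[str]]:
    """Compare discovery output against the inventory of record.

    - "untracked": systems seen by discovery but absent from the inventory
      (candidates for new records or exception tickets)
    - "stale": inventory entries no longer seen by discovery
      (candidates for decommission review)
    """
    return {
        "untracked": discovered - inventoried,
        "stale": inventoried - discovered,
    }

# Example: SaaS discovery output vs. the current inventory export.
discovered = {"salesforce", "workday", "new-analytics-tool"}
inventoried = {"salesforce", "workday", "legacy-crm"}
drift = reconcile(discovered, inventoried)
# drift["untracked"] -> {"new-analytics-tool"}; drift["stale"] -> {"legacy-crm"}
```

Each item in "untracked" or "stale" becomes an exception ticket with an owner and a due date; the comparison report itself is the evidence of maintenance.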
Step 7: Operationalize downstream uses (prove it’s not shelfware)
Make the inventory a dependency for:
- Data access reviews (targeted by data type/system)
- Third-party risk reviews (which data types shared)
- Incident response scoping (quickly list systems holding designated data)
- Retention and deletion workflows
- Security architecture decisions (encryption, tokenization, segmentation)
This creates a natural forcing function for accuracy.
Required evidence and artifacts to retain (audit-ready)
Keep artifacts that show design, operation, and change history:
- Designated Data Types Register (approved version + change log)
- Data Inventory export (current snapshot) and historical snapshots or audit trail
- Metadata Standard (required fields, definitions, validation rules)
- Ownership assignments (RACI, accountable executives, admin groups)
- Update procedure (runbook describing triggers and workflow)
- Reconciliation evidence (scan outputs, comparison report, exception tickets)
- Attestations (owner sign-offs, meeting minutes, workflow approvals)
- Third-party data sharing mapping (links to contracts, DPAs, security reviews)
- Sampling package for auditors (see below)
Practical sampling package: Pick a few designated data types and produce end-to-end traceability: inventory record → system owner → access model → third party sharing → retention → last attestation.
Common exam/audit questions and hangups
Questions you should be ready to answer:
- “Define ‘designated data types.’ Who approved the scope?”
- “Show me where customer PII exists across your environment.”
- “How do you know the inventory is current?”
- “What triggers an update when a new SaaS tool is adopted?”
- “Which third parties receive designated data types, and where is that recorded?”
- “Who owns each record, and what happens when owners change roles?”
- “Demonstrate one reconciliation cycle and how exceptions were handled.”
Hangups auditors see:
- Inventory exists but has no owners or no update workflow.
- Metadata fields are optional, inconsistent, or blank.
- Third-party sharing is handled in procurement, but never reconciled back to the inventory.
- Engineering creates new pipelines without any data-type mapping.
Frequent implementation mistakes and how to avoid them
- Mistake: Inventorying “all data everywhere.” Avoid it by defining designated data types and sticking to them. Expand scope deliberately through governance.
- Mistake: Treating a CMDB as a data inventory. A CMDB lists systems; ID.AM-07 expects data and metadata. Link the CMDB, but keep data-type records as first-class entries.
- Mistake: No external sharing map. Add a required field: “third parties receiving this data type,” tied to your third-party inventory. Make procurement intake a trigger.
- Mistake: No evidence of maintenance. Keep reconciliation outputs and attestations. If updates happen through tickets, retain the ticket trail and approvals.
- Mistake: Ownership assigned to a team mailbox. Assign a named role (business owner + technical owner). Back it with a distribution list for continuity, but keep accountability explicit.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this guidance stays at framework-implementation level. The practical risk is still clear: incomplete data inventories lead to incomplete breach scoping, uncontrolled sharing with third parties, mis-scoped retention, and gaps in access control coverage. Those failures often surface during regulator inquiries, customer audits, and incident postmortems.
A practical 30/60/90-day execution plan
First 30 days (establish control design + narrow scope)
- Appoint a control owner and backup; document RACI.
- Define the designated data types list and get approval.
- Publish the minimum metadata standard and required fields.
- Select the system of record (data catalog, GRC register, or equivalent) and lock down edit permissions.
- Pilot with a small set of high-risk systems and one high-risk data type.
Days 31–60 (populate + connect operational triggers)
- Expand population across core systems and primary third parties.
- Implement intake triggers in change management, SDLC/security review, and procurement workflows.
- Create the reconciliation method (even if manual at first) and start exception tracking.
- Produce an audit-ready sampling package for a few data types.
Days 61–90 (prove maintenance + integrate downstream use)
- Run an attestation cycle for owners and close exceptions.
- Integrate inventory outputs into at least two downstream processes (incident scoping, access reviews, third-party reviews, retention).
- Tune metadata definitions based on audit feedback and operational friction.
- Establish a steady-state calendar and evidence collection checklist.
Tooling note (where Daydream fits naturally)
If you already track controls in Daydream, map ID.AM-07 to a named control owner, link it to the designated data type register and inventory system of record, and schedule recurring evidence collection (inventory export, reconciliation report, and owner attestations). The win is less about “document storage” and more about making evidence collection repeatable across control cycles.
Frequently Asked Questions
What counts as “designated data types” under ID.AM-07?
The framework leaves selection to you. Define a bounded list tied to risk, legal exposure, and operational criticality, then get formal approval and maintain a change log. (NIST CSWP 29)
Do we need a specialized data catalog to comply?
No. You need a maintained inventory plus consistent metadata and evidence of upkeep. A catalog helps, but a controlled GRC register or structured repository can work if it enforces required fields and audit trails.
How do we handle unstructured data like files, emails, and chat exports?
Treat repositories as first-class entries (file shares, collaboration suites, ticketing attachments) and document where designated data types can appear, what controls apply (DLP, retention), and who owns those repositories.
What’s the minimum metadata auditors will expect?
Expect to show location, ownership, classification, access model, retention, and external sharing at a minimum. Add data flows for high-risk types, since flow mapping drives incident scoping and third-party mapping.
How do we keep the inventory current in a fast-moving SaaS environment?
Make procurement and SaaS onboarding a required trigger for inventory updates, then reconcile periodically against SaaS discovery outputs or SSO app catalogs. Track exceptions with tickets and due dates.
How should we represent third parties in the inventory?
Record third parties as recipients/processors for each designated data type, and link to the third-party record (contract, DPA, security review). This keeps TPRM and data governance aligned during audits.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream