CM-13: Data Action Mapping

To meet the CM-13 (Data Action Mapping) requirement, you must develop and document a map of your system’s data actions: where data is created, collected, accessed, modified, transmitted, stored, shared, and disposed of. Operationalize CM-13 by producing a system-scoped “data action map” tied to owners, authoritative sources, and a change process that keeps it current. 1

Key takeaways:

  • CM-13 is a documentation and control-readiness requirement: produce a system-specific map of data actions, not a generic diagram. 1
  • Your map must be maintainable: assign owners, define update triggers, and retain evidence that it stays aligned with system change. 1
  • Auditors will test completeness and currency by tracing from key data types and interfaces to your documented map and change records.

CM-13: Data Action Mapping sits in NIST’s Configuration Management (CM) family, but it functions as a practical “truth table” for how a system handles information across its lifecycle. If you cannot explain what the system does to data, you will struggle to defend access controls, monitoring, encryption coverage, retention, incident response scoping, and third-party data sharing decisions.

For a CCO, Compliance Officer, or GRC lead, the fastest path is to treat CM-13 as a system-level deliverable with clear boundaries: (1) define the system and its external dependencies, (2) enumerate data actions at the points where data changes state or location, and (3) connect each action to the component, interface, and responsible owner. Then lock it into your change governance so it stays accurate when systems evolve.

This page gives requirement-level implementation guidance you can hand to control owners and engineering teams. It prioritizes artifacts and exam-ready evidence over theory, while staying faithful to the control’s plain requirement: develop and document a map of system data actions. 1

Regulatory text

Requirement (CM-13): “Develop and document a map of system data actions.” 1

What the operator must do:
Create and maintain a document (or governed set of documents) that shows how data moves through the system and what the system does to that data at each step. “Data actions” should be explicit verbs tied to real components and interfaces, such as: collect/ingest, validate, transform, enrich, store, replicate, back up, transmit, log, export/share, archive, and delete/dispose. The output must be understandable to assessors and usable by operators, not just an engineering whiteboard diagram. 1
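The idea of “explicit verbs tied to real components and interfaces” can be sketched as a small data model. The following is a minimal illustration, not a prescribed format; the entry values (component, interface, owner) are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class DataAction(Enum):
    """Explicit verbs a system performs on data (CM-13 vocabulary)."""
    COLLECT = "collect"
    VALIDATE = "validate"
    TRANSFORM = "transform"
    STORE = "store"
    TRANSMIT = "transmit"
    LOG = "log"
    SHARE = "share"
    ARCHIVE = "archive"
    DISPOSE = "dispose"

@dataclass
class DataActionEntry:
    """One row of the data action map: a verb tied to a real component."""
    data_object: str   # e.g. "audit events"
    action: DataAction
    component: str     # service or system component performing the action
    interface: str     # API, queue, file drop, etc.
    owner: str         # accountable team or role

# Hypothetical example entry for an audit-log pipeline
entry = DataActionEntry(
    data_object="audit events",
    action=DataAction.STORE,
    component="logging pipeline",
    interface="agent -> collector",
    owner="SecOps",
)
print(entry.action.value)  # the explicit verb, e.g. "store"
```

Structuring entries this way (rather than as free-form diagram labels) is what makes the map sampleable by an assessor: every verb has a component, an interface, and an owner attached.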

Plain-English interpretation

CM-13 requires you to answer, with documentation: “What happens to data in this system?” That includes:

  • Where data enters (sources, ingestion methods, APIs, file drops, user input)
  • What the system does to it (processing steps, transformations, analytics, tokenization, hashing)
  • Where it goes and who can touch it (datastores, queues, caches, logs, third parties)
  • How it leaves (exports, reports, integrations, downstream systems)
  • How it ends (retention, archival, deletion, disposal)

If your system handles federal data (or you are a contractor supporting a federal system), CM-13 becomes a baseline hygiene control for assessment readiness under NIST SP 800-53 Rev. 5. 2

Who it applies to (entity and operational context)

CM-13 commonly applies in these contexts:

  • Federal information systems implementing NIST SP 800-53 controls. 2
  • Contractor systems handling federal data, where security control alignment and auditability matter (for example, systems supporting federal programs, regulated environments, or contractual security requirements). 1

Operationally, CM-13 is owned by GRC but executed with engineering, architecture, and security operations. It touches:

  • Product/application teams (what data actions occur in code and services)
  • Cloud/platform teams (storage, backups, replication, network paths)
  • Security teams (logging, monitoring, encryption coverage, DLP boundaries)
  • Data teams (pipelines, warehouses, BI exports)
  • Third-party management (processors, SaaS integrations, managed services)

What you actually need to do (step-by-step)

Use the steps below as your implementation runbook. The goal is a map that an assessor can trace.

Step 1: Define scope and system boundary

  1. Identify the system name, purpose, and boundary (what is “in” vs “out”).
  2. List external dependencies: identity providers, SaaS, managed databases, logging platforms, CI/CD, support tooling that handles production data.
  3. Declare authoritative sources for truth: architecture diagrams, IaC repos, service catalog, data catalog, integration registry.

Deliverable: “CM-13 scope statement” embedded in the data action map.

Step 2: Create a data inventory starter set (system-scoped)

You do not need an enterprise-wide data catalog to start. Build a practical list:

  • Primary data objects (customer records, case files, telemetry, audit logs)
  • Sensitive categories relevant to your environment (e.g., authentication secrets, tokens, regulated or contractual data)
  • Key identifiers used for joining or tracking

Deliverable: A table that lists “data object → sensitivity/handling notes → primary stores.”
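That deliverable table can live as a simple structured list with a basic completeness check. A sketch, with hypothetical object and store names:

```python
# Hypothetical starter inventory: data object -> handling notes -> primary stores
data_inventory = [
    {"object": "customer records", "sensitivity": "regulated", "stores": ["orders-db"]},
    {"object": "audit logs", "sensitivity": "internal", "stores": ["log-archive-bucket"]},
    {"object": "auth tokens", "sensitivity": "secret", "stores": ["token-cache"]},
]

# Quick completeness check: every data object must name at least one primary store
missing = [row["object"] for row in data_inventory if not row["stores"]]
assert not missing, f"Data objects without a primary store: {missing}"
print(f"{len(data_inventory)} data objects inventoried")
```

Keeping the inventory machine-checkable means later steps (lifecycle coverage, validation against cloud inventory) can reuse it directly instead of re-reading a prose document.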

Step 3: Enumerate data actions across the lifecycle

For each data object, document actions at minimum across:

  • Ingress (collection/creation)
  • Processing (validation, transformation, enrichment)
  • Storage (databases, object stores, caches, search indexes)
  • Transmission (service-to-service, external APIs, batch exports)
  • Access/use (users, admins, automated jobs)
  • Sharing (third parties, downstream systems)
  • Retention/disposal (archival, deletion, purge jobs)

Practical format that works in audits: a matrix.

Data object | Action | Component/service | Location | Trigger | Interfaces | Owner | Logs/controls reference
Audit events | Store | Logging pipeline | Cloud region | Continuous | Agent → collector | SecOps | Log retention config

Keep it system-specific. A generic “data flows through microservices” statement will not pass a serious assessment because it cannot be tested.

Step 4: Map trust boundaries and third-party touchpoints

Assessors focus on boundary crossings. For each external integration:

  • Identify what data leaves, why, and how (API, SFTP, webhook)
  • Identify where the third party stores/processes it (if known)
  • Link the integration to your third-party due diligence record (security review, DPA, data processing terms)

Deliverable: An “external data actions” appendix that lists each third party and the data actions involved.
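The linkage between each touchpoint and its due diligence record can be verified mechanically. A sketch, with hypothetical vendor names and record IDs:

```python
# Hypothetical external touchpoints: each must link to a third-party record
touchpoints = [
    {"vendor": "analytics-saas", "actions": ["transmit", "store"], "tprm_record": "TPRM-101"},
    {"vendor": "email-provider", "actions": ["transmit"], "tprm_record": None},
]

# Flag any touchpoint with no due diligence linkage
unlinked = [t["vendor"] for t in touchpoints if not t["tprm_record"]]
print(unlinked)  # vendors missing a third-party risk record
```

A check like this closes the most common appendix gap: an integration that is documented in the map but has no corresponding security review or contractual record.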

Step 5: Tie the map to configuration/change governance

CM-13 fails most often due to staleness. Build update triggers:

  • New integration or data store
  • New API endpoint or export/report
  • Schema changes introducing new sensitive fields
  • Changes to logging/telemetry destinations
  • Major architectural changes (new region, new pipeline, new queue)

Deliverable: A lightweight procedure: “When X changes, update the CM-13 data action map,” with named owners and a review/approval step.
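The “when X changes, update the map” procedure can be encoded as a simple lookup so change tooling can flag it automatically. A sketch, with hypothetical change-type names; the safe default for unknown types is to require review:

```python
# Hypothetical mapping from change-ticket type to CM-13 update requirement
UPDATE_TRIGGERS = {
    "new_integration": True,
    "new_data_store": True,
    "new_api_endpoint": True,
    "schema_change_sensitive_fields": True,
    "logging_destination_change": True,
    "ui_copy_change": False,  # cosmetic changes do not touch the data action map
}

def requires_map_update(change_type: str) -> bool:
    """Default to True for unknown change types: review rather than miss."""
    return UPDATE_TRIGGERS.get(change_type, True)

print(requires_map_update("new_data_store"))  # a trigger: map update required
print(requires_map_update("ui_copy_change"))  # not a trigger
```

Defaulting unknown change types to “update required” is the conservative choice: a false positive costs a short review, while a false negative is exactly the staleness failure mode described above.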

Step 6: Validate the map with technical spot-checks

Do a targeted verification:

  • Compare against cloud inventory (databases, buckets, queues, KMS keys)
  • Compare against API gateway routes and outbound network rules
  • Compare against logging sinks and SIEM integrations

Deliverable: A short “validation log” or ticket trail showing what you checked and what you updated.
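The spot-check against cloud inventory reduces to a set difference in both directions. A sketch, assuming the inventory export has already been parsed into a set of store names (all names here are hypothetical):

```python
# Documented stores from the data action map vs. a parsed cloud inventory export
documented_stores = {"orders-db", "log-archive-bucket", "token-cache"}
inventory_export = {"orders-db", "log-archive-bucket", "token-cache", "scratch-bucket"}

undocumented = inventory_export - documented_stores  # stores the map missed
stale = documented_stores - inventory_export         # documented stores no longer deployed

print(sorted(undocumented))  # candidates to add to the data action map
print(sorted(stale))         # entries to retire or investigate
```

Both directions matter: undocumented stores are coverage gaps, while stale entries undermine the map’s credibility when an assessor samples them.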

Step 7: Package for assessment readiness

Place the final artifacts in a controlled repository with versioning and access control. Add a one-page “how to read this map” guide for auditors.

Where Daydream fits naturally: Daydream helps you assign CM-13 ownership, standardize the implementation procedure, and schedule recurring evidence requests so you always have current artifacts ready for assessment.

Required evidence and artifacts to retain

Keep these items audit-ready and versioned:

  • Data Action Map (primary artifact): diagram + matrix (or a single combined document) showing data actions and flows. 1
  • Scope/boundary statement: what system components are covered.
  • System inventory extracts used to validate (export from cloud inventory, service catalog snapshot, or equivalent).
  • Integration list including third-party data touchpoints and data actions (share/export/transmit/store).
  • Change linkage evidence: tickets/PRs showing map updates tied to system changes.
  • Control ownership and procedure: named owner, update triggers, storage location, and review steps, aligned to the recommended best practice to map CM-13 to owner, implementation procedure, and recurring evidence artifacts. 1

Common exam/audit questions and hangups

Expect assessors to test CM-13 by sampling. Common questions:

  1. “Show me where this data type is stored, and where it is transmitted.” They will pick a sensitive data object and trace it end-to-end.
  2. “How do you know the map is current?” They want update triggers and change evidence, not verbal assurances.
  3. “Do logs contain production data?” Logging is a frequent blind spot; they will ask where logs go and what they contain.
  4. “Which third parties receive or process this data?” They will cross-check against your third-party inventory and contracts.
  5. “What is out of scope, and why?” A weak boundary definition creates findings because the map becomes non-testable.

Frequent implementation mistakes and how to avoid them

Mistake 1: Treating CM-13 as a one-time diagram.
Fix: build update triggers and require map updates in change management for integrations, datastores, and exports.

Mistake 2: Documenting flows but not actions.
Fix: add action verbs per hop (collect, validate, transform, store, transmit, delete). The control asks for a map of data actions. 1

Mistake 3: Ignoring “secondary” data stores (logs, caches, analytics).
Fix: explicitly cover observability tooling, caches, data warehouses, BI exports, backups, and replicas.

Mistake 4: Leaving out third-party processing.
Fix: create an external touchpoints appendix and tie each touchpoint to your third-party risk record.

Mistake 5: No authoritative source linkage.
Fix: cite where each claim comes from (IaC repo, cloud inventory, architecture decision record, service catalog entry) so you can defend it during assessment.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for CM-13. Practically, CM-13 failures show up as assessment findings because missing data action documentation makes related controls hard to substantiate (for example, proving encryption coverage, monitoring scope, retention and deletion behavior, and controlled data sharing). From a risk view, stale or incomplete maps increase the chance that sensitive data moves to places your controls do not cover, especially through logs, exports, and third parties.

A practical 30/60/90-day execution plan

Execute in phases, treating the dates as milestones rather than guarantees of outcomes.

First 30 days (Immediate): produce a minimum viable map

  • Confirm system boundary and owner.
  • Identify top data objects and the primary stores.
  • Draft the data action matrix for major flows (ingress → processing → storage → egress).
  • List all external integrations and third parties that receive data.
  • Store artifacts in a versioned repository and open the “CM-13 update procedure” task.

Next 60 days (Near-term): make it testable and complete

  • Expand coverage to logs, backups, replication, caches, analytics, and admin tooling.
  • Validate against cloud inventory, API gateway routes, and outbound network paths.
  • Add evidence links (tickets/PRs) showing corrections made after validation.
  • Align third-party touchpoints with third-party due diligence records.

Next 90 days (Operationalize): keep it current

  • Embed CM-13 updates into change management (new integrations, exports, data stores).
  • Add a recurring review cadence that matches how often your architecture changes (set this based on your environment’s change rate).
  • Run an internal audit-style trace exercise: pick a sensitive field and trace it across the map, controls, and evidence.

Frequently Asked Questions

What counts as a “data action” for CM-13?

Treat data actions as explicit verbs the system performs on data: collect, create, validate, transform, store, replicate, transmit, export/share, log, archive, and delete/dispose. The documentation should tie each action to a component and interface so it can be tested. 1

Do we need a diagram, a table, or both?

NIST only requires that you develop and document a map of system data actions. In practice, a diagram helps people understand flows, and a table (matrix) makes it auditable because each action can be traced and sampled. 1

How detailed should the map be for microservices?

Document to the level where a reviewer can trace where sensitive data is stored and transmitted and identify the responsible owner. If listing every internal call is unrealistic, group services by bounded context and fully detail boundary crossings, datastores, and exports.

Are logs and monitoring data in scope?

Yes if the system writes logs/telemetry that include production data or identifiers, because that is a data action (create/transmit/store). Auditors often probe logs because they commonly bypass standard data handling assumptions.

How do we keep CM-13 from becoming shelfware?

Make the map a required input/output of change management for integrations, new data stores, exports, and schema changes. Keep evidence of updates (tickets/PRs) alongside the map so you can prove it stays current.

How does CM-13 relate to third-party risk management?

The map should explicitly show when data is transmitted to, stored by, or processed by a third party, and it should point to the third-party record (security review, contractual terms). That linkage reduces gaps between your architecture reality and your third-party due diligence file.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream