MEASURE-2.10: Privacy risk of the AI system – as identified in the map function – is examined and documented.

MEASURE-2.10 requires you to take the privacy risks you already identified during the NIST AI RMF MAP function and run a documented examination that confirms impact, likelihood, controls, and residual risk for the AI system. Operationally, this means producing a repeatable privacy risk assessment package tied to the system’s data flows, model behavior, and deployment context, with evidence you can defend in audit. 1

Key takeaways:

  • Your “MAP” privacy risks are inputs; MEASURE-2.10 demands analysis, decisions, and documentation you can show on request. 1
  • Scope must cover the full AI lifecycle: training data, inference inputs/outputs, logging, monitoring, and third-party data/model dependencies. 1
  • Audit readiness hinges on artifacts: a privacy risk register entry per risk, testing/validation evidence where feasible, and clear control ownership and review cadence. 1

MEASURE-2.10 is a “make it real” requirement: it expects you to take privacy concerns discovered during scoping and mapping and turn them into an examined, recorded position the organization can stand behind. For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat this as a system-level privacy risk assessment that is tightly linked to the AI system’s MAP outputs: data inventory, data flows, stakeholder context, intended use, and identified harms. 1

This requirement is medium severity in practice because gaps are easy to spot. Teams often have a MAP workshop deck or a high-level DPIA-like document, but they cannot show (a) how each mapped privacy risk was tested or evaluated, (b) what controls address it, (c) who owns those controls, and (d) what residual risk was accepted, mitigated, or escalated. MEASURE-2.10 is the bridge between “we identified risks” and “we evaluated them and can prove governance.” 1

Regulatory text

Text (excerpt): “Privacy risk of the AI system – as identified in the map function – is examined and documented.” 1

Operator meaning: You must (1) take the privacy risks already identified during MAP, (2) examine them using a defined method appropriate to your organization, and (3) document both the analysis and the resulting decisions (mitigations, acceptance, escalation, or design changes). The output needs to be specific to the AI system, not a generic privacy policy statement. 1

What “examined” must include, in practice:

  • A clear description of each privacy risk scenario tied to a concrete data flow or model behavior.
  • An assessment of impact and likelihood using your standard risk methodology (or a defined AI/privacy-specific one).
  • Evidence of control design and operation (or a plan with dates/owners if not yet implemented).
  • Residual risk determination and sign-off path. 1

Plain-English interpretation

You already “mapped” privacy risks. Now you must do the work to validate them, size them, decide what to do, and write it down in a way an auditor, regulator, or internal risk committee can review without guessing.

Think of the deliverable as a Privacy Risk Examination Packet for the AI system:

  • Inputs: MAP outputs (data inventory, purpose, context, stakeholders, risk list).
  • Processing: structured examination (risk analysis + control mapping + testing/verification where feasible).
  • Outputs: documented risk outcomes, owners, and review cadence. 1

Who it applies to

Entity scope: Any organization developing, integrating, procuring, or deploying AI systems, including those that use third parties for models, data, hosting, evaluation tooling, or annotation. 1

Operational context where this comes up most:

  • You deploy an AI feature that processes personal data (direct identifiers or inferred attributes).
  • You fine-tune a model on internal data or customer data.
  • You use a third-party model API and send user prompts, attachments, or metadata.
  • You run monitoring/observability that stores prompts, outputs, or embeddings.
  • You repurpose an AI system to a new user group or geography, changing privacy expectations. 1

Control owners (typical):

  • Privacy (accountable for privacy risk method and sign-off path)
  • Product/Engineering (responsible for implementation and technical evidence)
  • Security (responsible for access controls, logging, retention, and secure SDLC alignment)
  • Legal (advisory for contract and notices; do not make Legal the sole owner)
  • Procurement/TPRM (where third-party data/model components exist) 1

What you actually need to do (step-by-step)

Step 1: Define the unit of assessment (the “AI system” boundary)

Document what is in scope:

  • Model(s) used (first-party, open-source, third-party API)
  • Training/fine-tuning datasets (sources, sensitivity, rights)
  • Inference inputs (what users submit; what the system collects)
  • Outputs (to whom, how stored, how reused)
  • Logging/telemetry (prompts, outputs, embeddings, identifiers)
  • Human review loops (labeling, QA, escalations)
  • Third parties that receive data (model providers, hosting, analytics) 1

Artifact: System Boundary & Data Flow Diagram (AI + privacy-focused).
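The Step 1 boundary inventory can be captured as a structured record rather than prose. Below is a minimal sketch in Python; the class and field names are illustrative assumptions, not a prescribed NIST schema.

```python
from dataclasses import dataclass, field

# Hypothetical structure for the Step 1 boundary inventory; all field
# names are illustrative, not a mandated schema.
@dataclass
class AISystemBoundary:
    system_name: str
    models: list[str]             # first-party, open-source, or third-party API
    training_datasets: list[str]  # sources, sensitivity, rights
    inference_inputs: list[str]   # what users submit; what the system collects
    outputs: list[str]            # recipients, storage, reuse
    logging: list[str]            # prompts, outputs, embeddings, identifiers
    human_review_loops: list[str] # labeling, QA, escalations
    third_parties: list[str] = field(default_factory=list)

    def data_touchpoints(self) -> int:
        """Count distinct items that should appear in the data flow diagram."""
        return sum(len(v) for v in (
            self.models, self.training_datasets, self.inference_inputs,
            self.outputs, self.logging, self.human_review_loops,
            self.third_parties))

boundary = AISystemBoundary(
    system_name="support-assistant",
    models=["third-party chat API"],
    training_datasets=[],
    inference_inputs=["user prompts", "ticket attachments"],
    outputs=["agent-facing draft replies"],
    logging=["prompt/response logs (30-day retention)"],
    human_review_loops=["agent review before send"],
    third_parties=["model API provider"],
)
print(boundary.data_touchpoints())  # 7
```

A record like this makes it easy to verify that every touchpoint has a corresponding node in the data flow diagram.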

Step 2: Import the MAP-identified privacy risks into a workable register

Take each privacy risk identified in MAP and create a discrete entry with:

  • Risk statement in “scenario” form (trigger → data → harm)
  • Data categories affected
  • Data subjects affected
  • Processing purpose and necessity
  • Systems/teams involved 1

Tip: Avoid “privacy risk: data leakage” as a single entry. Split by pathway (prompt logging, model output, training data reuse, third-party transfer).
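The scenario-form register entry from Step 2, including the pathway split recommended in the tip, could look like this sketch. Field names and the two example entries are hypothetical.

```python
from dataclasses import dataclass

# Hypothetical register entry mirroring the Step 2 fields; names are
# illustrative, not a mandated schema.
@dataclass
class PrivacyRiskEntry:
    risk_id: str
    trigger: str          # what starts the scenario
    data: str             # data categories affected
    harm: str             # harm to data subjects
    data_subjects: str
    purpose: str          # processing purpose and necessity
    teams: list[str]      # systems/teams involved

    def scenario(self) -> str:
        """Render the risk statement in 'trigger -> data -> harm' form."""
        return f"{self.trigger} -> {self.data} -> {self.harm}"

# "Data leakage" split by pathway rather than kept as one generic entry.
entries = [
    PrivacyRiskEntry("PR-01", "prompt logs retained beyond stated period",
                     "user-submitted personal data",
                     "unauthorized internal exposure",
                     "end users", "support triage", ["Engineering", "Privacy"]),
    PrivacyRiskEntry("PR-02", "prompts forwarded to third-party model API",
                     "identifiers in attachments",
                     "disclosure outside contract scope",
                     "customers", "draft generation", ["TPRM", "Privacy"]),
]
print(entries[0].scenario())
```

Splitting by pathway keeps each entry testable: PR-01 can be verified against retention settings, PR-02 against the vendor contract and routing configuration.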

Step 3: Examine each risk with a consistent method

For each risk scenario, document:

  • Impact analysis: what could happen to individuals (and second-order harm where relevant)
  • Likelihood analysis: how plausible given architecture, access paths, and controls
  • Existing controls: technical + procedural controls that reduce likelihood/impact
  • Residual risk: remaining exposure after controls
  • Decision: accept, mitigate, redesign, restrict use, or stop deployment
  • Owner and due date: for any mitigation actions 1

Evidence expectation: The examination should cite concrete system facts (configurations, retention settings, access roles, API parameters, vendor terms) rather than assumptions.
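One way to make the Step 3 examination consistent is a scoring function that derives residual risk and a decision from impact, likelihood, and documented controls. The 1–5 scales, the one-notch-per-control likelihood reduction, and the appetite threshold below are assumptions for illustration; substitute your organization's methodology.

```python
# Illustrative examination record for Step 3; the scales, the per-control
# likelihood reduction, and the risk-appetite threshold are assumptions,
# not NIST-prescribed values.
def examine(risk_id: str, impact: int, likelihood: int,
            controls: list[str], risk_appetite: int = 6) -> dict:
    """Score a risk scenario and derive a residual-risk decision."""
    inherent = impact * likelihood
    # Assumption: each documented, operating control reduces likelihood
    # by one notch (floor of 1).
    residual_likelihood = max(1, likelihood - len(controls))
    residual = impact * residual_likelihood
    decision = "accept" if residual <= risk_appetite else "mitigate"
    return {
        "risk_id": risk_id,
        "inherent": inherent,
        "residual": residual,
        "decision": decision,
        "controls": controls,
    }

result = examine("PR-01", impact=4, likelihood=4,
                 controls=["30-day log retention", "role-based log access"])
print(result["residual"], result["decision"])  # 8 mitigate
```

The point is not the arithmetic but the record: every entry carries the same fields, so an auditor can trace how each decision was reached.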

Step 4: Validate “privacy-by-behavior” risks (AI-specific)

MAP often surfaces AI-specific privacy risks that require targeted examination. Build tests or review evidence for items like:

  • Memorization risk indicators and training data exposure pathways (if you train/fine-tune).
  • Prompt/output handling: whether personal data is captured, stored, or sent to third parties.
  • Inference attacks or reconstruction concerns where relevant to your use case.
  • Unintended sensitive inference (model derives health, location, or other sensitive traits from non-sensitive inputs). 1

Artifact: Privacy Risk Test Notes (even if qualitative), including test owner, date, and system version.

Step 5: Tie mitigations to controls, and controls to owners

Operationalize by mapping each mitigation to:

  • Control name and description (policy, technical guardrail, or process step)
  • Control owner
  • How you know it works (evidence type)
  • Review trigger (model change, vendor change, new data source, new market) 1

This is where many programs fail: they list mitigations but cannot show control operation evidence.
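A small completeness check over the Step 5 matrix catches exactly that failure mode: controls listed without an evidence type. The matrix layout below is an illustrative sketch.

```python
# Sketch of the Step 5 mitigation-to-control matrix; field names and the
# example rows are illustrative.
controls = [
    {"control": "Prompt log retention limit", "owner": "Engineering",
     "evidence": "retention config export", "review_trigger": "model change"},
    {"control": "PII redaction before third-party call", "owner": "Engineering",
     "evidence": None, "review_trigger": "vendor change"},
]

def missing_evidence(matrix: list[dict]) -> list[str]:
    """Flag controls with no operating-evidence type defined."""
    return [c["control"] for c in matrix if not c["evidence"]]

print(missing_evidence(controls))  # ['PII redaction before third-party call']
```

Running a check like this before each review cycle turns "can you show control operation evidence?" from an audit surprise into a routine gate.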

Step 6: Document sign-off and escalation thresholds

Define who can accept residual privacy risk and when to escalate (e.g., to a risk committee). Document approvals for:

  • Go-live decision for the AI system
  • Any risk acceptance
  • Exceptions (like retaining prompts longer than standard) 1

Artifact: Privacy Risk Acceptance Record with approver, date, and scope.

Step 7: Set a review cadence and event-based triggers

MEASURE-2.10 is not “one and done.” Define:

  • Scheduled review (aligned to your governance cycle)
  • Triggers: model updates, prompt logging changes, new data categories, third-party changes, incident learnings, or expanded user populations 1
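The event-based triggers above can be encoded as a simple membership check in change-management intake. The trigger names mirror the bullets and are illustrative.

```python
# Illustrative event-trigger check for Step 7; the trigger set mirrors
# the bullets above and is an assumption, not an exhaustive standard.
REEXAM_TRIGGERS = {
    "model_update", "logging_change", "new_data_category",
    "third_party_change", "incident_learning", "expanded_population",
}

def needs_reexamination(change_events: set[str]) -> bool:
    """True if any change event matches a defined re-examination trigger."""
    return bool(change_events & REEXAM_TRIGGERS)

print(needs_reexamination({"ui_copy_update"}))                  # False
print(needs_reexamination({"model_update", "ui_copy_update"}))  # True
```

Wiring this into the change intake form means re-examination is triggered by the change record itself, not by someone remembering to ask.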

Step 8: Put it on rails (recurring evidence collection)

Convert the above into an auditable control:

  • Documented procedure
  • Assigned control owner
  • Evidence checklist per release or review cycle
  • Storage location and retention rules 1

If you use Daydream, treat MEASURE-2.10 as a mapped control with an evidence request template (risk register export, sign-off, data flow diagram, test notes) and a recurring collection schedule so the next audit is a pull, not a scramble. 1

Required evidence and artifacts to retain

Minimum set most auditors will expect for defensibility:

  • MAP outputs reference: the list of privacy risks identified during MAP and how they feed MEASURE-2.10. 1
  • AI system data map: data flow diagram + inventory of inputs/outputs/logging/retention. 1
  • Privacy risk register entries: one per risk scenario, with impact/likelihood, controls, residual risk, and decision. 1
  • Control mapping: mitigation-to-control matrix, with owners and evidence types. 1
  • Testing/verification notes: where feasible, especially for logging, access, retention, and third-party transfers. 1
  • Approvals: risk acceptance and go-live sign-offs with dates and scope. 1
  • Change log linkage: model/version identifiers that tie the examination to what is actually deployed. 1

Common exam/audit questions and hangups

Expect these questions in internal audit, SOC-style examinations, or regulator-aligned reviews:

  • “Show me the privacy risks you identified in MAP, and where each is examined in MEASURE.”
  • “How do you know prompts/outputs are not retained longer than stated?”
  • “Which third parties receive personal data, and what contract terms govern that transfer?”
  • “Who approved the residual privacy risk, and what alternatives were considered?”
  • “What triggers a re-examination after a model update or feature expansion?” 1

Hangups:

  • No single source of truth for the AI system boundary.
  • Risk statements are generic and not tied to specific data flows.
  • Controls exist but lack operating evidence (screenshots, configs, logs, tickets). 1

Frequent implementation mistakes and how to avoid them

  • Mistake: Treating a general DPIA template as sufficient.
    Fix: Add AI-specific behaviors (prompt logging, output sharing, fine-tuning reuse, third-party model processing) and tie each to system evidence. 1

  • Mistake: Documenting mitigations without owners or due dates.
    Fix: Every mitigation becomes a control with an owner and an evidence plan. 1

  • Mistake: Ignoring third-party model/data pathways.
    Fix: Include third-party data transfer points in the data flow diagram and require contractual + technical evidence (routing, redaction, retention settings). 1

  • Mistake: No linkage to system versions/releases.
    Fix: Stamp the assessment with model version, configuration hash where applicable, and deployment date. 1
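Stamping an assessment with a configuration hash, as the fix above suggests, can be as simple as hashing a canonical serialization of the deployed settings. The record layout below is an illustrative sketch.

```python
import hashlib
import json
from datetime import date

# Sketch of binding an assessment to a deployment identity; the record
# layout is illustrative.
def stamp_assessment(assessment_id: str, model_version: str,
                     config: dict) -> dict:
    """Bind an assessment to a model version and a configuration hash."""
    # sort_keys makes the serialization canonical, so the same settings
    # always produce the same hash.
    config_hash = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()).hexdigest()[:12]
    return {
        "assessment_id": assessment_id,
        "model_version": model_version,
        "config_hash": config_hash,
        "stamped_on": date.today().isoformat(),
    }

stamp = stamp_assessment("PRA-2024-07", "chat-model-v3.2",
                         {"log_retention_days": 30, "redaction": True})
print(stamp["config_hash"])
```

If retention or redaction settings change, the hash changes, which makes it obvious that the old examination no longer describes what is deployed.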

Enforcement context and risk implications

NIST AI RMF is a voluntary framework, not a regulation, so this requirement is typically enforced indirectly: through customer audits, procurement requirements, internal governance, and regulators applying privacy expectations under separate authorities. Your operational risk is straightforward: if you cannot show that mapped privacy risks were examined and documented, you will struggle to justify launch decisions, defend your incident response posture, or satisfy due diligence questionnaires for AI-enabled products. 2

Practical execution plan (30/60/90-day)

First 30 days (Immediate)

  • Confirm AI system boundary and complete an AI-focused data flow diagram. 1
  • Import MAP privacy risks into a structured risk register format. 1
  • Assign owners across Privacy, Product/Engineering, Security, and TPRM for each risk. 1
  • Draft the examination template (impact, likelihood, controls, residual risk, decision, evidence). 1

By 60 days (Near-term)

  • Complete examinations for the highest-risk scenarios (personal data in prompts, third-party transfers, retention/logging, sensitive inference). 1
  • Build the mitigation-to-control mapping and define evidence for each control. 1
  • Establish sign-off workflow for residual risk acceptance and go-live. 1

By 90 days (Operationalize)

  • Run the process as a release gate: new model/version requires a privacy risk re-examination or a documented “no-change” rationale. 1
  • Implement recurring evidence collection (risk register export, approvals, configuration evidence) and store artifacts in a controlled repository. 1
  • Add event triggers (vendor changes, new data sources, feature expansion) to your change management intake. 1

Frequently Asked Questions

Do we need a formal DPIA to meet MEASURE-2.10?

You need a documented examination of MAP-identified privacy risks; a DPIA can satisfy it if it is system-specific and includes control evidence, owners, and residual risk decisions. If your DPIA is generic, it will not cover MEASURE-2.10 well. 1

How do we handle MEASURE-2.10 if the model is provided by a third party?

Examine privacy risk across data sent to the third party, what the third party stores, and what comes back in outputs, then document controls and contract constraints. Treat the third party as part of the system boundary for privacy risk. 1

What is the minimum documentation an auditor will accept?

A traceable chain from MAP risk → examined analysis → controls and evidence → residual risk decision and approval. If any link is missing, the requirement is not met in practice. 1

Do we need technical testing, or is a narrative assessment enough?

Narrative alone is fragile for AI systems. Where you can, attach system evidence such as retention settings, access controls, redaction rules, and prompt logging configurations to substantiate conclusions. 1

How often do we need to re-examine privacy risks?

Re-examine on meaningful change: new data categories, model updates, deployment to a new population, changes in logging/retention, or third-party changes. Also set a regular governance review cadence consistent with your enterprise risk process. 1

Who should sign off on residual privacy risk?

Use your existing risk acceptance model: Privacy typically owns methodology, but acceptance should sit with the accountable business owner, with escalation paths for material risk. Document the approver and scope. 1

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF program page


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream