MEASURE-2.8: Risks associated with transparency and accountability – as identified in the map function – are examined and documented.

MEASURE-2.8 requires you to take the transparency and accountability risks you already identified during MAP and run a repeatable review that evaluates impact, likelihood, and control gaps, then document the results in a durable record tied to each AI system. The goal is audit-ready traceability from “risk identified” to “risk examined, owned, and evidenced.” 1

Key takeaways:

  • You need a documented, system-specific assessment of transparency and accountability risks identified in MAP, not a generic policy statement. 1
  • Assign owners, define acceptance criteria, and retain evidence that reviews happened and led to decisions or control changes. 1
  • Operationalize with a lightweight cadence: trigger reviews on releases, model changes, incidents, and material third-party dependency changes. 1

Transparency and accountability risks are rarely “single-control” problems. They show up as gaps between what your AI system does and what you can explain, prove, or take responsibility for: unclear provenance of training data, undocumented model changes, inability to provide user notices, unclear escalation paths, weak audit trails, or unresolved ownership when a third party supplies key components.

MEASURE-2.8 exists to force closure on those gaps. The MAP function is where you identify risks in context (system purpose, users, impacts, stakeholders). MEASURE is where you examine those identified transparency and accountability risks with enough rigor that a regulator, auditor, or internal risk committee can see: (1) what the risk is, (2) how you evaluated it, (3) what decision you made, and (4) what evidence supports that decision. 1

For most CCOs and GRC leads, the fastest path is to treat MEASURE-2.8 as a structured “risk examination memo” per AI system, backed by artifacts already produced by engineering, product, privacy, and security. Your job is to define the minimum standard, enforce consistency, and make it easy to reproduce.

Regulatory text

Requirement (excerpt): “Risks associated with transparency and accountability – as identified in the map function – are examined and documented.” 1

Operator meaning: You must (a) take the set of transparency and accountability risks identified during MAP, (b) perform an explicit examination of those risks (not just list them), and (c) document the examination in a way that is attributable to a specific AI system, version, and business context. The documentation must show how you reached conclusions and what actions, owners, and residual risks remain. 1

Plain-English interpretation (what “examined and documented” really means)

“Examined” means you did more than capture a risk statement. You evaluated the risk using defined criteria and reviewed supporting evidence. In practice, an examination record should answer:

  • What is the transparency/accountability risk and who is affected?
  • What is the system claim you need to support (explainability, notice, provenance, audit trail, governance ownership)?
  • What evidence exists today (logs, documentation, review approvals, testing results)?
  • What is the control gap, and what is the residual risk after controls?
  • Who owns remediation or risk acceptance, and by when?

“Documented” means that record is stored, versioned, and retrievable, and it is linked to your MAP outputs (risk register, system inventory, impact assessment) so you can show traceability. 1

Who it applies to (entity and operational context)

Applies to: organizations that develop, fine-tune, integrate, or deploy AI systems in production or material internal use, including systems built with third-party models, APIs, datasets, and evaluators. 1

Common operational contexts where MEASURE-2.8 becomes non-optional:

  • AI that makes or influences decisions about people (eligibility, pricing, hiring, access, fraud flags).
  • Customer-facing AI (chatbots, copilots) where user notice, explanation, and escalation drive complaint risk.
  • AI built on third-party foundation models where provenance, responsibility boundaries, and audit rights are unclear.
  • Safety- or mission-critical workflows where accountability for failures must be provable.

Control owners you will typically need involved:

  • Product owner (intended use, user communications, release gating)
  • Engineering/ML (model behavior, evaluation, change management)
  • Security (logging, access control, monitoring)
  • Privacy/Legal/Compliance (notices, DPIA/AI impact assessment alignment, accountability)
  • Third-party risk (contractual commitments, supplier transparency)

What you actually need to do (step-by-step)

Step 1: Define your transparency and accountability risk taxonomy (minimum set)

Start simple and consistent. Create a standard set of risk prompts that map to the risks you identified in MAP. Examples you can operationalize quickly:

Transparency risk prompts

  • User notice: Do users know they are interacting with AI? Are limitations disclosed?
  • Explainability: Can you provide a meaningful explanation appropriate to the impact?
  • Data provenance: Can you trace training/fine-tuning data sources and rights?
  • Model provenance: Can you identify model version, supplier, and changes?
  • Output traceability: Can you trace outputs to inputs, prompts, and system state?

Accountability risk prompts

  • Ownership: Is there a named accountable executive and system owner?
  • Decision rights: Who can approve changes, exceptions, and risk acceptance?
  • Audit trail: Are logs sufficient to reconstruct key events and decisions?
  • Escalation: Is there a path for incidents, complaints, and user appeals?
  • Third-party boundaries: Are responsibilities and liabilities clear with suppliers?

Document this taxonomy as your MEASURE-2.8 “review checklist” so examinations are comparable across systems. 1
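One lightweight way to make the checklist comparable across systems is to encode it as data. The sketch below is a hypothetical representation, assuming Python tooling; the prompt IDs and wording are illustrative, not prescribed by NIST.

```python
# Hypothetical sketch: the transparency/accountability risk taxonomy
# expressed as a reusable checklist, so examinations stay comparable.
# Prompt IDs and wording are illustrative, not prescribed by NIST.

TAXONOMY = {
    "transparency": {
        "user_notice": "Do users know they are interacting with AI? Are limitations disclosed?",
        "explainability": "Can you provide a meaningful explanation appropriate to the impact?",
        "data_provenance": "Can you trace training/fine-tuning data sources and rights?",
        "model_provenance": "Can you identify model version, supplier, and changes?",
        "output_traceability": "Can you trace outputs to inputs, prompts, and system state?",
    },
    "accountability": {
        "ownership": "Is there a named accountable executive and system owner?",
        "decision_rights": "Who can approve changes, exceptions, and risk acceptance?",
        "audit_trail": "Are logs sufficient to reconstruct key events and decisions?",
        "escalation": "Is there a path for incidents, complaints, and user appeals?",
        "third_party_boundaries": "Are responsibilities and liabilities clear with suppliers?",
    },
}

def blank_checklist() -> dict:
    """Return a per-system checklist with every prompt unanswered."""
    return {
        category: {prompt_id: None for prompt_id in prompts}
        for category, prompts in TAXONOMY.items()
    }
```

Keeping the taxonomy in one place means every system's examination answers the same questions, which is what makes the records comparable during an audit.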

Step 2: Pull the MAP outputs and bind them to a system record

For each in-scope AI system, collect the MAP artifacts that identified the transparency/accountability risks (for example: system context, intended use, stakeholder analysis, impact considerations, and the risk list itself). Create a single system risk file that references those MAP findings and identifies the current system version and deployment context. 1
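The "single system risk file" can be as simple as a record that refuses to count as bound until it references both MAP artifacts and at least one identified risk. This is a minimal sketch under that assumption; the field names are illustrative and should be adapted to your inventory and risk-register schema.

```python
# Hypothetical sketch: a "system risk file" that binds MAP outputs to a
# specific system version and deployment context. Field names are
# illustrative, not a required schema.
from dataclasses import dataclass, field

@dataclass
class SystemRiskFile:
    system_name: str
    system_version: str
    deployment_context: str
    map_artifacts: list = field(default_factory=list)    # links to MAP outputs
    identified_risks: list = field(default_factory=list) # MAP risk IDs

    def is_bound(self) -> bool:
        """Traceability requires MAP references and at least one risk."""
        return bool(self.map_artifacts) and bool(self.identified_risks)
```

The point of the `is_bound` gate is traceability: an examination record with no link back to MAP outputs is exactly the gap auditors probe first.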

Step 3: Perform the examination using defined scoring and decision criteria

Use a consistent method. Auditors do not require a specific scoring model, but they do expect repeatability.

A practical structure:

  1. Risk statement (what can go wrong; who is harmed).
  2. Evidence review (what you checked).
  3. Control evaluation (existing controls and gaps).
  4. Residual risk (post-control assessment).
  5. Decision (remediate, accept, avoid, transfer).
  6. Owner + follow-ups (named accountable party; tracked action items).
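The six-part structure above can be enforced mechanically: a record only counts as "examined" when every part is filled in, not just the risk statement. A minimal sketch, assuming Python tooling and illustrative field names:

```python
# Hypothetical sketch: the six-part examination structure as a record
# with a completeness check. "Examined" means no part is left blank.
from dataclasses import dataclass, asdict

@dataclass
class Examination:
    risk_statement: str      # what can go wrong; who is harmed
    evidence_review: str     # what you checked
    control_evaluation: str  # existing controls and gaps
    residual_risk: str       # post-control assessment, e.g. "medium"
    decision: str            # remediate | accept | avoid | transfer
    owner: str               # named accountable party

def is_examined(exam: Examination) -> bool:
    """A record counts as 'examined' only when every field is non-empty."""
    return all(str(v).strip() for v in asdict(exam).values())
```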

Create a short “risk acceptance standard” for transparency/accountability risks. Example decision rules you can enforce:

  • If user-facing, require documented user notice content and placement.
  • If decisions materially affect individuals, require an explanation approach and an escalation path.
  • If a third party supplies model or data, require contractual transparency commitments and audit support artifacts.

Tie each decision to evidence (screenshots, links, test reports, approvals). 1
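Decision rules like these are easiest to enforce when release tooling can evaluate them. The sketch below encodes the three example rules as a gating check; the flag names are hypothetical placeholders for attributes you would pull from your system inventory.

```python
# Hypothetical sketch: the example decision rules above as executable
# gating checks. Flag names are illustrative; encode your own risk
# acceptance standard the same way.
def required_artifacts(system: dict) -> list:
    """Return evidence artifacts a system must attach before sign-off."""
    required = []
    if system.get("user_facing"):
        required.append("documented user notice content and placement")
    if system.get("material_individual_impact"):
        required.append("explanation approach and escalation path")
    if system.get("third_party_model_or_data"):
        required.append("contractual transparency commitments and audit support artifacts")
    return required
```

Called at release time, this turns the risk acceptance standard from a document into a checklist the approver cannot skip.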

Step 4: Document results in an auditable artifact (your “MEASURE-2.8 record”)

Your documentation should be consistent and retrievable. A one- to two-page template works well if it forces evidence links.

Minimum fields to include

  • System name, owner, business purpose, deployment environment
  • Model/dataset provenance summary and version identifiers
  • Transparency/accountability risks (from MAP) with examination notes
  • Evidence list (by risk)
  • Control gaps and remediation plan
  • Residual risk rating and rationale
  • Approvals (risk owner, compliance sign-off if required)
  • Date, reviewer, next review trigger
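A template only "forces evidence links" if incomplete records cannot be filed. One way to do that, sketched below with illustrative field names, is a validator that lists whatever the draft is still missing.

```python
# Hypothetical sketch: the minimum fields above as a template validator,
# so an incomplete MEASURE-2.8 record cannot be filed. Field names are
# illustrative.
MINIMUM_FIELDS = [
    "system_name", "owner", "business_purpose", "deployment_environment",
    "provenance_summary", "version_identifiers",
    "risks_with_examination_notes", "evidence_by_risk",
    "control_gaps_and_remediation", "residual_risk_rating",
    "approvals", "review_date", "reviewer", "next_review_trigger",
]

def missing_fields(record: dict) -> list:
    """List template fields that are absent or empty in a draft record."""
    return [f for f in MINIMUM_FIELDS if not str(record.get(f, "")).strip()]
```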

Step 5: Add triggers so this stays operational

MEASURE-2.8 breaks if it is a one-time exercise. Add triggers that require re-examination and re-documentation:

  • Material model update, fine-tuning, prompt/guardrail changes
  • New dataset, new data source, or changed data rights
  • New third party model/API or material contract change
  • User experience changes affecting notice or explanation
  • Incident, complaint trend, or audit finding
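The triggers above are most reliable when the change-management pipeline checks them automatically. A minimal sketch, assuming your release tooling can tag each change with event names (the names here are illustrative):

```python
# Hypothetical sketch: the re-examination triggers above as a
# change-event check that release tooling can call. Event names
# are illustrative.
TRIGGER_EVENTS = {
    "model_update", "fine_tuning", "prompt_or_guardrail_change",
    "new_dataset", "data_rights_change",
    "third_party_change", "contract_change",
    "ux_notice_or_explanation_change",
    "incident", "complaint_trend", "audit_finding",
}

def reexamination_required(events: set) -> bool:
    """True if any change event in this release hits a MEASURE-2.8 trigger."""
    return bool(events & TRIGGER_EVENTS)
```

Wiring this into release gating is what turns MEASURE-2.8 from a one-time exercise into an operational control.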

Step 6: Assign ownership and build recurring evidence collection

Map MEASURE-2.8 to a policy, procedure, control owner, and recurring evidence collection. This is the quickest way to stay audit-ready and prevent “we did it once” drift. 1

If you run Daydream, treat MEASURE-2.8 as a control with a defined owner and evidence tasks (template completion, approvals, and attachments) scheduled around your release process, plus an exception workflow for urgent launches.

Required evidence and artifacts to retain (audit-ready checklist)

Retain artifacts that prove both the examination and the basis for decisions:

  • AI system inventory entry with owner and version identifiers
  • MAP outputs showing identified transparency/accountability risks
  • Completed MEASURE-2.8 examination record per system (template)
  • Evidence attachments, as applicable:
    • User notice language and placement proof (screenshots, UI specs)
    • Explanation approach documentation (what you disclose and when)
    • Model cards/system cards or internal technical summaries
    • Data provenance records (sources, rights, retention constraints)
    • Logging/audit trail configuration and sample log extracts
    • RACI for accountability and escalation paths
    • Change management records (approvals for model/prompt updates)
    • Third-party contracts/SOWs addressing transparency and audit support
  • Risk acceptance approvals and exception register entries
  • Remediation tickets with status and closure evidence
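Because the checklist requires evidence "by risk," a simple integrity check is to flag any examined risk with no retained artifact. A hypothetical sketch, assuming a mapping from risk IDs to attachment lists:

```python
# Hypothetical sketch: flag examined risks that have no retained
# evidence artifact, per the audit-ready checklist above.
def risks_without_evidence(evidence_by_risk: dict) -> list:
    """Return risk IDs that have no attached evidence artifacts."""
    return [risk for risk, artifacts in evidence_by_risk.items() if not artifacts]
```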

Common exam/audit questions and hangups

Expect these lines of questioning:

  1. Traceability: “Show me the MAP-identified risks and where you examined them.” If you can’t link MAP outputs to the MEASURE-2.8 record, you will burn time in the exam.
  2. Versioning: “Which model version does this assessment cover?” Examiners look for stale documentation.
  3. Evidence quality: “How do you know the notice is presented to users?” Assertions without screenshots or specs do not hold up.
  4. Accountability: “Who can accept the residual risk, and where is that recorded?” Committees that discuss risk without a recorded decision create exposure.
  5. Third parties: “What do you rely on your model provider to do, and what do you validate yourself?” If the boundary is unclear, accountability gaps follow.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Treating MEASURE-2.8 as a policy statement.
    Fix: Require a per-system examination record with evidence links and approvals. 1

  • Mistake: No explicit “accountable owner.”
    Fix: Assign a named accountable executive and a system owner; record decision rights for exceptions.

  • Mistake: Documentation that is not release-aware.
    Fix: Tie re-examination triggers to change management and deployment pipelines.

  • Mistake: Ignoring third-party constraints.
    Fix: Add due diligence questions and contract clauses for audit support, documentation access, and model/version change notifications; store supplier artifacts with the system record.

  • Mistake: Over-scoring and under-evidencing.
    Fix: Keep scoring simple; spend effort on evidence quality, approvals, and traceability.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this specific NIST AI RMF requirement. Practically, the risk is indirect: if an incident occurs (harm, complaint, bias allegation, misleading AI interactions), transparency and accountability documentation becomes the first place investigators look to assess governance discipline and truthfulness of claims. Weak MEASURE-2.8 records increase the chance that you cannot demonstrate you understood risks, made informed decisions, or implemented controls aligned to stated policies. 2

Practical execution plan (30/60/90-day)

Below is a pragmatic plan you can run as a compliance program sprint. Timeboxes are sequencing guidance, not a claim about required duration.

First 30 days (stand up the control)

  • Define the MEASURE-2.8 template (one to two pages) and evidence checklist. 1
  • Identify in-scope AI systems and confirm owners and approvers.
  • Build the transparency/accountability risk taxonomy and checklist aligned to your MAP outputs.
  • Pilot the examination on one high-impact system and adjust the template based on friction.

Days 31–60 (scale to portfolio)

  • Roll out MEASURE-2.8 examinations across remaining in-scope systems, starting with customer-facing and high-impact use cases.
  • Establish a central repository structure (by system, by version) and enforce naming/versioning rules.
  • Add third-party dependency fields and require suppliers’ documentation where applicable.
  • Stand up a lightweight review forum (product + compliance + security) for risk acceptance decisions and exceptions.

Days 61–90 (operationalize and make it sticky)

  • Integrate MEASURE-2.8 triggers into change management and release gating (no production change ships without an updated examination record for the transparency/accountability risks it affects).
  • Define recurring evidence collection tasks per system owner (logs sample, notice verification, audit trail checks, supplier updates).
  • Run a tabletop audit: pick one system, reconstruct the chain from MAP → MEASURE-2.8 → remediation/acceptance → evidence.
  • If you use Daydream, configure the control with an owner, required fields, evidence tasks, and exception workflow so each release produces audit-ready records.

Frequently Asked Questions

Do I need a separate MEASURE-2.8 document for every model and feature?

You need documentation at the AI system level that is specific enough to cover the deployed model version and the user-facing behavior. If multiple features share the same model but have different user notice or escalation paths, document those differences explicitly.

What counts as a “transparency risk” versus an “accountability risk”?

Transparency risks relate to what you disclose and can explain (notice, provenance, traceability). Accountability risks relate to who owns outcomes and how decisions and changes are governed (owners, approvals, audit trails, escalation).

We rely on a third-party foundation model. Can we “inherit” their transparency documentation?

You can incorporate supplier artifacts, but you still must examine and document your specific deployment risks and responsibility boundaries. Record what you trust the third party to do, what you validate, and what evidence you retain.

How detailed does the evidence need to be?

Enough that someone outside the build team can verify the claim. For user notice, keep UI screenshots or released specs; for audit trails, keep configuration and sample logs; for governance, keep approval records.

Who should be the approver for risk acceptance under MEASURE-2.8?

The approver should have authority over the business outcome and resources needed to remediate. Many programs use a system owner for routine risks and a senior risk committee for higher-impact residual risks.

What if the MAP function produced a long list of risks and we can’t address all of them immediately?

MEASURE-2.8 does not require immediate remediation of every risk, but it does require examination and documentation. Record prioritization, rationale, interim controls, and explicit acceptance for deferred items.

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF program page

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream