MAP-3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.
To meet MAP-3.2, you must identify, quantify (where feasible), and document the full set of potential costs from AI errors and trustworthiness failures, including non-monetary harms, then compare those costs to your stated organizational risk tolerance and record the decision to accept, mitigate, transfer, or avoid the risk (NIST AI RMF Core). This becomes a repeatable control tied to incident learning and model changes.
Key takeaways:
- Document costs from AI failure modes, including non-monetary impacts (safety, rights, customer trust, operational disruption) (NIST AI RMF Core).
- Explicitly map those costs to your risk tolerance and decision outcomes (accept/mitigate/avoid/transfer) (NIST AI RMF Core).
- Keep audit-ready evidence showing you revisit cost assumptions after incidents, drift, or material changes (NIST AI RMF Core).
MAP-3.2 is a mapping requirement, but operators fail it for one simple reason: they treat AI risk as a technical performance problem rather than a business-cost problem that must be evaluated against a defined risk tolerance. NIST’s intent is operational: decision-makers should be able to look at your documentation and see (1) what can go wrong with the AI system, (2) what the organization stands to lose in money and in non-monetary harm, (3) what level of loss the organization is willing to tolerate, and (4) what you decided to do about it (NIST AI RMF Core).
This requirement is most useful when it becomes a standard artifact that gates deployment and change management. You should be able to produce a “cost of AI failure” view per use case, tie it to the enterprise risk appetite/risk tolerance statement, and show how it influenced controls: human review, fallback processes, customer disclosures, monitoring thresholds, and incident response playbooks (NIST AI RMF Core). If you operationalize MAP-3.2 well, you also reduce friction across Legal, Privacy, Security, Compliance, Product, and Model Risk because everyone works from the same cost-and-tolerance framing.
Regulatory text
Requirement (MAP-3.2): “Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.” (NIST AI RMF Core)
What the operator must do:
You must (a) examine potential and realized costs caused by AI errors and trustworthiness shortcomings, (b) include non-monetary costs, and (c) document the analysis in a way that explicitly connects to organizational risk tolerance, so the output can drive risk decisions and controls (NIST AI RMF Core). Treat this as a required decision record for each AI system and material use case.
Plain-English interpretation (what MAP-3.2 is really asking)
MAP-3.2 expects you to answer, in writing:
- "If the AI is wrong, unreliable, or untrusted, what does it cost us?" Costs include direct financial loss and non-monetary harm (for example: safety impacts, unfair outcomes, customer trust erosion, employee burden, regulatory exposure, and mission impact) (NIST AI RMF Core).
- "How much cost are we willing to tolerate?" Your analysis must link back to existing risk tolerance statements (enterprise risk appetite, product risk limits, model risk tiers, operational risk thresholds) (NIST AI RMF Core).
- "Given the cost and tolerance, what did we decide?" Document whether you accept risk as-is or require mitigations (controls, monitoring, human review, scope limits, customer communications, or even a no-go decision) (NIST AI RMF Core).
Who it applies to (entity and operational context)
Applies to: Organizations developing, procuring, integrating, or deploying AI systems (NIST AI RMF Core; NIST AI RMF program page).
Operationally, it applies when:
- You deploy an AI model that informs decisions, recommendations, detection, ranking, summarization, or automation.
- You materially change a model, its training data, prompts, decision thresholds, or downstream workflow.
- You rely on a third party’s AI capability (API model, embedded AI feature, managed service), because the cost of failure still lands on you.
Functions that should co-own execution:
- GRC/Compliance or Model Risk for control definition and evidence.
- Product/Business owner for impact and decision authority.
- Engineering/ML for failure modes and technical constraints.
- Legal/Privacy/Security for non-monetary harms and obligation mapping.
- Operations/Customer Support for real-world error costs and remediation load.
What you actually need to do (step-by-step)
Step 1: Define the “system + use case” boundary
Document the AI system, intended purpose, users, affected parties, and where AI outputs influence decisions. Include upstream dependencies (data sources) and downstream actions (auto-approve, queue, notify, deny, recommend).
Output artifact: System/Use-Case Profile (one per deployment context).
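A System/Use-Case Profile can be captured as structured data so it is queryable and version-controlled rather than buried in a document. A minimal sketch (field names are illustrative assumptions, not prescribed by NIST):

```python
from dataclasses import dataclass

@dataclass
class UseCaseProfile:
    """One profile per deployment context (Step 1)."""
    system_name: str
    intended_purpose: str
    users: list[str]
    affected_parties: list[str]
    upstream_dependencies: list[str]   # data sources, third-party APIs
    downstream_actions: list[str]      # auto-approve, queue, notify, deny, recommend

# Hypothetical example of one deployment context
profile = UseCaseProfile(
    system_name="claims-triage-model",
    intended_purpose="Rank incoming claims for manual review priority",
    users=["claims adjusters"],
    affected_parties=["policyholders"],
    upstream_dependencies=["claims history DB", "third-party fraud-score API"],
    downstream_actions=["queue", "notify"],
)
```

The same model deployed in a second workflow (for example, auto-denial instead of queueing) gets a second profile, because its downstream actions and cost exposure differ.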
Step 2: Build a failure-mode to cost map
Create a structured list of “expected errors” and “trustworthiness failures” that matter for your use case (NIST AI RMF Core). Keep it practical:
- Incorrect outputs (false positives/negatives, hallucinations, misranking)
- Instability (drift, sensitivity to inputs)
- Unavailability or latency (system functionality failure)
- Security and abuse modes (prompt injection, data leakage scenarios)
- Misuse or over-reliance by staff/customers
- Bias/unfair impact pathways (a non-monetary cost driver)
For each failure mode, capture:
- Impact categories: financial, operational, legal/regulatory, safety, privacy, equity, reputation/trust, customer harm, employee burden.
- Affected parties: customers, employees, applicants, patients, partners, the public.
- Detection/containment: how you would notice it and stop the bleed.
Output artifact: AI Failure Modes & Cost Register.
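One way to keep register entries consistent is to validate impact categories against a fixed vocabulary at entry time. A sketch, assuming the category names listed above (the validation rule and field names are illustrative):

```python
from dataclasses import dataclass

# Controlled vocabulary mirroring the impact categories in Step 2
IMPACT_CATEGORIES = {
    "financial", "operational", "legal_regulatory", "safety", "privacy",
    "equity", "reputation_trust", "customer_harm", "employee_burden",
}

@dataclass
class FailureModeEntry:
    failure_mode: str              # e.g. "hallucinated citations in replies"
    impact_categories: set[str]    # must be a subset of IMPACT_CATEGORIES
    affected_parties: list[str]
    detection: str                 # how you would notice it
    containment: str               # how you would stop the bleed

    def __post_init__(self):
        unknown = self.impact_categories - IMPACT_CATEGORIES
        if unknown:
            raise ValueError(f"Unknown impact categories: {unknown}")

# Hypothetical register entry
entry = FailureModeEntry(
    failure_mode="hallucinated policy citations in customer replies",
    impact_categories={"reputation_trust", "customer_harm", "legal_regulatory"},
    affected_parties=["customers"],
    detection="QA sampling of outbound replies",
    containment="disable auto-send; route all replies to human review",
)
```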
Step 3: Examine costs, including non-monetary costs
MAP-3.2 does not require every impact to be monetized. It requires that costs be examined and documented, including non-monetary costs (NIST AI RMF Core). Use a two-lane approach:
Lane A: Monetizable costs (where feasible)
- Incident response time and rework
- Chargebacks/refunds
- SLA penalties with customers
- Increased call center volume
- Fraud loss or missed revenue due to misclassification
Lane B: Non-monetary costs (must be explicit)
- Safety risks (physical or psychological)
- Rights-impacting outcomes (unfair denial, exclusion, unequal access)
- Loss of customer trust and adoption
- Decision explainability failures that prevent appeal or recourse
- Mission impact (for public sector or critical services)
Technique: Use an impact scale with definitions (e.g., “negligible/limited/material/severe”) and tie each tier to decision triggers. The scale can be qualitative; the discipline is the point.
Output artifact: Cost Assessment Worksheet.
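The tier-to-trigger discipline described above can be made mechanical. A minimal sketch, assuming the four example tiers; the trigger wording is illustrative and would come from your own risk policy:

```python
# Qualitative impact tiers tied to decision triggers (Step 3 technique).
# Tier names follow the example scale above; triggers are illustrative.
IMPACT_TIERS = ["negligible", "limited", "material", "severe"]

DECISION_TRIGGERS = {
    "negligible": "accept; document rationale",
    "limited":    "accept with monitoring threshold",
    "material":   "mitigate before launch; risk-owner sign-off required",
    "severe":     "avoid or must-mitigate; escalate to risk committee",
}

def decision_for(tier: str) -> str:
    """Return the decision trigger for an impact tier; reject unknown tiers."""
    if tier not in IMPACT_TIERS:
        raise ValueError(f"Unknown tier: {tier!r}")
    return DECISION_TRIGGERS[tier]
```

Encoding the scale this way forces every assessment to land on a defined tier, which is exactly the discipline the qualitative approach is meant to provide.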
Step 4: Connect costs to organizational risk tolerance
This is the control’s hinge. Pull the relevant risk tolerance statements and show the mapping:
- If enterprise policy defines “zero tolerance” for certain harms (for example, certain safety outcomes), mark those failure modes as avoid or must-mitigate before launch.
- If tolerance allows some operational disruption, record what controls make the residual risk acceptable.
If your organization does not have AI-specific tolerance, you still document how you used existing risk tolerance (operational risk, compliance risk, customer harm tolerance) to decide (NIST AI RMF Core).
Output artifact: Risk Tolerance Mapping & Decision Record (signed by accountable owner).
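The Decision Record itself can be a small structured object that forces the three things auditors ask for: the tolerance citation, the decision, and the approver. A sketch under stated assumptions (field names and the zero-tolerance rule are illustrative, not NIST-mandated):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskDecision:
    failure_mode_id: str       # links back to the cost register
    tolerance_reference: str   # cite the statement, e.g. "ERM policy section 4.2"
    cost_tier: str             # from your impact scale
    decision: str              # accept | mitigate | transfer | avoid
    approver: str              # accountable owner who signed

# Illustrative rule: tiers the enterprise has zero tolerance for
# cannot be accepted as-is (Step 4, first bullet).
ZERO_TOLERANCE_TIERS = {"severe"}

def default_decision(cost_tier: str) -> str:
    """Suggest a starting decision; the accountable owner still signs."""
    if cost_tier in ZERO_TOLERANCE_TIERS:
        return "avoid_or_mitigate"
    return "accept_with_controls"
```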
Step 5: Decide and operationalize controls
For each “unacceptable” or “needs mitigation” item, write:
- Control(s) required (human-in-the-loop, thresholds, monitoring, rate limits, rollback plan, user messaging, appeal path)
- Owner and due date
- Evidence to prove it works (test results, monitoring dashboards, QA sampling)
Output artifact: Mitigation Plan with Control-to-Risk Traceability.
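Control-to-risk traceability means each mitigation item carries the risk ID it addresses plus an owner, due date, and evidence list. A minimal sketch (identifiers and values are hypothetical):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MitigationItem:
    risk_id: str           # links back to the failure-mode register
    controls: list[str]    # human-in-the-loop, thresholds, rollback plan, ...
    owner: str
    due_date: date
    evidence: list[str]    # test results, dashboards, QA sampling

# Hypothetical mitigation item for one register entry
item = MitigationItem(
    risk_id="FM-007",
    controls=["human review below confidence threshold", "rollback plan"],
    owner="product-owner",
    due_date=date(2025, 6, 30),
    evidence=["QA sampling report", "monitoring dashboard"],
)
```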
Step 6: Close the loop with realized errors
MAP-3.2 covers realized errors too (NIST AI RMF Core). Add a trigger: after any material incident, model rollback, or validated customer harm, you update:
- the failure-mode register,
- the cost assessment,
- the risk decision.
Output artifact: Post-Incident Cost Reassessment Addendum.
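The reassessment trigger in Step 6 can be wired into incident tooling as a simple check. A sketch, with illustrative event names:

```python
# Events that reopen the failure-mode register, cost assessment,
# and risk decision (Step 6). Event names are illustrative.
REVIEW_TRIGGERS = {
    "material_incident",
    "model_rollback",
    "validated_customer_harm",
}

def needs_cost_reassessment(event: str) -> bool:
    """Return True when an operational event should trigger a
    Post-Incident Cost Reassessment Addendum."""
    return event in REVIEW_TRIGGERS
```

Hooking a check like this into the incident-closure workflow is what turns MAP-3.2 from a static document into an operating control.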
Required evidence and artifacts to retain (audit-ready)
Maintain a single “MAP-3.2 evidence pack” per AI system:
- System/Use-Case Profile (scope, affected parties, decision points)
- AI Failure Modes & Cost Register (including non-monetary costs) (NIST AI RMF Core)
- Risk tolerance references (risk appetite statement excerpts; committee minutes)
- Risk Tolerance Mapping & Decision Record (accept/mitigate/avoid/transfer) (NIST AI RMF Core)
- Mitigation plan and control designs (monitoring, human review, fallback)
- Testing/validation results that support reduced error cost exposure
- Incident log entries and post-incident reassessments tied back to costs (NIST AI RMF Core)
- Change management records showing reviews occurred before material changes
If you use Daydream to manage third-party risk and AI due diligence, store these artifacts as structured records tied to each system and third party, so refresh and evidence collection are automatic rather than ad hoc.
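A completeness check over the evidence pack makes gaps visible before an audit does. A sketch, with artifact keys mirroring the list above (the names are illustrative, not a NIST schema):

```python
# Required artifacts per AI system, mirroring the evidence-pack list above.
REQUIRED_ARTIFACTS = {
    "use_case_profile",
    "failure_modes_cost_register",
    "risk_tolerance_references",
    "decision_record",
    "mitigation_plan",
    "validation_results",
    "incident_log",
    "change_management_records",
}

def missing_artifacts(pack: set[str]) -> set[str]:
    """Return the required artifacts not yet present for a system."""
    return REQUIRED_ARTIFACTS - pack

# Hypothetical partially assembled pack
pack = {"use_case_profile", "decision_record", "mitigation_plan"}
gaps = missing_artifacts(pack)
```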
Common exam/audit questions and hangups
Auditors and internal reviewers typically press on these points:
- “Show me non-monetary costs.” If your register only has dollars, you missed the requirement (NIST AI RMF Core).
- “Where is risk tolerance stated, and who approved the decision?” They want governance, not a spreadsheet.
- “How did realized incidents change your assessment?” A static document signals weak operations (NIST AI RMF Core).
- “Did this analysis change controls or launch decisions?” If MAP-3.2 never drives action, it looks performative.
- “How do you handle third-party AI components?” Expect scrutiny on dependency and accountability.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating "cost" as only financial. Fix: Require at least one explicit non-monetary impact category per failure mode, even if rated low (NIST AI RMF Core).
- Mistake: No tie to risk tolerance; just generic severity ratings. Fix: Add a column that cites the relevant tolerance statement and the decision rule it triggers.
- Mistake: Writing one assessment for "the model" instead of per use case. Fix: Bound by workflow. The same model can have different costs depending on automation level and user population.
- Mistake: Ignoring realized errors (production feedback). Fix: Add operational triggers for review after incidents, drift signals, and major releases (NIST AI RMF Core).
- Mistake: Missing accountable sign-off. Fix: Require product owner + risk owner approval with a date and a residual risk decision.
Enforcement context and risk implications
NIST AI RMF is a framework, not a statute, and the provided sources do not include enforcement actions (NIST AI RMF program page). Your risk is indirect: MAP-3.2 gaps tend to surface during regulatory inquiries, customer audits, and litigation discovery because you cannot show that you understood foreseeable harms, set tolerances, and made controlled decisions (NIST AI RMF Core). Treat MAP-3.2 as defensibility documentation for governance.
Practical 30/60/90-day execution plan
First 30 days (establish the control)
- Name an owner for MAP-3.2 and define the minimum required artifacts.
- Pick one high-impact AI use case and produce the first Failure Modes & Cost Register plus Decision Record.
- Identify where risk tolerance lives today and document how it will be referenced for AI decisions.
Days 31–60 (scale to portfolio and third parties)
- Roll the template across remaining AI systems and key third-party AI dependencies.
- Standardize non-monetary cost categories and rating definitions.
- Add MAP-3.2 checkpoints into SDLC: pre-launch, pre-material-change, and post-incident.
Days 61–90 (prove ongoing operation)
- Run a tabletop exercise for an AI error scenario and update the realized-cost addendum.
- Validate evidence collection: dashboards, sampling, incident tickets, sign-offs are all retrievable.
- Report themes to risk committee: top cost drivers, where risk tolerance forced mitigation, and any no-go decisions.
Frequently Asked Questions
Do we have to convert non-monetary costs into dollars to satisfy MAP-3.2?
No. MAP-3.2 requires you to examine and document non-monetary costs, not monetize them (NIST AI RMF Core). Use defined qualitative impact tiers and link them to risk tolerance decisions.
What counts as “trustworthiness” costs in practice?
Trustworthiness costs include loss of user confidence, reduced adoption, inability to contest outcomes, and operational slowdowns caused by low-confidence outputs that require manual rework (NIST AI RMF Core). Document these explicitly as non-monetary impacts.
How do we connect this to risk tolerance if our company only has an enterprise risk appetite statement?
Map each key cost category to the closest existing tolerance area (operational risk, compliance risk, customer harm) and document the rationale (NIST AI RMF Core). If gaps exist, record them as governance actions to define AI-specific tolerances.
Does MAP-3.2 apply if we only use a third-party AI API and don’t train models?
Yes. You still bear the operational and customer-impact costs of errors in your workflow, so you must examine and document those costs and align decisions to risk tolerance (NIST AI RMF Core). Include third-party dependencies in the failure-mode register.
What evidence convinces auditors that our MAP-3.2 process is real?
Signed decision records tied to risk tolerance, a maintained cost register with non-monetary impacts, and updates after incidents or material changes typically demonstrate operation (NIST AI RMF Core). Evidence should link to controls and monitoring, not sit in isolation.
How often should we refresh the cost assessment?
Refresh on triggers: material model/workflow changes, new user populations, new third-party dependencies, and realized incidents (NIST AI RMF Core). If you need a calendar cadence, set one internally and apply it consistently.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream