MAP-1.5: Organizational risk tolerances are determined and documented.
MAP-1.5 requires you to set clear, decision-ready organizational risk tolerances for AI (what levels of harm, noncompliance, or uncertainty you will accept), document them, and use them to approve, constrain, or stop AI use cases. Operationalize it by defining measurable tolerance statements, assigning governance ownership, and hardwiring those tolerances into intake, reviews, and exception handling. 1
Key takeaways:
- Documented AI risk tolerances must drive real approvals, constraints, and escalations, not sit in a policy binder. 1
- Tolerances should be specific enough to be tested (thresholds, triggers, required controls, and exception rules). 1
- Evidence matters: keep versioned tolerances, approval records, and examples of decisions made because of the tolerances. 1
“Risk tolerance” is the line between acceptable and unacceptable risk for your organization. For AI, that line must cover more than classic security and privacy. It also needs to address model behavior risk (hallucinations, toxic output, unsafe recommendations), legal and regulatory exposure (consumer protection, discrimination, IP), and operational risk (drift, vendor dependency, incident response). The MAP-1.5 requirement (“organizational risk tolerances are determined and documented”) exists because AI governance fails most often at the decision point: teams cannot explain why an AI system was allowed to launch, why it was restricted, or why it was rejected.
For a CCO, GRC lead, or Compliance Officer, the fastest path is to translate abstract “we are risk averse” language into a small set of tolerance statements that your intake and review workflows can enforce. That means: (1) define tolerance categories aligned to how your business can be harmed, (2) assign accountable owners who can accept risk, (3) embed tolerances into gates (procurement, SDLC, model risk review, third-party due diligence), and (4) keep audit-ready evidence that tolerances were applied.
This guidance is based on the NIST AI RMF Core requirement text for MAP-1.5. 1
Regulatory text
Requirement excerpt: “Organizational risk tolerances are determined and documented.” 1
Operator meaning: You must define, approve, and maintain written AI risk tolerance statements that are specific enough to guide decisions across AI lifecycle activities (design, build, buy, deploy, monitor, retire). The tolerances must be available to the teams making day-to-day calls (product, engineering, data science, procurement, legal, compliance), and you must be able to show evidence that decisions track back to those tolerances. 1
Plain-English interpretation (what MAP-1.5 demands)
MAP-1.5 is your “go/no-go and guardrails” requirement. It expects you to:
- Decide what AI risks you will accept, which you will accept only with controls, and which you will not accept.
- Document those decisions in a durable, version-controlled format.
- Use that documentation to drive approvals, control requirements, escalation triggers, and exception handling. 1
A practical test: if an auditor asks, “Why did you deploy this model to customers?”, you should be able to point to a specific tolerance statement, show the assessment against it, and show the approval or exception record.
Who it applies to
Entities: Any organization developing, procuring, or deploying AI systems, including internal tools and customer-facing features. 1
Operational contexts where this becomes urgent:
- Customer-impacting AI (recommendations, pricing, eligibility, support automation).
- Employee-impacting AI (HR screening, performance tooling).
- Decision support in regulated workflows (financial advice, healthcare operations, safety-related operations).
- Third-party AI (SaaS AI features, embedded models, outsourced annotation) where your organization still owns outcomes and accountability.
What you actually need to do (step-by-step)
Step 1: Name an accountable risk acceptance authority
Assign a role (or committee) that can accept AI risk on behalf of the organization. Common patterns:
- AI Risk Committee with CCO/GC, CISO, privacy, product, and model risk lead.
- A single accountable executive for defined risk classes, with committee escalation for higher-risk use cases.
Minimum outputs:
- RACI for: drafting tolerances, approving, granting exceptions, and reviewing changes.
- A standing agenda item for tolerance review when material changes occur (new regulations, incidents, new model class, new market).
Step 2: Define tolerance categories that match your harm model
Keep this small enough to run. Typical categories you can document without inventing pseudo-metrics:
- Legal/regulatory tolerance (e.g., prohibited uses, restricted jurisdictions, required notices).
- Privacy/data protection tolerance (sensitive data handling, retention, training data constraints).
- Security tolerance (model supply chain, access control, logging, abuse monitoring).
- Fairness/consumer impact tolerance (discrimination risk, deception/manipulation, contestability).
- Safety and reliability tolerance (unsafe instructions, critical errors, fallback requirements).
- Operational resilience tolerance (monitoring, incident response, drift, vendor dependency).
Map the categories to the risks your enterprise already tracks (enterprise risk taxonomy, operational risk, model risk, third-party risk). The goal is alignment, not a new parallel universe. 1
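A crosswalk like this can live in your GRC tooling as a simple lookup. As a minimal sketch, assuming a hypothetical set of category and taxonomy names (none of these labels come from the AI RMF itself), the mapping and a loud failure for unmapped categories might look like:

```python
# Illustrative crosswalk from AI tolerance categories to an existing
# enterprise risk taxonomy. All names here are hypothetical examples,
# not NIST AI RMF terminology.
TOLERANCE_TO_ENTERPRISE_RISK = {
    "legal_regulatory": "compliance_risk",
    "privacy_data_protection": "privacy_risk",
    "security": "information_security_risk",
    "fairness_consumer_impact": "conduct_risk",
    "safety_reliability": "operational_risk",
    "operational_resilience": "operational_risk",
}

def enterprise_risk_for(category: str) -> str:
    """Resolve an AI tolerance category to its enterprise risk bucket,
    failing loudly if a category has no home in the taxonomy."""
    try:
        return TOLERANCE_TO_ENTERPRISE_RISK[category]
    except KeyError:
        raise ValueError(f"Unmapped tolerance category: {category}")
```

The point of failing loudly is governance, not engineering: an unmapped category means a risk with no enterprise owner.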
Step 3: Write tolerance statements that are enforceable
A tolerance statement fails if a reviewer cannot use it to make a decision. Use this structure:
Tolerance statement template
- Scope: Which AI systems/use cases it covers.
- Tolerance level: Accept / accept with conditions / not acceptable.
- Decision triggers: What facts force escalation or prohibition.
- Required controls: What must be in place before approval.
- Exception path: Who can approve exceptions, documentation required, expiry.
Examples (edit to your reality)
- Prohibited use tolerance: “We do not deploy AI that makes final eligibility decisions for employment or credit without a documented human review step and an appeal path.”
- Data tolerance: “We do not use customer-provided sensitive data to train or fine-tune third-party foundation models unless legal approves contract terms and privacy signs off on data minimization and retention.”
- Reliability tolerance: “Any external-facing generative AI must have defined fallback behavior, logging, and an abuse monitoring plan before launch.”
These are policy-level examples; your control teams should then translate them into specific control requirements and test steps.
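If you capture tolerance statements as structured records rather than prose, the template above becomes machine-checkable at your gates. Here is a minimal sketch under that assumption; the field names and the three-way decision logic are illustrative choices, not a standard:

```python
from dataclasses import dataclass, field

# Hypothetical structured form of the tolerance statement template above.
@dataclass
class ToleranceStatement:
    scope: str                          # which AI systems/use cases it covers
    level: str                          # "accept" | "conditional" | "prohibited"
    triggers: list[str] = field(default_factory=list)        # facts forcing escalation
    required_controls: list[str] = field(default_factory=list)
    exception_approver: str = ""        # role that can approve exceptions

def decision(stmt: ToleranceStatement, controls_in_place: set[str]) -> str:
    """Go/no-go: prohibited uses never pass; conditional uses pass only
    when every required control is evidenced, otherwise they escalate
    into the exception path."""
    if stmt.level == "prohibited":
        return "reject"
    missing = [c for c in stmt.required_controls if c not in controls_in_place]
    if missing:
        return "escalate"
    return "approve"
```

For example, the prohibited-use tolerance above would be a "conditional" record whose required controls are a human review step and an appeal path; intake without both escalates instead of approving.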
Step 4: Embed tolerances into workflows (make them real)
Tie tolerances to the points where work already stops for approval:
- AI use-case intake
- Add a mandatory “tolerance mapping” section: which tolerance statements apply and how compliance is demonstrated.
- Third-party intake and due diligence
- Require vendors/third parties to disclose model behavior limits, training data constraints, and monitoring capabilities as needed to meet your tolerance statements.
- Design and change management gates
- Add tolerance checks at: pre-design, pre-launch, and material change (new data source, new model version, new jurisdiction).
- Exceptions and risk acceptance
- Require written risk acceptance when teams cannot meet tolerances, with compensating controls and a sunset date.
- Ongoing monitoring
- Define which tolerance violations become incidents (e.g., unsafe outputs, privacy violations, policy breaches) and route them into your incident process. 1
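The intake gate in the first bullet can be sketched as a completeness check: every applicable tolerance statement must appear on the intake form with evidence of how compliance is demonstrated. This is an assumption about how you structure intake forms, not a prescribed format:

```python
def gate_check(intake_form: dict, applicable: list[str]) -> list[str]:
    """Pre-launch gate: every applicable tolerance must be mapped on the
    intake form with evidence attached. Returns the list of gaps; an
    empty list means the gate passes. Field names are illustrative."""
    mapped = intake_form.get("tolerance_mapping", {})
    return [t for t in applicable
            if t not in mapped or not mapped[t].get("evidence")]
```

A non-empty gap list is what routes a use case into escalation or the exception workflow rather than silent approval.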
If you use Daydream to manage third-party risk and compliance evidence, map MAP-1.5 to a policy control, name the control owner, and schedule recurring evidence collection so you can prove tolerances were reviewed and applied. 1
Step 5: Approve, publish, train, and version-control
Operational minimum:
- A formally approved “AI Risk Tolerances” standard or addendum to your enterprise risk policy.
- Version history and change rationale.
- Targeted training for the teams who do intake and approvals (product, procurement, security, privacy, legal, compliance).
Step 6: Test that decisions follow tolerances
Run a lightweight control test:
- Sample recent AI intakes and confirm each one shows mapping to tolerances.
- Sample exceptions and confirm they have approvals, compensating controls, and expiry.
- Sample third-party AI procurements and confirm contracts and due diligence align to tolerances.
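The sampling steps above can be sketched as a reproducible draw plus per-record checks. The record fields are hypothetical; adapt them to whatever your intake tickets actually capture:

```python
import random

def sample_for_control_test(records: list[dict], n: int, seed: int = 0) -> list[dict]:
    """Draw a reproducible sample of records for the control test; a fixed
    seed lets a reviewer re-derive exactly which items were tested."""
    rng = random.Random(seed)
    return rng.sample(records, min(n, len(records)))

def intake_findings(record: dict) -> list[str]:
    """Return findings for one sampled intake record (field names assumed)."""
    findings = []
    if not record.get("tolerance_mapping"):
        findings.append("no tolerance mapping")
    if not record.get("approval_ref"):
        findings.append("no approval record")
    return findings
```

Zero findings across the sample is your evidence that decisions actually track back to the documented tolerances.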
Required evidence and artifacts to retain
Keep artifacts in a single system of record with versioning and easy export.
Core documents
- AI Risk Tolerances document (approved, dated, versioned). 1
- Governance charter/RACI for risk acceptance and exceptions.
- Crosswalk mapping tolerances to lifecycle controls (intake checklist, third-party due diligence requirements, SDLC gates). 1
Operational evidence
- Completed AI intake forms showing tolerance mapping.
- Review/approval minutes or tickets (committee decisions, sign-offs).
- Exception register (request, rationale, compensating controls, approver, expiry, closure evidence).
- Training records for relevant roles.
- Periodic review record showing tolerances were re-approved or updated after changes/incidents.
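The exception register only works if expiry is enforced. A minimal sketch, assuming hypothetical register fields, of the check that surfaces exceptions past their sunset date without closure evidence:

```python
from datetime import date

def expired_exceptions(register: list[dict], today: date) -> list[str]:
    """Flag exception IDs past their sunset date with no closure evidence,
    so they get re-approved or remediated. Field names are illustrative."""
    return [e["id"] for e in register
            if e["expiry"] < today and not e.get("closure_evidence")]
```

Running this on a cadence (and keeping its output) doubles as periodic-review evidence for the register itself.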
Common exam/audit questions and hangups
Auditors and internal validators tend to press on these points:
- “Show me your risk tolerance.” They expect a discrete document or clearly labeled policy section. 1
- “Who can accept AI risk?” If it’s ambiguous, you will get findings for weak governance.
- “How does this change a launch decision?” Be ready with examples where tolerances required additional controls, delayed launch, or triggered an exception.
- “Do tolerances apply to third-party AI?” If you deploy it, tolerances still apply; show procurement and due diligence hooks.
- “How do you keep tolerances current?” Expect scrutiny after incidents, regulatory changes, or new model classes. 1
Frequent implementation mistakes (and how to avoid them)
- Writing values statements instead of tolerances. Fix: Convert aspirational language into “allowed/conditional/prohibited + triggers + required controls.”
- No explicit exception path. Fix: Create a simple exception workflow and an exception register. If exceptions happen informally, you will fail auditability.
- Tolerances that don’t reach procurement and third-party risk. Fix: Add tolerance checks to third-party intake and contract review playbooks; require vendors to support your monitoring and logging expectations.
- Too many categories and no ownership. Fix: Start with a small set of categories, assign owners, and expand only when you can test and evidence them.
- No proof of use. Fix: Retain decision artifacts (tickets, minutes, approvals) that show tolerances were applied to real systems. 1
Enforcement context and risk implications
NIST AI RMF is a framework, not a regulator, so MAP-1.5 won’t generate penalties by itself. The risk is indirect but real: undocumented or unused risk tolerances lead to inconsistent approvals, weak third-party oversight, and poor defensibility after an incident. If you cannot show how you set and applied tolerances, investigations and audits become fact disputes instead of document reviews. 1
Practical 30/60/90-day execution plan
First 30 days (foundation)
- Appoint risk acceptance authority and publish RACI.
- Inventory AI systems and in-flight use cases (including third-party AI features).
- Draft initial tolerance categories and a one-page tolerance statement per category.
- Stand up an exception register and approval workflow in your GRC tooling.
Days 31–60 (embed into operations)
- Add tolerance mapping to AI intake and launch gates.
- Update third-party due diligence questionnaires and contract review checklist to reflect tolerance requirements.
- Train reviewers and control owners on how to apply tolerances and document decisions.
- Run a pilot: apply tolerances to a handful of current AI use cases and capture evidence.
Days 61–90 (prove it works)
- Perform a control test: sample intakes, approvals, and exceptions for completeness and quality.
- Refine tolerances based on pilot findings (reduce ambiguity, add triggers, clarify prohibited uses).
- Establish recurring review cadence and recurring evidence collection for MAP-1.5 in Daydream or your existing GRC system. 1
Frequently Asked Questions
What counts as “documented” risk tolerance for MAP-1.5?
A dated, version-controlled document or policy section that states what AI risks are acceptable, conditional, or prohibited, plus who can approve exceptions. You also need records showing teams used those tolerances in decisions. 1
Do we need numeric thresholds to satisfy MAP-1.5?
Not always. What matters is decision usability: clear triggers, required controls, and escalation paths. Add metrics where you already measure them (incidents, false positives, drift), but don’t block implementation waiting for perfect quantification. 1
How do we apply risk tolerances to third-party AI tools?
Treat the third party as part of your control boundary: map the use case to your tolerances, require the third party to support needed controls (logging, monitoring, data restrictions), and document any gaps as exceptions or procurement blockers. 1
Who should approve AI risk tolerances: the board, a committee, or a single executive?
Use your existing risk governance model. High-impact tolerances often need executive or board visibility, while day-to-day approvals can sit with a committee or delegated authority as long as the delegation is written and auditable.
We already have enterprise risk appetite. Is that enough?
Usually not by itself. You can inherit enterprise appetite language, but MAP-1.5 expects AI-specific tolerance statements that translate into workflow gates and control requirements for AI systems. 1
How often should tolerances be reviewed?
Review after material events (major model changes, incidents, new jurisdictions, new AI capability) and on a scheduled cadence set by your governance body. The key is to retain evidence that reviews occurred and changes were approved. 1
Footnotes

1. NIST AI RMF Core, MAP-1.5: “Organizational risk tolerances are determined and documented.”
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream