MAP-1.1: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented.

MAP-1.1 requires you to clearly define and document what your AI system is for, who will use it, where it will be used, and which laws, norms, and user expectations govern that context, then tie those decisions to measurable evaluation and monitoring. Operationalize it by producing an “intended use and deployment context” record that gates design, testing, and release. (NIST AI RMF Core)

Key takeaways:

  • Document intended purpose, beneficial uses, users, settings, and context-specific legal and normative expectations in one controlled artifact. (NIST AI RMF Core)
  • Identify assumptions and limitations early, then reflect them in user-facing disclosures, access controls, and monitoring triggers. (NIST AI RMF Core)
  • Map intended use to TEVV plans and metrics so you can prove the system behaves as expected in the actual deployment context. (NIST AI RMF Core)

MAP-1.1 is a “map the context first” requirement. If you cannot explain the AI system’s intended purpose, who it serves, and where it will operate, you cannot credibly test it, monitor it, or defend it in an audit or incident review. NIST expects organizations to understand and document: intended purposes and beneficial uses; the specific user groups and their expectations; the prospective deployment settings; the applicable laws; and the relevant norms and expectations that shape what “acceptable” behavior looks like in that environment. NIST also expects you to document assumptions and limitations across the AI lifecycle and to connect this context to TEVV (testing, evaluation, verification, validation) and system metrics. (NIST AI RMF Core)

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat MAP-1.1 as a release gate: no production deployment until an owner-approved “Intended Use & Deployment Context” document exists, is reviewed by Legal/Compliance and product/engineering, and is traceably linked to the test plan and monitoring dashboard. This page gives you a requirement-level playbook to stand up that gate, collect defensible evidence, and reduce downstream risk from mis-scoped deployments. (NIST AI RMF Core)

Regulatory text

NIST AI RMF MAP-1.1 excerpt: “Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.” (NIST AI RMF Core)

What the operator must do: create and maintain documentation that (1) defines intended purpose and beneficial uses, (2) identifies users and expectations, (3) specifies deployment settings and constraints, (4) captures context-specific laws and norms relevant to that use, (5) records assumptions/limitations, and (6) links all of the above to TEVV activities and measurable metrics used to evaluate and monitor the system in that context. (NIST AI RMF Core)

Plain-English interpretation (what MAP-1.1 means in practice)

MAP-1.1 is asking: “Do you know what you built, who will rely on it, and what ‘good’ looks like where it will run?” If your system was trained or evaluated in one environment but deployed in another, performance, fairness, safety, privacy, and security conclusions can break. Your documentation should make misuse harder by defining what is allowed, what is prohibited, and how you will detect drift into out-of-scope use. (NIST AI RMF Core)

A strong MAP-1.1 implementation reads like a contract between product, compliance, and engineering: it defines the promise you are making about the system, the conditions under which that promise is valid, and how you will prove it over time with TEVV and operational metrics. (NIST AI RMF Core)

Who it applies to

Entity scope: any organization developing, procuring, integrating, or deploying AI systems, including systems embedded in third-party products you configure and run in your environment. (NIST AI RMF Core)

Operational contexts where MAP-1.1 is frequently examined:

  • Regulated decisioning workflows (credit, employment, insurance, healthcare triage) where “intended purpose” determines legal duties and user disclosures.
  • Generative AI features exposed to customers or employees (summarization, copilots, chat) where user expectations and misuse risk differ by channel.
  • Monitoring and safety programs where TEVV and metrics must match the real deployment setting, not a lab environment. (NIST AI RMF Core)

What you actually need to do (step-by-step)

Step 1: Assign ownership and create a MAP-1.1 intake gate

  • Name a control owner (often GRC, Product Risk, or Model Risk) responsible for ensuring the documentation exists and stays current.
  • Add an intake checkpoint to your SDLC/ML lifecycle: “No launch or material change without an approved Intended Use & Deployment Context record.”
    This directly supports defensible implementation and recurring evidence collection. (NIST AI RMF Core)
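The intake checkpoint above can be sketched as an automated pre-release check. This is a hypothetical illustration, not a mandated schema: the field names (`intended_purpose`, `linked_tevv_plan`, the approval flags) are assumptions chosen to mirror the record described in this playbook.

```python
# Hypothetical release-gate check: block deployment unless an approved
# Intended Use & Deployment Context record exists and is complete.
# All field names below are illustrative assumptions, not a standard.

REQUIRED_FIELDS = ("intended_purpose", "user_types", "deployment_settings",
                   "context_constraints", "assumptions", "linked_tevv_plan")

def release_gate(record: dict) -> list[str]:
    """Return a list of blocking issues; an empty list means the gate passes."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not record.get(field):
            issues.append(f"missing or empty field: {field}")
    if not record.get("approved_by_owner"):
        issues.append("no control-owner approval")
    if not record.get("approved_by_compliance"):
        issues.append("no Legal/Compliance approval")
    return issues

record = {
    "intended_purpose": "Rank support tickets for triage (assistive, human approval required)",
    "user_types": ["support agents", "admins"],
    "deployment_settings": ["internal web app, en-US"],
    "context_constraints": "See context constraints memo v1.2",
    "assumptions": ["PII removed upstream"],
    "linked_tevv_plan": "TEVV-2024-017",
    "approved_by_owner": True,
    "approved_by_compliance": False,
}
print(release_gate(record))  # the missing Compliance approval blocks release
```

Wiring a check like this into CI/CD makes the gate enforceable rather than aspirational: a launch pipeline fails until every required field and approval is present.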

Step 2: Document intended purpose and potentially beneficial uses

Capture, in unambiguous language:

  • Primary intended purpose (what outcome the system is designed to produce).
  • Beneficial uses (the positive outcomes you expect).
  • Explicitly out-of-scope uses (what the system must not be used for).
    Avoid vague claims like “improves productivity.” Specify the workflow and decision boundary: assistive vs. autonomous, advisory vs. determinative. (NIST AI RMF Core)

Example fields to include:

  • Decision type: recommend / rank / classify / generate / detect / forecast
  • Human role: human-in-the-loop approval required vs. optional
  • Impacted population: internal users, customers, applicants, patients, students, etc. (NIST AI RMF Core)
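The fields above can be captured as a typed record so vague or missing values fail fast. This is a minimal sketch under stated assumptions: the enum values mirror the decision types and human roles listed above, and the class and field names are illustrative, not a prescribed schema.

```python
# Illustrative typed record for the Step 2 fields. Names and structure
# are assumptions chosen for this sketch, not a mandated format.
from dataclasses import dataclass
from enum import Enum

class DecisionType(Enum):
    RECOMMEND = "recommend"
    RANK = "rank"
    CLASSIFY = "classify"
    GENERATE = "generate"
    DETECT = "detect"
    FORECAST = "forecast"

class HumanRole(Enum):
    APPROVAL_REQUIRED = "human-in-the-loop approval required"
    APPROVAL_OPTIONAL = "human-in-the-loop approval optional"

@dataclass
class IntendedUse:
    primary_purpose: str
    decision_type: DecisionType
    human_role: HumanRole
    impacted_population: list[str]
    out_of_scope_uses: list[str]  # what the system must NOT be used for

use = IntendedUse(
    primary_purpose="Classify inbound claims for routing (assistive, not determinative)",
    decision_type=DecisionType.CLASSIFY,
    human_role=HumanRole.APPROVAL_REQUIRED,
    impacted_population=["customers", "claims adjusters"],
    out_of_scope_uses=["automated claim denial without human review"],
)
```

Forcing a choice from a closed set (classify vs. generate, approval required vs. optional) is what prevents "improves productivity"-style claims from passing review.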

Step 3: Define user types and expectations

NIST explicitly calls for “the specific set or types of users along with their expectations.” (NIST AI RMF Core) Document:

  • User groups (end users, admins, reviewers, affected non-users).
  • What each group reasonably expects (accuracy, explainability, response time, privacy, language support).
  • Any mismatches between expectations and reality, plus mitigations (UI warnings, training, required review). (NIST AI RMF Core)

Practical move: add “user expectation statements” that can be copied into training materials and UI microcopy. If you cannot state it plainly, users will invent their own expectations.

Step 4: Specify prospective deployment settings (where and how it will run)

Document the “settings” in enough detail to drive testing and monitoring:

  • Channels: web app, mobile, contact center, API, internal tool
  • Geography and languages
  • Data sources and dependencies (upstream systems, third-party models, plug-ins)
  • Access patterns: public, authenticated, privileged
  • Operating conditions: peak load, offline mode, constrained devices (if relevant) (NIST AI RMF Core)

This step prevents a classic failure mode: TEVV passes for one channel, but the production channel changes inputs, latency, or user behavior.

Step 5: Identify context-specific laws, norms, and expectations

MAP-1.1 is not asking you to write a law review memo. It is asking you to show you identified the constraints that shape acceptable system behavior in the deployment context. (NIST AI RMF Core)

Build a short “context constraints” section:

  • Applicable legal domains (privacy, consumer protection, sector rules) as they relate to the specific use and geography.
  • Norms and expectations: what your customers, employees, or the public will consider acceptable or unacceptable in that setting (for example, automated refusal messaging vs. escalation paths). (NIST AI RMF Core)

Operational tip: make Legal/Compliance sign off on this section, then freeze it as a baseline. Later changes become “material changes” that trigger re-TEVV.

Step 6: Capture assumptions and limitations across the lifecycle

NIST calls out “assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle.” (NIST AI RMF Core) Document:

  • Data assumptions (representativeness, timeliness, labeling quality)
  • Model assumptions (stationarity, calibration, prompt patterns, tool availability)
  • Operational assumptions (human review happens, users are trained, certain inputs are blocked)
  • Known limitations (edge cases, language gaps, accessibility constraints)

Then convert key assumptions into controls. If you assume “trained reviewers will verify outputs,” create role-based access, training completion checks, and QA sampling.
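As one example of converting an assumption into a control, the "PII removed upstream" assumption can become an input validation check. The sketch below is deliberately simplistic: the regex patterns are illustrative assumptions and nowhere near production-grade PII detection, which would use a vetted detection library.

```python
# Hedged sketch: turning the assumption "PII is removed upstream" into a
# verifiable input control. Patterns are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_pii(text: str) -> list[str]:
    """Return PII categories detected in an input; empty means the assumption held."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

print(check_pii("Contact jane.doe@example.com about ticket 42"))  # ['email']
```

When a check like this fires in production, it should raise the monitoring alert defined in your traceability table, because the assumption your privacy conclusions rest on has been violated.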

Step 7: Link the context to TEVV and system metrics

MAP-1.1 explicitly requires “related TEVV and system metrics.” (NIST AI RMF Core) Create traceability:

  • Each intended use claim maps to a test method (offline evaluation, red teaming, UAT, shadow mode).
  • Each key risk maps to a metric and monitoring trigger (quality, safety, privacy, security, latency, escalation rate). (NIST AI RMF Core)

A simple traceability table is usually enough:

| Context element | Risk if wrong | TEVV activity | Metric/monitoring |
| --- | --- | --- | --- |
| User type: non-expert agents | Overreliance | Task-based usability testing | Override rate, escalation rate |
| Setting: multilingual support | Silent failure | Language-specific evaluation | Error reports by locale |
| Assumption: PII removed upstream | Privacy incidents | Input validation tests | PII detection alerts |
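A traceability table like this can also be kept as data and checked automatically for completeness. In the sketch below, the rows mirror the examples above, the structure is an assumption, and one metric is deliberately left blank to show how a gap surfaces.

```python
# Minimal sketch: the traceability table as data, plus a check that every
# context element maps to both a TEVV activity and a monitoring metric.
# One metric is intentionally None to demonstrate gap detection.

traceability = [
    {"context": "User type: non-expert agents", "risk": "Overreliance",
     "tevv": "Task-based usability testing", "metric": "Override rate, escalation rate"},
    {"context": "Setting: multilingual support", "risk": "Silent failure",
     "tevv": "Language-specific evaluation", "metric": "Error reports by locale"},
    {"context": "Assumption: PII removed upstream", "risk": "Privacy incidents",
     "tevv": "Input validation tests", "metric": None},  # gap: monitoring not yet defined
]

def find_gaps(rows: list[dict]) -> list[str]:
    """Context elements missing a TEVV activity or a metric; each is a MAP-1.1 gap."""
    return [r["context"] for r in rows if not r.get("tevv") or not r.get("metric")]

print(find_gaps(traceability))  # ['Assumption: PII removed upstream']
```

Running a gap check like this at the release gate turns "TEVV exists but is not traceable to intended use" from an audit finding into a build failure.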

Step 8: Operationalize change management (keep MAP-1.1 current)

MAP-1.1 is fragile if it becomes shelfware. Define what triggers an update:

  • New user group or new channel
  • Expansion to a new geography
  • Model change (new base model, major fine-tune, retrieval source change)
  • New data source or logging change
  • New high-impact use case added
    Treat these as “material changes” that require re-approval and TEVV refresh. (NIST AI RMF Core)
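The trigger list above can be sketched as a simple material-change check. The trigger names are assumptions chosen to mirror the bullets; in practice they would come from your change-management tooling.

```python
# Hypothetical material-change check for Step 8: any listed trigger forces
# MAP-1.1 re-approval and a TEVV refresh. Trigger names are illustrative.

MATERIAL_TRIGGERS = {
    "new_user_group", "new_channel", "new_geography",
    "model_change", "new_data_source", "new_high_impact_use",
}

def requires_reapproval(change_events: set[str]) -> bool:
    """True if any change event in a release is a material trigger."""
    return bool(change_events & MATERIAL_TRIGGERS)

print(requires_reapproval({"ui_copy_tweak"}))            # False
print(requires_reapproval({"model_change", "bug_fix"}))  # True
```

The point of keeping the triggers machine-readable is that "was this material?" stops being a judgment call made after the fact and becomes a deterministic check made before release.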

Required evidence and artifacts to retain

Keep evidence in a system of record with versioning and approvals. Minimum artifacts:

  • Intended Use & Deployment Context document (versioned, owner-approved)
  • User types and expectation statements (or training/UI excerpts)
  • Deployment architecture/context diagram (channels, data flows, third parties)
  • Context constraints memo/section with Legal/Compliance review notes
  • Assumptions and limitations register, plus mapped mitigations
  • TEVV plan and results mapped to intended use and risks
  • Metrics definitions and monitoring dashboard screenshots or exports
  • Change log showing updates and re-approvals after material changes (NIST AI RMF Core)

If you manage controls in Daydream, map MAP-1.1 to a named control owner, attach these artifacts as recurring evidence, and set a review cadence aligned to your release cycle so audit prep is evidence collection, not archaeology. (NIST AI RMF Core)

Common exam/audit questions and hangups

Auditors and internal reviewers tend to ask:

  • “Show me the approved intended use. Where is prohibited use documented?”
  • “Who are the user types, and how do you ensure users behave within assumptions?”
  • “What settings do you deploy into, and how do you know your testing reflects those settings?”
  • “Where do you document legal/normative expectations, and who approved them?”
  • “How do TEVV results map to the claims you make about the system?”
  • “What triggers an update to this documentation?” (NIST AI RMF Core)

Hangup to expect: teams confuse “intended use” with a marketing description. Examiners want operational boundaries that change testing, controls, and monitoring.

Frequent implementation mistakes and how to avoid them

  1. Vague intended purpose.
    Fix: define task, decision boundary, and human role; list out-of-scope uses. (NIST AI RMF Core)

  2. No named user types.
    Fix: document at least end users, admins, and affected persons; capture expectations and training requirements. (NIST AI RMF Core)

  3. Laws/norms section is generic boilerplate.
    Fix: tie constraints to the specific deployment geography, channel, and workflow; document who reviewed it. (NIST AI RMF Core)

  4. Assumptions are stated but not controlled.
    Fix: convert assumptions into verifiable controls (input validation, access controls, workflow checks). (NIST AI RMF Core)

  5. TEVV exists but is not traceable to intended use.
    Fix: add a traceability matrix linking context elements to tests and metrics. (NIST AI RMF Core)

Enforcement context and risk implications

No public enforcement cases were provided in the supplied source catalog for MAP-1.1. Treat MAP-1.1 as a defensibility requirement: when an incident occurs (harmful output, discrimination allegation, privacy complaint, safety failure), your first line of defense is showing you defined intended use, identified user expectations, and tested and monitored the system in the actual deployment context. (NIST AI RMF Core)

Operational risk if you skip MAP-1.1:

  • “Unexpected use” becomes the default explanation, which is rarely persuasive if you never defined and enforced boundaries.
  • Monitoring becomes unfocused because you did not define what success and failure look like per setting and user type.
  • Third-party AI components become a blind spot if you do not document shared responsibility by context (what you control vs. what the third party controls). (NIST AI RMF Core)

Practical 30/60/90-day execution plan

First 30 days (stand up the gate)

  • Publish a one-page MAP-1.1 procedure: required fields, approvers, and launch gating criteria. (NIST AI RMF Core)
  • Create the “Intended Use & Deployment Context” template and store it in your controlled repository.
  • Pilot the template on one AI system already in production; backfill missing context and record assumptions/limitations. (NIST AI RMF Core)

By 60 days (make it auditable)

  • Build the TEVV-to-intended-use traceability table for the pilot system; identify gaps and assign owners. (NIST AI RMF Core)
  • Add monitoring metric definitions tied to the documented risks and settings.
  • Implement change triggers so model updates and deployment expansions require MAP-1.1 re-approval. (NIST AI RMF Core)

By 90 days (scale across the portfolio)

  • Roll out the MAP-1.1 intake gate across all in-scope AI systems (build, buy, and embedded third-party AI).
  • Set recurring evidence collection: latest approved context doc, latest TEVV summary, and monitoring snapshot per system. (NIST AI RMF Core)
  • Run a tabletop exercise: “System used out of scope.” Validate that documentation, controls, and escalation paths work under pressure. (NIST AI RMF Core)

Frequently Asked Questions

What counts as “documented” for MAP-1.1?

A version-controlled record with an owner and approvals that defines intended purpose, users, settings, legal/norm constraints, assumptions/limitations, and links to TEVV and metrics. A slide deck without approvals rarely holds up. (NIST AI RMF Core)

We buy an AI feature from a third party. Does MAP-1.1 still apply?

Yes. You still need to document your intended use, users, and deployment setting, plus any constraints and assumptions about the third-party component, and tie that to your acceptance testing and monitoring. (NIST AI RMF Core)

How detailed should “norms and expectations” be?

Detailed enough to change design and operations. If a stated expectation does not drive a control, disclosure, or test, it is probably too generic. (NIST AI RMF Core)

Do we need to list every possible misuse scenario?

No. Focus on the highest-likelihood and highest-impact out-of-scope uses, then define how you prevent, detect, and respond to them through access controls, UI design, and monitoring. (NIST AI RMF Core)

Who should approve the MAP-1.1 documentation?

Assign a control owner, then require approvals from Product (intended use), Engineering/ML (assumptions and settings), and Legal/Compliance (context-specific constraints). The approver set should match your risk profile and governance model. (NIST AI RMF Core)

What is the minimum TEVV linkage you need to show?

A traceable mapping from intended use claims and key risks to test methods and production metrics. If you cannot point from “what it’s for” to “how we tested it” to “how we monitor it,” MAP-1.1 is incomplete. (NIST AI RMF Core)


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream