MAP-2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).
MAP-2.1 requires you to explicitly define what tasks your AI system is intended to perform and the technical methods used to perform them (for example, classification, generation, recommendation, retrieval, ranking), then make that definition operational in governance, change control, testing, and third-party oversight. Your goal is a stable, reviewable “task-and-method spec” that matches reality in production. (NIST AI RMF Core)
Key takeaways:
- Document the AI system’s supported tasks, boundaries, and intended decision or content outputs, in operator-friendly terms. (NIST AI RMF Core)
- Identify the methods used to implement each task (model type, architecture pattern, pipelines, human-in-the-loop points) and tie them to controls and testing. (NIST AI RMF Core)
- Keep the definition current through change management, third-party due diligence, and recurring evidence collection. (NIST AI RMF Core)
MAP-2.1 is a “definition control.” If you cannot clearly state what the AI system does and how it does it, you cannot reliably manage risk, validate performance, or explain decisions to internal governance, customers, or regulators. This requirement sits early in the lifecycle because it drives everything downstream: data needs, evaluation design, access controls, incident response triggers, and the right kind of monitoring. (NIST AI RMF Core)
For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing MAP-2.1 is to produce a single authoritative artifact that (1) lists the tasks the system supports, (2) maps each task to the methods used (e.g., classifier, generative model, recommender, ranker, retrieval-augmented generation), and (3) records the constraints and non-goals that prevent scope creep. Then you attach owners, review cadence, and change gates so the definition stays aligned with production behavior and third-party components. (NIST AI RMF Core)
This page gives requirement-level guidance you can implement without waiting on a full AI governance program rewrite. It is written to produce audit-ready evidence and to reduce the most common failure mode: a system described as one thing in documentation but operating as something broader in practice. (NIST AI RMF Core)
Regulatory text
Excerpt: “The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).” (NIST AI RMF Core)
What the operator must do:
You must produce and maintain a clear definition of (a) the tasks the AI system will support and (b) the methods used to implement those tasks. “Defined” means written down, versioned, reviewable, and used as an input to governance and engineering decisions, not sitting in a slide deck that nobody updates. The definition should be specific enough that a reviewer can tell whether the system is a classifier vs. a generator vs. a recommender, and what changes would constitute a material change that requires re-approval. (NIST AI RMF Core)
Plain-English interpretation of the requirement
MAP-2.1 asks: What job is this AI doing, and what type of AI is doing it? (NIST AI RMF Core)
In practice, you are drawing a box around:
- Tasks: the supported user/business functions (e.g., “classify incoming tickets by category,” “recommend next-best action,” “generate a draft response,” “rank search results”).
- Methods: the technical approach used (e.g., supervised classifier, retrieval + ranker, LLM with RAG, rules + model ensemble, human-in-the-loop review).
- Boundaries: what the system is not intended to do, what inputs it should not use, and what outputs it must not produce (especially where policy or law prohibits).
This definition becomes your anchor for risk assessment and control selection. If the “method” is generative, your evaluation and monitoring needs are different than for a classifier. If the task includes decisioning that affects customers, your documentation and oversight needs change again. (NIST AI RMF Core)
Who it applies to (entity and operational context)
MAP-2.1 applies to any organization developing, integrating, procuring, or deploying AI systems, including where the model is provided by a third party and you only control configuration and usage. (NIST AI RMF Core)
Typical in-scope contexts:
- Internal decision support (fraud triage, claims routing, HR screening assistance, security alert prioritization).
- Customer-facing automation (chatbots, content generation, product recommendations).
- Platform features embedded in SaaS products (search ranking, personalization, summarization).
- Third-party AI accessed via API where you still define the task and can constrain use.
If you have multiple models, MAP-2.1 still applies at the system level: define tasks and methods across the full pipeline (pre-processing, model inference, post-processing, human review). (NIST AI RMF Core)
What you actually need to do (step-by-step)
Step 1: Name the AI system and declare a stable scope
Create a single “AI System Task-and-Method Specification” for each AI system (or each materially distinct use case). Keep it short but unambiguous:
- System name, owner, business function.
- In-scope users and channels (internal UI, API, customer app).
- In-scope environments (dev/test/prod) and regions, if relevant to governance. (NIST AI RMF Core)
Operator tip: Auditors often look for a one-to-one match between “the thing in production” and “the thing described on paper.” If the system has multiple modes, document each mode explicitly.
Step 2: Define “tasks” in operational terms
For each task, document:
- Task statement (verb + object): “Generate a draft email response to customer inquiries.”
- Trigger: when it runs (user request, batch job, event-driven).
- Inputs: key data classes (tickets, chat history, product catalog).
- Outputs: what is produced (label, score, ranked list, draft text).
- Decision role: advisory vs. automated action, plus any required human review. (NIST AI RMF Core)
A usable format is a table:
| Task ID | Task | Output type | Human review required | Where used |
|---|---|---|---|---|
| T-1 | Classify ticket category | Label + confidence | No, but QA sampling | Service desk |
| T-2 | Generate draft response | Free-text draft | Yes, agent approves | Customer support |
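If the spec lives in version control, the task table above can also be kept as structured data so it can be linted and diffed on every release. A minimal Python sketch of that idea follows; the record schema and field names are illustrative, not a NIST-prescribed format:

```python
from dataclasses import dataclass

# Hypothetical task record mirroring the table above; the fields are
# illustrative, not a schema defined by the AI RMF.
@dataclass(frozen=True)
class TaskRecord:
    task_id: str
    task: str            # verb + object statement
    output_type: str     # label, score, ranked list, draft text, ...
    human_review: bool   # must a human approve before the output is used?
    where_used: str

TASKS = [
    TaskRecord("T-1", "Classify ticket category", "label+confidence", False, "Service desk"),
    TaskRecord("T-2", "Generate draft response", "free-text draft", True, "Customer support"),
]

def lint_tasks(tasks):
    """A simple lint: every task needs a unique ID and a non-empty output type."""
    ids = [t.task_id for t in tasks]
    assert len(ids) == len(set(ids)), "duplicate task IDs"
    assert all(t.output_type for t in tasks), "missing output type"
    return True
```

A check like this can run in CI so a release that edits the task list without keeping the record well-formed fails before it reaches a reviewer.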
Step 3: Define “methods” precisely enough to drive controls
For each task, document the method category and implementation pattern. Examples:
- Classifier: supervised model, features/embeddings, thresholding rules.
- Recommender/ranker: collaborative filtering, learning-to-rank, bandits, heuristic ranker.
- Generative: LLM, prompt templates, safety filters, system message, tool use.
- RAG: retriever (vector search), chunking strategy, grounding prompts, citation behavior.
- Hybrid: rules + model ensemble; model cascades; fallback logic. (NIST AI RMF Core)
Also record:
- Model provenance: in-house trained vs. third-party base model vs. fine-tuned model.
- Material dependencies: third-party APIs, embedding model, vector DB, moderation service.
- Control points: where you constrain behavior (prompt guardrails, decoding parameters, content filters, thresholding, business rules). (NIST AI RMF Core)
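The same structured-data approach works for the method record. A hedged sketch, pairing one task with its method category, provenance, dependencies, and control points (all keys and values here are hypothetical examples):

```python
# Hypothetical method record for task T-2; keys and values are illustrative.
METHODS = {
    "T-2": {
        "method_category": "generative",  # classifier | recommender | generative | rag | hybrid
        "pattern": "LLM with RAG over internal knowledge base",
        "provenance": "third-party base model, prompt-engineered (no fine-tune)",
        "dependencies": ["llm-api", "embedding-model", "vector-db", "moderation-service"],
        "control_points": ["prompt guardrails", "moderation filter", "human approval before send"],
    }
}

def validate_method(record):
    """Every method entry must name its category, provenance, and at
    least one control point before it passes review."""
    required = ("method_category", "provenance", "control_points")
    return all(record.get(k) for k in required)
```

Keeping dependencies and control points as explicit lists makes Step 6's change triggers (new third-party component, removed guardrail) mechanically detectable.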
Step 4: Set boundaries and non-goals to prevent silent expansion
Document explicit exclusions, such as:
- Disallowed input categories (e.g., certain sensitive attributes if policy prohibits use).
- Disallowed outputs (medical advice, legal advice, identity verification statements) if your risk posture requires it.
- Non-supported tasks that people will ask for (“not used for employment decisions,” “not used to auto-deny claims”).
This is where MAP-2.1 becomes enforceable. Boundaries give you a basis to reject changes or escalate them for governance review. (NIST AI RMF Core)
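One way to make the boundary list actionable is a lightweight intake gate that routes any proposal touching a documented non-goal to governance review. The sketch below uses naive keyword matching purely for illustration; a real gate is a human governance decision, not string matching:

```python
# Hypothetical non-goal list drawn from the spec's boundaries section.
NON_GOALS = {
    "employment decisions",
    "auto-deny claims",
    "identity verification",
}

def change_gate(proposed_task: str) -> str:
    """Return 'escalate' when a proposed task touches a documented
    non-goal, otherwise 'standard-review'. Matching here is naive
    keyword overlap, used only to illustrate the routing logic."""
    text = proposed_task.lower()
    if any(non_goal in text for non_goal in NON_GOALS):
        return "escalate"
    return "standard-review"
```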
Step 5: Map each task-method pair to evaluation and monitoring hooks
For each task-method pair, define what “works as intended” means and how you’ll detect drift or misuse:
- Classifier: performance metrics, threshold monitoring, label distribution monitoring.
- Recommender: bias checks for exposure, click/acceptance monitoring, feedback loops.
- Generative: output quality review, toxicity/safety checks, hallucination/grounding checks, prompt injection testing for RAG.
You do not need to publish every metric in the MAP-2.1 artifact, but you must link to where the evaluation plan lives and show it was derived from the defined tasks/methods. (NIST AI RMF Core)
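That linkage can itself be checked automatically: before release, confirm every task-method pair in the spec points at an evaluation plan. A minimal sketch, with hypothetical task IDs and file paths:

```python
# Hypothetical linkage table: each task-method pair points at the
# location of its evaluation plan.
EVAL_LINKS = {
    "T-1": "evals/ticket-classifier-plan.md",
    "T-2": "evals/draft-response-plan.md",
}

def missing_eval_links(task_ids, links):
    """Return task IDs that have no evaluation plan on record,
    so gaps surface before release rather than at audit time."""
    return [t for t in task_ids if not links.get(t)]
```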
Step 6: Operationalize through ownership, approvals, and change control
Make MAP-2.1 “real” by adding:
- Control owner (usually Product + Engineering, with Compliance oversight).
- Approval gate: required sign-off before adding a new task or changing method type (e.g., from “classifier” to “generative”).
- Change triggers: model swap, prompt template change, new data source, new automation level, new third-party component.
- Review routine: recurring review tied to releases and periodic governance. (NIST AI RMF Core)
If you use a GRC workflow tool (including Daydream), implement MAP-2.1 as a mapped control with a required evidence attachment and a recurring evidence task so documentation stays current release to release. (NIST AI RMF Core)
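The change triggers in this step can be expressed as a diff between two spec snapshots. The sketch below assumes a simple dict-shaped spec (all keys hypothetical) and reports which triggers fired, so the release gate knows when sign-off is required:

```python
# Hypothetical material-change detector: compare two spec snapshots and
# report the triggers that require governance sign-off before release.
def material_changes(old_spec: dict, new_spec: dict) -> list:
    triggers = []
    if set(new_spec["tasks"]) != set(old_spec["tasks"]):
        triggers.append("task list changed")
    if new_spec["method_category"] != old_spec["method_category"]:
        triggers.append("method category changed")
    if new_spec["automation_level"] != old_spec["automation_level"]:
        triggers.append("automation level changed")
    if set(new_spec["dependencies"]) - set(old_spec["dependencies"]):
        triggers.append("new dependency added")
    return triggers
```

An empty result means the change can follow the standard release path; any non-empty result routes to the approval gate.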
Required evidence and artifacts to retain
Keep evidence that proves the definition exists, is approved, and matches production:
- AI System Task-and-Method Specification (versioned): task list, method mapping, boundaries, owners, approval history. (NIST AI RMF Core)
- Architecture and data-flow diagram (current): components, third-party services, model endpoints, human review steps. (NIST AI RMF Core)
- Model/system inventory entry: links to the spec, environments, and dependency list. (NIST AI RMF Core)
- Change management records: tickets/PRs showing review when tasks or methods changed, with sign-offs. (NIST AI RMF Core)
- Third-party documentation (as applicable): contracts/SOWs, model cards or technical docs received, API change notices, and your internal assessment of what task/method the third party enables. (NIST AI RMF Core)
- Evaluation plan linkage: a pointer from each task-method pair to relevant testing protocols and monitoring dashboards. (NIST AI RMF Core)
Common exam/audit questions and hangups
Expect reviewers to press on these:
- “Show me where you define the AI system’s tasks and methods.” Provide the spec and last approval date. (NIST AI RMF Core)
- “How do you know production matches this definition?” Point to architecture diagrams, endpoint configs, and change control. (NIST AI RMF Core)
- “What constitutes a material change?” Show your change triggers and governance gate. (NIST AI RMF Core)
- “Where do third parties fit?” Identify third-party models/services in the method description and show due diligence records. (NIST AI RMF Core)
- “Who is accountable?” Provide named owners and an escalation path for scope changes. (NIST AI RMF Core)
Common hangup: teams document “use case” but not “method.” MAP-2.1 requires both. A generic statement like “we use AI to help support agents” is not sufficient for audit readiness. (NIST AI RMF Core)
Frequent implementation mistakes and how to avoid them
- Mistake: writing a high-level narrative with no task list. Fix: require a task table with output types and automation level. (NIST AI RMF Core)
- Mistake: confusing "model" with "system." Fix: document the end-to-end pipeline, including retrieval, filters, business rules, and human review. (NIST AI RMF Core)
- Mistake: treating prompts as "not code." Fix: include prompt templates and safety layers as part of the method definition, and put them under change control. (NIST AI RMF Core)
- Mistake: letting third-party AI be a blind spot. Fix: record third-party components explicitly, including what task they support and what method category they represent. (NIST AI RMF Core)
- Mistake: no operational linkage to testing/monitoring. Fix: add a "derived controls" section that points to the evaluation plan and monitoring requirements per task-method pair. (NIST AI RMF Core)
Enforcement context and risk implications
NIST AI RMF is a framework, not a regulator, and the provided sources do not include public enforcement cases tied to MAP-2.1. (NIST AI RMF Core) The practical risk is still real: if you cannot define tasks and methods, you will struggle to demonstrate controlled scope, select appropriate evaluations, or defend why the system behaved the way it did after an incident. This gap also increases third-party risk because you cannot clearly articulate what you are outsourcing and what you must verify. (NIST AI RMF Core)
A practical 30/60/90-day execution plan
First 30 days (stabilize definition and ownership)
- Stand up the AI System Task-and-Method Specification template and require it for each in-scope AI system. (NIST AI RMF Core)
- Run workshops with Product and Engineering to enumerate tasks, outputs, and automation level. (NIST AI RMF Core)
- Identify method types and dependencies, including third-party services and internal rule layers. (NIST AI RMF Core)
- Assign a control owner and add an approval workflow for changes to tasks/method categories. (NIST AI RMF Core)
By 60 days (make it enforceable through process)
- Integrate the spec into SDLC and change management: no production release that changes tasks/methods without an updated spec and sign-off. (NIST AI RMF Core)
- Link each task-method pair to an evaluation plan and monitoring owner. (NIST AI RMF Core)
- Update third-party intake to capture “task supported” and “method type,” and attach third-party documentation to the spec record. (NIST AI RMF Core)
By 90 days (operate it as a recurring control)
- Establish recurring evidence collection in your GRC system (Daydream or equivalent): latest spec version, last review, and change log extracts. (NIST AI RMF Core)
- Add an internal audit-style check: sample a system, trace spec → architecture → production configuration. (NIST AI RMF Core)
- Require periodic re-attestation from owners that tasks/methods remain accurate and within bounds. (NIST AI RMF Core)
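The audit-style check above reduces to a small traceability assertion: the spec version a sampled production deployment reports must match the approved version on record. A hedged sketch, with hypothetical system and version names:

```python
# Hypothetical traceability check: a sampled production deployment must
# reference the currently approved spec version for that system.
APPROVED_SPEC_VERSIONS = {"support-assistant": "spec-v7"}

def trace_check(system: str, deployed_spec_version: str) -> bool:
    """True when the deployed configuration traces back to the
    approved definition; False flags a spec/production mismatch."""
    return APPROVED_SPEC_VERSIONS.get(system) == deployed_spec_version
```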
Frequently Asked Questions
Do we need MAP-2.1 if we only use a third-party LLM API?
Yes. You still define the tasks you assign to the system and the method category in use (generative, potentially with retrieval, tools, and filters). Document the third-party dependency and the controls you apply around it. (NIST AI RMF Core)
How detailed should the “method” description be?
Detailed enough to drive control selection and change triggers. “We use AI” is not enough; “LLM with RAG over internal knowledge base, plus moderation filter and human approval before sending” is usually sufficient for governance. (NIST AI RMF Core)
Our system does multiple things (classify, then generate). Is that one MAP-2.1 record or several?
Keep one system record if it is one product/system boundary, but break it into multiple task-method pairs so each function is explicit. Auditors want to see the decomposition because risks differ by task type. (NIST AI RMF Core)
Are prompts part of the “method”?
Yes for governance purposes. Prompts, templates, safety layers, decoding settings, and tool permissions materially shape behavior and should be captured in the method description and controlled through change management. (NIST AI RMF Core)
What counts as a “material change” under MAP-2.1?
Any change that alters the task list, the method category, the automation level, or key dependencies (for example, swapping the base model, adding a new retrieval corpus, or moving from “draft” to “auto-send”). Define your triggers in the spec and enforce them through release gates. (NIST AI RMF Core)
How do we prove the definition matches production?
Keep versioned specs, architecture diagrams, and change tickets that reference the spec version. For higher-risk systems, add a periodic traceability check from production configuration back to the approved definition. (NIST AI RMF Core)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream