MAP-3.3: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.

To meet MAP-3.3, you must clearly define, approve, and document where your AI system is allowed to be used (and not used) based on what it can actually do, the business and risk context, and your AI system categorization. Operationally, this becomes a controlled “intended use and scope” record tied to change management, testing, user access, and ongoing monitoring. 1

Key takeaways:

  • Document an explicit intended use, users, decisions supported, and environments, plus hard out-of-scope prohibitions. 1
  • Tie scope to capability evidence (model limits, performance constraints) and to your AI risk category for the use case. 1
  • Treat scope as a living control: approvals, change triggers, training, and enforcement through technical and process guardrails. 1

MAP-3.3 is a “scope discipline” requirement: you cannot manage AI risk if you cannot articulate exactly what the system is for, where it is used, and under what constraints. The practical goal is defensible alignment between (1) the system’s real capabilities and limitations, (2) the context in which it operates (people, process, data, environment, and downstream impacts), and (3) the AI system categorization you assign to the use case. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing MAP-3.3 is to create a targeted application scope statement that is (a) specific enough to enforce and test, (b) approved by accountable owners, and (c) wired into existing governance mechanisms such as intake, risk assessment, vendor/third-party due diligence, SDLC, and change management. 1

This page gives requirement-level implementation guidance you can execute without turning it into a research project. It focuses on what to write down, who must sign it, where it must live, what evidence auditors will ask for, and how to stop “scope creep” that turns a low-risk AI assistant into an ungoverned decision engine. 2

Regulatory text

NIST AI RMF MAP-3.3: “Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.” 1

Operator meaning: You must (1) define the intended use and boundaries of the AI system, (2) justify that scope using evidence of capability and limitations plus the operating context, and (3) align it to your internal AI risk category for that use case, then document all of it in a way that drives controls and decisions. 1

Plain-English interpretation (what MAP-3.3 requires)

MAP-3.3 requires a written, reviewable statement of where your AI is supposed to be used and what it is supposed to do, including explicit exclusions. The scope must not be aspirational. It must reflect demonstrated capability, known constraints, and the environment where the system runs (data sources, users, affected individuals, and downstream decisions). 1

A compliant implementation also prevents accidental expansion:

  • A chatbot intended for internal policy Q&A should not quietly become a disciplinary decision tool.
  • A model approved for “drafting marketing copy” should not get embedded into eligibility or pricing decisions without re-scoping. 1

Who it applies to (entity + operational context)

MAP-3.3 applies to organizations developing or deploying AI systems, including where AI is built in-house, embedded in SaaS, provided by a third party, or assembled from multiple components (models, tools, retrieval systems, agents). 1

You should treat it as mandatory in these contexts:

  • Production AI used in business processes (customer-facing or internal).
  • AI supporting decisions about people (employment, access, fraud, safety, healthcare, education) because scope ambiguity creates outsized harm risk.
  • Third-party AI where you configure prompts, guardrails, or decision thresholds; you still own how it is used in your environment. 1

What you actually need to do (step-by-step)

Step 1: Create an “AI Targeted Application Scope” record (one per use case)

Minimum fields to include:

  • System identifier and version (model name/version, application release, key dependencies).
  • Intended users (roles, not names).
  • Intended decisions/actions (what the system can recommend, generate, classify, or trigger).
  • In-scope environments (business units, geographies, channels, production vs. test).
  • In-scope data (major categories; note sensitive data constraints).
  • Out-of-scope uses: hard prohibitions written as clear “must not” statements.
  • Human oversight model (human-in-the-loop/over-the-loop, escalation paths).
  • Impact statement (who can be affected, and how). 1

Practical tip: Write out-of-scope items as testable controls, such as “System outputs must not be used as the sole basis for adverse action decisions.”
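The fields above can be sketched as a structured record. This is a hypothetical illustration, assuming your own field names and template; it is not a NIST-prescribed schema. The `is_prohibited` hook shows what "testable" out-of-scope statements look like in practice.

```python
# Hypothetical sketch of a Targeted Application Scope record.
# Field names are illustrative; align them to your own template.
from dataclasses import dataclass, field

@dataclass
class TargetedApplicationScope:
    system_id: str                      # model/app identifier and version
    intended_users: list[str]           # roles, not names
    intended_actions: list[str]         # what the system may recommend, generate, or trigger
    in_scope_environments: list[str]    # business units, geographies, channels
    in_scope_data: list[str]            # major data categories
    out_of_scope: list[str] = field(default_factory=list)  # hard "must not" statements
    oversight_model: str = "human-in-the-loop"
    impact_statement: str = ""

    def is_prohibited(self, proposed_use: str) -> bool:
        """Testable enforcement hook: flag a proposed use matching a prohibition."""
        return any(p.lower() in proposed_use.lower() for p in self.out_of_scope)

scope = TargetedApplicationScope(
    system_id="policy-qa-bot v1.2",
    intended_users=["HR analyst"],
    intended_actions=["summarize internal policy"],
    in_scope_environments=["internal intranet"],
    in_scope_data=["published HR policies"],
    out_of_scope=["adverse action", "disciplinary decision"],
    impact_statement="Employees relying on policy summaries",
)
print(scope.is_prohibited("use output as sole basis for adverse action"))  # True
```

A record like this can live in your GRC tool or repository; the point is that each field maps to something you can review, approve, and test.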

Step 2: Tie scope to demonstrated capability (not marketing claims)

Attach or reference evidence that the system can perform as scoped:

  • Evaluation summaries (accuracy, robustness, failure modes) appropriate to the task.
  • Known limitations (languages, edge cases, confidence calibration, hallucination risk for generative systems).
  • Operating constraints (rate limits, input size limits, dependency availability). 1

If you do not have meaningful evaluation evidence, narrow the scope until you do.
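That rule, narrow scope to what you have evidence for, can be made mechanical. A minimal sketch, assuming you track requested tasks and evaluated tasks as simple lists (the names are illustrative):

```python
# Hypothetical sketch: approve only the tasks backed by evaluation evidence.
# Everything else stays out of scope until evidence exists.
def approved_scope(requested_tasks: list[str], evaluated_tasks: list[str]) -> list[str]:
    """Return the subset of requested tasks with supporting evaluation evidence."""
    evidenced = set(evaluated_tasks)
    return [task for task in requested_tasks if task in evidenced]

requested = ["summarization", "classification", "eligibility scoring"]
evaluated = ["summarization", "classification"]
print(approved_scope(requested, evaluated))  # ['summarization', 'classification']
```

Here "eligibility scoring" drops out of the approved scope until someone produces evaluation evidence for it, which is exactly the discipline MAP-3.3 is after.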

Step 3: Document the established context that makes the scope safe or unsafe

Context elements to capture:

  • Process context: Where the AI sits in the workflow and what happens after output.
  • User context: Training level, incentives, likelihood of automation bias.
  • Data context: Data provenance, refresh cadence, and drift sensitivity.
  • Harm context: Foreseeable misuse and foreseeable affected populations. 1

This is where many teams fail: they document the model, but not the workflow that turns outputs into outcomes.

Step 4: Assign an AI system category and map it to governance requirements

MAP-3.3 expects your scope to be “based on…AI system categorization.” 1

Operationally:

  • Define your internal categories (example: low/medium/high impact) and the criteria.
  • Record the category for this use case and the rationale.
  • Link category to mandatory controls (testing depth, approvals, monitoring, incident response). 1

If you already have risk tiers for software, privacy, or model risk, align your AI categories to those to avoid parallel governance.
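The category-to-controls linkage can be expressed as a simple lookup table. The tier names and control lists below are illustrative assumptions, not NIST requirements; substitute your own criteria.

```python
# Hypothetical sketch: internal AI categories mapped to mandatory controls.
# Tier names and control sets are examples only; define your own.
CATEGORY_CONTROLS = {
    "low": {
        "testing": "smoke evaluation",
        "approval": "business owner",
        "monitoring": "quarterly review",
    },
    "medium": {
        "testing": "task-level evaluation",
        "approval": "business + risk",
        "monitoring": "monthly review",
    },
    "high": {
        "testing": "segment-level evaluation + red teaming",
        "approval": "business + risk + executive",
        "monitoring": "continuous + incident response runbook",
    },
}

def required_controls(category: str) -> dict:
    """Look up the mandatory control set triggered by an AI category."""
    return CATEGORY_CONTROLS[category.lower()]

print(required_controls("high")["approval"])  # business + risk + executive
```

Encoding the mapping once (rather than deciding controls per project) is what keeps categorization from becoming a paperwork exercise.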

Step 5: Approve scope and bind it to change management

Required approvals should include:

  • Business owner (accountable for outcomes).
  • Technical owner (accountable for design and performance).
  • Risk/Compliance (accountable for governance and residual risk acceptance).
  • Privacy/Security as applicable based on data and threat model. 1

Define scope change triggers, such as:

  • New decision type (e.g., from “drafting” to “recommendation” to “automation”).
  • New user group (e.g., expanding from analysts to frontline staff).
  • New data category (e.g., adding HR or health data).
  • Model or vendor version changes that affect behavior. 1
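The four triggers above amount to a diff between the approved scope and a proposed change. A minimal sketch, assuming scope records are dicts with illustrative keys:

```python
# Hypothetical sketch: detect scope-change triggers by diffing scope records.
# Dict keys are illustrative; align them to your scope template.
def scope_change_triggers(current: dict, proposed: dict) -> list[str]:
    """Return the list of triggers fired by a proposed scope change."""
    triggers = []
    if set(proposed["decision_types"]) - set(current["decision_types"]):
        triggers.append("new decision type")
    if set(proposed["user_groups"]) - set(current["user_groups"]):
        triggers.append("new user group")
    if set(proposed["data_categories"]) - set(current["data_categories"]):
        triggers.append("new data category")
    if proposed["model_version"] != current["model_version"]:
        triggers.append("model/vendor version change")
    return triggers  # any non-empty result should force re-approval

current = {"decision_types": ["drafting"], "user_groups": ["analysts"],
           "data_categories": ["marketing copy"], "model_version": "v1"}
proposed = {"decision_types": ["drafting", "recommendation"],
            "user_groups": ["analysts", "frontline staff"],
            "data_categories": ["marketing copy"], "model_version": "v1"}
print(scope_change_triggers(current, proposed))  # ['new decision type', 'new user group']
```

Wiring a check like this into change-management intake means expansions cannot slip through as "just a config change."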

Step 6: Enforce scope with practical guardrails

Examples of enforceable guardrails:

  • Role-based access controls aligned to intended users.
  • UI warnings and required acknowledgments for restricted uses.
  • Logging that tags requests by use case to detect misuse.
  • Output constraints (blocklists, PII redaction, citation requirements for RAG).
  • Policy and training for users on allowed and prohibited uses. 1
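Two of those guardrails, use-case tagging in logs and blocking out-of-scope requests, can be combined at the request boundary. A minimal sketch with illustrative names (your gateway or middleware would own this logic):

```python
# Hypothetical sketch: tag every request with its use case in the log and
# refuse anything outside the approved set. Names are illustrative.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-usage")

APPROVED_USE_CASES = {"policy_qa", "marketing_draft"}

def handle_request(user_role: str, use_case: str, prompt: str) -> bool:
    """Log the request tagged by use case; return False for out-of-scope usage."""
    in_scope = use_case in APPROVED_USE_CASES
    log.info("use_case=%s role=%s in_scope=%s", use_case, user_role, in_scope)
    return in_scope  # caller blocks the request when False

print(handle_request("analyst", "eligibility_scoring", "score this applicant"))  # False
```

The tagged log line is what makes misuse detectable after the fact, even if the block itself is bypassed.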

Step 7: Set recurring evidence collection and ownership

Assign a control owner and define what evidence must be produced on a schedule (for example: scope review attestations, monitoring reports, change tickets). The point is to keep MAP-3.3 “alive” after go-live. 1
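A recurring evidence schedule is easy to check mechanically. A minimal sketch, assuming illustrative item names and cadences (none of these are prescribed by the framework):

```python
# Hypothetical sketch: flag recurring MAP-3.3 evidence that is overdue.
# Item names and cadences are illustrative examples only.
from datetime import date

CADENCE_DAYS = {
    "scope review attestation": 90,
    "monitoring report": 30,
    "change ticket reconciliation": 30,
}

def overdue_evidence(last_collected: dict[str, date], today: date) -> list[str]:
    """Return evidence items whose collection cadence has lapsed."""
    return [item for item, cadence in CADENCE_DAYS.items()
            if (today - last_collected.get(item, date.min)).days > cadence]

last = {"scope review attestation": date(2024, 1, 1),
        "monitoring report": date(2024, 3, 20),
        "change ticket reconciliation": date(2024, 3, 25)}
print(overdue_evidence(last, today=date(2024, 4, 1)))  # ['scope review attestation']
```

Running a check like this on a schedule, and routing the output to the control owner, is what keeps the requirement "alive" between audits.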

Daydream fit (where it earns a mention): If you struggle to keep scope documents, approvals, and recurring evidence in one place across many AI use cases and third parties, Daydream can track MAP-3.3 as a control with an owner, workflow, and evidence requests so you are not chasing screenshots before an audit.

Required evidence and artifacts to retain

Keep these artifacts in an audit-ready repository:

  • Targeted Application Scope document (current version + history).
  • Capability evidence (testing/evaluation summaries; known limitations register). 1
  • Context and impact notes (workflow diagrams, data flow diagrams, affected stakeholder analysis).
  • AI categorization rationale and mapping to required controls. 1
  • Approval records (sign-offs, risk acceptance, meeting minutes).
  • Change management records for any scope-affecting changes.
  • Guardrail configuration evidence (RBAC settings, policy acknowledgments, technical restrictions).
  • Monitoring and exception logs (misuse detection, escalations, corrective actions). 1

Common exam/audit questions and hangups

Auditors and internal reviewers tend to ask:

  1. “Show me the intended use and what is explicitly prohibited.”
  2. “Where is the proof the system can do what you claim in this context?” 1
  3. “How did you categorize the AI system, and what controls does that category trigger?”
  4. “What prevents teams from reusing the model for a new purpose without review?”
  5. “How do you detect and respond to out-of-scope usage?” 1

Hangup to expect: teams confuse “system scope” with “model scope.” MAP-3.3 cares about the application in context, not a model card stored in a dev folder. 1

Frequent implementation mistakes (and how to avoid them)

  1. Scope statements that read like marketing.
    Fix: Use testable verbs and boundaries (“may draft,” “may summarize,” “must not decide,” “must not ingest”). 1

  2. No out-of-scope list.
    Fix: Require at least one prohibited use per use case, even if it feels obvious.

  3. Ignoring user behavior and workflow.
    Fix: Add a one-page workflow map and identify where humans can override, approve, or misapply outputs. 1

  4. Scope not tied to access controls or UI.
    Fix: Make enforcement a release gate: no production without RBAC and user-facing restrictions.

  5. Scope documents that never get updated.
    Fix: Add scope change triggers to change management intake and require re-approval. 1

Enforcement context and risk implications

NIST AI RMF is a voluntary framework, not a regulation, so MAP-3.3 is typically tested through internal audit, customer due diligence, contractual commitments, or alignment to other legal regimes that expect risk-based controls and documented intended use. Weak MAP-3.3 implementation creates predictable downstream failures: unreviewed expansion into higher-impact decisions, inconsistent control selection, and inability to explain or defend how the system was meant to operate. 2

Practical 30/60/90-day execution plan

First 30 days (stabilize scope)

  • Inventory AI use cases in production and near-production.
  • Stand up a standard “Targeted Application Scope” template and approval workflow. 1
  • Pick a small set of high-visibility use cases and document scope, prohibitions, and owners.

Days 31–60 (bind scope to controls)

  • Define AI categorization criteria and map categories to control requirements. 1
  • Add change triggers to your change management process.
  • Implement at least one technical guardrail per system (RBAC, logging, input/output constraints).

Days 61–90 (operate and prove)

  • Start recurring evidence collection (scope review attestations, monitoring checks). 1
  • Run a tabletop misuse scenario: out-of-scope use detected, escalation, corrective action.
  • Prepare an audit packet per system with scope, capability evidence, context, categorization, and approvals.

Frequently Asked Questions

What counts as “targeted application scope” for a general-purpose model we did not train?

It is the scope of your deployment and workflow, not the vendor’s generic model description. Document the specific tasks, users, data, and prohibitions in your environment, then tie them to capability evidence you can defend. 1

How detailed does the out-of-scope list need to be?

Detailed enough to enforce and test. If a reviewer cannot tell whether a proposed use is allowed without a meeting, the scope is too vague. 1

Do we need a separate MAP-3.3 scope for every prompt or agent?

Create one scope per distinct business use case and risk category, then manage prompts/agents as configurations under that scope. If an agent changes the decision type or user population, treat it as a scope change. 1

Who should own MAP-3.3 in a three-lines-of-defense model?

The business and technical owners should own the scope content and controls, with Compliance/Risk owning the governance requirement, approvals, and evidence expectations. Assign a single control owner accountable for keeping artifacts current. 1

What is the minimum capability evidence auditors accept?

A documented evaluation summary relevant to the scoped task plus a limitations statement tied to the operating context. If performance varies by segment (language, channel, region), note constraints and enforce them. 1

How do we stop “scope creep” after launch?

Make scope a release gate and a change-management gate. Require re-categorization and re-approval for new decisions, new data categories, new user groups, or model upgrades that change behavior. 1

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF program page


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream