MAP-1.3: The organization’s mission and relevant goals for AI technology are understood and documented.

To meet the MAP-1.3 requirement (“The organization’s mission and relevant goals for AI technology are understood and documented”), you must write down (and get leadership to adopt) a clear statement of why your organization uses AI and what outcomes it will and won’t pursue, then tie those goals to specific AI use cases, risk tolerances, and governance decisions. This becomes the “north star” auditors will test against. 1

Key takeaways:

  • Document an AI mission-and-goals statement that is specific enough to approve or stop AI use cases. 1
  • Assign an accountable owner, approvals, and a review cadence so the document stays current as strategy and AI inventory change. 1
  • Keep evidence that the goals drive real decisions (intake triage, risk acceptance, model selection, and decommissioning). 1

MAP-1.3 sits early in the NIST AI RMF “Map” function because most AI governance failures start upstream: teams build or buy AI because it’s available, not because it fits the organization’s mission, risk appetite, and customer commitments. If your organization cannot clearly articulate what AI is for, you will struggle to defend why a high-impact use case exists, why a specific model was chosen, or why certain risks were accepted.

For a Compliance Officer, CCO, or GRC lead, operationalizing MAP-1.3 means turning strategy into a control: a short, approved set of AI goals that connect to business objectives and explicitly constrain AI uses that conflict with legal, ethical, security, or customer expectations. Done well, MAP-1.3 gives you a stable anchor for downstream controls: AI inventory scoping, risk tiering, testing depth, third-party due diligence, documentation standards, and incident response triggers. 2

This page provides requirement-level steps, audit-ready artifacts, and common examiner hangups so you can implement quickly without building a “strategy deck” that no one uses.

Regulatory text

Requirement (MAP-1.3): “The organization’s mission and relevant goals for AI technology are understood and documented.” 1

Operator interpretation: You need a documented, leadership-approved statement of (1) why the organization uses AI and (2) what “success” means for AI in your context, expressed as measurable or decision-driving goals. “Understood” implies the right stakeholders can explain it and that it influences governance outcomes (what gets approved, what gets blocked, and what gets escalated). 1

Plain-English interpretation (what this really means)

MAP-1.3 requires a practical AI mission-and-goals record that answers:

  • Why AI at all? Efficiency, fraud detection, accessibility, customer support, safety, scientific discovery, etc.
  • Where is AI allowed? Which domains, products, and decisions.
  • What constraints apply? Risk appetite, regulatory boundaries, customer promises, security posture, and brand commitments.
  • How will you know it’s working? Outcome goals and guardrail goals (quality, fairness, safety, privacy, robustness) that shape design and oversight. 1

If you can’t use the document to make a go/no-go decision on a proposed AI use case, it is too vague to satisfy the “operationalize quickly” test.
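That go/no-go test can be made concrete. The following is a minimal sketch, not a prescribed RMF mechanism; every goal name and prohibited-use tag below is a hypothetical example you would replace with your organization’s own documented goals and “won’t do” boundaries:

```python
# Hypothetical goal and boundary lists; substitute your documented AI goals
# and restricted/prohibited uses.
APPROVED_GOALS = {
    "reduce-manual-review",
    "improve-fraud-detection",
    "improve-accessibility",
}
PROHIBITED_USES = {
    "fully-automated-high-impact-decision",
    "sensitive-data-training-without-legal-basis",
}

def screen_use_case(stated_goals: set[str], use_tags: set[str]) -> str:
    """Return a go/no-go/escalate outcome for a proposed AI use case."""
    if use_tags & PROHIBITED_USES:
        return "reject"               # hits a documented "won't do" boundary
    if not stated_goals & APPROVED_GOALS:
        return "escalate"             # no alignment to any documented goal
    return "approve-with-conditions"  # aligned; downstream controls still apply

print(screen_use_case({"improve-fraud-detection"}, {"customer-facing"}))
# approve-with-conditions
```

If the mission/goals document cannot be reduced to a decision rule at roughly this level of specificity, reviewers will reach inconsistent conclusions from it.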

Who it applies to

Entities: Any organization developing, deploying, or managing AI systems, including those using AI embedded in third-party products. 1

Operational contexts where examiners will expect it:

  • Central AI/ML platform teams enabling many business units.
  • Product organizations embedding AI into customer-facing features (recommendations, personalization, content moderation, automated support).
  • Enterprise functions using AI for workforce, finance, compliance monitoring, or security operations.
  • Procurement-heavy shops buying AI capabilities from third parties (SaaS with AI, foundation model APIs, managed services). 1

Practical scoping note: MAP-1.3 should cover both internally built AI and externally sourced AI. If your “mission for AI” ignores third-party AI, your governance will miss a major portion of exposure.

What you actually need to do (step-by-step)

1) Name the control owner and approval path

  • Assign a single accountable owner (often the CCO, CIO, Chief Risk Officer, or Head of AI Governance).
  • Define approvers: business leadership plus risk/compliance and, where relevant, security and privacy.
  • Define where it lives: policy repository, GRC tool, or controlled document system with versioning. 1

2) Gather inputs you already have (don’t start from blank paper)

Collect the minimum inputs needed to make the AI goals defensible:

  • Corporate mission/strategy statements.
  • Enterprise risk appetite and material risk categories.
  • Privacy and security principles, data classification rules, and customer commitments.
  • Existing AI inventory or list of known AI use cases (even if incomplete).
  • Regulatory obligations relevant to your sector and geographies (don’t restate laws; capture constraints as “AI will not be used to…” or “AI use requires…”). 1

3) Draft an “AI mission and goals” statement that drives decisions

Keep it short, specific, and testable. Include:

A. Mission statement (1–3 sentences). Example structure:

  • “We use AI to [business purpose] while maintaining [customer promise/risk constraints].”

B. Goal categories (the minimum set)

  • Business outcome goals: e.g., reduce manual review, improve detection, improve accessibility.
  • Risk/guardrail goals: privacy protection, security robustness, safety, non-discrimination expectations, human oversight for certain decisions.
  • Operational goals: documentation standards, monitoring expectations, incident readiness.

C. Explicit “won’t do” boundaries. Write down disallowed or restricted uses based on your context, such as:

  • Fully automated decisions in high-impact contexts without human review.
  • Training models on certain sensitive data classes without approved legal basis and controls.
  • Deploying generative outputs to customers without disclosure or safety review, where applicable to your risk posture.

This “won’t do” list is what auditors look for when checking whether governance is real. 1

4) Map goals to AI use cases and risk tiers

Create a simple mapping table:

  • Use case → Business owner → Goal alignment → Risk tier → Required approvals → Required testing and monitoring.

The point is traceability: each in-scope AI system should be justifiable against documented goals, and high-risk systems should show enhanced controls. 1
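One lightweight way to keep that traceability machine-checkable is a register of per-use-case records whose risk tier drives the required controls. The field names, tier labels, and control baselines below are illustrative assumptions, not RMF requirements:

```python
from dataclasses import dataclass

# Illustrative control baselines per risk tier; tune to your risk appetite.
TIER_CONTROLS = {
    "high":   {"approvals": ["business-owner", "risk", "compliance"], "testing": "enhanced"},
    "medium": {"approvals": ["business-owner", "risk"],               "testing": "standard"},
    "low":    {"approvals": ["business-owner"],                       "testing": "baseline"},
}

@dataclass
class UseCaseRecord:
    name: str
    business_owner: str
    aligned_goals: list[str]   # must reference documented AI goals
    risk_tier: str             # "high" | "medium" | "low"

    def required_controls(self) -> dict:
        return TIER_CONTROLS[self.risk_tier]

    def is_traceable(self) -> bool:
        # Each in-scope system must be justifiable against at least one goal.
        return bool(self.aligned_goals) and self.risk_tier in TIER_CONTROLS

record = UseCaseRecord("claims-triage", "ops-lead", ["reduce-manual-review"], "high")
print(record.is_traceable(), record.required_controls()["testing"])
# True enhanced
```

A record with an empty `aligned_goals` list fails the traceability check, which is exactly the gap auditors probe when they pick a system at random.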

5) Socialize and confirm “understood”

“Understood” requires more than publication:

  • Hold a review session with business, product, engineering, security, privacy, and compliance.
  • Record decisions and disagreements; resolve them and update the statement.
  • Train intake reviewers (AI governance committee, architecture review board, procurement) to apply the goals during approvals. 1

6) Embed into operating processes (make it enforceable)

Update the mechanisms that turn goals into actions:

  • AI intake / use case approval: require an “alignment to AI mission/goals” section.
  • Third-party due diligence: require vendors to explain how their AI supports your stated goals and meets constraints (data rights, monitoring, transparency, safety).
  • Model risk management / testing: set baseline expectations driven by risk goals.
  • KPIs and reporting: report progress against the goals and violations of constraints. 1
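The intake-template change above can be enforced as a simple completeness gate. This is a sketch under assumed section names (your actual template fields will differ); the point is that `goal_alignment` and `constraint_check` become required, not optional:

```python
# Hypothetical required sections for an AI intake form; adapt to your template.
REQUIRED_SECTIONS = {
    "use_case_description",
    "goal_alignment",        # the MAP-1.3 hook: which documented goals this supports
    "constraint_check",      # confirms no restricted/prohibited use applies
    "risk_tier_proposal",
}

def intake_is_complete(submission: dict) -> list[str]:
    """Return the missing sections; an empty list means ready for review."""
    filled = {k for k, v in submission.items() if v}
    return sorted(REQUIRED_SECTIONS - filled)

missing = intake_is_complete({
    "use_case_description": "chat deflection",
    "goal_alignment": "reduce-manual-review",
})
print(missing)  # ['constraint_check', 'risk_tier_proposal']
```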

7) Establish ongoing review and change control

Define triggers for review:

  • Material strategy changes, major incidents, entry into new markets, new model classes (e.g., generative), or changes in data practices.
  • Refresh when AI inventory changes materially. Track versions, approvals, and rationale. 1
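A version-controlled change log with explicit review triggers can be sketched as follows. The trigger names mirror the examples above; the entry fields and the “cro” approver are illustrative assumptions:

```python
from datetime import date

# Review triggers named above; any one should open a governance review.
REVIEW_TRIGGERS = {
    "strategy-change", "major-incident", "new-market",
    "new-model-class", "data-practice-change", "material-inventory-change",
}

change_log: list[dict] = []

def record_revision(version: str, trigger: str, approver: str, rationale: str) -> dict:
    """Append an auditable entry: what changed, why, and who approved it."""
    if trigger not in REVIEW_TRIGGERS:
        raise ValueError(f"unrecognized review trigger: {trigger}")
    entry = {
        "version": version,
        "date": date.today().isoformat(),
        "trigger": trigger,
        "approver": approver,
        "rationale": rationale,
    }
    change_log.append(entry)
    return entry

record_revision("1.1", "new-model-class", "cro", "Added generative AI boundaries")
```

Rejecting unrecognized triggers keeps the log’s rationale field tied to the documented trigger list rather than free-text justifications.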

Required evidence and artifacts to retain (audit-ready)

Maintain a packet that shows design and operation:

  1. AI Mission and Goals Statement (version-controlled, approved) 1
  2. Approval records (sign-offs, meeting minutes, governance committee decisions)
  3. AI use case mapping (inventory excerpt or register view showing goal alignment)
  4. AI intake template with a required “alignment” section and sample completed intakes
  5. Training/communications evidence (slides, attendance records, intranet publication record)
  6. Change log (what changed, why, who approved)
  7. Decision evidence: at least a few examples where goals caused an outcome (approved with conditions, rejected, escalated, decommissioned)

If you use Daydream for GRC workflow, store the mission/goals statement as a controlled policy artifact and link it directly to each AI system record, intake ticket, and third-party record to preserve traceability during audits.

Common exam/audit questions and hangups

Auditors and internal reviewers tend to ask:

  • “Show me where AI goals are documented and who approved them.” 1
  • “How do these goals constrain what teams can build or buy?”
  • “Pick an AI system at random. Show how it aligns to goals and where misalignment would be caught.”
  • “How do third-party AI services fit into your mission and constraints?”
  • “How do you keep this updated when strategy changes?”

Hangups that stall reviews:

  • Goals are copied from generic AI ethics language and don’t map to your business.
  • “Understood” is asserted, but there’s no evidence of communication or use in approvals.
  • Inventory exists, but there’s no linkage between systems and goals. 1

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Writing a glossy AI principles memo without governance hooks.
    Fix: Require “goal alignment” in intake and procurement workflows, and keep examples of decisions driven by it. 1

  2. Mistake: No explicit boundaries.
    Fix: Add a “restricted and prohibited AI uses” section, plus escalation criteria.

  3. Mistake: Treating third-party AI as out of scope.
    Fix: Include purchased AI in the mapping table and require vendors to meet your guardrail goals.

  4. Mistake: No clear owner, so updates never happen.
    Fix: Assign an accountable owner and define change triggers in your governance charter. 1

Enforcement context and risk implications

NIST AI RMF is a framework, not a regulator, so MAP-1.3 is typically tested through customer audits, internal audit, board risk oversight, procurement requirements, and alignment to sector regulations you may already be subject to. The risk of a weak MAP-1.3 implementation is practical: inconsistent AI decisions across business units, uncontrolled third-party AI adoption, and inability to justify risk acceptance during an incident review or external inquiry. 3

Practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Assign owner and approvers; set document control location.
  • Pull existing mission/strategy, risk appetite, privacy/security policies, and current AI use case list.
  • Draft the AI mission and goals statement with a first-pass “won’t do” list. 1

By 60 days (Near-term)

  • Run stakeholder workshops; resolve conflicts and finalize v1 with leadership approval.
  • Create the mapping table connecting goals to current AI use cases and risk tiers.
  • Update AI intake and procurement due diligence templates to require alignment and constraint checks. 1

By 90 days (Operationalize and prove)

  • Train reviewers and frontline approvers (product, procurement, security, privacy, risk).
  • Collect decision evidence: a small set of completed intakes and at least one example of a conditional approval or rejection tied to goals.
  • Implement a review trigger and change log process; schedule the next governance review. 1

Frequently Asked Questions

Do we need a separate “AI mission” if we already have a corporate mission statement?

Yes, because MAP-1.3 expects AI-specific goals and boundaries that can be applied to approve or stop AI use cases. Your corporate mission is an input, but it rarely provides operational constraints for AI. 1

How detailed should the goals be?

Detailed enough that an intake reviewer can determine whether a proposed AI system supports the goals and fits the constraints. If two reviewers reach opposite conclusions using the document, it needs tighter language or examples. 1

Who should approve the AI mission and goals statement?

Business leadership must own the “why” and outcomes, while risk/compliance, security, and privacy confirm constraints and oversight expectations. Keep approval evidence and version history. 1

Does MAP-1.3 apply if we only buy AI from third parties and don’t build models?

Yes. Your organization still decides where AI is used, what data is shared, and what risks are acceptable. Your mission and goals should constrain third-party selection and deployment conditions. 1

How do we prove “understood” during an audit?

Show communications and training artifacts, plus operational evidence that goals influence intake, procurement, and risk decisions. Meeting minutes and completed intake records usually work well. 1

What’s the fastest way to operationalize MAP-1.3 without boiling the ocean?

Start with a v1 mission/goals statement tied to your current AI inventory, then embed a single required “goal alignment” check into the AI intake and third-party onboarding workflows. Expand scope as the inventory matures. 1

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF Core; NIST AI RMF program page

  3. NIST AI RMF program page

