MAP-1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.
To meet MAP-1.2, you must staff the “context establishment” work for each AI system with an interdisciplinary group that reflects demographic diversity and broad domain and user experience expertise, and you must document who participated, in what role, and what decisions they influenced. Treat this as a governance-and-evidence control, not a hiring slogan. 1
Key takeaways:
- Build a repeatable role-and-competency matrix for AI context setting (business, technical, legal, risk, UX, impacted user perspectives).
- Create a documented participation trail (rosters, meeting notes, approvals, decision log) tied to each AI system.
- Prioritize interdisciplinary collaboration by making it a required gate for model scoping, intended use, and impact assessment. 1
MAP-1.2 sits in the “Map” function of the NIST AI Risk Management Framework and focuses on who sets the context for an AI system and how you prove it happened. The practical intent is straightforward: AI risk decisions get worse when they are made by a narrow set of stakeholders, or when key user and domain perspectives are missing. MAP-1.2 pushes you to formalize an interdisciplinary approach that includes demographic diversity and real user experience expertise, then keep records that show participation was real and consequential. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing MAP-1.2 is to treat it like any other control: define scope, assign owners, set minimum required roles/competencies, embed the requirement into SDLC/MLLC gates, and collect evidence on a cadence. Done well, MAP-1.2 reduces downstream rework in privacy reviews, bias testing, model risk sign-off, and customer escalations because assumptions about intended use and affected users are challenged early, with a clear audit trail. 2
Regulatory text
Excerpt (MAP-1.2): “Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.” 1
What the operator must do:
You must (1) ensure the group establishing AI context is interdisciplinary and includes demographic diversity plus domain and user experience expertise, (2) make collaboration the default by embedding it into your lifecycle activities, and (3) maintain documentation that proves participation (not just that a committee exists). 1
Plain-English interpretation
MAP-1.2 requires a defensible, repeatable way to answer:
- Who helped define the system’s purpose, intended users, impacted groups, operating environment, and misuse scenarios?
- Why were those people qualified (competencies and lived/user experience)?
- How did their input change scope, requirements, or risk decisions?
- Where is the evidence?
This is not a mandate to collect sensitive personal data about staff. It is a requirement to ensure the context-setting process reflects diversity and broad expertise, and that you can demonstrate it through documentation. 1
Who it applies to
Entities: Organizations that develop, fine-tune, procure, integrate, or deploy AI systems, including those embedding third-party models into products or internal decisioning. 1
Operational contexts where MAP-1.2 matters most:
- High-impact decision workflows (credit, employment, insurance, healthcare triage, identity verification).
- Customer-facing generative AI (support agents, copilots) where user harm can arise from unsafe responses or poor UX assumptions.
- Internal surveillance, fraud, or productivity tools that affect employees or contractors.
- Regulated environments where governance evidence is regularly tested (banking, health, critical infrastructure), even if NIST AI RMF is “voluntary.”
Third-party angle: If a third party provides the model, MAP-1.2 still applies to your context establishment for your use case. Your interdisciplinary team must shape the intended use, limitations, monitoring, and guardrails for the deployment. 2
What you actually need to do (step-by-step)
1) Define “context establishment” as a control-scoped activity
Write a short procedure that states which lifecycle events require an interdisciplinary context workshop (or equivalent), for example:
- New AI system intake
- Material change to model, training data, prompts, or decision logic
- New user population, geography, language, or channel
- New high-risk use case or integration
Make it a gate: you cannot finalize intended use, requirements, or risk rating until this step is complete. 1
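As a sketch of how this gate could be wired into intake tooling (the event names, record fields, and trigger list below are illustrative assumptions, not MAP-1.2 requirements):

```python
# Illustrative gate check: intended use cannot be finalized until every
# triggering lifecycle event has a documented context workshop.
# Event names and the record shape are assumptions for this sketch.

TRIGGER_EVENTS = {
    "new_system_intake",
    "material_model_change",
    "new_user_population",
    "new_high_risk_use_case",
}

def gate_passes(system_record: dict) -> bool:
    """True only if no trigger event is missing its workshop reference."""
    pending = [
        e for e in system_record.get("events", [])
        if e["type"] in TRIGGER_EVENTS and not e.get("context_workshop_id")
    ]
    return not pending

record = {
    "system": "support-copilot",
    "events": [
        {"type": "new_system_intake", "context_workshop_id": "CW-101"},
        {"type": "material_model_change", "context_workshop_id": None},
    ],
}
print(gate_passes(record))  # → False: the model change has no workshop yet
```

The point of the sketch is that the gate is data-driven: a release simply cannot advance while any trigger event lacks a workshop reference.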
2) Create an interdisciplinary role and competency matrix
Build a matrix with required roles (or competencies) and acceptable alternates. Keep it practical:
| Category | Minimum participation expectations (examples) |
|---|---|
| Business owner | Accountable for intended use, success criteria, and constraints |
| Product/UX research | Brings user journeys, accessibility, usability risk, and feedback loops |
| Domain SME | Validates domain assumptions (e.g., lending policy, clinical workflow) |
| Data science/ML engineering | Explains model behavior limits, data dependencies, evaluation approach |
| Security | Threat modeling, abuse cases, access controls |
| Privacy | Data minimization, purpose limitation, notice/consent alignment where relevant |
| Legal/Compliance | Policy alignment, documentation standards, governance sign-offs |
| Risk/Model risk | Challenges assumptions, sets monitoring and issue triggers |
| Frontline operations | Real operating conditions, exceptions, workarounds, escalation paths |
| Impacted user perspective | Representative viewpoints from affected users or advocates where feasible |
MAP-1.2 specifically calls out demographic diversity and broad domain and user experience expertise. Your matrix should explicitly require UX/user research and a mechanism to include impacted perspectives (internal ERGs, customer advisory panels, accessibility specialists, community advocates, or trained user proxies). Document selection criteria and conflicts of interest. 1
3) Set participation rules and decision rights
Define:
- Who is required vs. “consulted”
- What constitutes meaningful participation (attendance plus documented input)
- Decision rights (who approves intended use, constraints, risk treatment plan)
- Escalation if required perspectives are missing
This prevents rubber-stamping where names appear on a roster but no challenge function occurs.
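A minimal sketch of how those participation rules could be checked automatically; the role keys mirror the matrix above, and the roster record format is an assumption:

```python
# Illustrative roster check: flags missing required roles and participants
# who attended without documented input (rubber-stamping). Role keys and
# the roster record shape are assumptions for this sketch.

REQUIRED_ROLES = {
    "business_owner", "ux_research", "domain_sme",
    "data_science", "risk", "impacted_user_perspective",
}

def review_roster(roster: list) -> dict:
    present = {p["role"] for p in roster}
    missing = sorted(REQUIRED_ROLES - present)
    rubber_stamps = [
        p["name"] for p in roster
        if p["role"] in REQUIRED_ROLES and not p.get("documented_input")
    ]
    return {
        "missing_roles": missing,
        "attended_without_input": rubber_stamps,
        "escalate": bool(missing or rubber_stamps),
    }

roster = [
    {"name": "J. Rivera", "role": "business_owner",
     "documented_input": "Narrowed intended use to English-language support"},
    {"name": "A. Chen", "role": "data_science", "documented_input": ""},
]
result = review_roster(roster)
print(result["escalate"])  # → True: roles are missing and one input is empty
```

Requiring a non-empty `documented_input` per required role is what turns "attendance" into "meaningful participation" in the evidence.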
4) Run a structured “context workshop” and capture outputs
Use a standard agenda and template tied to your AI intake. Minimum outputs to capture:
- Intended use and out-of-scope use
- Users and impacted groups (including foreseeable misusers)
- Operating environment assumptions (languages, channels, geographies, accessibility needs)
- Harm analysis at the scenario level (what could go wrong for whom)
- System boundaries and dependencies (data sources, third parties, human reviewers)
- Risk controls that become requirements (guardrails, monitoring, human-in-the-loop)
Record dissenting views and how they were resolved. Auditors often look for evidence of challenge, not perfect consensus.
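The minimum outputs above can be enforced as a completeness check on the workshop template; the field names here are illustrative assumptions that mirror that list:

```python
# Illustrative completeness check for the context worksheet. Field names
# are assumptions mirroring the minimum outputs listed above.

REQUIRED_FIELDS = [
    "intended_use", "out_of_scope_use", "users_and_impacted_groups",
    "operating_environment", "harm_scenarios", "system_boundaries",
    "controls_as_requirements", "dissent_and_resolution",
]

def worksheet_gaps(worksheet: dict) -> list:
    """Return the required fields that are empty or missing."""
    return [f for f in REQUIRED_FIELDS if not worksheet.get(f)]

draft = {"intended_use": "Summarize support tickets for agents",
         "harm_scenarios": "Hallucinated refund promises to customers"}
print(worksheet_gaps(draft))
```

A workshop does not close until `worksheet_gaps` returns an empty list, which keeps the dissent-and-resolution record from being skipped.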
5) “Prioritize collaboration” by embedding cross-functional touchpoints
Convert MAP-1.2 from a one-time workshop into ongoing collaboration:
- Add required reviewers to product requirement documents and model cards.
- Require UX and domain review before launch approvals.
- Include operations and support in monitoring design (what they will see, triage steps).
6) Implement recurring evidence collection
Assign a control owner (often AI governance lead, Model Risk, or GRC) and define:
- Evidence to collect per system and per release
- Where evidence is stored (GRC system, ticketing, repository)
- Review cadence aligned to your release cycle
If you use Daydream, map MAP-1.2 to a control with owners, tasks, and recurring evidence requests so you can produce a clean audit packet per AI system without chasing screenshots at the last minute. 1
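One way to make the recurring evidence check concrete; the artifact names follow the evidence list in this article, while the release-to-artifact storage mapping is an assumption:

```python
# Illustrative recurring evidence check: reports missing MAP-1.2 artifacts
# per AI system release. Artifact names follow this article's evidence
# list; the storage mapping is an assumption for this sketch.

EXPECTED_ARTIFACTS = [
    "roster", "agenda", "context_worksheet", "decision_log", "approvals",
]

def audit_packet_status(stored: dict) -> dict:
    """Map each release to the artifacts it is still missing."""
    return {
        release: [a for a in EXPECTED_ARTIFACTS if a not in artifacts]
        for release, artifacts in stored.items()
    }

status = audit_packet_status({
    "support-copilot v1.2": ["roster", "context_worksheet", "decision_log"],
    "fraud-model v3.0": EXPECTED_ARTIFACTS,
})
print(status["support-copilot v1.2"])  # → ['agenda', 'approvals']
```

Running this on a cadence tied to releases is what produces the "clean audit packet per AI system" without last-minute screenshot chasing.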
Required evidence and artifacts to retain
Keep artifacts tied to a specific AI system/version/release:
Governance design artifacts
- AI context establishment procedure (MAP-1.2 control narrative)
- Role/competency matrix with required roles and alternates
- RACI for context-setting decisions and escalation criteria
Operational evidence
- Participant roster with roles, org, and relevant competency justification
- Meeting invites/attendance logs or workshop sign-in
- Agenda and pre-reads
- Context worksheet outputs (intended use, user groups, environment assumptions)
- Decision log showing how input changed requirements or risk acceptance
- Approvals/sign-offs (product, risk, compliance) tied to release
Quality evidence
- Exceptions register (when a required role was not available) with compensating controls
- Retrospective notes (lessons learned) feeding into the matrix and procedure
Common exam/audit questions and hang-ups
Expect these, and pre-answer them in your artifacts:
- “Show me who established the intended use and user population.” Provide roster + decision log + context worksheet.
- “Where is UX/user experience represented?” Point to named roles and specific UX outputs (journeys, accessibility review, user testing plans).
- “How did demographic diversity influence decisions?” Don’t force personal disclosures; show representation mechanisms (diverse panels, ERG participation, accessibility experts) and decision impacts. 1
- “How do you ensure this isn’t a one-off?” Provide the control schedule, evidence cadence, and examples across multiple systems/releases.
- “What about third-party models?” Show your internal context-setting work plus third-party due diligence inputs and integration constraints.
Frequent implementation mistakes and how to avoid them
- Mistake: Committee on paper, no traceable outputs. Fix: require a context worksheet and decision log as gate artifacts.
- Mistake: Demographic diversity treated as a hiring metric. Fix: define participation mechanisms that do not require collecting sensitive HR data; document the mechanism and the outcomes.
- Mistake: UX invited after build. Fix: make UX a required reviewer at intake and before launch approval.
- Mistake: Only technical roles present. Fix: enforce the role/competency matrix with escalation if roles are missing.
- Mistake: No linkage to change management. Fix: trigger MAP-1.2 refresh on material change events in SDLC/MLLC.
Enforcement context and risk implications
NIST AI RMF is a voluntary framework, not a regulation, so MAP-1.2 carries no direct statutory penalties. The risk is practical and exam-driven: weak interdisciplinary context-setting leads to foreseeable harms being missed (biased outcomes, accessibility failures, unsafe outputs, privacy surprises) and leaves you without defensible governance evidence when customers, regulators, or internal audit ask how you assessed impact. 2
Practical 30/60/90-day execution plan
First 30 days (stand up the control)
- Name an executive sponsor and a control owner for MAP-1.2.
- Define scope: which AI systems and which lifecycle events trigger context establishment.
- Publish the role/competency matrix and RACI.
- Create templates: context worksheet, decision log, roster, exception form.
By 60 days (operate it on real systems)
- Pilot on a small set of in-flight AI initiatives (include at least one third-party model integration).
- Hold context workshops, produce artifacts, and run a retro to tighten templates.
- Train product, data science, and risk partners on “what good looks like” and what evidence must be saved.
By 90 days (scale and audit-proof)
- Embed MAP-1.2 as an intake and release gate in your SDLC/MLLC tooling.
- Centralize evidence storage and establish recurring evidence checks.
- Start periodic governance reporting: which systems have complete participation evidence, open exceptions, and upcoming refresh triggers.
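The periodic governance report in the last bullet can start as a simple per-system rollup; the record shape below is an assumption for illustration:

```python
# Illustrative governance rollup: per-system status of participation
# evidence, open exceptions, and refresh triggers. The record shape is
# an assumption for this sketch.

def governance_report(systems: list) -> list:
    lines = []
    for s in systems:
        ok = s["evidence_complete"] and not s["open_exceptions"]
        state = "complete" if ok else "needs attention"
        lines.append(
            f'{s["name"]}: {state} '
            f'(open_exceptions={len(s["open_exceptions"])}, '
            f'refresh_due={s.get("refresh_due", "none")})'
        )
    return lines

report = governance_report([
    {"name": "support-copilot", "evidence_complete": True,
     "open_exceptions": [], "refresh_due": "2025-09-01"},
    {"name": "fraud-model", "evidence_complete": False,
     "open_exceptions": ["no UX reviewer for v3.0"]},
])
print(report[1])  # flags the system missing a UX reviewer
```

Even a rollup this simple gives leadership the three signals the bullet asks for: completeness, open exceptions, and upcoming refresh triggers.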
Frequently Asked Questions
Do we have to collect employees’ demographic data to prove “demographic diversity”?
No. MAP-1.2 requires that context-setting reflects demographic diversity; you can meet the intent through documented participation mechanisms (for example, structured inclusion of accessibility experts, ERGs, user panels, or advocates) without storing sensitive HR attributes. Keep the focus on documented representation and decision impact. 1
What counts as “user experience expertise” for MAP-1.2?
UX expertise should come from someone who can represent real user journeys and failure modes, not just UI design. User research, accessibility, service design, and frontline support perspectives all qualify if their contributions are documented in the context outputs. 1
We buy an AI model from a third party. Are we still on the hook?
Yes for your deployment context. The third party can provide model documentation, but you still need an interdisciplinary group to define intended use, constraints, impacted users, and monitoring for your environment, then document that participation. 2
How do we handle small teams that can’t staff every role?
Use the matrix to define acceptable alternates and allow exceptions with compensating controls (for example, external advisors, rotating reviewers, or a centralized risk/UX function). Document the exception and the mitigation so it’s auditable. 1
What documentation is the minimum viable evidence for an audit?
Keep a roster (who/roles), a context worksheet (what was decided about use/users/environment), and a decision log (how interdisciplinary input changed requirements or risk treatment). Without those, MAP-1.2 is hard to defend. 1
How do we prove collaboration was “prioritized” rather than optional?
Make participation a lifecycle gate with required sign-offs, and show it triggered on new systems and material changes. Your change management records should reference the MAP-1.2 artifacts for the same release. 1
Footnotes
1. NIST AI RMF Core.
2. NIST AI RMF program page.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream