MAP-1.6: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
MAP-1.6 requires you to turn high-level AI principles (privacy, safety, fairness, security) into clear, testable system requirements that the right people actually understand, and then to prove that design decisions accounted for real-world socio-technical impacts. Operationalize it by running a cross-functional requirements workshop, documenting requirements-to-design traceability, and keeping evidence that tradeoffs and residual AI risks were reviewed and approved.
Key takeaways:
- Convert “respect privacy” into measurable requirements with owners, acceptance criteria, and tests.
- Confirm shared understanding across relevant AI actors with sign-offs and version control.
- Maintain traceability from requirement → design decision → risk assessment → validation evidence.
Compliance teams struggle with MAP-1.6 because it reads like “good engineering practice,” but it is a governance control: you must be able to show that system requirements were elicited from relevant AI actors, understood by them, and used to drive design decisions that consider socio-technical implications. MAP-1.6 is also where many AI risk programs fail audits: teams either keep requirements informal (slide decks, chat threads), or they document requirements but cannot demonstrate how those requirements shaped design choices, mitigations, and tests.
This requirement is part of the NIST AI Risk Management Framework (AI RMF) “MAP” function, which focuses on establishing context and mapping risks before and during AI system development and deployment. The operational goal is defensibility. If you cannot show who defined requirements, what those requirements meant, how the team interpreted them, and which design decisions resulted, you will not be able to demonstrate control over AI risks when challenged internally (model review, product council) or externally (customers, regulators, auditors).
Regulatory text
NIST AI RMF MAP-1.6 excerpt: “System requirements (e.g., ‘the system shall respect the privacy of its users’) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.”
What the operator must do:
- Elicit system requirements from the people who have legitimate stakes and accountability for AI outcomes (not only engineering).
- Make the requirements unambiguous and confirm shared understanding (definitions, scope, constraints, acceptance criteria).
- Show that design decisions explicitly considered socio-technical implications (human behavior, organizational process, misuse/abuse, downstream impacts), and that those considerations influenced the chosen design and controls.
Plain-English interpretation (requirement-level)
MAP-1.6 means: “Write down what the AI system must do and must not do, get the right groups to agree on what those statements mean, and document how those requirements shaped design choices to reduce AI risk.”
If your requirement is “respect user privacy,” MAP-1.6 expects you to translate that into implementable requirements such as data minimization rules, retention limits, access controls, training-data exclusions, logging constraints, and user transparency requirements, then confirm that product, legal/privacy, security, and engineering all interpret them the same way.
Who it applies to
Entities: Any organization developing, deploying, or materially modifying AI systems.
Operational context where it matters most:
- AI features that process personal data, sensitive data, or regulated data
- Customer-facing AI where user trust, transparency, and misuse risks are high
- AI used in employment, credit, healthcare, education, or other high-impact decisions
- AI sourced from a third party (including foundation models), where you still own integration requirements and risk decisions
Relevant AI actors (typical):
- Product owner and engineering lead (build decisions)
- Privacy counsel / data protection officer (data constraints)
- Security (threat modeling, abuse cases)
- Legal/compliance (regulatory and contractual constraints)
- Risk/Model governance (validation and approval)
- UX/research and operations (human factors, escalation paths)
- Third parties (model provider, annotators) where they influence requirements or controls
What you actually need to do (step-by-step)
Step 1: Define scope and “system boundaries”
- Write a one-page system context: purpose, users, decisions supported, input data types, outputs, and integration points.
- Identify where the AI is probabilistic and where deterministic rules apply.
- List third-party components (models, data, tooling) and what you control vs. what you inherit.
Deliverable: System Context & Boundary Statement (versioned).
Step 2: Run a requirements elicitation workshop (cross-functional)
Structure the session to produce artifacts, not discussion:
- Start with business goals and user journeys.
- Add constraints: privacy, security, safety, reliability, transparency, accessibility, and policy constraints.
- Capture socio-technical scenarios: foreseeable misuse, overreliance, operator workarounds, feedback loops, and impacts on affected groups.
Technique that works in practice: Require each function to provide “non-negotiables” and “tradeable” requirements. That forces explicit decision-making instead of vague principles.
Deliverable: Requirements Workshop Notes + Attendance + Action Log.
Step 3: Convert principles into testable “shall” requirements
For each requirement, include:
- Requirement statement (clear “shall” language)
- Rationale (risk addressed)
- Owner (who ensures implementation)
- Acceptance criteria (how you test)
- Verification method (test, review, monitoring)
- Traceability links (to risk register, threat model, DPIA/PIA if applicable)
Example translations
- Principle: “Respect privacy.”
  Requirement: “The system shall not store end-user prompts containing personal data in training datasets unless explicit opt-in consent is recorded and verified.”
- Principle: “Avoid harmful outputs.”
  Requirement: “The system shall block generation of disallowed content categories and log policy-trigger events for review.”
Deliverable: AI System Requirements Specification (SRS) with acceptance criteria.
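The requirement fields listed above can be captured in a machine-readable record so requirements get stable IDs and traceability links from day one. A minimal sketch in Python; the schema and IDs (PRIV-001, RISK-014) are illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One testable 'shall' requirement from the SRS (illustrative schema)."""
    req_id: str                 # stable ID, e.g. "PRIV-001"
    statement: str              # clear "shall" language
    rationale: str              # risk addressed
    owner: str                  # who ensures implementation
    acceptance_criteria: str    # how you test
    verification: str           # "test", "review", or "monitoring"
    trace_links: list[str] = field(default_factory=list)  # risk register, DPIA, DDR IDs

priv_001 = Requirement(
    req_id="PRIV-001",
    statement=("The system shall not store end-user prompts containing personal "
               "data in training datasets unless explicit opt-in consent is "
               "recorded and verified."),
    rationale="Privacy: unconsented reuse of personal data in training",
    owner="privacy-engineering",
    acceptance_criteria=("No prompt containing personal data appears in a "
                         "training batch without a verified consent record"),
    verification="test",
    trace_links=["RISK-014", "DPIA-2"],
)
```

Stable IDs like `PRIV-001` are what make the traceability matrix in Step 6 cheap to maintain.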
Step 4: Confirm “understood by relevant AI actors”
MAP-1.6 is explicit: elicited from and understood by relevant AI actors. That means you need proof of shared understanding.
- Add a definitions section (privacy terms, sensitive data categories, “user,” “consent,” “retention,” “explainability,” “high-impact use”).
- Use a RACI for requirement ownership and approval.
- Collect sign-off (or recorded dissent) from required roles.
Control tip: Treat sign-off like a release gate. If one required approver cannot sign, document the exception and risk acceptance path.
Deliverable: Requirements Approval Record (sign-offs, version, date, exceptions).
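Treating sign-off as a release gate can be enforced mechanically: the gate passes only when every required role has either approved or has a documented exception. A sketch under the assumption that your approval record is a simple role-to-status mapping (role names are illustrative):

```python
# Required approver roles for this AI risk tier (illustrative list).
REQUIRED_APPROVERS = {"product", "privacy", "security", "legal", "engineering"}

def release_gate(signoffs: dict[str, str],
                 exceptions: dict[str, str]) -> tuple[bool, set[str]]:
    """Pass only if every required role signed off or has a documented
    exception (mapped to a risk-acceptance record ID). Returns (passed,
    set of roles still missing)."""
    approved = {role for role, status in signoffs.items() if status == "approved"}
    covered = approved | set(exceptions)
    missing = REQUIRED_APPROVERS - covered
    return (not missing, missing)

ok, missing = release_gate(
    signoffs={"product": "approved", "security": "approved",
              "legal": "approved", "engineering": "approved"},
    exceptions={"privacy": "RISK-ACC-3"},  # documented risk acceptance
)
```

The point is that an exception is not a silent gap: it must carry a risk-acceptance identifier to count as coverage.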
Step 5: Tie design decisions to socio-technical implications
Create a lightweight “design decision record” (DDR) practice for AI. Each DDR should state:
- Decision taken (architecture, model choice, data pipeline, UI pattern, human-in-the-loop control)
- Options considered (at least two)
- Socio-technical implications considered (misuse, overreliance, access inequity, error recovery, monitoring burden)
- Risks introduced and mitigations selected
- Residual risk and acceptance owner
Deliverable: Design Decision Records linked to requirements and risk register.
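A DDR can be as lightweight as a structured record checked into the same repository as the SRS. A minimal sketch; the fields mirror the list above, and the IDs and values are hypothetical:

```python
# One design decision record (illustrative fields, not a standard format).
ddr_007 = {
    "ddr_id": "DDR-007",
    "decision": "Route flagged high-risk outputs to a human review queue",
    "options_considered": [
        "Fully automated blocking with no appeal path",
        "Human review queue with a response-time SLA",  # chosen
    ],
    "sociotechnical_implications": [
        "Reviewer overreliance on model confidence scores",
        "Monitoring and staffing burden on the operations team",
    ],
    "mitigations": ["Periodic reviewer sampling audits", "Queue-depth alerting"],
    "residual_risk": "low",
    "acceptance_owner": "model-governance",
    "requirements": ["SAFE-003", "PRIV-001"],  # traceability back to the SRS
}
```

Because the DDR lists the requirement IDs it implements, the record plugs directly into a traceability matrix.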
Step 6: Validate and operationalize in the SDLC
- Convert acceptance criteria into test cases (privacy tests, red teaming prompts, bias checks where applicable, security tests).
- Add monitoring requirements (drift, policy violations, escalation triggers).
- Ensure change management: requirement updates trigger review of DDRs, tests, and risk assessment.
Deliverable: Test plan, monitoring plan, and change control evidence mapped back to requirements.
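Converting an acceptance criterion into a test case can look like the sketch below, using the PRIV-001 example from Step 3. The helper names (`training_batch`, `consent_store`, `pii_fields`) are hypothetical stand-ins for your pipeline's real interfaces:

```python
def contains_personal_data(record: dict) -> bool:
    """Illustrative PII check; a real pipeline would use a detection service."""
    return bool(record.get("pii_fields"))

def verify_priv_001(training_batch: list[dict], consent_store: set[str]) -> list[str]:
    """Return IDs of records violating PRIV-001: personal data in a
    training batch without a verified opt-in consent record."""
    return [
        r["id"] for r in training_batch
        if contains_personal_data(r) and r["user_id"] not in consent_store
    ]

batch = [
    {"id": "r1", "user_id": "u1", "pii_fields": ["email"]},
    {"id": "r2", "user_id": "u2", "pii_fields": []},
]
violations = verify_priv_001(batch, consent_store={"u2"})  # flags r1
```

Running a check like this per release, and archiving the results, is exactly the validation evidence MAP-1.6 asks for.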
Required evidence and artifacts to retain (audit-ready)
Maintain these in a controlled repository with version history:
- System Context & Boundary Statement
- Stakeholder / “relevant AI actors” list and RACI
- Requirements workshop agenda, notes, attendance, action items
- System Requirements Specification with:
- “shall” requirements
- acceptance criteria and verification method
- owners
- definitions/glossary
- Requirements sign-offs and exceptions (risk acceptance if needed)
- Design Decision Records demonstrating socio-technical considerations
- AI risk register entries linked to requirements and design decisions
- Validation evidence (test results, review checklists, monitoring configuration)
- Third-party documentation where applicable (model cards, data sheets, contracts), plus your integration requirements and compensating controls
Common exam/audit questions and hangups
Auditors and internal reviewers commonly ask:
- “Show me one requirement like ‘respect privacy’ and how it became implementable controls.”
- “Who are the relevant AI actors, and where is their evidence of understanding?”
- “Where do you document socio-technical implications, and how did they change the design?”
- “How do requirement changes get governed after initial launch?”
- “If you rely on a third-party model, what requirements did you impose on integration and monitoring?”
Hangups that slow teams down:
- Requirements live in product tickets with no stable versioning.
- Sign-offs exist, but they are for a feature launch, not for the requirements themselves.
- “Socio-technical” is treated as narrative, not as decision-driving input.
Frequent implementation mistakes (and how to avoid them)
- Writing values statements instead of requirements.
  Fix: enforce “shall + acceptance criteria + verification method” for every high-level principle.
- Only engineering defines requirements.
  Fix: make privacy, security, legal/compliance, and operations required approvers for defined AI risk tiers.
- No traceability from requirement to design to test.
  Fix: add a traceability matrix. If you use Daydream or another GRC tool, map MAP-1.6 to a control owner and recurring evidence collection so traceability is continuous, not a scramble before reviews.
- Socio-technical implications are a one-time brainstorming exercise.
  Fix: require a DDR for every meaningful change (model swap, new data source, UI change that affects reliance, new user segment).
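A traceability matrix is just a join across the three record types; a small script can flag gaps continuously instead of before reviews. A sketch assuming the illustrative record shapes used earlier in this article (requirements with `req_id`, DDRs with a `requirements` list, tests keyed by `requirement`):

```python
def traceability_gaps(requirements: list[dict],
                      ddrs: list[dict],
                      tests: list[dict]) -> dict[str, dict[str, bool]]:
    """Flag requirements with no linked design decision or no linked test."""
    ddr_reqs = {req for d in ddrs for req in d["requirements"]}
    test_reqs = {t["requirement"] for t in tests}
    return {
        r["req_id"]: {
            "missing_ddr": r["req_id"] not in ddr_reqs,
            "missing_test": r["req_id"] not in test_reqs,
        }
        for r in requirements
        if r["req_id"] not in ddr_reqs or r["req_id"] not in test_reqs
    }

gaps = traceability_gaps(
    requirements=[{"req_id": "PRIV-001"}, {"req_id": "SAFE-003"}],
    ddrs=[{"ddr_id": "DDR-007", "requirements": ["PRIV-001"]}],
    tests=[{"test_id": "T1", "requirement": "PRIV-001"}],
)
# SAFE-003 surfaces with both links missing; PRIV-001 is fully traced.
```

Run it in CI or on a review cadence so an untested or undesigned requirement blocks release rather than surfacing in an audit.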
Enforcement context and risk implications
No public enforcement cases were provided in the supplied source catalog for MAP-1.6. Practically, MAP-1.6 reduces exposure in areas that commonly drive complaints, investigations, and contractual disputes: privacy failures, misleading user experiences, uncontrolled third-party components, and foreseeable misuse pathways. Treat it as a “defensibility” requirement: you are building a record that you identified risks, made informed tradeoffs, and implemented requirements with governance.
A practical 30/60/90-day execution plan
First 30 days (establish control design)
- Assign a MAP-1.6 control owner and define “relevant AI actors” for your organization by AI risk tier.
- Publish templates: System Context, SRS, DDR, traceability matrix, approval record.
- Pilot on one in-scope AI system and run the elicitation workshop end-to-end.
Days 31–60 (operate the control on real work)
- Convert pilot outputs into SDLC gates: requirements approval before build; DDR required for major design decisions.
- Train product and engineering leads on writing testable “shall” requirements and acceptance criteria.
- Stand up a central evidence folder structure and a simple audit checklist tied to MAP-1.6.
Days 61–90 (scale and make it repeatable)
- Expand to additional AI systems, including those sourced from third parties.
- Add recurring evidence collection (quarterly or per-release cadence as appropriate for your change rate) and exceptions workflow.
- Report metrics to governance: coverage of in-scope systems, evidence completeness, and open exceptions by severity.
Frequently Asked Questions
Who counts as “relevant AI actors” for MAP-1.6 in a typical enterprise?
Include the roles that define constraints, bear risk, or operate outcomes: product, engineering, privacy, security, legal/compliance, model governance, and operations. If a third party supplies the model or data, include them where they influence requirements or controls.
How do I turn “the system shall respect user privacy” into something auditable?
Write specific “shall” statements with acceptance criteria and verification method (test, review, monitoring). Then link each requirement to a design decision record and test evidence that shows the requirement was implemented.
Do we need a separate document, or can we keep this in Jira/ADO tickets?
Tickets can work if you enforce version control, stable IDs for requirements, required approvers, and traceability to DDRs and tests. Auditors need a coherent requirements baseline, not scattered comments across multiple epics.
What qualifies as “socio-technical implications” in practice?
Capture how humans and organizations interact with the system: overreliance, error recovery, escalation paths, incentives to misuse, accessibility barriers, and downstream impacts on affected groups. Record how those implications changed UI, controls, monitoring, or rollout decisions.
How does MAP-1.6 apply if we use a third-party foundation model?
You still must elicit and document integration requirements (data handling, logging, retention, safety controls, monitoring) and show how you addressed socio-technical risks in your product context. Keep third-party artifacts, but do not substitute them for your own requirements and design decisions.
What evidence is the fastest to produce if we’re behind?
Start with a system context statement, a requirements list with acceptance criteria, and sign-offs from required AI actors. Then backfill DDRs for the most consequential design choices and link them to risks and tests.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream