GOVERN-3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.

To meet GOVERN-3.2, you must document who configures AI, who reviews and approves those configurations, who monitors performance and risk, and who can override or shut down the system. Then you must operate that model consistently and retain evidence. 1

Key takeaways:

  • Define “human vs AI” decision points and assign accountable owners for each stage of the AI lifecycle. 1
  • Separate configuration authority from oversight authority, and document escalation and override paths. 1
  • Keep audit-ready artifacts: RACI, SOPs, approval logs, access controls, and monitoring/override records. 1

GOVERN-3.2 is a governance control disguised as a documentation requirement. Examiners and internal stakeholders rarely care that you have a policy with nice language; they care that you can point to named roles, show how those roles interact with AI system configuration, and prove those interactions happened for real systems in production. This requirement focuses on “human-AI configurations” and “oversight,” which means you must be explicit about where humans set objectives, thresholds, prompts, features, guardrails, and decision logic, and where humans supervise outcomes, monitor drift, investigate incidents, and intervene.

Operationally, the fastest path is to create a role-and-responsibility model that works across (1) AI built in-house, (2) AI bought from a third party, and (3) AI embedded in broader business processes (for example, fraud review, customer support, underwriting, marketing, HR screening). You are defining the control plane: who can change the model or its operating parameters, who can approve that change, and who must independently validate and monitor it over time. 1

Regulatory text

Requirement (verbatim): “Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.” 1

What the operator must do:

  1. Put written governance into effect (policy + procedures), not just a slide deck. 1
  2. Clearly differentiate roles across two domains:
  • Human-AI configuration: who sets up the system (data inputs, model selection, prompts, thresholds, guardrails, integrations, deployment settings, retraining triggers).
  • Oversight: who monitors, reviews, audits, approves exceptions, investigates issues, and has authority to pause/rollback/shut down. 1
  3. Run the model: approvals, monitoring, and escalation must be evidenced for each AI system in scope. 1

Plain-English interpretation

You need a defensible operating model for AI accountability. For every AI system, a reviewer should be able to answer, quickly and consistently:

  • Who is accountable for the AI system’s business purpose and risk acceptance?
  • Who can change the AI system’s configuration, and under what controls?
  • Who provides independent oversight, and what do they review?
  • Who can override the AI output, and when must they do so?
  • What happens when the AI behaves unexpectedly or causes harm?

If you cannot answer those questions with names (or role titles), documented procedures, and records, you have not met GOVERN-3.2. 1

Who it applies to

Entity scope: Any organization developing or deploying AI systems. 1
Operational scope: Apply this to AI used in:

  • Customer-facing decisions (eligibility, pricing, content, recommendations, claims handling).
  • Employee or contractor decisions (screening, performance management).
  • Security and fraud workflows (anomaly detection, risk scoring).
  • Generative AI used for communications, code, knowledge retrieval, or case handling.
  • Third-party AI services where you configure inputs, prompts, thresholds, or routing logic.

Practical scoping rule: If a human can change how the AI behaves, or if the AI influences a material business decision, include it. If you exclude a system, document why. 1

What you actually need to do (step-by-step)

Step 1: Build an AI system inventory with “configuration surfaces”

Create (or extend) your AI inventory to capture:

  • System name, owner, purpose, and business process.
  • Deployment type: in-house model, third-party model/API, embedded feature.
  • Configuration surfaces: prompts, policies, thresholds, feature flags, retrieval sources, model parameters, training data, fine-tunes, guardrails, human review queues.
  • Where overrides occur: manual review steps, appeal processes, kill switch location.

This inventory becomes the backbone for mapping roles to real control points. 1
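The inventory fields above can be sketched as a simple record type. This is an illustrative data-structure sketch only; the field names and example values are assumptions, not NIST AI RMF terminology.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI system inventory (hypothetical schema)."""
    name: str
    owner: str
    purpose: str
    deployment_type: str  # e.g. "in-house", "third-party", or "embedded"
    configuration_surfaces: list[str] = field(default_factory=list)
    override_points: list[str] = field(default_factory=list)

# Example entry for a hypothetical in-house fraud model.
fraud_model = AISystemRecord(
    name="fraud-scoring-v2",
    owner="Payments Risk",
    purpose="Transaction fraud risk scoring",
    deployment_type="in-house",
    configuration_surfaces=["score threshold", "feature flags", "retraining trigger"],
    override_points=["manual review queue", "kill switch in ops console"],
)

# Per the scoping guidance, every in-scope system should expose at least
# one documented override point.
assert fraud_model.override_points
```

Storing the inventory as structured records (rather than a wiki page) makes the later steps, such as mapping RACI rows and permissions to configuration surfaces, checkable rather than aspirational.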

Step 2: Define a minimum role set and separation-of-duties pattern

Document role definitions that work across teams. A workable minimum set:

Role (example title) — core responsibility — must be different from:

  • Business Owner: defines intended use and signs off on risk acceptance. Must be different from: Independent Oversight.
  • AI Product/Model Owner: owns configuration and performance. Must be different from: Independent Validation.
  • Data Owner/Steward: approves data sources and use constraints. Must be different from: Model Owner (preferred).
  • Independent Risk/Oversight (GRC/Model Risk/Compliance): reviews controls, exceptions, and monitoring. Must be different from: any configuration role.
  • Security Owner: access control, logging, incident response. Must be different from: Business Owner.
  • Human-in-the-loop Operator: reviews/overrides outputs in the workflow. Must be different from: Model Owner (preferred).

You do not need these exact titles. You do need the separation: the people changing the system should not be the only people approving and supervising it. 1
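The separation itself can be checked mechanically once role assignments are recorded. A minimal sketch, using the example role titles from the table above; the conflict pairs and the named assignees are assumptions for illustration.

```python
def sod_violations(assignments: dict[str, str]) -> list[str]:
    """Flag role pairs that must be held by different people but are not."""
    conflicting_pairs = [
        ("AI Product/Model Owner", "Independent Risk/Oversight"),
        ("Business Owner", "Independent Risk/Oversight"),
    ]
    return [
        f"{a} and {b} both held by {assignments[a]}"
        for a, b in conflicting_pairs
        if assignments.get(a) and assignments.get(a) == assignments.get(b)
    ]

# Hypothetical assignments for one system: the builder also oversees.
assignments = {
    "Business Owner": "dana",
    "AI Product/Model Owner": "sam",
    "Independent Risk/Oversight": "sam",
}
print(sod_violations(assignments))
# → ['AI Product/Model Owner and Independent Risk/Oversight both held by sam']
```

Running a check like this against every system in the inventory turns the separation-of-duties policy statement into a repeatable control rather than a one-time review.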

Step 3: Write the policy (governance intent) and the procedures (how work happens)

Policy must cover:

  • Role accountability model (RACI-style statement).
  • Required approvals for configuration changes.
  • Oversight expectations (monitoring, incident management, exception handling).
  • Documentation and evidence retention requirements. 1

Procedures must cover:

  • Configuration change workflow (request, risk review, approval, implementation, rollback).
  • Oversight cadence triggers (what events require review: incidents, drift, major change, new use case).
  • Human override procedures: when to override, how to document rationale, where to escalate.
  • Third-party change handling: how you evaluate vendor model updates and feature changes.

Procedures should be executable by a new team member without tribal knowledge. 1
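The configuration change workflow above can be sketched as an ordered sequence of stages that cannot be skipped; the stage names mirror the procedure bullets and are illustrative assumptions.

```python
# Ordered stages of a configuration change (names are illustrative).
STAGES = ["requested", "risk_reviewed", "approved", "implemented"]

def advance(current: str) -> str:
    """Move a change to the next stage; stages cannot be skipped."""
    i = STAGES.index(current)
    if i == len(STAGES) - 1:
        raise ValueError("already implemented; use the rollback procedure instead")
    return STAGES[i + 1]

# A change must pass risk review before it can be approved.
assert advance("requested") == "risk_reviewed"
assert advance("risk_reviewed") == "approved"
```

Encoding the stages this way (in a ticketing workflow or a small service) is one way to guarantee that an "approved" record always implies a prior risk review, which is exactly the evidence chain an auditor will sample.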

Step 4: Create a RACI matrix per AI system (or per AI class)

Create a template RACI, then instantiate it for each system:

  • Responsible: performs configuration, monitoring, operations.
  • Accountable: final approver for purpose and risk acceptance.
  • Consulted: security, privacy, legal, data governance, customer ops.
  • Informed: senior leadership, audit, affected business units.

Tie each RACI row to a specific action: “Approve prompt template changes,” “Approve model version upgrades,” “Review incident postmortems,” “Authorize emergency shutdown.” 1
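One way to keep a per-system RACI audit-ready is to store it as data and validate the classic invariants: exactly one Accountable and at least one Responsible per action. A sketch, with illustrative actions and role names.

```python
# Hypothetical per-system RACI, keyed by the specific action each row governs.
raci = {
    "Approve prompt template changes": {
        "R": ["Model Owner"], "A": ["Business Owner"],
        "C": ["Security", "Legal"], "I": ["Internal Audit"],
    },
    "Authorize emergency shutdown": {
        "R": ["On-call Operator"], "A": ["Business Owner"],
        "C": ["Independent Oversight"], "I": ["Senior Leadership"],
    },
}

def validate_raci(matrix: dict) -> list[str]:
    """Check each action has exactly one Accountable and >= one Responsible."""
    errors = []
    for action, roles in matrix.items():
        if len(roles.get("A", [])) != 1:
            errors.append(f"{action}: needs exactly one Accountable")
        if not roles.get("R"):
            errors.append(f"{action}: needs at least one Responsible")
    return errors

assert validate_raci(raci) == []
```

Validating on every edit catches the most common RACI defect: two teams listed as Accountable for the same gate, which in practice means no one is.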

Step 5: Implement access controls and logging that match the role model

Your written role design must match real permissions:

  • Restrict who can edit prompts, thresholds, policy rules, retrieval sources, and model versions.
  • Require approvals (ticketing or change management) for production changes.
  • Log key oversight activities: reviews, overrides, exception approvals, shutdown events.

If the “oversight role” can also silently change production configs, your separation-of-duties story will fail under scrutiny. 1
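Drift between the written role model and real permissions can be detected with a simple review script. A hedged sketch, assuming a `resource:action` permission naming scheme; the role and permission names are illustrative.

```python
# Hypothetical permission export: role name -> set of granted permissions.
permissions = {
    "model_owner": {"prompts:write", "thresholds:write", "logs:read"},
    "oversight": {"logs:read", "changes:read", "prompts:write"},  # misconfigured
}

def oversight_write_access(perms: dict[str, set[str]]) -> set[str]:
    """Return any write permissions held by the oversight role (should be empty)."""
    return {p for p in perms.get("oversight", set()) if p.endswith(":write")}

print(oversight_write_access(permissions))
# A non-empty result signals a separation-of-duties gap to remediate.
```

Running this against periodic access-control exports, and keeping the results, doubles as the permission-review evidence listed later in this requirement.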

Step 6: Operationalize oversight

Oversight is not a meeting. It is a control loop:

  • Monitoring outputs (quality, safety, bias signals where applicable, incident trends).
  • Reviewing configuration changes and their testing/validation records.
  • Tracking exceptions and compensating controls.
  • Escalating high-risk behavior and documenting interventions.

Keep minutes and decision records tied to the system inventory item. 1

Step 7: Map the requirement to an owner and recurring evidence collection

Assign a control owner (often GRC, model risk, or an AI governance lead) who:

  • Maintains the policy and procedure versions.
  • Runs a recurring evidence request from system owners.
  • Reports gaps and follows remediation through closure. 1

This is where Daydream fits naturally: many teams fail GOVERN-3.2 because evidence is scattered across wikis, tickets, and chat. A systemized control mapping and recurring evidence workflow reduces scramble and makes oversight repeatable.

Required evidence and artifacts to retain

Keep artifacts tied to each AI system in scope:

Core governance artifacts

  • AI governance policy and procedure documents, versioned and approved. 1
  • Role definitions and a RACI matrix per system or system class. 1
  • Named control owner and escalation contacts.

Operational records

  • Configuration change tickets (request, risk review, approval, implementation, rollback plan).
  • Access control lists/role-based access control mapping for AI configuration surfaces.
  • Logs of configuration changes (who/what/when).
  • Oversight meeting notes, review checklists, and decisions.
  • Incident records and postmortems related to AI behavior.
  • Human override records (samples, rationale, reviewer identity) where overrides are part of the control design.

Third-party AI evidence (where relevant)

  • Third-party due diligence records for the AI provider.
  • Vendor change notifications and your internal review/approval notes for those changes.

Common exam/audit questions and hangups

Expect questions like:

  • “Show me, for System X, who can change prompts/thresholds/model versions and who approves those changes.” 1
  • “Where is human oversight defined in the workflow? Show evidence of overrides and escalations.” 1
  • “How do you prevent the same person/team from both changing and approving material AI configurations?” 1
  • “How do you govern third-party model updates?” 1
  • “Provide evidence the policy is implemented for a sample of AI systems.” 1

Hangups typically occur when the organization has a central AI policy but no per-system instantiation, or when “oversight” is informal and leaves no records.

Frequent implementation mistakes (and how to avoid them)

  1. RACI without control points. A generic RACI (“IT is Responsible”) fails. Tie roles to specific configuration surfaces and decision gates.
  2. No separation of duties. If builders also self-approve, add an independent reviewer (GRC, model risk, internal audit, or a peer review board).
  3. Procedures that don’t match tooling. If your procedure says “approvals required,” but production changes happen in a console with no ticket, fix permissions or integrate change management.
  4. Ignoring “soft configurations.” Prompts, retrieval sources, and policy rules change model behavior. Treat them as configurations with governance.
  5. Third-party blind spot. You may not control the base model, but you control deployment settings and use-case constraints. Document your responsibilities and the vendor’s responsibilities.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Treat GOVERN-3.2 as a defensibility control: weak role clarity increases the chance of unmanaged changes, unclear accountability in incidents, and inconsistent human oversight. Those failures can translate into customer harm, operational loss, or inability to pass internal audit.

A practical 30/60/90-day execution plan

First 30 days: establish the governance spine

  • Stand up an AI system inventory and identify configuration surfaces per system. 1
  • Draft role definitions and a standard RACI template aligned to configuration and oversight. 1
  • Assign a control owner for GOVERN-3.2 and define evidence expectations. 1

By 60 days: implement procedures and permissions

  • Publish and approve the AI governance policy and the configuration/oversight procedures. 1
  • Apply RACI to highest-impact systems first and socialize accountability with named owners.
  • Align access controls: restrict configuration rights, require approvals, and turn on logging for configuration changes. 1

By 90 days: prove operation and close gaps

  • Run an oversight cycle for a sample of systems: monitoring review, change review, and incident tabletop where relevant.
  • Collect evidence: tickets, approvals, meeting notes, override logs, and permission reviews. 1
  • Remediate gaps found in sampling (missing approvals, unclear escalation, misaligned permissions) and update procedures.

Frequently Asked Questions

Do we need a separate policy for each AI system?

No. Use one enterprise policy plus system-level RACIs and procedures that instantiate the policy for each AI system in scope. Auditors will ask for system-specific proof. 1

What counts as a “human-AI configuration” in a generative AI tool?

Prompts, system instructions, retrieval sources, routing rules, content filters, thresholds, and integration settings all change model behavior. Treat those as configurable elements with approvals and logging. 1

We use a third-party AI API. How can we define roles if we don’t control the model?

Define roles around what you do control: provider selection and due diligence, allowed use cases, input/output handling, prompt and guardrail changes, monitoring, and incident response. Document the vendor’s responsibilities separately. 1

Who should own oversight: Compliance, Model Risk, or the business?

Assign accountability to the business owner for intended use and risk acceptance, and assign independent oversight to a control function that can challenge changes and require remediation. The exact department varies by your org structure. 1

What evidence is most persuasive in an audit?

A per-system RACI tied to real change tickets, approval records, and access control settings. Add monitoring review records and documented overrides to show ongoing oversight. 1

How do we handle emergency changes when the AI causes harm?

Define an emergency override and shutdown procedure with named authority, required notifications, and post-event review. Keep records of the decision, action taken, and corrective steps. 1

Footnotes

  1. NIST AI RMF Core


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream