MANAGE-1.3: Responses to the AI risks deemed high priority, as identified by the map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.

To meet MANAGE-1.3, you must take the AI risks you already ranked as “high priority” in your MAP work and produce documented risk responses for each one, with a clear plan, owner, timeline, and decision record. Acceptable response types include mitigate, transfer, avoid, or accept, but each choice must be justified and traceable. 1

Key takeaways:

  • High-priority AI risks must have an explicit, documented response decision and plan, not just a risk register entry. 1
  • Your documentation must connect MAP outputs (risk prioritization) to MANAGE outputs (response actions, approvals, and ongoing tracking). 1
  • “Accept” is allowed, but only with defined rationale, sign-off, and monitoring conditions tied to your AI governance process. 1

MANAGE-1.3 is the point where AI risk management stops being analysis and becomes operational execution. Your MAP function should already have produced a set of AI risks, prioritized by likelihood, impact, and context. MANAGE-1.3 requires you to respond to the ones you deemed high priority by selecting a response strategy (mitigate, transfer, avoid, accept), building an implementable plan, and documenting both the decision and the work. 1

For a Compliance Officer, CCO, or GRC lead, the fastest way to operationalize this requirement is to treat each high-priority AI risk like a mini “treatment plan” with: (1) a decision, (2) an accountable owner, (3) defined actions, (4) measurable success criteria, and (5) governance sign-off. This page gives you the requirement-level checklist, evidence expectations, and common audit hangups so you can stand up a defensible program without overbuilding. The goal is simple: if someone asks, “What did you decide to do about this top AI risk, and did you do it?”, you can answer with documents, not narratives. 1

Regulatory text

Excerpt (framework requirement): “Responses to the AI risks deemed high priority, as identified by the map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.” 1

What the operator must do:
You must maintain a repeatable process that takes high-priority AI risks from your MAP outputs and produces documented risk responses. For each high-priority risk, you need (a) a chosen response type, (b) a plan to carry it out (or a plan to monitor if accepted), and (c) records proving the decision and follow-through. 1

Plain-English interpretation

  • If your MAP work says a risk is high priority, you cannot leave it as “known.” You must decide what to do and document that decision. 1
  • “Developed, planned, and documented” means a response is more than an idea. It has an owner, tasks, dependencies, and a way to track progress. 1
  • The four response options mirror enterprise risk practice:
    • Mitigate: reduce likelihood and/or impact through controls or design changes.
    • Transfer: shift some risk to a third party (contractual terms, insurance, outsourced controls).
    • Avoid: stop the activity, restrict the use case, or prevent deployment in certain contexts.
    • Accept: formally approve residual risk within defined boundaries and monitoring triggers. 1

Who it applies to

Entities: Any organization developing, procuring, integrating, or deploying AI systems, including internal models and third-party AI services, where AI risks are assessed and prioritized. 2

Operational contexts where MANAGE-1.3 shows up quickly:

  • Deploying AI into customer-facing workflows (eligibility, pricing, support, marketing content generation).
  • Using AI to support regulated decisions (employment screening, financial decisions, healthcare triage).
  • Relying on third-party AI (SaaS features, embedded models, API-based model providers).
  • Automating security or fraud decisions where false positives/negatives have material impact.

Typical accountable functions: GRC/Compliance, AI governance lead, model risk management (if present), product owners, engineering, security, privacy, procurement/TPRM, and legal for contract/transfer decisions.

What you actually need to do (step-by-step)

Step 1: Confirm the input list from MAP (your “high-priority” population)

Create a bounded list of the AI risks deemed high priority and lock the scope for the cycle (quarterly, per release, or per major model change—choose what fits your governance cadence). Your output should identify:

  • Risk statement (cause → event → impact)
  • Affected AI system/use case
  • Inherent risk rating and why it’s high priority
  • Existing controls and known gaps
  • Risk owner business function (not just “AI team”) 1
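
The Step 1 fields above could be captured as a structured record. Below is a minimal sketch; the class and field names are illustrative, not mandated by the framework, so align them to your own register schema:

```python
from dataclasses import dataclass, field

@dataclass
class HighPriorityRisk:
    """One MAP output item in scope for MANAGE-1.3 response planning.

    Field names are illustrative; align them to your own register schema.
    """
    risk_id: str
    statement: str              # cause -> event -> impact
    ai_system: str              # affected AI system or use case
    inherent_rating: str        # e.g. "High"
    priority_rationale: str     # why it is high priority
    owner_function: str         # business function, not just "AI team"
    existing_controls: list = field(default_factory=list)
    known_gaps: list = field(default_factory=list)

# Hypothetical example entry
risk = HighPriorityRisk(
    risk_id="AIR-014",
    statement="Unrepresentative training data -> biased screening outputs -> discriminatory impact",
    ai_system="Candidate screening model v2",
    inherent_rating="High",
    priority_rationale="Regulated decision with protected-class impact",
    owner_function="HR Operations",
    existing_controls=["Pre-deployment model review"],
    known_gaps=["No post-deployment bias monitoring"],
)
```

Forcing every field to be filled at intake is what makes the later response plans auditable against a complete population.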

Operator tip: If MAP outputs are inconsistent across teams, standardize the risk statement format before you write response plans. Otherwise you will "treat" vague risks and fail under audit scrutiny.

Step 2: Choose a response option for each risk (and document the rationale)

For every high-priority risk, record a decision: mitigate / transfer / avoid / accept. 1

Use a short decision matrix so choices are consistent:

| Option | When it fits | Documentation you need |
| --- | --- | --- |
| Mitigate | You can reduce risk through controls, design changes, testing, monitoring | Control plan, milestones, acceptance criteria, residual risk estimate |
| Transfer | A third party can take on operational responsibility or financial exposure | Contract clauses, SOC/assurance artifacts, SLAs, insurance notes, shared responsibility map |
| Avoid | The use case is too risky or not worth the residual exposure | Decision memo, de-scoping plan, technical guardrails preventing use |
| Accept | Residual risk is within tolerance and you can monitor it | Acceptance memo, approvals, monitoring plan, trigger conditions for revisit |
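
One way to keep decisions consistent is to encode the matrix so every decision record carries the documentation checklist for its response type. This is a sketch with illustrative artifact names; substitute your program's own:

```python
# Required documentation per response option (illustrative artifact names).
REQUIRED_DOCS = {
    "mitigate": ["control_plan", "milestones", "acceptance_criteria", "residual_risk_estimate"],
    "transfer": ["contract_clauses", "assurance_artifacts", "slas", "shared_responsibility_map"],
    "avoid":    ["decision_memo", "descoping_plan", "technical_guardrails"],
    "accept":   ["acceptance_memo", "approvals", "monitoring_plan", "revisit_triggers"],
}

def missing_docs(option: str, attached: set) -> list:
    """Return checklist items not yet attached to a decision record."""
    if option not in REQUIRED_DOCS:
        raise ValueError(f"Unknown response option: {option}")
    return [doc for doc in REQUIRED_DOCS[option] if doc not in attached]
```

For example, an "accept" decision with only a memo and approvals attached would still be flagged as missing its monitoring plan and revisit triggers, which is exactly the gap auditors probe.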

Step 3: Build the response plan as a trackable work item

For mitigation/transfer/avoid, your plan should read like an implementation ticket, not a policy excerpt. Include:

  • Owner (single accountable person) and supporting teams
  • Actions (engineering tasks, process changes, training, contract actions)
  • Dependencies (data access, vendor attestations, architectural changes)
  • Success criteria tied to measurable checks (test pass/fail criteria, monitoring thresholds, review gates)
  • Residual risk after response (qualitative is fine if you lack quantitative methods)
  • Milestones and reporting cadence (align to your existing GRC rhythm) 1
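
A simple completeness check keeps plans from entering the register as "ideas." The field names below are illustrative; the point is that a plan record with missing required fields is flagged before approval:

```python
# Illustrative required fields for a trackable response plan.
REQUIRED_PLAN_FIELDS = (
    "owner", "actions", "dependencies",
    "success_criteria", "residual_risk", "milestones",
)

def plan_gaps(plan: dict) -> list:
    """Return required fields not yet present on a response plan record."""
    return [f for f in REQUIRED_PLAN_FIELDS if f not in plan]

# Hypothetical draft plan, still incomplete
draft = {
    "owner": "Product risk lead",
    "actions": ["Add pre-deployment bias testing", "Enable drift monitoring"],
    "dependencies": ["Vendor attestation for training data"],
    "success_criteria": ["Bias tests pass release-gate thresholds"],
}
```

Here `plan_gaps(draft)` would report that the residual risk estimate and milestones are still missing, so the plan is not yet ready for sign-off.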

Example (mitigate): High-priority risk = model produces discriminatory outcomes for a protected class in a screening workflow. Response plan = add pre-deployment bias testing, add post-deployment drift/bias monitoring, restrict features, implement human review for borderline cases, and document model change approvals.

Step 4: Add governance checkpoints and approvals

Document who can approve:

  • Risk response selection
  • Risk acceptance (especially)
  • Exceptions (e.g., deployment before all mitigation tasks complete)

Then attach meeting minutes or approval records (ticket approvals, GRC workflow approvals, signed memos). The point is traceability from MAP → decision → plan → evidence. 1

Step 5: Operationalize tracking and recurring evidence collection

Set up a mechanism to prove the plan is executed:

  • Central risk register with status fields (Open/In progress/Implemented/Accepted/Retired)
  • Linked control evidence (test results, monitoring reports, change records)
  • Exception log (what slipped, why, who approved, compensating controls)
  • Periodic management reporting (board/committee or operating risk committee, as applicable)
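
The tracking mechanism above can be as simple as a register with a status field and evidence links per risk. A minimal sketch (status values taken from the list above; record structure is illustrative):

```python
from collections import Counter

STATUSES = {"Open", "In progress", "Implemented", "Accepted", "Retired"}

# Hypothetical register entries
register = [
    {"risk_id": "AIR-001", "status": "Implemented", "evidence_links": ["tests/q2-bias-report.pdf"]},
    {"risk_id": "AIR-002", "status": "Accepted",    "evidence_links": ["memo-accept-002.pdf"]},
    {"risk_id": "AIR-003", "status": "Open",        "evidence_links": []},
]

def status_report(register: list) -> tuple:
    """Counts per status, plus items with no linked evidence (an audit red flag)."""
    counts = Counter(r["status"] for r in register)
    unevidenced = [r["risk_id"] for r in register if not r["evidence_links"]]
    return counts, unevidenced
```

Running this on the sample register surfaces AIR-003 as having no linked evidence, which is the kind of gap to close before a review, not during one.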

If you use Daydream, treat MANAGE-1.3 as a recurring control with a mapped owner, workflow, and evidence requests tied to each high-priority risk item. This reduces “document scramble” during audits by collecting artifacts as work happens.

Required evidence and artifacts to retain

Keep evidence tied to each high-priority risk. Auditors typically fail programs for missing linkage, not missing intent.

Minimum artifact set 3:

  • MAP output showing the risk is high priority (risk assessment record)
  • Risk response decision record (mitigate/transfer/avoid/accept) with rationale 1
  • Response plan with owner, actions, milestones, and success criteria 1
  • Approval evidence (committee minutes, sign-off, ticket approvals)
  • Implementation evidence, as applicable:
    • Test plans/results (fairness, robustness, security, privacy)
    • Monitoring configuration and reports
    • Change management records (model versioning, rollback plan)
    • Third-party contracts/DPAs/SLAs and assurance artifacts for “transfer”
  • Residual risk statement and conditions to revisit (especially for “accept”)

Program-level artifacts (cross-cutting):

  • AI risk response procedure (how you do MANAGE-1.3)
  • RACI for AI risk ownership and approvals
  • Evidence retention schedule aligned to your audit needs

Common exam/audit questions and hangups

Expect reviewers to probe four themes:

  1. Completeness: “Show me all high-priority AI risks from MAP and the response status for each.” If you cannot produce a complete population, you look ad hoc. 1
  2. Traceability: “Where is the link from the MAP risk record to the mitigation plan and the evidence?” Broken links are a common failure mode. 1
  3. Decision quality: “Why did you accept this risk? What alternatives did you consider?” Acceptance without rationale reads like avoidance of work. 1
  4. Operational reality: “Did the planned control actually get implemented, or is it just in a slide deck?” You need execution evidence.
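
The completeness and traceability themes can be pre-checked mechanically before a review. A sketch, assuming a MAP population list and a dictionary of response records (structures are illustrative):

```python
def audit_gaps(map_high_priority_ids: list, responses: dict) -> tuple:
    """Cross-check the MAP population against MANAGE response records.

    Returns (risks with no response record, responses with no linked evidence),
    mirroring the "completeness" and "traceability" audit questions.
    """
    no_response = sorted(set(map_high_priority_ids) - set(responses))
    no_evidence = sorted(
        rid for rid, rec in responses.items() if not rec.get("evidence_links")
    )
    return no_response, no_evidence

# Hypothetical data: three high-priority risks, two response records
high_priority = ["AIR-001", "AIR-002", "AIR-003"]
responses = {
    "AIR-001": {"decision": "mitigate", "evidence_links": ["bias-test-report.pdf"]},
    "AIR-002": {"decision": "accept",   "evidence_links": []},
}
```

On this sample, the check flags AIR-003 (no response decision at all) and AIR-002 (a decision with no evidence attached), which are precisely the two failure modes reviewers look for first.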

Frequent implementation mistakes and how to avoid them

  1. Mistake: “Risk response” equals “we have controls.”
    Fix: For each high-priority risk, write a specific response plan with deliverables and evidence pointers. Controls are inputs; the response plan is the operator output. 1

  2. Mistake: Accepting risks without boundaries.
    Fix: Risk acceptance must define scope (system/version/use case), duration or review trigger, and monitoring expectations. Record approvals. 1

  3. Mistake: Treating transfer as a contract signature.
    Fix: Transfer needs a shared responsibility map and ongoing assurance. If a third party provides the model, you still own deployment context risks. Keep assurance artifacts and review cadence.

  4. Mistake: No “avoid” pathway, so everything becomes mitigate or accept.
    Fix: Define “stop conditions” for unacceptable use cases (e.g., prohibited data sources, restricted geographies, high-impact domains without required controls). Document de-scoping and technical guardrails.

  5. Mistake: Plans that are impossible to verify.
    Fix: Write success criteria that an auditor can check, like test outputs, monitoring alerts, or change approvals.

Enforcement context and risk implications

No public enforcement cases are provided in the source catalog for this requirement. From a risk perspective, weak MANAGE-1.3 execution increases the chance that known AI risks become production incidents (harm to individuals, privacy/security failures, operational disruption) without governance records showing responsible decision-making. That lack of documentation is often what turns a technical issue into a compliance issue during exams and investigations. 1

A practical 30/60/90-day execution plan

First 30 days (stand up the mechanism)

  • Confirm the “high-priority” risk population from MAP and assign a business owner to each risk. 1
  • Publish a one-page AI risk response procedure: decision options, approval roles, and required artifacts. 1
  • Create a standard template: Risk Response Record + Plan (one per high-priority risk).
  • Start a single tracking view (GRC tool, ticketing system, or spreadsheet) with links to evidence.

Next 60 days (produce response plans and approvals)

  • For each high-priority risk, run a working session to pick the response option and draft the plan. 1
  • Route formal approvals for acceptance and avoidance decisions, plus any exception paths.
  • For transfer responses, gather third-party artifacts and document shared responsibility (procurement + security + legal).

Next 90 days (execute, prove, and stabilize)

  • Close the loop on at least one full cycle: risk → response plan → implemented controls → residual risk update. 1
  • Produce a management report showing status, overdue items, accepted risks, and upcoming triggers.
  • Tune the process: add required fields that were missing, standardize evidence naming, and set recurring review checkpoints tied to model releases or major changes.

Frequently Asked Questions

What qualifies as “high priority” for MANAGE-1.3?

Use the prioritization output from your MAP function and treat that list as the population in scope for response planning. If your MAP method is inconsistent, standardize criteria and document how items enter or exit the high-priority tier. 1

Can we accept a high-priority AI risk?

Yes, acceptance is explicitly listed as an option, but it must be documented with rationale, approvals, and monitoring or trigger conditions for revisit. Acceptance without governance records is hard to defend in an exam. 1

What does “transfer” mean for AI risks if we use a third-party model?

Transfer usually means contractual and operational allocation of responsibilities, not full offloading of accountability. Keep the contract terms, assurance evidence, and a shared responsibility map that clarifies what you still must control in deployment. 1

How detailed does the response plan need to be?

Detailed enough that a separate operator can execute it and a reviewer can verify completion through evidence. If the plan cannot be tied to test results, monitoring reports, change records, or approvals, it is too vague. 1

Do we need a separate plan for every model and use case?

If the same high-priority risk applies across multiple deployments, you can use a single plan with clearly scoped applicability and system-specific evidence links. Avoid “one plan covers all AI” unless your systems, data, and controls are truly uniform. 1

How should we document “avoid” decisions?

Record what is being avoided (use case, data source, jurisdiction, user group), why it is unacceptable, and what technical/process guardrails prevent the activity from reappearing. Keep approval records and evidence of de-scoping or access restrictions. 1

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF program page

  3. high-priority risk

