MANAGE-2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.

MANAGE-2.3 requires you to have and follow a repeatable “unknown risk” response and recovery procedure for AI systems: detect the risk, triage it, contain impact, remediate root cause, restore operations, and document learnings so the risk becomes known and managed going forward (NIST AI RMF Core). Operationalize it by integrating this workflow into incident, change, and model governance processes with clear ownership and evidence capture.

Key takeaways:

  • You need a defined playbook for “unknown unknowns” in AI risk, not ad hoc heroics (NIST AI RMF Core).
  • The control is about execution: show that procedures were followed, decisions were recorded, and recovery actions were validated (NIST AI RMF Core).
  • Your best evidence is a complete case file: detection signal, risk decisioning, mitigation actions, approvals, testing, and post-incident updates to controls and documentation.

Previously unknown AI risks show up in uncomfortable ways: a new failure mode after a model update, an unexpected data drift pattern, a third-party model behavior change, a newly discovered bias or safety issue, or a novel prompt-based misuse path. MANAGE-2.3 asks a simple question: once you discover a risk you did not anticipate, do you have procedures to respond and recover, and do teams actually follow them (NIST AI RMF Core)?

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat “unknown AI risk identified” as a formal event type with a standardized lifecycle. That lifecycle must connect your technical responders (ML engineering, security, product) to governance decision-makers (risk, compliance, legal, business owner). Your goal is audit-ready consistency: the same minimum steps every time, with enough flexibility to scale severity-based response.

This page gives requirement-level implementation guidance you can put into a policy/procedure update, a runbook, and an evidence checklist. It also shows where teams get stuck in exams and internal audits, and how to build a lightweight operating rhythm that turns surprises into controlled change (NIST AI RMF program page).

Regulatory text

Requirement (verbatim): “Procedures are followed to respond to and recover from a previously unknown risk when it is identified.” (NIST AI RMF Core)

What the operator must do: Maintain documented procedures for responding to newly identified AI risks, trigger those procedures when such a risk is identified, and retain evidence that you executed the procedure through recovery and governance closure (NIST AI RMF Core). “Previously unknown” means the risk was not already captured in your risk register, model card, system hazard analysis, or control library, or it materially changes the severity/likelihood assumptions of an already known risk.

Plain-English interpretation (what the requirement is really testing)

MANAGE-2.3 tests whether your organization can handle surprises without losing control. Auditors and internal reviewers will look for:

  • A defined trigger for when “unknown risk” procedures start (detection, report, monitoring, user complaint, third party alert).
  • Structured triage (severity, scope, affected systems, immediate containment).
  • Recovery (restore safe operation, validate fixes, monitor for recurrence).
  • Governance closure (document decisions, assign owners, update controls so the risk becomes “known” and managed) (NIST AI RMF Core).

Who it applies to (entity and operational context)

This applies to organizations developing, deploying, or operating AI systems, including:

  • Model developers (first-party ML teams) shipping new models or retraining existing ones.
  • AI deployers embedding third-party models, APIs, or managed AI services into products and operations.
  • Business owners using AI for high-impact workflows (customer decisions, fraud, eligibility, safety, content moderation).
  • Third-party risk programs when a previously unknown risk originates from an external model provider, data supplier, or systems integrator.

Operationally, MANAGE-2.3 is most relevant where you have:

  • Frequent model changes (retraining, prompt updates, agent/tool changes).
  • External dependencies (third-party foundation model behavior changes).
  • Limited observability (weak monitoring, poor logging, no red-teaming).
  • Real-world user interaction where misuse patterns evolve quickly (NIST AI RMF program page).

What you actually need to do (step-by-step)

Implement MANAGE-2.3 as a procedure + workflow + evidence package. The simplest operating model is a dedicated “AI Risk Event” runbook that integrates with incident management and risk management.

Step 1: Define the trigger and intake channels

Create a single intake mechanism for “new/unknown AI risk” reports:

  • Monitoring alerts (drift, anomalies, safety filters, policy violations)
  • Security findings (prompt injection, data exfiltration paths, model inversion concerns)
  • User complaints / customer support escalations
  • Third-party notifications (provider model change notes, safety advisories)
  • Internal testing/red-team findings

Minimum control requirement: a documented definition of what qualifies as a “previously unknown risk” and when to open a case file (NIST AI RMF Core).
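
That trigger definition can be made mechanical. The sketch below shows one way to encode the "open a case file" decision; the risk-register shape and severity scale are illustrative assumptions, not a NIST-defined schema:

```python
# Sketch: decide whether an incoming signal qualifies as a "previously
# unknown risk" and therefore requires opening a case file.
# Register structure and the 1-4 severity scale are hypothetical examples.

def is_previously_unknown(signal_risk_id, signal_severity, risk_register):
    """True if the risk is absent from the register, or present but with
    materially worse severity than previously assumed."""
    entry = risk_register.get(signal_risk_id)
    if entry is None:
        return True  # not captured anywhere: open a case file
    # Known risk, but new evidence materially changes severity assumptions.
    return signal_severity > entry["assumed_severity"]

# Example register keyed by internal risk ID.
register = {
    "R-017": {"description": "prompt injection via tool output",
              "assumed_severity": 2},
}

print(is_previously_unknown("R-099", 1, register))  # unseen risk -> True
print(is_previously_unknown("R-017", 2, register))  # already known at this severity -> False
print(is_previously_unknown("R-017", 4, register))  # materially worse -> True
```

The point of automating the check is consistency: every intake channel applies the same definition, so "unknown" is a register lookup, not a judgment call made differently by each team.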

Step 2: Open a case file and assign ownership

For every qualifying event, open a tracked record (ticket, GRC issue, or incident record) with:

  • System/model identifier, version, environment
  • Reporter and timestamp
  • Initial description and suspected impact
  • Assigned risk owner (business) and response owner (technical)
  • Stakeholders: compliance, legal, security, privacy, product

Practical tip: If ownership is unclear, your procedure should default to the AI system owner and the central AI governance lead, then reassign after triage.
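
The required fields above map naturally to a structured record. This is a minimal sketch with illustrative field names (not a mandated schema), including the default-ownership rule from the tip:

```python
# Sketch: minimum fields for an "AI Risk Event" case record, mirroring the
# list above. Field names and defaults are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIRiskEventCase:
    system_id: str          # system/model identifier
    model_version: str
    environment: str        # e.g. "production", "staging"
    reporter: str
    description: str
    suspected_impact: str
    # Ownership defaults until triage reassigns, per the practical tip.
    risk_owner: str = "ai-system-owner"
    response_owner: str = "ai-governance-lead"
    stakeholders: tuple = ("compliance", "legal", "security", "privacy", "product")
    opened_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

case = AIRiskEventCase(
    system_id="support-chat",
    model_version="2024-06-rev3",
    environment="production",
    reporter="red-team",
    description="novel jailbreak produces policy-violating output",
    suspected_impact="limited users",
)
print(case.risk_owner)  # defaults to the system owner until triage reassigns
```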

Step 3: Triage severity and decide immediate containment

Define triage questions your responders must answer:

  • What is the harm type (safety, discrimination, privacy, security, financial, operational)?
  • What is the exposure (internal only, limited users, broad production)?
  • Is there a legal/regulatory notification consideration (route to counsel as needed)?
  • What is the fastest safe containment action?
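
The triage questions above can be encoded as a simple severity matrix so responders reach the same rating every time. The harm/exposure categories and scoring thresholds here are illustrative assumptions your organization would tune:

```python
# Sketch: severity matrix combining harm type and exposure.
# Weights and cutoffs are illustrative, not prescribed by the framework.

HARM_WEIGHT = {"safety": 3, "discrimination": 3, "privacy": 3,
               "security": 2, "financial": 2, "operational": 1}
EXPOSURE_WEIGHT = {"internal_only": 1, "limited_users": 2, "broad_production": 3}

def triage_severity(harm_type, exposure):
    """Map (harm type, exposure) to low / medium / high."""
    score = HARM_WEIGHT[harm_type] * EXPOSURE_WEIGHT[exposure]
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(triage_severity("safety", "broad_production"))   # -> high
print(triage_severity("operational", "internal_only")) # -> low
```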

Containment actions can include:

  • Feature flag off, revert model version, disable tool access for an agent
  • Increase human review gates
  • Tighten prompts/system instructions or policy rules
  • Block certain input classes, rate-limit, or add stronger authentication

Evidence to capture: who made the containment decision, when, and why, plus the change record that implemented it (NIST AI RMF Core).
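
A containment action and its change record can be captured in one step, so the evidence exists because the action happened. This sketch pairs a feature-flag kill switch with the who/when/why record described above; the flag store and record format are hypothetical:

```python
# Sketch: containment via feature-flag kill switch that also writes the
# change record (who, when, why). Names are illustrative assumptions.
from datetime import datetime, timezone

feature_flags = {"ai_assistant_enabled": True}
change_log = []

def contain(flag, actor, rationale):
    """Disable the flagged capability and record the decision."""
    feature_flags[flag] = False
    record = {
        "flag": flag,
        "actor": actor,
        "rationale": rationale,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    change_log.append(record)
    return record

contain("ai_assistant_enabled", "on-call-ml-lead",
        "novel prompt-based misuse path; disable pending investigation")
print(feature_flags["ai_assistant_enabled"])  # -> False, with the decision logged
```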

Step 4: Investigate root cause and impacted scope

Your procedure should force a disciplined investigation plan:

  • Reproduce the issue with test cases (store prompts/inputs safely if sensitive)
  • Identify impacted cohorts (which users, geographies, customer segments)
  • Confirm whether the risk stems from data, model behavior, integrations, or third-party changes
  • Assess whether the issue is a one-off or systemic

If third parties are involved, include a third-party outreach step: request provider incident details, model change history, mitigations, and timelines.

Step 5: Remediate and recover operations

Recovery means restoring a defined acceptable state and verifying it:

  • Implement corrective controls (guardrails, training data fixes, filtering, access controls)
  • Run validation tests (safety tests, bias checks, regression tests, security tests)
  • Restore service gradually with monitoring thresholds and rollback criteria
  • Communicate internally (and externally if required by your communications policy)

Define “done” for recovery in your runbook: approved by system owner + risk/compliance sign-off for medium/high severity events.
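
Gradual restoration with rollback criteria can be expressed as a small state machine. In this sketch the threshold values, metric names, and ramp schedule are assumptions for illustration, not recommended targets:

```python
# Sketch: gradual service restore with monitoring thresholds and rollback
# criteria. Metric names and limits are illustrative assumptions.

ROLLBACK_CRITERIA = {"violation_rate_max": 0.01, "error_rate_max": 0.05}

def restore_step(current_traffic_pct, metrics):
    """Advance the rollout one step, or roll back if thresholds are breached."""
    if (metrics["violation_rate"] > ROLLBACK_CRITERIA["violation_rate_max"]
            or metrics["error_rate"] > ROLLBACK_CRITERIA["error_rate_max"]):
        return 0, "rollback"  # revert to the contained state
    # Start at 5% of traffic, then double each healthy step, capped at 100%.
    next_pct = min(100, current_traffic_pct * 2 if current_traffic_pct else 5)
    return next_pct, "advance"

print(restore_step(0, {"violation_rate": 0.0, "error_rate": 0.01}))   # (5, 'advance')
print(restore_step(25, {"violation_rate": 0.02, "error_rate": 0.01})) # (0, 'rollback')
```

Writing the rollback criteria down before restarting traffic is what turns "recovery" into a verifiable decision rather than a hopeful one.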

Step 6: Convert the unknown risk into “known risk” governance

MANAGE-2.3 is incomplete if you fix the issue but fail to update governance artifacts. Close the loop by:

  • Adding the risk to the risk register with updated likelihood/severity assumptions
  • Updating model/system documentation (model card, system design docs, usage constraints)
  • Creating or updating controls and monitoring rules to detect recurrence
  • Recording lessons learned and assigning follow-up actions with due dates (NIST AI RMF Core)
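
The closure checklist above can act as a hard gate: the event cannot be closed while any governance artifact is missing. A minimal sketch, with illustrative checklist keys:

```python
# Sketch: governance-closure gate. An event closes only when every
# artifact in the checklist is updated. Keys are illustrative assumptions.

CLOSURE_CHECKLIST = (
    "risk_register_updated",
    "model_documentation_updated",
    "monitoring_rules_updated",
    "lessons_learned_recorded",
)

def can_close(case):
    """Return (ok, missing_items) for the governance-closure gate."""
    missing = [item for item in CLOSURE_CHECKLIST if not case.get(item)]
    return (len(missing) == 0, missing)

case = {"risk_register_updated": True, "model_documentation_updated": True,
        "monitoring_rules_updated": False, "lessons_learned_recorded": True}
print(can_close(case))  # (False, ['monitoring_rules_updated'])
```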

Step 7: Prove the procedure was followed (recurring evidence)

Map MANAGE-2.3 to a control owner and schedule recurring evidence collection, such as quarterly sampling of AI risk events and verification that each file contains required artifacts (NIST AI RMF Core). Tools like Daydream can help you map MANAGE-2.3 to the specific policy/procedure, the accountable owner, and an evidence request list that stays consistent across teams.
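
The quarterly sampling step can be scripted so the check is reproducible for auditors. In this sketch, the required-artifact names and case IDs are illustrative assumptions:

```python
# Sketch: recurring evidence verification -- sample recent cases and report
# which required artifacts each file is missing. Names are illustrative.
import random

REQUIRED_ARTIFACTS = {"ticket", "triage_notes", "containment_change_record",
                      "investigation_log", "validation_results",
                      "post_incident_review"}

def sample_and_verify(cases, sample_size, seed=0):
    """Return {case_id: sorted missing artifacts} for a random sample."""
    rng = random.Random(seed)  # seeded so the audit sample is reproducible
    sample = rng.sample(list(cases), min(sample_size, len(cases)))
    return {cid: sorted(REQUIRED_ARTIFACTS - cases[cid]) for cid in sample}

cases = {
    "AIR-101": REQUIRED_ARTIFACTS,            # complete case file
    "AIR-102": {"ticket", "triage_notes"},    # incomplete case file
}
report = sample_and_verify(cases, sample_size=2)
print(report["AIR-102"])  # lists the missing artifacts for the incomplete file
```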

Required evidence and artifacts to retain

Keep an “AI Unknown Risk Event” evidence packet per case. A strong packet includes:

| Artifact | What it proves | Owner |
| --- | --- | --- |
| Risk event ticket/case record | Formal initiation + traceability | GRC / AI governance |
| Triage notes + severity rationale | Consistent decisioning | Response lead |
| Containment change record | Action taken quickly and controlled | Engineering / IT |
| Investigation log + test results | Root cause effort and scope | ML/Sec/QA |
| Remediation plan + approvals | Governance oversight | System owner + Compliance |
| Validation/regression results | Recovery verified, not assumed | QA / ML |
| Communications log | Stakeholder management | Product/Comms |
| Post-incident review | Learning captured | AI governance |
| Updates to risk register + docs | Risk becomes managed | GRC / System owner |

Common exam/audit questions and hangups

Auditors tend to focus on execution and consistency:

  • “Show me the procedure and then show me three examples where you followed it end-to-end.” (NIST AI RMF Core)
  • “How do you decide severity, and who can approve a rollback or shutdown?”
  • “How do you ensure learnings feed back into monitoring and documentation?”
  • “What happens if the risk involves a third-party model provider?”
  • “How do you prevent ‘shadow fixes’ that bypass change control?”

Hangup to anticipate: teams often have a security incident process but lack an equivalent for non-security AI harms (fairness, safety, integrity). Your procedure must cover the full risk set, not only cyber.

Frequent implementation mistakes (and how to avoid them)

  1. No clear trigger definition.
    Fix: define “previously unknown risk” and require a case file when the definition is met (NIST AI RMF Core).

  2. Treating it as purely technical.
    Fix: require business risk ownership and compliance/legal routing steps for material harms.

  3. Containment without change control.
    Fix: require an emergency change path with post-change review and documented approvals.

  4. Recovery declared without validation.
    Fix: require regression testing and monitoring criteria before closing the event.

  5. No closure into governance artifacts.
    Fix: make risk register/model documentation updates part of the closure checklist (NIST AI RMF Core).

Enforcement context and risk implications

No public enforcement cases are provided in the source catalog for this requirement. Practically, MANAGE-2.3 reduces exposure to operational disruptions, customer harm, contractual breaches, and regulator criticism after an AI incident because it forces disciplined response, documented decisioning, and preventive control updates (NIST AI RMF program page).

Practical 30/60/90-day execution plan

Use a phased rollout tied to existing incident and change processes.

First 30 days (stand up the minimum viable procedure)

  • Publish an “Unknown AI Risk Response & Recovery” procedure aligned to MANAGE-2.3 (NIST AI RMF Core).
  • Define triggers, severity categories, and mandatory stakeholders.
  • Create the case file template (ticket fields + required attachments checklist).
  • Assign control owner and backups (primary in AI governance/GRC; technical co-owner in engineering).

Days 31–60 (integrate into operations)

  • Wire the workflow into incident management and change management tooling (open case, link changes, attach test evidence).
  • Train responders and approvers (engineering, security, product, compliance).
  • Run a tabletop exercise using a realistic scenario: a third-party model behavior change causing harmful outputs.
  • Start recurring evidence collection: sample recent events and check for completeness (NIST AI RMF Core).

Days 61–90 (prove repeatability and close gaps)

  • Perform a second exercise focused on a non-cyber risk (bias or safety failure).
  • Add monitoring and detection improvements based on first two runs.
  • Implement a lightweight metrics dashboard (counts by severity, average time to containment, recurring root causes) without publishing numeric targets unless your organization sets them internally.
  • Prepare an audit-ready binder: procedure, training records, two completed case files, and governance updates linked to each event (NIST AI RMF Core).

Frequently Asked Questions

What qualifies as a “previously unknown risk” for MANAGE-2.3?

A risk is “previously unknown” when it was not captured in your documented risk inventory or when new evidence materially changes the expected impact or likelihood. Your procedure should define clear triggers so teams open a case consistently (NIST AI RMF Core).

Does this have to be a security incident process?

No. You can route through security incident management when appropriate, but MANAGE-2.3 also covers safety, fairness, privacy, and integrity failures that are not classic security incidents (NIST AI RMF Core).

What’s the minimum evidence an auditor will accept?

A complete case record that shows detection, triage, containment, remediation, validation, and governance closure. If you cannot show that the documented steps happened, you will struggle to demonstrate “procedures are followed” (NIST AI RMF Core).

How do we handle unknown risks caused by a third-party model provider?

Open your internal risk event case, apply your containment steps in your environment, and initiate third-party escalation with documented requests and responses. Treat the provider’s actions as an input, not a substitute for your own response and recovery procedure.

Can we close the event once we deploy a fix?

Close only after you verify the fix works and you update governance artifacts (risk register, monitoring rules, documentation) so the risk is now “known” and managed going forward (NIST AI RMF Core).

Where does Daydream fit without adding process overhead?

Daydream is useful as the control mapping and evidence backbone: tie MANAGE-2.3 to the procedure, the accountable owner, and a recurring evidence checklist so audits rely on consistent case files rather than ad hoc document hunts (NIST AI RMF Core).

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream