MANAGE-1.2: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.

To meet the MANAGE-1.2 requirement, you must run a consistent triage process that ranks documented AI risks by impact and likelihood, then assigns treatment actions based on the methods and resources you can realistically apply. The outcome should be a defensible, repeatable prioritization with clear owners, due dates, and evidence.

Key takeaways:

  • Maintain an AI risk register and score each risk using defined impact and likelihood criteria.
  • Tie prioritization to treatment capacity: available controls, technical mitigations, and operational resources.
  • Retain evidence that shows ranking decisions, approvals, and follow-through on treatment plans.

MANAGE-1.2 is a practical execution requirement: once AI risks are documented, you must prove you can decide what gets fixed first and why, using a rational method that blends risk severity with what you can actually do about it. In audits and internal governance reviews, teams rarely fail because they lack a list of risks. They fail because the list is not connected to action: the highest-impact risks sit idle, different teams use different scoring scales, and “prioritization” becomes whoever escalates the loudest.

For a Compliance Officer, CCO, or GRC lead, operationalizing MANAGE-1.2 means establishing a single triage workflow that covers both AI development (model training, evaluation, release) and AI deployment (monitoring, drift response, incident handling). You need consistent scoring definitions, a clear decision forum, and a record of treatment decisions that reflects constraints (staffing, tooling, vendor dependencies, and available mitigations). NIST frames this as part of the AI risk management lifecycle in the AI RMF Core. Your job is to translate that into control language: owners, cadence, artifacts, and measurable completion criteria 1.

Regulatory text

Requirement excerpt: “Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.” 1

What the operator must do:

  1. Ensure AI risks are documented (not just discussed in meetings).
  2. Use a defined approach to rank those risks by impact and likelihood.
  3. Decide treatment priority based on that rank and on the organization’s available resources or methods (e.g., proven mitigations, engineering capacity, monitoring tools, third-party support).
  4. Show evidence that priorities drive work: assigned owners, planned treatments, and tracked completion.

Plain-English interpretation

You need a “risk triage” process for AI that works like vulnerability management or enterprise risk management: high-impact and likely risks rise to the top, but the treatment plan must also reflect feasibility. If a risk is severe but you lack a viable mitigation method today, MANAGE-1.2 still expects a prioritized treatment outcome (for example: restrict use, add human review, delay release, or procure tooling) rather than ignoring it.

A useful mental model for MANAGE-1.2:

  • Impact answers: “If this goes wrong, how bad is it for customers, safety, compliance, operations, or the business?”
  • Likelihood answers: “How plausible is it that this will happen in our environment, given controls and exposure?”
  • Resources/methods answers: “What can we realistically do now, and what will we do if we can’t fully mitigate?”

Who it applies to (entity and operational context)

This applies to organizations developing or deploying AI systems 1. In practice, it includes:

  • Product teams shipping AI features (genAI, ML scoring, recommendations, vision, NLP).
  • Central AI/ML platform teams managing shared tooling and pipelines.
  • Security, Privacy, Compliance, and Model Risk teams governing AI risk decisions.
  • Third parties that provide models, data, labeling, evaluation tooling, or hosted AI services when their components change your risk profile.

Operational contexts where MANAGE-1.2 becomes exam-critical:

  • Pre-release approvals (launch gates, model go/no-go).
  • Model updates (retraining, prompt changes, feature changes).
  • Post-incident response (harm events, erroneous outputs, policy violations).
  • Ongoing monitoring (drift, bias signals, performance degradation).

What you actually need to do (step-by-step)

Step 1: Define the unit of analysis (what gets prioritized)

Choose a consistent scope so risks are comparable. Common options:

  • Per AI system (each model or feature has its own register items).
  • Per use case (same model used in different workflows gets different risks).
  • Per release (risks tied to a version and change set).

Write this down in your AI risk management procedure so teams don’t invent their own units midstream.

Step 2: Build (or standardize) the AI risk register

Minimum fields to make MANAGE-1.2 operable:

  • Risk statement (cause → event → impact).
  • AI system/use case, owner, and business process.
  • Impact rating + rationale.
  • Likelihood rating + rationale.
  • Existing controls and detection signals.
  • Treatment options considered.
  • Chosen treatment, priority tier, due date, and approver.
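The minimum fields above can be captured as a structured record so every team logs risks the same way. A minimal Python sketch (the field names are illustrative, not prescribed by MANAGE-1.2):

```python
from dataclasses import dataclass, field

@dataclass
class RiskRegisterEntry:
    """One documented AI risk, scoped to a system, use case, or release."""
    risk_id: str
    statement: str                 # cause -> event -> impact
    system: str                    # AI system / use case / business process
    owner: str
    impact: str                    # rating from the approved rubric, e.g. "high"
    impact_rationale: str
    likelihood: str                # e.g. "medium"
    likelihood_rationale: str
    existing_controls: list = field(default_factory=list)
    treatment_options: list = field(default_factory=list)
    chosen_treatment: str = ""
    priority_tier: str = ""        # assigned later by the prioritization matrix
    due_date: str = ""
    approver: str = ""
```

However you store it (GRC tool, spreadsheet, database), the point is that every entry carries a rationale alongside its scores, so the ranking is auditable later.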

Daydream is typically introduced here as the system of record that maps MANAGE-1.2 to a control owner and recurring evidence collection, so the register and prioritization decisions don’t live in scattered docs 1.

Step 3: Define impact and likelihood criteria (simple, consistent, testable)

Avoid debates by predefining what “impact” means for your organization. Example impact dimensions you can embed in a rubric:

  • Legal/regulatory exposure (privacy, consumer protection, sector rules).
  • Customer harm (financial, safety, denial of service/benefit).
  • Security harm (data leakage, prompt injection leading to disclosure).
  • Operational harm (critical workflow disruption).
  • Reputational harm (misleading outputs in customer-facing channels).

Do the same for likelihood, using factors such as:

  • Exposure (internal-only vs. public-facing).
  • Automation level (advisory vs. fully automated decisioning).
  • Control strength (human review, guardrails, monitoring).
  • Change frequency (static vs. continuously updated).
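Once the dimensions are defined, you need a rule for rolling per-dimension ratings up to one overall rating. A common convention, assumed here for illustration, is worst-case: the highest-rated dimension determines the overall impact. A sketch:

```python
# Ordered rating scale shared by all impact dimensions (low -> critical).
SCALE = ["low", "medium", "high", "critical"]

def overall_impact(dimension_ratings: dict) -> str:
    """Roll per-dimension ratings (legal, customer, security, operational,
    reputational) up to a single impact rating.  Worst-case aggregation is
    an assumed convention here; your governance body sets the actual rule."""
    return max(dimension_ratings.values(), key=SCALE.index)
```

For example, a risk rated medium for legal exposure but high for customer harm lands at "high" overall, which keeps a single severe dimension from being averaged away.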

Step 4: Add the “resources/methods” decision layer (the MANAGE-1.2 differentiator)

Many teams stop after scoring. MANAGE-1.2 requires prioritizing treatment based on feasibility. Add a structured feasibility assessment:

  • Known mitigations exist and are implementable now (yes/no).
  • Engineering effort is available in the relevant team (yes/no).
  • Required third-party support is available (yes/no).
  • Mitigation introduces unacceptable performance/product tradeoffs (yes/no).
  • Interim risk reduction options exist (e.g., restrict scope, add review) (yes/no).

If feasibility is low, your treatment priority can still be high, but the treatment plan may be “restrict or pause” plus “procure capability” rather than “implement control X next sprint.” The artifact you need is the decision logic, not perfection.
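The yes/no questions above collapse into a feasibility status that feeds the prioritization matrix. The collapsing rules below are an illustrative assumption (the keys and thresholds are not NIST language), but they show how to make the decision logic explicit and repeatable:

```python
def feasibility(answers: dict) -> str:
    """Collapse the yes/no feasibility questions into ready/partial/blocker.
    Keys and precedence are illustrative assumptions, not prescribed terms."""
    if answers.get("unacceptable_tradeoffs"):
        return "blocker"   # mitigation would break the product; escalate
    if answers.get("mitigation_exists") and answers.get("capacity_available"):
        return "ready"     # a known fix can be implemented now
    if answers.get("interim_options_exist"):
        return "partial"   # contain now (restrict scope, add review), fix later
    return "blocker"       # no viable method today; treatment = restrict/procure
```

Encoding the logic this way means two teams answering the same questions reach the same feasibility status, which is exactly the consistency an auditor will probe for.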

Step 5: Create a prioritization matrix that produces a ranked backlog

Use a matrix that combines:

  • Impact (e.g., low/medium/high/critical)
  • Likelihood (e.g., low/medium/high)
  • Feasibility (e.g., ready/partial/blocker)

Then output:

  • Priority tier (P0/P1/P2 or equivalent)
  • Treatment path (mitigate, transfer, avoid, accept)
  • Deadline logic (tie to release gates or risk committee cadence)

Keep the matrix stable. Change it only through governance approval and version it like a policy.
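The matrix itself can be expressed as a small deterministic function so the ranked backlog is reproducible. This sketch uses illustrative thresholds (a real matrix is set and versioned by governance); note that low feasibility changes the treatment path, never the tier:

```python
SCALE = ["low", "medium", "high", "critical"]

def priority_tier(impact: str, likelihood: str, feasibility: str) -> tuple:
    """Combine severity and feasibility into (tier, treatment_path).
    Thresholds are illustrative assumptions, not NIST-prescribed values."""
    # Severity from impact (0-3) plus likelihood (0-2): range 0..5.
    severity = SCALE.index(impact) + SCALE.index(likelihood)
    if severity >= 4:
        tier = "P0"
    elif severity >= 2:
        tier = "P1"
    else:
        tier = "P2"
    # Feasibility shapes the path; it must not downgrade the severity tier.
    if tier == "P0" and feasibility == "blocker":
        path = "contain (restrict/pause) and procure capability"
    elif feasibility == "ready":
        path = "mitigate"
    else:
        path = "mitigate with interim containment"
    return tier, path
```

A critical-impact, high-likelihood risk with no available mitigation still comes out P0, with "contain and procure" as the treatment path rather than silently dropping down the backlog.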

Step 6: Assign owners and route approvals

Define who can:

  • Accept risk (and at what threshold).
  • Approve launch with open P0/P1 items.
  • Override feasibility constraints (funding, staffing, vendor switching).

Make this explicit in a RACI:

  • Product owner: accountable for treatment execution.
  • Risk/compliance: owns method and challenge.
  • Engineering/ML: implements mitigations.
  • Security/privacy: signs off on relevant controls.
  • Senior approver: accepts residual risk for high-tier items.

Step 7: Operationalize tracking and recurring review

Prioritization is a living process. Minimum operating rhythm:

  • Re-score risks when the model changes, the use case expands, or new incidents occur.
  • Track treatment tasks in a system that supports audit export.
  • Require closure evidence: test results, monitoring dashboards, or control configuration snapshots.

Required evidence and artifacts to retain

Auditors and internal risk committees look for “show me” artifacts. Retain:

  • AI risk register with impact/likelihood scores and rationales.
  • Prioritization rubric and matrix (version-controlled).
  • Feasibility/resource assessment notes per high-priority risk.
  • Treatment plans: tasks, owners, due dates, and completion criteria.
  • Approval records for risk acceptance, deferrals, or launch decisions.
  • Status reporting (risk burndown summaries, committee minutes).
  • Evidence of completed mitigations (test reports, configuration exports, monitoring alerts).

If you use Daydream, keep the control mapping (MANAGE-1.2 → policy/procedure/owner → evidence cadence) and evidence collection logs as your “single pane” audit packet 1.

Common exam/audit questions and hangups

Expect these questions:

  • “Show the top risks for each AI system and why they’re ranked that way.”
  • “Where is the definition of impact and likelihood, and who approved it?”
  • “How do you account for resource constraints without ignoring severe risks?”
  • “What happens when a high-impact risk has no available mitigation method?”
  • “Prove that prioritization changes engineering work, not just reporting.”

Hangups:

  • Inconsistent scoring across teams.
  • No proof of approvals for risk acceptance.
  • Risks documented, but treatments not tracked to closure.

Frequent implementation mistakes and how to avoid them

  1. Only scoring, no prioritization output.
    Fix: force a backlog view (ranked list) with named owners and due dates.

  2. Feasibility used as an excuse to downgrade risk.
    Fix: keep impact/likelihood as the “inherent/severity” view; track feasibility separately as a treatment planning constraint.

  3. No interim controls.
    Fix: require a default “containment” playbook (scope restriction, human-in-the-loop, monitoring thresholds) for high-tier risks with longer-term fixes.

  4. Risk acceptance by email with no structure.
    Fix: use a standard acceptance form capturing rationale, duration, compensating controls, approver, and a review date tied to a defined cadence.

  5. Third-party AI components ignored.
    Fix: add third-party dependency fields in the register and require vendor-provided changes to trigger re-triage.

Enforcement context and risk implications

NIST AI RMF is a framework, not a penalty-bearing regulation in itself 2. Your exposure comes from downstream obligations where AI failures become privacy violations, consumer harm, discrimination claims, contract breaches, or safety issues. MANAGE-1.2 reduces that exposure by making your choices defensible: if something goes wrong, you can show you identified the risk, ranked it rationally, allocated resources, and chose an appropriate treatment path.

Practical 30/60/90-day execution plan

First 30 days (stand up the mechanism)

  • Name the control owner for MANAGE-1.2 and publish a one-page procedure.
  • Decide the unit of analysis (system/use case/release) and standardize the AI risk register template.
  • Define impact and likelihood scales with short descriptors and examples relevant to your business.
  • Pilot triage on a small set of AI systems to test scoring consistency.

By 60 days (make it governable)

  • Approve a prioritization matrix that includes feasibility/resources as a distinct input.
  • Establish the approval workflow: who can accept, defer, or block release for high-tier risks.
  • Connect prioritized treatment items to your ticketing/SDLC tool so engineering work is traceable.
  • Start recurring reporting to an AI risk committee or existing operational risk forum.

By 90 days (make it auditable and repeatable)

  • Run at least one full cycle: identify → score → prioritize → treat → verify closure evidence.
  • Tune rubrics based on pilot outcomes, then lock the rubric version.
  • Create an audit-ready evidence packet per system (register export, approvals, closure proof).
  • If you have multiple teams, require consistent scoring calibration sessions to reduce drift.

Frequently Asked Questions

Do we have to fix the highest-impact risk first even if we lack engineering capacity?

You have to prioritize treatment based on impact, likelihood, and resources or methods, so capacity constraints can affect the treatment plan. If you can’t remediate quickly, document interim containment steps, escalation, and an approved plan to obtain resources or alternate methods 1.

How do we handle risks that are severe but don’t have a known mitigation?

Keep the risk ranked high based on impact and likelihood, then choose a feasible treatment path such as avoiding the use case, restricting scope, adding human review, or procuring tooling. Document the methods considered and why the selected approach is the only viable near-term option 1.

What counts as “documented AI risks” for MANAGE-1.2?

A documented risk is a written entry with a clear statement, context (system/use case), scoring rationale, and an owner. Meeting notes alone are weak evidence unless they roll into a controlled register and produce prioritized treatment actions 1.

Can we use our existing enterprise risk management (ERM) scoring model?

Yes, if it produces consistent impact/likelihood ratings for AI risks and supports treatment prioritization with resource/method constraints. Most teams add AI-specific criteria (automation level, model change frequency, monitoring coverage) to make ERM scoring workable 1.

How should third-party AI providers fit into the prioritization?

Treat third-party dependencies as part of feasibility and likelihood. If a mitigation requires a provider change, log that as a constraint, escalate through third-party management, and consider interim controls or scope restrictions while the provider issue is addressed.

What evidence is most persuasive in an audit?

A time-ordered trail: risk register entries, scoring rubric, prioritization output, approvals, treatment tickets, and closure proof. Auditors want to see that prioritization decisions changed what work got done, not just how risks were described 1.

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF program page
Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream