MANAGE-1.4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.

To meet MANAGE-1.4, you must produce and maintain written documentation that lists the negative residual risks (the sum of unmitigated risks) your AI system leaves behind, and you must document those risks separately for downstream acquirers and for end users. Operationalize this by tying residual-risk statements to your risk register, release gates, and customer-facing disclosures. 1

Key takeaways:

  • Document what remains unmitigated, not just what you fixed, and treat the “sum of unmitigated risks” as a managed output. 1
  • Produce two audience-specific residual-risk views: one for downstream acquirers (integrators/operators) and one for end users. 1
  • Make residual-risk documentation a recurring control with an owner, cadence, and evidence trail tied to model changes. 1

MANAGE-1.4 is a documentation requirement with operational teeth: it forces you to name the risks you are choosing to live with after mitigation, and to communicate them to the people who inherit those risks. “Negative residual risk” here is explicitly defined as the sum of all unmitigated risks, which means a scattered set of risk notes in tickets will not pass review if you cannot consolidate them into an intelligible residual-risk picture. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat residual-risk documentation as a governed deliverable produced from your AI risk register, validated at release time, and versioned alongside the AI system documentation that downstream acquirers rely on. Downstream acquirers need implementation and integration conditions (assumptions, constraints, monitoring duties). End users need plain-language limitations and foreseeable harms. Both groups need clarity on what is not mitigated, why, and what must be true for safe use. 1

This page gives you requirement-level steps, evidence artifacts, and audit-ready language so you can implement MANAGE-1.4 without building a new program from scratch. 2

Regulatory text

Requirement (verbatim): “Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.” 1

Operator translation (what you must do):

  1. Identify which AI risks remain unmitigated after controls, design choices, and process mitigations.
  2. Consolidate those unmitigated items into a coherent residual-risk view (the “sum”).
  3. Document the residual risks in a form usable by:
    • Downstream acquirers (teams or third parties that acquire and integrate/deploy your AI system).
    • End users (people impacted by, or directly using, the AI-enabled functionality). 1

Documentation is the deliverable, but auditors will test whether the documentation is fed by a real process (traceability to risk assessments, changes, and approvals). 1

Plain-English interpretation

Residual risk is what’s left after you’ve done what you can. MANAGE-1.4 requires you to write down:

  • The specific unmitigated risks that remain.
  • The conditions under which those risks get worse (assumptions, intended-use boundaries, known failure modes).
  • The practical implications for two audiences who experience the risk differently:
    • Downstream acquirers need to know what they must implement, monitor, or restrict to keep the system within safe bounds.
    • End users need to know limitations, expected errors, and foreseeable negative outcomes in plain language. 1

A useful mental model: your mitigations reduce risk; MANAGE-1.4 forces you to publish the remainder.

Who it applies to

Entities: Any organization developing or deploying AI systems, including those distributing AI components to other parties. 1

Operational contexts where this shows up immediately:

  • You ship an AI model, API, SDK, or embedded feature that customers integrate into their workflows.
  • You deploy AI internally where employees or consumers rely on outputs for decisions.
  • You use third parties (model providers, data brokers, labeling firms) and your system risk profile depends on their constraints and gaps. In this case, downstream acquirers might include your own internal product teams and external customers, depending on who “inherits” operational responsibility. 1

What you actually need to do (step-by-step)

1) Define “unmitigated” in your program language

Write a short internal rule that classifies each risk as:

  • Mitigated (controls implemented and operating, with evidence).
  • Partially mitigated (some controls exist, but meaningful exposure remains).
  • Unmitigated (no effective control, or control not feasible/approved). 1

This prevents teams from calling a risk “mitigated” when a mitigation is merely planned rather than operating.
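
As an illustration only, the three-state rule can be encoded so that a register entry’s status is derived from control evidence rather than asserted by hand. The field names and the `"operating"` status value below are hypothetical, not a prescribed schema:

```python
# Sketch: derive a risk's mitigation status from its controls.
# A "planned" control must never count toward mitigation; only
# controls with evidence of operation do.

def classify_risk(controls: list[dict]) -> str:
    """Return 'mitigated', 'partially mitigated', or 'unmitigated'."""
    operating = [c for c in controls if c.get("status") == "operating"]
    if not controls or not operating:
        return "unmitigated"          # no effective control in place
    if len(operating) == len(controls):
        return "mitigated"            # all controls implemented and operating
    return "partially mitigated"      # some controls exist, exposure remains

# A merely planned control does not change the status:
print(classify_risk([{"id": "C-1", "status": "planned"}]))  # unmitigated
```

The point of the sketch is the default: anything without operating-control evidence falls back to “unmitigated,” which matches the program rule above.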

2) Build/extend your AI risk register to support residual-risk outputs

At minimum, each risk entry should include fields that can be compiled into the “sum of unmitigated risks”:

  • Risk statement (cause → event → impact).
  • Impacted party (downstream acquirer, end user, both).
  • Current controls and control status (implemented vs planned).
  • Residual risk rating (your chosen scale).
  • Rationale for acceptance (why it remains).
  • Required conditions of use (assumptions, constraints, operational duties). 1

If you already have an enterprise risk register, add AI-specific attributes (model version, dataset lineage pointer, evaluation coverage pointer) so you can produce system-level residual-risk documentation.
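
One way to picture the minimum register entry is as a record type whose fields can be compiled into the “sum of unmitigated risks.” This is a sketch with illustrative field names, not a mandated schema:

```python
from dataclasses import dataclass, field

# Sketch of a risk-register entry carrying the fields listed above.
@dataclass
class RiskEntry:
    risk_id: str                  # stable ID for crosswalk traceability
    statement: str                # cause -> event -> impact
    impacted_party: str           # "acquirer", "end_user", or "both"
    status: str                   # "mitigated" / "partially mitigated" / "unmitigated"
    residual_rating: str          # your chosen scale, e.g. "low" / "medium" / "high"
    acceptance_rationale: str     # why the risk remains
    conditions_of_use: list[str] = field(default_factory=list)
    model_version: str = ""       # AI-specific attribute for system-level reporting

def residual_view(register: list[RiskEntry]) -> list[RiskEntry]:
    """Compile the 'sum of unmitigated risks': everything not fully mitigated."""
    return [r for r in register if r.status != "mitigated"]
```

Note that `residual_view` keeps partially mitigated items in scope, consistent with treating meaningful remaining exposure as residual risk.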

3) Produce two residual-risk documents (or two sections in one controlled doc)

Create an AI Residual Risk Statement with two distinct views:

A. Downstream acquirer residual-risk appendix (operator-facing)

Include:

  • Intended use and prohibited uses.
  • Integration assumptions (data quality expectations, environment constraints).
  • Monitoring requirements the acquirer must implement (drift checks, human review triggers, logging expectations).
  • Known limitations and failure modes tied to real scenarios.
  • Configuration hazards (what settings or prompts make outcomes worse).
  • Residual-risk acceptance boundaries (what risk you accept, what you refuse). 1

B. End-user residual-risk notice (user-facing)

Include:

  • Plain-language limitations (where it can be wrong, biased, incomplete, or unsafe).
  • What the user should do when uncertain (fallback steps, escalation).
  • High-stakes warnings where relevant (do not rely on this for specific decisions, if applicable to your system’s context).
  • How users can report issues. 1

Keep the end-user notice readable, but do not omit meaningful residual risks just because they are uncomfortable.
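
The two audience-specific views can be produced from one consolidated residual-risk list by filtering on the impacted party. A minimal sketch, assuming a hypothetical dict shape with an `impacted_party` field:

```python
def audience_view(residual_risks: list[dict], audience: str) -> list[dict]:
    """Filter residual risks for one audience.

    audience: "acquirer" or "end_user". Risks marked "both"
    appear in each view, so overlapping risks are never dropped.
    """
    return [r for r in residual_risks
            if r["impacted_party"] in (audience, "both")]

risks = [
    {"risk_id": "R-1", "impacted_party": "acquirer"},
    {"risk_id": "R-2", "impacted_party": "end_user"},
    {"risk_id": "R-3", "impacted_party": "both"},
]
acquirer_doc = audience_view(risks, "acquirer")   # R-1 and R-3
end_user_doc = audience_view(risks, "end_user")   # R-2 and R-3
```

Deriving both views from one list keeps the acquirer appendix and the end-user notice reconciled to the same underlying register.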

4) Tie residual-risk documentation to change management and release gates

Make residual-risk documentation a required artifact for:

  • New model releases.
  • Material data changes.
  • New use cases or customer segments.
  • Third-party model/provider swaps. 1

Practical gating rule: no production release until the residual-risk statement is updated, approved, and published to the right channels (customer docs, admin console, internal playbooks), with version control.
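
The gating rule above can be sketched as an automated pre-release check. The field names and the version-matching rule are illustrative assumptions, not a prescribed pipeline API:

```python
# Sketch: block promotion when the residual-risk statement has not been
# updated, approved, and published for the release under review.

def gate_release(release: dict, statement: dict) -> tuple[bool, str]:
    """Return (passes, reason) for a candidate release."""
    if statement.get("model_version") != release.get("model_version"):
        return False, "residual-risk statement not updated for this model version"
    if not statement.get("approved_by"):
        return False, "residual-risk statement lacks an approval record"
    if not statement.get("published_channels"):
        return False, "residual-risk statement not published to required channels"
    return True, "gate passed"
```

Wiring a check like this into CI or a release ticket template turns the documentation requirement into an enforced control rather than a convention.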

5) Assign ownership and set a recurring review motion

Assign a control owner (often Product Risk, Model Risk, or GRC) accountable for:

  • Collecting updates from engineering and product.
  • Validating that “unmitigated” is accurate (controls exist and operate).
  • Publishing the downstream-acquirer view and end-user view. 1

If you use a system like Daydream to map requirements to controls and evidence, configure MANAGE-1.4 as a recurring evidence item so you can prove it runs on schedule and survives personnel changes. 1

Required evidence and artifacts to retain

Retain artifacts that show both content and process:

Core documents

  • Versioned AI Residual Risk Statement (with downstream acquirer section and end-user section). 1
  • AI risk register export showing all risks marked unmitigated/partially mitigated and mapped to the statement. 1

Traceability

  • Crosswalk table: risk IDs → residual-risk disclosures → owners → last review date → linked controls/evaluations. 1
  • Change log showing when residual-risk statements changed and why (model version, data changes, incident learnings). 1
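
The crosswalk table can be generated from register rows rather than maintained by hand, which keeps it reconciled with the register. A sketch with illustrative column names:

```python
import csv
import io

# Sketch: emit the crosswalk (risk ID -> disclosure -> owner -> last
# review -> linked controls) as CSV from risk-register rows.

def crosswalk_csv(rows: list[dict]) -> str:
    cols = ["risk_id", "disclosure_id", "owner", "last_review", "linked_controls"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=cols)
    writer.writeheader()
    for row in rows:
        # Missing attributes surface as blank cells for reviewers to chase.
        writer.writerow({c: row.get(c, "") for c in cols})
    return buf.getvalue()
```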

Approvals

  • Release gate evidence: ticket/work item showing approval from accountable leader (Product, Security, Compliance, Model Risk). 1

Distribution proof

  • Where acquirers can access the doc (customer portal link, documentation repository path).
  • Where end users can access the notice (UI screenshot, help center article draft). 1

Common exam/audit questions and hangups

Expect reviewers to test these:

  1. “Show me the ‘sum of all unmitigated risks’ for System X.”
    Hangup: teams provide scattered risk notes without consolidation. Provide an index of unmitigated risks with system scope and current status. 1

  2. “How do you distinguish downstream acquirers from end users?”
    Hangup: a single generic disclosure. Provide two audience-specific outputs, even if some risks overlap. 1

  3. “How do you know these are truly unmitigated?”
    Hangup: mitigation plans treated as implemented. Show control operation evidence or label risks as unmitigated/partial until controls are operating. 1

  4. “What triggers an update?”
    Hangup: residual risks go stale after model updates. Show change-management hooks tied to model/data/use-case changes. 1

Frequent implementation mistakes and how to avoid them

  • Mistake: Only documenting mitigations. Why it fails: MANAGE-1.4 is about what remains. Fix: Require an “unmitigated risks” section in every system risk report. 1
  • Mistake: One-size-fits-all disclosure. Why it fails: Downstream acquirers and end users need different detail. Fix: Split documents or sections; tailor language and actionability. 1
  • Mistake: No traceability to the risk register. Why it fails: Auditors can’t reconcile the “sum of unmitigated risks.” Fix: Use stable risk IDs and a crosswalk table. 1
  • Mistake: Residual risks not tied to release management. Why it fails: Documentation becomes outdated. Fix: Add a release gate and change triggers. 1
  • Mistake: Hiding hard truths in internal docs only. Why it fails: End users still face the risk. Fix: Publish a user-facing notice for relevant residual risks. 1

Enforcement context and risk implications

The NIST AI RMF is a voluntary framework, so MANAGE-1.4 carries no direct regulatory enforcement mechanism of its own. 2

Operationally, weak residual-risk documentation creates predictable failure modes in audits and customer diligence: you cannot show informed risk acceptance, you cannot set safe integration conditions for downstream acquirers, and you cannot credibly claim users were warned about known limitations. Treat MANAGE-1.4 as a defensibility control for procurement reviews, incident response narratives, and contractual risk allocations. 1

A practical 30/60/90-day execution plan

First 30 days (stand up the control)

  • Name a MANAGE-1.4 control owner and approver.
  • Add “mitigated/partial/unmitigated” fields and “impacted party” fields to the AI risk register.
  • Draft the AI Residual Risk Statement template with two sections: downstream acquirers and end users.
  • Pilot on one AI system and publish internally. 1

By 60 days (operationalize and prove repeatability)

  • Backfill residual-risk statements for additional in-scope AI systems.
  • Implement the crosswalk table (risk IDs to disclosures) and store it in a controlled repository.
  • Add a release gate: residual-risk statement update required for model/data/use-case changes.
  • Create distribution paths: customer-facing acquirer appendix location and end-user notice location. 1

By 90 days (audit-ready and scalable)

  • Run a tabletop review: pick one system and test whether you can reconcile the risk register to published disclosures quickly.
  • Add periodic review tasks and evidence capture (meeting notes, approvals, version history).
  • Integrate third-party dependencies: ensure unmitigated third-party risks that flow to your system are reflected in residual-risk documentation for acquirers and end users. 1

Frequently Asked Questions

What counts as a “downstream acquirer” in practice?

A downstream acquirer is any party that obtains your AI system and then deploys, integrates, configures, or resells it in a way that carries operational responsibility. That can be an external customer, a channel partner, or an internal platform team, depending on your distribution model. 1

Do we need to quantify “the sum of all unmitigated risks”?

MANAGE-1.4 requires documentation of the sum, but it does not mandate a specific quantitative method in the provided text. Many teams document the “sum” as a consolidated list with residual-risk ratings and clear scope, then maintain traceability to the risk register. 1

Can we keep the downstream-acquirer residual risks confidential and only provide end-user notices?

The requirement explicitly calls for documentation to both downstream acquirers and end users. If confidentiality is a concern, provide a controlled acquirer appendix under NDA or contractual access, but still document and provide it to acquirers. 1

How do we handle residual risks that originate from a third-party model provider?

Treat them as your system’s residual risks if they remain unmitigated in your deployment context. Document the dependency, what you cannot control, and what the downstream acquirer and end user must do (or must not do) to reduce exposure. 1

What artifact do auditors usually want first for MANAGE-1.4?

Provide the versioned AI Residual Risk Statement plus the crosswalk to the risk register entries marked unmitigated or partially mitigated. That pairing proves you can identify, consolidate, and communicate residual risks for both audiences. 1

Where does Daydream fit without creating extra work for engineering?

Use Daydream to map MANAGE-1.4 to a control owner, define the recurring evidence you will collect (residual-risk statement version history, approvals, crosswalk), and automate reminders tied to change events. Engineering contributes content through existing risk and release workflows; GRC governs the output. 1

Footnotes

  1. NIST AI RMF Core

  2. NIST AI RMF program page

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream