GOVERN-1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.
To meet the GOVERN-1.7 requirement, you need an operational offboarding process for AI systems that prevents new harms during retirement: controlled shutdown, dependency mapping, data/model artifact handling, user and third-party communications, monitoring for residual use, and documented approvals and evidence. This must work for planned sunsets and emergency takedowns.
Key takeaways:
- Treat AI decommissioning as a controlled change with risk gates, not a “turn it off” ticket.
- Define what happens to data, models, prompts, logs, and downstream integrations after retirement.
- Keep an evidence bundle that proves the process ran (approvals, runbooks executed, and post-sunset monitoring results).
GOVERN-1.7 in the NIST AI Risk Management Framework expects more than a policy statement. It expects repeatable processes and procedures that let you phase out an AI system without increasing risk and without creating a trust gap with customers, regulators, and internal stakeholders (NIST AI RMF Core). That means you need a defined “AI retirement” path that covers the full system lifecycle: technical disablement, human process controls, communications, and post-decommission monitoring.
For a Compliance Officer, CCO, or GRC lead, the fastest way to operationalize this requirement is to convert it into a control that runs on clear trigger events (sunset, model replacement, vendor termination, critical incident) with named owners and a minimum evidence set. In practice, teams fail this requirement when they retire a model but leave a shadow integration active, keep using a deprecated model endpoint through a third party, or lose track of retained training data and logs that still carry privacy, IP, or security risk.
This page gives requirement-level implementation guidance you can put into production: a step-by-step procedure, decision points, artifacts to retain, audit questions to expect, and a pragmatic execution plan aligned to NIST AI RMF’s GOVERN function (NIST AI RMF 1.0; NIST AI RMF Core).
Regulatory text
NIST AI RMF GOVERN-1.7 excerpt: “Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.” (NIST AI RMF Core)
What the operator must do: Maintain an implemented, repeatable decommissioning and phase-out process for AI systems that (1) identifies and mitigates retirement-related risks, (2) controls the technical shutdown and dependency removal, (3) governs data/model artifact disposition, (4) manages communications to impacted users and third parties, and (5) verifies the system is truly retired and not still in use through residual pathways (NIST AI RMF Core).
Plain-English interpretation
If you deploy AI, you must be able to retire it safely. “Safely” includes security, privacy, resilience, legal/compliance obligations, and user harm. “Trustworthiness” means your retirement process should not create surprises: unexplained behavior changes, silent loss of controls, broken disclosures, or lingering access to sensitive data.
This requirement covers planned sunsets (end-of-life, product changes, model upgrades) and forced takedowns (safety incident, unacceptable bias findings, vendor termination, or a security event that requires disabling a capability). The common thread is consistent governance: controlled actions, documented decisions, and verification.
Who it applies to
Entity scope (typical):
- Organizations developing AI systems, including internal ML and GenAI features (NIST AI RMF 1.0; NIST AI RMF Core)
- Organizations deploying AI systems, including business-owned tools embedded into operations (NIST AI RMF 1.0; NIST AI RMF Core)
- Service organizations providing AI capabilities to customers, including managed services and API platforms (NIST AI RMF 1.0; NIST AI RMF Core)
Operational scope (where you need a decommissioning procedure):
- Customer-facing models (recommendations, underwriting, moderation, copilots)
- Internal decision-support tools (risk scoring, HR screening, fraud analytics)
- Embedded third-party models and APIs (including model hosting providers and feature vendors)
- Fine-tuned models and prompt-based systems where “model retirement” may mean disabling a workflow, a system prompt, or an orchestration layer
What you actually need to do (step-by-step)
Use this as your operating procedure. Treat it as a controlled change with a clear owner and evidence trail.
1) Establish ownership and triggers (control design)
- Name a control owner (usually Product + Engineering execution, with GRC oversight).
- Define trigger events: end-of-life date, replacement model release, repeated KPI failures, safety event, adverse audit finding, third-party contract termination, material policy change.
- Define decommission types:
- Soft phase-out: reduce traffic, restrict use cases, stop onboarding, add guardrails.
- Hard shutdown: disable endpoints, revoke keys, remove UI access, block network routes.
- Emergency takedown: expedited path with after-action documentation.
Operational note: Put triggers into your SDLC / change management intake so sunsets don’t happen informally via “roadmap decisions.”
2) Create an AI Decommission Plan (system-specific)
Minimum contents:
- System inventory identifiers: system name, version, owners, environment(s).
- Dependency map: upstream data sources, downstream consumers, integrations, embedded third parties, model endpoints, feature flags, batch jobs.
- Risk assessment for retirement: what can go wrong during sunset (e.g., loss of human review, degraded fraud detection, broken disclosures, users routed to an untested fallback).
- Fallback plan: manual process, rules-based alternative, or “no decision” pathway with clear ownership.
- Communications plan: internal stakeholders, customer notices if applicable, third parties relying on outputs.
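The Decommission Plan contents above can be captured as a machine-readable record so that approval gates can be enforced automatically. The sketch below is a minimal, hypothetical schema (the class and field names are illustrative, not a standard), showing how a missing dependency map or fallback plan can block approval, as recommended in the implementation-mistakes section.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Decommission Plan record; field names are
# illustrative and should be adapted to your GRC tooling.
@dataclass
class DecommissionPlan:
    system_name: str
    version: str
    owners: list[str]
    environments: list[str]
    # e.g. {"downstream": ["billing-app"], "data_sources": ["events-lake"]}
    dependencies: dict[str, list[str]] = field(default_factory=dict)
    retirement_risks: list[str] = field(default_factory=list)
    fallback: str = ""
    communications: list[str] = field(default_factory=list)

    def approval_gaps(self) -> list[str]:
        """Return blocking gaps; an empty list means the plan can go to approvers."""
        gaps = []
        if not self.dependencies:
            gaps.append("dependency map missing")
        if not self.retirement_risks:
            gaps.append("retirement risk assessment missing")
        if not self.fallback:
            gaps.append("fallback plan missing")
        return gaps
```

A plan submitted without a dependency map would then surface the gap before any approver signs off:

```python
plan = DecommissionPlan("fraud-scorer", "2.3", ["risk-eng"], ["prod"])
plan.approval_gaps()  # includes "dependency map missing"
```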
3) Freeze and control changes before sunset (change governance)
- Create a change freeze window for the retiring system except for approved decommission tasks.
- Confirm you have rollback criteria for phase-out steps (if you use a progressive rollout).
- Require approval gates for:
- Security (access revocation plan, secrets rotation impacts)
- Privacy/Legal (data retention and deletion requirements)
- Business owner (fallback readiness, operational coverage)
4) Handle data, model, and log artifacts intentionally
This is where decommissioning often fails. Your process should explicitly address:
- Training data: retention basis, deletion method, and whether it can be reused in the successor model.
- Model artifacts: weights, fine-tuning datasets, embeddings, vector stores, feature stores.
- Prompts and evaluation sets: system prompts, red-team prompts, test cases.
- Logs: prompts/responses, decision logs, audit logs, monitoring signals.
- Access pathways: API keys, service accounts, IAM roles, network allowlists.
Decide and document disposition:
- Delete, archive, retain for legal hold, or retain for auditability.
- If retained, specify storage location, access controls, and purpose limitation.
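One way to make the disposition decision enforceable is to validate each record before closure. This is a minimal sketch under assumed field names (nothing here is a standard schema): any artifact that is not deleted must document where it lives, who can access it, and why it is kept.

```python
from dataclasses import dataclass

# Assumed disposition vocabulary, mirroring the options in the text above.
VALID_DISPOSITIONS = {"delete", "archive", "legal_hold", "retain_for_audit"}

@dataclass
class ArtifactDisposition:
    artifact: str            # e.g. "model weights", "prompt/response logs"
    disposition: str         # one of VALID_DISPOSITIONS
    storage_location: str = ""
    access_controls: str = ""
    purpose: str = ""

    def validate(self) -> list[str]:
        """Return errors that should block closure of the decommission event."""
        errors = []
        if self.disposition not in VALID_DISPOSITIONS:
            errors.append(f"unknown disposition: {self.disposition}")
        elif self.disposition != "delete":
            # Retained artifacts must document location, access, and purpose.
            for name in ("storage_location", "access_controls", "purpose"):
                if not getattr(self, name):
                    errors.append(f"{name} required when artifact is retained")
        return errors
```

For example, retaining prompt/response logs "for audit" with no documented storage location or access controls would fail validation, which is exactly the high-risk gap called out later under implementation mistakes.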
5) Execute technical retirement with verification
Run a checklist that includes:
- Disable production endpoints and UI entry points.
- Remove scheduled jobs, queues, webhooks, and event triggers.
- Revoke credentials and rotate shared secrets that were accessible to the retiring system.
- Remove or update downstream integrations so consumers cannot call stale endpoints.
- Update documentation, runbooks, and user guidance to point to the replacement or fallback.
Verification should include:
- Observability checks for attempted calls to retired endpoints.
- Confirmation of zero traffic (or only allowed traffic) from approved sources.
- Confirmation that third parties have removed dependencies if they consume your AI outputs.
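The residual-use verification above can be partially automated against access logs. The sketch below assumes log entries have already been parsed into dicts with "source" and "path" keys (adapt to your actual log schema); it flags any call to a retired endpoint from a source that is not on a temporary exception list.

```python
def residual_calls(log_entries, retired_paths, allowed_sources=frozenset()):
    """Flag requests to retired endpoints from sources not on the exception list.

    log_entries: iterable of dicts with "source" and "path" keys (an assumed
    shape; map your real access-log format into it).
    retired_paths: path prefixes of decommissioned endpoints.
    allowed_sources: callers temporarily permitted under a documented exception.
    """
    return [
        entry for entry in log_entries
        if any(entry["path"].startswith(prefix) for prefix in retired_paths)
        and entry["source"] not in allowed_sources
    ]
```

An empty result over the monitoring window supports the "zero traffic (or only allowed traffic)" closure evidence; any hit is either a missed dependency or an exception that needs to be registered with an owner and expiry.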
6) Maintain trust: disclosures, comms, and user impact controls
A safe retirement can still harm trust if users experience silent changes.
- Update product disclosures and internal SOPs that referenced the retired system.
- If the AI system supported decisions affecting people, confirm the replacement/fallback preserves required controls (human review, appeal paths, explanations) that your governance program relies on.
- Document customer communications where contractually required or where reasonable to prevent surprise.
7) Post-decommission monitoring and closure
Define closure criteria:
- Evidence of shutdown steps completed.
- Monitoring confirms no residual use.
- Risks and exceptions documented and accepted by an accountable owner.
- Lessons learned captured, with backlog items created for process improvement.
8) Run control health checks (program-level)
You need repeatability. Do a periodic check across the AI inventory:
- Systems past end-of-life dates
- Systems replaced without a recorded decommission event
- Orphaned endpoints
- Unknown owners
This aligns with maintaining governance processes over time (NIST AI RMF 1.0).
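The periodic sweep can be scripted against your AI inventory. This is a sketch under assumed record fields ("name", "owner", "end_of_life", "status", "decommission_event_id"); adapt the field names to your inventory system of record.

```python
from datetime import date

def inventory_health_check(inventory, today=None):
    """Sweep AI inventory records for decommissioning control gaps.

    Each record is assumed to be a dict with "name", "owner",
    "end_of_life" (date or None), "status" ("live" or "retired"),
    and "decommission_event_id" (None if no event was recorded).
    """
    today = today or date.today()
    findings = []
    for system in inventory:
        if not system.get("owner"):
            findings.append((system["name"], "unknown owner"))
        eol = system.get("end_of_life")
        if eol and eol < today and system.get("status") == "live":
            findings.append((system["name"], "past end-of-life but still live"))
        if system.get("status") == "retired" and not system.get("decommission_event_id"):
            findings.append((system["name"], "retired without a recorded decommission event"))
    return findings
```

Running this on a schedule and tracking findings to closure gives you the program-level repeatability evidence the health check is meant to produce.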
Required evidence and artifacts to retain
Keep an “AI Decommission Evidence Bundle” per retirement event. Minimum:
- Control card / requirement control record: objective, owner, triggers, steps, exception rules.
- Decommission Plan with dependency map and fallback approach.
- Approvals: Security, Privacy/Legal, Business owner sign-off; emergency approvals if applicable.
- Execution logs: change tickets, pull requests, configuration diffs, runbook checklists with timestamps.
- Artifact disposition record: what was deleted/archived/retained; storage locations; access controls; legal hold notes.
- Comms artifacts: internal announcements, customer notices (if applicable), third-party notifications.
- Post-sunset verification: monitoring screenshots/exports, traffic reports, alert reviews, incident notes if anomalies occurred.
- Exceptions register: any residual use allowed temporarily, with expiry and compensating controls.
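A simple completeness check keeps the evidence bundle honest before a decommission event is closed. The item names below mirror the minimum list above but are otherwise an assumption; rename them to match your GRC repository.

```python
# Assumed bundle item names, one per minimum evidence category listed above.
REQUIRED_EVIDENCE = {
    "control_record",
    "decommission_plan",
    "approvals",
    "execution_logs",
    "artifact_disposition",
    "communications",
    "post_sunset_verification",
    "exceptions_register",
}

def missing_evidence(bundle_contents):
    """Return required evidence items not yet present in the bundle, sorted."""
    return sorted(REQUIRED_EVIDENCE - set(bundle_contents))
```

Blocking closure while `missing_evidence(...)` is non-empty means every retirement event produces an audit-ready packet by construction rather than by after-the-fact cleanup.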
Retention period: align to your corporate retention schedule and any applicable legal hold requirements; what matters for GOVERN-1.7 is that you can prove the process ran and that risk did not increase (NIST AI RMF Core).
Common exam/audit questions and hangups
Expect diligence teams (customers, internal audit, assessors) to ask:
- “Show me the last AI system you retired. Where’s the runbook and evidence?”
- “How do you ensure no one can still access the old model endpoint or weights?”
- “What happens to training data, prompts, and logs after retirement?”
- “How do you handle third-party dependencies and downstream consumers?”
- “How do you execute an emergency takedown outside normal CAB timelines?”
- “How do you confirm user-facing disclosures and internal SOPs were updated?”
Hangups that create findings:
- Inventory cannot identify all deployed AI systems, so you cannot prove completeness.
- Retirement is treated as an engineering task with no compliance gates or evidence.
- You can’t show post-decommission verification; you assume “disabled” means “gone.”
Frequent implementation mistakes (and how to avoid them)
- No dependency mapping. Fix: require a dependency map section in the Decommission Plan and block approval without it.
- Forgetting non-model components. Fix: include embeddings, vector stores, prompts, feature pipelines, and evaluation harnesses in scope.
- No plan for downstream consumers. Fix: add a “consumer confirmation” step for internal apps and third parties.
- Retaining sensitive logs without controls. Fix: treat prompt/response logs as high-risk artifacts; document purpose, access, and retention basis.
- Emergency takedown with no paper trail. Fix: allow expedited approval, but require after-action documentation and risk acceptance.
Enforcement context and risk implications
NIST AI RMF is a voluntary framework, not a penalty schedule (NIST AI RMF 1.0; NIST AI RMF Core). The practical risk is still real: decommissioning failures often surface as security incidents (orphaned credentials), privacy issues (retained data without a basis), customer-impacting outages (no fallback), or governance failures (inaccurate disclosures). Those outcomes tend to trigger contractual claims, customer audit findings, and regulator interest depending on your sector. Treat GOVERN-1.7 as part of operational resilience and trust controls.
Practical execution plan (30/60/90-day)
First 30 days (stand up the control)
- Assign owner(s) and publish the decommissioning SOP and emergency takedown path.
- Create the control card: triggers, steps, approvals, exception handling.
- Define the minimum evidence bundle and where it will be stored.
- Pilot on one low-risk AI system retirement or a tabletop “mock retirement” if none are scheduled.
By 60 days (make it repeatable across the inventory)
- Tie decommission triggers into change management and the AI system inventory.
- Standardize templates: Decommission Plan, artifact disposition record, comms checklist.
- Train Engineering, Product, and Security on the retirement checklist and required evidence.
- Add monitoring for residual endpoint traffic and unauthorized access attempts.
By 90 days (prove operational effectiveness)
- Run a control health check across AI systems: identify “unknown owner,” “no sunset plan,” or “deprecated but live.”
- Execute at least one end-to-end retirement with full evidence, including post-sunset verification.
- Track remediation items to closure with accountable owners and due dates.
- If you manage third parties, update offboarding clauses and technical exit steps for AI-related services.
Tooling note: Many teams track this in GRC workflows plus change tickets. If you use Daydream, map GOVERN-1.7 to a single control workflow with required evidence fields so each decommission event produces an audit-ready packet without rework.
Frequently Asked Questions
Does GOVERN-1.7 apply if we only use third-party AI tools and don’t build models?
Yes. If you deploy AI in your operations, you still need a safe phase-out path, including how you exit the third party, revoke access, and prevent residual use through integrations (NIST AI RMF Core).
What counts as “decommissioning” for a GenAI feature that is mostly prompts and routing?
Decommissioning includes disabling the workflow, revoking keys, removing access paths, and deciding what happens to prompt logs, retrieval indexes, and stored conversation data. Treat orchestration components as first-class artifacts in the retirement plan.
Do we have to delete all AI-related data when retiring a system?
No. You need a documented disposition decision that avoids increasing risk. Some artifacts may be retained for auditability or legal hold, but you must control access and document purpose and retention.
How do we handle emergency shutdowns without slowing incident response?
Define an emergency takedown procedure with pre-authorized roles and a short after-action requirement: what was disabled, why, who approved, what artifacts were preserved, and what follow-up is required.
What evidence is most persuasive to auditors and customer assessors?
A complete packet from a real retirement event: plan, approvals, execution records, artifact disposition, and post-sunset verification that no residual traffic or access remains.
What if a downstream team refuses to migrate off the retiring model?
Treat it as an exception with an owner, expiry, and compensating controls. Keep the exception visible to governance, and require a migration plan tied to a date or objective closure criteria.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream