GOVERN-1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.
To meet GOVERN-1.3, you need a documented, repeatable method to scale AI risk management activities up or down based on your organization’s risk tolerance, and you must be able to prove it operates in practice. Operationalize this by defining risk tiers, mapping each tier to required controls and reviews, and enforcing tier-based gates across the AI lifecycle. [1]
Key takeaways:
- Define risk tolerance in operational terms (thresholds, triggers, decision rights) and connect it to AI system risk tiers.
- Build a “tier-to-activities” matrix so teams know exactly what extra testing, review, and approvals are required at higher risk.
- Retain evidence that the tiering decision happened, the right activities ran, and exceptions were approved and tracked. [2]
GOVERN-1.3 is a governance requirement that forces a practical question: “Given our risk tolerance, how much risk management do we need for this AI system?” If your answer is “it depends,” that’s fine, but the dependency must be defined, documented, and consistently applied. NIST expects processes, procedures, and practices that let you calibrate effort to risk, rather than treating every AI use case the same. [1]
For a CCO, GRC lead, or Compliance Officer, the fastest way to operationalize this requirement is to stop debating abstract risk appetite language and translate it into execution: risk tiers, control expectations per tier, approval gates, and evidence. Your auditors, customers, and internal oversight bodies will look for three things: clear ownership, a predictable cadence, and traceable artifacts that show the mechanism works across teams, not just on paper. [2]
This page gives you a requirement-level implementation playbook: who it applies to, exactly what to build, the artifacts to retain, common audit questions, and an execution plan that turns “risk tolerance” into a working control.
Regulatory text
Excerpt: “Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.” [1]
Operator interpretation: You must (1) define or reference organizational risk tolerance, (2) assess each AI system or material change against that tolerance, and (3) use the result to determine the depth of required risk management activities (reviews, testing, approvals, monitoring). Then you must show evidence that teams follow the method consistently. [2]
Plain-English interpretation (what the requirement really demands)
- A scaling mechanism: Low-risk AI work should not require the same rigor as high-risk AI work, but the boundary between “low” and “high” must be pre-defined.
- A decision record: For each AI system (and major change), someone must decide the risk tier and document why.
- A mapped set of activities: Each tier must have required activities (technical, legal/compliance, operational) that are proportional to the tier.
- A governance backstop: Exceptions can happen, but only with explicit approval, expiration, and follow-up.
If you cannot show who decides, how they decide, and what changes based on that decision, you will struggle to evidence GOVERN-1.3.
Who it applies to (entity and operational context)
This applies to any organization developing, deploying, or operating AI systems, including service organizations providing AI-enabled services to customers. 1
In practice, it applies across:
- Product and Engineering: model development, fine-tuning, evaluation, release gates.
- Data Science / ML Ops: dataset changes, retraining triggers, monitoring and drift response.
- Security and Privacy: threat modeling, access controls, privacy reviews, incident response integration.
- Legal/Compliance and Risk: risk tolerance definition, policy enforcement, documentation, issue management.
- Procurement / Third-party risk management: third party models, APIs, data suppliers, labeling vendors, and system integrators.
If third party components are material to the AI system, your tiering and activity selection must include third party risk considerations (for example: opaque training data provenance can raise the risk tier even if your internal code is controlled).
What you actually need to do (step-by-step)
Step 1: Write a “requirement control card” for GOVERN-1.3
Create a one-page control runbook that operators can execute consistently. Include:
- Objective: Scale AI risk management activities to match risk tolerance.
- Owner: Named role (for example, Head of AI Governance, Enterprise Risk, or GRC).
- In-scope objects: AI systems, AI features, major model updates, new data sources, new third party model providers.
- Trigger events: new AI system, significant change, incident, expanded user population, new jurisdiction, new intended use.
- Cadence: defined review rhythm plus event-based triggers.
- Exception rules: who can approve, required compensating controls, expiration date, and escalation path. [2]
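The control card above can also be captured as structured data so governance tooling can reference it. A minimal sketch in Python; the field values (owner role, cadence, approver) are illustrative examples, not prescriptions:

```python
# A GOVERN-1.3 "control card" as structured data. All values are examples
# to be replaced with your organization's own roles, triggers, and cadence.
CONTROL_CARD = {
    "control": "GOVERN-1.3",
    "objective": "Scale AI risk management activities to match risk tolerance",
    "owner": "Head of AI Governance",  # example named role
    "in_scope": [
        "AI systems", "AI features", "major model updates",
        "new data sources", "new third party model providers",
    ],
    "triggers": [
        "new AI system", "significant change", "incident",
        "expanded user population", "new jurisdiction", "new intended use",
    ],
    "cadence": "quarterly review plus event-based triggers",  # example rhythm
    "exceptions": {
        "approver": "Enterprise Risk",  # example approver
        "requires": ["compensating controls", "expiration date", "escalation path"],
    },
}
```

Keeping the card in a machine-readable form makes it easy to surface the trigger list inside intake forms and to diff changes to the control over time.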
Step 2: Translate “risk tolerance” into decision-ready criteria
Risk tolerance often exists as broad enterprise language. Convert it into criteria you can apply to AI:
- Impact dimensions: harm to individuals, financial loss, regulatory exposure, safety, discrimination risk, security risk, operational dependency.
- Context dimensions: user population, use in consequential decisions, automation level, human oversight, external vs internal use.
- Data dimensions: sensitive data handling, data provenance, data rights, cross-border flows.
- Model dimensions: explainability needs, adversarial robustness needs, susceptibility to hallucination/misuse (where relevant).
Output: a short, stable list of criteria and a scoring or classification approach that yields a tier (example tiers below). Keep it simple enough that teams will actually apply it.
Step 3: Define AI risk tiers and document tier thresholds
Use a small number of tiers. A practical starting point:
- Tier 1 (Low): internal productivity tooling, minimal user impact, no sensitive data, non-consequential outputs.
- Tier 2 (Moderate): customer-facing features, moderate operational dependency, limited sensitive data, human review in the loop.
- Tier 3 (High): consequential decisions, vulnerable populations, significant automation, safety-critical or regulated workflows, sensitive data at scale.
You do not need perfect math. You need consistent categorization that reflects your risk tolerance and can withstand scrutiny.
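To make the Step 2 criteria and the tier definitions above repeatable, some teams encode them as a small classification helper. The sketch below is illustrative only: the criteria names and thresholds are assumptions to be replaced with your own risk tolerance mapping.

```python
from dataclasses import dataclass

@dataclass
class AIRiskProfile:
    # Illustrative criteria drawn from the impact/context/data dimensions in Step 2.
    consequential_decisions: bool  # outputs influence consequential decisions
    sensitive_data: bool           # handles sensitive data
    external_users: bool           # customer-facing rather than internal-only
    human_in_the_loop: bool        # human review before outputs take effect
    third_party_opaque: bool       # opaque third party model or data provenance

def assign_tier(profile: AIRiskProfile) -> int:
    """Map a risk profile to Tier 1-3. Thresholds are examples, not NIST guidance."""
    # Tier 3: consequential decisions, or sensitive data without human oversight.
    if profile.consequential_decisions or (
        profile.sensitive_data and not profile.human_in_the_loop
    ):
        return 3
    # Tier 2: customer-facing, handles sensitive data, or depends on an
    # opaque third party component.
    if profile.external_users or profile.sensitive_data or profile.third_party_opaque:
        return 2
    # Tier 1: internal, low impact, no sensitive data.
    return 1
```

Keeping the decision in one reviewed function makes the tiering logic auditable and diffable, which supports the consistent-categorization goal above.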
Step 4: Build a “tier-to-activities” matrix (the core GOVERN-1.3 artifact)
Create a table that maps each tier to minimum required risk management activities. Example categories:
| Activity category | Tier 1 | Tier 2 | Tier 3 |
|---|---|---|---|
| Pre-launch risk assessment | Lightweight | Standard | Enhanced + independent review |
| Data and privacy review | If sensitive data present | Required | Required + stricter sign-off |
| Security review | Standard appsec | Threat modeling for AI features | Enhanced testing + red teaming plan |
| Model evaluation | Basic quality checks | Bias + robustness checks where relevant | Expanded evaluation + stress tests |
| Human oversight | Documented | Required | Required + clear fallback procedures |
| Monitoring | Basic metrics | Drift + incident triggers | 24/7-ready escalation + tighter thresholds |
| Third party due diligence | As needed | Required for critical suppliers | Required + contractual controls + exit plan |
Your matrix should include: owner, required approvers, and evidence produced for each activity.
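One lightweight way to make the matrix enforceable rather than decorative is a lookup table of minimum activities per tier, plus a function that reports what is still missing. The activity names below are illustrative, not a NIST-mandated list:

```python
# Minimum required activity categories per tier. Names are illustrative;
# derive the real set from your tier-to-activities matrix.
TIER_ACTIVITIES = {
    1: {"pre_launch_risk_assessment", "basic_monitoring"},
    2: {"pre_launch_risk_assessment", "data_privacy_review",
        "ai_threat_model", "model_evaluation", "drift_monitoring"},
    3: {"pre_launch_risk_assessment", "data_privacy_review",
        "ai_threat_model", "red_team_plan", "model_evaluation",
        "stress_tests", "drift_monitoring", "independent_review"},
}

def missing_activities(tier: int, completed: set) -> set:
    """Return required activities that have no completion evidence yet."""
    return TIER_ACTIVITIES[tier] - completed
```

A non-empty result from `missing_activities` is exactly the gap list a release gate or control health check should surface.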
Step 5: Embed tiering and gates into delivery workflows
Controls fail when they sit outside delivery tooling. Put the tier decision and required activities into:
- Intake forms for new AI initiatives.
- Change management for model updates, new datasets, and prompt/policy changes.
- Release checklists (CI/CD or product launch readiness).
- Third party onboarding flows for AI providers and critical data suppliers.
Make “tier selected + evidence attached” a completion criterion for moving forward.
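The “tier selected + evidence attached” criterion can be enforced as a simple gate in CI/CD or release tooling. A sketch, assuming a hypothetical intake or change ticket exported as a dict (field names are assumptions):

```python
class ReleaseGateError(Exception):
    """Raised when a release fails the GOVERN-1.3 gate."""

def check_release_gate(record: dict) -> None:
    """Block release unless tiering ran and evidence is attached.

    `record` is a hypothetical intake/change ticket exported as a dict;
    the field names are illustrative.
    """
    if record.get("tier") not in (1, 2, 3):
        raise ReleaseGateError("No risk tier selected; tiering is a release precondition.")
    if not record.get("evidence_links"):
        raise ReleaseGateError("Tier selected but no evidence attached.")
    if record["tier"] >= 2 and not record.get("approver"):
        raise ReleaseGateError("Tier 2/3 releases require a named approver.")
```

Wiring a check like this into the release checklist is what turns the policy statement into an operational gate.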
Step 6: Define the minimum evidence bundle and retention location
For each execution cycle (new system or major change), standardize what “proof” looks like:
- Intake and tiering decision record (with rationale)
- Completed required reviews per the matrix (security, privacy, compliance, model evaluation)
- Approvals (named approvers, dates, conditions)
- Monitoring plan and incident response linkage
- Exceptions (if any): documented compensating controls, expiry, and tracked remediation items [2]
Store evidence in a system that supports retrieval by AI system name, owner, version, and date.
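One way to standardize the bundle is a JSON-serializable record keyed by the retrieval fields above (system name, owner, version, date). The field set below is an illustrative sketch, not a required schema:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class EvidenceBundle:
    # Retrieval keys from Step 6: system name, owner, version, date.
    system_name: str
    owner: str
    version: str
    decided_on: str  # ISO date of the tiering decision
    tier: int
    rationale: str
    review_artifacts: list = field(default_factory=list)  # links to sign-offs, reports
    approvals: list = field(default_factory=list)         # approver, date, conditions
    exceptions: list = field(default_factory=list)        # compensating controls, expiry

    def to_json(self) -> str:
        """Serialize for storage in the evidence repository."""
        return json.dumps(asdict(self))
```

A fixed record shape means every execution cycle produces the same “proof” structure, which is what makes later retrieval and sampling cheap.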
Step 7: Run control health checks and close remediation to validated completion
Operate the control like a living program:
- Periodic sampling to confirm tiering decisions match actual use
- Checks that required activities were completed for sampled Tier 2/Tier 3 systems
- Tracking of issues to closure with validation (for example: re-test, re-review, or updated monitoring)
A common failure mode is “we did the review” without proof of follow-through.
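The periodic sampling step can be as simple as drawing a random subset of Tier 2/3 systems each cycle. A sketch; the 20% default rate is an assumption, not guidance:

```python
import random

def sample_for_health_check(systems: list, rate: float = 0.2, seed=None) -> list:
    """Draw a random sample of Tier 2/3 systems for a periodic review cycle.

    `rate` (default 20%) is an illustrative sampling rate; a fixed `seed`
    makes a given cycle's selection reproducible for the audit record.
    """
    high_risk = [s for s in systems if s.get("tier", 1) >= 2]
    if not high_risk:
        return []
    k = max(1, round(len(high_risk) * rate))
    return random.Random(seed).sample(high_risk, k)
```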
Required evidence and artifacts to retain
Keep these artifacts audit-ready:
- GOVERN-1.3 control card (owner, triggers, steps, exceptions) [2]
- Risk tolerance mapping (how enterprise risk appetite becomes AI tier criteria)
- AI tier definitions and thresholds (including examples)
- Tier-to-activities matrix (minimum required activities per tier)
- Tiering decisions for each in-scope AI system and major change (dated, owned, justified)
- Review outputs (risk assessments, evaluation reports, security/privacy sign-offs, monitoring plans)
- Exception register (approvals, expiry, compensating controls, closure evidence)
- Control health check results and remediation tracker [2]
Common exam/audit questions and hangups
Expect questions like:
- “Show me how your risk tolerance changes what you do for AI system A vs B.”
- “Who can approve a high-risk AI launch, and what do they review?”
- “How do you ensure teams don’t under-tier systems to avoid work?”
- “How do you handle third party models or datasets in tiering?”
- “Show evidence of an exception and the follow-up controls.”
Hangups that stall audits:
- Risk appetite statements that never convert into tier thresholds.
- Tiering exists, but no enforced linkage to required activities and approvals.
- Evidence scattered across email, chat, and tickets without a retrieval method.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating tiering as a one-time classification.
  Fix: Make tiering event-driven (new data, new user population, new deployment context) and require re-tiering on material changes.
- Mistake: Overly complex scoring models.
  Fix: Fewer tiers, clearer triggers, and examples. Complexity increases inconsistency.
- Mistake: “Policy says so” without operational gates.
  Fix: Embed tiering and required artifacts into release readiness and change management.
- Mistake: No exception design.
  Fix: Define exception approval, expiry, and compensating controls; track exceptions like issues with owners and due dates.
- Mistake: Ignoring third party risk.
  Fix: Treat third party opacity (training data provenance, subprocessor chains, security posture) as a risk input that can raise a tier.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so treat enforcement risk as indirect: weak GOVERN-1.3 execution increases the chance of product harm, privacy/security incidents, and misleading risk disclosures. When regulators or customers ask “why didn’t you do more testing or oversight?”, your defensible answer is a tiered program tied to risk tolerance, with evidence that the prescribed activities ran. [2]
Practical 30/60/90-day execution plan
First 30 days (Immediate)
- Assign a control owner and publish the GOVERN-1.3 control card. [2]
- Inventory in-scope AI systems and third party AI dependencies at a workable level of detail.
- Draft tier criteria and a simple tiering worksheet; pilot on a small set of systems.
By 60 days (Near-term)
- Finalize tier definitions and the tier-to-activities matrix with Security, Privacy, Product, and Legal/Compliance input.
- Implement workflow hooks: intake form fields, required evidence checklist, and approval gates for Tier 2/Tier 3.
- Stand up a centralized evidence repository structure (by system, version, date).
By 90 days (Operational)
- Run the first control health check: sample systems, confirm tiering accuracy, verify evidence bundles, log gaps.
- Launch an exception register and remediation workflow with validation steps. [2]
- Train reviewers and system owners on how to tier, what evidence is required, and how changes trigger re-tiering.
Where Daydream fits: If you are struggling to prove operational consistency, Daydream can act as the system of record for control cards, tier-to-activities matrices, evidence bundles, and recurring control health checks, so you can answer diligence and audit requests without reconstructing history from tickets and email. [2]
Frequently Asked Questions
How do we define “risk tolerance” in a way engineers can apply?
Convert enterprise risk appetite into a short set of AI-specific criteria and tier thresholds (impact, context, data, model, third party dependency). Then require a tiering decision record for each AI system and major change. [2]
Can we start with qualitative tiers instead of a numeric scoring model?
Yes. Consistency matters more than precision early on. Use examples and clear triggers that move a system into a higher tier, then tighten the model over time based on audit findings and incidents.
What counts as a “material change” that should trigger re-tiering?
Changes that increase impact or uncertainty: new user populations, new use cases, higher automation, new sensitive data sources, model architecture changes, or a new third party model/data provider. Document the trigger in your control card. [2]
How do we keep teams from gaming the tiering outcome?
Separate “tier proposer” from “tier approver” for higher tiers, require rationale, and run periodic sampling in control health checks. Track mis-tiering as a governance issue with remediation and retraining. [2]
Do we need the same rigor for internal-only AI tools?
Not always. Internal tools can be Tier 1 if impact and data sensitivity are low. If an internal tool influences customer outcomes, handles sensitive data, or creates operational dependency, re-tier it and apply the matrix.
How should we handle third party AI services in tiering?
Treat the third party as part of the system. If you lack visibility into training data, security controls, or subprocessing, that uncertainty should push the system into a higher tier and require stronger due diligence and monitoring.
Footnotes
[1] NIST AI RMF 1.0, GOVERN 1.3.
[2] NIST AI RMF 1.0.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream