GOVERN-6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party’s intellectual property or other rights.

To meet GOVERN-6.1, you need written third-party risk policies and operating procedures that specifically cover AI-related risks, including intellectual property (IP) and other rights infringement by or through third parties. Operationalize it by scoping all AI-involved third parties, adding AI/IP due diligence and contract controls, and collecting recurring evidence that the process runs. 1

Key takeaways:

  • Treat “AI third parties” as a defined population and apply gated onboarding plus periodic reviews. 1
  • Add explicit IP/rights safeguards to due diligence, contracting, and ongoing monitoring for AI services and data sharing. 1
  • Map the requirement to owners, procedures, and evidence so you can prove design and operation in audits. 1

GOVERN-6.1 is a governance requirement from the NIST AI Risk Management Framework: have policies and procedures that address AI risks introduced by third-party entities, including risk of infringing a third party’s IP or other rights. 1 For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat this as a third-party due diligence and contracting uplift specifically for AI.

This requirement usually fails in practice for one reason: organizations already have third-party risk management (TPRM), but it is not “AI-aware.” The result is a gap where AI vendors, model providers, data brokers, labeling shops, consultants, and even cloud features with embedded AI get onboarded under generic security questionnaires without addressing AI-specific rights, training data provenance, output ownership, usage restrictions, and infringement response workflows.

Your goal is not to write a long policy. Your goal is to implement an auditable operating model: define the AI third-party population, assess risk before use, contract for rights and guardrails, monitor for change, and retain evidence. This page gives you requirement-level guidance you can copy into your control library and run with.

Regulatory text

Text (excerpt): “Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party’s intellectual property or other rights.” 1

What the operator must do:
You must (1) document expectations (policy) and (2) run an operational process (procedures) that identifies and manages AI risks from third parties. Your process must explicitly cover IP/rights infringement risks, not only security and privacy. Evidence must show the policy exists, the procedure exists, and the procedure is followed for relevant third parties. 1

Plain-English interpretation (what this means in practice)

If a third party touches your AI lifecycle, you are responsible for managing the legal, compliance, and operational fallout if their AI infringes someone else’s rights or causes your organization to infringe. That includes:

  • A third party’s model was trained on data without appropriate rights, and your use triggers a claim.
  • Your employees send copyrighted customer content into a third-party AI tool in a way that violates licenses or terms.
  • Outputs generated through a third party reproduce protected content, trademarks, or trade secrets.
  • A third party claims rights over your prompts, inputs, or outputs in a way that conflicts with your obligations to customers.

GOVERN-6.1 expects you to have written rules and a repeatable workflow to prevent, detect, and respond to these scenarios for third parties. 1

Who it applies to

Entity scope: Any organization developing, deploying, or operating AI systems, or using third-party AI capabilities within products, internal operations, or customer workflows. 1

Operational scope (common third-party types):

  • Model/API providers (LLM, vision, speech)
  • AI-enabled SaaS (support chat, sales enablement, HR screening, code assistants)
  • Data providers (data brokers, synthetic data vendors, web-scraped datasets)
  • Labeling/annotation firms
  • Systems integrators/consultants building or tuning models
  • Cloud platforms where AI features are enabled by configuration

If a third party processes your data for AI purposes, supplies training data, provides model access, or influences model behavior, include them in scope.

What you actually need to do (step-by-step)

Step 1: Define “AI third party” and build the in-scope inventory

  1. Create an AI third-party definition (one paragraph) that triggers this control: any third party that provides AI models, AI features, AI development services, AI training data, or processes data for AI purposes.
  2. Identify the population by querying:
    • Procurement/AP records for AI tools and “productivity assistants”
    • Security CASB/SaaS discovery for unsanctioned AI tools
    • Engineering dependency lists (model SDKs, hosted inference, vector DB services)
  3. Tag each third party with: use case, data types shared, customer impact, and whether outputs go to customers.

Deliverable: an “AI Third-Party Register” linked to your standard third-party inventory.
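A minimal register entry can be sketched as a structured record. This is an illustrative schema, not one prescribed by GOVERN-6.1; the field names and the example vendor are assumptions:

```python
from dataclasses import dataclass

@dataclass
class AIThirdParty:
    """One row in the AI Third-Party Register (illustrative schema)."""
    name: str
    use_case: str
    data_types_shared: list        # e.g. ["customer PII", "source code"]
    customer_facing_outputs: bool  # do outputs go to customers?
    can_train_on_inputs: bool      # can the provider train on our inputs?
    risk_tier: str = "unassessed"  # low / moderate / high once triaged

register = [
    AIThirdParty(
        name="ExampleModelCo",     # hypothetical vendor
        use_case="support-chat drafting",
        data_types_shared=["customer PII"],
        customer_facing_outputs=True,
        can_train_on_inputs=False,
    )
]

# Surface entries that still need triage
unassessed = [t.name for t in register if t.risk_tier == "unassessed"]
```

Linking this structure to your standard third-party inventory (rather than keeping a parallel spreadsheet) keeps the population auditable from one place.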

Step 2: Publish a policy that sets minimum requirements for AI third parties

Your policy should be short and testable. Include “must” statements:

  • Due diligence required before use for any AI third party in the register. 1
  • IP/rights risk must be assessed for training data provenance (where relevant), output rights, and usage restrictions. 1
  • Contracting must include rights and restrictions for inputs/outputs and infringement handling.
  • Ongoing monitoring for material changes (model updates, sub-processors, terms updates).
  • Exception process with approvals (Legal + Security + Business owner).

Tip that helps in audits: add a simple RACI in the policy: Business owner, TPRM, Legal/IP counsel, Security, Privacy, AI governance.

Step 3: Write procedures that turn the policy into a workflow

Procedures should be implementable by an analyst. Include:

A. Intake and triage

  • Intake form fields: use case, data types, whether outputs are customer-facing, and whether the third party can train on your inputs.
  • Risk tiering guidance (example tiers): low (no sensitive data, internal-only), moderate (sensitive data or customer impact), high (regulated data, automated decisions, customer-facing generation).
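The example tiers above can be expressed as a simple triage rule so analysts apply them consistently. The data-type categories and thresholds below are assumptions for illustration, not a prescribed taxonomy:

```python
def assign_risk_tier(data_types, customer_facing_generation=False,
                     automated_decisions=False, customer_impact=False):
    """Map intake answers to the example tiers from the procedure.

    Illustrative tiers: low = no sensitive data, internal-only;
    moderate = sensitive data or customer impact;
    high = regulated data, automated decisions, or customer-facing generation.
    """
    REGULATED = {"PHI", "PCI", "biometric"}      # assumed regulated categories
    SENSITIVE = {"customer PII", "source code"}  # assumed sensitive categories

    if (automated_decisions or customer_facing_generation
            or REGULATED & set(data_types)):
        return "high"
    if customer_impact or SENSITIVE & set(data_types):
        return "moderate"
    return "low"
```

Codifying the rule this way also gives auditors a testable artifact: the same intake answers always produce the same tier.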

B. AI/IP due diligence checklist (minimum)

Cover questions that operationalize “other rights” beyond copyright:

  • Training data and model provenance (what the provider will disclose; contractual commitments where disclosure is limited).
  • Input/output rights: who owns prompts, inputs, fine-tunes, embeddings, and outputs; what licenses are granted.
  • Restrictions: prohibited content, geographic constraints, brand/trademark use, and re-distribution limits.
  • Indemnities and limitations: whether infringement claims are covered and under what conditions (e.g., if you follow acceptable use).
  • Sub-processors: whether other model providers are used downstream.
  • Content filters and safety controls relevant to rights (e.g., prevention of regurgitation of copyrighted text).

C. Required contract clauses (playbook)

Build a clause library and require Legal review for deviations:

  • Data use: “no training on our inputs” unless approved, plus retention limits.
  • Confidentiality and trade secret protections for prompts and internal content.
  • IP ownership of outputs and permitted uses.
  • Infringement cooperation: notice, takedown, investigation support, audit rights where feasible.
  • Flow-downs to sub-processors.
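A clause playbook can be maintained as structured data so missing clauses in a draft contract are flagged for Legal automatically. The clause identifiers below are illustrative labels for the playbook items above:

```python
REQUIRED_CLAUSES = {  # illustrative ids mirroring the playbook above
    "no_training_on_inputs",
    "retention_limits",
    "confidentiality",
    "output_ownership",
    "infringement_cooperation",
    "subprocessor_flowdown",
}

def clauses_needing_legal_review(contract_clauses):
    """Return required clauses missing from a draft contract.

    Any miss is a deviation that routes to Legal per the playbook.
    """
    return sorted(REQUIRED_CLAUSES - set(contract_clauses))
```

A structured library like this also makes the “must-escalate” list (Days 31-60) a mechanical check rather than a judgment call.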

D. Go-live gates

  • Risk acceptance sign-off by accountable owner
  • Security/privacy review completed
  • Legal/IP review completed for moderate/high tiers
  • Documented user guidance (what employees can/can’t put into the tool)
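The go-live gates can be enforced as a pre-launch checklist check; gate names here are assumptions mapped to the bullets above:

```python
def go_live_blockers(record, risk_tier):
    """Return the list of unmet gates; an empty list means the tool may launch."""
    required = [
        "risk_acceptance_signed",     # accountable owner sign-off
        "security_privacy_review",    # Security/privacy review completed
        "user_guidance_published",    # documented employee usage guidance
    ]
    if risk_tier in ("moderate", "high"):
        required.append("legal_ip_review")  # Legal/IP review gates moderate/high tiers
    return [gate for gate in required if not record.get(gate)]
```

Because the function returns the specific unmet gates, the same check doubles as the punch list handed back to the business owner.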

Step 4: Operationalize ongoing monitoring (procedural evidence matters)

  • Change monitoring: track updates to terms, sub-processors, and model versions that affect rights.
  • Periodic reassessment triggers: new use case, new data type, customer-facing expansion, or incident.
  • Incident playbook integration: if a rights claim occurs, route to Legal and the AI governance function, preserve prompts/outputs, and notify the third party per contract.
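Trigger-based reassessment can be sketched as a check over monitored change events; the event names are illustrative labels for the triggers listed above:

```python
TRIGGER_EVENTS = {  # illustrative reassessment triggers from the procedure
    "new_use_case",
    "new_data_type",
    "customer_facing_expansion",
    "incident",
    "terms_change",
    "subprocessor_change",
    "model_version_change",
}

def needs_reassessment(observed_events):
    """True if any observed event for a third party matches a trigger."""
    return bool(TRIGGER_EVENTS & set(observed_events))
```

Feeding this check from your terms-change and sub-processor monitoring log produces the recurring operational evidence auditors ask for.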

Step 5: Map control ownership and recurring evidence

Auditors will ask: “Show me it runs.” Assign owners for:

  • Policy maintenance
  • Third-party intake triage
  • Contract review and clause enforcement
  • Ongoing monitoring and reassessments

If you use Daydream, treat this as a control mapped to an owner, tasks, and a recurring evidence request schedule so evidence is collected continuously rather than rebuilt during audit season. 1

Required evidence and artifacts to retain

Keep artifacts that prove both design (policy/procedure exists) and operation (it was followed):

Design evidence

  • Approved AI Third-Party Risk Policy (versioned, dated, approver)
  • Procedure/work instruction (intake, due diligence, contracting, monitoring)
  • Clause library or contracting standards for AI/IP/rights

Operational evidence 1

  • Completed intake record and risk tier
  • Due diligence questionnaire/results, including AI/IP/rights section
  • Legal review notes and contract redlines
  • Executed agreement and DPAs/addenda relevant to AI use
  • Exception approvals (if any) with compensating controls
  • Monitoring logs: terms-change review, reassessment record
  • Training/communications to users about allowed/prohibited use for the tool
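Before audit season, each third party's end-to-end file can be checked for completeness against the operational-evidence list above; the artifact keys are assumptions mirroring that list:

```python
OPERATIONAL_ARTIFACTS = [  # illustrative keys for the evidence list above
    "intake_record",
    "due_diligence_results",
    "legal_review_notes",
    "executed_agreement",
    "monitoring_log",
]

def missing_evidence(file_index):
    """Return operational artifacts absent from a third party's evidence file.

    file_index maps artifact key -> document reference.
    """
    return [a for a in OPERATIONAL_ARTIFACTS if a not in file_index]
```

Running this per high-risk third party turns “show me it runs” into sampling rather than archaeology.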

Common exam/audit questions and hangups

Expect these questions from internal audit, regulators, or customers:

  1. “How do you define an AI third party?” If you cannot define and inventory them, everything else looks ad hoc.
  2. “Show me where IP/rights are assessed.” A generic security review will not satisfy the “infringement” portion. 1
  3. “Do contracts prevent training on your data?” Many providers default to broad rights unless you negotiate or configure.
  4. “What happens when the provider changes terms or model behavior?” Monitoring is usually weak; have a trigger-based reassessment process.
  5. “Who owns outputs, and can you use them commercially?” Your answer must match contract terms and product promises.

Frequent implementation mistakes and how to avoid them

  • Mistake: Treating this as a one-time policy publish. Fix: require evidence per onboarding and per reassessment.
  • Mistake: Scoping only “AI vendors” procured by IT. Fix: include AI-enabled SaaS and embedded AI features already in your stack.
  • Mistake: Skipping Legal review for “free” tools. Fix: disallow unapproved AI tools for company data; route exceptions through approvals.
  • Mistake: Focusing only on copyright. Fix: include trademarks, confidentiality, trade secrets, publicity rights, and contractual rights (“other rights”) in the checklist.
  • Mistake: No playbook for a claim. Fix: define who gets paged, what gets preserved, and how you coordinate with the third party.

Enforcement context and risk implications (what’s at stake)

NIST AI RMF is a framework, not a penalty schedule. Your risk is indirect but real:

  • Customer audits and procurement friction when you cannot demonstrate AI third-party governance.
  • Contract disputes over output ownership, sublicensing, or training rights.
  • IP infringement claims that force product changes, output suppression, or customer notifications.
  • Brand and trust damage if third-party AI behavior is inconsistent with your policies.

Treat GOVERN-6.1 as an audit-readiness and liability-containment control: your paper trail matters because it shows you had guardrails and followed them. 1

Practical 30/60/90-day execution plan

First 30 days (stabilize and stop new risk)

  • Name owners: TPRM lead, Legal/IP reviewer, Security reviewer, business accountable owner per tool.
  • Create the AI third-party definition and start the register.
  • Implement a temporary gate: no new AI third parties without intake + Legal review.

Days 31–60 (institutionalize the workflow)

  • Publish the AI Third-Party Risk Policy and procedures with approval.
  • Add the AI/IP/rights checklist to your third-party due diligence package.
  • Build the contract clause playbook and a “must-escalate” list for Legal.

Days 61–90 (prove operation and close legacy gaps)

  • Retroactively assess existing AI third parties in priority order (customer-facing first).
  • Put monitoring in place for terms/sub-processor/model changes.
  • Run one tabletop exercise for an IP/rights claim scenario and capture lessons learned as procedure updates.

Daydream fit: set GOVERN-6.1 as a control with a clear owner, link it to your AI third-party register, and automate evidence requests (intake record, due diligence, contract, reassessment) so audits become sampling, not archaeology. 1

Frequently Asked Questions

Do we need a separate “AI vendor policy,” or can we update TPRM?

Updating TPRM is fine if the policy and procedures explicitly address AI third-party risks and IP/rights infringement. Auditors care that the requirement is covered and operating, not whether the document is standalone. 1

What counts as “other rights” besides IP?

Treat it broadly: confidentiality, trade secrets, contract/license restrictions, publicity rights, and trademark/brand misuse. Capture these in your due diligence and contracting checklist so reviews are consistent. 1

If the provider won’t disclose training data sources, can we still onboard?

Potentially, but document the limitation, require contractual commitments (warranties/indemnities where available), and implement compensating controls like restricted use cases and no-training-on-inputs terms. Your file should show a reasoned risk decision. 1

How do we handle embedded AI in tools we already buy (CRM, ticketing, email)?

Treat the software provider as an AI third party for the AI feature scope. Reassess based on the feature’s data access, whether it trains on your content, and whether outputs are customer-facing.

What evidence will auditors sample first?

They typically sample a high-risk AI third party and ask for the end-to-end file: intake, AI/IP due diligence, Legal review, executed contract terms, and proof of ongoing monitoring. If any link is missing, they will question control operation. 1

Who should own GOVERN-6.1 day-to-day?

Assign operational ownership to TPRM or GRC, with Legal as the accountable reviewer for IP/rights terms and Security/Privacy as required reviewers. Document the RACI so escalation paths are unambiguous. 1

Footnotes

  1. NIST AI RMF Core

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream