DE.CM-06: External service provider activities and services are monitored to find potentially adverse events

DE.CM-06 requires you to continuously monitor third-party activities and the services they deliver to detect potentially adverse events, then act on those signals through triage, escalation, and remediation. Operationalize it by defining what “adverse” means for each third party, instrumenting monitoring sources (technical and non-technical), and running a repeatable review cadence with documented outcomes. (NIST CSF 2.0)

Key takeaways:

  • Monitoring third parties is a detection control, not a one-time due diligence task; it must run throughout the relationship. (NIST CSF 2.0)
  • Scope must cover both service performance and third-party activities that touch your data, systems, or operations. (NIST CSF 2.0)
  • Auditable execution requires defined indicators, owners, review records, and tracked exceptions with closure evidence. (NIST CSF 2.0)

The DE.CM-06 requirement ("External service provider activities and services are monitored to find potentially adverse events") is where many third-party risk programs become real. Most teams can show onboarding due diligence, contracts, and a SOC report archive. Fewer can show ongoing monitoring that reliably detects issues early enough to reduce impact.

DE.CM-06 sits in the Detect function, Continuous Monitoring category. Practically, it means you treat third parties as part of your operating environment: you watch for security signals (suspicious access, failed controls, incident notifications), resilience signals (outages, degraded SLAs), and trust signals (ownership changes, sanctions exposure, adverse media) that could affect confidentiality, integrity, availability, safety, or compliance. (NIST CSF 2.0)

This page gives requirement-level implementation guidance you can hand to operations: what to monitor, how to set thresholds, who reviews results, what evidence to retain, and what auditors typically challenge. It also includes a pragmatic execution plan and field-tested pitfalls to avoid.

Requirement: DE.CM-06 (plain-English interpretation)

You must monitor the ongoing activities of external service providers and the services they deliver to identify potentially adverse events, then route those events into your incident, problem, and risk processes. Monitoring includes technical telemetry (where feasible), operational performance, and business signals that indicate elevated risk. (NIST CSF 2.0)

“Potentially adverse events” is intentionally broad. For implementation, define it as any third-party-related signal that could:

  • Indicate compromise or attempted compromise
  • Indicate a control failure that increases exposure (for example, MFA disabled in an admin console you rely on)
  • Indicate service instability that threatens critical operations
  • Trigger contractual, regulatory, or customer notification duties

Regulatory text

Framework excerpt: “External service provider activities and services are monitored to find potentially adverse events.” (NIST CSWP 29)

Operator interpretation: You need an operating loop that (1) identifies which third parties warrant monitoring, (2) defines the adverse events you care about for each, (3) collects signals continuously or on a defined cadence, and (4) documents triage decisions and follow-up actions. A folder of annual attestations does not meet the intent because it does not detect adverse events during the contract term. (NIST CSF 2.0)

Who it applies to (entity and operational context)

This applies to any organization that depends on third parties to operate systems, process data, deliver products, or maintain infrastructure, including critical infrastructure operators, organizations with formal cybersecurity programs, and service organizations supporting customers. (NIST CSF 2.0)

Operationally, DE.CM-06 becomes mandatory in practice when a third party:

  • Processes, stores, transmits, or can access your sensitive data
  • Hosts or operates systems that are part of your critical services
  • Has privileged access (admin tools, CI/CD, support consoles)
  • Provides security controls you rely on (identity, logging, endpoint, backups)
  • Provides operational dependencies (payment processing, communications, cloud)

What you actually need to do (step-by-step)

1) Define monitoring scope by third-party tier

Build (or refresh) your inventory of third parties, then assign a monitoring tier based on inherent risk and connectivity. Minimum fields to support DE.CM-06:

  • Third party name, service, business owner
  • Data types and access level
  • Integration points (SSO, APIs, network paths)
  • Criticality (supports a critical business service or not)

Output: a tiered monitoring register that tells ops which providers get which monitoring controls.
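One way to make the tiering rule explicit and testable is to encode it in code. The sketch below is a minimal illustration, not a prescribed schema: the field names (`access_level`, `supports_critical_service`, etc.) and the three-tier cutoffs are assumptions you would adapt to your own register and risk appetite.

```python
from dataclasses import dataclass, field

# Hypothetical register schema; adjust fields and tier rules to your program.
@dataclass
class ThirdParty:
    name: str
    service: str
    business_owner: str
    data_types: list = field(default_factory=list)       # e.g. ["customer_pii"]
    access_level: str = "none"                            # "none" | "standard" | "privileged"
    integration_points: list = field(default_factory=list)  # e.g. ["sso", "api"]
    supports_critical_service: bool = False

def assign_tier(tp: ThirdParty) -> int:
    """Tier 1 gets the deepest monitoring; tier 3 gets periodic checks only."""
    if tp.supports_critical_service or tp.access_level == "privileged":
        return 1
    if tp.data_types or tp.integration_points:
        return 2
    return 3
```

Keeping the rule in one function means the register can be re-tiered automatically whenever inventory fields change, and the rationale for each tier assignment is reviewable.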

2) Define “potentially adverse events” as measurable indicators

Translate the requirement into indicators that can be observed and reviewed. Good indicators have:

  • A clear description
  • A signal source
  • A threshold/trigger
  • An owner and an escalation path

Example indicator set (adapt to your environment):

  • Security signals: third-party incident notification received; suspicious admin activity in the provider console; unexpected changes to federation/SSO settings; abnormal API error patterns from the provider
  • Resilience signals: outage notifications; repeated SLA breaches; degraded performance impacting a critical service
  • Governance signals: expired SOC report or security attestation; refusal to provide required notifications; material subcontractor change without notice
  • External signals: adverse media relevant to breach, fraud, or operational failure; major ownership change that affects risk posture

Output: monitoring indicator catalog mapped to provider tiers.
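The indicator catalog can itself be structured data, which makes the tier mapping auditable. This is a sketch under assumed names: the `Indicator` fields mirror the four attributes above, and the two sample entries are illustrative, not a recommended minimum set.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    description: str
    signal_source: str
    trigger: str            # threshold or condition that fires the indicator
    owner: str
    escalation_path: str
    tiers: tuple            # provider tiers this indicator applies to

# Illustrative entries only; a real catalog would cover all four signal groups.
CATALOG = [
    Indicator("Provider incident notification received",
              "intake mailbox", "any notification received",
              "SOC", "CISO", tiers=(1, 2, 3)),
    Indicator("Repeated SLA breach on a critical service",
              "service health dashboard", ">= 2 breaches in 30 days",
              "Service management", "Business owner", tiers=(1,)),
]

def indicators_for_tier(tier: int) -> list:
    """Return the indicators a provider of the given tier must be monitored against."""
    return [i for i in CATALOG if tier in i.tiers]
```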

3) Implement signal collection (technical + operational + external)

Use multiple channels; no single feed catches everything.

Technical monitoring (where you have access):

  • Centralize logs from identity, network, endpoint, and cloud systems that reflect third-party activity (for example, vendor-admin accounts, API calls, SSO events).
  • Tag third-party identities and integrations so detections can be attributed.
  • Route detections into your ticketing/incident tooling with service owner context.

Operational monitoring:

  • Track uptime, latency, error rates, and support responsiveness for critical services.
  • Require providers to send outage and incident notifications to a monitored mailbox or intake queue.

External monitoring:

  • Watch for public reports that indicate compromise or significant operational disruption.
  • Track changes in the provider’s trust posture that affect your reliance (for example, loss of a key certification might be a signal, depending on your contractual requirements).

Output: documented monitoring sources per provider and an intake path into triage.
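Provider attribution (the tagging step above) can be as simple as a naming convention on vendor identities plus a filter in the log pipeline. The sketch below assumes a hypothetical `vendor:` prefix convention and a log event shape with an `actor` field; substitute whatever tagging scheme your identity platform supports.

```python
# Assumed convention: vendor-admin accounts are provisioned with a "vendor:" prefix.
VENDOR_IDENTITIES = {"vendor:acme-support", "vendor:cloudco-admin"}

def attribute_events(events: list) -> tuple:
    """Split raw log events into third-party-attributable and internal buckets,
    so detections can be routed with provider context."""
    third_party, internal = [], []
    for event in events:
        if event.get("actor") in VENDOR_IDENTITIES:
            third_party.append(event)
        else:
            internal.append(event)
    return third_party, internal
```

Without this split, alerts on vendor activity are indistinguishable from internal activity and cannot serve as evidence of third-party monitoring.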

4) Establish triage and escalation workflow

Define what happens when a signal fires:

  • Triage owner: usually the SOC for security signals; service management for uptime; TPRM/GRC for governance signals.
  • Decisioning: classify as incident, problem, risk issue, or noise; document rationale.
  • Escalation: define when to notify Legal, Privacy, Procurement, and business owners.
  • Containment options: revoke access, rotate secrets, disable integrations, shift traffic, invoke DR, or engage alternate providers.

Output: a runbook with routing rules and response playbooks for third-party events.
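The routing rules in the runbook can be captured directly, which keeps ownership unambiguous when a signal fires. This is a minimal sketch of the ownership split described above (SOC for security, service management for uptime, TPRM/GRC for governance); the signal-type labels and ticket fields are assumptions.

```python
# Routing rules per the runbook: each signal type has exactly one triage owner.
ROUTING = {
    "security": "SOC",
    "availability": "Service management",
    "governance": "TPRM/GRC",
}

def triage(signal_type: str, provider: str) -> dict:
    """Open a triage record with a routed owner; unknown types fail loudly
    rather than stalling between teams."""
    owner = ROUTING.get(signal_type)
    if owner is None:
        raise ValueError(f"Unknown signal type: {signal_type}")
    return {"provider": provider,
            "signal_type": signal_type,
            "triage_owner": owner,
            "status": "open"}
```

Failing loudly on unrouted signal types is deliberate: an unclassifiable third-party signal is itself a gap that should surface, not be silently dropped.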

5) Run periodic control performance reviews (and prove it)

DE.CM-06 is easy to “design” and hard to operate consistently. Set a review cadence by tier and document:

  • Monitoring coverage (what signals you collected, gaps, and why)
  • Events detected and how they were handled
  • Exceptions, risk acceptances, and remediation due dates
  • Management review and approvals where required

Output: repeatable review minutes and an exceptions log with closure evidence. (NIST CSF 2.0)

6) Close the loop with contracts and governance

Monitoring only works if the relationship supports it:

  • Ensure contracts require timely incident/outage notifications and cooperation.
  • Ensure you can audit or obtain evidence needed to validate the provider’s response.
  • Ensure subcontractor dependencies are disclosed where they create material risk.

Output: contract clause mapping to monitoring obligations and notification SLAs.

Required evidence and artifacts to retain

Auditors and customers will ask for objective proof of operation. Retain:

  • Third-party inventory with tiering and criticality rationale
  • Monitoring indicator catalog (signals, thresholds, owners, escalation paths)
  • Data flow / integration list showing where third parties touch systems and data
  • Runbooks for third-party security events and service disruptions
  • Sample monitoring outputs: alerts, dashboards, tickets, and triage notes
  • Periodic review records: agendas, minutes, sign-offs, and action items
  • Exceptions register: gaps, compensating controls, risk acceptance approvals, due dates, closure proof (NIST CSF 2.0)
  • Provider communications: incident notices, outage notices, RCA reports, remediation commitments

A practical tip: create an “evidence bundle” per review cycle so you can respond quickly without re-collecting artifacts. (NIST CSF 2.0)
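The evidence bundle can be a simple manifest that records which artifacts belong to which review cycle. The sketch below is one hypothetical shape (a JSON manifest keyed by artifact category); the field names and the idea of storing ticket IDs and document references rather than copies are assumptions to adapt.

```python
import json
from datetime import date

def build_evidence_bundle(cycle: str, artifacts: dict) -> str:
    """Serialize a per-cycle evidence manifest listing the IDs/paths of
    retained artifacts (alerts, tickets, minutes, exception entries)."""
    manifest = {
        "review_cycle": cycle,
        "generated": date.today().isoformat(),
        "artifacts": artifacts,
    }
    return json.dumps(manifest, indent=2)
```

Generating the manifest at the close of each review cycle means an audit request becomes a lookup, not a re-collection exercise.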

Common exam/audit questions and hangups

Expect questions like:

  • “Show me how you detect third-party-related events, not just internal events.”
  • “Which providers are in scope, and why are lower-tier providers excluded?”
  • “How do you know monitoring is working? Show a test, a past event, or a review record.”
  • “What happens if a provider does not notify you? What compensating signals exist?”
  • “How do you monitor fourth parties (subcontractors) that support your critical providers?”

Hangups that trigger findings:

  • No link between provider tiering and monitoring depth
  • Alerts exist, but there is no documented triage decision or closure
  • Reviews happen informally with no minutes, no exceptions log, and no ownership

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: equating annual due diligence with monitoring.
    Fix: separate “assessment artifacts” (SOC reports, questionnaires) from “monitoring signals” (alerts, outages, tickets) and show both.

  2. Mistake: no provider attribution in logs.
    Fix: tag third-party identities, API keys, and integrations; otherwise detections cannot support third-party monitoring.

  3. Mistake: monitoring only cybersecurity, ignoring service health.
    Fix: include availability/performance triggers for providers that support critical services, not just security detections.

  4. Mistake: alerts with no governance.
    Fix: assign owners, set escalation thresholds, and require periodic review with documented outcomes. (NIST CSF 2.0)

  5. Mistake: no exception discipline.
    Fix: if you cannot technically monitor a provider, log the gap, add compensating controls (contractual notices, access restrictions, segmentation), assign a due date, and track to closure.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Treat DE.CM-06 as a control expectation driven by framework alignment and by downstream obligations in contracts, customer security reviews, and regulator examinations that evaluate third-party oversight maturity. (NIST CSF 2.0)

Risk-wise, weak third-party monitoring increases:

  • Detection delay for provider breaches and outages
  • Customer impact due to unplanned downtime or data exposure
  • Inability to prove “reasonable oversight” during audits or post-incident reviews

Practical execution plan (30/60/90 days)

You can execute DE.CM-06 quickly without boiling the ocean.

First 30 days (stabilize and define)

  • Confirm the in-scope third-party list and tiering for critical providers.
  • Write the adverse event definitions and indicator catalog for top-tier providers.
  • Stand up an intake path (mailbox/queue) for provider incident/outage notifications.
  • Draft the triage runbook and assign primary/backup owners.

Next 60 days (instrument and operationalize)

  • Implement provider attribution in identity and access logs (where feasible).
  • Connect monitoring sources to ticketing; define required ticket fields (provider, service, severity, action taken).
  • Start operational service health monitoring for critical providers.
  • Run the first control performance review; open exceptions with owners and due dates. (NIST CSF 2.0)

By 90 days (prove repeatability)

  • Demonstrate at least one complete monitoring-to-review cycle with evidence bundles.
  • Test escalation paths with a tabletop exercise based on a provider outage or incident notification.
  • Update contracts or renewal addenda for missing notification/cooperation clauses.
  • Present metrics and exceptions to management for review and decisions. (NIST CSF 2.0)

If you need to systematize this, Daydream can help you maintain a tiered monitoring register, attach evidence bundles to each review cycle, and track third-party monitoring exceptions to closure without losing the audit trail.

Frequently Asked Questions

Does DE.CM-06 require continuous (real-time) monitoring for every third party?

No. The requirement is monitoring to detect adverse events; the intensity should match risk and connectivity. Use continuous signals for critical/high-connectivity providers, and periodic checks for lower-risk providers. (NIST CSF 2.0)

What counts as an “external service provider” under DE.CM-06?

Treat any third party delivering a service your organization depends on as in scope, including cloud providers, SaaS, MSPs, payment processors, and contractors with privileged access. Scope decisions should be documented and risk-based. (NIST CSF 2.0)

We can’t get logs from a SaaS provider. How do we meet the requirement?

Monitor what you can observe from your side (SSO events, API activity, admin changes, service status) and enforce contractual notification and cooperation duties. Track the logging limitation as an exception with compensating controls and a review cadence. (NIST CSF 2.0)

How do we show evidence that monitoring is “working”?

Keep alerts/tickets with triage notes, escalation records, and closure proof, plus periodic review minutes that show coverage, exceptions, and decisions. Auditors accept “no events” periods if you can show monitoring ran and was reviewed. (NIST CSF 2.0)

Should TPRM/GRC or the SOC own DE.CM-06?

Split ownership: the SOC (or security operations) typically owns detection and incident triage, while TPRM/GRC owns provider tiering, monitoring requirements, and exception governance. Document the RACI so events do not stall between teams.

How do we handle fourth parties (our providers’ subcontractors)?

Focus on material subcontractors that affect your critical services. Require disclosure where possible, monitor provider-level signals that reflect subcontractor impact (outages, incidents), and treat missing visibility as a documented risk with contract and resilience mitigations.

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream