Monitoring and Review of Third Party Services
To meet the monitoring and review of third-party services requirement, you must run an ongoing oversight program that regularly reviews third-party services, reports, and records; evaluates performance against contracted service levels and security requirements; and carries out audits on a defined schedule. Your goal is provable, repeatable monitoring that produces decisions, remediation, and documentation.
Key takeaways:
- Tie monitoring directly to contract terms: SLAs plus security requirements, not generic scorecards.
- Define “regularly” as a risk-based cadence and document it by third party and service.
- Keep evidence that shows you reviewed outputs, followed up findings, and enforced contract obligations.
“Monitoring and review of third party services” is the part of third-party risk management that fails most often in practice because it is operational, not theoretical. You can complete onboarding due diligence and still be out of compliance if you do not monitor what the third party actually delivers over time: service performance, security posture, incident reporting, and audit results.
HITRUST CSF v11 control 09.f requires ongoing oversight across three things third parties produce for you: (1) services (the actual operational delivery), (2) reports (attestations, performance reports, incident notifications, vulnerability summaries, etc.), and (3) records (logs, tickets, access records, evidence requested by your auditors). It also requires that audits occur regularly and that you evaluate performance against the service levels and security requirements you contracted for.
Operationalizing this well means you set explicit monitoring triggers, define who reviews what, and create an escalation path when a third party misses an SLA or fails a security obligation. You also retain enough evidence to prove the monitoring happened and that you acted on what you learned.
Regulatory text
HITRUST CSF v11 09.f states: “Services, reports, and records provided by a third party shall be regularly monitored and reviewed, and audits shall be carried out regularly. Third-party service performance shall be evaluated against contracted service levels and security requirements.” 1
What an operator must do:
- Establish an ongoing monitoring and review process for each in-scope third party and service.
- Define a “regular” cadence (risk-based) for reviewing service delivery, third-party reports, and third-party records.
- Conduct audits on a regular schedule (your audits, independent audits, or contractual audit rights exercised).
- Compare results to what you contracted for: SLAs and security requirements, then drive remediation, enforcement, or offboarding.
Plain-English interpretation (requirement-level)
You need a living oversight loop. A third party’s controls and service levels can drift after onboarding, so your program must (1) collect signals, (2) review them, (3) document the review, and (4) take action when expectations are not met. “Regularly” is not a vibe; it is a defined cadence with named owners and retained evidence.
The “evaluate against contracted service levels and security requirements” clause is the anchor. If your contracts do not define measurable service levels and specific security obligations, you will struggle to show compliance. Tight contracts make monitoring objective and auditable.
Who it applies to
Entity types: All organizations 1
Operational scope (what to include):
- Third parties that store, process, transmit, administer, or can access sensitive systems or data.
- Third parties that provide operationally critical services (availability, call center, claims processing, EHR hosting, managed IT, cloud services, payment processing).
- Subcontractors/4th parties when your third party’s performance depends on them and your contract or risk model treats them as relevant.
Owners (typical):
- Third-party risk (program design, cadence, evidence standards)
- Service owner / business owner (operational performance review, issue ownership)
- Security / privacy (security requirement monitoring, incident and vulnerability follow-up)
- Procurement / legal (contractual enforcement, audit rights, renewal decisions)
What you actually need to do (step-by-step)
1) Inventory third-party services and map them to contracts
Create a service-level inventory that answers:
- What service is provided (not just the third party name).
- Which systems/data the service touches.
- Which contract governs it, including SLAs and security exhibits.
Practical tip: If you cannot point to the clause that defines an SLA or security requirement, you cannot “evaluate against” it. Fix the contract or add an addendum before you pretend monitoring is meaningful.
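As a sketch, the service-level inventory in step 1 can be modeled as structured records. The field names and the `monitorable` check below are assumptions for illustration, not HITRUST terminology; the point is that a service without a contract clause to point to cannot be "evaluated against" anything.

```python
from dataclasses import dataclass, field

@dataclass
class ThirdPartyService:
    """One row of the service-level inventory: a service, not just a vendor name."""
    vendor: str
    service: str                                          # what is actually delivered
    systems_touched: list = field(default_factory=list)   # systems/data the service touches
    contract_id: str = ""                                 # governing contract or addendum
    sla_clauses: list = field(default_factory=list)       # e.g. ["3.1 uptime 99.9%"]
    security_clauses: list = field(default_factory=list)  # e.g. ["Exh B encryption"]

    def monitorable(self) -> bool:
        # You can only "evaluate against" obligations you can point to in the contract.
        return bool(self.contract_id and (self.sla_clauses or self.security_clauses))

# Hypothetical inventory entries for illustration.
inventory = [
    ThirdPartyService("Acme Hosting", "EHR hosting", ["EHR prod"], "MSA-2024-07",
                      sla_clauses=["3.1 uptime 99.9%"],
                      security_clauses=["Exh B encryption"]),
    ThirdPartyService("HelpDeskCo", "Tier-1 support", ["ticketing"], contract_id="SOW-12"),
]

# Services you cannot yet monitor meaningfully: fix the contract first.
gaps = [s for s in inventory if not s.monitorable()]
```

Running a gap query like this before building dashboards surfaces exactly the "fix the contract or add an addendum" cases the tip above describes.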
2) Define “regular” monitoring cadences by risk tier
Set monitoring frequency by risk tier and service criticality. Document the rationale and apply it consistently. Your cadence should cover:
- Service monitoring: uptime, response times, support/ticket performance, backup/restore tests (if applicable).
- Security monitoring: required security reports (SOC reports, penetration test summaries if contractually required), security incidents, vulnerability management communications, access review confirmations.
- Record review: evidence the third party provides per contract (logs, change records, DR test results, compliance reports).
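A risk-based cadence can be encoded as a simple lookup so it is applied consistently and is easy to evidence. The tiers and day counts below are illustrative assumptions; your program would set and document its own.

```python
from datetime import date, timedelta

# Illustrative risk-based cadences; the tier names and intervals are
# assumptions you would define (and justify) in your monitoring plan.
REVIEW_CADENCE_DAYS = {
    "critical": 30,   # monthly service + security review
    "high": 90,       # quarterly
    "moderate": 180,  # semi-annual
    "low": 365,       # annual
}

def next_review_due(tier: str, last_review: date) -> date:
    """Next review date for a third party, based on its documented risk tier."""
    return last_review + timedelta(days=REVIEW_CADENCE_DAYS[tier])

def overdue(tier: str, last_review: date, today: date) -> bool:
    """Flag a third party whose review cycle has lapsed."""
    return today > next_review_due(tier, last_review)
```

A nightly job over the inventory that flags `overdue` entries turns "regularly" from a policy word into an enforceable schedule.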
3) Create a monitoring checklist that ties directly to contract terms
Build a checklist per service with:
- SLA metrics and thresholds pulled from the contract.
- Security requirements pulled from the security addendum (encryption, access controls, incident notification timelines, subcontractor controls, data deletion/return, etc.).
- Required reporting artifacts and their expected delivery schedule.
This becomes your “control-to-contract” traceability. Auditors want to see that monitoring criteria came from signed obligations, not ad hoc expectations.
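The control-to-contract traceability described above can be checked mechanically: every contracted obligation should map to at least one monitoring check. The clause names and checks below are hypothetical.

```python
# Hypothetical obligations matrix: contract clause -> monitoring check(s).
# Any clause with no check is a traceability gap an auditor would flag.
obligations = {
    "SLA 3.1: 99.9% uptime": ["monthly uptime report review"],
    "Sec Exh B.2: encryption at rest": ["annual SOC report review", "evidence request"],
    "Sec Exh B.5: 24h incident notification": [],   # no check defined yet -> gap
}

def traceability_gaps(matrix: dict) -> list:
    """Contract obligations that no monitoring check covers."""
    return [clause for clause, checks in matrix.items() if not checks]
```

Reviewing the output of `traceability_gaps` each cycle keeps the checklist anchored to signed obligations rather than ad hoc expectations.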
4) Establish review workflow, not just data collection
Define:
- Reviewer: role accountable for the review (service owner, security, TPRM).
- Evidence: what gets saved for each review cycle.
- Escalation: when SLA misses or security issues trigger a formal corrective action plan (CAP), service credits, or executive escalation.
- Decision points: renewal approval gates, scope changes, and risk acceptance process.
A common failure mode is collecting SOC reports and uptime dashboards but never documenting that someone reviewed them and decided what to do next.
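That failure mode can be made visible with a minimal review-record structure: an artifact is not "reviewed" until a named reviewer has recorded a decision. The field names and decision values here are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewRecord:
    """Evidence that a human reviewed an artifact and decided something."""
    artifact: str                    # e.g. "Q2 uptime report"
    reviewer: str                    # accountable role, not a team alias
    decision: Optional[str] = None   # e.g. "accept" | "escalate" | "open CAP"
    notes: str = ""

    def complete(self) -> bool:
        # Collecting the artifact is not enough; there must be a signed-off decision.
        return bool(self.reviewer and self.decision)

# One review cycle for a hypothetical third party.
cycle = [
    ReviewRecord("Q2 uptime report", "service owner", decision="accept"),
    ReviewRecord("Q2 SOC report", "security"),   # collected, never reviewed
]
unreviewed = [r.artifact for r in cycle if not r.complete()]
```

Gating cycle closure on an empty `unreviewed` list is one way to force the sign-off step the workflow defines.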
5) “Audits carried out regularly”: pick an audit strategy and document it
HITRUST 09.f expects audits to occur regularly 1. Operationalize this with one or more of the following, based on contract rights and risk:
- Independent assurance review: obtain and review independent audit reports the third party already produces, then document your review and follow-up.
- Right-to-audit exercise: conduct or commission an audit under your contractual audit clause for higher-risk services.
- Targeted control testing: test specific controls that matter to you (access administration, logging, incident response coordination, data deletion).
Document the audit schedule rationale and how findings are tracked to closure.
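One way to document that rationale is to fix the audit mix per tier in a single place. The mapping below is illustrative, not a HITRUST-prescribed strategy.

```python
# Illustrative audit strategy by risk tier; the mix is your documented
# rationale, not a mandated mapping.
AUDIT_STRATEGY = {
    "critical": ["independent assurance review", "right-to-audit exercise",
                 "targeted control testing"],
    "high": ["independent assurance review", "targeted control testing"],
    "moderate": ["independent assurance review"],
    "low": ["independent assurance review"],
}

def audit_plan(tier: str) -> list:
    """Audit activities scheduled for a third party at this tier."""
    return AUDIT_STRATEGY[tier]
```

Keeping the mapping explicit makes it easy to answer "why only independent reports for this vendor?" with a documented, tier-based rule.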
6) Track issues to closure with contractual enforcement options
Create a single issue register per third party (or per service) that includes:
- SLA misses (date, metric, impact, root cause, remediation)
- Security findings (from audits, incidents, vulnerability notifications)
- Contractual obligations not met (late reports, incomplete evidence, missed DR tests)
Then decide and document the consequence:
- CAP with due dates and owner
- service credits or other remedies (if in contract)
- scope reduction, compensating controls, or exit plan
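The "decide and document the consequence" step works best when triggers are predefined rather than argued case by case. A minimal sketch, with illustrative severity and repeat-count thresholds:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    """One issue-register entry: an SLA miss, security finding, or missed obligation."""
    kind: str          # e.g. "sla_miss" | "security_finding" | "missed_obligation"
    severity: str      # "low" | "medium" | "high"
    repeat_count: int  # how many cycles this issue has recurred
    status: str = "open"

def consequence(issue: Issue) -> str:
    """Predefined, documented consequences; the thresholds here are assumptions."""
    if issue.severity == "high" or issue.repeat_count >= 3:
        return "executive escalation / exit planning"
    if issue.severity == "medium" or issue.repeat_count == 2:
        return "formal CAP with due dates"
    return "track in register; review next cycle"
```

Because the rule is written down, the register shows not just what went wrong but why a given enforcement path was chosen.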
7) Feed monitoring results into governance
Make monitoring outputs actionable:
- Quarterly (or scheduled) vendor/third-party review meetings with service owners
- Renewal decisions that reference monitoring evidence
- Risk acceptance approvals when you knowingly keep a third party with open issues
If you use Daydream or a similar GRC system, configure it so each third party has a monitoring plan (cadence + required artifacts), automated evidence requests, and a workflow that forces review sign-off and issue closure before renewal.
Required evidence and artifacts to retain
Keep artifacts that prove three things: (1) you monitored, (2) you reviewed, (3) you acted.
Minimum evidence set (tailor by risk tier):
- Monitoring plan per third party/service (cadence, owners, required inputs).
- Contract excerpts or obligations matrix mapping SLAs and security requirements to monitoring checks.
- Service performance reports (dashboards, monthly reports, ticket metrics) plus documented review notes/approvals.
- Security artifacts required by contract (attestations/reports) plus your review record and follow-up questions.
- Audit artifacts: audit schedule, scope, results, and remediation tracking.
- Issue register and CAPs, including closure evidence.
- Governance artifacts: meeting minutes, renewal memos, risk acceptances, escalation emails.
Common exam/audit questions and hangups
Expect questions like:
- “Show me how you define ‘regular’ monitoring for critical third parties.”
- “Where in the contract are the SLAs and security requirements you monitor against?”
- “Who reviewed the third party’s reports, and what did they decide?”
- “Show evidence of audits carried out regularly and your follow-up on findings.”
- “How do monitoring results affect renewals and continued use decisions?”
Hangups auditors often flag:
- Monitoring exists, but it is informal and not documented.
- The organization reviews “reports” but ignores actual service delivery metrics or incident trends.
- No proof of remediation follow-up; issues sit open with no decision trail.
- Contracts lack measurable SLAs or security obligations, so evaluation is subjective.
Frequent implementation mistakes (and how to avoid them)
- Mistake: treating onboarding due diligence as “monitoring.”
  Fix: separate “initial due diligence” from “ongoing monitoring” with distinct workflows and evidence.
- Mistake: collecting SOC reports without a review memo.
  Fix: require a short review record: scope coverage, exceptions noted, questions asked, risk decision, and tickets created.
- Mistake: monitoring SLAs but not security requirements.
  Fix: build a requirements matrix that includes security addendum obligations and test them with evidence requests.
- Mistake: no documented audit approach.
  Fix: define your audit strategy by tier. Document why you rely on independent reports for some third parties and why you perform deeper audits for others.
- Mistake: monitoring does not drive enforcement.
  Fix: predefine triggers for CAPs, executive escalation, service credits, and exit planning. Then use them.
Risk implications (why regulators and auditors care)
This control is about drift and detection. Third parties change: staffing, subcontractors, architectures, and controls. If you do not monitor delivery and security performance, you will find out during an incident, an outage, or an audit. HITRUST’s framing puts services, reports, and records all in scope, so a narrow “SOC report once a year” approach rarely satisfies the intent 1.
Practical execution plan (30/60/90)
First 30 days (stabilize and define)
- Identify in-scope third parties by data access and criticality.
- Pull contracts and extract SLAs and security requirements into a single obligations matrix.
- Define monitoring cadences by tier and assign named owners.
- Stand up an issue register and a consistent review memo template.
By 60 days (run the loop)
- Run the first monitoring cycle for highest-risk services (performance + security + records).
- Document reviews and open issues with due dates.
- Set calendarized governance touchpoints (service owner reviews, renewal gates).
- Define your audit approach and schedule by tier, including how you will validate remediation.
By 90 days (prove repeatability)
- Complete at least one full review-to-remediation cycle for priority third parties.
- Validate evidence quality: can you answer auditor questions in minutes, not days?
- Tune triggers and escalation based on what you learned (missing reports, unclear SLAs, weak security addenda).
- Integrate monitoring outputs into renewal and procurement decisions so oversight is enforceable.
Frequently Asked Questions
What counts as “regularly” for monitoring and audits?
“Regularly” needs a defined cadence set by risk and criticality, then applied consistently. Document the cadence per third party/service and keep evidence of each completed review 1.
Do we need to perform on-site audits of every third party?
No. The requirement is that audits are carried out regularly, which can include independent audits you review plus targeted audits or testing for higher-risk services. What matters is that the audit approach is planned, documented, and results drive remediation 1.
Our contracts don’t have strong SLAs or security exhibits. Can we still comply?
You can monitor operationally, but you will struggle to prove you evaluated performance against “contracted service levels and security requirements.” Prioritize contract updates or addenda for high-risk relationships 1.
What evidence is most convincing to auditors?
A contract-to-monitoring matrix, dated review records showing human sign-off, and an issue log with remediation closure evidence. Auditors also look for renewal decisions that reference monitoring outcomes.
How do we handle cloud providers where we can’t negotiate audit rights?
Document the constraint, rely on available independent assurance artifacts, and add compensating monitoring such as service health metrics, incident notification testing, and access/log reviews you control. Record the risk decision and any additional safeguards you implemented.
Who should own third-party monitoring: procurement, security, or the business?
Assign shared ownership: the business owner for service performance, security for security requirements, and TPRM/GRC for program governance and evidence consistency. One accountable owner per monitoring task avoids “everyone owns it” failures.
Footnotes
1. HITRUST CSF v11 Control Reference.