Service Delivery
HITRUST CSF v11 “Service Delivery” requires you to verify that the third party actually runs the security controls, service definitions, and delivery levels promised in the contract, and to continuously monitor the service so you can detect and correct drift. In practice, this means binding security obligations to measurable SLAs, then collecting evidence on an ongoing cadence. 1
Key takeaways:
- Your agreement is not the control; verified operation of contracted controls is the control.
- Monitoring must cover both security requirements and delivery levels tied to the service.
- Assessments commonly fail teams that collect one-time due diligence but cannot show ongoing service monitoring.
“Service delivery requirement” in HITRUST is a contract-to-operations control. You are accountable for making sure the third party’s real-world delivery matches what you negotiated: the security controls, the defined service scope, and the delivery levels (typically SLAs) in the third-party service delivery agreement. Then you must monitor the service so you can prove security requirements stay met over time. 1
This matters most for third parties that process regulated data, provide core business services, or run infrastructure you depend on. A clean SOC report or a signed contract will not carry you in a HITRUST assessment if you cannot show (1) how requirements were embedded into the agreement, (2) how you validated implementation, and (3) what monitoring data you review and what you do when performance or security slips.
This page is written for Compliance Officers, CCOs, and GRC leads who need to operationalize the requirement quickly. It gives you a practical implementation approach, the artifacts to retain, and the audit questions you should be ready to answer without scrambling.
Regulatory text
HITRUST CSF v11 09.e (Service Delivery) states: “It shall be ensured that the security controls, service definitions, and delivery levels included in the third-party service delivery agreement are implemented, operated, and maintained by the third party. Third-party services shall be monitored to ensure security requirements are being met.” 1
Operator translation:
You must do two things:
- Ensure the third party’s contracted security and delivery commitments are not merely written down, but implemented, operated, and maintained in the live service.
- Monitor the third-party service to confirm security requirements continue to be met and to trigger action when they are not. 1
Plain-English interpretation (what “good” looks like)
A passing implementation has a straight line from:
- Contract language (service scope, security obligations, SLAs, reporting, right-to-audit, incident notification), to
- Operational validation (what you did to confirm the third party has the controls in place for your specific service), to
- Ongoing monitoring (what telemetry, reports, attestations, tickets, meetings, and metrics you review; who reviews them; what happens when thresholds are missed). 1
If you cannot show ongoing monitoring beyond a one-time onboarding questionnaire, you will struggle to demonstrate the “operated and maintained” and “shall be monitored” parts of the requirement.
Who it applies to
Entity types: All organizations that rely on third parties for services in scope for HITRUST. 1
Operational contexts where this control is usually tested hard:
- Cloud hosting and managed infrastructure (IaaS/PaaS)
- SaaS systems handling sensitive or regulated data
- Managed security service providers (MSSPs), SOC operations, monitoring tools
- Claims processing, billing, call centers, data analytics, and other outsourced processing
- Critical subcontractors (fourth parties) the third party depends on to deliver your service, where your agreement and risk model cover them
What you actually need to do (step-by-step)
Step 1: Inventory “service delivery agreements” and map them to in-scope services
Create a list of third-party relationships that deliver in-scope services. For each, capture:
- Service name and owner (internal)
- Data types touched
- Where the service sits in your architecture/process
- Contract(s) governing delivery (MSA, SOW, DPA, SLA exhibit, security addendum)
Practical tip: If contracts are decentralized, start from your system inventory and work outward: “Which third parties support this system?”
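The Step 1 inventory can live as structured records rather than a free-form spreadsheet, which makes the later mapping steps mechanical. A minimal sketch in Python; the vendor, field names, and values are illustrative, not a HITRUST-mandated schema:

```python
from dataclasses import dataclass

@dataclass
class ThirdPartyService:
    """One row of the third-party service inventory (Step 1)."""
    vendor: str
    service_name: str
    internal_owner: str     # named business owner
    data_types: list[str]   # e.g. ["PHI", "PII"]
    architecture_role: str  # where the service sits in your stack
    agreements: list[str]   # governing documents: MSA, SOW, DPA, SLA exhibit

inventory = [
    ThirdPartyService(
        vendor="ExampleCloud",  # hypothetical vendor
        service_name="Managed hosting",
        internal_owner="IT Operations Director",
        data_types=["PHI"],
        architecture_role="Production infrastructure (IaaS)",
        agreements=["MSA", "SLA exhibit", "Security addendum"],
    ),
]

# Working outward from the system inventory: which vendors touch regulated data?
regulated = [s.vendor for s in inventory if "PHI" in s.data_types]
print(regulated)
```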
Step 2: Extract “service definitions, delivery levels, and security controls” into a control-to-contract matrix
Build a simple matrix per third party:
- Service definition: what’s in scope (functions, environments, locations, support model)
- Delivery levels: uptime/availability commitments, support response targets, RTO/RPO where relevant, maintenance windows, reporting frequency
- Security controls: access control, encryption, logging/monitoring, vulnerability management, incident notification, change management, data handling, subcontractor controls, etc.
Then identify:
- Which items are measurable
- Which items are auditable
- Which items are missing and need an amendment or compensating control
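The Step 2 matrix can be as simple as a list of tagged rows; tagging each item as measurable and auditable makes the gap analysis a one-line query. A sketch with hypothetical contract items:

```python
# Control-to-contract matrix (Step 2). Each contracted item is tagged with
# whether it is measurable and auditable, so gaps surface immediately.
# The items below are illustrative, not drawn from any real contract.
matrix = [
    # (category, item, measurable, auditable)
    ("delivery_level", "99.9% monthly uptime", True, True),
    ("delivery_level", "4-hour P1 support response", True, True),
    ("security_control", "AES-256 encryption at rest", False, True),
    ("security_control", "'industry standard security'", False, False),
]

# Items that are neither measurable nor auditable need an amendment
# or a compensating control.
gaps = [item for _, item, measurable, auditable in matrix
        if not (measurable or auditable)]
print(gaps)
```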
Step 3: Assign internal ownership and monitoring responsibilities
HITRUST assessors will look for clear accountability. Define:
- Business owner (owns service outcomes and escalation)
- Security/GRC owner (owns monitoring of security requirements and evidence)
- Procurement/Vendor management (owns contract governance and renewal gates)
Create a RACI that specifies who reviews reports, who approves exceptions, and who escalates issues.
Step 4: Validate the third party’s implementation for your service (not their generic posture)
“Implemented” means you validated that controls apply to the specific service delivery you consume. Methods that usually work:
- Review independent assurance artifacts (SOC reports, certifications) if they clearly cover the service scope you use
- Verify shared responsibility boundaries for cloud/SaaS
- Confirm your tenant configuration requirements (SSO, MFA, logging settings, encryption modes) are enabled
- Confirm incident notification paths and contacts work (tabletop or at least contact verification)
Document what you checked, what evidence you received, and what gaps you accepted or remediated.
Step 5: Establish ongoing monitoring with defined signals and thresholds
“Third-party services shall be monitored” requires an operational loop. Define monitoring inputs such as:
- SLA reports (availability, support response, backlog)
- Security deliverables (vuln scan summaries, patch cadence attestations, penetration test summaries where contractually required, incident reports)
- Access and user lifecycle signals (admin list review, SSO/MFA enforcement confirmation)
- Change notifications (material changes, new subprocessors/subcontractors if relevant)
- Service review meetings and action logs
For each input, specify:
- Frequency (aligned to contract commitments and service criticality)
- Reviewer (named role)
- Acceptance criteria (what “meets requirements” means)
- Escalation trigger (what happens when it fails)
Execution pattern that holds up in audits: a “monthly operational review” for delivery metrics plus a “security review” on a cadence tied to your risk tiering, with documented outcomes and follow-ups.
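The per-input definition above (frequency, reviewer, acceptance criteria, escalation trigger) maps cleanly onto a small data structure plus an evaluation pass. A sketch, assuming illustrative thresholds and role names:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class MonitoringSignal:
    name: str
    frequency: str                  # aligned to contract cadence
    reviewer: str                   # named role, not a team alias
    meets: Callable[[float], bool]  # acceptance criteria
    escalation: str                 # what happens on a miss

signals = [
    MonitoringSignal("availability_pct", "monthly", "GRC Analyst",
                     lambda v: v >= 99.9, "open vendor corrective action"),
    MonitoringSignal("p1_response_hours", "monthly", "GRC Analyst",
                     lambda v: v <= 4.0, "escalate to business owner"),
]

# Observed values from this month's SLA report (example figures).
observed = {"availability_pct": 99.5, "p1_response_hours": 3.2}

misses = []
for s in signals:
    value = observed[s.name]
    if not s.meets(value):
        # A miss must produce a retained record, not just an email thread.
        misses.append((s.name, value, s.escalation))

for name, value, action in misses:
    print(f"MISS {name}={value}: {action}")
```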
Step 6: Run the exception and remediation workflow
You need a consistent way to handle misses:
- Log the issue (ticket or risk register entry)
- Classify impact (security requirement vs delivery level vs scope breach)
- Require a corrective action plan from the third party
- Track to closure
- Document compensating controls or business acceptance if you accept risk
Renewals and contract extensions should be gated on unresolved material issues.
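The exception workflow above can be modeled as an explicit state machine so every logged issue ends in exactly one of two documented outcomes: closure via corrective action or closure via risk acceptance. A minimal sketch; the state names are one possible labeling, not HITRUST terminology:

```python
# Step 6 exception workflow as allowed state transitions.
ALLOWED = {
    "logged": {"classified"},
    "classified": {"cap_required", "risk_accepted"},
    "cap_required": {"closed"},    # corrective action plan tracked to closure
    "risk_accepted": {"closed"},   # documented business acceptance
    "closed": set(),
}

def advance(state: str, nxt: str) -> str:
    """Move an issue to the next state, rejecting skipped steps."""
    if nxt not in ALLOWED[state]:
        raise ValueError(f"invalid transition {state} -> {nxt}")
    return nxt

state = "logged"
for step in ("classified", "cap_required", "closed"):
    state = advance(state, step)
print(state)
```

Rejecting invalid transitions (e.g. jumping straight from "logged" to "closed") is what forces the classification and corrective-action records an assessor will ask for.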
Step 7: Prove “maintained” through continuous evidence
“Operated and maintained” is where many programs fail. Your evidence should show:
- Monitoring happened as planned
- Issues were detected
- Remediation occurred (or was formally accepted)
- The service stayed within required security and delivery parameters over time 1
Required evidence and artifacts to retain
Keep artifacts in a single “service delivery evidence pack” per third party:
Contractual
- Executed MSA/SOW, SLA exhibit, security addendum, DPA
- Right-to-audit and reporting clauses (if present)
- Documented service definition/scope statement
Operational validation
- Control-to-contract matrix
- Due diligence results tied to the specific service scope
- Shared responsibility mapping (especially for cloud/SaaS)
- Configuration validation records (SSO/MFA enabled, logging configured, encryption settings)
Ongoing monitoring
- SLA reports and review notes
- Security monitoring deliverables required by the agreement
- Meeting minutes/QBR notes with action items
- Ticket/export showing issues found and remediation closure
- Exceptions, risk acceptances, and approvals
Governance
- RACI and monitoring procedure
- Risk tiering rationale that drives monitoring intensity
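The evidence pack can be mirrored as a simple folder manifest per third party, with a completeness check run before an assessment. A sketch; the file names are examples, and the four group names follow the artifact groups above:

```python
# Per-vendor "service delivery evidence pack" as a folder manifest.
EVIDENCE_PACK = {
    "contractual": ["msa.pdf", "sla_exhibit.pdf", "security_addendum.pdf"],
    "operational_validation": ["control_matrix.xlsx",
                               "shared_responsibility.xlsx"],
    "ongoing_monitoring": ["2024-06_sla_report.pdf", "qbr_minutes.docx"],
    "governance": ["raci.xlsx", "risk_tiering_rationale.docx"],
}

# Pre-assessment check: any empty group is a gap to remediate.
missing = [group for group, files in EVIDENCE_PACK.items() if not files]
print(missing)  # empty list when each group holds at least one artifact
```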
Common assessment/audit questions and hang-ups
Expect questions like:
- “Show me where the contract requires the security controls you say are in place.”
- “How do you confirm the third party operates these controls for your instance/tenant?”
- “Where is your evidence of monitoring? Who reviewed it and what did they do?”
- “Show the last time an SLA or security requirement was missed and how it was handled.”
- “How do you ensure subcontractors don’t break your contracted commitments?” (if applicable to your agreements)
Hang-ups that trigger findings:
- Contracts lack measurable security deliverables (only vague “industry standard security” language).
- Monitoring exists informally (emails/meetings) but no retained record.
- Evidence is generic (company-wide SOC) without mapping to your service scope.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating onboarding due diligence as monitoring.
  Fix: Create a recurring monitoring calendar with named reviewers and retained outputs.
- Mistake: SLAs exist, but no one checks them.
  Fix: Require SLA reporting in the agreement and store monthly/quarterly reports with review notes.
- Mistake: Security obligations are not measurable.
  Fix: Add deliverables (reporting, notification windows, audit support, vulnerability handling expectations) that produce evidence.
- Mistake: No link between issues and contract enforcement.
  Fix: Tie repeated misses to escalation, service credits (if used), remediation deadlines, and renewal decisions.
- Mistake: Evidence scattered across tools and inboxes.
  Fix: Centralize per-third-party evidence packs. Daydream can help by collecting artifacts, tracking review tasks, and keeping an audit-ready timeline of monitoring and follow-ups without chasing stakeholders.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this specific HITRUST control, so this page focuses on assessment and operational risk.
Risk implications if you under-implement:
- Security controls can drift after onboarding (new features, staff turnover, subcontractors, platform changes).
- Delivery failures (availability, support, incident response) become security failures when they impact detection, containment, recovery, or data handling.
- You lose negotiation power at renewal if you cannot evidence persistent noncompliance with defined obligations.
Practical execution plan (30/60/90)
First 30 days (stabilize)
- Identify in-scope third parties and gather all governing agreements.
- Create the control-to-contract matrix for high-criticality third parties first.
- Stand up ownership (RACI) and a single evidence location per third party.
- Define the minimum monitoring signals you will collect for each critical service.
Days 31–60 (operationalize)
- Validate implementation for each critical service (scope mapping + targeted configuration checks).
- Implement monitoring tasks and reminders; start collecting SLA and security deliverables.
- Define escalation paths and an exception workflow tied to remediation tracking.
- Run one service review meeting per critical third party and retain minutes plus action items.
Days 61–90 (prove and harden)
- Close the first cycle of monitoring with documented reviews and outcomes.
- Fix contract gaps for renewals or negotiate amendments where security deliverables are missing.
- Test the workflow: pick one open issue and drive it through corrective action to closure.
- Prepare an “audit packet” per critical third party: contract excerpts, matrix, last monitoring outputs, and one remediation example.
Frequently Asked Questions
Do we have to monitor every third party the same way?
No. HITRUST requires monitoring to ensure security requirements are met, but you can scale depth based on service criticality and risk tiering if you document the rationale and follow the plan consistently. 1
If a third party gives us a SOC report, does that satisfy “implemented, operated, and maintained”?
It can support your evidence, but you still need to show the SOC scope matches the service you consume and that you run ongoing monitoring of your contracted requirements. A SOC report is rarely a complete substitute for service-specific validation and monitoring. 1
What counts as “monitoring” for a SaaS provider where we can’t see their internal controls?
Monitoring can be a mix of SLA reporting, security deliverables promised in the agreement, confirmation of your tenant security configuration (SSO/MFA/logging), and documented service reviews with tracked action items. The key is repeatable review and retained evidence. 1
Our contract language is high-level. How do we comply without renegotiating immediately?
Stand up monitoring based on what you can measure now (availability reports, incident notifications, access/config reviews) and document gaps as risks. Then address missing security deliverables at renewal or via an addendum for critical services. 1
What evidence do assessors want to see most often?
They usually want the executed agreement, a mapping from contract commitments to controls, and proof of ongoing monitoring with reviewer notes plus remediation tracking for at least one issue. Keep it organized per third party so you can produce it quickly. 1
How should we handle repeated SLA or security misses?
Treat them as governance events: document the miss, require a corrective action plan, track to closure, and escalate through contract remedies and renewal gating if the pattern continues. Consistent follow-through is part of proving the service is “maintained.” 1
Footnotes
1. HITRUST CSF v11, Control Reference 09.e (Service Delivery).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream