Supporting Utilities
The HITRUST supporting utilities requirement means you must protect critical equipment from power, HVAC, and other utility failures by providing backup capacity and treating utility loss as an incident that triggers response procedures. Operationalize it by defining “critical equipment,” engineering redundancy where you control facilities, and enforcing contractual and testing evidence where a third party runs your data centers. 1
Key takeaways:
- Build and prove backup capacity for power and environmental controls that support in-scope systems. 1
- Treat utility failures as incident-response-triggering events with monitoring, escalation, and post-incident reviews. 1
- Evidence must cover design (what backups exist), operation (monitoring and maintenance), and execution (tests and incident records). 1
“Supporting utilities” is a facilities-and-operations control with security consequences: if electricity, cooling, or water fails, systems fail. In healthcare and other regulated environments, that failure quickly becomes a confidentiality, integrity, and availability issue because downtime can interrupt patient care workflows, delay claims, or break audit trails. HITRUST CSF v11 08.h makes this explicit by requiring backup capacity for supporting utilities and requiring that utility failures trigger incident response procedures. 1
For a Compliance Officer, CCO, or GRC lead, the work is less about writing a policy and more about proving engineering reality and operational discipline. Auditors will look for a clean line from (1) an inventory of critical equipment, to (2) the utility dependencies for that equipment, to (3) documented backup capacity and preventive maintenance, to (4) monitoring that detects utility failures, and (5) incident handling that treats a utility failure like a real operational incident with tickets, escalation, and lessons learned. 1
This page gives requirement-level implementation guidance you can execute quickly, including evidence checklists, audit questions, and a practical phased plan.
Regulatory text
HITRUST CSF v11 08.h states: “Equipment shall be protected from power failures and other disruptions caused by failures in supporting utilities. Supporting utilities such as electricity, water supply, and air conditioning shall have backup capacity, and utility failures shall trigger appropriate incident response procedures.” 1
Operator translation (what the auditor expects):
- You identified which equipment is “critical” to in-scope systems (production infrastructure, key network/security components, and supporting building systems where applicable). 1
- You know which utilities those systems depend on (power, cooling, water, generators, fuel, UPS batteries, building management systems). 1
- You implemented backup capacity appropriate to the risk (on-prem: UPS/generator/redundant cooling; cloud/colocation: documented resilience commitments and your own failover design). 1
- A supporting utility failure is detectable and actionable, and it triggers incident response steps, not informal “someone noticed it” handling. 1
Plain-English interpretation
This requirement is about availability controls with audit-grade proof:
- Protect equipment from utility failures: Don’t rely on a single feed, a single cooling unit, or a single building dependency for equipment that supports regulated workloads. 1
- Have backup capacity for utilities: Backup capacity must exist and be maintained, not just planned. It can be physical redundancy (UPS, generator, redundant HVAC), architectural redundancy (multi-zone, multi-region), or contractual/third-party redundancy (colocation with documented resilience plus your own DR). 1
- Treat utility failures as incidents: Loss of power/cooling/water that impacts equipment must initiate your incident response process: detection, triage, escalation, communication, restoration, and after-action review. 1
Who it applies to
Entity scope: All organizations that use HITRUST CSF and operate or rely on equipment supporting in-scope information systems. 1
Operational contexts you must cover:
- On-prem data centers, server rooms, wiring closets: You own the facility risk, so you must show backup power and environmental controls or documented compensating architecture. 1
- Colocation: The third party owns utilities; you must obtain their evidence and map it to your control, plus ensure your own design tolerates a site issue. 1
- Public cloud / SaaS: You typically cannot inspect “generators and chillers,” so your obligation becomes (a) supplier due diligence for resilience controls and incident notification, and (b) your own architectural resiliency (multi-zone/region strategies where required by your risk). 1
- Critical network/security equipment: Firewalls, VPN concentrators, identity infrastructure, monitoring collectors, and storage systems often get missed because they “aren’t apps.” They still depend on supporting utilities. 1
What you actually need to do (step-by-step)
1) Define “critical equipment” and map utility dependencies
Create (or update) an inventory that, at minimum, tags:
- Equipment supporting in-scope systems (production, key corporate systems that touch regulated data, security infrastructure). 1
- Location and hosting model (on-prem, colo, cloud). 1
- Utility dependencies (power feed/UPS, cooling/HVAC path, water where applicable, building management dependencies). 1
Practical tip: Make this a join between your CMDB/asset inventory and your BIA/criticality ratings. Auditors accept “criticality + dependency mapping” more readily than a generic asset list.
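The join described above can be sketched in a few lines. This is a minimal, hypothetical example: the field names (`utilities`, `hosting`) and the ratings are assumptions about a generic CMDB export and BIA register, not a prescribed schema.

```python
# Hypothetical sketch: join an asset inventory (CMDB export) with BIA
# criticality ratings to produce a utility dependency map for 08.h scoping.
# All field names and values are illustrative assumptions.

assets = [
    {"id": "srv-01", "location": "onprem-dc1", "hosting": "on-prem",
     "utilities": ["power/UPS-A", "cooling/CRAC-1"]},
    {"id": "fw-01", "location": "colo-east", "hosting": "colo",
     "utilities": ["power/feed-B"]},
    {"id": "app-db", "location": "cloud-us-east", "hosting": "cloud",
     "utilities": ["provider-managed"]},
]

# BIA criticality ratings keyed by asset id
bia = {"srv-01": "critical", "fw-01": "critical", "app-db": "high"}

def dependency_map(assets, bia, in_scope=frozenset({"critical"})):
    """Return only assets whose BIA rating puts them in scope,
    tagged with their criticality and utility dependencies."""
    return [
        {**a, "criticality": bia[a["id"]]}
        for a in assets
        if bia.get(a["id"]) in in_scope
    ]

for row in dependency_map(assets, bia):
    print(row["id"], row["hosting"], row["utilities"])
```

The point is the join itself: an auditor-friendly dependency map is the intersection of "what exists" (inventory) and "what matters" (criticality), with utility dependencies carried along as first-class fields.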
2) Set minimum resilience requirements by hosting model
Write a short standard (one page is fine) that answers:
- For on-prem/colo, what backup capacity must exist for power and cooling supporting critical equipment? 1
- For cloud, what architecture is required for critical services (for example, redundant zones for tier-1 apps) and what evidence must be retained from the cloud provider? 1
- For remote/edge sites, what’s required (UPS for network closets, temperature monitoring, documented response steps)? 1
Keep it risk-based. The requirement does not prescribe a single engineering design, but it does require backup capacity and incident handling.
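A one-page standard like this can be encoded as structured data so it is enforceable and queryable, not just prose. The hosting models, thresholds, and evidence names below are illustrative assumptions, not HITRUST-prescribed values.

```python
# Illustrative encoding of a minimum resilience standard by hosting model.
# Every requirement and evidence item here is an example, not a mandate.

RESILIENCE_STANDARD = {
    "on-prem": {
        "power": "UPS + generator for critical loads",
        "cooling": "N+1 HVAC or documented failover plan",
        "evidence": ["maintenance logs", "transfer test reports"],
    },
    "colo": {
        "power": "provider resilience commitment in contract",
        "cooling": "provider resilience commitment in contract",
        "evidence": ["assurance report", "incident notification clause"],
    },
    "cloud": {
        "architecture": "multi-zone deployment for tier-1 services",
        "evidence": ["provider resilience docs", "architecture diagrams"],
    },
    "edge": {
        "power": "UPS for network closets",
        "monitoring": "temperature alerting with escalation",
        "evidence": ["alert configs", "response runbook"],
    },
}

def required_evidence(hosting_model):
    """Return the evidence an assessor should find for a hosting model."""
    return RESILIENCE_STANDARD[hosting_model]["evidence"]

print(required_evidence("colo"))
```

Keeping the standard in one structure makes the audit crosswalk trivial: each hosting model maps to both its engineering requirement and the evidence that proves it.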
3) Implement or validate backup capacity
For sites you control, confirm and document:
- UPS presence and coverage for critical loads. 1
- Generator presence where required, plus fuel strategy (onsite fuel, refueling contract, or other documented plan). 1
- Redundant cooling capacity or failover plan that prevents thermal shutdown. 1
- Maintenance schedules for UPS batteries, generators, and HVAC systems. 1
For sites a third party controls (colo/cloud/SaaS), confirm and retain:
- Contract terms or supplier commitments for uptime/resilience and incident notification. 1
- Current third-party assurance or resilience documentation you can map to “backup capacity for supporting utilities” (for example, their control descriptions and testing narratives). 1
4) Add monitoring and define “utility failure” triggers
You need proof that failures are detected and acted on. Implement:
- Monitoring for power events (UPS on battery, generator on, power feed loss), temperature/humidity excursions, and key facility alarms where available. 1
- Alert routing into your ticketing/incident platform. 1
- A written trigger definition: what conditions require an incident ticket versus a maintenance ticket. 1
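The written trigger definition above can be expressed as routing logic so the incident-versus-maintenance decision is consistent rather than ad hoc. Event names and the temperature threshold below are assumptions for illustration; tune them to your environment.

```python
# Hedged sketch of a trigger definition as alert-routing logic:
# which utility events open an incident versus a maintenance ticket.
# Event names and the 30 C threshold are illustrative assumptions.

INCIDENT_TRIGGERS = {"ups_on_battery", "generator_running", "feed_loss"}
TEMP_INCIDENT_C = 30.0  # assumed thermal threshold for server rooms

def route_event(event_type, temperature_c=None):
    """Classify a utility event per the written trigger definition."""
    if event_type in INCIDENT_TRIGGERS:
        return "incident"
    if event_type == "temperature_excursion" and temperature_c is not None:
        return "incident" if temperature_c >= TEMP_INCIDENT_C else "maintenance"
    return "maintenance"

print(route_event("ups_on_battery"))               # power event: escalate to IR
print(route_event("temperature_excursion", 27.5))  # below threshold: maintenance
print(route_event("temperature_excursion", 31.0))  # excursion: incident
```

Encoding the trigger definition this way also gives you test evidence for free: the routing rules themselves become an artifact you can show an assessor.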
5) Wire utility failure into incident response procedures
Update incident response runbooks so “supporting utility failure” is a named scenario:
- Triage: confirm scope (which systems/equipment impacted), safety considerations, and business impact. 1
- Escalation: facilities/on-call, IT ops, security, and business owners as appropriate. 1
- Communications: internal updates and third-party notifications where relevant. 1
- Recovery: restoration steps, validation, and monitoring. 1
- Post-incident review: document root cause and corrective actions. 1
6) Test and document
Auditors want evidence of reality:
- Generator/UPS transfer tests or documented exercises (or third-party test attestations for colo). 1
- Tabletop exercises for utility-failure incident handling if full facility testing is not feasible. 1
- Corrective action tracking from test findings through closure. 1
7) Make it auditable: assign ownership and review cadence
Assign clear owners:
- Facilities/Engineering: backups and maintenance.
- IT Operations: monitoring, architecture, recovery.
- Security/IR: incident classification and response governance.
- GRC: evidence collection, third-party due diligence tracking, control testing coordination.
If you use Daydream to run third-party risk, this is a good place to standardize requests to colocation providers and critical SaaS: one intake workflow for “supporting utilities and resilience evidence,” mapped to HITRUST 08.h, with renewal reminders and exception tracking.
Required evidence and artifacts to retain
Keep evidence in a single control folder mapped to HITRUST 08.h. Typical artifacts:
- Utility dependency map for critical equipment (inventory extract + dependency fields). 1
- Facilities resilience documentation: UPS/generator/HVAC design summaries, one-lines, or facility spec sheets appropriate to your environment. 1
- Maintenance records for UPS/generator/HVAC and any building monitoring systems in scope. 1
- Monitoring configuration evidence: alert definitions, routing rules, and sample alerts/tickets. 1
- Incident response runbook section for utility failures, plus on-call rosters/escalation paths. 1
- Test records: generator/UPS test reports, DR/BC exercise notes where utility failure was included, and corrective actions. 1
- Third-party evidence for colo/cloud/SaaS: contract clauses, assurance reports or control narratives, and incident notification terms relevant to utility failures. 1
- Exceptions register for gaps (for example, a remote closet without UPS), with compensating controls and remediation plan. 1
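The artifact list above can double as a completeness check against the control folder. The category names mirror the list; the idea of checking collected-versus-required categories is an illustrative assumption about how you organize evidence, not a HITRUST requirement.

```python
# Sketch of an evidence-completeness check for the 08.h control folder.
# Category names mirror the artifact list above; layout is an assumption.

REQUIRED_ARTIFACTS = {
    "dependency_map", "resilience_docs", "maintenance_records",
    "monitoring_config", "ir_runbook", "test_records",
    "third_party_evidence", "exceptions_register",
}

def missing_artifacts(folder_contents):
    """Return which required artifact categories are absent, sorted."""
    return sorted(REQUIRED_ARTIFACTS - set(folder_contents))

# Example: a partially assembled folder ahead of an assessment
collected = {"dependency_map", "maintenance_records", "ir_runbook",
             "test_records", "monitoring_config"}
print(missing_artifacts(collected))
```

Running a check like this before the audit window turns "do we have everything?" into a deterministic answer rather than a last-minute scramble.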
Common exam/audit questions and hangups
Auditors and HITRUST assessors commonly press on:
- “Show me backup capacity.” They will ask for tangible proof, not a policy statement. Bring diagrams, maintenance logs, and test reports. 1
- “What happens when the AC fails?” If the answer is “Facilities gets an email,” expect a finding unless it is integrated with incident handling and escalation. 1
- “How do you cover cloud?” You need a defensible story: provider evidence + your architecture decisions for resilience. 1
- “Do you treat utility failures as security incidents?” The requirement says utility failures trigger appropriate incident response procedures. Show classification guidance and examples of tickets. 1
- “Are remote sites included?” If they support regulated processing or connectivity, they are in scope. Expect questions about closets, clinics, labs, and branch offices. 1
Frequent implementation mistakes (and how to avoid them)
- Writing policy without engineering evidence. Fix: pair every statement with an artifact type (test record, maintenance log, diagram). 1
- Forgetting HVAC/environmental dependencies. Fix: add temperature/humidity monitoring and escalation rules for server rooms and critical closets. 1
- Assuming “cloud provider handles it” ends the control. Fix: retain provider documentation and show your own resilience design decisions for critical services. 1
- No incident linkage. Fix: create an incident category for utility failures and require tickets for meaningful events and tests. 1
- Unowned controls across Facilities and IT. Fix: a simple RACI and a shared evidence repository reduce last-minute audit scrambles. 1
Enforcement context and risk implications
No public enforcement actions are tied directly to this requirement. Practically, failures in supporting utilities create predictable operational risk: downtime, data integrity issues from abrupt shutdowns, and security monitoring blind spots if logging and identity systems lose power or cooling. HITRUST assessors generally treat this as an availability control with direct patient care and business continuity implications when healthcare workflows depend on system uptime. 1
Practical execution plan (30/60/90)
Use this as a sequencing guide; adjust to your environment and audit calendar.
First 30 days (stabilize scope and evidence)
- Confirm in-scope systems and identify critical equipment supporting them. 1
- Create a utility dependency map for each location/hosting model in scope. 1
- Gather existing evidence: facility specs, prior test reports, maintenance logs, monitoring screenshots, and incident procedures. 1
- Open exceptions for any known gaps (no UPS, no monitoring, unclear escalation). 1
Next 60 days (close control design gaps)
- Publish a minimum resilience standard by hosting model, including evidence requirements for third parties. 1
- Implement or tighten monitoring and alert routing into your incident/ticketing system. 1
- Update incident response runbooks and train on-call teams for utility failure scenarios. 1
- Send evidence requests to colocation and other critical third parties; track responses and exceptions. 1
Next 90 days (prove operating effectiveness)
- Execute at least one utility-failure exercise: a generator/UPS test and/or an incident tabletop with documented outcomes. 1
- Validate maintenance is happening as scheduled; remediate overdue items with documented closure. 1
- Run an internal control test: sample a utility alert and trace it to ticket creation, escalation, resolution, and post-incident documentation. 1
- Package your audit binder: narrative + artifact index + crosswalk to HITRUST 08.h. 1
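The internal control test in the 90-day list above, tracing a sampled alert from creation through post-incident review, can be sketched as a simple completeness check. The record fields are assumptions about a generic ticketing export, not a specific platform's schema.

```python
# Illustrative control test: given a sampled utility alert's ticket record,
# check that the trace covers creation, escalation, resolution, and review.
# Field names are assumptions about a generic ticketing export.

REQUIRED_STEPS = ["created", "escalated", "resolved", "post_incident_review"]

def trace_complete(ticket):
    """Return the lifecycle steps missing from a sampled ticket, if any."""
    return [s for s in REQUIRED_STEPS if not ticket.get(s)]

sample = {
    "alert_id": "ALRT-1042",
    "created": "2024-03-02T14:05Z",
    "escalated": "2024-03-02T14:20Z",
    "resolved": "2024-03-02T16:40Z",
    "post_incident_review": None,  # gap: review never documented
}

gaps = trace_complete(sample)
print("PASS" if not gaps else f"FAIL: missing {gaps}")
```

A failed trace like this one is exactly the finding an assessor would write, so catching it in your own sampling converts an audit finding into a routine corrective action.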
Frequently Asked Questions
Does “supporting utilities” apply if we are fully cloud-hosted?
Yes, because your equipment and services still depend on electricity and cooling, even if a cloud provider operates them. Your job is to retain provider evidence and show your own resilience design plus incident handling for provider utility-related outages. 1
What counts as “backup capacity” under HITRUST 08.h?
The requirement does not prescribe a single design, but it does require that utilities supporting equipment have backup capacity. For on-prem, that usually means UPS/generator and resilient cooling; for third parties, it means documented resilience commitments and your own failover planning. 1
Do we need to treat a brief power blip as an incident?
Define thresholds in your runbook so meaningful utility failures trigger incident response procedures, while minor events become maintenance tickets. Auditors mainly care that impactful failures are detected, escalated, and documented consistently. 1
What evidence is strongest for auditors?
Test records (UPS/generator transfer tests), maintenance logs, monitoring alerts that created tickets, and incident records that show escalation and closure tend to be the most persuasive. Pair them with a dependency map that shows why the equipment is in scope. 1
How do we handle colocation providers who won’t share detailed facility documents?
Start with contract terms, provider assurance documentation they are willing to share, and written attestations about backup power and cooling. If the evidence is still thin, document an exception and reduce dependence through architectural redundancy where feasible. 1
Who should own this control: Facilities or Security?
Split ownership by function: Facilities owns backup utilities and maintenance, IT Ops owns monitoring and recovery, and Security/IR owns incident classification and governance. GRC should own the evidence package and third-party follow-up. 1
Authoritative Sources
1. HITRUST CSF v11 Control Reference
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream