Service catalog and service level governance

The service catalog and service level governance requirement means you must keep an accurate, approved list of services you provide and the service level commitments for each, then govern those commitments through ownership, measurement, and change control. For ISO/IEC 20000-1 alignment, auditors look for defined catalog entries, documented SLAs/OLAs, and evidence you monitor performance and manage breaches 1.

Key takeaways:

  • You need a controlled service catalog with standard fields, owners, and approval workflow, not a spreadsheet no one trusts.
  • Every in-scope service must have documented service level commitments and clear accountability for measuring and reporting them.
  • Governance is provable operation: reviews, exceptions, breach handling, and change management tied back to the catalog and SLAs.

Service catalogs and service levels are “simple” until an audit asks a basic question: “Which services are you responsible for, what do you promise, and how do you prove you meet it?” If your answer depends on tribal knowledge, scattered contracts, or dashboards that don’t map to a defined service, you will struggle to show consistent control operation.

This requirement is operational by design. It’s about keeping service definitions stable enough that delivery teams can run them, customers can understand them, and governance functions can assess them. It also forces alignment between what Sales or Product says, what Operations can deliver, and what Risk/Compliance can evidence. Under ISO/IEC 20000-1, you should expect scrutiny on whether service commitments are defined, owned, measured, and updated under control when services change 1.

This page gives you requirement-level implementation guidance you can execute quickly: who owns what, what fields to include, how to set up reviews, what evidence to retain, and what auditors commonly challenge.

Service catalog and service level governance requirement (ISO 20000): plain-English meaning

You must maintain a defined service catalog (a controlled list of live services) and defined service level commitments for those services, then govern them through ownership, measurement, reporting, and controlled change. The compliance test is not whether you “have SLAs somewhere,” but whether you can show that service definitions and commitments are current, approved, and used to run and monitor service delivery 1.

Practical interpretation:

  • If a service exists, it is in the catalog.
  • If it is in the catalog, it has an accountable owner and a description that matches reality.
  • If you make service level commitments, they are documented, measurable, monitored, and acted on when breached.
  • If the service changes, the catalog and commitments change under control.

Regulatory text

Provided excerpt: “Baseline implementation-intent summary derived from publicly available framework overviews; licensed standard text is not reproduced in this record.”
Implementation-intent summary: “Maintain defined service catalog entries and service level commitments.” 1

What the operator must do:

  1. Define each service you deliver in a consistent, controlled format.
  2. Document service level commitments for each service (what is measured, targets, measurement method, reporting).
  3. Assign accountability for service definition accuracy and service level performance.
  4. Operate governance: approvals, reviews, reporting cadence, breach handling, and change control tied back to the catalog and SLAs 1.

Who it applies to

Entities: IT service providers and internal IT organizations running a service management system aligned to ISO/IEC 20000-1 1.

Operational context (typical scope):

  • Customer-facing technology services (SaaS, managed services, hosting, service desk).
  • Internal shared services (identity, network, endpoint, collaboration) when they are managed as services with defined consumers.
  • Third-party-supported services where you remain accountable for outcomes, even if a third party performs parts of delivery.

If you outsource components, the requirement still applies to your governance. You may flow down requirements to third parties through contracts and OLAs, but you still need an internal catalog and evidence of oversight.

What you actually need to do (step-by-step)

Step 1: Define “service” and set catalog scope

Decide what qualifies as a service versus a component. Auditors accept reasonable boundaries if they are documented and consistently applied.

  • Service: outcome delivered to a consumer (e.g., “Customer Support Portal”).
  • Component: enabling asset (e.g., “PostgreSQL cluster”) unless you expose it as a consumer-facing service.

Deliverable: Service Catalog Policy/Standard defining inclusion rules, ownership model, and update workflow.

Step 2: Build a minimum-viable service catalog template

Use a standard record structure. Keep it tight enough to maintain, detailed enough to govern.

Recommended service catalog fields (minimum):

  • Service name (unique), description, service category
  • Service owner (accountable), operational owner (responsible)
  • Consumers/tenants and support model (hours, channels)
  • Dependencies (critical upstream/downstream services and key third parties)
  • Data classification handled (high-level)
  • Service hours and maintenance windows
  • Linked documents: SLA, OLA(s), runbook, escalation path, DR/BC notes
  • Current status (draft/active/retired) and effective date
  • Change history / approvals

Deliverable: Service Catalog Register in a controlled system (ITSM tool, GRC system, or a controlled repository with workflow).

Step 3: Establish service level commitments per service

For each cataloged service, document commitments that are:

  • Measurable: clear metric definition and calculation method.
  • Assignable: named owner for reporting and remediation.
  • Actionable: thresholds trigger defined actions.

Common SLA areas (choose what fits your service):

  • Availability/uptime definition (including exclusions)
  • Incident response times and resolution targets by priority
  • Request fulfillment targets
  • Performance metrics (latency, throughput) where relevant
  • Support responsiveness (first response time) and escalation

Deliverables:

  • SLA document (external/customer) and/or Service Level Targets (internal) linked to the service catalog entry.
  • OLA(s) where internal teams or third parties must meet supporting targets.
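
As one way to make the "measurable" criterion concrete, here is a sketch of a monthly availability calculation that excludes approved maintenance from scheduled time. The exclusion rule is an assumption for illustration; your SLA's definition may differ, which is exactly why the formula belongs in a written metric spec:

```python
from datetime import timedelta

def availability_pct(period: timedelta,
                     outages: list[timedelta],
                     approved_maintenance: list[timedelta]) -> float:
    """Availability = (scheduled time - unplanned downtime) / scheduled time.

    Approved maintenance is removed from scheduled time, so it neither
    counts as downtime nor inflates the denominator. Exclusions like this
    must be written into the SLA, not applied silently.
    """
    scheduled = period - sum(approved_maintenance, timedelta())
    downtime = sum(outages, timedelta())
    if scheduled <= timedelta():
        raise ValueError("no scheduled time in period")
    return 100.0 * (scheduled - downtime) / scheduled
```

A 30-day month with a 4-hour approved maintenance window and a 2-hour outage yields a different number than the naive uptime/elapsed-time ratio; disputes usually start exactly there.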

Step 4: Define governance: ownership, reporting, and review

Create a repeatable operating rhythm so your catalog and SLAs stay current.

Minimum governance expectations auditors probe:

  • RACI for service definition upkeep, SLA definition, monitoring, reporting, and breach management.
  • Review triggers: new service launch, major change, recurring breaches, dependency changes, third-party changes.
  • Approval workflow: who approves new/changed service definitions and commitments.
  • Exception process: how you approve temporary noncompliance, with compensating controls and end dates.

Deliverables:

  • Service Level Governance Procedure
  • Service review meeting minutes (agenda, attendance, decisions)
  • Exception register (with approvals and closure evidence)
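
An exception-register entry can be sketched as follows (field names are hypothetical); the overdue check illustrates the closure evidence auditors ask for, a hard end date that someone actually tracks:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExceptionEntry:
    service: str
    description: str                 # what is temporarily noncompliant, and why
    compensating_controls: list[str]
    approved_by: str                 # named approver, not "the team"
    end_date: date                   # exceptions must expire
    closed: bool = False

    def is_overdue(self, today: date) -> bool:
        """An open exception past its end date is a finding waiting to happen."""
        return not self.closed and today > self.end_date
```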

Step 5: Implement measurement and evidence-ready reporting

Tie monitoring directly to cataloged services. A dashboard that tracks infrastructure without mapping to services is weak audit evidence.

Operationalize:

  • Map service to monitoring checks, SLO indicators, and ticket queues.
  • Produce a standard monthly/quarterly service report per critical service:
    • SLA results
    • breaches and root cause
    • corrective actions and due dates
    • trend commentary linked to changes/releases

Deliverables:

  • Service level reports
  • Breach tickets/post-incident reviews with corrective actions
  • Corrective action tracking (owned, dated, closed)
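
The breach side of such a report can be sketched as a first-response check against per-priority targets. Ticket fields and target values here are illustrative assumptions, not any particular ITSM tool's schema or anyone's contractual targets:

```python
from datetime import datetime, timedelta

# Hypothetical per-priority first-response targets.
RESPONSE_TARGETS = {
    "P1": timedelta(minutes=15),
    "P2": timedelta(hours=1),
    "P3": timedelta(hours=4),
}

def sla_report(tickets: list[dict]) -> dict:
    """Return met/breached counts, attainment %, and breach IDs for a period."""
    met, breached, breach_ids = 0, 0, []
    for t in tickets:
        elapsed = t["first_response"] - t["opened"]
        if elapsed <= RESPONSE_TARGETS[t["priority"]]:
            met += 1
        else:
            breached += 1
            breach_ids.append(t["id"])
    total = met + breached
    return {
        "met": met,
        "breached": breached,
        "attainment_pct": round(100 * met / total, 2) if total else None,
        "breach_ids": breach_ids,  # each of these should become a breach ticket
    }
```

The design point is traceability: the report carries breach IDs so an auditor can walk from the attainment figure to the underlying tickets and their corrective actions.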

Step 6: Control changes and retirement

Services change constantly. Governance fails when the catalog lags reality.

Controls to implement:

  • Change management requires updating the service catalog and SLA/OLA when a change affects scope, hours, dependencies, measurement method, or targets.
  • Retirement process: confirm consumers migrated, contracts updated, monitoring retired, catalog status changed.

Deliverables:

  • Change records referencing catalog updates
  • Service retirement checklist and approvals
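
One way to automate the first control is a gate that flags change records touching SLA-relevant fields without a catalog update reference. Field names here are assumptions for illustration; the idea is that the check runs wherever change records are reviewed:

```python
# Hypothetical catalog fields whose change should force a catalog/SLA update,
# matching the list in Step 6: scope, hours, dependencies, measurement, targets.
SLA_RELEVANT_FIELDS = {
    "scope", "service_hours", "dependencies", "measurement_method", "targets",
}

def missing_catalog_updates(change_records: list[dict]) -> list[str]:
    """Return IDs of changes that should have updated the catalog but did not."""
    flagged = []
    for c in change_records:
        touches_sla = bool(SLA_RELEVANT_FIELDS & set(c.get("affected_fields", [])))
        if touches_sla and not c.get("catalog_update_ref"):
            flagged.append(c["id"])
    return flagged
```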

Required evidence and artifacts to retain (audit-ready checklist)

Keep artifacts in a system with access control and version history.

| Evidence | What it proves | Typical auditor test |
| --- | --- | --- |
| Service catalog register (controlled) | Services are defined and governed | Sample services traced to owners, docs, monitoring |
| Service catalog policy/standard | Scope and upkeep rules exist | Check consistency across teams |
| SLA/service level target docs per service | Commitments are defined and measurable | Trace metric definition to reports |
| OLA(s) and third-party supporting commitments | Dependencies are governed | Trace to contracts/OLA reviews |
| Service level reports and dashboards | Monitoring occurs | Recalculate a metric for a sample period |
| Breach records + corrective actions | You act on misses | Verify closure and recurrence handling |
| Governance meeting minutes | Ongoing oversight | Evidence of decisions and follow-up |
| Change records with catalog/SLA update links | Updates are controlled | Spot-check changes and effective dates |

Common audit questions and hang-ups

  1. “Show me your service catalog.” They will test completeness by comparing it to CMDB entries, customer contracts, or top incident categories.
  2. “Who owns this service?” “The team” is not a control. Name a role/person and show delegation.
  3. “How do you calculate availability?” Ambiguous definitions (exclusions, maintenance, dependencies) trigger findings.
  4. “Prove the SLA was met last period.” Screenshots without timestamps, raw logs without mapping, or reports with manual edits get challenged.
  5. “What happens when you miss?” Auditors want a repeatable breach process, not ad hoc heroics.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Catalog is a static document. Fix: treat it as a living system with workflow, owners, and change triggers.
  • Mistake: SLAs exist only in customer contracts. Fix: create internal service level targets and OLAs that map contract promises to operational metrics.
  • Mistake: Metrics aren’t defined tightly. Fix: write “metric specs” (formula, data source, exclusions, time window, report owner) and link them to the SLA.
  • Mistake: Monitoring doesn’t map to services. Fix: each catalog entry lists the monitoring sources and the reporting owner.
  • Mistake: No exception path. Fix: implement a formal exception register with approvals and closure criteria.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Treat this as an auditability and service delivery risk: weak catalog/SLA governance commonly translates into customer disputes, inconsistent service delivery, and audit findings due to inability to prove defined commitments are controlled and met 1.

Practical 30/60/90-day execution plan

Days 0–30: Establish the baseline and stop the bleeding

  • Name an executive owner for service governance and assign service owners for top services.
  • Define the service catalog template and publish a short standard.
  • Inventory your current “services” from contracts, ITSM categories, customer-facing docs, and major platforms.
  • Draft SLAs/service level targets for the highest-impact services and align metric definitions with monitoring.

Deliverables: v1 catalog, ownership list, v1 SLA templates, metric definitions for priority services.

Days 31–60: Operationalize governance and reporting

  • Stand up approval workflow for new/changed services and SLAs.
  • Implement service reporting for priority services and run the first governance review.
  • Create breach handling workflow: ticket type, severity, root-cause requirement, corrective action tracking.
  • Link third-party commitments (contracts/OLAs) to services with material dependencies.

Deliverables: governance procedure, first service reports, breach workflow, dependency mapping.

Days 61–90: Expand scope and harden evidence

  • Extend catalog coverage to remaining services in scope.
  • Normalize SLAs/OLAs and ensure every service has an owner, monitoring mapping, and review cadence.
  • Run an internal audit-style sampling: trace 5–10 services end-to-end (catalog → SLA → monitoring → report → breach handling → change records).
  • Close gaps and document exceptions with approvals.

Deliverables: audit-ready evidence pack, exception register, sampling results and remediation plan.

Where Daydream fits

If you struggle with scattered evidence, Daydream can serve as the system of record for the service catalog and service level governance requirement by tying each service record to owners, SLAs, reporting evidence, exceptions, and review tasks. The win is audit response speed: one place to pull a service’s definition, commitments, and proof of operation.

Frequently Asked Questions

Do we need a customer-facing service catalog, or is an internal catalog enough?

ISO/IEC 20000-1 expects defined services and commitments; an internal catalog can satisfy this if it is controlled and maps to what customers actually receive 1. If you publish external descriptions, keep them consistent with the internal record.

Can we manage the catalog in a spreadsheet?

You can, but audits often fail on version control, approvals, and change history. If you keep a spreadsheet, store it in a controlled repository with workflow, access controls, and an evidence trail of reviews and updates.

What’s the difference between an SLA and an OLA in practice?

An SLA is the commitment to the customer or service consumer. An OLA is the internal or third-party supporting commitment that makes the SLA achievable, and it should be linked to the same service record.

How do we handle services supported by multiple third parties?

Record third-party dependencies in the catalog and define OLAs or contract terms that cover the parts they operate. Your governance should still measure end-to-end service performance and track breaches back to the service owner.

What evidence is most persuasive to an auditor?

A controlled catalog entry that links directly to the SLA, metric definition, monitoring source, and a recent service level report. Add breach records and corrective actions to show you manage misses, not just report them.

How do we govern “informal” internal services where no one asked for an SLA?

Define internal service level targets anyway for critical internal services. Document the consumer group, support model, and measurement method so you can show consistent operation and accountability.

Footnotes

  1. ISO/IEC 20000-1 overview

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream