Service level management

Service level management under ISO/IEC 20000-1:2018 Clause 8.3.2 requires you to negotiate and agree documented service level agreements (SLAs) with customers, measure actual service performance against SLA targets, and run a planned, periodic review cycle that drives improvements. To operationalize it, you need an SLA inventory, defined metrics and reporting, governance for exceptions, and evidence that reviews result in actions. [1]

Key takeaways:

  • You must have customer-agreed, documented SLAs that are accessible and current. [1]
  • You must continuously compare performance to SLA targets and review SLAs on a planned cadence. [1]
  • You must identify improvements and be able to prove the improvement loop through records, not intent. [1]

Service level management is the control that turns service commitments into measurable obligations. Auditors will not accept “we try to meet expectations” or “the contract covers it.” Clause 8.3.2 expects a closed loop: negotiate and agree SLAs with customers, document them, measure performance against the targets, and periodically review results to drive improvement. [1]

For a Compliance Officer, CCO, or GRC lead, the practical challenge is coordination. SLAs often live in multiple places: master services agreements, order forms, product pages, support policies, and customer-specific amendments. Metrics live somewhere else: monitoring tools, ticketing systems, and incident postmortems. Reviews happen informally, if at all. The requirement is met only when you can show traceability from (1) customer-agreed targets, to (2) measured performance, to (3) a planned review cadence, to (4) documented improvement actions. [1]

This page gives requirement-level implementation guidance you can execute quickly: scope, roles, steps, evidence, audit questions, and an execution plan you can run without turning SLA management into a months-long replatforming project.

Requirement text

ISO/IEC 20000-1:2018 Clause 8.3.2 states: “The organization shall negotiate and agree service level agreements with customers, review service performance against SLA targets, and identify improvements. SLAs shall be available as documented information and shall be reviewed at planned intervals.” [1]

What the operator must do (plain-English interpretation):

  1. Negotiate and agree SLAs with customers. The commitments must be mutually agreed, not implied. [1]
  2. Keep SLAs as documented information. That means controlled, retrievable documents (or equivalent controlled records) that reflect what is in force. [1]
  3. Review service performance against targets. You must measure performance and compare it to the SLA targets, not just report operational metrics. [1]
  4. Identify improvements. Reviews must drive actions: fixes, capacity changes, process changes, or SLA adjustments where appropriate. [1]
  5. Review SLAs at planned intervals. Reviews must be scheduled and repeatable, not ad hoc. [1]

Who this applies to (entity and operational context)

Applies to: any organization delivering services to customers where service commitments exist, including internal service providers supporting business units, and external service providers serving paying customers. [1]

Typical in-scope services:

  • IT operations and infrastructure services (network, compute, storage).
  • SaaS or managed services where availability, support response, or incident resolution targets are promised.
  • Shared services (service desk, identity, endpoint management) where internal customers have expectations that are formally set as SLAs.

Where teams commonly get tripped up:

  • Product marketing “availability” statements conflict with contract SLAs.
  • Support policy targets exist, but customers never agreed to them contractually.
  • Measurements exist, but they don’t map to the SLA definition (for example, measuring system uptime but SLA defines “service availability” with exclusions).

What you actually need to do (step-by-step)

Step 1: Define your SLA universe and ownership

Create an SLA register that answers:

  • Which services have SLAs?
  • Which customers are on which SLA terms?
  • Where is the authoritative document?
  • Who owns the SLA operationally (Service Owner) and contractually (Legal/Sales Ops)?

Assign a single Service Level Manager (role, not necessarily a full-time person) to coordinate the loop across Legal/Sales, Service Management, SRE/Operations, and Support.
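
The register can start as a small structured dataset rather than a scattered set of spreadsheet tabs. Below is a minimal sketch in Python; the services, customers, and document references are hypothetical examples, not prescribed by the standard:

```python
from dataclasses import dataclass

@dataclass
class SlaRegisterEntry:
    service: str          # service the SLA covers
    customer: str         # customer or internal business unit
    document_ref: str     # authoritative document (contract schedule, controlled doc ID)
    effective_date: str   # date the current version took effect
    service_owner: str    # operational owner (Service Owner)
    contract_owner: str   # contractual owner (Legal/Sales Ops)

# Illustrative entries; real data would come from your contract system.
REGISTER = [
    SlaRegisterEntry("Managed Hosting", "Acme Corp", "MSA-2023-014 Schedule B",
                     "2023-04-01", "j.doe", "legal-ops"),
    SlaRegisterEntry("Service Desk", "Internal HR", "intranet/sla/service-desk-v3",
                     "2024-01-15", "s.lee", "it-governance"),
]

def slas_for_customer(customer: str) -> list:
    """Answer the audit question: which SLA terms apply to this customer?"""
    return [e for e in REGISTER if e.customer == customer]
```

The point is fast traceability: given a customer name, you can enumerate the applicable SLA terms and the authoritative document in one step.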

Step 2: Standardize the SLA template and minimum content

Build an SLA template (or standard schedule) with minimum fields:

  • Service scope and boundaries (what is covered, what is excluded)
  • Service hours and maintenance windows
  • SLA targets (availability; support response; resolution targets; service request fulfillment)
  • Measurement method (source systems, calculation rules, exclusions, sampling, time zone)
  • Customer responsibilities/dependencies (customer network, approved configurations)
  • Reporting and review method (how the customer receives reports; how often reviews occur)
  • Remedies or service credits (if applicable)
  • Escalation path and dispute process

Your goal is auditability: a reader can tell exactly how you calculate compliance with each target.
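
One way to enforce the minimum content is to mirror each SLA schedule in a machine-readable record and check it for completeness. A sketch, assuming hypothetical field names and values (nothing here is prescribed by the standard):

```python
# Hypothetical machine-readable mirror of one SLA schedule.
SLA_SCHEDULE = {
    "service_scope": {"covered": ["web app", "API"], "excluded": ["sandbox"]},
    "service_hours": "24x7",
    "maintenance_windows": ["Sun 02:00-04:00 UTC"],
    "targets": {
        "availability_pct": 99.9,        # monthly, after agreed exclusions
        "p1_response_minutes": 30,       # within service hours
        "p1_resolution_hours": 4,
    },
    "measurement": {
        "source_systems": ["monitoring", "ticketing"],
        "timezone": "UTC",
        "exclusions": ["agreed maintenance", "customer-caused outages"],
    },
    "customer_responsibilities": ["maintain supported client versions"],
    "reporting": {"report_frequency": "monthly", "review_frequency": "quarterly"},
    "remedies": {"service_credits": True},
    "escalation_contact": "service-delivery-manager",
}

def missing_fields(schedule: dict) -> list:
    """Flag schedules that omit the template's minimum fields."""
    required = ["service_scope", "service_hours", "targets",
                "measurement", "reporting", "escalation_contact"]
    return [f for f in required if f not in schedule]
```

Running a check like `missing_fields` across the register quickly surfaces schedules that were signed before the template existed.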

Step 3: Implement measurement that maps to the SLA definition

For each SLA target, document:

  • Metric name (as written in the SLA)
  • System of record (monitoring platform, ticketing tool, status page logs)
  • Calculation rules (what counts, what doesn’t)
  • Data retention (how long you keep raw evidence and summaries)
  • Control owner (who signs off on the data quality)

Common example mappings:

  • Availability target → monitoring/observability data plus incident logs for exclusions.
  • Support response target → ticket timestamps and business-hours calendars.
  • Resolution target → incident/problem records tied to severity taxonomy.

If you cannot measure a target as written, treat it as a compliance issue: either fix instrumentation or renegotiate the SLA language.
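
To make the “measure as written” point concrete, here is a hedged sketch of an availability calculation that subtracts agreed maintenance windows from outage time. It assumes maintenance windows do not overlap one another; real SLA exclusion rules (customer-caused outages, force majeure) would add further terms:

```python
from datetime import datetime, timedelta

def availability_pct(period_start, period_end, outages, maintenance):
    """Availability as the SLA defines it: outage time counts only when it
    falls outside agreed maintenance windows.

    `outages` and `maintenance` are lists of (start, end) datetime tuples;
    maintenance windows are assumed not to overlap one another."""
    def overlap(a, b):
        start, end = max(a[0], b[0]), min(a[1], b[1])
        return max(timedelta(0), end - start)

    total = period_end - period_start
    downtime = timedelta(0)
    for outage in outages:
        in_period = overlap(outage, (period_start, period_end))
        # Subtract any part of the outage covered by a maintenance window.
        excluded = sum((overlap(outage, m) for m in maintenance), timedelta(0))
        downtime += max(timedelta(0), in_period - excluded)
    return 100.0 * (1 - downtime / total)

# A 2-hour outage with 1 hour inside a maintenance window, over a 30-day month:
june = (datetime(2024, 6, 1), datetime(2024, 7, 1))
outage = [(datetime(2024, 6, 10, 1), datetime(2024, 6, 10, 3))]
window = [(datetime(2024, 6, 10, 2), datetime(2024, 6, 10, 4))]
print(round(availability_pct(*june, outage, window), 3))  # 99.861
```

Only one of the two outage hours counts as downtime, which is exactly the distinction between raw system uptime and “service availability” as the SLA defines it.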

Step 4: Establish a planned SLA review cadence and agenda

Clause 8.3.2 requires planned intervals. Define:

  • Review calendar [2]
  • Standard agenda: performance vs targets, breaches and root causes, trend analysis, customer complaints, upcoming changes, improvement actions
  • Attendance: Service Owner, Support lead, Ops/SRE, Customer Success/Account owner, and Compliance/GRC as needed
  • Outputs: minutes, action items, owners, due dates, and any proposed SLA changes

Keep it lightweight but consistent. The evidence is the repeatable rhythm and the documented outcomes.
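
The “performance vs targets” agenda item is easiest to evidence when every reported metric carries an explicit breach status. A minimal sketch (metric names and targets are illustrative):

```python
def review_row(metric, target, actual, higher_is_better=True):
    """One row of a target-vs-actual review pack, with explicit breach status."""
    met = actual >= target if higher_is_better else actual <= target
    return {"metric": metric, "target": target, "actual": actual,
            "status": "met" if met else "BREACH"}

# Illustrative values; pull real numbers from monitoring and ticketing.
report = [
    review_row("availability_pct", 99.9, 99.95),
    review_row("p1_response_minutes", 30, 42, higher_is_better=False),
]
for row in report:
    print(f"{row['metric']}: target {row['target']}, "
          f"actual {row['actual']} -> {row['status']}")
```

Note the direction flag: availability should meet or exceed its target, while response times should stay at or under theirs.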

Step 5: Manage breaches, exceptions, and improvements as controlled workflows

Implement three workflows:

  1. SLA breach management: detect, validate, communicate to the customer, document the cause, and track corrective actions. [1]
  2. Exception handling: customer-specific deviations (temporary waivers, pilot services) require documented approval and expiry.
  3. Continual improvement: create improvement tickets (capacity, monitoring gaps, process changes), track to closure, and link them back to the review that raised them. [1]

This is where tools help. Many teams use ticketing plus spreadsheets; it works until scale breaks. Daydream can help by centralizing SLA obligations, mapping them to evidence sources, and generating audit-ready review packs without chasing screenshots across teams.

Required evidence and artifacts to retain

Auditors look for “documented information” and proof of operation over time. Keep:

  • SLA register (inventory with owners, effective dates, customer mapping)
  • Executed SLAs (or contract schedules) and controlled versions
  • SLA template and measurement methodology documentation
  • Monitoring and ticketing reports showing performance vs targets
  • SLA review schedule (calendar or plan) and review minutes
  • Breach records: notifications, internal analysis, corrective actions, closure evidence
  • Improvement log linked to reviews (what changed and why)
  • Change records when changes impact SLA performance (release notes, maintenance notices)
  • Access controls to show SLAs and reports are available to authorized users and not stale

If you retain only dashboards, you will struggle. Dashboards change; auditors want snapshots or exported reports tied to a time period and a specific SLA.

Common exam/audit questions and hangups

Expect questions like:

  • “Show me all current SLAs and which customers they apply to.” [1]
  • “Where is the authoritative SLA document stored, and how do you control updates?” [1]
  • “Pick one SLA target. Walk me from the contract language to the measurement to the report.”
  • “Show evidence of planned SLA reviews and the resulting improvement actions.” [1]
  • “How do you handle breaches and customer notifications?”
  • “How do you ensure metrics reflect the SLA definition (hours, exclusions, time zones)?”

Hangups that drive findings:

  • Reviews happen, but no documented minutes or action tracking exists.
  • Service performance is measured, but no explicit comparison to SLA targets is recorded.
  • Customer-specific SLAs exist, but nobody can enumerate them reliably.

Frequent implementation mistakes (and how to avoid them)

  1. SLA language that can’t be measured. Fix with a measurement appendix in the SLA and pre-signature operational review by SRE/Support.
  2. Single global SLA that ignores customer-specific amendments. Maintain a per-customer SLA mapping in the SLA register.
  3. Reporting metrics without thresholds. Reports must show “target vs actual” and breach status, not raw numbers.
  4. Ad hoc reviews. Put reviews on a calendar, keep minutes, track actions. Planned intervals are explicit in the clause. [1]
  5. No improvement linkage. Auditors want to see the loop: performance review → improvement identified → work completed. [1]

Enforcement context and risk implications

There are no public enforcement cases tied to this requirement; ISO/IEC 20000-1 is a certifiable standard, not a regulation. Practically, weaknesses in service level management drive:

  • Contract disputes and service credit exposure (if remedies exist).
  • Customer churn from repeated misses or poor transparency.
  • Audit nonconformities under ISO/IEC 20000-1 certification audits if SLAs are undocumented, not reviewed on a plan, or performance is not assessed against targets. [1]

Practical execution plan (30/60/90-day)

Exact timelines depend on size and tooling, but this phased plan matches how most compliance teams execute without stalling delivery.

First 30 days (stabilize and inventory)

  • Build the SLA register for top services and top customers.
  • Collect current SLA documents and identify the authoritative location per SLA.
  • Identify the top SLA targets and whether they are measurable with current data.
  • Define owners: Service Owner, measurement owner, review owner.

Days 31–60 (make it measurable and reviewable)

  • Standardize the SLA template and measurement methodology.
  • Implement target-vs-actual reporting for priority SLAs.
  • Create the SLA review calendar and a standard review agenda.
  • Run initial reviews for priority services; capture minutes and action items.

Days 61–90 (close the loop and harden governance)

  • Implement breach and exception workflows, with approval and recordkeeping.
  • Build an improvement log linked to SLA reviews and track actions to closure.
  • Perform an internal audit-style walk-through: contract → metrics → report → review minutes → improvement tickets.
  • Decide whether to centralize evidence and workflows in a system like Daydream to reduce manual effort and improve audit readiness.

Frequently Asked Questions

Do SLAs have to be signed documents?

The requirement is that SLAs are negotiated, agreed with customers, and available as documented information. [1] In practice, “agreed” is easiest to prove through executed contract schedules, order forms, or written acceptance tied to the SLA text.

What counts as “reviewed at planned intervals”?

You need a defined review plan (calendar or schedule) and evidence that reviews occurred and produced outputs. [1] Pick an interval appropriate to service criticality and customer expectations, then follow it consistently.

Can we meet the requirement with dashboards only?

Dashboards help, but you still need documented SLAs, target-vs-actual evaluation, and review records that identify improvements. [1] Exported reports or archived snapshots reduce audit friction.

Who should own SLA management: Support, SRE, or GRC?

Operationally, Service Owners and Support/SRE own performance and measurement. GRC should own governance: the SLA register, review evidence, and audit readiness. Keep one coordinator accountable for the end-to-end loop.

How do we handle customer-specific SLAs without drowning in complexity?

Standardize a base SLA and track deltas in a structured way (amendments, addenda, or a customer-specific schedule) tied into the SLA register. The audit goal is fast traceability: customer → SLA terms → measurement → review records.

What if the contract has an SLA but we can’t measure it accurately yet?

Treat it as a priority gap. Either improve instrumentation to match the SLA definition or renegotiate the SLA language to match what you can measure and control. Continuing without a plan creates repeated breaches and weak audit posture. [1]

Footnotes

  [1] ISO/IEC 20000-1:2018, Information technology — Service management — Part 1: Service management system requirements

  [2] The review interval may be set per customer tier or per service line.
