Monitoring, measurement, analysis and evaluation
To meet ISO/IEC 20000-1:2018 Clause 9.1, you must define what service management performance you will monitor and measure, how you will do it, when you will do it, and when/how you will analyze and evaluate results so they drive decisions and improvements. Treat this as a governed measurement program, not ad hoc reporting. 1
Key takeaways:
- You need a documented, repeatable measurement design: metrics, methods, frequency, ownership, and evaluation cadence.
- “Analysis and evaluation” must produce decisions (actions, risk acceptance, improvement plans), not just dashboards.
- Evidence is the point: auditors look for defined measures, consistent execution, and management review outputs tied to results.
Clause 9.1 is the ISO 20000 requirement that forces operational discipline around performance and control health. If you run an IT service management system (SMS), you already collect data (tickets, uptime, changes, incidents). The gap is usually governance: metrics are scattered, inconsistent, or disconnected from decisions. ISO 20000 asks you to determine four things: what to monitor and measure, the methods, the timing, and when you analyze/evaluate results. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path is to stand up a single “Measurement & Evaluation Plan” for the SMS. This plan acts as the control center: it defines the service KPIs you rely on, the control indicators you use to detect process breakdowns, and the management cadence that turns data into actions. You can implement it without buying new tools by standardizing definitions, assigning owners, and producing consistent outputs (dashboards, reviews, corrective actions).
This page gives requirement-level implementation guidance you can execute quickly: who it applies to, step-by-step operationalization, audit-ready artifacts, common auditor hangups, and a practical 30/60/90-day plan.
Regulatory text
ISO/IEC 20000-1:2018 Clause 9.1 states: “The organization shall determine what needs to be monitored and measured, methods for monitoring and measurement, when monitoring and measuring shall be performed, and when results shall be analysed and evaluated.” 1
Operator meaning: you must design and run a controlled measurement program for your service management system. That includes:
- A defined scope of measures (what you monitor/measure).
- Defined methods (how data is captured, calculated, validated).
- Defined cadence (when you collect and when you review).
- Defined evaluation (how results are interpreted, what decisions follow, and how actions are tracked).
Plain-English interpretation (what auditors expect)
Auditors expect you to show that measurement is intentional, repeatable, and decision-driving:
- Intentional: metrics exist because they support service requirements, customer commitments, and process control.
- Repeatable: the same definitions and data sources are used consistently.
- Decision-driving: results are reviewed, exceptions are explained, and actions are taken (or risk is accepted) with traceable approvals.
Who it applies to (entity and operational context)
Applies to: any organization operating an ISO/IEC 20000-1 service management system, including internal IT organizations and external service providers. 1
Operational contexts where Clause 9.1 gets tested hardest:
- Multi-team delivery where incidents, changes, and requests cross boundaries.
- Environments with third parties delivering critical service components (cloud, MSPs, SaaS). Your metrics may depend on third-party data, so you need defined methods and validation.
- High-change services where poor measurement creates blind spots (release failures, major incidents, capacity constraints).
- Any organization preparing for certification, surveillance audits, or internal assurance reviews.
What you actually need to do (step-by-step)
Step 1: Define your measurement objectives (tie to service and SMS outcomes)
Write a short statement of what measurement must enable, for example:
- Demonstrate service performance against commitments.
- Detect process nonconformities early (incident, change, problem, capacity, availability).
- Provide inputs to management review and continual improvement.
Keep objectives few and specific. If a metric doesn’t support an objective, it’s noise.
Step 2: Build a controlled inventory of “what to monitor and measure”
Create a single register (spreadsheet is acceptable) with three classes:
- Service performance KPIs (customer-facing)
  - Availability / reliability indicators per critical service
  - Incident restoration outcomes (time to restore, repeat incidents)
  - Request fulfillment outcomes (throughput, aging)
- Process control indicators (how healthy your SMS processes are)
  - Change outcomes (success rate, emergency change trends)
  - Problem management outcomes (recurrence reduction, known error backlog health)
  - CMDB/config accuracy checks (where applicable)
- Risk and compliance indicators
  - SLA breaches and root causes
  - Major incident counts and after-action completion
  - Third-party performance indicators where they materially affect your services
For each metric, define: purpose, owner, scope, and the decision it informs.
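If you keep the register in a spreadsheet, the fields below map one-to-one to columns; if you keep it in code or a GRC tool, the same fields apply. A minimal sketch in Python, with field names that are illustrative rather than prescribed by the standard:

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """One row of the metrics register: what is measured, by whom, and why."""
    metric_id: str          # stable identifier, e.g. "KPI-AVAIL-001"
    name: str               # human-readable name
    metric_class: str       # "service_kpi", "process_control", or "risk_compliance"
    purpose: str            # why this metric exists
    owner: str              # accountable person or role
    scope: str              # services/processes covered
    decision_informed: str  # the decision this metric supports
    data_source: str        # system of record, e.g. ITSM platform or monitoring tool
    review_cadence: str     # forum and frequency where results are evaluated

# Hypothetical example entry (values are illustrative, not taken from the standard)
availability_kpi = MetricDefinition(
    metric_id="KPI-AVAIL-001",
    name="Critical service availability",
    metric_class="service_kpi",
    purpose="Demonstrate performance against customer availability commitments",
    owner="Service Owner - Payments",
    scope="Payments service, production only",
    decision_informed="Escalate to problem management if availability trends below target",
    data_source="Monitoring platform uptime records",
    review_cadence="Monthly service review",
)
```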
Step 3: Define methods for monitoring and measurement (make metrics auditable)
For each metric, specify a method that can be executed consistently:
- Data source: tool/system of record (ITSM platform, monitoring system, logs).
- Calculation rule: numerator/denominator and inclusions/exclusions.
- Sampling approach: if not fully automated, define sampling rules and who performs it.
- Data quality checks: reconciliations (for example, ticket status completeness, timestamp integrity).
- Tool controls: access control for metric changes and dashboard edits.
Auditor hangup to preempt: two teams calculating “the same metric” differently. Your method section must remove ambiguity.
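To make that concrete, here is a minimal sketch, assuming change records exported from an ITSM tool with hypothetical field names, of a calculation rule with explicit inclusions/exclusions plus a basic timestamp integrity check:

```python
def change_success_rate(changes: list[dict]) -> float:
    """Calculation rule: successful normal changes / all completed normal changes.
    Exclusions: emergency changes and changes still in progress."""
    in_scope = [
        c for c in changes
        if c["type"] == "normal" and c["status"] == "completed"
    ]
    if not in_scope:
        return 0.0
    successful = [c for c in in_scope if c["outcome"] == "successful"]
    return len(successful) / len(in_scope)

def timestamp_integrity_check(changes: list[dict]) -> list[str]:
    """Data quality check: flag records whose close time precedes their open time."""
    return [
        c["id"] for c in changes
        if c.get("closed_at") and c["closed_at"] < c["opened_at"]
    ]

# Illustrative records (hypothetical field names, not a real ITSM export schema)
records = [
    {"id": "CHG-1", "type": "normal", "status": "completed", "outcome": "successful",
     "opened_at": "2024-05-01T09:00", "closed_at": "2024-05-01T12:00"},
    {"id": "CHG-2", "type": "emergency", "status": "completed", "outcome": "failed",
     "opened_at": "2024-05-02T09:00", "closed_at": "2024-05-02T10:00"},
]
print(change_success_rate(records))        # 1.0 - the emergency change is excluded
print(timestamp_integrity_check(records))  # [] - no timestamp anomalies
```

Writing the rule down at this level of precision is what stops two teams from reporting different numbers for "the same" KPI.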
Step 4: Set the cadence: when to monitor/measure vs. when to analyze/evaluate
Clause 9.1 distinguishes collection from evaluation. Define both:
- Monitoring/measurement frequency: real-time, daily, weekly, monthly, per release, per major incident.
- Analysis/evaluation cadence: a scheduled forum where results are interpreted (weekly ops review, monthly service review, quarterly management review).
Practical rule: operational teams can watch dashboards continuously, but the SMS needs a documented cadence where someone is accountable for interpreting the results and making decisions.
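A small sketch of how that separation can be recorded per metric; the metric IDs, frequencies, and forum names are illustrative assumptions:

```python
# Collection cadence and evaluation cadence defined separately per metric,
# mirroring Clause 9.1's distinction between measuring and evaluating.
measurement_cadence = {
    "KPI-AVAIL-001": {
        "collection": "real-time (monitoring platform)",
        "evaluation": "monthly service review",
    },
    "PCI-CHG-001": {
        "collection": "per change record, aggregated weekly",
        "evaluation": "weekly operations review",
    },
    "RCI-SLA-001": {
        "collection": "monthly, from SLA reports",
        "evaluation": "quarterly management review",
    },
}
```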
Step 5: Define thresholds, targets, and decision rules (what “good/bad” means)
For each key metric, document:
- Target/threshold: SLA target, internal target, or “no target; trend-only” where appropriate.
- Trigger: what causes escalation (breach, adverse trend, control failure).
- Required response: corrective action ticket, problem record, risk acceptance, or improvement plan.
Avoid fake precision. It is acceptable to start with conservative thresholds and refine after you learn baseline performance.
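As an illustration of trigger logic (the thresholds, the adverse-trend rule, and the response names are all assumptions, not requirements from the standard), a decision rule might look like:

```python
from enum import Enum

class Response(Enum):
    """Required response types when a trigger fires (names are illustrative)."""
    NONE = "no action - within target"
    CORRECTIVE_ACTION = "raise corrective action"
    PROBLEM_RECORD = "open problem record"
    RISK_ACCEPTANCE = "document risk acceptance with approval"

def evaluate_threshold(value: float, target: float, prior_values: list[float]) -> Response:
    """Decision rule: a target breach requires a corrective action; three consecutive
    readings hovering just above target (assumed margin: 0.5%) require a problem record."""
    if value < target:
        return Response.CORRECTIVE_ACTION
    recent = prior_values[-2:] + [value]
    margin = target * 1.005
    if len(recent) == 3 and all(target <= v < margin for v in recent):
        return Response.PROBLEM_RECORD
    return Response.NONE

# Hypothetical availability readings against a 99.5% target
print(evaluate_threshold(99.2, 99.5, [99.8, 99.7]))     # Response.CORRECTIVE_ACTION (breach)
print(evaluate_threshold(99.55, 99.5, [99.52, 99.51]))  # Response.PROBLEM_RECORD (adverse trend)
```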
Step 6: Run a recurring evaluation forum and record decisions
Set up a recurring review with an agenda and minutes:
- Review key trends and exceptions.
- Confirm whether targets were met and why not.
- Decide actions: corrective actions, problem investigations, improvement initiatives, or risk acceptance.
- Assign owners and due dates; track to closure.
This is where “analysis and evaluation” becomes provable. Dashboards alone rarely satisfy auditors without documented evaluation outputs.
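If you want a lightweight structure for recording those decisions, a sketch follows; a ticket in your ITSM tool serves the same purpose, and the field names here are illustrative:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class EvaluationDecision:
    """One decision from the review forum, traceable from metric exception to closure."""
    metric_id: str
    review_date: date
    finding: str            # what the analysis showed
    decision: str           # corrective action, problem record, improvement, or risk acceptance
    action_owner: str
    due_date: date
    closed_on: Optional[date] = None
    closure_evidence: Optional[str] = None  # link to ticket, problem record, or approval

    def is_overdue(self, today: date) -> bool:
        """Open past its due date: the review forum should chase it explicitly."""
        return self.closed_on is None and today > self.due_date
```

The design point is that closure is provable rather than assumed: every exception has an owner, a due date, and a link to the evidence that it was resolved or formally accepted.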
Step 7: Integrate third parties where they influence service performance
If third parties provide monitoring data or operate parts of the service:
- Define how you obtain their reports, how you validate them, and how they feed your evaluation cadence.
- Align third-party measures to your service KPIs (avoid a separate, disconnected third-party scorecard).
If you use Daydream for third-party due diligence and ongoing monitoring, map third-party performance indicators into the same Measurement & Evaluation Plan so service reviews cover end-to-end delivery, not just internal teams.
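Before third-party figures enter your evaluation forum, validate them. A minimal sketch of a spot-check reconciliation, assuming the provider sends a periodic report and you hold your own incident records (both structures are hypothetical):

```python
def reconcile_third_party_report(provider_report: dict, internal_incidents: list[dict],
                                 tolerance: int = 0) -> list[str]:
    """Spot-check: compare the provider's reported incident count per service
    against internal records before accepting their SLA figures."""
    findings = []
    for service, reported_count in provider_report["incident_counts"].items():
        internal_count = sum(1 for i in internal_incidents if i["service"] == service)
        if abs(internal_count - reported_count) > tolerance:
            findings.append(
                f"{service}: provider reports {reported_count} incidents, "
                f"internal records show {internal_count} - investigate before evaluation"
            )
    return findings

# Illustrative inputs (hypothetical structures, not a real provider report format)
report = {"incident_counts": {"hosting": 3, "backup": 1}}
incidents = [{"service": "hosting"}, {"service": "hosting"}, {"service": "backup"}]
print(reconcile_third_party_report(report, incidents))
# ['hosting: provider reports 3 incidents, internal records show 2 - investigate before evaluation']
```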
Required evidence and artifacts to retain
Auditors want objective evidence that you determined the four required elements and executed them consistently. Maintain:
- Measurement & Evaluation Plan (or “Monitoring and Measurement Procedure”) covering what/how/when/analyze-evaluate. 1
- Metrics register with definitions, owners, data sources, calculation rules, thresholds, and review cadence.
- Dashboards/reports (exported snapshots or access-controlled views) showing the metrics over time.
- Review records: agendas, minutes, attendance, decisions, and action logs from service reviews and management reviews.
- Corrective action records linked to metric exceptions (tickets, problem records, improvement plans).
- Data quality evidence: reconciliations, sampling worksheets, or notes on metric corrections and approvals.
- Third-party performance inputs where relevant (SLA reports, incident summaries) and proof they were reviewed.
Common exam/audit questions and hangups
Expect these questions:
- “Show me how you decided what to measure.” Have a metrics register with rationale and linkage to service requirements.
- “Where is the method documented for this KPI?” Point to calculation rules, data source, and any exclusions.
- “How do you know the data is accurate?” Provide data quality checks and change control for metric definitions.
- “Who reviews these results and what actions came from them?” Produce minutes and action tracking with examples of closed items.
- “How do third-party services factor into your monitoring?” Show integrated reporting and review, not an inbox of PDFs.
Frequent implementation mistakes and how to avoid them
- Too many metrics, no decisions
  - Symptom: large dashboards nobody can explain.
  - Fix: require each metric to name the decision it informs; retire “vanity” measures.
- Uncontrolled metric definitions
  - Symptom: the same KPI changes month to month due to tool changes or manual edits.
  - Fix: version-control metric definitions and restrict who can change dashboards.
- Collection without evaluation
  - Symptom: you can show reports but not minutes, actions, or follow-up.
  - Fix: create a documented evaluation cadence with recorded outcomes.
- Targets with no trigger logic
  - Symptom: teams miss a target and nothing happens.
  - Fix: define triggers and required response types (problem, corrective action, risk acceptance).
- Ignoring third-party contribution
  - Symptom: “We can’t measure that; it’s the provider’s responsibility.”
  - Fix: measure end-to-end outcomes and require third-party inputs as part of service governance.
Enforcement context and risk implications
No public enforcement cases were provided for ISO/IEC 20000-1 in the source catalog. From a risk standpoint, weak monitoring and evaluation increases the chance you miss control failures (for example, rising incident recurrence, change instability, or chronic SLA breaches) until they become customer-impacting events. It also undermines your ability to prove effective service management during certification or customer audits.
Practical 30/60/90-day execution plan
First 30 days: establish control design and minimum viable evidence
- Appoint metric owners for critical services and core SMS processes.
- Draft the Measurement & Evaluation Plan with the required four determinations (what/how/when/analyze-evaluate). 1
- Build the initial metrics register and standardize definitions for the top service KPIs.
- Start capturing review minutes for at least one recurring forum (weekly ops or monthly service review).
By 60 days: run the cadence and close the loop on exceptions
- Operationalize data quality checks for the highest-risk metrics.
- Add trigger logic and action pathways (problem record vs corrective action vs accepted risk).
- Demonstrate at least one end-to-end exception lifecycle: metric breach → analysis → action → closure evidence.
- Integrate third-party reporting where it affects service performance; document method and review.
By 90 days: mature evaluation, management review inputs, and continuous improvement
- Expand metrics coverage to remaining in-scope services and key processes.
- Show trend analysis and documented evaluations feeding management review.
- Implement change control for metric definitions and dashboard updates.
- If using Daydream, align third-party monitoring outputs to your metrics register so service governance includes third-party performance without manual chasing.
Frequently Asked Questions
What’s the difference between “monitoring and measurement” and “analysis and evaluation” in Clause 9.1?
Monitoring and measurement are the collection activities (getting the data). Analysis and evaluation are the decision activities (interpreting results, determining compliance with targets, and deciding actions). ISO 20000 expects both to be defined and evidenced. 1
Do we need to document every possible metric we track in tools?
No. Document the measures that matter for your SMS and services, and make sure they have defined methods and review cadences. Keep ad hoc diagnostics separate from the controlled metrics register.
Can a dashboard alone satisfy this requirement?
A dashboard helps prove monitoring and measurement, but auditors typically also expect evidence of analysis and evaluation (minutes, decisions, action tracking). Your evidence should show that results drive actions.
How do we handle metrics that depend on third-party data?
Define the method for obtaining the data, the cadence, and how you validate it (spot checks, reconciliations, contract/SLA alignment). Then include it in the same evaluation forums used for internal metrics.
What if we don’t have good baselines to set targets yet?
Start with trend-based evaluation and document the intent to set targets after you establish baseline performance. Record decisions based on trend movement and known risks, then formalize targets once stable.
Who should own the Measurement & Evaluation Plan: GRC or IT?
IT service owners should own the metrics and corrective actions; GRC should own governance, consistency, and audit readiness. Split ownership cleanly: GRC defines the standard, service teams run it and produce evidence.
Footnotes
1. ISO/IEC 20000-1:2018, Information technology — Service management — Part 1: Service management system requirements.