Change management
ISO/IEC 20000-1:2018 Clause 8.5.1 requires you to manage changes to your service management system (SMS) and services through a controlled, auditable process: record the change, assess risk/impact, obtain authorization, build and test, then complete a post-implementation review. Operationalizing this means running every change through a consistent workflow with clear approvals, testing evidence, and review outcomes. 1
Key takeaways:
- Treat “change management” as an end-to-end workflow, not a CAB meeting.
- You need traceability: request → assessment → approval → build/test → implementation → review.
- Auditors look for consistent classification, segregation of duties, and proof that reviews drive corrective actions.
Change management is one of the fastest ways auditors determine whether your service organization is actually in control of production. ISO/IEC 20000-1:2018 Clause 8.5.1 is explicit: every change to the SMS and services must be controlled, recorded, assessed for risk and impact, authorized, built and tested, and reviewed after implementation. 1
For a Compliance Officer, CCO, or GRC lead, the operational challenge is rarely writing a policy. The hard part is getting consistent behavior across engineering, infrastructure, IT operations, and service owners, while still allowing the business to ship. This page translates the requirement into an implementable control system: roles, decision points, minimum required fields in a change record, evidence to retain, and how to handle common edge cases (emergency changes, standard changes, and third-party-delivered changes).
If you already have an ITIL-style process, focus on making it “audit-ready”: defined change types, documented risk/impact criteria, approval rules that match risk, and post-implementation reviews that produce corrective actions when things go wrong.
Regulatory text
Requirement (Clause 8.5.1): “The organization shall manage changes to the service management system and services in a controlled manner. Changes shall be recorded, assessed for risk and impact, authorized, built and tested, and reviewed post-implementation.” 1
What the operator must do:
- Put one defined process in place for changes to both the service management system (processes, roles, tooling, governance) and the services you deliver (production systems, configurations, service components).
- Ensure each change has objective evidence for each mandated step: record → assess → authorize → build/test → post-implementation review.
- Make the process consistent enough that you can sample changes and prove control operation without relying on tribal knowledge. 1
Plain-English interpretation (what “good” looks like)
You must be able to answer, for any meaningful change: who requested it, what will change, what could break, who approved it, what testing was done, when it was deployed, whether it worked, and what you learned afterward. If the change caused an incident, the post-implementation review must drive a fix to the change process, the service, or both.
A controlled process does not mean “slow.” It means the approval path and test depth scale with risk, and every change leaves an audit trail.
Who it applies to (entity and operational context)
This applies to any organization operating an ISO/IEC 20000-aligned service management system, including:
- Internal IT service organizations supporting business services.
- Managed service providers and other service providers delivering services to customers.
- Teams running production platforms where changes can affect availability, security, data integrity, or customer commitments. 1
Operational scope to include:
- Application and infrastructure changes (code, configuration, network, IAM policies, CI/CD pipelines).
- Service management system changes (change procedure updates, new approval rules, tool migrations).
- Third-party changes that affect your services (for example: a cloud provider feature change you adopt, a SaaS configuration change, a subcontractor release). You may not control their internal process, but you must control how you assess, approve, test, and review adoption of their changes within your services.
What you actually need to do (step-by-step)
1) Define change types and minimum workflow states
Create change categories that map to risk and approval rigor. Keep it simple:
- Standard change: pre-approved, repeatable, low-risk, with a proven implementation plan.
- Normal change: the default path, requires documented assessment and explicit approval.
- Emergency change: expedited path to restore service or address urgent risk, with after-the-fact review.
Minimum workflow states you need in your tooling:
- Draft / Submitted
- Risk & impact assessment complete
- Authorized / Approved
- Implemented (with implementation evidence)
- Post-implementation review complete
- Closed
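The states and transitions above can be sketched as a small state machine. This is an illustrative encoding, not anything the standard prescribes; the state names and transition map are assumptions matching the list above, and the point is that tooling should reject skipped steps.

```python
from enum import Enum

class ChangeState(Enum):
    DRAFT = "draft"
    SUBMITTED = "submitted"
    ASSESSED = "risk_impact_assessed"
    AUTHORIZED = "authorized"
    IMPLEMENTED = "implemented"
    REVIEWED = "pir_complete"
    CLOSED = "closed"

# Allowed forward transitions; anything else is rejected so a change
# cannot skip a mandated step (e.g. jump from submitted to implemented).
TRANSITIONS = {
    ChangeState.DRAFT: {ChangeState.SUBMITTED},
    ChangeState.SUBMITTED: {ChangeState.ASSESSED},
    ChangeState.ASSESSED: {ChangeState.AUTHORIZED},
    ChangeState.AUTHORIZED: {ChangeState.IMPLEMENTED},
    ChangeState.IMPLEMENTED: {ChangeState.REVIEWED},
    ChangeState.REVIEWED: {ChangeState.CLOSED},
    ChangeState.CLOSED: set(),
}

def advance(current: ChangeState, target: ChangeState) -> ChangeState:
    """Move a change to the next state, refusing any out-of-order jump."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.value} -> {target.value}")
    return target
```

However you implement it, the audit-relevant property is that the "authorized" gate cannot be reached without the assessment state, and "closed" cannot be reached without the review state.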
2) Standardize the change record (your audit backbone)
Every change ticket/request should capture, at minimum:
- Unique ID, requester, service/system affected, environment, planned window
- Description of change and reason/business justification
- Risk assessment (likelihood/impact narrative is fine) and impact analysis (what services/users could be affected)
- Dependencies and affected configuration items (or equivalent mapping)
- Implementation plan and back-out/rollback plan
- Test plan and test results (or reference to build/test evidence)
- Approval(s): who approved, when, and under what criteria
- Communications plan (who needs notice, customer impact if applicable)
- Post-implementation review outcome, including incidents, follow-ups, and lessons learned 1
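One way to enforce the minimum field set is to model the change record as a structured object and block closure while mandatory fields are empty. The field names below are a simplified illustration of the list above, not a required schema.

```python
from dataclasses import dataclass, field, fields

@dataclass
class ChangeRecord:
    change_id: str = ""
    requester: str = ""
    service_affected: str = ""
    description: str = ""
    justification: str = ""
    risk_assessment: str = ""
    impact_analysis: str = ""
    rollback_plan: str = ""
    test_evidence_url: str = ""
    approvals: list = field(default_factory=list)
    pir_outcome: str = ""

def missing_fields(record: ChangeRecord) -> list:
    """Names of empty mandatory fields; a non-empty result blocks closure."""
    return [f.name for f in fields(record) if not getattr(record, f.name)]
```

In practice this check lives in your change tool's workflow configuration rather than in code you write, but the principle is the same: closure is conditional on a complete record.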
3) Implement risk/impact assessment rules that drive approvals
Define a small set of risk drivers to force consistency:
- Customer impact potential (availability, performance, data)
- Security/privacy impact potential (credential/IAM changes, exposure risk)
- Reversibility (easy rollback vs. irreversible data migrations)
- Novelty (first-time change vs. repeated and proven)
Then tie risk levels to required approvals and test depth. Auditors care that this is defined and followed more than they care about your specific scoring model.
4) Enforce authorization and segregation of duties
Authorization must be explicit and recorded. Practical rules:
- The approver must be accountable for the service (service owner) or control gate (Change Manager/CAB), depending on your model.
- High-risk changes should require at least two perspectives (service owner + technical authority or security reviewer).
- Avoid “self-approval” for meaningful production changes. If your org is small, document compensating controls (peer review, automated guardrails, after-the-fact management review). The standard requires authorization; it does not prescribe org size, but auditors will test whether approvals are independent in practice. 1
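The segregation-of-duties rules above reduce to a simple check: approvals only count if the approver is neither the requester nor the implementer, and high-risk changes need two distinct independent approvers. A sketch under those assumptions:

```python
def authorization_ok(requester: str, implementer: str,
                     approvers: list, risk: str) -> bool:
    """Reject self-approval; require two distinct independent approvers for high risk."""
    independent = {a for a in approvers if a not in (requester, implementer)}
    required = 2 if risk == "high" else 1
    return len(independent) >= required
```

In a small organization where this check cannot always pass, the failing cases are exactly where you document compensating controls.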
5) Build and test with evidence you can produce on demand
“Built and tested” must be provable. Your evidence can be:
- CI/CD pipeline logs showing build and test stages passed
- Test case results, change validation scripts, or runbook verification outputs
- Peer review evidence linked to the change (code review record, configuration review)
Define what “minimum testing” means by change type (standard vs. normal vs. emergency) and by environment.
6) Control implementation and capture implementation proof
Require implementers to attach or reference:
- Deployment record (pipeline run, release tag, change window confirmation)
- Verification steps performed (smoke tests, monitoring checks)
- Any deviations from plan (what changed and why)
If you use a third party to implement changes, require them to provide implementation notes and test evidence, then attach it to your change record.
7) Run post-implementation review (PIR) with teeth
A PIR is required, not optional, even when the change “worked.” 1
Minimum PIR fields:
- Outcome: successful / successful with issues / failed / rolled back
- Incidents or alerts triggered and their linkage
- What went well / what did not
- Corrective actions with owners (process update, monitoring improvement, better test coverage, update standard change template)
Trigger mandatory deeper PIR for:
- Emergency changes
- Changes that caused incidents or customer-visible disruption
- Repeated failed changes
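The deep-PIR triggers above can be expressed as one predicate so the tooling, not a human's memory, decides when a lightweight review is not enough. The outcome labels and the failure threshold are assumptions mirroring the lists above:

```python
def needs_deep_pir(change_type: str, outcome: str,
                   caused_incident: bool, recent_failures: int) -> bool:
    """Escalate to a deeper review for emergencies, incidents, failures, or repeats."""
    return (change_type == "emergency"
            or caused_incident
            or outcome in {"failed", "rolled_back"}
            or recent_failures >= 2)
```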
8) Measure and govern the process (lightweight but real)
You do not need fancy metrics to meet Clause 8.5.1, but you do need governance:
- Periodic sampling review of changes for completeness and adherence
- Backlog management for PIR action items
- Management visibility into repeat failure patterns 1
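Periodic sampling review boils down to one number worth tracking: the fraction of sampled changes with every mandated artifact present. A sketch, assuming sampled records arrive as dictionaries with the artifact keys named below (illustrative names):

```python
def adherence_rate(sampled: list,
                   required=("risk_assessment", "approval",
                             "test_evidence", "pir_outcome")) -> float:
    """Fraction of sampled change records with all mandated artifacts present."""
    if not sampled:
        return 1.0
    complete = sum(1 for rec in sampled if all(rec.get(k) for k in required))
    return complete / len(sampled)
```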
If you manage this in Daydream or another GRC system, the practical win is mapping the required workflow steps to required artifacts and setting evidence reminders so teams cannot close a change without the minimum audit trail.
Required evidence and artifacts to retain
Keep evidence tied to specific change IDs. Auditors sample. Your job is fast retrieval.
Core artifacts
- Change management policy/procedure (scope includes SMS and services)
- Change classification criteria (standard/normal/emergency definitions)
- Approval matrix (who can approve what risk level)
- Change records/tickets with required fields completed
- Risk/impact assessments per sampled change
- Authorization evidence (system approval logs or signed records)
- Build and test evidence (pipeline logs, test results, peer review links)
- Implementation evidence (release record, deployment logs, verification notes)
- Post-implementation reviews and resulting action tracking 1
Supporting artifacts (often requested)
- CAB agenda/minutes (if you run a CAB)
- Standard change catalog (pre-approved templates)
- Emergency change log and after-the-fact approvals
- Tool configuration screenshots/export showing mandatory fields and workflow gates
Common exam/audit questions and hangups
Expect variations of:
- “Show me your change process and how it applies to the service management system itself.” Many teams forget SMS changes (process/tooling/governance changes) are in scope. 1
- “Pull a sample of changes and show: assessment, approval, testing, and PIR.” Missing just one element can fail the sample.
- “How do you handle emergency changes?” Auditors look for expedited approval plus after-the-fact review and authorization evidence.
- “How do you ensure changes from third parties are assessed and authorized before adoption?” They will accept your internal gating controls if documented and evidenced.
- “How do you prevent unauthorized changes?” Be ready to explain technical controls (access controls, protected branches, deployment permissions) and how they tie back to the change record.
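One concrete way to evidence the last answer is a periodic reconciliation: compare the deployment log against authorized change records and surface anything released out of band. A sketch, assuming each change record carries a hypothetical `deploy_id` reference:

```python
def unauthorized_deploys(deploy_ids: list, change_records: list) -> list:
    """Deploy IDs in the release log with no authorized change referencing them."""
    authorized = {c["deploy_id"] for c in change_records if c.get("authorized")}
    return sorted(set(deploy_ids) - authorized)
```

An empty result per review period is itself retainable evidence that the unauthorized-change control operated.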
Frequent implementation mistakes and how to avoid them
- Tickets with shallow descriptions and no impact analysis. Fix: require structured fields (affected service, customer impact, rollback, test evidence) and prevent closure without completion.
- Approvals that are ceremonial. Fix: define approval criteria by risk; force approvers to acknowledge risk/impact and back-out readiness in the record.
- Testing exists but is not linked to the change. Fix: require a URL/reference to CI runs, test reports, or review records in the change ticket.
- No real post-implementation review. Fix: make PIR a workflow state with mandatory outcomes and action tracking. Tie repeated issues to problem management or corrective action processes.
- Emergency becomes a loophole. Fix: define what qualifies as an emergency, require time-bounded after-the-fact authorization, and require a deeper PIR for all emergency changes. 1
Risk implications (why auditors care)
Weak change management is a direct path to outages, security exposure, and broken customer commitments. The Clause 8.5.1 steps line up to the failure modes auditors see in practice: undocumented changes, unassessed risk, unauthorized releases, insufficient testing, and repeating the same mistakes because no PIR actions get implemented. 1
Practical execution plan (30/60/90-day)
The sequence below prioritizes auditability first, then maturity. The timeboxes are planning labels, not a promise of elapsed time.
First 30 days (get to controlled and provable)
- Publish/update the change management procedure to explicitly include SMS changes and service changes. 1
- Define change types (standard/normal/emergency) and approval rules.
- Configure your change tool to enforce required fields and workflow states.
- Create a PIR template and make it required for closure.
- Run a pilot with one service team; sample completed changes for evidence gaps.
Days 31–60 (reduce exceptions; tighten approvals and testing)
- Build a standard change catalog for common low-risk activities.
- Formalize risk/impact criteria and align approver roles to risk levels.
- Connect CI/CD or deployment logs to change records through links or attachments.
- Implement an emergency change “after-action” review loop with tracked corrective actions.
Days 61–90 (make it repeatable across teams and third parties)
- Expand to all services in scope; train service owners and approvers on their accountability.
- Establish a periodic change sampling review and document findings and remediations.
- Add third-party change intake controls (release notes review, internal risk assessment, customer impact evaluation) for changes you adopt into your services.
- Use Daydream (or your GRC system) to map Clause 8.5.1 to required evidence, schedule control testing, and centralize sampled change artifacts for audit readiness. 1
Frequently Asked Questions
Do “service management system” changes really need the same workflow as production changes?
Yes, the clause covers changes to the service management system and services. Use the same core steps (record, assess, authorize, test where applicable, review), but scale the testing to the change type. 1
What counts as sufficient “risk and impact assessment” for auditors?
A consistent method matters more than a complex model. Document the affected services/users, plausible failure modes, security implications if relevant, and rollback feasibility, then show it drove the approval path. 1
Can CI/CD approvals replace a CAB?
Yes if they provide recorded authorization, enforce the required steps, and produce retrievable evidence. Auditors will sample records; your pipeline gates must be traceable to the specific change. 1
How should we handle emergency changes without failing the requirement?
Allow expedited implementation, but still record the change, capture the rationale, and complete authorization and post-implementation review as soon as practicable. The audit risk comes from emergency paths that bypass review permanently. 1
What evidence is most commonly missing in audits?
Testing proof and post-implementation review outcomes are the most frequent gaps, especially when the work happens in engineering tools separate from the change ticket. Make linking those artifacts a closure requirement. 1
How do we manage changes introduced by a third party (cloud/SaaS/provider) we don’t control?
Control your adoption: record the planned change, assess impact to your services, authorize the adoption, test in your environment where feasible, and run a PIR after rollout. Attach third-party release notes or communications as supporting evidence. 1
Footnotes
1. ISO/IEC 20000-1:2018 Information technology — Service management
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream