Annex A 8.17: Clock Synchronisation
The Annex A 8.17 clock synchronisation requirement means you must define, implement, and maintain consistent time across systems so that security logs, authentication events, and monitoring data can be reliably correlated. Operationalize it by standardizing on approved time sources, enforcing synchronization on all in-scope assets, monitoring drift and failures, and retaining clear evidence that the control runs continuously. 1
Key takeaways:
- Standardize on authoritative time sources and document your synchronization architecture and configuration baseline. 1
- Treat time sync as a logging dependency: drift breaks investigations, alerting, and audit trails even if logs “exist.” 1
- Evidence wins audits: show enforced configuration, monitoring, exception handling, and recurring reviews across endpoints, servers, network devices, and cloud services. 1
Clock synchronization is one of those controls that looks “obvious” until you try to prove it under audit. Annex A 8.17 sits in the operational security layer of ISO/IEC 27001:2022 and ties directly to your ability to trust event timelines. If your domain controllers, Linux servers, SaaS audit logs, firewall logs, EDR telemetry, and cloud control-plane events disagree on time, you lose correlation. You also lose confidence in incident reconstruction, root cause analysis, and chain-of-custody for evidence.
A CCO, GRC lead, or security compliance owner should treat this as a requirement with three parts: (1) defined standard (which time sources are approved, what protocols are allowed, what “in sync” means), (2) enforced implementation (devices actually sync, including cloud and third-party managed components), and (3) operational oversight (monitoring, alerting, and exceptions). ISO 27001 expects you to implement and be able to demonstrate operation, not just write a policy. 1
This page gives you requirement-level guidance you can assign to IT operations and security engineering, then audit internally with crisp artifacts.
Regulatory text
Provided excerpt: “ISO/IEC 27001:2022 Annex A control 8.17 implementation expectation (Clock Synchronisation).” 1
Operator interpretation: You must ensure clocks for relevant information processing systems are synchronized to approved, consistent time sources, and that the approach is managed as an ongoing control (documented design, implemented configurations, monitored operation, and evidence retained). 1
Plain-English interpretation (what the control is really asking)
- You can’t run a defensible security program if timestamps are unreliable.
- “Clock synchronization” is not a one-time configuration; it’s a managed service with monitoring and exceptions.
- The audit question you must be able to answer is: “How do you know your logs are time-consistent across systems that matter?” 1
Who it applies to (entity + operational context)
Applies to: Any organization implementing ISO/IEC 27001, especially service organizations that must produce trustworthy logs for customers, regulators, and auditors. 1
Operational scope you should assume is in-scope unless you formally exclude it:
- Identity infrastructure (directory services, SSO, MFA providers, certificate services)
- Security tooling (SIEM, EDR, IDS/IPS, vulnerability scanners, PAM)
- Network/security devices (firewalls, VPN gateways, proxies, WAF/CDN logging points)
- Core compute (servers, containers, hypervisors, Kubernetes nodes/control plane where applicable)
- End-user endpoints where security telemetry and authentication events originate
- Cloud services that emit audit logs (IaaS control plane, SaaS admin logs)
- Time-sensitive applications (transaction processing, message queues, batch jobs, billing) when they generate security-relevant logs
Third-party dependencies: If a third party manages infrastructure that produces logs you rely on (managed detection, managed hosting, payroll SaaS, CRM), you still need a clear story for timestamp consistency and time zone handling in your monitoring and investigations.
What you actually need to do (step-by-step)
1) Define a time synchronization standard (control design)
Create a short “Clock Synchronization Standard” that answers:
- Authoritative time sources: e.g., internal NTP strata backed by a trusted upstream, or approved cloud-provider time services.
- Approved protocols: NTP, authenticated NTP where feasible; define if SNTP is allowed on constrained devices.
- Time zone handling: enforce UTC for servers and log pipelines; define user-facing systems separately if needed.
- Minimum asset classes: servers, network devices, endpoints, security tools, cloud services.
- Exception handling: how you approve exceptions, compensating controls, and review cadence.
- Ownership: IT Ops runs it; Security validates; GRC audits evidence.
Deliverable: a version-controlled standard mapped to Annex A 8.17. 1
2) Inventory and scope the systems that must be synchronized (control scope)
Build or reuse an asset inventory view that identifies:
- Asset type (Windows, Linux, network device, SaaS)
- Where its logs go (SIEM, cloud logging, local-only)
- Time sync method (domain time, chrony, ntpd, vendor mechanism)
- Owner and environment (prod/non-prod)
Practical tip: Don’t aim for perfection. Aim for “all systems that generate or broker security-relevant logs” first, then expand.
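The scoping pass above can be sketched as a simple filter over an inventory view. This is an illustrative sketch with hypothetical asset records; in practice the data would come from your CMDB or asset inventory export.

```python
# Hypothetical inventory records for time-sync scoping.
assets = [
    {"name": "dc01", "type": "Windows", "logs_to": "SIEM", "sync": "domain time", "env": "prod"},
    {"name": "web01", "type": "Linux", "logs_to": "SIEM", "sync": "chrony", "env": "prod"},
    {"name": "kiosk07", "type": "Windows", "logs_to": "local-only", "sync": "unknown", "env": "corp"},
]

def in_scope(asset):
    """First pass: anything that generates or brokers security-relevant logs."""
    return asset["logs_to"] != "local-only"

scoped = [a["name"] for a in assets if in_scope(a)]
print(scoped)  # ['dc01', 'web01'] make the first pass; kiosk07 needs review
```

Expanding scope later is then a matter of loosening `in_scope`, not rebuilding the list.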
3) Implement a resilient synchronization architecture (control implementation)
A typical pattern that audits well:
- Internal time servers in each major network zone/region.
- Redundancy: more than one approved time source so devices can fail over.
- Network controls: allow NTP traffic only to approved sources; block arbitrary outbound NTP to the internet unless justified.
- Cloud alignment: use cloud-native time synchronization where offered, but document how it works and how you verify it.
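On Linux fleets, that pattern often reduces to a short chrony baseline. The snippet below is a minimal sketch, assuming hypothetical internal hostnames; adapt the directives to your own standard.

```
# /etc/chrony/chrony.conf — illustrative baseline (hostnames are placeholders)
# Two internal sources per zone/region so clients can fail over.
server ntp1.internal.example.com iburst
server ntp2.internal.example.com iburst
# Step the clock only during the first 3 updates if it is badly wrong; slew afterwards.
makestep 1.0 3
# Record oscillator drift so chrony can compensate between updates.
driftfile /var/lib/chrony/drift
```

Pair this baseline with egress rules that only permit NTP to the approved sources, so a misconfigured host cannot silently fall back to arbitrary internet time.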
4) Enforce configuration with tooling (make it hard to drift)
Pick enforcement mechanisms that fit your stack:
- Windows: Group Policy time service settings, domain hierarchy, and monitoring for “time service not running.”
- Linux: standardized chrony/ntpd configuration via configuration management.
- Network devices: centralized templates, config compliance checks.
- Containers/Kubernetes: focus on node time; document that containers inherit node time.
- Laptops/remote endpoints: MDM policies where feasible; rely on OS time sync with monitored health signals if available.
The goal: you can prove this is enforced, not “tribal knowledge.”
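One way to turn "enforced, not tribal knowledge" into evidence is a scripted compliance check over rendered configs. This sketch parses chrony-style `server`/`pool` lines against an approved list; the hostnames are placeholders for your own standard.

```python
# Approved time sources per the Clock Synchronization Standard (placeholders).
APPROVED = {"ntp1.internal.example.com", "ntp2.internal.example.com"}

def unapproved_sources(config_text):
    """Return NTP sources in a chrony-style config that are not approved."""
    bad = []
    for line in config_text.splitlines():
        parts = line.split()
        # Only uncommented 'server'/'pool' directives declare a source.
        if len(parts) >= 2 and parts[0] in ("server", "pool"):
            if parts[1] not in APPROVED:
                bad.append(parts[1])
    return bad

sample = "server ntp1.internal.example.com iburst\nserver pool.ntp.org iburst\n"
print(unapproved_sources(sample))  # ['pool.ntp.org']
```

Run it from your configuration-management pipeline and keep the output as part of the recurring evidence pack.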
5) Monitor drift and failures (control operation)
Define operational checks:
- Alert when time sync service stops, source becomes unreachable, or drift exceeds your internal threshold.
- Alert on sudden time jumps (common indicator of misconfig or compromise).
- Make sure monitoring covers all major environments: corp, prod, staging, and cloud.
If you already have a SIEM, create a detection that flags inconsistent timestamps from the same host across different log sources (agent logs vs syslog, for example).
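A basic drift check can be built by parsing `chronyc tracking` output and comparing the current offset to your internal threshold. The threshold below is an assumed example, not a standard value; in production you would feed the function live output (e.g. via `subprocess`) rather than a sample string.

```python
import re

DRIFT_THRESHOLD_S = 0.5  # assumed example threshold; set per your standard

def parse_offset(tracking_output):
    """Extract the current offset in seconds from `chronyc tracking` output."""
    m = re.search(r"System time\s*:\s*([\d.]+) seconds (fast|slow)", tracking_output)
    if not m:
        return None  # unparsable output should itself raise a sync-health alert
    offset = float(m.group(1))
    return offset if m.group(2) == "fast" else -offset

sample = "System time     : 0.734001 seconds slow of NTP time\n"
offset = parse_offset(sample)
alert = offset is None or abs(offset) > DRIFT_THRESHOLD_S
print(alert)  # True: 0.734 s slow exceeds the 0.5 s threshold
```

Wire the alert into the same ticketing path as other operational failures so remediation records accumulate automatically.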
6) Handle exceptions without breaking investigations
Common valid exceptions:
- Legacy OT/IoT devices that only support basic SNTP
- Segmented networks with no upstream connectivity
- Vendor-managed appliances with limited configuration access
For each exception, document:
- The reason and duration
- Compensating control (e.g., local log collection with a synchronized collector, stronger event correlation keys, or tighter access restrictions)
- Review/renewal criteria
7) Prove it works (recurring evidence capture)
Auditors will ask for objective evidence. Build a recurring evidence pack:
- Config baselines
- Monitoring dashboards and alert samples
- Spot checks from representative systems
- Exception register and approvals
Daydream (as a workflow, not just a repository) fits well here: map Annex A 8.17 to a control narrative, assign owners, schedule recurring evidence requests, and keep a clean audit trail of what was collected and when.
Required evidence and artifacts to retain
Keep artifacts that show design, implementation, and operation:
Design
- Clock Synchronization Standard (approved, versioned)
- Network diagram or architecture note showing time sources and clients
- Defined scope statement: what systems are in-scope and why
Implementation
- Windows GPO screenshots/exports or configuration documentation
- Linux configuration management snippets (chrony/ntp configs)
- Network device template/config standard
- Cloud configuration notes for time sync and logging timestamp format (UTC decision)
Operation
- Monitoring alerts/tickets showing detection and remediation of sync failures
- Periodic compliance report (even a scripted output) showing sync status across fleets
- Incident runbook section: “Validate time sync / timestamp integrity”
- Exceptions register with approvals and review notes
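The "periodic compliance report (even a scripted output)" item can be as simple as summarizing per-host sync checks into an auditable artifact. The records below are hypothetical; in practice they would come from your monitoring or config-management tooling.

```python
from collections import Counter

# Hypothetical per-host sync check results pulled from monitoring.
checks = [
    {"host": "web01", "env": "prod", "synced": True},
    {"host": "web02", "env": "prod", "synced": True},
    {"host": "appliance03", "env": "prod", "synced": False},  # exception candidate
]

by_status = Counter("synced" if c["synced"] else "NOT synced" for c in checks)
failing = [c["host"] for c in checks if not c["synced"]]
print(dict(by_status), failing)  # {'synced': 2, 'NOT synced': 1} ['appliance03']
```

Timestamp each run and retain the output alongside the exception register so failures map to approvals or remediation tickets.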
Common audit questions and hang-ups
- “What is your authoritative time source, and how do you prevent devices from using random internet NTP?” Have a named list of approved sources and firewall rules/egress standards.
- “Show me evidence that production servers are synchronized right now.” Be ready with live samples from multiple environments plus automated status reporting.
- “How do you handle cloud and SaaS logs with different timestamp formats?” Have a normalization approach in your logging pipeline and document UTC handling.
- “What happens when the time source is unavailable?” Auditors look for redundancy and alerting, not perfection.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Policy-only control. Fix: pair the standard with enforced config and monitoring evidence.
- Mistake: Only servers are covered. Fix: include identity services, network devices, and security tooling first.
- Mistake: UTC decision not made. Fix: mandate UTC for infrastructure and log pipelines; document any exceptions.
- Mistake: No exception discipline. Fix: run an exception register with owners and expiry/review.
- Mistake: Assuming SIEM fixes time issues. Fix: SIEM can normalize formats, but it cannot correct wrong clocks at the source.
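The last point is worth making concrete: pipeline-side normalization fixes timestamp *formats*, not wrong source clocks. A minimal sketch of UTC normalization for an ISO 8601 timestamp with an explicit offset:

```python
from datetime import datetime, timezone

def to_utc_iso(raw):
    """Normalize an ISO 8601 timestamp (with offset) to a canonical UTC string."""
    return datetime.fromisoformat(raw).astimezone(timezone.utc).isoformat()

print(to_utc_iso("2024-05-01T09:30:00+02:00"))  # 2024-05-01T07:30:00+00:00
```

If the emitting host's clock is five minutes slow, this conversion faithfully preserves that five-minute error, which is why source-side synchronization remains the control.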
Risk implications (why auditors care)
Clock drift creates real failure modes:
- Broken correlation across detection tools, which delays containment.
- Unreliable timelines in incident response, which complicates reporting to customers and regulators.
- Disputes about “what happened when” during fraud, insider threat, or availability incidents.
- Gaps in evidentiary quality for investigations.
ISO 27001 assessments tend to treat time sync as foundational: if logs can’t be trusted, multiple other controls become harder to evidence. 1
Practical 30/60/90-day execution plan
First 30 days (stabilize and define)
- Publish the Clock Synchronization Standard (authoritative sources, protocols, UTC stance, exceptions).
- Identify in-scope asset classes and the top log-producing systems.
- Confirm ownership: IT Ops runs time services; Security validates; GRC tracks evidence.
- Stand up an initial evidence pack: current NTP configs, a few representative drift checks, and a draft exception register.
Day 31–60 (enforce and monitor)
- Roll out enforced configurations (GPO/MDM/config management/templates) for prioritized assets.
- Implement monitoring and alert routing (Ops ticketing + Security visibility).
- Document cloud/SaaS timestamp normalization decisions in the logging architecture.
- Run an internal “audit drill”: pick systems at random and produce proof of sync plus remediation records.
Day 61–90 (harden and operationalize)
- Expand coverage to remaining environments and edge cases (remote endpoints, segmented networks, appliances).
- Formalize exception reviews and add compensating controls where needed.
- Add recurring evidence capture (monthly/quarterly) and control owner attestations.
- Use Daydream to keep the mapping, evidence, and tasks audit-ready across cycles, not just at certification time.
Frequently Asked Questions
Do we need internal NTP servers, or can we point everything to public internet time?
Public internet NTP may work technically, but it complicates control and auditability. Most teams standardize on approved internal or cloud-provider sources and restrict outbound NTP to reduce drift, abuse, and configuration sprawl. 1
Should we standardize on UTC everywhere?
For infrastructure and security logs, UTC is the cleanest choice for correlation and investigations. If business applications need local time for user experience, document the separation and keep security telemetry normalized. 1
How do we handle SaaS platforms where we can’t control the clock?
Treat SaaS as a logging integration problem: document the timestamp format, time zone, and how your SIEM/log pipeline parses and normalizes it. Retain vendor documentation or admin screenshots that show time settings when configurable. 1
What evidence is strongest for auditors?
A combination of enforced configuration (GPO/MDM/config management), monitoring outputs that show continuous operation, and tickets showing you detected and fixed failures. Add an exception register with approvals for anything you can’t enforce directly. 1
We have both NTP and “device time” in logs. Is that a finding?
It becomes a finding when you can’t reliably correlate events or prove that device time is synchronized. Normalize timestamps in your log pipeline and fix the source clocks where possible; document any residual risk through exceptions. 1
How do we operationalize the Annex A 8.17 clock synchronisation requirement without creating busywork?
Automate evidence capture where you can: configuration exports, drift checks, and monitoring snapshots on a recurring schedule. Use a GRC workflow (including Daydream) to assign control ownership and collect the same artifacts consistently across audit cycles. 1
Footnotes
1. Source: ISO/IEC 27001 overview; ISMS.online Annex A control index.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream