Monitoring — General
ISO 9001:2015 Clause 9.1.1 requires you to define, run, and prove a monitoring and measurement system for your QMS: what you monitor, how you measure it, when you do it, and when you analyze and evaluate results. Operationalizing it means building a controlled KPI/metric register, assigning owners and frequencies, validating measurement methods, and retaining evidence of review and actions taken. 1
Key takeaways:
- Define a complete monitoring and measurement plan: “what, how, when, and when analyzed/evaluated.” 1
- Ensure methods produce valid data, or your “good-looking” dashboards fail an audit. 1
- Keep documented evidence of results, evaluation, and follow-up actions, not just charts. 1
Clause 9.1.1 is simple to read and easy to fail in practice because it lives at the intersection of operations, quality, and governance. Auditors rarely accept “we track some KPIs” as compliance. They look for a deliberate, organization-wide approach that connects monitoring to process performance, product/service conformity, customer outcomes, and QMS effectiveness, backed by consistent methods and disciplined evaluation. 1
For a Compliance Officer, CCO, or GRC lead operating a certified (or certifying) QMS, this requirement is your proof point that management control exists beyond policies. It is where you show that the organization measures what matters, detects issues early, and makes decisions based on reliable data instead of anecdotes. 1
This page translates the clause into a requirement-level build plan: scope, ownership, step-by-step implementation, evidence to retain, common audit pitfalls, and a practical execution plan you can run with your process owners. It is written to help you stand up a defensible monitoring program quickly, even if metrics are currently fragmented across teams and tools. 1
Regulatory text
Clause requirement (paraphrased): The organization must determine what needs to be monitored and measured, the methods needed to ensure valid results, when monitoring and measurement are performed, and when results are analysed and evaluated. 1
Operator interpretation of the text: You must make explicit decisions (and keep them consistent) about:
- What QMS-related performance and effectiveness you monitor/measure
- How you monitor/measure it (methods, definitions, tools, sampling rules, calculation logic)
- When monitoring and measurement occur (frequency, timing, triggering events)
- When you analyze and evaluate results (review cadence, forums, thresholds, escalation paths)
and you must be able to show that your methods generate valid data and that results drive evaluation and action. 1
Plain-English requirement (what “good” looks like)
A conforming organization can answer, with evidence:
- “These are the processes, outcomes, and risks we monitor because they indicate QMS performance and effectiveness.” 1
- “Here is exactly how each metric is calculated and controlled so results are trustworthy.” 1
- “Here is the schedule and the governance routine where results are reviewed, evaluated, and acted on.” 1
If you cannot show method control (definitions, calibration where relevant, consistent data sources), you will struggle to defend “valid data,” even if your metrics look reasonable. 1
Who it applies to (entity and operational context)
Applies to: Any organization operating an ISO 9001:2015 quality management system, including organizations seeking initial certification, maintaining certification, or using ISO 9001 as an internal governance standard. 1
Operationally, it touches:
- Process owners who run core and support processes and own performance outcomes.
- Quality function managing the QMS, internal audits, corrective actions, and management review inputs.
- Data owners (Ops, Service Delivery, Product, Engineering, Procurement) who control systems of record.
- Third parties when process performance depends on outsourced activity (for example, outsourced manufacturing, service desks, logistics, cloud providers). Your monitoring plan should cover the parts of performance you rely on, even if execution sits outside your walls. 1
What you actually need to do (step-by-step)
1) Define scope: what must be monitored and measured
Build a controlled list of monitoring areas tied to QMS performance and effectiveness. Start with:
- Process performance (are processes stable and producing expected outputs?)
- Product/service conformity (are outputs meeting requirements?)
- Customer outcomes (complaints, feedback themes, service performance signals)
- QMS effectiveness signals (internal audit results, corrective action trends, recurring nonconformities) 1
Deliverable: a Monitoring & Measurement Register (single table) that becomes the “source of truth.”
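If it helps to make the register concrete, here is a minimal sketch of one register row as a Python dataclass. The field names are illustrative assumptions, not clause requirements; adapt them to your QMS vocabulary and keep the register itself under document control.

```python
from dataclasses import dataclass

@dataclass
class MetricRegisterEntry:
    """One row of a Monitoring & Measurement Register (illustrative fields)."""
    metric_id: str             # stable identifier, e.g. "QMS-KPI-007"
    name: str                  # human-readable metric name
    purpose: str               # why this metric indicates QMS performance
    owner: str                 # single accountable owner, not a team alias
    data_source: str           # system of record / controlled report name
    calculation: str           # reference to the controlled calculation logic
    collection_frequency: str  # e.g. "weekly", "per batch", "event-driven"
    evaluation_forum: str      # where results are analyzed and evaluated
    evaluation_cadence: str    # e.g. "monthly quality review"
    version: int = 1           # bump on any approved method change
    rationale: str = ""        # short risk-based justification for frequency

# Hypothetical example row:
fpy = MetricRegisterEntry(
    metric_id="QMS-KPI-007",
    name="First-pass yield",
    purpose="Process performance signal for assembly line A",
    owner="Production Manager",
    data_source="MES report FPY-DAILY",
    calculation="docs/metrics/fpy.md, v3",
    collection_frequency="daily",
    evaluation_forum="Weekly quality review",
    evaluation_cadence="weekly",
    rationale="High-impact core process: daily collection, weekly evaluation",
)
```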
2) Specify methods so data is valid
For each metric/measure, define method controls that make the data defensible:
- Metric definition: name, purpose, owner, scope, inclusion/exclusion rules
- Data source: system of record, report/query name, access controls
- Calculation logic: formula, numerator/denominator, rounding rules
- Sampling/inspection method: where you measure a sample rather than the full population, how the sample is selected and sized
- Data quality checks: reconciliation steps, exception handling, known limitations
- Measurement equipment controls: where relevant, confirm equipment is suitable and controlled (for example, maintained/calibrated per your internal process). 1
Auditors test “valid data” by asking whether two people can reproduce the same result from the same definition. If your KPI changes because someone exported to Excel and edited filters, your method is not controlled. 1
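As one illustration of controlled calculation logic, the sketch below pins the numerator, denominator, inclusion rules, and rounding in code instead of an editable spreadsheet. The record layout and the exclusion rule are assumptions invented for the example; the point is that two people running the same function on the same extract get the same number.

```python
def on_time_delivery_rate(orders: list[dict]) -> float:
    """On-time delivery rate (%), per a hypothetical controlled definition.

    Inclusion rule: only shipped orders count; cancelled orders are excluded.
    Numerator: shipped orders delivered on or before the promised date.
    Denominator: all shipped orders in the period.
    Rounding: one decimal place (note that Python's round() uses banker's
    rounding for ties, so that behavior is part of the definition).
    """
    shipped = [o for o in orders if o["status"] == "shipped"]
    if not shipped:
        return 0.0  # defined behavior for an empty period
    on_time = [o for o in shipped if o["delivered_on"] <= o["promised_on"]]
    return round(100 * len(on_time) / len(shipped), 1)

# Two people running this on the same extract reproduce the same result:
sample = [
    {"status": "shipped", "delivered_on": "2024-03-02", "promised_on": "2024-03-05"},
    {"status": "shipped", "delivered_on": "2024-03-09", "promised_on": "2024-03-05"},
    {"status": "cancelled", "delivered_on": "", "promised_on": "2024-03-05"},
]
assert on_time_delivery_rate(sample) == 50.0
```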
3) Set “when performed” (frequency and triggers)
Assign:
- Collection frequency (event-driven, daily, weekly, monthly, per batch, per release)
- Timing rules (cutoffs, time zone, end-of-period logic)
- Trigger events (major change, incident, spike in complaints, third-party issue) 1
Make frequency risk-based in practice: high-impact processes need tighter monitoring than low-impact support processes. Keep the rationale short but explicit in the register. 1
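Frequency, timing rules, and triggers can live as plain configuration so they are explicit and change-controlled rather than implied by whoever runs the report. A minimal sketch with hypothetical metric IDs and values:

```python
# Hypothetical schedule entries: frequency, timing rules, trigger events,
# and the short risk-based rationale the register should carry.
MONITORING_SCHEDULE = {
    "QMS-KPI-007": {  # e.g. first-pass yield on a core process
        "collect": "daily",
        "cutoff": "23:59 local plant time",
        "triggers": ["major process change", "complaint spike"],
        "rationale": "high-impact core process",
    },
    "QMS-KPI-021": {  # e.g. supplier on-time delivery, support process
        "collect": "monthly",
        "cutoff": "last calendar day, UTC",
        "triggers": ["third-party incident"],
        "rationale": "lower-impact support process",
    },
}
```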
4) Set “when analyzed and evaluated” (governance and decision rights)
Separate collection from evaluation. Collection can be automated; evaluation is a management action. Define:
- Review forums: operational review, quality review, management review inputs
- Review cadence: how often each metric is evaluated, not just collected
- Thresholds and escalation: what triggers investigation, containment, corrective action
- Decision rights: who can approve action plans and accept residual performance risk 1
Practical tip: Put the evaluation routine on calendars, with agendas that list the measures to be reviewed. Evidence of evaluation is often the missing link.
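Thresholds and escalation paths can also be written down unambiguously, so evaluation is not a judgment call made differently each month. A minimal sketch, assuming a simple green/amber/red scheme with hypothetical values:

```python
def evaluate(value: float, target: float, amber_margin: float) -> str:
    """Classify a result against its target (illustrative thresholds).

    green: at or above target; note in review minutes, no action
    amber: within the margin below target; investigate and record findings
    red:   below the margin; open corrective action and escalate
    """
    if value >= target:
        return "green"
    if value >= target - amber_margin:
        return "amber"
    return "red"

# Example: on-time delivery target 95%, amber band down to 90%.
assert evaluate(96.2, target=95.0, amber_margin=5.0) == "green"
assert evaluate(92.0, target=95.0, amber_margin=5.0) == "amber"
assert evaluate(88.5, target=95.0, amber_margin=5.0) == "red"
```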
5) Connect results to action (CAPA and continual improvement mechanics)
When measures show issues, you need controlled follow-through (a minimal record sketch follows this list):
- Log issues as nonconformities, risks, or improvement opportunities per your QMS structure.
- Record containment (if needed), root cause approach (where required by your process), corrective action, and verification of effectiveness.
- Feed trends into management review inputs so leadership evaluates QMS effectiveness with evidence. 1
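One way to keep that follow-through auditable is to link the signal, the action, and the effectiveness check in a single record. A sketch with hypothetical fields; map them to whatever your CAPA tooling already uses:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CorrectiveActionRecord:
    """Links a monitoring result to its follow-through (illustrative fields)."""
    nc_id: str                            # nonconformity / issue identifier
    metric_id: str                        # which measure raised the signal
    detected_on: str                      # date the result was evaluated
    containment: Optional[str]            # immediate containment, if any
    root_cause: Optional[str]             # per your QMS root-cause process
    corrective_action: str                # what was changed
    verified_effective_on: Optional[str]  # evidence the fix actually worked

record = CorrectiveActionRecord(
    nc_id="NC-2024-031",
    metric_id="QMS-KPI-007",
    detected_on="2024-03-11",
    containment="Line A output held for 100% inspection",
    root_cause="Fixture wear outside maintenance interval",
    corrective_action="Shortened fixture maintenance interval in PM plan",
    verified_effective_on="2024-05-02",
)
```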
6) Operationalize with tooling (without turning it into a dashboard project)
You can run Clause 9.1.1 with spreadsheets, but control breaks quickly across teams. Many compliance teams standardize using a GRC workflow to:
- maintain the metric register with version control,
- assign owners and reminders,
- attach evidence (reports, screenshots, meeting minutes),
- track action items to closure.
If you already manage ISO obligations in Daydream, treat the Monitoring & Measurement Register as a governed “control object” with owners, evidence tasks, and review cadences so audits become a retrieval exercise instead of a scramble.
Required evidence and artifacts to retain
Auditors look for “documented information” that proves determinations were made and executed. Retain:
- Monitoring & Measurement Register (current, approved, version-controlled) 1
- Metric definitions and method statements (including calculation logic and data sources) 1
- Records of monitoring/measurement results (reports, system exports, inspection logs) 1
- Evidence of analysis/evaluation (meeting minutes, review sign-offs, annotated dashboards with decisions) 1
- Action tracking evidence (tickets, CAPA records, corrective action verification) 1
- Change history for measures/methods (what changed, why, who approved) 1
Common exam/audit questions and hangups
Expect auditors to probe these areas:
- “Show me how you decided what to monitor.” They want a deliberate selection, not an ad hoc list. 1
- “Prove your data is valid.” They will test definitions, sources, and reproducibility. 1
- “You collect data weekly. When do you evaluate it?” Collection without review fails the clause intent. 1
- “What happened the last time a metric missed target?” They want to see actions and effectiveness checks. 1
- “Who owns this measure?” Shared ownership often means no ownership.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Too many KPIs, no governance.
  Fix: Keep a controlled register with clear owners and review forums. Kill vanity metrics.
- Mistake: Metrics defined in slides, calculated in spreadsheets, sourced from unknown queries.
  Fix: Document calculation logic and data source locations. Control changes. 1
- Mistake: Monitoring without evaluation.
  Fix: Create explicit evaluation cadences and minutes that record decisions. 1
- Mistake: Third-party performance ignored.
  Fix: Add measures tied to outsourced processes you depend on (quality, delivery, service levels), and define how you review and respond.
- Mistake: Targets treated as the requirement.
  Fix: Clause 9.1.1 is about determining what/how/when and evaluating results. Targets help, but method validity and evaluation evidence usually decide the audit outcome. 1
Enforcement context and risk implications
No public enforcement cases are provided for this requirement in the supplied sources. Practically, failure shows up as audit nonconformities because the organization cannot prove it controls performance measurement or uses results to evaluate effectiveness. The operational risk is delayed detection: defects, service failures, and recurring nonconformities stay latent longer because signals are weak or not reviewed. 1
Practical 30/60/90-day execution plan
First 30 days: Establish control points
- Inventory existing KPIs, operational reports, inspection logs, customer feedback loops.
- Draft the Monitoring & Measurement Register with owners, definitions, and current-state frequencies. 1
- Pick a small set of critical measures per core process and make their method statements audit-ready.
- Schedule recurring evaluation forums and define what “evidence of review” will look like.
By day 60: Make it repeatable and auditable
- Fill method gaps (data source naming, calculation rules, sampling logic, equipment controls where relevant). 1
- Pilot the evaluation routine: capture minutes, decisions, and actions for at least one full review cycle.
- Implement change control for metric definitions and reporting logic.
By day 90: Prove effectiveness under stress
- Run trend reviews and show at least one example of action taken based on results, with follow-up evidence.
- Audit your own evidence pack: select a measure and test whether an independent person can reproduce results and locate review records quickly (a check sketch follows this list). 1
- Expand coverage to remaining processes and key third-party dependent activities.
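The self-audit step above can be mechanized: have an independent person recompute the measure from the system of record and compare it to the figure that was reported and reviewed. A minimal sketch, reusing the hypothetical on_time_delivery_rate definition from the earlier example:

```python
from typing import Callable

def reproducibility_check(
    reported: float,
    source_records: list[dict],
    compute: Callable[[list[dict]], float],
) -> bool:
    """Recompute a metric from the system of record and compare it to the
    reported figure. A mismatch means the report has drifted from the
    controlled definition, or the definition is ambiguous; either way it
    is a finding to fix before an external auditor finds it."""
    recomputed = compute(source_records)
    return abs(recomputed - reported) < 0.05  # tolerance for rounding only

# Usage with the earlier hypothetical metric and sample extract:
# reproducibility_check(50.0, sample, on_time_delivery_rate)  # -> True
```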
Frequently Asked Questions
Do we need a formal “Monitoring and Measurement Procedure” to meet Clause 9.1.1?
The clause requires you to determine what to monitor, methods, timing, and evaluation timing, and to retain documented information as evidence of results. A procedure can help, but the audit usually turns on whether your register, method controls, and review evidence exist and are followed. 1
What’s the difference between “monitoring” and “measurement” for audit purposes?
Treat monitoring as ongoing observation (often qualitative or status-based) and measurement as quantitative determination using defined methods. In both cases, you must define method and timing and show when you analyze and evaluate results. 1
How do we prove our measurement methods produce valid data?
Keep metric definitions, controlled data sources, calculation logic, and data quality checks so results are reproducible. Where equipment is involved, maintain records that show the equipment is suitable and controlled per your QMS practices. 1
Can we rely on dashboards from other teams (Ops, Product, IT) as evidence?
Yes, if you control the definition and can show the underlying method, source, and review/evaluation records. A screenshot alone is weak evidence unless you can trace it to a controlled report and show it was reviewed. 1
How should we handle third-party metrics under this clause?
Add measures for outsourced activities that affect your ability to conform to requirements, and define how you collect results and evaluate them. Keep evidence of review and follow-up actions when third-party performance degrades. 1
What evidence usually fails in audits even when teams “do the work”?
Missing evaluation records is the most common gap: teams collect numbers but cannot show when results were analyzed, who evaluated them, and what decisions were made. Method drift is another frequent failure, where calculation logic changes without documented approval. 1
Footnotes
1. ISO 9001:2015, Quality management systems — Requirements.
Authoritative Sources
- ISO 9001:2015, Quality management systems — Requirements
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream