Analysis and evaluation

ISO 9001:2015 Clause 9.1.3 requires you to analyze and evaluate the data you already collect through monitoring and measurement, then use the results to demonstrate product/service conformity, customer satisfaction, QMS effectiveness, planning effectiveness, risk/opportunity action effectiveness, external provider performance, and the need for improvement. Your fastest path is to define inputs, analysis methods, owners, cadence, and management review outputs. 1

Key takeaways:

  • Treat 9.1.3 as a defined “analysis system,” not ad hoc reporting. 1
  • Auditors look for traceability: metric → analysis → decision → action → verified outcome. 1
  • Include external provider performance and risk/opportunity actions in your evaluation, not just internal KPIs. 1

Clause 9.1.3 is easy to under-implement because most organizations already “have metrics.” The gap is usually not measurement but disciplined evaluation: showing that you regularly interpret the data, reach defensible conclusions, and drive action through the QMS. ISO 9001 does not mandate specific KPIs or statistical techniques, but it does require that your analysis and evaluation cover specific outcomes: conformity of products and services, customer satisfaction, QMS performance and effectiveness, planning effectiveness, effectiveness of actions for risks and opportunities, performance of external providers, and improvement needs. 1

For a CCO, GRC lead, or Quality leader operationalizing this quickly, the goal is to build a repeatable routine with clear ownership: what data is “appropriate,” how it’s analyzed, how often it’s evaluated, how conclusions are recorded, and how resulting actions are controlled (corrective action, change control, supplier management, management review). If you can show this closed-loop chain with evidence, you are typically in good shape for certification audits and internal governance reviews. 1

Regulatory text

Requirement excerpt: “The organization shall analyse and evaluate appropriate data and information arising from monitoring and measurement.” 1

What the operator must do: You must (1) identify which monitoring/measurement outputs matter for your QMS objectives and obligations, (2) analyze and evaluate them in a defined way, and (3) retain evidence that the evaluation produced conclusions and actions across the required result areas (conformity, customer satisfaction, QMS effectiveness, planning effectiveness, risk/opportunity actions, external provider performance, and improvement needs). 1

Plain-English interpretation (what auditors expect)

You need a reliable “quality intelligence” process. That means:

  • Data is collected consistently (from monitoring and measurement).
  • Someone competent interprets it (analysis and evaluation).
  • The organization makes decisions based on it (management review, corrective actions, supplier actions, improvements).
  • Those decisions are tracked to completion, and effectiveness is checked. 1

If you can’t show how the data changed a decision, improved performance, or confirmed control, an auditor can treat the activity as reporting noise rather than compliance with 9.1.3. 1

Who it applies to

Entities: Any organization operating a QMS aligned to ISO 9001:2015, regardless of industry or size. 1

Operational contexts where it bites hardest:

  • Regulated production or service delivery where “conformity” requires documented proof (inspection results, defect trends, validation outcomes). 1
  • High reliance on third parties (manufacturers, labs, logistics, SaaS providers, contractors) where external provider performance must be evaluated with evidence. 1
  • Fast-changing environments where planning assumptions drift and you must demonstrate planning effectiveness and risk/opportunity action effectiveness. 1

What you actually need to do (step-by-step)

Step 1: Define “appropriate data” and scope it to required result areas

Create a simple register that maps each required evaluation area to your data sources. Use categories explicitly aligned to 9.1.3 outputs:

  • Conformity of products/services: defects, rework, test failures, service errors, returns, audit nonconformities. 1
  • Customer satisfaction: complaints, feedback, surveys, churn reasons, escalations. 1
  • QMS performance/effectiveness: internal audit results, CAPA cycle performance, training completion effectiveness evidence, process performance measures. 1
  • Planning effectiveness: objective attainment, on-time delivery vs plan, capacity vs forecast, project milestones. 1
  • Risk/opportunity actions: risk treatment plans vs outcomes, incident trends, residual risk results, preventive improvements. 1
  • External provider performance: supplier OTD, incoming acceptance rates, supplier defects, SLA adherence, issue responsiveness. 1
  • Improvement needs: recurring nonconformities, chronic delays, complaint themes, audit trend analysis. 1

Deliverable: a one-page “9.1.3 Analysis Inputs Map” showing coverage and data owners. 1
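The inputs map can be sketched as a simple data structure. This is an illustrative shape only, assuming a flat register of result area → data source → owner → cadence → method; the field names, role titles, and sample rows are hypothetical, not part of the standard.

```python
from dataclasses import dataclass

# The seven result areas Clause 9.1.3 requires evaluation to cover.
RESULT_AREAS = [
    "conformity", "customer_satisfaction", "qms_effectiveness",
    "planning_effectiveness", "risk_opportunity_actions",
    "external_providers", "improvement_needs",
]

@dataclass
class InputMapping:
    """One row of the 9.1.3 Analysis Inputs Map (illustrative fields)."""
    result_area: str   # one of RESULT_AREAS
    data_source: str   # e.g. "defect log", "supplier scorecard"
    owner: str         # role accountable for the evaluation output
    cadence: str       # e.g. "monthly", "quarterly"
    method: str        # e.g. "trend", "Pareto"

# Hypothetical sample rows showing how coverage and ownership are recorded.
inputs_map = [
    InputMapping("conformity", "defect log", "Quality Manager",
                 "monthly", "trend"),
    InputMapping("external_providers", "supplier scorecard",
                 "Procurement Lead", "quarterly", "Pareto"),
]
```

Even kept in a spreadsheet rather than code, the same columns give you the one-page artifact an auditor can sample.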

Step 2: Specify analysis methods and decision rules (keep it defendable)

For each metric or dataset, define:

  • Method: trending, Pareto, control charting, stratification by product line/site/provider, root-cause review triggers, correlation checks when relevant. 1
  • Decision rule: what constitutes a signal worth action (example: sustained adverse trend, repeat complaint category, supplier misses acceptance criteria). Keep thresholds internal if you want flexibility, but define triggers. 1
  • Frequency: aligned to risk and process velocity (monthly operational review for fast processes; quarterly may fit slower ones). ISO doesn’t prescribe cadence, but you must show it is planned and repeatable. 1
  • Owner: named role accountable for producing the evaluation output, not just collecting data. 1
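A decision rule like “sustained adverse trend” can be made explicit and testable. The sketch below is one illustrative rule, assuming higher values are worse and that three consecutive worsening periods is the trigger; it is not a method prescribed by ISO 9001.

```python
def sustained_adverse_trend(values, periods=3):
    """Signal if the metric worsened for `periods` consecutive intervals.

    Assumes higher values are worse (e.g. a defect rate). The
    three-period threshold is an illustrative internal trigger.
    """
    if len(values) < periods + 1:
        return False
    recent = values[-(periods + 1):]
    # Each period must be strictly worse than the one before it.
    return all(later > earlier for earlier, later in zip(recent, recent[1:]))

# Example: a defect rate rising three months in a row trips the trigger.
defect_rate = [1.2, 1.1, 1.3, 1.5, 1.8]
print(sustained_adverse_trend(defect_rate))  # → True
```

Writing the rule down this precisely (in a procedure, not necessarily in code) is what makes the trigger defendable at audit time.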

Step 3: Run the evaluation meeting rhythm and document conclusions

Implement two layers:

  1. Operational performance reviews (process-level): confirm process health, raise issues, open actions.
  2. Management review inputs (system-level): consolidate analysis and elevate systemic decisions.

Minimum expectation: recorded conclusions, not just dashboards. A chart without interpretation is not “evaluation.” 1

Step 4: Drive actions through controlled workflows (CAPA, change control, supplier management)

For each evaluation finding, decide the correct path:

  • Corrective action for actual nonconformities or systemic failures.
  • Improvement action for optimization without a nonconformity.
  • Supplier corrective action / re-evaluation for external provider performance issues.
  • Risk treatment updates if risk/opportunity actions are ineffective.
  • Plan updates if planning effectiveness is weak. 1

In practice, this is where teams fail: they discuss trends but don’t open controlled actions with owners and due dates, so nothing is provable at audit time. 1
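The routing decision above can be captured as a simple lookup, so every finding lands in a controlled workflow or gets a documented no-action rationale. The finding types and workflow names below are illustrative placeholders for your own CAPA, supplier, risk, and planning processes.

```python
def action_path(finding_type):
    """Route an evaluation finding to the appropriate controlled workflow.

    Finding types and workflow names are illustrative; map them onto
    your own CAPA / supplier / risk / planning procedures.
    """
    routes = {
        "nonconformity": "corrective_action",
        "optimization": "improvement_action",
        "supplier_issue": "supplier_corrective_action",
        "ineffective_risk_action": "risk_treatment_update",
        "planning_gap": "plan_update",
    }
    # Anything unrouted still produces evidence: a recorded rationale.
    return routes.get(finding_type, "document_rationale_for_no_action")
```

The default branch matters most: "we discussed it and decided no action, here is why" is auditable; silence is not.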

Step 5: Verify effectiveness and feed back into the next evaluation cycle

Close the loop:

  • Define what “effective” means per action (e.g., reduced recurrence, restored capability, stabilized supplier performance).
  • Re-check the relevant metric after implementation.
  • Record the outcome and any follow-on action. 1
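The before/after check can be reduced to a per-action success criterion agreed at approval time. The sketch assumes one possible criterion (a 30% reduction relative to baseline); the right criterion is whatever you defined for that specific action.

```python
def effectiveness_check(baseline, post, target_reduction=0.3):
    """Compare pre/post metric values against an agreed success criterion.

    'Effective' is assumed here to mean the metric dropped by at least
    `target_reduction` (30%) versus baseline; define your own criterion
    per action when you approve it.
    """
    if baseline == 0:
        return post == 0
    reduction = (baseline - post) / baseline
    return reduction >= target_reduction

# Complaint rate fell from 10/month to 6/month: a 40% reduction.
print(effectiveness_check(10, 6))  # → True
```

Recording the criterion, the re-check date, and the result closes the loop an auditor will trace.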

Optional accelerator: automate evidence capture (without losing control)

If you struggle with assembling cross-functional evidence (dashboards, meeting minutes, CAPA records, supplier scorecards), a platform like Daydream can help you centralize the 9.1.3 evidence trail by linking metrics to findings, actions, and management review outputs. Keep ownership and approval steps clear so the tool supports governance rather than replacing it. 1

Required evidence and artifacts to retain

Auditors typically want to sample evidence that shows coverage and repeatability. Maintain:

  • 9.1.3 Analysis Inputs Map (data sources, owners, frequency, methods, required result-area coverage). 1
  • Trend reports / scorecards with versioning or date stamps. 1
  • Performance review minutes showing conclusions, decisions, and action assignments. 1
  • Management review package including analysis outputs and resulting decisions/actions. 1
  • CAPA / improvement records tied back to analyzed data and including effectiveness checks. 1
  • External provider evaluations (scorecards, issue logs, re-evaluation records, supplier corrective actions). 1
  • Risk/opportunity action tracking showing evaluation of effectiveness. 1

Common exam/audit questions and hangups

Expect questions like:

  • “Show me the data you monitor and measure, and how you decided it was appropriate.” 1
  • “Where is the documented evaluation, not just the graph?” 1
  • “How do you know your risk actions worked?” 1
  • “How do you evaluate external provider performance, and what happens when they miss?” 1
  • “Point to an improvement that came out of your evaluation process.” 1
  • “How does this feed management review decisions?” 1

Frequent implementation mistakes (and how to avoid them)

  1. Dashboards without decisions. Fix: require a short written interpretation and “so what” per metric pack. 1
  2. No linkage to actions. Fix: every material adverse trend gets an action record or a documented rationale for no action. 1
  3. Ignoring external providers. Fix: put supplier performance on the same review cadence as internal KPIs; document escalation and re-evaluation. 1
  4. Risk register exists but isn’t evaluated. Fix: track risk/opportunity actions like projects with effectiveness criteria and post-implementation checks. 1
  5. Analysis coverage gaps. Fix: use the 9.1.3 result areas as a checklist; if you can’t show evidence for one, add a data source or adjust monitoring. 1
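The coverage checklist in mistake 5 can be run mechanically against the inputs map: any required result area with no mapped data source is a gap. This is a minimal sketch assuming the map is a list of rows each carrying a `result_area` field; the names are illustrative.

```python
# The seven result areas Clause 9.1.3 requires evaluation to cover.
REQUIRED_AREAS = {
    "conformity", "customer_satisfaction", "qms_effectiveness",
    "planning_effectiveness", "risk_opportunity_actions",
    "external_providers", "improvement_needs",
}

def coverage_gaps(inputs_map):
    """Return the 9.1.3 result areas with no mapped data source.

    `inputs_map` is assumed to be a list of dicts with a 'result_area'
    key; an empty return means every required area is covered.
    """
    covered = {row["result_area"] for row in inputs_map}
    return sorted(REQUIRED_AREAS - covered)
```

Running this check each cycle keeps the map honest as metrics are added or retired.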

Enforcement context and risk implications

ISO 9001 is a certifiable standard, not a regulator, so “enforcement” is typically expressed through certification audit nonconformities, customer audits, and contractual consequences. Weak 9.1.3 implementation increases the chance of systemic issues going undetected (supplier drift, recurring defects, chronic customer complaints) and makes it harder to defend decisions because you cannot show objective evaluation. 1

Practical execution plan (30/60/90)

First 30 days: establish the analysis system backbone

  • Inventory monitoring and measurement sources already in place. 1
  • Build the 9.1.3 Analysis Inputs Map and confirm each required result area is covered. 1
  • Assign owners, cadence, and the forum where evaluation happens (ops review vs management review). 1
  • Standardize a template for “Metric → interpretation → decision → action.” 1
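The “Metric → interpretation → decision → action” template can be sketched as a record with one field per link in that chain. All field names and the sample entry (including the action reference) are hypothetical; the point is that every metric carries a written conclusion and a traceable action, not just a chart.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvaluationRecord:
    """One 'Metric → interpretation → decision → action' entry
    (illustrative fields)."""
    metric: str
    period: str
    interpretation: str   # the written "so what"
    decision: str         # action taken, or rationale for no action
    action_ref: str = ""  # CAPA / improvement record ID, if opened
    recorded_on: date = field(default_factory=date.today)

# Hypothetical example entry.
record = EvaluationRecord(
    metric="supplier OTD",
    period="2024-Q2",
    interpretation="OTD below target for two consecutive quarters",
    decision="open supplier corrective action",
    action_ref="SCAR-0042",
)
```

Whether this lives in a form, a tracker, or a tool, the same fields give you the traceability chain auditors sample.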

By 60 days: run the first full evaluation cycle and open actions

  • Produce the first evaluation pack and hold the review meetings. 1
  • Create action records for material issues (CAPA, supplier action, planning updates, risk treatment changes). 1
  • Confirm external provider performance evaluation is working in practice (scorecards, issue escalation path, re-evaluation trigger). 1

By 90 days: prove closure and effectiveness

  • Demonstrate at least one closed-loop example: trend identified → action taken → effectiveness checked → documented conclusion. 1
  • Tune decision rules and remove noise metrics that don’t drive decisions. 1
  • Package the outputs as formal inputs to management review and confirm decisions are recorded. 1

Frequently Asked Questions

Do we need statistical process control (SPC) to comply with ISO 9001 Clause 9.1.3?

No specific technique is mandated. You must show that the analysis method is appropriate for the data and that evaluation leads to conclusions and actions across the required result areas. 1

What does “appropriate data” mean in practice?

“Appropriate” means the data is sufficient to evaluate conformity, customer satisfaction, QMS effectiveness, planning effectiveness, risk/opportunity action effectiveness, external provider performance, and improvement needs. Document your rationale in the inputs map so it’s auditable. 1

Can we rely on dashboards as evidence?

Dashboards help, but auditors usually expect documented evaluation, including interpretation and decisions. Add short narrative conclusions and link any material findings to controlled actions. 1

How do we show evaluation of external provider performance without overbuilding supplier management?

Start with a small set of supplier KPIs tied to what you buy and the risk it creates, then document periodic evaluation and what you do when performance is unacceptable. Keep evidence of escalation, corrective actions, or re-evaluation. 1

What’s the cleanest way to demonstrate “effectiveness of actions to address risks and opportunities”?

Define measurable or observable success criteria when you approve the action, then re-check the relevant monitoring/measurement data after implementation. Store the before/after conclusion and any follow-on changes. 1

We have multiple sites. Do we need site-level and corporate-level evaluation?

You need evaluation at the level where decisions are made and where risks occur. Many organizations keep site-level reviews for local control and roll up system trends to management review for enterprise decisions. 1

Footnotes

  1. ISO 9001:2015 Quality management systems — Requirements
