Privacy risk assessment

ISO/IEC 27701 Clause 5.4.1.2 requires you to define and apply a repeatable privacy risk assessment process for PII processing: one that identifies risks (including risks to data subjects) and rates their likelihood and consequence so you can decide treatments and prove governance. Operationalize it by standardizing scope triggers, a risk method, decision records, and evidence retention.

Key takeaways:

  • You need a documented, consistently applied process for PII processing risk assessment, not ad hoc reviews.
  • Your assessment must cover risks to PII principals and evaluate both likelihood and consequences.
  • Auditors will look for traceability: inventory → assessment → decision → controls → re-assessment.

“Privacy risk assessment” in ISO/IEC 27701 is a requirement to run privacy risk with the same discipline you run security risk: defined inputs, consistent scoring, clear ownership, and evidence you can show later. It applies to any organization acting as a PII controller or PII processor, and it is especially relevant where your systems, products, or third parties create new or changed PII processing.

CCOs and GRC leads usually run into two operational pitfalls here. First, teams treat privacy risk assessments as a one-time exercise (or only for big projects) and fail to show a repeatable process that triggers on change. Second, the assessment output is vague (“low/medium/high” with no rationale), which makes it hard to justify risk acceptance, select controls, or defend decisions during certification audits.

This page gives requirement-level guidance you can implement quickly: who owns what, what to assess, how to score likelihood and consequence, which artifacts to retain, and what auditors commonly challenge. Where teams need tooling, Daydream can help standardize intake, workflows, and evidence, but the core requirement is process and proof.

Regulatory text

Requirement (verbatim): “The organization shall define and apply a PII processing risk assessment process that identifies risks associated with the processing of PII, including risks to PII principals, and assesses the potential consequences and likelihood of those risks.” 1

Plain-English interpretation

You must have a documented privacy risk assessment process and you must actually use it for PII processing. Each assessment must:

  1. Identify privacy risks created by the processing of PII (collection, use, sharing, storage, deletion).
  2. Include risks to PII principals (data subjects), not only business risk.
  3. Rate or describe likelihood and consequences so treatment decisions are defensible and repeatable.

Auditors will not accept “we think about privacy” as compliance. They will expect a consistent method and a set of assessment records tied to real processing activities.

Who it applies to

Entity scope

  • PII controllers: deciding purposes and means of PII processing.
  • PII processors: processing PII on behalf of a controller.

Operational context (where it shows up in practice)

  • New products, features, or marketing programs that change PII flows.
  • System implementations or migrations involving PII.
  • Onboarding a third party that will access or process PII (including sub-processors).
  • Material changes to retention, access, data sharing, or cross-border transfers.
  • Security incidents or near misses that indicate elevated privacy risk.
  • Periodic review of high-risk processing (for example, sensitive categories, large-scale processing, or automation that affects individuals).

What you actually need to do (step-by-step)

1) Define the process, roles, and triggers

Create a written procedure that answers:

  • Trigger events: what changes require an assessment (new processing, material change, new third party, new data category, new purpose, new geography, new technology).
  • Owners and approvers: who completes the assessment, who reviews (privacy, security, legal), who approves risk acceptance.
  • Decision paths: when you must mitigate, when you can accept, and when you must escalate to leadership.

Practical ownership model:

  • Business/product owner drafts the intake and describes processing.
  • Privacy lead runs the method, ensures data subject risks are addressed.
  • Security contributes threat and control context.
  • Legal validates purpose/contractual posture as needed.
  • Risk owner signs off on acceptance or commits to treatment.

2) Standardize scoping inputs (the “minimum facts”)

Require a consistent intake so assessments are comparable. Collect:

  • Processing purpose(s) and legal/business justification (describe, don’t litigate).
  • Data categories (PII types), including whether any are sensitive.
  • PII principals (customers, employees, minors, patients, etc.).
  • Data sources, data flows, and sharing (internal teams and third parties).
  • Storage locations, environments, and access model.
  • Retention and deletion approach.
  • Automation elements (profiling, scoring, decisioning) if any.
  • Existing controls already in place.

Artifact: PII processing description / data flow summary tied to the assessment record.
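
To make the intake comparable across teams, the "minimum facts" can be captured as a structured record. The sketch below is illustrative only: the field names and the choice of which fields are required are assumptions, not prescribed by ISO/IEC 27701; adapt them to your own intake template.

```python
from dataclasses import dataclass

@dataclass
class ProcessingIntake:
    # Hypothetical intake record; field names are illustrative, not mandated by the standard.
    purpose: str                  # processing purpose(s) and justification
    data_categories: list         # PII types collected or used
    contains_sensitive_pii: bool
    pii_principals: list          # e.g. customers, employees, minors
    data_flows: list              # sources, internal sharing, third parties
    storage_locations: list       # environments and access model
    retention_policy: str
    automation_elements: list     # profiling, scoring, decisioning (may be empty)
    existing_controls: list       # controls already in place (may be empty)

    # Fields that must be non-empty before the assessment can proceed (assumed set).
    REQUIRED = ("purpose", "data_categories", "pii_principals", "data_flows", "retention_policy")

    def missing_fields(self):
        """Return names of empty required fields so incomplete intakes can be sent back."""
        return [f for f in self.REQUIRED if not getattr(self, f)]
```

Rejecting intakes with missing required fields at the gate is what keeps downstream assessments comparable and avoids "N/A" sprawl.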

3) Identify privacy risks (including to PII principals)

Your method should force teams to consider both:

  • Risks to individuals (PII principals): unauthorized disclosure, identity exposure, loss of confidentiality, misuse beyond expectations, discrimination or unfair outcomes from automation, inability to exercise rights, chilling effects.
  • Risks to the organization: regulatory exposure, contractual breach, operational disruption, reputational harm.

Use a structured checklist so risk identification is consistent. Typical risk families:

  • Over-collection or purpose creep.
  • Excessive retention or weak deletion.
  • Overbroad access or privilege creep.
  • Uncontrolled sharing with third parties or sub-processors.
  • Weak transparency/notice and consent mismatches (where relevant).
  • Security control gaps that create privacy harm pathways (logging PII, backups, test data, misconfigurations).

Artifact: Risk register entries or an assessment risk list mapped to the processing activity.

4) Assess likelihood and consequences with a defined rubric

ISO/IEC 27701 requires you to assess likelihood and consequences. Define a rubric that is:

  • Clear enough that two reviewers reach similar ratings.
  • Explainable in an audit.

What “consequence” should cover:

  • Severity of harm to individuals (financial, physical, emotional, discrimination, loss of control, confidentiality breach).
  • Scope (how many people could be affected) and sensitivity (type of PII).
  • Reversibility (can harm be undone).

What “likelihood” should cover:

  • Exposure (internet-facing, broad internal access, third party access).
  • Control strength (technical and procedural).
  • Threat conditions (history of incidents, complexity, change frequency).

Artifact: Scoring worksheet (or embedded scoring in a GRC tool) with written rationale.
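
A minimal scoring sketch shows how a rubric can enforce both consistency and the mandatory rationale. The 1–5 scales and band thresholds below are assumptions for illustration; substitute whatever scales your procedure defines.

```python
# Illustrative rubric: the 1-5 scales and rating bands are assumptions,
# not values prescribed by ISO/IEC 27701.
def score_risk(likelihood: int, consequence: int, rationale: str) -> dict:
    if not (1 <= likelihood <= 5 and 1 <= consequence <= 5):
        raise ValueError("likelihood and consequence must be on the defined 1-5 scale")
    if not rationale.strip():
        # A bare score is not defensible in an audit; written rationale is mandatory.
        raise ValueError("written rationale is required")
    raw = likelihood * consequence  # 1..25
    if raw >= 15:
        rating = "high"
    elif raw >= 8:
        rating = "medium"
    else:
        rating = "low"
    return {"likelihood": likelihood, "consequence": consequence,
            "raw": raw, "rating": rating, "rationale": rationale}
```

Encoding the rubric this way (or as required fields in a GRC tool) makes two reviewers far more likely to reach similar ratings, which is exactly what auditors probe.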

5) Decide treatment and assign accountable actions

For each non-trivial risk, record a disposition:

  • Mitigate: add or strengthen controls (access controls, encryption, minimization, retention limits, monitoring, contractual restrictions, transparency updates).
  • Avoid: change the design to remove the risk driver (stop collecting a field, reduce sharing, shorten retention).
  • Transfer: shift through contractual terms or insurance where appropriate, while still managing residual risk.
  • Accept: document why acceptance is reasonable and who approved.

Each mitigation needs:

  • A control owner.
  • A target completion date (use your internal governance cadence).
  • A validation approach (test plan, configuration evidence, control attestation).

Artifact: Risk treatment plan linked to control implementation work items.
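
The disposition and accountability rules above can be enforced with a simple completeness check. This is a sketch under assumed field names (not an ISO/IEC 27701 format): acceptance requires a named approver, and any mitigation must carry an owner, a date, and a validation approach.

```python
# Illustrative governance check; field names and rules are assumptions
# reflecting the treatment requirements described above.
DISPOSITIONS = {"mitigate", "avoid", "transfer", "accept"}

def validate_treatment(record: dict) -> list:
    """Return a list of governance gaps; an empty list means the record is complete."""
    gaps = []
    if record.get("disposition") not in DISPOSITIONS:
        gaps.append("disposition must be one of: mitigate, avoid, transfer, accept")
    if record.get("disposition") == "accept" and not record.get("approved_by"):
        gaps.append("risk acceptance requires a named approver")
    if record.get("disposition") == "mitigate":
        for required in ("control_owner", "target_date", "validation_approach"):
            if not record.get(required):
                gaps.append(f"mitigation record missing: {required}")
    return gaps
```

Running a check like this before an assessment can be closed is one way to prevent orphaned action plans and undocumented acceptances.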

6) Re-assess and close the loop

Close-out is where many programs fail. Require:

  • Re-scoring after treatment.
  • Evidence that changes were implemented.
  • Residual risk sign-off.

Artifact: Completed assessment package with versioning and timestamps.

7) Make the process auditable and repeatable

Operational controls that make this stick:

  • Central intake queue (ticketing or GRC workflow).
  • Standard templates and required fields (no “N/A” sprawl).
  • Review SLAs aligned to your change cadence (define internally).
  • Periodic sampling QA by privacy or internal audit.

If you use Daydream, configure it as the system of record for intake, routing, approvals, and evidence attachment so assessments do not live in disconnected docs and email threads.

Required evidence and artifacts to retain

Auditors usually expect a traceable file for each assessment plus program-level governance.

Program-level artifacts

  • Privacy risk assessment procedure (roles, triggers, scoring method).
  • Risk scoring rubric definitions (likelihood and consequence).
  • Training or guidance for assessors and approvers.
  • Inventory linkage approach (how processing activities are identified).

Per-assessment artifacts

  • Processing description (data categories, purposes, flows, third parties).
  • Identified risk list including risks to PII principals.
  • Likelihood/consequence scoring with rationale.
  • Treatment decisions and approvals (including risk acceptance).
  • Control implementation evidence (configs, tickets, test results, contract clauses where relevant).
  • Re-assessment and residual risk sign-off.

Common exam/audit questions and hangups

Expect these and pre-answer them in your artifacts:

  1. “Show me your defined process.” Provide the procedure and rubric.
  2. “Prove you apply it consistently.” Show a list of completed assessments tied to real projects and third-party onboarding.
  3. “How do you consider risks to PII principals?” Point to explicit harm/severity fields and examples.
  4. “How do you decide likelihood and consequence?” Show rubric + rationale text, not just a score.
  5. “Where are the treatment actions and closures?” Auditors look for closed-loop governance and evidence of implementation.

Frequent implementation mistakes and how to avoid them

  • Mistake: Security-only risk language. Fix: require a “harm to individuals” section and train reviewers to challenge gaps.
  • Mistake: No trigger discipline. Fix: integrate triggers into change management, SDLC gates, and third-party onboarding checklists.
  • Mistake: Scoring without rationale. Fix: make rationale mandatory fields; reject assessments missing justification.
  • Mistake: Risk acceptance by the wrong person. Fix: define “risk owner” and approval thresholds in the procedure.
  • Mistake: Orphaned action plans. Fix: tie treatments to tracked work items, then require re-assessment evidence.

Enforcement context and risk implications

No public enforcement cases were provided in the approved source catalog for this requirement, so this page does not cite specific regulatory actions. Practically, weak privacy risk assessment discipline increases the chance that harmful processing goes live without minimization, access controls, retention limits, or appropriate third-party constraints. That creates downstream incident and regulatory exposure you may be unable to defend because you cannot show a reasonable, repeatable decision process.

Practical 30/60/90-day execution plan

First 30 days (stand up the minimum viable process)

  • Publish a privacy risk assessment procedure aligned to ISO/IEC 27701 Clause 5.4.1.2. 1
  • Define likelihood and consequence rubrics, plus required rationale fields.
  • Create a standard intake template capturing the minimum facts (purpose, PII categories, data subjects, flows, third parties, retention).
  • Pick the system of record (document repository, ticketing, or Daydream) and define naming/versioning rules.

Next 60 days (operationalize through triggers and workflows)

  • Add triggers to SDLC/change management and third-party onboarding.
  • Train product, engineering, procurement, and privacy reviewers on the template and scoring method.
  • Run pilot assessments on a small set of real processing activities; tune the rubric for consistency.
  • Define approval rules for risk acceptance and escalation.

By 90 days (prove repeatability and audit readiness)

  • Build a dashboard or register view: processing activities assessed, open treatments, accepted risks, and re-assessment status.
  • Perform an internal quality review on a sample of assessments (completeness, rationale quality, closure evidence).
  • Document common control patterns (standard mitigations) to speed future assessments.
  • Prepare an audit packet: procedure, rubric, and a curated set of assessment examples with full traceability.

Frequently Asked Questions

Do we need a separate “privacy” risk methodology from our enterprise risk process?

You can align to your enterprise risk method, but you still need privacy-specific identification of risks to PII principals and a defined approach to likelihood and consequence for PII processing. The key is consistent, repeatable application to processing activities. 1

How is this different from a security risk assessment?

A privacy risk assessment must explicitly address risks to individuals and the impacts of processing choices like minimization, retention, sharing, and transparency, not only confidentiality/integrity/availability. You can reuse security inputs, but the outputs must cover PII principals and processing consequences. 1

What evidence is most persuasive to an ISO auditor?

A defined procedure plus completed assessment files that show consistent scoring, documented treatment decisions, approvals, and closure evidence. Auditors want traceability from a real processing activity to a risk decision and implemented controls. 1

Do third parties need to be included in the assessment?

If a third party processes or accesses PII as part of the processing activity, their role and the related risks belong in scope because the requirement is about risks associated with PII processing. Capture sharing, sub-processing, and control dependencies in the assessment record. 1

How often should we re-run privacy risk assessments?

Re-run them on material change and when treatment actions materially change residual risk. For stable, high-risk processing, set a periodic review cadence in your procedure based on your environment and risk appetite.

Can Daydream replace our templates and spreadsheets for this?

Yes, if you configure it as the workflow and system of record for intake, scoring, approvals, and evidence attachment. That helps you prove the “define and apply” requirement because the process becomes auditable by design. 1

Footnotes

  1. ISO/IEC 27701:2019 Security techniques — Extension to ISO/IEC 27001 and ISO/IEC 27002 for privacy information management

