Article 9: Processing of special categories of personal data
To meet the Article 9 requirement on processing of special categories of personal data, you must treat special category data as “prohibited by default” and allow it only when a specific Article 9 condition applies, is documented, and is enforced in intake, access, sharing, and retention workflows. Operationally, this means building a scoped inventory, a gated approval process, and audit-ready evidence for every in-scope processing activity. 1
Key takeaways:
- Special category data processing is prohibited unless you can point to, and document, a valid Article 9 condition. 1
- You need a repeatable “intake-to-approval-to-controls” procedure, not just policy language.
- Evidence matters: maintain decision records, system-level control outputs, and exception handling packets.
Article 9 is where many GDPR programs become operational, because it forces you to identify sensitive data processing you may be doing “accidentally” (through forms, customer support tickets, HR flows, biometrics, or health-related benefits), then either stop it or justify it. The requirement is simple in concept: processing of special categories of personal data is prohibited unless an exception applies. 1
For a CCO, Compliance Officer, or GRC lead, the practical objective is defensibility: you should be able to show (1) where special category data exists, (2) why you are processing it, (3) who approved that basis, (4) which controls enforce that scope in systems and third parties, and (5) how you monitor for drift. If you cannot do those five things quickly, you have an uncontrolled high-sensitivity processing risk, even if your privacy policy is well-written.
This page gives requirement-level implementation guidance you can execute immediately: scoping, workflow gating, approvals, and the evidence an auditor or supervisory authority will expect to see.
Regulatory text
GDPR Article 9(1) excerpt: “Processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation shall be prohibited.” 1
Operator interpretation (what you must do)
- Default rule: Do not process these data types. Treat them as blocked unless a permitted condition is identified and documented. 1
- Scope of data types: Build your detection and classification around the categories listed in the text (race/ethnicity, political opinions, religion/philosophy, trade union membership, genetic data, biometric data for unique identification, health data, sex life/sexual orientation). 1
- Operational requirement: Translate “prohibited” into enforceable controls across: data intake, data storage, access, sharing (including third parties), and retention/deletion.
Plain-English interpretation of the requirement
Article 9 is a hard stop unless you can justify processing. If a business team wants to collect or infer a special category attribute (even indirectly), they need to go through a defined review and approval path before the processing starts, and you need technical and procedural guardrails to keep the processing limited to what was approved. 1
A practical way to run this: treat special category processing like a “restricted processing lane” with explicit entry criteria, named owners, and ongoing monitoring.
Who it applies to (entity and operational context)
Article 9 applies when you act as a controller or processor and your processing includes any of the special categories listed in the text. 1
Common operational contexts where Article 9 comes up:
- HR and people operations: benefits, accommodations, leave administration, union membership discussions, DEI surveys.
- Identity and access: biometrics used to uniquely identify someone (face recognition, fingerprint time clocks). 1
- Customer support and trust & safety: tickets containing health details, harassment reports, religious/political content.
- Product analytics and profiling: inferred attributes from user behavior (be careful: “revealing” can happen without explicit collection). 1
- Third-party processing: payroll providers, benefits administrators, background check providers, identity verification vendors, clinical/health partners.
What you actually need to do (step-by-step)
1) Establish role-and-scope for Article 9 processing
Goal: Know where you are controller vs processor, and where special category data exists.
Actions:
- Create an Article 9 scope register: processing activity, business owner, controller/processor role, special category type, systems, third parties, data subjects, and data flows.
- Map entry points: forms, uploads, free-text fields, call recordings, chat transcripts, scans, and APIs.
- Identify derived/inferred special category outputs (e.g., health risk flags, union-related tags in case management).
Artifact to retain:
- Article 9 role-and-scope register (versioned, owned, reviewed on a set cadence).
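The register above is just structured data, so it can be enforced in code. A minimal sketch (all field names, category keys, and the `ScopeRegisterEntry` class are illustrative assumptions, not a prescribed schema) that validates each entry against the categories enumerated in Article 9(1):

```python
from dataclasses import dataclass

# Special category types enumerated in Article 9(1) (illustrative keys).
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin",
    "political_opinions",
    "religious_or_philosophical_beliefs",
    "trade_union_membership",
    "genetic_data",
    "biometric_data_for_unique_identification",
    "health_data",
    "sex_life_or_sexual_orientation",
}

@dataclass
class ScopeRegisterEntry:
    """One row of a hypothetical Article 9 scope register."""
    activity: str        # processing activity name
    business_owner: str  # named owner accountable for the activity
    role: str            # "controller" or "processor"
    categories: set      # subset of SPECIAL_CATEGORIES
    systems: list        # systems that store or process the data
    third_parties: list  # downstream recipients, if any

    def __post_init__(self):
        # Reject entries with an undefined role or a made-up category,
        # so the register cannot silently drift from the Article 9(1) list.
        if self.role not in ("controller", "processor"):
            raise ValueError(f"invalid role: {self.role}")
        unknown = set(self.categories) - SPECIAL_CATEGORIES
        if unknown:
            raise ValueError(f"not an Article 9 category: {unknown}")

entry = ScopeRegisterEntry(
    activity="Benefits administration",
    business_owner="HR Ops",
    role="controller",
    categories={"health_data"},
    systems=["hris", "benefits_portal"],
    third_parties=["benefits_admin_vendor"],
)
```

Keeping the register machine-readable like this also makes the later monitoring steps (diffing live systems against approved scope) cheap to automate.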
2) Implement a gated intake and approval workflow (“no ticket, no processing”)
Goal: Convert “prohibited” into a measurable operational gate. 1
Actions:
- Define trigger events that force review:
- New data field or dataset
- New model feature engineering using sensitive signals
- New third party that will receive or host in-scope data
- New geography, purpose, or user segment
- Create a standard approval packet required before launch:
- Description of the data and how it is captured
- Business purpose and necessity
- Systems and third parties involved
- Access model (roles, entitlements)
- Retention period and deletion plan
- Monitoring approach (alerts, sampling, QA)
- Require named approvals (minimum):
- Privacy/DPO function (or privacy counsel)
- Security owner for the system
- Business process owner
Where Daydream fits naturally: use Daydream as the system of record to track the scope register, route approvals, and generate evidence packets (decision record + control outputs + exceptions) on a recurring cadence.
Artifacts to retain:
- Approval ticket/workflow record
- Decision record with sign-offs
- Data flow diagram (can be lightweight)
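“No ticket, no processing” only works if the gate is checkable. A minimal sketch of a completeness check for the approval packet described above (the field and approver names are assumptions for illustration; map them to your own packet template):

```python
# Required packet sections and sign-offs (illustrative names).
REQUIRED_PACKET_FIELDS = [
    "data_description", "business_purpose", "systems_and_third_parties",
    "access_model", "retention_plan", "monitoring_approach",
]
REQUIRED_APPROVERS = ["privacy", "security", "business_owner"]

def gate_check(packet: dict) -> list:
    """Return a list of blockers; an empty list means the launch may proceed."""
    blockers = [f"missing field: {f}"
                for f in REQUIRED_PACKET_FIELDS if not packet.get(f)]
    approvals = packet.get("approvals", {})
    blockers += [f"missing approval: {r}"
                 for r in REQUIRED_APPROVERS if not approvals.get(r)]
    return blockers
```

Wiring a check like this into CI or the change-management tool turns “prohibited by default” from policy language into a deployment blocker.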
3) Enforce system-level controls for special category data
Goal: Ensure the approved scope is the actual scope.
Minimum control set (pick the ones that match your environment):
- Collection controls: minimize free-text; add warnings; add field validation; separate sensitive fields into dedicated forms with access restrictions.
- Access controls: least-privilege roles; break-glass access for incident response; periodic access reviews for teams with sensitive access.
- Storage controls: segregate datasets; encrypt; logging enabled for read/export actions.
- Sharing controls: contract and DPA gating for third parties; block ad hoc exports; require approvals for new downstream recipients.
- Retention controls: restricted retention; automate deletion; prevent backups from becoming indefinite archives.
Artifacts to retain:
- Role-based access control (RBAC) matrix for in-scope systems
- Access review outputs (who had access, reviewer, disposition)
- System logs or audit log configuration screenshots (read/export events)
- Data retention configuration evidence (policy + system setting)
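Retention controls are the easiest to automate and to evidence. A minimal sketch of the enforcement side, assuming records carry a timezone-aware `created_at` timestamp (the function and field names are illustrative):

```python
from datetime import datetime, timedelta, timezone

def overdue_for_deletion(records, retention_days, now=None):
    """Return IDs of records whose retention window has expired.

    The returned list doubles as the input to a deletion job and as
    evidence that the retention rule is actually enforced.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=retention_days)
    return [r["id"] for r in records if r["created_at"] < cutoff]
```

Running a job like this on a schedule, and retaining its output, produces exactly the “retention/deletion proof” artifact listed above.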
4) Manage third-party exposure explicitly (processors and onward sharing)
Goal: Prevent sensitive-data sprawl across third parties.
Actions:
- Tag third parties in your inventory that touch Article 9 data.
- For each, document:
- What special category types they process
- Processing purpose and instructions
- Data locations and subprocessors (if known)
- Breach notification and incident coordination paths
- Gate procurement and renewals on: approved scope, security review, and contractual restrictions aligned to your intended processing.
Artifacts to retain:
- Third-party inventory entries with Article 9 flag
- Signed DPA / processing instructions (or equivalent contract addendum)
- Third-party risk review package tied to the specific processing activity
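The sharing gate can be expressed as a simple lookup against the tagged inventory. A minimal sketch (inventory shape and vendor names are illustrative assumptions): a recipient is allowed only if it is inventoried and approved for every category being shared.

```python
def check_recipient(approved_inventory: dict, recipient: str, categories: set) -> bool:
    """True only if the recipient is in the inventory and its approved
    category list covers every category in this share."""
    approved = approved_inventory.get(recipient)
    return approved is not None and categories <= set(approved["categories"])
```

Calling a check like this before any export or new integration enforces the “no new downstream recipients without approval” rule mechanically.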
5) Operational monitoring and exceptions
Goal: Detect drift and handle inevitable edge cases.
Actions:
- Set up monitoring for:
- New sensitive fields created in production
- Unapproved data exports from sensitive systems
- High-risk access patterns (bulk reads/downloads)
- Implement an exception process:
- Criteria for emergency processing
- Time-bound approvals
- Post-facto review and remediation
- Run periodic quality checks:
- Sampling of support tickets or free-text inputs for sensitive content
- Review of analytics/event payloads for accidental capture
Artifacts to retain:
- Exception requests and approvals
- Monitoring alerts and investigation notes
- Remediation records (what changed, when, and who approved)
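Drift detection for “new sensitive fields created in production” can start as a schema diff against the approved register. A minimal sketch (the hint keywords are illustrative and will need tuning per environment; this is keyword screening, not a classifier):

```python
# Field-name fragments that hint at special category content (assumed list).
SENSITIVE_HINTS = ("health", "diagnosis", "religion", "union", "biometric", "ethnic")

def detect_drift(approved_fields: set, live_fields: set):
    """Compare live production fields against the approved scope.

    Returns (all unapproved fields, the subset whose names hint at
    special category content and deserve immediate review).
    """
    new_fields = live_fields - approved_fields
    flagged = {f for f in new_fields
               if any(hint in f.lower() for hint in SENSITIVE_HINTS)}
    return new_fields, flagged
```

Name-based screening will miss sensitive content in innocuously named free-text fields, which is why the sampling checks above remain necessary alongside it.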
Required evidence and artifacts to retain (audit-ready checklist)
Use this as your “evidence packet” table per processing activity:
| Evidence item | What it proves | Owner |
|---|---|---|
| Article 9 scope register entry | You know where special category processing occurs | Privacy/GRC |
| Decision record + approvals | Processing was reviewed before launch | Privacy + Business |
| System list + data flow | Processing is bounded and mapped | Security/IT |
| RBAC matrix + access review output | Access is controlled | Security |
| Third-party inventory + DPA/contract terms | Sharing is governed | Procurement/GRC |
| Retention/deletion proof | Data is not kept indefinitely | Data owner |
| Exceptions + remediation | You manage edge cases without normalizing them | Privacy/GRC |
Common exam/audit questions and hangups
Auditors and regulators tend to ask questions that test whether “prohibited” is real in your operating model:
- “Show me where you process special category data.” If you cannot enumerate systems and purposes, your program looks aspirational.
- “How do you stop teams from adding a sensitive field without review?” They will look for an intake gate tied to SDLC/product change management.
- “Which third parties receive this data, and why?” Expect follow-ups on subprocessors and onward transfers.
- “Who can export it?” They will want logs, role definitions, and evidence of periodic access review.
- “How do you detect accidental collection?” Free-text fields and logs are common failure points.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating Article 9 as a policy-only statement.
  Fix: Put a required approval workflow in the launch process; block deployments without a completed review packet.
- Mistake: Missing “biometric data for unique identification.”
  Fix: Inventory authentication and physical access use cases; tag any face/fingerprint solutions explicitly. 1
- Mistake: Assuming “we don’t ask for it” means “we don’t process it.”
  Fix: Test intake channels that collect user-generated content (support, chats, attachments). Add controls and monitoring.
- Mistake: Third-party sprawl (data copied into ticketing, CRMs, analytics).
  Fix: Limit downstream recipients; require review for each new integration; enforce field-level suppression in event pipelines.
- Mistake: Undefined retention for sensitive datasets.
  Fix: Set a retention rule per use case and prove enforcement with system settings and deletion reports.
Enforcement context and risk implications
This page does not cite specific public enforcement cases; the guidance focuses on operational defensibility anchored to the regulatory text. 1
From a risk standpoint, special category data increases:
- Regulatory scrutiny because the baseline rule is prohibition. 1
- Breach impact because the data can expose discrimination, health status, or intimate details.
- Third-party risk because processors can replicate data across tools, backups, and subprocessors if scope is not tightly controlled.
Practical execution plan (30/60/90-day)
Use this plan to move from “unknown exposure” to “controlled processing” without waiting for a full program rebuild.
First 30 days (stabilize and find)
- Stand up the Article 9 scope register and name owners for each processing activity.
- Identify top entry points: HR systems, identity verification, support tooling, analytics event pipelines.
- Freeze new sensitive-data collection unless a review packet is approved.
- Define the approval packet template and implement it in your ticketing/workflow tool (or Daydream).
Days 31–60 (control and document)
- Implement RBAC tightening and logging for in-scope systems.
- Complete third-party tagging: which third parties touch Article 9 data, and under what instructions.
- Add intake controls: form design changes, free-text warnings, data loss prevention (where feasible).
- Begin periodic access reviews for sensitive systems; retain outputs as evidence.
Days 61–90 (monitor and prove)
- Add monitoring for drift: new fields, exports, unusual access patterns.
- Run a tabletop exercise for an incident involving special category data: test notification paths, logs, and containment.
- Sample-test your environment for accidental capture (support tickets, event payloads).
- Package evidence: one “golden” evidence packet per major processing activity to prove readiness.
Frequently Asked Questions
Does Article 9 apply if special category data appears only in free-text support tickets?
Yes, if you process data revealing the listed attributes, the prohibition framework is in play and you need controls around intake, access, and retention. Treat free-text channels as high-risk entry points and add monitoring plus restricted access. 1
Are biometric logins always special category data?
Article 9 calls out “biometric data for the purpose of uniquely identifying a natural person.” If you use biometrics for unique identification, treat it as in-scope and gate it through your Article 9 workflow. 1
We’re a processor. Do we still need an Article 9 scope register?
Yes. Even as a processor, you need to know which client instructions involve special category data so you can apply appropriate controls, restrict access, and manage third-party exposure tied to that processing. 1
How do we handle “inferred” sensitive attributes in analytics or ML features?
Start by documenting whether your processing reveals any of the Article 9 categories and where those features are generated and stored. Then gate model changes through the same approval packet and restrict downstream sharing of sensitive feature sets. 1
Can we centralize approvals without blocking product velocity?
Yes, if you standardize the packet, define clear trigger events, and set service-level expectations for reviewers. Most delay comes from missing information, so make the template mandatory and require system/data-flow details up front.
What evidence do auditors ask for first?
They usually start with: where the data is, who has access, which third parties receive it, and proof that you review and enforce those decisions. Prepare one evidence packet per major system and keep it current.
Footnotes
1. Regulation (EU) 2016/679, Article 9.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream