Article 22: Automated individual decision-making, including profiling
To operationalize Article 22, you must identify any decisions that are made solely by automated processing (including profiling) and that have legal or similarly significant effects on individuals, then put controls in place so people can avoid being subject to those decisions. This starts with an inventory of decision flows and ends with enforceable technical and process guardrails. (Regulation (EU) 2016/679, Article 22)
Key takeaways:
- Inventory “solely automated” decision flows and classify whether outcomes are legally or similarly significant.
- Put an operational gate in front of production releases so Article 22-impacting automation cannot ship without an approved control package.
- Retain evidence that a decision is not “solely automated” (or that individuals can exercise the right not to be subject to it).
Article 22 is easy to misunderstand because it sounds like a general “AI rule,” but it is narrower and more operational: it targets decisions about individuals that are made solely by automated processing and that produce legal effects or similarly significant effects. If you run eligibility checks, fraud screening, hiring filters, pricing/credit decisions, access restrictions, or account actions through rules engines or models, you likely have at least one workflow that needs Article 22 triage. (Regulation (EU) 2016/679, Article 22)
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat Article 22 as a decision-governance requirement, not a model-governance requirement. Your first output should be a decision register that names each decision, the system(s) that execute it, whether it is “solely automated,” and whether it has legal or similarly significant effects. From there, you implement hard stop controls: either (a) introduce meaningful human involvement so the decision is not “solely automated,” or (b) design a rights workflow so the individual can avoid being subject to the solely automated decision. (Regulation (EU) 2016/679, Article 22)
This page gives requirement-level steps, evidence expectations, and an execution plan that a serious operator can put into a working program quickly.
Regulatory text
Excerpt: “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” (Regulation (EU) 2016/679, Article 22)
Operator meaning (what you must make true in practice)
You need to prevent individuals from being subjected to decisions that meet all three conditions below, unless you redesign the workflow to break one of the conditions:
| Condition | What to test in your environment | Practical control outcome |
|---|---|---|
| Decision about a person | A system output triggers an action or outcome tied to an identified/identifiable individual | Decision register links the outcome to a person-level record |
| Based solely on automated processing | No human meaningfully reviews and can change the result before it takes effect | Add meaningful human involvement or add a rights-based alternative path |
| Legal or similarly significant effect | The outcome changes rights, access, eligibility, or similarly serious real-world impact | Classify as Article 22-in-scope and require a control package before go-live |
This is the operational bar: if a decision is solely automated and significant, you cannot treat it as “just analytics.” You must implement a defensible process so people can avoid being subject to that decision. (Regulation (EU) 2016/679, Article 22)
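The three-condition test above can be sketched as a single predicate. This is an illustrative triage helper, not legal logic; the function name and parameters are assumptions for the sketch:

```python
def article22_in_scope(is_about_person: bool, solely_automated: bool,
                       significant_effect: bool) -> bool:
    # Article 22 bites only when all three conditions hold; breaking any one
    # (e.g. adding meaningful human review) takes the decision out of scope.
    return is_about_person and solely_automated and significant_effect
```

The useful property is the conjunction: your control design only needs to reliably break one condition per decision flow.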
Plain-English interpretation of the requirement
Article 22 creates a right for individuals: they can opt out of being subject to certain automated decisions. Your job is to (1) find the decision flows that qualify, (2) prevent those flows from running unchecked in production, and (3) create a repeatable mechanism to honor the right.
Common examples of potentially “significant” decisions (you must confirm in your own context) include:
- Approving/denying access to a service, account, or benefit
- Approving/denying or materially changing terms for credit, insurance, employment, housing, education
- Automated account suspension or irreversible restrictions based on a score or rule outcome
Article 22 is not automatically triggered by every model. A churn model used only to prioritize internal outreach may be lower risk than a model that denies service automatically. The trigger is the decision and its effect. (Regulation (EU) 2016/679, Article 22)
Who it applies to (entity and operational context)
Entities
- Controllers that design or run automated decisioning that affects individuals.
- Processors that provide decisioning services or components (rules engines, fraud scoring, ID verification, ML pipelines) to controllers. Processors typically implement what the controller instructs, but they still need to support compliant operation through contractable controls and auditable evidence. (Regulation (EU) 2016/679)
Operational contexts that routinely trip Article 22
- Product-led companies with self-serve onboarding and automated risk checks
- Financial services, fintech, insurtech, lending, payments, marketplaces
- HR tech and recruiting workflows with automated screening/ranking tied to rejection decisions
- Trust & safety automation that removes content or suspends accounts without meaningful review
- Third-party-provided scoring and decision APIs embedded into customer journeys
What you actually need to do (step-by-step)
Step 1: Create an Article 22 decision register (your scope anchor)
Build a register focused on decisions, not just systems. Minimum fields:
- Decision name (e.g., “instant account approval/denial”)
- Business owner and technical owner
- Data subjects affected (customers, applicants, users)
- Systems involved (internal services + third party decision APIs)
- Inputs used (data categories at a high level)
- Output action (deny, approve, restrict, suspend, price change)
- “Solely automated?” (Yes/No; explain)
- “Legal or similarly significant effect?” (Yes/No; explain)
- Current control pattern (human review, alternative path, or none)
- Evidence location (tickets, logs, approvals)
This aligns with the practical need to avoid role and scope ambiguity and to translate policy into execution. (Regulation (EU) 2016/679, Article 22)
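The minimum fields above can be captured as a typed record so the register is queryable rather than a spreadsheet of free text. This is a minimal sketch; the class and field names are illustrative, not prescribed by the Regulation, and should be mapped to your GRC tooling:

```python
from dataclasses import dataclass

@dataclass
class DecisionRegisterEntry:
    # Illustrative schema for one Article 22 decision register entry
    decision_name: str                 # e.g. "instant account approval/denial"
    business_owner: str
    technical_owner: str
    data_subjects: list[str]           # e.g. ["applicants", "customers"]
    systems: list[str]                 # internal services + third-party APIs
    input_categories: list[str]        # data categories at a high level
    output_actions: list[str]          # deny, approve, restrict, suspend
    solely_automated: bool
    solely_automated_rationale: str
    significant_effect: bool
    significant_effect_rationale: str
    control_pattern: str               # "human_review" | "rights_workflow" | "none"
    evidence_location: str             # link to tickets, logs, approvals

    def in_scope(self) -> bool:
        """Article 22 triage: in scope when solely automated AND significant."""
        return self.solely_automated and self.significant_effect
```

Keeping the rationale fields alongside the booleans means every `in_scope()` answer carries the explanation an auditor will ask for.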
Step 2: Make a binding classification call for each decision
For each decision flow, hold a short working session with Product, Legal/Privacy, Compliance, and Engineering. Produce one of these outcomes:
- Out of scope: Not solely automated or not significant. Record the rationale and reviewer sign-off.
- In scope, mitigated by human involvement: Add or validate meaningful human review before the decision takes effect.
- In scope, mitigated by rights workflow: Build a process so the individual can avoid being subject to the solely automated decision.
Keep the rationale crisp and testable. Auditors and regulators will ask how you decided “solely automated” and “significant.” (Regulation (EU) 2016/679, Article 22)
Step 3: Implement “meaningful human involvement” where that’s your chosen control
If you choose to break “solely automated,” your human review must be real in practice:
- A named role (queue owner) with training and authority to override
- Sufficient context displayed to the reviewer (inputs, reasons, supporting signals)
- SLAs and coverage so the automated result does not silently become final
- Audit logging: reviewer identity, timestamp, decision, and justification
A “rubber stamp” review is a known failure mode. Build the workflow so overrides are possible and measurable.
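The audit-logging requirement above can be made concrete with a small append-style record. This is a sketch under assumptions (field names and the helper are hypothetical); in production you would write to immutable or WORM storage rather than stdout:

```python
import json
from datetime import datetime, timezone

def log_review(decision_id: str, reviewer: str, automated_result: str,
               final_result: str, justification: str) -> dict:
    """Record one human review so 'meaningful involvement' is provable.

    Captures reviewer identity, timestamp, both results, and whether the
    reviewer actually overrode the automated outcome."""
    record = {
        "decision_id": decision_id,
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "automated_result": automated_result,
        "final_result": final_result,
        "overridden": final_result != automated_result,
        "justification": justification,
    }
    # Illustrative sink; replace with your append-only audit pipeline
    print(json.dumps(record))
    return record
```

The derived `overridden` flag is what makes rubber-stamping measurable: an override rate of exactly zero over a large sample is a signal the review is not meaningful in practice.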
Step 4: Implement a rights workflow when the decision remains solely automated
If the decision stays fully automated:
- Add an intake channel (DSAR portal, support workflow, or dedicated form) tagged “Automated decision objection”
- Authenticate the requester appropriately (consistent with your DSAR approach)
- Route to a trained case team with clear playbooks
- Provide a resolution path that changes the outcome or removes the person from the automated decisioning path, where feasible
- Record the disposition and remediation actions
Operationally, treat this as a specialized DSAR subtype with higher urgency and tighter controls, because it directly affects an ongoing or completed decision about the person. (Regulation (EU) 2016/679, Article 22)
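A lightweight completeness check keeps objection cases from being closed half-documented. The required fields below are an assumed minimum for the sketch; map them to your actual DSAR tooling:

```python
# Illustrative minimum fields for an "Automated decision objection" case
REQUIRED_CASE_FIELDS = (
    "case_id", "decision_id", "requester_verified",
    "received_at", "assigned_team", "disposition", "remediation",
)

def case_is_complete(case: dict) -> list[str]:
    """Return the fields still missing before the case may be closed.

    An empty list means the case record carries intake, authentication,
    routing, disposition, and remediation evidence."""
    return [f for f in REQUIRED_CASE_FIELDS if not case.get(f)]
```

Wiring this into the case tool's close action enforces the "record the disposition and remediation actions" step rather than relying on handler discipline.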
Step 5: Put release gates in front of decisioning changes
Most Article 22 failures happen after a model/rules change ships without compliance review. Add a “Decisioning Impact Assessment” gate for:
- New automated eligibility/denial actions
- Changes that remove human review
- New third party scoring integrations
- Changes that increase automation coverage (more people affected)
The gate should require:
- Decision register entry updated
- Signed classification outcome
- Evidence that the operational control exists (human review queue live, or rights workflow live)
- Monitoring and logging in place
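The gate requirements above translate directly into a pre-release check that can block a deploy. A minimal sketch, assuming a flat metadata dict per change (the keys are illustrative, not a standard schema):

```python
def release_gate_check(change: dict) -> list[str]:
    """Decisioning Impact Assessment gate: return blocking failures.

    An empty list means the Article 22 control package is complete and
    the decisioning change may ship."""
    failures = []
    if not change.get("register_updated"):
        failures.append("decision register entry not updated")
    if not change.get("classification_signed"):
        failures.append("classification outcome not signed off")
    if change.get("in_scope") and change.get("control_pattern") not in (
            "human_review", "rights_workflow"):
        failures.append("no live operational control (review queue or rights workflow)")
    if not change.get("monitoring_enabled"):
        failures.append("monitoring and logging not in place")
    return failures
```

Run it from CI so a change that removes human review or expands automation coverage fails the pipeline until Compliance signs off.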
Daydream can help by turning the register, gate approvals, and evidence packets into a single, reviewable workflow so you can answer diligence questions quickly without rebuilding history from tickets.
Step 6: Ongoing monitoring and periodic evidence packets
Run recurring checks:
- Sample decisions to confirm the workflow matches the documented classification
- Review override rates and queue backlogs where human review is the control
- Confirm the rights workflow routes correctly and is staffed
- Reassess when business processes change (new product, new market, new third party)
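The sampling check above can be sketched in a few lines. This assumes a decision log where each record carries the control actually applied; the schema and function names are illustrative:

```python
import random

def sample_decisions(decision_log: list[dict], sample_size: int,
                     seed: int = 0) -> list[dict]:
    """Draw a reproducible sample of production decisions for control testing.

    A fixed seed lets auditors re-run the exact same draw."""
    rng = random.Random(seed)
    return rng.sample(decision_log, min(sample_size, len(decision_log)))

def mismatches(sample: list[dict], documented_control: str) -> list[dict]:
    """Flag sampled decisions whose executed control differs from the
    register's documented classification."""
    return [d for d in sample if d.get("control_applied") != documented_control]
```

Any non-empty `mismatches` result is a drift finding: either the register entry is stale or the production workflow no longer matches the approved control pattern.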
Required evidence and artifacts to retain
Keep these artifacts in an “Article 22 evidence packet” per in-scope decision:
- Role-and-scope register entry (controller/processor role, affected systems, data categories, owner)
- Decision classification memo (solely automated? significant effect? rationale; approvers)
- Operating procedure with: trigger events, owners, steps, escalation paths, tooling references
- Workflow evidence
- Human-in-the-loop: queue configuration, training record, override capability proof, sample audit logs
- Rights workflow: intake form, case routing, case notes template, sample completed cases (redacted)
- Change management evidence showing the release gate was followed for meaningful changes
- Exception records with approvals, compensating controls, and remediation dates
These map directly to the practical controls of role/scope clarity, requirement-specific procedures, and auditable evidence. (Regulation (EU) 2016/679, Article 22)
Common exam/audit questions and hangups
Expect questions like:
- “List all decisions that are solely automated and significant. How do you know the list is complete?”
- “Show me a production workflow where a human can override the automated result.”
- “Prove the decision is not solely automated. Where is the log?”
- “How can a data subject exercise the right not to be subject to the decision? Show the intake and a completed case.”
- “What happens when a third party provides the score? Who is accountable and how do you control changes?”
- “How do you prevent engineers from removing the human review step during optimization?”
Hangups:
- Teams confuse “profiling exists” with “Article 22 applies.” Your register should anchor on the decision and the effect. (Regulation (EU) 2016/679, Article 22)
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | Fix |
|---|---|---|
| Treating Article 22 as a privacy policy statement | Policy does not prove individuals can avoid the decision | Build workflow controls plus logs and case records |
| Declaring “human review” but reviewers cannot override | The decision remains effectively solely automated | Require override authority and capture overrides in audit logs |
| No inventory of decision flows | You cannot prove coverage | Maintain a decision register tied to systems and releases |
| Third party scoring treated as “vendor problem” | You still operate the decision | Contract for change notice, logging, and support for rights handling |
| Shipping decisioning changes without compliance sign-off | Controls drift | Add a release gate tied to decision register updates |
Enforcement context and risk implications
This page does not analyze specific public enforcement cases; the guidance focuses on defensible operations aligned to the Article 22 text. The practical risk is straightforward: if you cannot show how individuals avoid being subject to a solely automated significant decision, you are exposed during regulator inquiries, complaints, and customer diligence. (Regulation (EU) 2016/679, Article 22)
Practical execution plan (phased)
Run the plan in three phases rather than against fixed calendar dates; calibrate pacing to your release cadence and staffing.
Immediate (stabilize and map)
- Assign a single accountable owner for Article 22 execution (often Privacy + GRC jointly).
- Stand up the Article 22 decision register and populate it with obvious candidates (onboarding, fraud, credit/eligibility, suspension).
- Pick one high-impact workflow and run the classification session. Produce a signed memo.
Near-term (build controls that actually work)
- Implement the chosen control pattern for each in-scope decision:
- Human review workflow with override authority and logging, or
- Rights workflow with intake, routing, trained handlers, and disposition tracking
- Add a release gate so decisioning changes cannot ship without updated classification and evidence.
- Create the evidence packet template and start storing artifacts centrally (Daydream can be your system of record for the register, approvals, and evidence).
Ongoing (prove operation and prevent drift)
- Run sampling and control tests on a cadence you can sustain.
- Monitor for changes: new models, new rule sets, new third party scoring, new “auto-deny” actions.
- Refresh training for reviewers and case handlers, and track exceptions to closure.
Frequently Asked Questions
Does Article 22 apply to every machine learning model we run?
No. It applies to decisions about an individual that are based solely on automated processing and that produce legal or similarly significant effects. Start from the decision outcome, then trace back to models and rules. (Regulation (EU) 2016/679, Article 22)
What counts as “solely automated” in practice?
If the automated output becomes the final decision without meaningful human review that can change the outcome, treat it as solely automated. Document the workflow and keep logs that prove the human step exists and is used. (Regulation (EU) 2016/679, Article 22)
We use a third party risk score but make the final decision. Are we in scope?
Potentially, yes. If your system auto-acts on the score and the outcome is significant, you still operate an automated decision flow. Your contract and controls should cover change management, logging, and support for handling data subject rights. (Regulation (EU) 2016/679, Article 22)
How do we prove to an auditor that a decision is not solely automated?
Show the end-to-end workflow with evidence: queue configuration, reviewer training, and immutable audit logs that record who reviewed, what they saw, and whether they overrode the automated result. Keep samples in an evidence packet per decision. (Regulation (EU) 2016/679, Article 22)
What artifact should we build first if we have nothing today?
Build the decision register first. Without it, you cannot show you identified in-scope decisioning or that you have consistent controls across products, regions, and third parties. (Regulation (EU) 2016/679, Article 22)
How should we operationalize Article 22 across engineering releases?
Add a release gate for any change that introduces or expands automated decisioning, removes human review, or adds a third party scoring integration. Require an updated register entry, signed classification, and a control evidence checklist before approval. (Regulation (EU) 2016/679, Article 22)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream