Article 18: Classification of ICT-related incidents and cyber threats
To meet the Article 18 classification requirement, you must run a repeatable, evidence-backed process that classifies every ICT-related incident and relevant cyber threat, then determines impact using defined criteria, so that escalation and downstream DORA reporting decisions are consistent and timely (Regulation (EU) 2022/2554, Article 18). Operationalize this by defining a taxonomy, thresholds, owners, and a workflow integrated into SecOps and incident management.
Key takeaways:
- Article 18 is an operational requirement: classify incidents and determine impact using defined criteria, not ad hoc judgment (Regulation (EU) 2022/2554, Article 18).
- Your biggest audit risk is inconsistent classification across teams, tools, and third parties, plus weak evidence of how impact was determined.
- Build a single classification decision workflow, tie it to incident records, and retain artifacts that show inputs, decisions, approvals, and outcomes.
Article 18 sits at the hinge point between “we detected something” and “we have a regulated incident with governance, escalation, and potential reporting consequences.” It requires a financial entity to classify ICT-related incidents and determine their impact based on defined criteria (Regulation (EU) 2022/2554, Article 18). For a Compliance Officer, CCO, or GRC lead, the practical goal is simple: make incident classification consistent, fast, and defensible under supervisory review.
Most failures here are not technical. They are operational: multiple teams using different severity scales, threat intel items tracked outside the incident system, unclear responsibility for “impact” versus “technical severity,” and inconsistent treatment of third-party outages. Article 18 pushes you to standardize the decision and capture the evidence.
This page gives requirement-level implementation guidance you can assign to SecOps, ITSM, GRC, and third-party owners. You’ll leave with a step-by-step build plan, a decision matrix you can implement in your ticketing system, the minimum evidence set to retain, and the exam questions you should pre-answer in your documentation.
Regulatory text
Excerpt: “Financial entities shall classify ICT-related incidents and shall determine their impact based on the following criteria:” (Regulation (EU) 2022/2554, Article 18)
What the operator must do:
You need a defined, repeatable method to (1) classify ICT-related incidents and relevant cyber threats and (2) determine impact using pre-set criteria, with records showing how each classification decision was reached (Regulation (EU) 2022/2554, Article 18). In practice, this means a documented taxonomy, thresholds, ownership, and a workflow embedded into your incident lifecycle (detect → triage → classify → assess impact → escalate).
Plain-English interpretation
- Classification means assigning a category and severity level that your organization uses consistently (for example: “ICT incident” vs “cyber threat,” and severity bands that drive escalation).
- Determine impact means translating technical facts into business and regulatory consequences using defined criteria (for example: which critical services were affected, whether a third party caused or contributed, and whether customer-facing functions degraded).
- Criteria must be explicit. The rule is not satisfied by “we assess impact case by case.” Your criteria must be written, used, and evidenced (Regulation (EU) 2022/2554, Article 18).
Who it applies to
In-scope entities
- Financial entities subject to DORA that must manage ICT risk and incident handling under the regulation (Regulation (EU) 2022/2554).
In-scope operational context (where this bites)
- Security Operations / SOC triage and escalation decisions.
- IT Service Management (ITSM) incident and problem management, including outages and degradations.
- Fraud / financial crime operations when cyber threats intersect with account takeover, malware, or payment disruption.
- Third-party management where outages, breaches, or data integrity issues originate at a service provider.
- Business continuity / crisis management because classification should trigger the right playbooks and leadership notifications.
What you actually need to do (step-by-step)
Step 1: Define your classification scope and triggers
Create a short, controlled definition list for:
- ICT-related incident: events affecting confidentiality, integrity, availability, or authenticity of ICT assets/services.
- Cyber threat: credible threat activity relevant to your environment even if no incident has occurred yet.
Operational trigger: “Any event that creates a security incident ticket OR a major IT outage ticket must be evaluated for DORA classification.” Keep it binary and easy to enforce.
Deliverable: “Classification scope & triggers” one-pager referenced in your incident response standard (Regulation (EU) 2022/2554, Article 18).
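The binary trigger above can be sketched as a simple gate. This is a minimal illustration, not a product API; the ticket-type labels are hypothetical placeholders for whatever your ITSM tool uses.

```python
# Sketch of the Step 1 binary trigger. The ticket-type labels
# ("security_incident", "major_outage") are hypothetical examples.
DORA_TRIGGER_TYPES = {"security_incident", "major_outage"}

def requires_dora_evaluation(ticket_type: str) -> bool:
    """Return True if the ticket must be evaluated for DORA classification."""
    return ticket_type in DORA_TRIGGER_TYPES
```

Keeping the rule this simple is the point: triage staff should never need judgment to decide whether the evaluation applies, only to perform it.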
Step 2: Build a classification taxonomy that can live in tools
You need a taxonomy that works in Jira/ServiceNow/Sentinel cases, not just in a policy PDF.
Minimum recommended fields:
- Event type: ICT incident / cyber threat
- Root driver (picklist): e.g., malicious, accidental, third-party, change-related, unknown
- Primary impact domain: confidentiality / integrity / availability / authenticity
- Severity level: your internal bands (align across SecOps and ITSM)
Add a control: if the ticket is “security incident” or “major incident,” the classification fields are mandatory before closure.
Deliverable: Data dictionary + tool configuration screenshots/exports showing required fields.
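The mandatory-fields control from Step 2 can be expressed as a closure gate. This is a sketch under assumed field and picklist names mirroring the minimum taxonomy above; your actual data dictionary will define the authoritative values.

```python
# Hypothetical data dictionary for the Step 2 taxonomy: each mandatory
# classification field maps to its allowed picklist values.
ALLOWED = {
    "event_type": {"ict_incident", "cyber_threat"},
    "root_driver": {"malicious", "accidental", "third_party",
                    "change_related", "unknown"},
    "impact_domain": {"confidentiality", "integrity",
                      "availability", "authenticity"},
    "severity": {"sev1", "sev2", "sev3", "sev4"},
}

def can_close(ticket: dict) -> tuple[bool, list[str]]:
    """A ticket may close only when every mandatory classification
    field is present and holds an allowed picklist value."""
    problems = [
        field for field, allowed in ALLOWED.items()
        if ticket.get(field) not in allowed
    ]
    return (not problems, problems)
```

In practice this logic lives in tool configuration (a ServiceNow UI policy, a Jira workflow validator), but encoding it once in a reviewable form makes the data dictionary and the enforcement evidence the same artifact.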
Step 3: Define impact criteria and a decision workflow
Article 18 explicitly requires impact determination based on criteria (Regulation (EU) 2022/2554, Article 18). Your job is to make “impact” measurable enough to be consistent, even during chaos.
Use a two-layer model:
- Technical impact inputs (captured by SecOps/IT): system downtime, data type involved, spread, persistence, control failure indicators.
- Business impact outputs (owned by service owners): critical service affected, customer impact, operational disruption, regulatory sensitivity.
Practical decision workflow (embed in ticket template):
- Triage owner records initial classification (provisional).
- Service owner confirms affected service(s) and business impact.
- Incident commander confirms final classification and impact rating.
- Compliance/GRC reviews for consistency and downstream obligations (spot-check or required for high-severity).
Deliverable: RACI + workflow diagram + approval rules in the ticketing system.
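The four-step approval chain above can be sketched as a state machine that refuses out-of-role transitions. State and role names here are illustrative assumptions, not a ticketing-tool schema.

```python
# Sketch of the Step 3 approval workflow. Each state is paired with
# the (hypothetical) role that produces it; advancing a step requires
# the role that owns the *next* state.
WORKFLOW = [
    ("provisional", "triage_owner"),          # initial classification
    ("impact_confirmed", "service_owner"),    # business impact confirmed
    ("final", "incident_commander"),          # final classification/impact
    ("reviewed", "grc"),                      # consistency/oversight review
]

def advance(state: str, actor_role: str) -> str:
    """Move the classification one step forward if the acting role
    matches the next step's owner; otherwise raise."""
    states = [s for s, _ in WORKFLOW]
    i = states.index(state)
    if i + 1 >= len(states):
        raise ValueError("classification already at final review state")
    next_state, required_role = WORKFLOW[i + 1]
    if actor_role != required_role:
        raise PermissionError(f"{actor_role} cannot move {state} -> {next_state}")
    return next_state
```

The design choice worth copying is the refusal semantics: a SOC analyst physically cannot finalize impact, which is exactly the separation an auditor will probe.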
Step 4: Calibrate thresholds with tabletop exercises, then lock them
Most teams fail because thresholds are theoretical. Run scenario-based calibration using past incidents:
- A third-party SaaS outage affecting a customer portal
- Ransomware on an internal endpoint with lateral movement risk
- Data integrity issue in a trading or payments interface
- DDoS causing intermittent degradation
Goal: prove that two different incident commanders would classify the same scenario the same way.
Deliverable: Tabletop notes, decisions, and resulting threshold edits tracked in change control.
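The Step 4 goal (two incident commanders classifying the same scenario the same way) can be measured directly during calibration. A minimal sketch, assuming scenario names and severity bands as placeholders:

```python
def agreement_rate(ratings_a: dict, ratings_b: dict) -> float:
    """Fraction of shared calibration scenarios two incident commanders
    classified identically (1.0 = perfect consistency)."""
    shared = ratings_a.keys() & ratings_b.keys()
    if not shared:
        return 0.0
    matches = sum(ratings_a[s] == ratings_b[s] for s in shared)
    return matches / len(shared)
```

Track this rate across tabletop rounds; thresholds are "locked" when the rate stops improving because the criteria, not the people, are doing the work.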
Step 5: Connect classification to escalation and downstream obligations
Classification should automatically trigger:
- Notification tiers (SOC lead, CIO/CISO, business owner, crisis management)
- BCP/DR invocation criteria
- Third-party engagement (contractual notification, SOC bridge, evidence preservation)
- Regulatory reporting evaluation (handled under other DORA articles, but initiated here)
Avoid a “policy-only” control. Make the tool drive the behavior.
Deliverable: Escalation matrix + on-call/notification rules + sample incident showing escalation triggered by classification.
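An escalation matrix like the one Step 5 calls for is just a lookup from the final classification to notification tiers. The bands and role names below are hypothetical; the point is that the tool, not a human, resolves who gets paged.

```python
# Hypothetical escalation matrix: severity band -> notification tiers.
ESCALATION = {
    "sev1": ["soc_lead", "ciso", "business_owner", "crisis_management"],
    "sev2": ["soc_lead", "ciso", "business_owner"],
    "sev3": ["soc_lead", "business_owner"],
    "sev4": ["soc_lead"],
}

def notify_list(severity: str, third_party_involved: bool = False) -> list[str]:
    """Resolve who is notified for a final classification. Third-party
    involvement adds the vendor-management bridge regardless of band."""
    tiers = list(ESCALATION.get(severity, ["soc_lead"]))
    if third_party_involved:
        tiers.append("third_party_management")
    return tiers
```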
Step 6: Implement quality control and periodic review
Add an operating control owned by GRC or the IR program manager:
- Perform periodic QA on a sample of incidents and threats to check classification consistency, completeness of impact rationale, and correct mapping to affected services.
Keep it simple: a checklist and documented findings with remediation tasks.
Deliverable: QA checklists, findings log, and closed corrective actions tied to incident IDs.
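The periodic QA sampling in Step 6 can be sketched as a reproducible sample-and-flag pass over closed records. Field names are illustrative assumptions; the seed makes the sample defensible because the same draw can be re-run for an auditor.

```python
import random

# Hypothetical artifacts the QA check requires on every sampled record.
REQUIRED = ("event_type", "severity", "impact_rationale", "service_mapping")

def qa_sample(incidents: list[dict], rate: float = 0.1, seed: int = 0) -> list[dict]:
    """Draw a reproducible QA sample and flag incidents missing any
    required classification artifact."""
    rng = random.Random(seed)
    k = max(1, int(len(incidents) * rate))
    sample = rng.sample(incidents, k)
    return [
        {"id": inc["id"], "missing": [f for f in REQUIRED if not inc.get(f)]}
        for inc in sample
    ]
```

Findings with a non-empty `missing` list become the corrective actions logged against incident IDs, as the deliverable above describes.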
Required evidence and artifacts to retain
Auditors will ask, “Show me it operates.” Keep artifacts that prove decisions and criteria use.
| Artifact | What it proves | Where to store |
|---|---|---|
| Incident classification standard (taxonomy + criteria) | Defined method for classification/impact | Policy repository with version control |
| RACI for classification and impact approval | Clear accountability | GRC control library / IR program docs |
| Tool configuration evidence (required fields, workflow states) | The process is enforced operationally | ITSM/SOAR admin exports, screenshots |
| Incident records with completed classification fields | Actual operation | ITSM/SIEM case system |
| Impact assessment notes (service owner confirmation) | Business impact determination | Linked ticket comments/attachments |
| QA review results and corrective actions | Ongoing governance | GRC platform or ticketing |
| Third-party incident linkage (if applicable) | Coverage includes outsourced services | Third-party issue records + incident bridge notes |
Common exam/audit questions and hangups
Expect variations of:
- “Show your criteria.” Where is the documented basis for determining impact? (Regulation (EU) 2022/2554, Article 18)
- “Show consistency.” Two incidents with similar facts should land in the same classification band.
- “Who decides impact?” SecOps can’t be the only answer; service ownership must be in the workflow.
- “How do you classify third-party incidents?” Auditors will look for the same rigor when a provider is the source.
- “What about cyber threats that don’t become incidents?” You need a way to record and classify credible threats, even if they remain “threat only.”
Hangup to pre-empt: teams confuse “severity” (technical) with “impact” (business/regulatory). Your workflow must capture both.
Frequent implementation mistakes (and how to avoid them)
- Separate scales for SOC and ITSM.
  Fix: one taxonomy, mapped if needed, with a single “DORA classification” field used everywhere.
- No evidence for impact determination.
  Fix: require a short “impact rationale” field plus service owner attestation for high-impact events.
- Third-party incidents handled off-system (email/bridges only).
  Fix: require a formal internal incident ticket for any third-party event that affects your services, with classification completed.
- Threat intel sits in a spreadsheet with no linkage.
  Fix: log material cyber threats in the same case system or an integrated threat module, with classification fields.
- Classification updated after the fact with no audit trail.
  Fix: enforce state-based approvals; preserve timestamps and who changed severity/impact.
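The last fix in the list above, preserving timestamps and actors for every severity/impact change, amounts to an append-only change log. A minimal sketch with hypothetical field names; in practice this is your ticketing tool's audit history, not custom code.

```python
from datetime import datetime, timezone

def record_change(log: list, field: str, old, new, actor: str) -> None:
    """Append-only audit entry for a severity/impact change.
    Entries are never edited or deleted, only appended."""
    log.append({
        "ts": datetime.now(timezone.utc).isoformat(),  # when the change happened
        "field": field,   # e.g. "severity" or "impact"
        "old": old,
        "new": new,
        "actor": actor,   # who made the change
    })
```

The property an auditor checks is that downgrades are as visible as upgrades: the log shows what the classification was, who changed it, and when.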
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Practical supervisory risk is still clear: inconsistent classification weakens incident governance, delays escalations, and undermines any later regulatory communications because you cannot show a controlled method was used (Regulation (EU) 2022/2554, Article 18).
A practical 30/60/90-day execution plan
First 30 days (Immediate build)
- Assign an owner (IR program manager or GRC control owner) and publish the RACI.
- Draft the taxonomy + impact criteria and align SecOps, ITSM, and key business service owners.
- Configure mandatory classification fields in the incident tool and define “provisional” vs “final” states.
- Create an evidence checklist for incident commanders to attach/record.
By 60 days (Operationalize + calibrate)
- Run calibration tabletop exercises using real historic incidents and agree threshold edits.
- Turn on escalation rules tied to classification (notifications, major incident process, third-party engagement).
- Start QA sampling and log findings as corrective actions.
By 90 days (Harden + prove)
- Demonstrate operation with a mini audit pack: policy version, tool configuration, sample tickets, QA results, remediation closure.
- Train incident commanders and service owners; add classification steps to onboarding for on-call rotations.
- If you use Daydream, map Article 18 to owners, controls, and evidence artifacts in a single register so supervisory requests can be answered with a consistent packet (Regulation (EU) 2022/2554, Article 18).
Frequently Asked Questions
Do we need to classify every alert as an ICT-related incident under Article 18?
No. Classify ICT-related incidents and relevant cyber threats, not raw alerts. Define clear triggers for when an alert becomes a “cyber threat record” or an “ICT incident ticket,” then apply the criteria consistently (Regulation (EU) 2022/2554, Article 18).
Who should own “impact determination,” the SOC or the business?
Make it shared. The SOC provides technical impact inputs; the service owner confirms business impact; the incident commander finalizes the classification, with GRC providing oversight for consistency (Regulation (EU) 2022/2554, Article 18).
How do we handle third-party outages that affect us but occur outside our environment?
Create an internal incident ticket anyway, link to the third party’s communications, and complete your own classification and impact determination based on your service impact. Treat it as your incident governance problem, even if the root cause is external (Regulation (EU) 2022/2554, Article 18).
Can we reuse our existing ITIL “major incident” severity scale?
Yes, if you map it cleanly to a DORA classification field and can show the criteria used to determine impact. Auditors will look for one consistent method, not parallel scales that conflict (Regulation (EU) 2022/2554, Article 18).
What evidence is most important to keep for Article 18 audits?
Keep the criteria document, the RACI, tool enforcement evidence (mandatory fields/workflow), and incident records showing classification plus impact rationale and approvals. Those artifacts prove the control operates, not just that a policy exists (Regulation (EU) 2022/2554, Article 18).
How do we keep classifications consistent across different incident commanders?
Calibrate using scenario reviews and QA sampling, then update criteria under change control. Consistency comes from tight definitions, required fields, and periodic review, not from training alone (Regulation (EU) 2022/2554, Article 18).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream