Article 10: Detection
To meet the Article 10 detection requirement, you must run mechanisms that promptly detect anomalous activities (including network performance issues and ICT-related incidents) and that identify potential material single points of failure, aligned with your broader incident processes under Article 17. Operationalize this by defining detection coverage, instrumenting telemetry, setting triage/escalation rules, and retaining evidence that detection actually works in production (Regulation (EU) 2022/2554, Article 10).
Key takeaways:
- Detection under DORA is broader than security monitoring; it also covers availability/performance and single points of failure (Regulation (EU) 2022/2554, Article 10).
- Supervisors will look for mechanisms + operating proof: alerts, thresholds, on-call actions, and post-incident improvements tied to control owners.
- Your fastest path is a detection control register that maps services, telemetry sources, owners, and evidence artifacts to Article 10.
Article 10 of DORA forces a practical question: “If something abnormal happens in our ICT environment, how fast do we notice, and how do we know which weaknesses could take us down?” The requirement is short, but it creates real operational expectations: detection cannot be ad hoc, limited to a SIEM, or dependent on a few individuals watching dashboards. It has to be a repeatable set of mechanisms that catch anomalous activity across security, infrastructure, applications, and networks, and it must help you spot “material single points of failure” before they cause an ICT-related incident (Regulation (EU) 2022/2554, Article 10).
For a Compliance Officer, CCO, or GRC lead, the challenge is translation: converting this article into controls that engineers can run daily and that you can defend in an exam. The most effective pattern is to treat detection as a defined capability with coverage targets by critical service, integrated with incident handling expectations referenced in Article 17, and backed by evidence that alerts fire, are triaged, and drive remediation (Regulation (EU) 2022/2554, Article 10).
This page gives requirement-level guidance you can assign, track, and audit.
Regulatory text
DORA Article 10(1) excerpt: “Financial entities shall have in place mechanisms to promptly detect anomalous activities, in accordance with Article 17, including ICT network performance issues and ICT-related incidents, and to identify potential material single points of failure.” (Regulation (EU) 2022/2554, Article 10)
Operator interpretation (what you must be able to show):
- You have mechanisms (tools + processes + people) that detect:
- Anomalous activities (security anomalies, misconfigurations, unusual access, abnormal system behavior).
- ICT network performance issues (latency, packet loss, saturation, DNS issues, routing anomalies).
- ICT-related incidents (events that meet your incident criteria, including availability and integrity issues).
- Detection is aligned with incident handling expectations referenced in Article 17, meaning detection must feed triage, escalation, and incident management, not end at an alert (Regulation (EU) 2022/2554, Article 10).
- You can identify potential material single points of failure (SPOFs) as part of detection and monitoring, not only during architecture reviews (Regulation (EU) 2022/2554, Article 10).
Plain-English requirement (what “good” looks like)
If something starts going wrong, you find out quickly, you can tell whether it’s an incident, and you can escalate it through your incident process. Separately, you continuously watch for fragile dependencies where one failure could cause material disruption, and you can prove you addressed them.
A practical way to frame the requirement in your control library:
- Detection coverage: For each critical business service and its supporting ICT assets, you define what “abnormal” means and how it is detected (signals, thresholds, correlation rules).
- Response linkage: Every meaningful alert has an owner, severity logic, and an escalation path into incident management.
- SPOF discovery: You maintain a method to detect (or systematically identify) critical single points of failure and track them to remediation.
Who it applies to
Entity scope: “Financial entities” in DORA scope (Regulation (EU) 2022/2554, Article 10).
Operational scope (where you must implement it):
- Production ICT environments supporting critical or important functions.
- Networks, endpoints, identity systems, core banking/trading/claims platforms, customer channels, and shared platforms (logging, IAM, CI/CD).
- Third-party delivered components that are part of service delivery (cloud hosting, managed security, core SaaS), where you depend on their signals and status to detect anomalies.
What you actually need to do (step-by-step)
1) Translate Article 10 into a detection control register
Build a single register that answers, per critical service:
- Service / system name
- Business owner + technical owner + on-call group
- Detection mechanisms (SIEM rules, EDR alerts, APM monitors, NDR signals, synthetic tests, cloud-native alarms)
- Key failure modes (security compromise, capacity exhaustion, dependency outage, certificate expiry, auth failures)
- SPOFs to watch (single DNS resolver, single region, single IAM provider dependency, single queue, single admin account)
- Evidence artifacts produced (dashboards, alert tickets, runbooks, post-incident reviews)
This creates traceability from requirement → control → evidence, which is what exam teams ask for in practice.
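To make that traceability concrete, the register can start life as a simple structured record before it moves into a GRC tool. A minimal Python sketch follows; the service names, owners, and field choices are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class DetectionControl:
    """One row of an Article 10 detection control register (illustrative schema)."""
    service: str
    business_owner: str
    technical_owner: str
    on_call_group: str
    mechanisms: list          # e.g. ["SIEM auth-anomaly rule", "APM latency monitor"]
    failure_modes: list       # e.g. ["capacity exhaustion", "certificate expiry"]
    spofs_watched: list       # e.g. ["single DNS resolver"]
    evidence_artifacts: list  # e.g. ["alert ticket export", "runbook link"]

    def gaps(self):
        """Return the empty fields that would surface as coverage gaps in an exam."""
        return [name for name, value in vars(self).items()
                if isinstance(value, list) and not value]

# Hypothetical example row for a critical service.
register = [
    DetectionControl(
        service="payments-api",
        business_owner="Head of Payments",
        technical_owner="Payments SRE lead",
        on_call_group="sre-payments",
        mechanisms=["SIEM auth-anomaly rule", "APM p99 latency monitor"],
        failure_modes=["dependency outage", "certificate expiry"],
        spofs_watched=["single DNS resolver"],
        evidence_artifacts=[],  # gap: no evidence mapped yet
    ),
]

for row in register:
    print(row.service, "gaps:", row.gaps())
```

The `gaps()` check is the point: an empty evidence column is exactly what an examiner will find first, so make the register complain about it before they do.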
2) Define “anomalous activity” in operational terms
Write detection standards that engineers can implement:
- Security anomalies: unusual privileged access, impossible travel, high-rate authentication failures, new admin creation, suspicious process execution.
- Performance anomalies: latency deviation, error-rate spikes, dropped connections, saturation signals, failed health checks.
- Reliability anomalies: crash loops, failed deployments, dependency timeouts, queue backlogs, replication lag.
Avoid vague terms like “monitor everything.” Define what triggers a ticket and what triggers an incident bridge.
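The "ticket vs. incident bridge" distinction can be written down as explicit thresholds per signal. A sketch of that standard in Python; the signal names and numbers are placeholders to agree internally, not regulatory values:

```python
# Illustrative thresholds per signal -- the numbers are assumptions your
# engineering and risk teams would calibrate, not values from DORA.
RULES = {
    "auth_failures_per_min": {"ticket": 50,  "bridge": 500},
    "p99_latency_ms":        {"ticket": 800, "bridge": 3000},
    "error_rate_pct":        {"ticket": 2.0, "bridge": 10.0},
}

def classify(signal: str, value: float) -> str:
    """Map a measured signal value to 'ok', 'ticket', or 'bridge'."""
    rule = RULES[signal]
    if value >= rule["bridge"]:
        return "bridge"   # open an incident bridge and page on-call
    if value >= rule["ticket"]:
        return "ticket"   # create a triage ticket for working hours
    return "ok"
```

Writing the standard this way makes "what triggers what" reviewable and versionable, which is far easier to defend than tribal knowledge in a SOC channel.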
3) Ensure telemetry exists and is fit for purpose
For each critical service, verify you ingest and retain:
- Logs (auth, admin activity, application logs, audit trails)
- Metrics (CPU/memory, error rates, queue depth, saturation, response times)
- Traces (for transaction-level anomaly localization where applicable)
- Network signals (flow logs, firewall events, DNS logs where available)
GRC’s role: confirm coverage exists for all critical services; engineering’s role: implement collectors, pipelines, and alerting.
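GRC's coverage confirmation can itself be a small, repeatable check rather than a spreadsheet exercise. A hedged sketch, assuming a per-service inventory of ingested telemetry types (the service names are hypothetical; traces are omitted from the required set because they apply only "where applicable"):

```python
# Baseline telemetry every critical service must ingest (traces handled case by case).
REQUIRED_TELEMETRY = {"logs", "metrics", "network"}

# Hypothetical inventory of what each critical service actually ships today.
ingested = {
    "payments-api":    {"logs", "metrics", "traces", "network"},
    "customer-portal": {"logs", "metrics"},  # missing network signals
}

def coverage_gaps(service_telemetry):
    """Return, per service, the required telemetry types that are missing."""
    return {svc: sorted(REQUIRED_TELEMETRY - have)
            for svc, have in service_telemetry.items()
            if REQUIRED_TELEMETRY - have}

print(coverage_gaps(ingested))
```

Run on a cadence, the output doubles as evidence that coverage is reviewed, not just asserted.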
4) Implement triage, escalation, and incident linkage (Article 17 alignment)
Article 10 explicitly points you to incident handling. Your detection output must connect to:
- A ticketing or incident platform (alerts create cases with timestamps)
- Severity rules (what is informational vs. critical)
- Escalation matrix (SOC → SRE → service owner → crisis management)
- Communications triggers (internal notifications, third-party notification steps if a provider is implicated)
You should be able to show a straight line from “alert fired” to “triage decision” to “incident declared” (or “false positive closed with rationale”) (Regulation (EU) 2022/2554, Article 10).
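That straight line is easiest to evidence when every triage action appends a timestamped event to the alert's case record. A minimal sketch of that pattern (the IDs, group names, and field layout are assumptions, not a specific ticketing platform's API):

```python
from datetime import datetime, timezone

def utcnow() -> str:
    """ISO 8601 UTC timestamp, so the trail sorts and compares cleanly."""
    return datetime.now(timezone.utc).isoformat()

# Hypothetical case opened automatically when an alert fires.
case = {
    "alert_id": "ALRT-1234",
    "fired_at": utcnow(),
    "severity": "critical",
    "owner": "soc-tier1",
    "events": [],
}

def record(case: dict, action: str, note: str) -> None:
    """Append a timestamped triage event so the alert-to-incident trail is auditable."""
    case["events"].append({"at": utcnow(), "action": action, "note": note})

record(case, "triage", "Confirmed anomalous privileged access on payments-api")
record(case, "escalate", "Paged sre-payments; incident INC-567 declared")
```

Whatever tooling you use, the property to preserve is the same: each step from alert to disposition carries its own timestamp and rationale.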
5) Identify and manage material single points of failure
Treat SPOF identification as a governed workstream:
- Discovery methods: architecture reviews, dependency mapping, monitoring gaps review, incident trend analysis, chaos testing outputs (where used), third-party dependency review.
- Materiality criteria: tie “material” to your critical services and impact tolerances used internally for resilience decisions.
- Remediation tracking: log SPOFs as risk issues with an owner, target fix, interim mitigations, and closure evidence.
Detection-specific angle: add monitors that reveal SPOF stress before failure (for example, single database replica lag, certificate expiry monitors, single-region health checks).
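Certificate expiry is the simplest of those pre-failure signals to automate. A sketch of an expiry monitor, with hypothetical hostnames and dates and an illustrative 30-day warning window:

```python
from datetime import date, timedelta

def cert_expiry_alerts(certs: dict, today: date = None, warn_days: int = 30) -> list:
    """Flag certificates expiring within warn_days -- a pre-failure SPOF signal."""
    today = today or date.today()
    return [name for name, expiry in certs.items()
            if expiry - today <= timedelta(days=warn_days)]

# Hypothetical certificate inventory with expiry dates.
certs = {
    "api.example.test":    date(2025, 1, 10),
    "portal.example.test": date(2026, 6, 1),
}

print(cert_expiry_alerts(certs, today=date(2025, 1, 1)))
```

The same shape generalizes to other SPOF stress monitors: replica lag against a threshold, single-region health checks against an error budget.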
6) Prove the mechanisms work: drills and validation
Run periodic readiness drills that validate:
- Alerts trigger as expected (simulation or test events)
- Paging/on-call works end-to-end
- Runbooks are accurate
- Corrective actions are tracked to closure with validation evidence
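A drill only counts as evidence if every validation step passed and produced an artifact. One way to sketch that acceptance rule (the step names and record shape are illustrative assumptions):

```python
def drill_passed(drill: dict) -> bool:
    """A drill counts as evidence only if every required step passed with an artifact."""
    required = {"alert_fired", "page_acknowledged", "runbook_followed", "actions_closed"}
    results = {s["step"]: s for s in drill["steps"]}
    missing = required - results.keys()
    failed = [name for name, r in results.items()
              if not (r["passed"] and r["artifact"])]
    return not missing and not failed

# Hypothetical drill record: three steps pass, one corrective action is still open.
drill = {"steps": [
    {"step": "alert_fired",       "passed": True,  "artifact": "alert export"},
    {"step": "page_acknowledged", "passed": True,  "artifact": "paging log"},
    {"step": "runbook_followed",  "passed": True,  "artifact": "runbook diff"},
    {"step": "actions_closed",    "passed": False, "artifact": ""},
]}

print(drill_passed(drill))
```

Holding drills to an explicit pass rule like this keeps "we ran a drill" from quietly becoming "we ran a drill and ignored the open items."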
This is also where Daydream fits naturally: use Daydream to maintain the requirement-to-control-to-evidence register, manage exam requests, and track corrective actions with consistent artifacts and owner attestations.
Required evidence and artifacts to retain
Keep evidence in a form an examiner can review without access to specialized tooling:
Governance and design
- Detection policy/standard (definitions, scope, roles)
- Detection control register mapped to Article 10
- RACI for detection operations (SOC/SRE/IT ops/app teams)
- Alert severity taxonomy and escalation matrix
Operational proof
- Samples of alerts and linked tickets (with timestamps, triage notes, disposition)
- Dashboards/screenshots exported on a cadence for critical services
- On-call schedules and evidence of paging/acknowledgement (where applicable)
- Incident records showing detection source and time-to-detect narrative
SPOF management
- SPOF inventory with risk rating/materiality rationale
- Remediation plans and change records
- Evidence of mitigations implemented (architecture diagrams, redundancy proof, test results)
Testing and continuous improvement
- Drill reports, tabletop outputs, detection tests
- Post-incident reviews tying root cause to new/adjusted detection rules
Common exam/audit questions and hangups
Auditors and supervisors tend to probe four areas:
- Coverage: “Show detection for each critical service, not just the data center or SOC scope.”
- Speed and promptness: “How do you know detection is prompt?” Expect follow-ups on alert latency, handoffs, and whether critical alerts page humans.
- False positives: “How do you tune alerts and avoid alert fatigue without going blind?”
- SPOFs: “Where is your inventory of material single points of failure, and how do you know you found the important ones?”
Hangup to expect: teams present tool screenshots but cannot map them to business services and material impacts. Fix it with the detection control register and service mapping.
Frequent implementation mistakes (and how to avoid them)
- Mistake: SIEM-only interpretation. Article 10 explicitly includes network performance and incidents, so you need NOC/SRE-grade monitoring too (Regulation (EU) 2022/2554, Article 10).
  Avoid: build one detection view across security + reliability.
- Mistake: No ownership per alert. Unowned alerts become noise.
  Avoid: require every critical alert to have a named resolver group and runbook link.
- Mistake: SPOFs handled informally. “We know the risks” does not pass scrutiny.
  Avoid: maintain a SPOF log with remediation status and evidence.
- Mistake: Evidence scattered across tools. You will fail response-time expectations during an exam.
  Avoid: centralize evidence pointers and artifacts in a GRC system (Daydream works well as the front door for requests and artifact mapping).
Risk implications (what goes wrong if you miss this)
Weak detection increases:
- Business impact from ICT incidents because the response starts late.
- Supervisory risk because you cannot demonstrate control operation.
- Third-party dependency risk because outages in providers become “customer complaints” before they become “detected incidents.”
Even absent published enforcement cases, treat this as a control that will be tested through incident walkthroughs and evidence sampling.
Practical execution plan (30/60/90)
Work in phases rather than committing to artificially precise dates.
First 30 days (Immediate)
- Assign an executive owner and operational owners (SOC lead, SRE/IT ops lead, service owners).
- Build the first version of the Article 10 detection control register for critical services.
- Identify top detection gaps: missing logs, missing monitors, no paging, no runbooks.
- Stand up a regulatory-response workflow for document requests, escalations, and remediation tracking (aligns with exam reality).
By 60 days (Near-term)
- Implement or tune detection for the highest-risk services: auth/IAM, core transaction platforms, network edge, cloud control plane logs.
- Formalize severity and escalation rules; ensure alerts open tickets with required fields.
- Create an initial SPOF inventory and open remediation items with owners and milestones.
- Run a detection-to-incident drill and capture evidence (alerts, tickets, bridge logs, follow-up actions).
By 90 days (Stabilize and prove operation)
- Expand coverage to remaining critical services and key third-party dependencies.
- Add recurring review: alert tuning, coverage review, SPOF remediation review.
- Validate that post-incident reviews feed back into new detection rules and SPOF reduction.
- Package an “exam-ready” evidence set: register export, artifact index, sample cases, drill report.
Frequently Asked Questions
Does Article 10 require a SIEM?
Article 10 requires “mechanisms” to detect anomalies and incidents; it does not mandate specific tools (Regulation (EU) 2022/2554, Article 10). A SIEM can be part of the approach, but you also need performance and availability detection for critical services.
What counts as “promptly detect”?
DORA does not define a numeric threshold in the provided text (Regulation (EU) 2022/2554, Article 10). Define internal targets by severity and service criticality, then keep evidence showing detection and triage happen within those targets.
How do we handle detection for cloud and SaaS third parties?
Treat third-party telemetry as part of your detection mechanisms: provider health notifications, audit logs, and API events should feed your monitoring and incident workflow. Document what signals you receive, how you triage them, and what you do when the third party is the suspected cause.
What is a “material single point of failure” in practice?
It is a dependency where one failure could materially disrupt a critical service (Regulation (EU) 2022/2554, Article 10). Common examples include single-region deployments, single IAM dependency, or one admin account that can block recovery; log them, set monitors where possible, and track remediation.
What evidence is most persuasive to supervisors?
A mapped register from Article 10 to detection controls, plus real operational records: alert tickets with timestamps and dispositions, incident records showing detection sources, and remediation closure evidence for identified SPOFs (Regulation (EU) 2022/2554, Article 10).
How should GRC work with SOC/SRE without becoming a bottleneck?
Keep GRC focused on scope, ownership, and evidence consistency, while SOC/SRE own technical implementation and tuning. Use a shared register and a lightweight change log so alert changes and new monitors automatically update the evidence map.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream