Article 25: Testing of ICT tools and systems

To meet the Article 25 requirement on testing of ICT tools and systems, you must run a risk-based, documented set of ICT tests (for example vulnerability scans, scenario-based tests, performance and end-to-end testing), and track remediation to closure within your DORA digital operational resilience testing programme. The outcome regulators look for is repeatable testing coverage plus proof that you fixed what testing found. (Regulation (EU) 2022/2554, Article 25)

Key takeaways:

  • Your testing plan must be risk-based and proportionate, mapped to critical functions, assets, and services. (Regulation (EU) 2022/2554, Article 25)
  • Examiners will ask for evidence of execution and remediation closure, not just a policy or annual plan. (Regulation (EU) 2022/2554, Article 25)
  • You need traceability from test results → risk decision → fix → retest/validation, with clear owners across security, IT, and business. (Regulation (EU) 2022/2554, Article 25)

Article 25 sits inside DORA’s operational resilience testing expectations: you cannot claim resilience if you cannot prove you test the ICT tools and systems that support your business and you close the gaps those tests find. Operationally, this is where many programmes fail. Teams run scans and penetration tests, but they cannot show coverage (what was tested and why), governance (who approved scope and risk decisions), or outcomes (what was fixed and how you verified the fix).

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat Article 25 as a requirement for a single, governed testing “assembly line.” You define your test catalog, map it to assets and critical services, set triggers and frequencies based on risk, standardize reporting, and enforce remediation SLAs through ticketing and exception management.

This page gives requirement-level implementation guidance you can apply immediately: who is in scope, what tests to run, what artifacts to retain, the exam questions you should pre-answer, and a practical execution plan. Primary source: DORA Article 25. (Regulation (EU) 2022/2554, Article 25)

Regulatory text

DORA Article 25(1) requires your digital operational resilience testing programme (from Article 24) to provide, in line with proportionality/risk criteria, for execution of “appropriate tests” such as: vulnerability assessments and scans, open-source analyses, network security assessments, gap analyses, physical security reviews, questionnaires and scanning software solutions, source code reviews where feasible, scenario-based tests, compatibility testing, performance testing, and end-to-end testing. (Regulation (EU) 2022/2554, Article 25)

Operator meaning: regulators expect (a) a defined testing programme, (b) a risk-based selection of test types and scope, and (c) demonstrable execution and follow-through. Your “programme” cannot be an informal set of ad hoc security activities. It must be managed, repeatable, evidenced, and connected to remediation. (Regulation (EU) 2022/2554, Article 25)

Plain-English interpretation (what the requirement really asks for)

Article 25 requires you to prove that your ICT environment is tested in ways that match your risks. That means:

  • You run multiple kinds of tests, not only vulnerability scans.
  • You test what matters most (critical services, supporting infrastructure, key applications, and security controls).
  • You can show that test results drive corrective actions, and that fixes are validated.

A practical framing: coverage, depth, and closure.

  • Coverage: do you test the estate that supports important business services?
  • Depth: do your tests go beyond surface scanning (for example scenario-based and end-to-end tests)?
  • Closure: do findings turn into verified remediation, with managed exceptions?

Who it applies to

Entity scope: DORA applies to regulated financial entities in scope of the regulation. Article 25 obligations sit on the regulated entity, even when ICT is outsourced. (Regulation (EU) 2022/2554)

Operational scope (where teams get tripped up):

  • Production systems and networks supporting important and critical functions.
  • Security tooling (for example vulnerability management, IAM, logging/SIEM, endpoint controls) because failure of these tools undermines resilience.
  • Change delivery pipelines where relevant (for example source code review “where feasible”). (Regulation (EU) 2022/2554, Article 25)
  • Third-party-supported services: you may not run intrusive tests on a third party’s environment, but you still need assurance. Do this through contractual testing clauses, independent reports, right-to-audit language, shared test results, and compensating controls you operate internally.

What you actually need to do (step-by-step)

1) Establish governance and ownership

Create a named Digital Operational Resilience Testing Programme owner (often CISO/Head of ICT Risk) and a compliance oversight role that can evidence challenge and escalation. Tie responsibilities to:

  • Security testing (AppSec, infrastructure security)
  • IT operations (availability, performance, change)
  • Business service owners (critical service validation)
  • Third-party risk owners (test constraints and assurance)

Deliverable: a RACI that includes test execution, risk acceptance, remediation ownership, and sign-off.

2) Define your test catalog (minimum viable list)

Build a catalog aligned to Article 25’s examples. You do not need every test type everywhere; you need a justified selection by risk. (Regulation (EU) 2022/2554, Article 25)

A usable catalog structure:

Test type (Article 25 examples) | Typical scope | Output you must standardize
Vulnerability assessments/scans | Hosts, containers, cloud resources | Findings list, severity rationale, remediation tickets
Open-source analyses | Libraries, dependencies, images | SBOM/dependency findings, patch/upgrade decisions
Network security assessments | Segmentation, exposure, firewall rules | Rule review results, exposure map, fixes
Gap analyses | Controls vs target standard | Gap register, plan, owner, due dates
Physical security reviews | Data centers, offices, access | Issue log, remediation plan, retest
Questionnaires/scanning tools | Third parties, SaaS configurations | Completed artifacts, scoring, follow-ups
Source code reviews (where feasible) | Critical apps, high-risk changes | Code review evidence, SAST findings, remediation
Scenario-based tests | Attack/operational scenarios | Scenario report, lessons learned, actions
Compatibility/performance | Releases, integrations | Test results, acceptance criteria, sign-off
End-to-end testing | Critical business journeys | Runbooks, evidence, defect remediation
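
As an illustration only, the catalog rows above can be held as structured records so that each test run can be checked against its required outputs before it counts as complete. This is a minimal Python sketch under our own naming (`CatalogEntry`, `required_outputs` are hypothetical, not DORA terms), shown with a two-row subset:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CatalogEntry:
    """One row of the Article 25-aligned test catalog."""
    test_type: str
    typical_scope: str
    standard_outputs: tuple[str, ...]

# Illustrative subset of the catalog above; real entries come from your own catalog.
CATALOG = [
    CatalogEntry(
        "Vulnerability assessments/scans",
        "Hosts, containers, cloud resources",
        ("findings list", "severity rationale", "remediation tickets"),
    ),
    CatalogEntry(
        "End-to-end testing",
        "Critical business journeys",
        ("runbooks", "evidence", "defect remediation"),
    ),
]

def required_outputs(test_type: str) -> tuple[str, ...]:
    """Look up the outputs that must exist before a test run is considered complete."""
    for entry in CATALOG:
        if entry.test_type == test_type:
            return entry.standard_outputs
    raise KeyError(f"test type not in catalog: {test_type}")
```

Keeping the catalog as data (rather than a document) lets you enforce the "output you must standardize" column mechanically in your GRC tooling.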

3) Map tests to “what matters” (services → assets → tests)

Create a mapping from:

  • Important business services/critical functions
  • Supporting applications/infrastructure/data flows
  • Required test types and depth

Practical control: maintain a single register that ties Article 25 to concrete ICT controls, accountable owners, and evidence artifacts. This reduces the “we do scanning” problem where nobody can show scope completeness. (Regulation (EU) 2022/2554, Article 25)
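
A minimal sketch of that register, assuming a hypothetical service map keyed service → asset → applied test types (`SERVICE_MAP` and the asset names are ours, for illustration); `coverage_gaps` surfaces the assets that have no test mapped at all, which is exactly the scope-completeness question examiners ask:

```python
# Hypothetical mapping: critical service -> supporting assets -> test types applied.
SERVICE_MAP = {
    "payments": {
        "payments-api": {"vulnerability scan", "end-to-end test"},
        "core-ledger-db": {"vulnerability scan"},
        "hsm-cluster": set(),  # no tests recorded: a coverage gap to surface
    },
}

def coverage_gaps(service_map):
    """Return (service, asset) pairs that have no test type mapped at all."""
    return [
        (service, asset)
        for service, assets in service_map.items()
        for asset, tests in assets.items()
        if not tests
    ]
```

Running this over the register turns "we do scanning" into an auditable statement of what is and is not covered.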

4) Set a risk-based schedule and triggers

Document what drives testing:

  • Routine testing cadence by asset criticality and change velocity
  • Event-based triggers: major changes, incidents, new exposure, new third party, platform migrations

Avoid hardcoding frequencies you cannot sustain. Regulators will prefer a defensible, repeatable model over aspirational schedules with missed execution.

5) Standardize reporting and severity decisions

Define:

  • Severity taxonomy (even if you align to CVSS or internal impact scoring)
  • “Fix, mitigate, accept, or transfer” decision paths
  • Approval levels for risk acceptance

Common hangup: risk acceptance without evidence. Require business owner and security owner sign-off with expiry dates and compensating controls.
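
The dual sign-off, compensating controls, and expiry requirements can be enforced as a single validity check on the acceptance record. A minimal sketch, assuming a hypothetical `RiskAcceptance` record of our own design:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    """Hypothetical risk-acceptance record with the evidence fields described above."""
    finding_id: str
    expires: date
    compensating_controls: list[str]
    business_owner_signoff: bool = False
    security_owner_signoff: bool = False

    def is_valid(self, today: date) -> bool:
        """Acceptance only holds with both sign-offs, at least one compensating
        control, and an unexpired date; anything else must be escalated."""
        return (
            self.business_owner_signoff
            and self.security_owner_signoff
            and bool(self.compensating_controls)
            and today < self.expires
        )
```

An expired acceptance failing `is_valid` is what prevents the risk-acceptance register from becoming a parking lot.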

6) Drive remediation through a controlled workflow

Run remediation through a ticketing or GRC workflow that captures:

  • Finding ID and source
  • Owner
  • Target date and current status
  • Exception path
  • Validation evidence (retest output)

A regulatory-response workflow helps when supervisors ask for “show me all open findings and your escalation path.” Keep legal/compliance in the approval chain for formal responses. (Regulation (EU) 2022/2554, Article 25)
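
The workflow fields above can be sketched as a finding record whose closure is blocked until validation evidence is attached. This is an illustrative model (the `Finding` class and its states are our naming, not a prescribed schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Finding:
    """Hypothetical findings-register row capturing the workflow fields above."""
    finding_id: str
    source: str            # which test produced the finding
    owner: str
    target_date: str
    status: str = "open"   # open -> in_remediation -> closed
    retest_evidence: Optional[str] = None  # reference to retest output

    def close(self) -> None:
        """Closure is blocked unless validation (retest) evidence is attached."""
        if not self.retest_evidence:
            raise ValueError(f"{self.finding_id}: cannot close without retest evidence")
        self.status = "closed"
```

Whether implemented in a ticketing tool or a GRC platform, the design choice is the same: make the "closed" state unreachable without a retest reference.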

7) Validate closure (retest) and report up

Closure is not “ticket closed.” Closure is “fix verified.”

  • Retest vulnerabilities
  • Re-run end-to-end tests after major changes
  • Confirm performance/compatibility outcomes against acceptance criteria

Provide periodic reporting to senior management on coverage, key findings, remediation aging, and accepted risks.
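
The remediation-aging view in that report can be produced directly from the findings register. A minimal sketch over a hypothetical register layout (`REGISTER` rows and field order are ours, for illustration):

```python
from datetime import date

# Hypothetical findings-register rows: (finding_id, severity, opened, status).
REGISTER = [
    ("F-1", "critical", date(2025, 1, 10), "open"),
    ("F-2", "high", date(2025, 3, 1), "closed"),
    ("F-3", "high", date(2025, 2, 1), "open"),
]

def remediation_aging(register, today):
    """Age in days of each still-open finding, oldest (worst) first."""
    open_items = [
        (fid, severity, (today - opened).days)
        for fid, severity, opened, status in register
        if status == "open"
    ]
    return sorted(open_items, key=lambda row: row[2], reverse=True)
```

Sorting oldest-first puts the aging breaches at the top of the management pack, where senior management is expected to see them.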

8) Make third-party testing real (without overreaching)

Where third parties provide ICT services:

  • Require relevant test evidence (penetration test summaries, vulnerability management attestations, configuration baselines) where contractually feasible.
  • Define “customer responsibility” tests for your configuration (for example SaaS posture checks, tenant configuration reviews).
  • Track third-party gaps as remediation items with the same discipline as internal findings.

Required evidence and artifacts to retain

Keep artifacts in an examiner-ready structure (by year/period and by critical service). Minimum set:

  • Testing programme charter and governance (scope, roles, reporting line)
  • Test catalog and methodology documents
  • Asset/service-to-test mapping register
  • Approved test plans and scopes for major exercises
  • Raw outputs (scan results, reports) or controlled references to systems of record
  • Findings register with severity, owners, and dates
  • Remediation tickets, change records, and closure notes
  • Retest/validation evidence
  • Risk acceptance memos with compensating controls and expiry
  • Management reporting packs and minutes showing oversight
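
The minimum set above can also be checked mechanically per period and per critical service. A small sketch, assuming hypothetical short IDs of our own choosing for each artifact class:

```python
# Hypothetical short IDs for the minimum artifact set above (names are ours).
REQUIRED_ARTIFACTS = {
    "programme-charter",
    "test-catalog",
    "mapping-register",
    "test-plans",
    "raw-outputs",
    "findings-register",
    "remediation-tickets",
    "retest-evidence",
    "risk-acceptance-memos",
    "management-reporting",
}

def missing_artifacts(present: set[str]) -> list[str]:
    """List required artifacts not yet stored for a given period/service."""
    return sorted(REQUIRED_ARTIFACTS - present)
```

Running this check before a supervisory request, rather than during one, is the difference between a readiness drill and a scramble.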

Daydream can help by keeping this traceability in one place: requirement mapping, accountable owners, and a supervisory-evidence checklist tied to each test run and remediation cycle.

Common exam/audit questions and hangups

Expect these questions and pre-build answers:

  1. Show your digital operational resilience testing programme and how Article 25 tests are selected. (Regulation (EU) 2022/2554, Article 25)
  2. What systems support your critical services, and when were they last tested?
  3. How do you ensure findings are remediated and validated?
  4. Who can accept risk, and what is your exception process?
  5. How do you cover third-party-provided ICT services where you cannot run intrusive tests?
  6. Demonstrate scenario-based and end-to-end testing for at least one critical service. (Regulation (EU) 2022/2554, Article 25)

Frequent implementation mistakes (and how to avoid them)

  • Mistake: “We run scans” treated as the whole programme.
    Fix: expand to scenario-based and end-to-end tests for critical services; document why those tests fit your risk profile. (Regulation (EU) 2022/2554, Article 25)

  • Mistake: No coverage map.
    Fix: build service-to-asset mapping and test coverage matrix. If you cannot show scope, you will not pass a credibility test.

  • Mistake: Findings close without validation.
    Fix: require retest evidence for closure, and block closure without proof.

  • Mistake: Risk acceptance becomes a parking lot.
    Fix: add expiries, compensating controls, and management reporting of accepted risk.

  • Mistake: Third-party blind spots.
    Fix: define what assurance you require contractually and what you test in your own environment (config, monitoring, access paths).

Enforcement context and risk implications

No public enforcement cases were provided in the available source catalog for Article 25, so this page does not summarize enforcement outcomes.

Operationally, the risk is straightforward: weak testing governance increases the chance that material vulnerabilities, integration failures, or capacity issues persist until they become incidents. Supervisory scrutiny tends to intensify after incidents, so your ability to show disciplined testing and remediation history matters. (Regulation (EU) 2022/2554, Article 25)

Practical 30/60/90-day execution plan

First 30 days: establish control of the programme

  • Name the programme owner; publish RACI and escalation path.
  • Inventory critical services and supporting ICT assets at a usable granularity.
  • Define the test catalog aligned to Article 25 examples and document selection criteria. (Regulation (EU) 2022/2554, Article 25)
  • Stand up a single findings register with consistent fields and owners.

By 60 days: prove execution and closure mechanics

  • Run a first wave of tests on the highest-priority assets (at least one technical test and one scenario/end-to-end style exercise for a critical service). (Regulation (EU) 2022/2554, Article 25)
  • Implement the remediation workflow (ticketing integration, exception approvals, retest requirement).
  • Produce a management report with coverage snapshot, top findings, remediation status, and accepted risks.

By 90 days: make it repeatable and audit-ready

  • Expand coverage to the next tier of assets and third-party-supported services.
  • Run a readiness drill: can you produce evidence within a short window for a supervisor request?
  • Validate that closure evidence exists and is stored consistently.
  • Finalize the examiner pack: programme docs, test plans, sample reports, findings-to-closure traceability, governance minutes.

Frequently Asked Questions

What does “appropriate tests” mean under Article 25?

It means a risk-based set of test types that fit your ICT risk profile, not a single standard test everywhere. Article 25 lists examples you can select from, but you must justify scope and depth. (Regulation (EU) 2022/2554, Article 25)

Do we need to perform source code reviews?

Article 25 includes “source code reviews where feasible,” so you should define feasibility criteria (criticality, in-house vs third-party code, SDLC maturity) and document where you do and do not perform them. Keep the rationale as evidence. (Regulation (EU) 2022/2554, Article 25)

How do we operationalize scenario-based testing without a big red-team programme?

Start with one critical service and run a tabletop plus a technical simulation where you validate detection, response, and recovery steps. Capture scenario design, participants, results, and remediation actions as programme artifacts. (Regulation (EU) 2022/2554, Article 25)

What’s the minimum evidence auditors expect for vulnerability scanning?

They will ask for scope definition, scan outputs (or controlled references), triage decisions, remediation tickets, and retest proof for closure. A dashboard alone is rarely enough without underlying records. (Regulation (EU) 2022/2554, Article 25)

How should we address ICT services run by third parties where we can’t scan directly?

Document the constraint, require assurance artifacts contractually where possible, and test what you control (tenant configuration, access paths, logging, integrations). Track third-party findings and follow-ups in the same remediation workflow. (Regulation (EU) 2022/2554)

How does Article 25 connect to Article 24?

Article 25 describes what your testing programme must contain (execution of appropriate tests). Article 24 is the program-level requirement; Article 25 is the concrete testing content you must implement and evidence. (Regulation (EU) 2022/2554, Article 25)

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream