Article 20: Harmonisation of reporting content and templates

To operationalize Article 20 (harmonisation of reporting content and templates), you need a repeatable way to produce regulator-ready ICT reporting outputs in the standardized content and templates that EU supervisors will expect once the ESAs define them. Build a single reporting data model, map sources to template fields, and prove you can generate complete, consistent submissions on demand. (Regulation (EU) 2022/2554, Article 20)

Key takeaways:

  • Article 20 is about standardized reporting content and templates developed by the ESAs, so your job is to prepare your reporting machinery to conform. (Regulation (EU) 2022/2554, Article 20)
  • Treat this as a data + workflow requirement: defined fields, controlled inputs, ownership, approvals, and submission-ready exports.
  • The fastest path is a reporting register that links each template field to an authoritative data source, control owner, and retained evidence.

Article 20 sits in a practical corner of DORA: it drives harmonisation. Instead of each competent authority asking for incident or ICT risk information in different formats, the ESAs (via their Joint Committee) will develop common reporting content and templates, in consultation with ENISA and the ECB. (Regulation (EU) 2022/2554, Article 20)

For a CCO or GRC lead, the operational question is simple: can your organization reliably produce complete, consistent, auditable reporting outputs when supervisors require the harmonised template? You do not control what the final template fields will be, but you fully control whether your reporting program has (1) a stable data model, (2) governed inputs, (3) clear accountability, and (4) evidence that reports are produced the same way every time.

This page translates Article 20 into a requirement-level execution plan: who should own it, what to build, what evidence to keep, and how to avoid common traps (like assembling reports manually from emails and screenshots). It also gives a practical 30/60/90-day plan you can run as a mini-program inside your broader DORA readiness work.

Regulatory text

What the law says (excerpt): “The ESAs, through the Joint Committee, and in consultation with ENISA and the ECB, shall develop:” (Regulation (EU) 2022/2554, Article 20)

Operator interpretation of the excerpt
Article 20 is not written as “you shall fill out template X.” It is written as “the ESAs shall develop” harmonised reporting content and templates. Your operational obligation is indirect but real: you must be able to adopt and execute the harmonised template(s) once they exist, without re-inventing your reporting process each time a supervisor asks for information. (Regulation (EU) 2022/2554, Article 20)

What you must be able to do in practice

  • Produce required ICT reporting outputs in a standard structure (fields, definitions, validation rules) when the harmonised templates are issued. (Regulation (EU) 2022/2554, Article 20)
  • Show that reporting data is traceable back to authoritative systems and owners, and that changes are controlled.
  • Demonstrate a governed workflow for drafting, review, approval, and submission (including legal/compliance sign-off).

Plain-English requirement

Prepare your reporting program so it can quickly conform to ESA-defined standard templates for ICT-related reporting. Build a controlled “reporting factory” (data mapping, validation, workflow, and evidence retention) so you can generate regulator-ready reports consistently, even under time pressure.

Who it applies to

Entity scope

  • Financial entities in scope of DORA that must report ICT-related information to competent authorities (the specific reporting topics depend on other DORA provisions and related standards). (Regulation (EU) 2022/2554)

Operational context

This requirement lands across multiple operating teams:

  • GRC / Compliance: owns regulatory interpretation, submission governance, and audit readiness.
  • ICT risk management: owns risk data, control status, and remediation reporting inputs.
  • Security operations / incident management: owns incident records, timelines, impact assessments, and post-incident actions.
  • Third-party risk management (TPRM): owns third-party involvement data, contract references, and concentration/criticality attributes where they feed reporting.
  • Data/IT (platform owners): owns the systems of record and export mechanics.

What you actually need to do (step-by-step)

Step 1: Appoint a single accountable owner and a cross-functional reporting “bench”

Deliverable: a RACI for “DORA harmonised reporting outputs.”

  • Name an accountable executive owner (often the CISO, COO, or CCO, depending on your model).
  • Name operational owners for each data domain (incidents, assets/services, third parties, risk exceptions, remediation).

Why examiners care: if ownership is split, reports become inconsistent and late. Your goal is a single point of accountability for reporting readiness.

Step 2: Build a “harmonised reporting readiness register” (one table, not ten documents)

Create a register that you will update once ESA templates are available. Start it now with placeholders.

Minimum columns to include

  • Reporting object (e.g., “ICT incident report,” “major incident report,” “ICT risk report”)
  • Template field name (placeholder until final template exists) (Regulation (EU) 2022/2554, Article 20)
  • Field definition / business rule (placeholder)
  • System of record (ticketing, SIEM case system, GRC tool, CMDB, vendor inventory)
  • Data owner (person + team)
  • Transformation logic (how you derive the value)
  • Validation checks (completeness, allowed values, cross-field consistency)
  • Approval step (who signs off)
  • Evidence retained (artifact type + location)

This register becomes your control plane. Daydream can hold this register as a living compliance-to-evidence map so you can answer “where did that number come from?” without a meeting.
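One way to make the register machine-readable from day one is to model each row as a structured record. The sketch below uses a Python dataclass; every field name is a placeholder of our own invention, not a final ESA template field:

```python
from dataclasses import dataclass, field, asdict


@dataclass
class RegisterRow:
    """One row of the harmonised-reporting readiness register.

    All field names are placeholders until the final ESA templates exist.
    """
    reporting_object: str           # e.g. "major incident report"
    template_field: str             # placeholder until the template is final
    definition: str                 # field definition / business rule
    system_of_record: str           # ticketing, SIEM case system, GRC tool, CMDB
    data_owner: str                 # person + team
    transformation: str             # how the value is derived
    validations: list = field(default_factory=list)
    approver: str = ""
    evidence_location: str = ""


row = RegisterRow(
    reporting_object="major incident report",
    template_field="incident_detection_time",
    definition="UTC timestamp when monitoring first flagged the event",
    system_of_record="SIEM case system",
    data_owner="SecOps / incident management",
    transformation="copied verbatim from the SIEM case record",
    validations=["not blank", "ISO 8601", "detection >= start"],
    approver="ICT risk manager",
    evidence_location="evidence-repo/incidents/",
)
```

Because each row is typed, you can export the whole register to a spreadsheet or GRC tool with `asdict(row)` and keep one source of truth instead of ten documents.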

Step 3: Normalize your underlying data model before you see the final template

Even without the final ESA template, you can standardize the common failure points:

  • Entity identifiers: consistent naming for legal entity, branch, and service lines.
  • Service taxonomy: consistent naming for “critical/important functions” as your organization defines them under DORA program scope. (Regulation (EU) 2022/2554)
  • Third-party identifiers: one unique ID per third party; consistent linkage between vendor inventory, contracts, and service ownership.
  • Incident core fields: timestamps, detection method, impacted services, root cause category, containment status, customer impact narrative.

Control objective: a report should be an export, not a hand-built story.
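The "export, not a hand-built story" objective can be sketched as a typed incident record with a deterministic export function. The schema below is illustrative only, not a mandated format; field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional
from datetime import datetime


@dataclass(frozen=True)
class IncidentCore:
    """Normalized incident core fields, shared by all downstream reports."""
    incident_id: str
    legal_entity: str               # consistent entity identifier
    impacted_services: tuple        # drawn from the shared service taxonomy
    third_party_ids: tuple          # one unique ID per third party
    start_time: datetime
    detection_time: datetime
    containment_time: Optional[datetime]
    root_cause_category: str
    customer_impact: str


def export_row(i: IncidentCore) -> dict:
    """Deterministic export: the same record always yields the same row."""
    return {
        "incident_id": i.incident_id,
        "legal_entity": i.legal_entity,
        "impacted_services": ";".join(i.impacted_services),
        "third_party_ids": ";".join(i.third_party_ids),
        "start_time": i.start_time.isoformat(),
        "detection_time": i.detection_time.isoformat(),
        "root_cause_category": i.root_cause_category,
    }
```

The point of the frozen dataclass is that a report value can never drift from its source record; if the template changes, you change `export_row`, not the data.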

Step 4: Design the reporting workflow like a regulated submission

Workflow stages to implement

  1. Trigger: regulatory reporting event or supervisory request intake
  2. Draft: data pull + narrative assembly
  3. Validation: automated checks + peer review
  4. Approval: compliance + legal review, plus accountable executive sign-off where required
  5. Submission: secure channel, version locked
  6. Post-submission: errata handling, lessons learned, CAPA linkage

Operational detail that prevents pain: define a rule for “authoritative narrative owner” (usually incident commander for incident-related reports, ICT risk manager for risk reports). Otherwise you get six narratives and none are consistent.
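A lightweight way to enforce the stage ordering is an explicit state machine, so a report can never skip validation or approval. The sketch below mirrors the six stages above; the implementation is an assumption, not a prescribed design:

```python
from enum import Enum


class Stage(Enum):
    TRIGGER = 1
    DRAFT = 2
    VALIDATION = 3
    APPROVAL = 4
    SUBMISSION = 5
    POST_SUBMISSION = 6


# Each stage may only advance to the next one; no shortcuts to submission.
ALLOWED = {
    Stage.TRIGGER: Stage.DRAFT,
    Stage.DRAFT: Stage.VALIDATION,
    Stage.VALIDATION: Stage.APPROVAL,
    Stage.APPROVAL: Stage.SUBMISSION,
    Stage.SUBMISSION: Stage.POST_SUBMISSION,
}


def advance(current: Stage, target: Stage) -> Stage:
    """Move a report to the next stage, rejecting skipped steps."""
    if ALLOWED.get(current) is not target:
        raise ValueError(f"cannot move from {current.name} to {target.name}")
    return target
```

Encoding the workflow this way makes "the draft went out without legal review" a code-level impossibility rather than a policy reminder.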

Step 5: Implement field-level controls (completeness, accuracy, consistency)

Use a small set of controls that scale:

  • Completeness control: required fields cannot be blank without an exception reason.
  • Accuracy control: sampled trace-back from report values to source records.
  • Consistency control: cross-checks (e.g., incident start time <= detection time <= containment time).
  • Change control: if a report is resubmitted, track deltas and approval.

Document these as reporting controls in your control library, and link them to evidence artifacts in a single place.
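The completeness and consistency controls above can be automated in a few lines. The field names and the `__exception_reason` convention below are assumptions for illustration, assuming ISO 8601 timestamp strings (which sort lexicographically):

```python
def validate_report(row: dict, required: list) -> list:
    """Return a list of findings; an empty list means the row passes."""
    findings = []

    # Completeness control: required fields cannot be blank
    # without a documented exception reason.
    for f in required:
        if not row.get(f) and not row.get(f"{f}__exception_reason"):
            findings.append(f"missing required field: {f}")

    # Consistency control: start <= detection <= containment (when present).
    times = [row.get("start_time"), row.get("detection_time"),
             row.get("containment_time")]
    known = [t for t in times if t is not None]
    if known != sorted(known):
        findings.append(
            "timestamp ordering violated: start <= detection <= containment")

    return findings
```

Run this at the Validation stage of the workflow and attach the findings list to the evidence bundle; a non-empty result blocks progression to approval.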

Step 6: Run readiness drills (tabletop, then “export test”)

Run a drill that simulates: “Supervisor asks for harmonised report output by end of day.”

  • Timebox the exercise.
  • Capture gaps as corrective actions.
  • Re-run until the work is boring.

This is one of the few ways to prove operational readiness without waiting for an incident.

Required evidence and artifacts to retain

Keep evidence that proves repeatability and traceability:

Governance

  • Reporting RACI and escalation path
  • Reporting procedure / work instruction (versioned)
  • Approval matrix (who signs what)

Data and mapping

  • Reporting readiness register (field-to-source mapping)
  • Data dictionary for common reporting elements (incident fields, service taxonomy, third-party identifiers)
  • Validation rules documentation and test results

Execution evidence

  • Completed report outputs (draft and final versions) with version history
  • Source extracts (or immutable references) used to populate the report
  • Validation results (automated checks, peer review notes)
  • Approval records (legal/compliance sign-off)
  • Submission record (date/time, channel, confirmation)

Remediation

  • Corrective action log for drill findings and reporting defects
  • Evidence of closure (updated mappings, updated workflow, updated controls)

Common exam/audit questions and hangups

Expect these questions in a supervisory review or internal audit:

  • “Show me how you would populate the harmonised template fields from your systems of record.” (Regulation (EU) 2022/2554, Article 20)
  • “Who certifies accuracy of incident timelines and impact assessments?”
  • “Where do you store the final submitted version, and how do you prevent after-the-fact edits?”
  • “If a third party is involved, how do you confirm facts without relying on informal emails?”
  • “How do you handle corrections after submission?”

Hangups that slow teams down:

  • No single inventory for services, third parties, and incidents that ties together cleanly.
  • Approvals are ad hoc, so reports get stuck in “review” with no deadline owner.
  • The organization can produce a narrative but cannot back it with traceable source records.

Frequent implementation mistakes and how to avoid them

  1. Manual report assembly as the default
    Avoid it: design exports and mappings; use manual steps only for narrative and documented exceptions.

  2. No field-level definitions
    Avoid it: define “what counts” for each key reporting element (timestamps, impact, affected services) in your data dictionary.

  3. Ownership by committee
    Avoid it: one accountable owner, domain owners for inputs, and a defined sign-off chain.

  4. Evidence scattered across inboxes
    Avoid it: one controlled repository, with immutable records and version history for submissions.

  5. Drills that produce slides, not fixes
    Avoid it: every gap becomes a tracked corrective action with a closure test.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for Article 20, so this page does not cite case outcomes.

Practical risk still exists:

  • Supervisory friction risk: inconsistent reporting outputs can trigger follow-up requests, deeper reviews, and loss of credibility during an ongoing incident.
  • Operational risk: if reporting relies on heroic manual effort, you increase error probability when the organization is already under stress.
  • Accountability risk: without clear ownership and approvals, submissions may go out without proper legal/compliance review.

Practical 30/60/90-day execution plan

No sourced timelines exist for “days to comply,” so treat these as implementation phases you can adjust to your operating cadence.

First 30 days (Immediate foundation)

  • Assign accountable owner and publish RACI.
  • Stand up the reporting readiness register with placeholder fields and known reporting objects.
  • Inventory systems of record for incidents, services, and third parties; pick the authoritative source for each.
  • Define a minimum reporting workflow with compliance/legal sign-off gates.

Next 60 days (Build and test the reporting factory)

  • Create the data dictionary for core reporting elements (incident timeline fields, impact fields, service taxonomy, third-party identifiers).
  • Implement validation checks and a standard report package structure (draft, final, evidence bundle).
  • Run a readiness drill; record gaps; open corrective actions with owners and due dates.

Next 90 days (Operationalize and prove repeatability)

  • Close corrective actions from the drill and re-test with a second drill.
  • Implement version control and immutable storage for submissions and evidence bundles.
  • Add sampling-based traceability checks (report value to source record) and retain results.
  • Put the program on a steady cadence: periodic drills, mapping updates when systems change, and review of ownership/approvals.

Where Daydream fits naturally: store the mapping register, evidence bundles, drill results, and corrective actions in one place so you can answer supervisory questions quickly without rebuilding context for each request.

Frequently Asked Questions

Does Article 20 require me to submit a report today?

Article 20 states that the ESAs will develop harmonised reporting content and templates. (Regulation (EU) 2022/2554, Article 20) Your near-term obligation is readiness: build the data and workflow capabilities to adopt the harmonised templates once applicable.

What is the single most important artifact to create first?

Start with a reporting readiness register that maps report fields to systems of record, owners, validation checks, and evidence locations. That register becomes your control plane for consistent outputs.

How do I handle third-party-provided facts in a harmonised report?

Define which third-party inputs you will accept as authoritative (e.g., incident times, affected components), how you will confirm them, and where you retain the confirmation. Avoid relying on informal emails; store a controlled record tied to the incident.

Can I keep using spreadsheets for reporting?

You can, but treat spreadsheets as controlled artifacts: versioned, access-restricted, with documented data sources and validation checks. The bigger risk is uncontrolled copying and manual edits without traceability.

What will auditors test if the ESA template isn’t finalized yet?

They will test whether you have governance, ownership, systems of record identified, a draft mapping approach, and evidence you can generate a consistent report package under time pressure. Drills plus corrective action closure are strong proof.

Who should sign off before submission?

Set a minimum of compliance and legal review for regulatory submissions, plus an accountable operational owner for factual accuracy (often incident management for incident reports). Document the sign-off chain and retain approval evidence.

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream