DSS04: Managed Continuity

The DSS04: Managed Continuity requirement in COBIT expects you to run an operational continuity program for IT services: define continuity objectives, plan for disruptions, test those plans, and keep evidence that the program works. To operationalize it quickly, assign accountable owners, map critical services to RTO/RPO targets, implement tested recovery procedures, and maintain audit-ready artifacts. 1

Key takeaways:

  • Treat DSS04 as an operating requirement: plans plus proof of testing, training, and remediation.
  • Start from “critical services” and dependencies, then set recovery objectives and build runbooks.
  • Your fastest audit win is a clean evidence set mapped to DSS04 (ownership, procedures, test results, fixes). 2

DSS04: Managed Continuity is the COBIT objective that examiners, auditors, and internal stakeholders use to test whether your IT organization can keep delivering critical services through disruption. In practice, teams fail DSS04 less from lack of tooling and more from unclear accountability, incomplete scope (apps but not upstream dependencies), and weak evidence that plans were tested and improved.

This page is written for a Compliance Officer, CCO, or GRC lead who needs to translate the DSS04: Managed Continuity requirement into a concrete set of tasks, owners, and deliverables. You’ll find: a plain-English interpretation, applicability guidance, step-by-step implementation actions, and a tight list of artifacts to retain for audit readiness. The emphasis is operational: what to implement, how to run it, and how to show it works.

COBIT is a framework, so your goal is defensible design and repeatable operation, not perfect paperwork. The most effective approach is to define continuity outcomes for business-critical services, then build the minimum set of plans, runbooks, tests, and corrective actions that demonstrate control of continuity risk. 1

DSS04: Managed Continuity (plain-English requirement)

Plain-English interpretation: You must identify which IT-enabled services are critical, define how quickly they must be restored after disruption (and how much data loss is acceptable), maintain workable recovery procedures, test them, and fix gaps. You also need clear ownership and retained evidence that continuity activities happen on schedule and drive improvements. 1

What DSS04 is trying to prevent:

  • “We had a plan” that was never tested.
  • Recovery steps that exist only in one engineer’s head.
  • Critical dependencies (identity, DNS, network, third parties) missing from recovery design.
  • Repeated outages without post-test or post-incident corrective actions.

Regulatory text

Framework excerpt: “COBIT 2019 objective DSS04 implementation expectation.” 1

Operator meaning: COBIT expects you to implement the DSS04 objective as a managed process, not a one-time document. For an operator, that means:

  • A defined scope (services and supporting assets).
  • Clear roles and decision rights (who declares a disaster, who restores what, who communicates).
  • Documented continuity and recovery procedures.
  • Recurring exercises and tests with recorded outcomes.
  • A feedback loop that updates plans, architecture, and procedures based on test results and incidents. 2

Who it applies to (entity and operational context)

Applies to: Enterprise IT organizations adopting COBIT, including centralized IT, product engineering groups, and shared services that deliver production systems. 1

Operational contexts where DSS04 becomes exam-critical:

  • Revenue-impacting customer-facing platforms and core internal systems (finance, HRIS, ERP).
  • Cloud-first environments with complex identity/network dependencies.
  • Organizations with outsourced or third-party provided critical components (hosting, payments, managed security, SaaS platforms).
  • Regulated operations where downtime can create compliance, contractual, or safety impact.

Scope note you should make explicit in your program charter:

  • Include people/process dependencies (on-call coverage, access to break-glass accounts).
  • Include technology dependencies (identity provider, secrets management, CI/CD, observability).
  • Include third parties that are prerequisites to recovery (telecom, cloud provider, critical SaaS). DSS04 is internal continuity, but your continuity design fails if third-party dependencies are ignored.

What you actually need to do (step-by-step)

Use this as an implementation runbook for the DSS04: Managed Continuity requirement.

1) Assign accountability and governance

  1. Name an executive sponsor (business or technology).
  2. Name a continuity owner for the DSS04 process (often IT Risk, SRE leadership, or BCM leader).
  3. Assign service owners for each critical service, with documented responsibilities for recovery readiness.
  4. Define decision points: who declares an incident as disaster-level, who approves failover, who communicates externally.

Minimum output: Continuity governance RACI and service ownership list mapped to DSS04. 1
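
The ownership model above is easier to keep audit-ready as structured data, so gaps surface automatically instead of during an exam. A minimal sketch in Python (service names, roles, and field names are illustrative, not a prescribed schema):

```python
# Sketch: a service ownership register plus a check that every critical
# service has an accountable owner and a failover approver on record.
# All names and fields here are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ServiceOwnership:
    service: str
    tier: str               # "critical" / "important" / "non-critical"
    owner: str              # accountable for recovery readiness
    failover_approver: str  # who approves failover for this service

register = [
    ServiceOwnership("payments-api", "critical", "j.doe", "vp-eng"),
    ServiceOwnership("hris", "critical", "", "cio"),            # missing owner
    ServiceOwnership("internal-wiki", "non-critical", "", ""),  # out of scope
]

def ownership_gaps(entries):
    """Critical services missing an accountable owner or failover approver."""
    return [e.service for e in entries
            if e.tier == "critical" and not (e.owner and e.failover_approver)]

print(ownership_gaps(register))  # flags "hris"
```

Running the check as part of a periodic attestation keeps the RACI current rather than point-in-time.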

2) Define “critical services” and map dependencies

  1. Build a service inventory (start with what the business depends on).
  2. Identify tiering (critical, important, non-critical) based on business impact.
  3. For each critical service, document dependencies:
    • Upstream/downstream apps
    • Data stores
    • Identity and access services
    • Network/DNS/CDN
    • Monitoring/alerting
    • Third parties

Practical tip: A dependency map that is “good enough” beats an exhaustive CMDB that never stays current. Put owners on the hook for quarterly validation as part of change management.
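
A lightweight dependency map can still be checked mechanically, so shared platform dependencies (identity, DNS) are never silently dropped from a critical service’s recovery design. A sketch, assuming hypothetical service names:

```python
# Sketch: adjacency lists per critical service, plus a check that required
# platform dependencies appear in every critical service's map.
# Service names are illustrative.
deps = {
    "payments-api": ["postgres-primary", "identity-provider", "dns", "payments-saas"],
    "orders-api":   ["postgres-primary", "identity-provider"],  # dns omitted
}

REQUIRED_PLATFORM_DEPS = {"identity-provider", "dns"}

def missing_platform_deps(dep_map):
    """Services whose dependency list omits a required platform dependency."""
    return {svc: sorted(REQUIRED_PLATFORM_DEPS - set(d))
            for svc, d in dep_map.items()
            if REQUIRED_PLATFORM_DEPS - set(d)}

print(missing_platform_deps(deps))  # flags orders-api for the missing dns entry
```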

3) Set recovery objectives (RTO/RPO) and continuity requirements

  1. For each critical service, define:
    • RTO (time to restore service)
    • RPO (maximum tolerable data loss)
  2. Document assumptions (e.g., “cloud region unavailable,” “ransomware event,” “loss of identity provider”).
  3. Align objectives with architecture:
    • Backups and restore time
    • Replication strategy
    • Failover design
    • Staffing/on-call coverage

Minimum output: Approved RTO/RPO register per critical service, with owner sign-off and review cadence. 2
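
The register becomes more useful when each objective is validated against measured restore times from tests, which is also the comparison auditors ask for. A sketch with made-up entries and field names:

```python
# Sketch: an RTO/RPO register entry and a check that the most recent
# measured restore time actually supports the approved RTO.
# Services, numbers, and approvers are hypothetical.
from dataclasses import dataclass

@dataclass
class RecoveryObjective:
    service: str
    rto_minutes: int           # target time to restore service
    rpo_minutes: int           # maximum tolerable data loss
    last_restore_minutes: int  # measured in the most recent restore test
    approved_by: str           # accountable owner sign-off

register = [
    RecoveryObjective("payments-api", 120, 15, 95, "j.doe"),
    RecoveryObjective("orders-api", 60, 5, 140, "a.lee"),  # restore slower than RTO
]

def unsupported_objectives(entries):
    """Services whose measured restore time exceeds the approved RTO."""
    return [e.service for e in entries if e.last_restore_minutes > e.rto_minutes]

print(unsupported_objectives(register))  # flags "orders-api"
```

An objective the architecture cannot meet is a finding waiting to happen; this check turns it into a corrective action instead.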

4) Build continuity and recovery documentation that engineers can use

Create two layers of documentation:

  • Plan-level documents (management):

    • IT Continuity/Disaster Recovery policy and standards
    • Service continuity plans 1
    • Communication plan (internal/external stakeholders)
  • Execution-level documents (operators):

    • Step-by-step recovery runbooks
    • “Break glass” access procedures
    • Restore procedures for data stores
    • Manual workarounds for business processes if tech is down

Quality bar: If a qualified on-call engineer who didn’t build the system can follow the runbook under stress, it’s usable.

5) Test, exercise, and record results

  1. Define a testing schedule by tier (tabletop for all critical services; technical failover/restore for the highest tier).
  2. Run exercises that reflect real failure modes (region outage, credential compromise, corrupted backups).
  3. Capture evidence:
    • Start/end times
    • Steps performed
    • What failed and why
    • RTO/RPO achieved vs. target
  4. Create corrective actions with owners and due dates; track to closure.

Audit reality: Test evidence plus closed corrective actions often matters more than the elegance of the plan. 1
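
The evidence capture in step 3 reduces to a simple comparison of achieved versus target recovery time. A sketch with hypothetical timestamps and field names:

```python
# Sketch: deriving achieved RTO from exercise start/end times and deciding
# whether a corrective action is required. All values are illustrative.
from datetime import datetime

record = {
    "service": "payments-api",
    "started": datetime(2024, 5, 1, 9, 0),
    "ended": datetime(2024, 5, 1, 11, 30),
    "rto_target_minutes": 120,
    "steps_performed": ["declare", "fail over DB", "redirect traffic", "verify"],
}

achieved_minutes = int((record["ended"] - record["started"]).total_seconds() // 60)
needs_corrective_action = achieved_minutes > record["rto_target_minutes"]
# 150 minutes achieved vs a 120-minute target: open a tracked corrective action
```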

6) Integrate continuity into change, incident, and third-party processes

To keep DSS04 “managed,” tie continuity to everyday workflows:

  • Change management: major changes require DR impact review and runbook updates.
  • Incident management: post-incident reviews feed continuity improvements.
  • Third-party risk: critical third parties must have continuity commitments (SLAs, DR posture, notification) and you should record how your service recovers when they fail.

7) Build an evidence map (your fastest path to exam readiness)

Create a DSS04 evidence index that points to:

  • Policies and standards
  • Service inventory and tiering
  • RTO/RPO approvals
  • Runbooks
  • Test plans and results
  • Corrective action tracker
  • Training/awareness records for responders
  • Review/attestation logs for periodic updates

This directly supports the recommended control: document control ownership, procedures, and evidence mapped to DSS04. 2
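
An evidence index stays honest if each category lists its artifacts and empty categories are flagged before an exam does it for you. A sketch with hypothetical repository paths:

```python
# Sketch: a DSS04 evidence index mapping categories to artifact locations,
# with a check for categories that have nothing on file.
# Category names and paths are illustrative, not a mandated taxonomy.
REQUIRED = ["policy", "service_inventory", "rto_rpo_approvals", "runbooks",
            "test_results", "corrective_actions", "training_records", "review_logs"]

index = {
    "policy": ["grc/continuity-policy-v3.pdf"],
    "service_inventory": ["grc/service-tiering.xlsx"],
    "rto_rpo_approvals": [],                        # gap
    "runbooks": ["runbooks/payments-restore.md"],
    "test_results": ["tests/2024-q2-tabletop.md"],
    "corrective_actions": [],                       # gap
    "training_records": ["training/oncall-2024.csv"],
    "review_logs": ["reviews/attestations-2024.csv"],
}

def evidence_gaps(idx):
    """Required evidence categories with no artifacts on file."""
    return [c for c in REQUIRED if not idx.get(c)]

print(evidence_gaps(index))  # flags the two empty categories
```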

Required evidence and artifacts to retain

Keep these artifacts in a controlled repository with version history and ownership:

Governance

  • Continuity/DR policy and standard(s)
  • DSS04 RACI and role descriptions
  • Service owner register

Scope and objectives

  • Critical service inventory and tiering rationale
  • Dependency maps (even if lightweight)
  • RTO/RPO register with approvals

Operational readiness

  • Runbooks per critical service
  • Backup and restore procedures with locations of backups and access requirements
  • Break-glass access procedure and access review records

Testing and improvement

  • Exercise calendar and test plans
  • Test results (tabletop + technical) and evidence (tickets, logs, screenshots where appropriate)
  • Corrective action tracker and closure evidence
  • Post-incident reviews tied back to continuity fixes

Common exam/audit questions and hangups

Expect these lines of inquiry:

  • “Show me your list of critical services and who owns them.”
  • “Where are RTO/RPO targets documented and approved?”
  • “Prove you tested restores and failover. What were the results?”
  • “Show corrective actions from tests and whether they were completed.”
  • “How do you ensure continuity docs stay current after changes?”
  • “How do third-party outages affect your recovery design?”

Hangups that stall audits:

  • RTO/RPO exist but are not approved by accountable owners.
  • Tests are informal and not recorded, or records don’t show outcomes vs. targets.
  • Plans don’t cover identity, DNS, secrets, or admin access.
  • Corrective actions exist but have no closure evidence.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Plans written by GRC only.
    Fix: Make service owners and SRE/ops co-author runbooks; GRC sets the standard and evidence model.

  2. Mistake: “We back up” equals “we can restore.”
    Fix: Require periodic restore tests for critical data stores, with documented restoration time and data integrity checks.

  3. Mistake: Testing without remediation.
    Fix: No test is “complete” until corrective actions are assigned, tracked, and closed with evidence.

  4. Mistake: Ignoring third-party failure modes.
    Fix: Add third-party dependency scenarios to tabletops; document manual workarounds and escalation paths.

  5. Mistake: Evidence scattered across tools.
    Fix: Maintain a DSS04 evidence index. Daydream can function as the control-to-evidence system of record so audits don’t turn into a scavenger hunt. 1

Risk implications (why operators should care)

If DSS04 is weak, disruptions become extended outages, data loss events, and failed customer commitments. The compliance risk is secondary but real: auditors can conclude the continuity control is not designed effectively or not operating consistently. That drives negative findings, remediation commitments, and higher ongoing scrutiny. 1

Practical 30/60/90-day execution plan

Treat this as a sequence you can run as fast as your environment allows; the exact day boundaries matter less than the order and the outputs of each phase.

First phase (immediate): establish control ownership and scope

  • Assign DSS04 owner, executive sponsor, and service owners.
  • Create critical service inventory and initial tiering.
  • Stand up the DSS04 evidence index and repository structure.
  • Draft continuity policy/standard and define required artifacts.

Second phase (near-term): set objectives and create runbooks

  • Complete dependency mapping for critical services.
  • Define and approve RTO/RPO per critical service.
  • Write or normalize runbooks (restore, failover, break-glass access).
  • Integrate DR review into change management for in-scope systems.

Third phase (operationalize): test, remediate, and report

  • Run tabletops across all critical services; run technical recovery tests for highest tier.
  • Track corrective actions to closure; update runbooks and architecture where needed.
  • Establish recurring review cadence (service owner attestations, test calendar, evidence refresh).
  • Produce a continuity status report for leadership: test completion, key gaps, remediation progress.

Frequently Asked Questions

How do I define “critical services” fast without boiling the ocean?

Start from business processes that stop revenue, safety, or regulatory obligations, then map to the systems that enable them. Confirm tiering with business owners and make service owners accountable for maintaining it.

Do I need a separate DR plan for every application?

You need recovery procedures for each critical service, but you can standardize the format and reuse platform runbooks (identity, network, databases) where shared. Auditors care that the steps are actionable and aligned to the service’s RTO/RPO. 1

What evidence is most persuasive in an audit for DSS04?

Tested outcomes plus remediation closure: test records that show targets vs results, and tickets proving gaps were fixed. Pair that with an evidence map tied to DSS04 so you can produce artifacts quickly. 2

How should I handle continuity for third-party dependencies?

Document the dependency, the failure mode, and your recovery strategy (workaround, alternate provider, degraded mode, or contractual escalation). Keep third-party contact paths and notification expectations in the runbook so responders do not improvise.

Can I meet DSS04 in a cloud-native environment without a secondary data center?

Yes, if your architecture supports your recovery objectives through multi-region design, backups, and proven restore/failover procedures. DSS04 evaluates outcomes and management discipline, not a specific hosting pattern. 1

Where does Daydream fit if I already have DR tooling?

DR tooling restores systems; DSS04 also requires governance, ownership, testing records, and evidence mapping. Daydream helps you maintain the control narrative and artifacts in one place so audits and internal reviews are repeatable. 1

Footnotes

  1. ISACA COBIT overview

  2. OSA COBIT 2019 objective mapping


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream