DSS02: Managed Service Requests and Incidents

The DSS02 (Managed Service Requests and Incidents) requirement in COBIT 2019 expects you to run a controlled, measurable service desk process that logs, categorizes, prioritizes, resolves, and learns from both user requests and incidents. Operationalize it by standardizing intake, defining SLAs and escalation, enforcing documentation in a single system of record, and keeping audit-ready evidence of performance and root-cause actions. 1

Key takeaways:

  • One workflow must cover both service requests and incidents, with clear categorization, priority, and escalation paths.
  • Evidence matters as much as process design: tickets, approvals, timelines, SLAs, and post-incident actions must be retrievable.
  • The fastest path to maturity is a single system of record plus disciplined operational hygiene (queue management, metrics, and problem linkage).

DSS02 sits in the “Deliver, Service and Support” domain of COBIT 2019 and focuses on the mechanics that keep technology services reliable: how users ask for help, how disruptions are handled, and how the organization proves control over both. For a CCO or GRC lead, DSS02 is less about tooling choices and more about establishing a defensible operating model: defined ownership, standardized workflows, measurable service levels, and records that withstand scrutiny.

If your organization already runs a service desk, DSS02 typically fails for two reasons: (1) inconsistent execution (teams work tickets “their way”), and (2) weak evidence (tickets lack timestamps, classification, approvals, or resolution notes). If you outsource IT operations, DSS02 becomes a third-party control problem: you still need visibility into the provider’s queue, SLAs, and incident learning loop, and you need to retain evidence even when the third party runs the tool.

This page translates the DSS02 requirement into requirement-level implementation steps you can assign, track, and test, while keeping the narrative mapped to COBIT’s objective expectation. 1

Regulatory text

Framework excerpt (provided): “COBIT 2019 objective DSS02 implementation expectation.” 2

Operator interpretation: DSS02 expects you to implement and operate a managed process for service requests (standard user asks such as access, hardware, how-to) and incidents (unplanned interruptions or degradation) with consistent handling from intake through closure, plus metrics and continuous improvement. You must be able to show that requests and incidents are recorded, prioritized, assigned, resolved within defined targets, escalated when needed, and reviewed for trends and root cause. 1

Plain-English interpretation (what DSS02 requires you to be able to prove)

You can prove, using records from a system of record, that:

  1. People know where to submit requests and report incidents, and they do.
  2. Each ticket is categorized and prioritized consistently.
  3. Ownership and escalation are defined and followed.
  4. Resolutions are documented and validated.
  5. The service desk measures performance and takes corrective action when service targets are missed.
  6. High-impact incidents trigger structured communication, post-incident review, and follow-up work. 1

Who it applies to

Entity scope

  • Any enterprise IT organization using COBIT 2019 as a governance and control framework, including regulated businesses mapping COBIT to internal controls. 2

Operational scope (where DSS02 “lands”)

  • Internal IT service desk/help desk operations.
  • SRE/IT operations incident response.
  • Shared services teams (IAM, endpoint, network, ERP support).
  • Third-party managed services (MSP/MSSP), outsourced service desk, cloud operations support.
  • Business units running shadow IT service desks (often the hidden gap).

Ticket types in scope

  • Service requests: access requests, standard changes (if you route them here), password resets, device requests, onboarding tasks.
  • Incidents: outages, degraded performance, security-relevant disruptions (coordinate with security incident response if separate), failed jobs, user-impacting errors. 3

What you actually need to do (step-by-step)

Use this as an implementation checklist you can assign to IT Ops and test as GRC.

1) Define ownership, scope, and a single system of record

  • Assign a process owner for DSS02 (often Head of IT Operations or Service Management).
  • Define what counts as a service request vs incident in your environment (short definitions plus examples).
  • Choose the authoritative ticketing system(s). If multiple tools exist, define which system is the “system of record” and how others synchronize or feed it.
  • For third parties running their own tools, contract for (a) reporting access, (b) export rights, and (c) evidence retention requirements. 2

Deliverable: DSS02 process standard (1–3 pages) + RACI.

2) Standardize intake and minimum ticket data

Configure forms and required fields so tickets are auditable without heroics:

  • Requester identity and contact method
  • Service/CI (or application) affected
  • Category/subcategory
  • Priority/impact/urgency (define rules, not vibes)
  • Timestamps (opened/assigned/responded/resolved/closed)
  • Assignment group and assignee
  • Resolution notes and closure code
  • Approvals for request types that require authorization (access, privileged actions)

Control intent: reduce “free text” tickets that cannot be measured or defended.
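
A minimal sketch of how required-field enforcement could be checked against exported ticket data. Field names below are illustrative assumptions, not COBIT-mandated values; adapt them to your ticketing tool's schema:

```python
# Hypothetical sketch: validating that a ticket carries the minimum
# auditable fields before it is accepted into the system of record.
# Field names are illustrative, not prescribed by COBIT.
REQUIRED_FIELDS = [
    "requester", "affected_service", "category",
    "priority", "opened_at", "assignment_group",
]

def missing_fields(ticket: dict) -> list:
    """Return required fields that are absent or blank."""
    return [f for f in REQUIRED_FIELDS if not ticket.get(f)]

ticket = {
    "requester": "jdoe",
    "affected_service": "ERP",
    "category": "access_request",
    "priority": "",  # blank -> fails validation
    "opened_at": "2024-05-01T09:14:00Z",
}

print(missing_fields(ticket))  # ['priority', 'assignment_group']
```

The same check can run as a scheduled report over exported tickets, turning "required fields" from a form setting into a measurable control.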

3) Build a priority and escalation model that matches operations

Define:

  • Priority matrix (impact × urgency) with examples.
  • Target response and resolution times per priority (these can be internal goals or SLA/OLA commitments).
  • Escalation rules: time-based (breach risk), impact-based (executive visibility), and skill-based (tiering).
  • Major incident criteria and the on-call/bridge process.

Practical tip: Write escalation rules as “if/then” statements a queue manager can enforce.
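
As an illustration, a priority matrix and time-based escalation rules can be written as enforceable logic. The priority labels, targets, and thresholds below are assumptions for the sketch, not mandated values:

```python
# Hypothetical sketch: a 3x3 impact x urgency matrix plus time-based
# "if/then" escalation rules a queue manager (or script) can enforce.
# All labels and thresholds are illustrative.
PRIORITY = {  # (impact, urgency) -> priority
    ("high", "high"): "P1", ("high", "medium"): "P2", ("high", "low"): "P3",
    ("medium", "high"): "P2", ("medium", "medium"): "P3", ("medium", "low"): "P4",
    ("low", "high"): "P3", ("low", "medium"): "P4", ("low", "low"): "P4",
}

RESOLUTION_TARGET_HOURS = {"P1": 4, "P2": 8, "P3": 24, "P4": 72}

def escalation_actions(priority: str, hours_open: float) -> list:
    """Time-based escalation, written as if/then statements."""
    target = RESOLUTION_TARGET_HOURS[priority]
    actions = []
    if hours_open >= target:            # target breached -> notify owner
        actions.append("notify_process_owner")
    elif hours_open >= 0.75 * target:   # breach risk -> alert queue manager
        actions.append("alert_queue_manager")
    if priority == "P1":                # major incident visibility
        actions.append("executive_status_update")
    return actions

print(PRIORITY[("high", "medium")])    # P2
print(escalation_actions("P2", 7))     # ['alert_queue_manager']
```

Codifying the matrix this way also gives you a single place to test and evidence the rules, rather than relying on each analyst's judgment.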

4) Implement consistent workflows for requests vs incidents

Minimum workflow states you should enforce in the tool:

  • New → Triage → Assigned → In progress → Pending (vendor/user) → Resolved → Closed

For service requests, add:

  • Approval (where needed)
  • Fulfillment tasks and task owners
  • Verification that access or service was delivered (closure confirmation)

For incidents, add:

  • Detection source (user report, monitoring, third party)
  • Work notes with troubleshooting timeline
  • Linkage to problem record or known error (if you run problem management)
  • Customer/user communication notes for user-impacting incidents
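
The workflow states above can be enforced as an allowed-transition map, so tickets cannot skip triage or jump straight to Closed. This is a sketch of the idea; state names mirror the list above, and the reopen path is an assumption:

```python
# Hypothetical sketch: minimum workflow states as an allowed-transition
# map. Most ticketing tools let you configure equivalent restrictions.
ALLOWED = {
    "New": {"Triage"},
    "Triage": {"Assigned"},
    "Assigned": {"In progress"},
    "In progress": {"Pending", "Resolved"},
    "Pending": {"In progress"},
    "Resolved": {"Closed", "In progress"},  # reopen path (assumed)
    "Closed": set(),
}

def can_transition(current: str, target: str) -> bool:
    """True only if the target state is reachable from the current one."""
    return target in ALLOWED.get(current, set())

print(can_transition("In progress", "Resolved"))  # True
print(can_transition("New", "Closed"))            # False -- no skipping
```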

5) Integrate with related controls (so DSS02 doesn’t break elsewhere)

DSS02 often fails because adjacent processes are disconnected:

  • Change management: ensure incidents that require changes route into change control (and that emergency changes are documented).
  • Access management: ensure access requests require approvals and are logged.
  • Monitoring: alerts should generate incidents or at least be correlated to them.
  • Security incident response: define when an IT incident becomes a security incident and how handoffs occur. 2

6) Operational governance: queue hygiene, metrics, and review cadence

Run the process like an operational control, not a policy:

  • Daily/regular queue review: aging tickets, breached targets, unassigned work.
  • Metrics dashboard: volume by category, SLA attainment, backlog aging, reopen rate, top recurring issues.
  • Trend review: identify repeat incidents and create problem records or backlog items.
  • Corrective actions: document what changed (KB articles, automation, monitoring, capacity fixes).

Evidence goal: show a feedback loop, not just ticket closure.
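
The core dashboard metrics can be computed directly from a ticket export. A minimal sketch, assuming illustrative field names and targets:

```python
# Hypothetical sketch: SLA attainment, reopen rate, and backlog aging
# computed from exported ticket data. Field names, targets, and the
# 7-day aging threshold are illustrative assumptions.
tickets = [
    {"id": 1, "priority": "P1", "hours_to_resolve": 3,  "reopened": False, "open": False},
    {"id": 2, "priority": "P2", "hours_to_resolve": 12, "reopened": True,  "open": False},
    {"id": 3, "priority": "P3", "hours_to_resolve": None, "reopened": False, "open": True, "age_days": 9},
]

TARGETS = {"P1": 4, "P2": 8, "P3": 24}  # resolution targets in hours

resolved = [t for t in tickets if not t["open"]]
met = [t for t in resolved if t["hours_to_resolve"] <= TARGETS[t["priority"]]]
sla_attainment = len(met) / len(resolved)
reopen_rate = sum(t["reopened"] for t in resolved) / len(resolved)
aged_backlog = [t["id"] for t in tickets if t["open"] and t.get("age_days", 0) > 7]

print(f"SLA attainment: {sla_attainment:.0%}")  # 50%
print(f"Reopen rate: {reopen_rate:.0%}")        # 50%
print(f"Aging backlog: {aged_backlog}")         # [3]
```

Regenerating these numbers from raw exports each period, rather than hand-built spreadsheets, is itself evidence that the metric pipeline is repeatable.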

7) Third-party alignment (if any part is outsourced)

If a third party handles requests/incidents:

  • Contractually define SLAs, reporting, escalation, and notification timelines.
  • Require access to ticket data or scheduled exports with required fields.
  • Test the escalation path (tabletop or operational drill) and keep the record.
  • Ensure you can retrieve major incident reports and post-incident actions.

Required evidence and artifacts to retain

Keep evidence that proves both design and operation. A tight list:

Governance and design artifacts

  • DSS02 policy/standard and process flow
  • RACI and on-call/escalation roster documentation
  • Priority matrix and SLA/OLA definitions
  • Ticket field definitions and required-field configuration screenshots/exports
  • Third-party contracts/SOW language covering incident/request handling and reporting

Operational evidence (sample-based is fine)

  • Ticket samples for each priority and category, showing timestamps, assignment, work notes, and closure
  • Major incident records: bridge notes, comms, timeline, decision log
  • SLA reports and backlog/aging reports
  • Post-incident reviews (PIRs) and corrective action tracking to closure
  • Knowledge base entries created/updated from recurring issues

Audit-ready mapping artifact

  • A DSS02 control-to-evidence matrix (control statement → system report → ticket samples → review meeting notes). This is the fastest way to prevent scramble audits and is explicitly aligned to “document control ownership, procedures, and evidence mapped to DSS02.” 1
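
Structurally, the matrix is just a list of records with a completeness check, whatever tool holds it. A sketch with placeholder entries (the control statement, file names, and ticket IDs are illustrative):

```python
# Hypothetical sketch: a control-to-evidence matrix as structured data,
# with a completeness check to flag gaps before the audit. All entries
# are illustrative placeholders.
matrix = [
    {
        "control": "Tickets are prioritized per the documented matrix",
        "system_report": "monthly_priority_distribution.csv",
        "samples": ["INC-1042", "REQ-2210"],
        "review_record": "queue-review-minutes-2024-05",
        "owner": "Head of IT Operations",
    },
]

def unmapped(matrix: list) -> list:
    """Controls missing any evidence element."""
    required = ("system_report", "samples", "review_record", "owner")
    return [m["control"] for m in matrix if not all(m.get(k) for k in required)]

print(unmapped(matrix))  # [] when every control has complete evidence
```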

Common exam/audit questions and hangups

Auditors and assessors tend to press on the same failure points:

  1. “Show me how you distinguish incidents from service requests.”
    Hangup: inconsistent classification makes metrics meaningless.

  2. “How do you set priority, and who can override it?”
    Hangup: priority inflation (everything is urgent) or ad hoc changes without notes.

  3. “Prove tickets are handled within defined targets.”
    Hangup: you have SLAs on paper but cannot produce a reliable report.

  4. “Walk me through a major incident end-to-end.”
    Hangup: no timeline, no comms record, no PIR, or actions not tracked to closure.

  5. “How do you oversee your managed service provider’s incident process?”
    Hangup: contracts lack evidence rights; you only get summary emails.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: separate informal channels (chat, email) bypass the ticket system.
    Why it fails DSS02: no system of record, no metrics, no audit trail.
    Fix: enforce “no ticket, no work” for non-emergency items; auto-create tickets from email/chat where possible.
  • Mistake: too many categories/priority levels.
    Why it fails DSS02: users and analysts misclassify.
    Fix: start with a small taxonomy; review and refine during trend reviews.
  • Mistake: “Resolved” without verification.
    Why it fails DSS02: reopens increase; auditors question closure quality.
    Fix: add closure confirmation for user-facing requests and high-impact incidents.
  • Mistake: outsourcer runs everything, you keep nothing.
    Why it fails DSS02: you can’t demonstrate control operation.
    Fix: require export/API access to ticket data and PIRs; store in your GRC repository.
  • Mistake: metrics exist but no corrective action.
    Why it fails DSS02: looks performative.
    Fix: document decisions and follow-up work items tied to trends and PIRs.

Enforcement context and risk implications

COBIT is a framework, not a regulator, so you should treat DSS02 as an internal control expectation and a common mapping point for broader requirements (availability, operational resilience, and service management discipline). The operational risk is direct: poor request handling drives unauthorized access and weak provisioning controls, and weak incident handling increases downtime duration, customer impact, and incomplete root-cause remediation. 2

Practical 30/60/90-day execution plan

This plan prioritizes speed to operationalization, so it focuses on evidence and repeatability first. Timelines are presented as phases you can adapt to your delivery capacity.

First 30 days (stabilize and make it auditable)

  • Name DSS02 owner; publish a one-page scope statement and definitions.
  • Identify the system(s) of record; stop uncontrolled intake paths.
  • Implement required ticket fields and a basic priority matrix.
  • Produce your first repeatable evidence pack: ticket samples + a basic SLA/backlog report.
  • If outsourced, inventory contracts and confirm evidence access/export rights.

Days 31–60 (make it consistent and measurable)

  • Implement escalation rules and major incident criteria; document the bridge and comms workflow.
  • Formalize request approvals for sensitive request types (access, privileged actions).
  • Establish a queue governance routine (operations review notes become evidence).
  • Stand up trend reporting and a lightweight problem linkage approach.

Days 61–90 (close the loop and harden third-party oversight)

  • Require PIRs for major incidents; track corrective actions to closure in a system you control.
  • Tune categories and priority rules based on misclassification patterns.
  • Expand reporting: SLA attainment by service, top drivers, reopen rate, aging buckets.
  • If a third party is involved, run an escalation test and retain the record.
  • Build a DSS02 control-to-evidence matrix in your GRC tool (Daydream fits naturally here as the place to store mappings, owners, and recurring evidence requests without rebuilding spreadsheets each audit cycle). 2

Frequently Asked Questions

Do we need a separate process for “service requests” and “incidents” to meet DSS02?

You need consistent handling for both, but it can be one service management process with two workflows. The key is clear definitions, consistent categorization, and evidence that each type follows the right approvals, prioritization, and closure rules. 2

What’s the minimum evidence an auditor will accept for DSS02?

A documented process owner and workflow, a priority/SLA model, and ticket samples that show timestamps, assignment, work notes, and closure. Add SLA/backlog reports and at least one major incident record with follow-up actions if you want to avoid the “paper process” finding. 3

We outsource the help desk. Are we still accountable for DSS02?

Yes. Outsourcing changes how you operate the control, not whether you need it. Contract for ticket data access, SLA reporting, escalation visibility, and retention of incident documentation so you can produce evidence on demand. 2

Can we use chat tools (Teams/Slack) for incident response and still be compliant?

Yes, if chat is not the system of record. Create or link an incident ticket that captures the timeline, decisions, and closure details; store key comms artifacts or summaries with the ticket so the audit trail survives chat retention limits.

How do we handle “security incidents” under DSS02?

Define a handoff rule: IT incidents with security indicators get escalated to the security incident response process, but the service-impacting record still needs a ticket with operational timestamps and resolution notes. Document the linkage between the IT incident and the security case for traceability. 2

What should we do if teams keep misclassifying tickets and breaking metrics?

Reduce taxonomy complexity, enforce required fields, and run periodic calibration with examples (“this is P1; this is not”). Track misclassification as a quality issue in queue reviews, and document the corrective actions so you can show process improvement. 3

Footnotes

  1. ISACA COBIT overview; OSA COBIT 2019 objective mapping

  2. ISACA COBIT overview

  3. OSA COBIT 2019 objective mapping

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream