RS.MA-03: Incidents are categorized and prioritized
RS.MA-03 requires you to run every security incident through a defined, repeatable triage method that assigns a category and a priority so responders know what to do first and leadership gets consistent reporting. Operationalize it by publishing a severity matrix, linking it to response SLAs and escalation paths, and retaining evidence from real tickets. 1
Key takeaways:
- You need a documented categorization taxonomy and a priority/severity model that drives action, not a subjective “high/medium/low.”
- Your incident tooling must enforce the model (required fields, workflow, escalations) and produce audit-ready records.
- Evidence is ticket-level: timestamps, classification rationale, escalation, and post-incident review updates.
RS.MA-03 (“Incidents are categorized and prioritized”) is a maturity requirement that examiners and auditors test through outcomes: do your teams consistently triage incidents, pick the right response path, and focus effort where business impact is highest? The control fails most often for one of two reasons. First, organizations define categories but don’t connect them to decisions (who gets paged, how fast containment happens, what gets reported, when legal or privacy is engaged). Second, the SOC or IT team “does triage,” but it lives in tribal knowledge, chat logs, or ad hoc judgment without durable artifacts.
Treat RS.MA-03 as an operational contract between detection and response. Your categorization answers “what is this?” (malware, BEC, data loss, DDoS, third-party compromise). Your prioritization answers “how urgent and how big?” (impact, scope, confidence, regulatory exposure). Then you wire both into a workflow with required fields, decision support, and escalation triggers. This page gives you a requirement-level build plan, the evidence to retain, and the audit questions you should pre-answer, aligned to NIST CSF 2.0. 1
Regulatory text
Requirement: “Incidents are categorized and prioritized.” 2
Operator meaning (what you must do):
- Define what “incident categories” are in your environment (a taxonomy).
- Define how “priority” is assigned (a severity model) using consistent criteria.
- Apply both to every incident record in a repeatable way, so response actions, escalation, and reporting follow the assigned priority. 1
Plain-English interpretation
You need a triage system that reliably answers two questions for every incident:
- Category: What type of incident is this (e.g., ransomware, account compromise, data exposure, third-party breach, insider misuse)?
- Priority: How fast and how aggressively do we respond based on business impact and risk?
A compliant implementation produces the same classification regardless of who is on-call. It also creates a clear “if priority is X, do Y” response motion: paging, containment urgency, communications, management updates, and potential legal/privacy involvement. 1
Who it applies to
Entities: Any organization running a cybersecurity program and using NIST CSF 2.0 as a framework baseline. 1
Operational context where this shows up:
- SOC operations and incident response (IR)
- IT operations handling security events that become incidents
- Cloud and application teams (appsec, SRE) who own incident queues
- Privacy, legal, and communications functions that depend on early severity decisions
- Third-party risk management when incidents involve a third party (e.g., SaaS compromise, supplier breach)
If incidents can be opened in multiple systems (SIEM/SOAR, ITSM, case management, email), RS.MA-03 applies across all of them. You either centralize triage or enforce the same taxonomy everywhere.
What you actually need to do (step-by-step)
Step 1: Set ownership and scope (make it auditable)
- Assign a control owner for incident categorization and prioritization (often Head of IR/SOC Manager; in smaller orgs, IT Security Manager).
- Define the system of record for incidents (ITSM, SOAR, or case management) and require that all “security incidents” end up there.
- Decide what qualifies as an incident versus an event in your program documentation so triage is applied consistently. 1
Practical tip: If teams argue over “incident vs event,” auditors will see inconsistent records. Write the boundary down and enforce it in tooling.
Step 2: Build a categorization taxonomy that matches how you respond
Create a small set of categories that map to distinct playbooks. Keep it operational, not academic.
A workable taxonomy often includes:
- Unauthorized access / account compromise
- Malware / ransomware
- Phishing / social engineering
- Data loss / exposure
- Denial of service / availability
- Vulnerability exploitation
- Third-party / supply chain incident
- Insider misuse
For each category, define:
- Inclusion criteria (what must be true)
- Common indicators (what analysts look for)
- Default playbook (which response procedure applies)
- Primary owner (team accountable for containment and recovery)
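A taxonomy like this is easiest to enforce when it lives as configuration rather than prose. The sketch below shows one way to encode the per-category fields above so tooling can route incidents automatically; the category names, playbook IDs, and team names are illustrative, not prescribed by RS.MA-03.

```python
# Hypothetical machine-readable taxonomy: each category carries the
# inclusion criteria, indicators, default playbook, and owner defined above.
# Playbook IDs and team names are placeholders for your own.
TAXONOMY = {
    "account_compromise": {
        "inclusion": "Evidence of unauthorized use of a valid credential.",
        "indicators": ["impossible travel", "MFA fatigue", "new inbox rules"],
        "playbook": "PB-IR-001",
        "owner": "SOC",
    },
    "ransomware": {
        "inclusion": "Malware that encrypts data or demands payment.",
        "indicators": ["mass file renames", "ransom note dropped"],
        "playbook": "PB-IR-002",
        "owner": "IR",
    },
    "third_party": {
        "inclusion": "Incident originating in a supplier or SaaS provider.",
        "indicators": ["vendor breach notice", "anomalous API activity"],
        "playbook": "PB-IR-007",
        "owner": "TPRM",
    },
}

def route(category: str) -> dict:
    """Return the default playbook and owning team for a category."""
    if category not in TAXONOMY:
        # An unmapped category is a process gap, not a judgment call.
        raise ValueError(f"Unmapped category: {category!r}; triage SOP allows {sorted(TAXONOMY)}")
    entry = TAXONOMY[category]
    return {"playbook": entry["playbook"], "owner": entry["owner"]}
```

Keeping one source file like this, shared by every intake system, is one practical way to satisfy the "enforce the same taxonomy everywhere" requirement when triage cannot be centralized.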
Step 3: Define a priority (severity) model with decision criteria
Priority must drive action. Create a severity matrix that uses objective factors and produces an output like P1–P4 (or Sev1–Sev4). Typical criteria:
- Business impact (service down, revenue-impacting workflow, safety impact)
- Data impact (sensitive data involved, confirmed exfiltration vs suspected)
- Scope (single endpoint vs many systems; single user vs many accounts)
- Threat actor behavior (active exploitation, lateral movement, persistence)
- Confidence (confirmed incident vs under investigation)
- Regulatory/contractual exposure (e.g., customer notification obligations, critical supplier involvement)
Document the mapping:
- Priority → response SLA (acknowledge, contain, executive notify)
- Priority → escalation (who must be paged; when to involve privacy/legal)
- Priority → communications (internal comms cadence; customer comms gate)
Keep the matrix simple enough to apply fast, but specific enough to prevent “everything is high.”
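To show how objective criteria can produce a repeatable output, here is a minimal decision-logic sketch. The inputs mirror the criteria above; the thresholds and tier boundaries are illustrative assumptions you would replace with your own impact definitions.

```python
# Illustrative severity matrix as code: objective inputs in, P1-P4 out.
# Thresholds (e.g., "more than 10 systems") are examples only; tune them
# to your documented business-impact definitions.
def assign_priority(
    service_down: bool,            # business impact
    sensitive_data_confirmed: bool, # data impact
    systems_affected: int,          # scope
    active_attacker: bool,          # threat actor behavior
    confirmed: bool,                # confidence
    regulatory_exposure: bool,      # regulatory/contractual exposure
) -> str:
    # P1: confirmed incidents with critical impact or an active attacker.
    if confirmed and (service_down or sensitive_data_confirmed or active_attacker):
        return "P1"
    # P2: broad scope or regulatory exposure, even while still investigating.
    if systems_affected > 10 or regulatory_exposure:
        return "P2"
    # P3: confirmed but contained, limited-impact incidents.
    if confirmed:
        return "P3"
    # P4: suspected, low-impact; investigate and reclassify if evidence grows.
    return "P4"
```

Encoding the matrix this way (or as equivalent workflow rules in your SOAR/ITSM tool) is what prevents "everything is high": the priority follows from recorded facts, not from who is loudest.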
Step 4: Embed categorization and prioritization into the workflow (don’t rely on memory)
In your incident system of record:
- Make Category and Priority required fields.
- Require a short rationale field for Priority assignment (one or two sentences).
- Add workflow rules: if Priority is highest tier, auto-page on-call, open a bridge, notify leadership distribution, and trigger required tasks (forensic capture, legal hold steps, stakeholder updates).
If you run a SOAR tool, enforce it at case creation. If you run ITSM, enforce it via ticket templates, mandatory fields, and automation rules.
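Most SOAR and ITSM platforms enforce mandatory fields natively; where yours does not, the check is simple to implement at case creation. This sketch assumes illustrative field names (`category`, `priority`, `priority_rationale`) that you would map to your own schema.

```python
# Sketch of the required-field check a case-creation workflow should enforce.
# Field names and the priority scale are assumptions; adapt to your schema.
REQUIRED = ("category", "priority", "priority_rationale")
VALID_PRIORITIES = {"P1", "P2", "P3", "P4"}

def validate_incident(record: dict) -> list[str]:
    """Return validation errors; an empty list means the case may be created."""
    errors = [f"missing required field: {f}" for f in REQUIRED if not record.get(f)]
    if record.get("priority") and record["priority"] not in VALID_PRIORITIES:
        errors.append(f"unknown priority: {record['priority']}")
    return errors
```

Rejecting a case until these fields are populated is what turns the taxonomy and matrix from documents into audit-ready records.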
Step 5: Train and calibrate the triage team
- Run calibration sessions where analysts classify sample incidents and compare results.
- Maintain a triage quick reference (one page) with category definitions and severity criteria.
- Update training whenever playbooks or business impact definitions change.
Calibration is how you prove consistency.
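One lightweight way to quantify a calibration session is modal agreement: for each sample incident, what fraction of analysts chose the most common priority? The sketch below assumes you record each analyst's label per incident; low scores flag criteria worth tightening.

```python
from collections import Counter

# Simple calibration metric: per sample incident, the share of analysts
# who agreed with the modal (most common) priority label.
def agreement(classifications: dict[str, list[str]]) -> dict[str, float]:
    scores = {}
    for incident, labels in classifications.items():
        modal_count = Counter(labels).most_common(1)[0][1]
        scores[incident] = modal_count / len(labels)
    return scores
```

An incident where three analysts pick P1, P1, P2 scores 0.67; anything well below 1.0 across the sample is evidence the severity criteria are ambiguous, and the retained scores double as proof that calibration actually ran.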
Step 6: Measure drift and improve the model through PIRs
After significant incidents (and periodically for the queue):
- Check if the initial category and priority were correct based on what you later learned.
- Record “reclassification” events and why they occurred (missing data, ambiguous criteria, tool gaps).
- Feed updates back into the taxonomy, severity criteria, and detection content. 1
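The reclassification checks above can be scripted against a closed-ticket export. This sketch assumes hypothetical field names (`initial_priority`, `final_priority`, `reason`); the output is a drift rate plus a breakdown of why priorities changed, which feeds directly into the PIR updates.

```python
# Illustrative drift check: what share of closed incidents were reclassified
# after initial triage, and for what recorded reasons? Field names are
# assumptions about your ticket export, not a standard schema.
def reclassification_rate(tickets: list[dict]) -> tuple[float, dict]:
    changed = [t for t in tickets if t["initial_priority"] != t["final_priority"]]
    reasons: dict[str, int] = {}
    for t in changed:
        reason = t.get("reason", "undocumented")
        reasons[reason] = reasons.get(reason, 0) + 1
    rate = len(changed) / len(tickets) if tickets else 0.0
    return rate, reasons
```

A persistently high rate, or a large "undocumented" bucket, is exactly the drift signal Step 6 asks you to feed back into the taxonomy and severity criteria.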
Required evidence and artifacts to retain
Auditors will ask for “show me,” not “tell me.” Retain:
Governance documents
- Incident Response Policy and/or Standard that states incidents must be categorized and prioritized 1
- Incident taxonomy document (categories, definitions, playbook mapping)
- Severity/priority matrix with criteria and escalation rules
Operational runbooks
- Triage SOP (how an analyst assigns category/priority; required fields; reclassification rules)
- Escalation matrix (who gets notified at each priority)
System evidence (most persuasive)
- Incident tickets/cases showing:
- Category, Priority, timestamps, assignee
- Priority rationale
- Escalations/notifications executed
- Reclassification history (if applicable)
- Reports/dashboards showing distribution of incidents by category/priority over time (no stats required for compliance, but trending supports governance)
Review evidence
- Post-incident reviews that confirm or correct classification
- Change log for updates to taxonomy or severity matrix (what changed and who approved)
Daydream fit: If you need to operationalize evidence collection, Daydream can map RS.MA-03 to an owner, the governing documents, and a recurring evidence request that pulls a quarterly sample of incident records with required fields completed.
Common exam/audit questions and hangups
Expect these questions, and pre-package answers:
- “Show me your severity matrix and how you decide P1 vs P2.”
  Hangup: severity defined as “high/medium/low” with no criteria.
- “Give me a sample of incidents and prove they were categorized and prioritized consistently.”
  Hangup: incidents split across email, chat, spreadsheets, and multiple ticketing systems.
- “What happens automatically when you declare a high-priority incident?”
  Hangup: escalation is informal; notification lists are outdated.
- “How do you handle third-party-caused incidents?”
  Hangup: no category for third-party incidents; unclear who coordinates with the provider.
- “When do you reclassify, and do you keep the trail?”
  Hangup: priority changes without documentation, or old values overwritten.
Frequent implementation mistakes and how to avoid them
- Mistake: Too many categories. Analysts guess and misroute incidents.
  Avoid: Keep categories aligned to playbooks. If two categories trigger the same response, merge them.
- Mistake: Priority based on “scariness.” The loudest stakeholder sets severity.
  Avoid: Tie priority to defined impact criteria and observable indicators, plus a confidence measure.
- Mistake: Priority doesn’t change what happens. Same response regardless of severity.
  Avoid: Hardwire escalations and task lists to priority tiers in your system of record.
- Mistake: No evidence of consistency. You have a matrix, but no calibration or sampling.
  Avoid: Run periodic classification QA on a ticket sample; document findings and updates.
- Mistake: Third-party incidents treated as “someone else’s problem.”
  Avoid: Add a third-party category and define coordination steps: contract notification, shared forensics, customer messaging alignment, and internal ownership.
Enforcement context and risk implications
No public enforcement cases cite this requirement directly, so you should treat RS.MA-03 as a defensibility and readiness control rather than a penalty-citation item.
Operational risk is still real:
- Misprioritized incidents cause delayed containment, incomplete scoping, and inconsistent reporting to leadership and customers.
- Poor categorization breaks metrics. You cannot credibly explain trends, resourcing needs, or control effectiveness if the underlying labels are inconsistent.
- Third-party incidents are frequently time-sensitive. Without a defined category and escalation path, coordination and contractual notice can slip.
Practical 30/60/90-day execution plan
First 30 days (Immediate stabilization)
- Appoint control owner and confirm incident system of record.
- Draft or update taxonomy (categories) and severity matrix (priority criteria).
- Configure required fields (Category, Priority, Priority rationale) in the ticketing/case system.
- Publish a triage SOP that states who can set or change priority and when reclassification is allowed. 1
By 60 days (Operational rollout)
- Train analysts and incident commanders on the matrix with scenario-based exercises.
- Implement automation for high-priority escalations (paging, distribution lists, bridge creation, mandatory task checklist).
- Run a calibration session: have multiple responders classify the same set of historical incidents; resolve gaps in criteria.
- Start a lightweight QA review of closed incidents focused on correct category/priority and documented rationale.
By 90 days (Audit-ready operation)
- Produce an evidence bundle: the taxonomy, severity matrix, SOP, and a sample of incident records showing consistent classification and escalations.
- Add post-incident review prompts: “Was category correct?” “Was priority correct?” “Should criteria change?”
- Establish recurring governance: periodic review of the matrix and categories based on lessons learned and changes in systems, data, and third-party dependencies.
If you manage this in Daydream, set RS.MA-03 to a recurring evidence cadence that pulls incident samples and captures change approvals when the matrix is updated.
Frequently Asked Questions
Do we need a formal “incident” definition before we can categorize and prioritize?
Yes. If teams disagree on what counts as an incident, your records will be inconsistent and prioritization won’t be comparable across teams. Document the boundary and require everything meeting the definition to be recorded and classified. 1
Can we meet RS.MA-03 with just High/Medium/Low severity?
You can, but auditors usually expect criteria that make those labels repeatable and actionable. If you keep three tiers, add clear decision criteria and escalation actions for each tier so responders can apply them consistently. 1
How do we handle incidents that start low priority but escalate?
Allow reclassification, but require documentation: who changed it, when, and why. Preserve history in the ticket so you can show auditors the evolution of understanding and the response adjustments.
Who should be allowed to set or change incident priority?
Define roles. Many organizations allow analysts to set an initial priority, then require an incident commander or IR lead to approve changes for high-priority cases. The key is a documented rule and an auditable trail.
What’s the minimum evidence an auditor will accept for RS.MA-03?
A severity matrix and taxonomy, a triage procedure, and a sample of incident tickets showing category, priority, timestamps, and rationale. Tickets are the core evidence because they prove the process runs in production. 1
How should we categorize third-party incidents?
Add an explicit third-party category (or a required “third-party involved” flag) and route it to a coordination playbook: provider notification, contract review, shared investigation steps, and customer communications approvals. This keeps accountability internal even when the root cause is external.
Footnotes
1. NIST CSWP 29, The NIST Cybersecurity Framework (CSF) 2.0.
2. NIST CSWP 29, RS.MA-03 subcategory text.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream