Incident Classification

The incident classification requirement means you must define severity levels and consistently classify every information security incident by impact and severity so the right response actions and escalations happen fast. For TISAX/VDA ISA, classification has to reflect data sensitivity, scope, and business consequences, not just technical symptoms. (VDA ISA Catalog v6.0)

Key takeaways:

  • Define a severity model that ties directly to response playbooks and escalation paths. (VDA ISA Catalog v6.0)
  • Classify using impact criteria (data sensitivity, scope, business consequences) with clear decision rules. (VDA ISA Catalog v6.0)
  • Keep evidence that classification happened, was consistent, and drove actions (tickets, timelines, approvals, post-incident reviews). (VDA ISA Catalog v6.0)

Incident classification is the “traffic control” of incident response. If you classify too high, you burn scarce response capacity and create escalation noise. If you classify too low, you delay containment, miss customer/OEM expectations, and increase the chance an incident becomes a reportable or contract-breaching event. VDA ISA expects a structured approach: classify information security incidents by severity and impact so you can select appropriate response actions and escalation levels. (VDA ISA Catalog v6.0)

For a CCO, GRC lead, or security governance owner, the operational goal is simple: every incident gets a consistent label that triggers the right workflow. That label must be defensible in an assessment. “Defensible” means (1) you can show defined severity levels, (2) the criteria are tied to data sensitivity, scope, and business consequences, and (3) records show people actually applied the model, including when they adjusted the classification as new facts emerged. (VDA ISA Catalog v6.0)

This page gives you requirement-level guidance you can put into a procedure, a ticketing workflow, and an assessor-ready evidence pack without turning your program into a research project.

Regulatory text

Requirement (VDA ISA 9.2.1): “Classify information security incidents by severity and impact to determine appropriate response actions and escalation levels.” (VDA ISA Catalog v6.0)

Operator interpretation: You need a defined incident severity scheme (for example, levels with names and decision criteria) and you must apply it to real incidents. Classification is not a label for reporting only; it is a control mechanism that determines who gets paged, what playbook gets executed, what leadership is notified, and what external parties (customers, third parties) may need communication under contract. (VDA ISA Catalog v6.0)

Plain-English interpretation of the requirement

You must:

  1. Define incident severity levels that your teams understand and can apply quickly. (VDA ISA Catalog v6.0)
  2. Classify incidents based on severity and impact, explicitly considering:
    • Data sensitivity (what kind of information or systems are involved)
    • Scope of impact (how widespread the issue is)
    • Business consequences (downtime, production disruption, customer commitments, safety implications, etc.) (VDA ISA Catalog v6.0)
  3. Use the classification to drive action, meaning response intensity and escalation are mapped to the classification. (VDA ISA Catalog v6.0)

Who it applies to (entity and operational context)

Entity scope: Automotive suppliers and OEMs pursuing or maintaining TISAX alignment against VDA ISA expectations. (VDA ISA Catalog v6.0)

Operational scope: Any function that detects, triages, investigates, or manages security events and incidents, including:

  • SOC / security operations (alerts, initial triage)
  • IT operations (availability incidents that may be security-related)
  • Product/OT environments (plant or engineering networks)
  • Legal/compliance and privacy (impact and notification analysis)
  • Procurement / third-party management (incidents involving third parties)
  • Business owners (production, logistics, engineering leadership)

Where this breaks in practice: Many organizations classify only after the fact, or classify based on “type of alert” (malware, phishing) rather than impact. VDA ISA language pushes you toward impact-based classification tied to escalation and response actions. (VDA ISA Catalog v6.0)

What you actually need to do (step-by-step)

1) Define your severity taxonomy (and keep it short)

Create a small set of severity levels with unambiguous names. A four-level model is common in practice, but choose what your operations can sustain. The key is: each level must be decidable during triage and must map to a workflow. (VDA ISA Catalog v6.0)

Minimum content for each severity level:

  • Definition (one sentence)
  • Impact criteria (data sensitivity, scope, business consequences)
  • Required response actions (containment, forensics, comms)
  • Escalation targets (roles, not just names)
  • Reclassification rules (when to raise/lower severity)
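The minimum content above can be captured as a structured record so each level is complete and machine-checkable. This is an illustrative sketch: the field names and the "High" example values are assumptions to adapt, not VDA ISA content.

```python
from dataclasses import dataclass

@dataclass
class SeverityLevel:
    name: str
    definition: str           # one-sentence definition
    impact_criteria: dict     # data sensitivity / scope / business consequence
    response_actions: list    # containment, forensics, comms
    escalation_targets: list  # roles, not names
    reclassification_rule: str  # when to raise or lower

# Hypothetical example of one fully specified level
HIGH = SeverityLevel(
    name="High",
    definition="Confidential or customer/OEM data affected, or production delivery at risk.",
    impact_criteria={
        "data_sensitivity": "Confidential/customer/OEM",
        "scope": "Multiple systems or a full site",
        "business_consequence": "Production or customer delivery risk",
    },
    response_actions=["activate major-incident playbook", "engage forensics", "prepare customer comms"],
    escalation_targets=["Incident Commander", "CISO", "Affected Business Owner"],
    reclassification_rule="Raise to Critical if impact spreads to multiple sites.",
)

print(HIGH.name, "->", ", ".join(HIGH.escalation_targets))
```

Keeping each level in one record makes gaps obvious: a level with no escalation targets or no reclassification rule fails review before it fails an assessment.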

2) Build a classification decision matrix your analysts can use

Put the criteria in a table so triage is consistent. Example structure (adapt to your environment):

| Dimension | Low | Medium | High | Critical |
| --- | --- | --- | --- | --- |
| Data sensitivity | Public/internal | Internal with limited sensitivity | Confidential/customer/OEM | Highly sensitive, regulated, or core IP |
| Scope | Single endpoint | Small group/system | Multiple systems/site | Widespread, enterprise or multi-site |
| Business consequence | Minimal disruption | Noticeable disruption | Production/customer delivery risk | Major outage or severe business disruption |

Your matrix must be tailored to how your business defines sensitivity and consequences. The requirement is explicit that these factors matter. (VDA ISA Catalog v6.0)
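A common decision rule over a matrix like this is "highest dimension wins": the worst single dimension sets the severity, so a small-scope incident touching regulated data still rates Critical. The sketch below assumes that rule and invents the criterion keys; both are choices to adapt, not requirements.

```python
LEVELS = ["Low", "Medium", "High", "Critical"]

# Hypothetical ratings per dimension, mirroring the example matrix above
RATINGS = {
    "data_sensitivity": {"public": 0, "internal_limited": 1, "confidential": 2, "regulated_or_core_ip": 3},
    "scope": {"single_endpoint": 0, "small_group": 1, "multi_system_or_site": 2, "enterprise": 3},
    "business_consequence": {"minimal": 0, "noticeable": 1, "delivery_risk": 2, "major_outage": 3},
}

def classify(data_sensitivity, scope, business_consequence):
    """Return the severity level under a 'highest dimension wins' rule."""
    scores = [
        RATINGS["data_sensitivity"][data_sensitivity],
        RATINGS["scope"][scope],
        RATINGS["business_consequence"][business_consequence],
    ]
    return LEVELS[max(scores)]

# Confidential data on one endpoint with noticeable disruption -> High
print(classify("confidential", "single_endpoint", "noticeable"))
```

Other aggregation rules (weighted sums, two-of-three thresholds) are possible; whatever rule you choose, write it down so two analysts reach the same answer from the same facts.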

3) Map classification to response actions and escalation levels

For each severity level, document:

  • Who is notified (on-call, incident commander, exec sponsor)
  • What playbook applies (ransomware, BEC, data exposure, OT disruption)
  • What approvals are required (e.g., system isolation, production shutdown decisions)
  • What comms are triggered (internal stakeholders; customer/OEM contact; third party engagement)

This is the “determine appropriate response actions and escalation levels” part of the requirement. If severity does not change actions, classification becomes theater and assessors will see it. (VDA ISA Catalog v6.0)
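The severity-to-action mapping can be expressed as a simple lookup table that tooling and humans share. The role names, playbook names, and notification groups below are placeholders, not VDA ISA content.

```python
# Illustrative escalation map keyed by severity level
ESCALATION = {
    "Low":      {"notify": ["SOC on-call"],
                 "playbook": "standard triage",
                 "comms": []},
    "Medium":   {"notify": ["SOC on-call", "IR lead"],
                 "playbook": "targeted containment",
                 "comms": ["IT management"]},
    "High":     {"notify": ["Incident Commander", "CISO"],
                 "playbook": "major incident",
                 "comms": ["exec sponsor", "affected business owner"]},
    "Critical": {"notify": ["Incident Commander", "CISO", "executive leadership"],
                 "playbook": "crisis management",
                 "comms": ["customer/OEM contact", "legal"]},
}

def escalation_for(severity):
    """Look up who gets notified, which playbook applies, and what comms fire."""
    return ESCALATION[severity]

print(escalation_for("High")["notify"])
```

Because every severity resolves to concrete notifications and a playbook, "classification drives action" is demonstrable rather than asserted: an assessor can trace any ticket's severity to the escalations it should have triggered.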

4) Put classification into the workflow tooling (don’t leave it in a PDF)

Make classification a required field in:

  • Incident ticketing (ITSM, SOAR, case management)
  • SOC triage forms
  • Major incident bridge templates

Implementation detail that matters: require initial severity at creation and current severity as a living value, because early triage is uncertain and later facts change impact. Keep the history in the ticket. (VDA ISA Catalog v6.0)
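The "initial severity plus living current severity" pattern can be sketched as a ticket model that never overwrites the initial value and logs every change with a timestamp and rationale. Field names here are assumptions; map them onto whatever your ITSM or SOAR tool calls them.

```python
from datetime import datetime, timezone

class IncidentTicket:
    """Sketch of a ticket keeping initial severity immutable and a full change history."""

    def __init__(self, ticket_id, initial_severity, rationale):
        self.ticket_id = ticket_id
        self.initial_severity = initial_severity   # set once at creation, never overwritten
        self.current_severity = initial_severity   # living value updated as facts emerge
        self.history = [(datetime.now(timezone.utc).isoformat(), initial_severity, rationale)]

    def reclassify(self, new_severity, reason):
        # Every change is appended, so the audit trail survives in the ticket itself
        self.history.append((datetime.now(timezone.utc).isoformat(), new_severity, reason))
        self.current_severity = new_severity

t = IncidentTicket("INC-1042", "Medium", "Single workstation, no sensitive data confirmed")
t.reclassify("High", "CAD files confirmed on affected host")
print(t.initial_severity, "->", t.current_severity, f"({len(t.history)} history entries)")
```

The history list is exactly the evidence an assessor asks for when probing "what happens if the initial classification is wrong."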

5) Train the people who classify incidents (and test consistency)

Run short, scenario-based training using your own environment examples:

  • phishing leading to mailbox access
  • malware on an engineering workstation with CAD files
  • OT network scan during production
  • third party remote access anomaly

Then do periodic calibration: take a sample of closed incidents and check whether different analysts would have classified them the same way based on the documented criteria. (VDA ISA Catalog v6.0)
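A calibration session can be scored with a simple agreement metric: for each sampled incident, what fraction of analysts picked the most common severity? The sketch below uses invented incident IDs and labels, and the 80% review threshold is an assumption, not a standard.

```python
from collections import Counter

def agreement_rate(labels):
    """Fraction of analysts who picked the most common severity for one incident."""
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Hypothetical calibration sample: three analysts re-classify two closed incidents
sample = {
    "INC-0917": ["High", "High", "Medium"],
    "INC-0954": ["Low", "Low", "Low"],
}

for incident, labels in sample.items():
    rate = agreement_rate(labels)
    flag = "" if rate >= 0.8 else "  <- review criteria wording"
    print(f"{incident}: {rate:.0%} agreement{flag}")
```

Low-agreement incidents point at ambiguous criteria language, which is what the calibration session should feed back into the matrix.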

6) Add governance: ownership, review cadence, and exception handling

Assign:

  • Process owner (often IR lead, SOC manager, or GRC)
  • Approval authority for downgrades (common control to prevent “severity minimization”)
  • Review trigger (new customer requirements, new systems, new data classifications)

Document how to handle edge cases (for example, “unknown data involved” defaults to higher severity until proven otherwise). (VDA ISA Catalog v6.0)

Required evidence and artifacts to retain

For assessments and internal audit, retain artifacts that prove “defined, applied, actioned”:

Core documents

  • Incident classification standard/procedure with severity definitions and criteria. (VDA ISA Catalog v6.0)
  • Severity-to-escalation mapping (RACI or notification matrix). (VDA ISA Catalog v6.0)
  • Severity-to-playbook mapping (which runbooks apply at each level). (VDA ISA Catalog v6.0)

Operational records

  • Incident tickets showing:
    • initial severity, current severity, timestamps
    • rationale notes tied to criteria (data/scope/business consequence)
    • escalation actions (who was notified, when)
    • reclassification history and approvals (if applicable) (VDA ISA Catalog v6.0)
  • On-call logs / bridge notes for high-severity incidents showing escalation occurred. (VDA ISA Catalog v6.0)
  • Post-incident reviews indicating whether classification was correct and what changed. (VDA ISA Catalog v6.0)
  • Training records and scenario test results. (VDA ISA Catalog v6.0)

Common exam/audit questions and hangups

Expect assessors to probe consistency and linkage to action:

  1. “Show me your severity definitions and how you decide.” Provide the matrix and procedure. (VDA ISA Catalog v6.0)
  2. “Show me three incidents and explain why they were classified that way.” Have examples ready across severities. (VDA ISA Catalog v6.0)
  3. “Where does data sensitivity show up in the decision?” Point to your data classification mapping inside the matrix. (VDA ISA Catalog v6.0)
  4. “How do you ensure escalation happens?” Show the notification matrix plus ticket evidence (timestamps, bridge invites). (VDA ISA Catalog v6.0)
  5. “What happens if the initial classification is wrong?” Show reclassification workflow and approvals. (VDA ISA Catalog v6.0)

Frequent implementation mistakes and how to avoid them

Mistake: Classifying by incident “type” instead of “impact”

Fix: Keep type as a separate field. Severity must be driven by data sensitivity, scope, and business consequence. (VDA ISA Catalog v6.0)

Mistake: Too many severity levels

Fix: Reduce levels until analysts can decide quickly and consistently. Make the matrix do the work, not prose. (VDA ISA Catalog v6.0)

Mistake: Severity does not trigger different actions

Fix: Map each severity to required escalations and minimum response steps. Then enforce in tooling (mandatory tasks, notification groups). (VDA ISA Catalog v6.0)

Mistake: No evidence of reclassification decisions

Fix: Add a required “reason for change” field and require approval for downgrades above a defined level. Keep the audit trail. (VDA ISA Catalog v6.0)
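The downgrade control above can be enforced in code at the point of change: every change needs a rationale, and downgrades from High or above need a named approver. The High threshold and role names are assumptions to tune to your own governance.

```python
LEVELS = ["Low", "Medium", "High", "Critical"]

def change_severity(current, proposed, reason, approved_by=None):
    """Apply a severity change, enforcing rationale and downgrade-approval rules."""
    if not reason:
        raise ValueError("A documented rationale is required for any severity change")
    is_downgrade = LEVELS.index(proposed) < LEVELS.index(current)
    # Anti-minimization gate: downgrades from High/Critical need a named approver
    if is_downgrade and LEVELS.index(current) >= LEVELS.index("High") and approved_by is None:
        raise PermissionError("Downgrades from High/Critical require approval")
    return proposed

print(change_severity("High", "Medium", "Scope confirmed to one host", approved_by="IR lead"))
```

Most ticketing tools can express the same gate with a mandatory field plus an approval step on a transition; the point is that the rule lives in the workflow, not in a PDF.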

Mistake: Ignoring third-party involvement

Fix: Add a decision point: “Does this involve a third party system, access path, or hosted data?” If yes, trigger third-party coordination steps and contract review. (VDA ISA Catalog v6.0)

Enforcement context and risk implications

No public enforcement cases were provided in the available sources for this requirement, so treat the risk as assurance, contractual, and operational: poor incident classification drives slow escalation, inconsistent response, missed customer communications, and assessor findings that can affect trust and commercial outcomes in automotive supply chains. The requirement language is explicit that classification must determine response actions and escalation levels, so a “paper-only” scheme creates a clear assessment gap. (VDA ISA Catalog v6.0)

Practical 30/60/90-day execution plan

First 30 days (stabilize decisions)

  • Draft severity levels and a one-page decision matrix aligned to data sensitivity, scope, and business consequences. (VDA ISA Catalog v6.0)
  • Define the escalation/notification matrix by severity (roles, not individuals). (VDA ISA Catalog v6.0)
  • Pick your system of record (ITSM/SOAR) and add required fields: initial severity, current severity, reason for change. (VDA ISA Catalog v6.0)
  • Identify an incident sample set (recent tickets) to test the matrix for consistency. (VDA ISA Catalog v6.0)

Next 60 days (embed into operations)

  • Publish the procedure and train SOC/IT/OT and business stakeholders with scenario walk-throughs. (VDA ISA Catalog v6.0)
  • Connect severity levels to playbooks and minimum required actions. (VDA ISA Catalog v6.0)
  • Run a calibration session: multiple analysts classify the same past incidents, then reconcile gaps and update criteria language. (VDA ISA Catalog v6.0)

By 90 days (prove it works)

  • Perform a tabletop exercise that forces reclassification as new facts arrive (e.g., “suspected malware” becomes “data exposure”). Capture the evidence. (VDA ISA Catalog v6.0)
  • Start ongoing QA: periodic review of incident records for correct classification, timely escalation, and complete documentation. (VDA ISA Catalog v6.0)
  • Prepare an assessor-ready evidence pack: procedure, matrix, mappings, training proof, and incident examples with narratives. (VDA ISA Catalog v6.0)

Where Daydream fits: If you manage incidents that involve third parties (outsourcers, SaaS, managed services), Daydream can centralize third-party profiles, map incidents to affected third parties, and keep the evidence chain (tickets, comms, corrective actions) tied to the same record you’ll need during assessment.

Frequently Asked Questions

Do we have to classify every security event, or only confirmed incidents?

The requirement is about “information security incidents,” so focus on confirmed incidents and credible suspected incidents that trigger response actions. In practice, teams often start classification at triage and adjust after confirmation, as long as you keep the reclassification trail. (VDA ISA Catalog v6.0)

Can we use our existing ITIL priority (P1–P4) as incident severity?

You can, if the decision criteria explicitly incorporate data sensitivity, scope of impact, and business consequences and if the priority drives escalation and actions. If your P1–P4 is purely uptime-based, add security impact criteria or create a separate security severity field. (VDA ISA Catalog v6.0)

Who should own incident classification: SOC, IR, or GRC?

Operations should classify (SOC/IR) because they act on it, and GRC should govern the definitions, evidence expectations, and periodic QA. Assign a single accountable process owner and define who can approve severity changes. (VDA ISA Catalog v6.0)

How do we classify incidents involving a third party where we lack full visibility?

Default to the higher plausible severity until you have facts, and document assumptions and information requests to the third party. Tie classification updates to what you learn (data involved, scope, business consequence) and keep the correspondence as evidence. (VDA ISA Catalog v6.0)

What evidence will an assessor actually want to see?

They will ask for your documented severity model plus real incident records that show the model was applied and drove escalation and response actions. Keep a curated set of incidents across severities with short narratives explaining the classification decision. (VDA ISA Catalog v6.0)

How do we prevent teams from downgrading severity to avoid escalations?

Require documented rationale for severity changes and set an approval rule for downgrades above a defined threshold. Add QA reviews that compare similar incidents for consistent classification outcomes. (VDA ISA Catalog v6.0)

