Safeguard 16.6: Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities
Safeguard 16.6 requires you to define a consistent vulnerability severity rating method for application findings and run an end-to-end process that turns findings into prioritized remediation, tracked to closure with clear SLAs and exceptions. To operationalize it quickly, standardize a scoring model (often CVSS plus business context), integrate it into your SDLC and ticketing, and keep auditable evidence of decisions and outcomes. (CIS Controls v8; CIS Controls Navigator v8)
Key takeaways:
- A “severity system” is useless without a workflow: intake → triage → prioritize → fix → verify → close → report. (CIS Controls v8)
- You must include business impact and exploitability context, not just scanner scores, for application vulnerability severity. (CIS Controls v8)
- Evidence is the control: documented rating criteria, completed triage records, remediation tracking, and recurring metrics. (CIS Controls Navigator v8)
Meeting Safeguard 16.6 ("Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities") is a control-operator problem, not a tooling problem. A CCO or GRC lead typically inherits a mixed reality: multiple scanners, multiple app teams, inconsistent labels ("critical" means different things), and remediation tickets that age without clear ownership.
This requirement from CIS Controls v8 expects you to standardize how application vulnerabilities are rated and how those ratings drive action. The fastest path is to pick one severity taxonomy, define how you translate raw findings into that taxonomy (including compensating controls and business context), then enforce the taxonomy inside the workflow your engineers already use (ticketing, CI/CD gates, and exception approvals). (CIS Controls v8; CIS Controls Navigator v8)
Your goal is repeatability and defensibility: two analysts rating the same app finding should reach the same severity, and you should be able to show an assessor how severity drove prioritization, remediation timelines, verification, and management reporting. Daydream can help by mapping Safeguard 16.6 to documented control operation and recurring evidence capture so you stay assessment-ready without rebuilding your process each quarter. (CIS Controls v8; CIS Controls Navigator v8)
Regulatory text
Framework requirement (CIS Controls v8, Control 16: Application Software Security, Safeguard 16.6): "Establish and maintain a severity rating system and process for application vulnerabilities that facilitates prioritizing the order in which discovered vulnerabilities are fixed." (CIS Controls v8; CIS Controls Navigator v8)
What the operator must do:
You need (1) a defined severity rating system for application vulnerability findings and (2) a maintained process that applies the system consistently, routes work to owners, tracks remediation to closure, and produces evidence that the process operates over time. “Maintain” means it does not live as a one-time document; you review and update the rating criteria and workflow as your apps, threats, and tooling change. (CIS Controls v8)
Plain-English interpretation
Build one playbook for application vulnerability severity that answers three questions every time:
- How bad is it in this application context? (not just the scanner output)
- What happens next based on that rating? (SLA, escalation, gating, approvals)
- How do we prove we did it consistently? (records, metrics, exceptions)
If your program cannot show consistent ratings, consistent actions, and consistent evidence, you will struggle to demonstrate Safeguard 16.6 implementation. (CIS Controls v8)
Who it applies to
Entity types: Enterprises and technology organizations implementing CIS Controls v8. (CIS Controls v8; CIS Controls Navigator v8)
Operational context (where this control lives):
- Product and application security (AppSec) teams running SAST/DAST/SCA and manual testing
- Engineering teams receiving vulnerability tickets and fixing code
- IT/security operations teams handling patching when the “application vulnerability” is in a platform component
- GRC/compliance teams defining policy, oversight, and evidence retention
Systems in scope:
- Internally developed applications, APIs, and services
- Third-party code components included in your apps (libraries, containers, frameworks)
- SDLC tooling that generates or tracks findings (CI/CD, repos, scanners, ticketing)
What you actually need to do (step-by-step)
1) Define a severity taxonomy and stick to it
Pick a small, explicit set of severities (example: Critical/High/Medium/Low/Informational) and publish definitions that any assessor can read in one page.
Minimum definition fields to document:
- Exploitability (how feasible exploitation is in your environment)
- Impact (confidentiality/integrity/availability and business impact)
- Exposure (internet-facing, authenticated, internal-only)
- Data sensitivity handled by the affected component
- Compensating controls that reduce effective risk (WAF rules, feature flags, network segmentation)
Output artifact: Application Vulnerability Severity Standard (one-pager plus an appendix for edge cases). (CIS Controls v8)
2) Choose the scoring method and the “override” rules
Most teams start with CVSS from scanners, then apply context adjustments. Safeguard 16.6 does not force a specific model; it forces consistency.
Document:
- Primary scoring inputs (scanner score, CWE category, exploit maturity)
- Context modifiers (internet exposure, privilege required, sensitive data)
- Rules for raising severity (e.g., auth bypass in production)
- Rules for lowering severity (e.g., dead code confirmed, unreachable endpoint) with required proof
- Who can approve severity overrides (AppSec lead, product security, delegated champions)
Output artifact: Severity Rating Decision Tree (simple flowchart) plus override approval record template. (CIS Controls v8)
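The decision tree above can be sketched as a small function. This is a minimal illustration, assuming a CVSS base score as the primary input; the thresholds, modifier names, and one-step adjustment rules are illustrative examples, not CIS-mandated values.

```python
def rate_severity(cvss_base: float, internet_facing: bool,
                  sensitive_data: bool, compensating_control: bool) -> str:
    """Map a scanner score plus context modifiers onto the taxonomy."""
    # Map the raw scanner score onto severity bands (CVSS v3 qualitative scale).
    if cvss_base >= 9.0:
        severity = "Critical"
    elif cvss_base >= 7.0:
        severity = "High"
    elif cvss_base >= 4.0:
        severity = "Medium"
    else:
        severity = "Low"

    order = ["Low", "Medium", "High", "Critical"]
    idx = order.index(severity)

    # Raise-severity rule: internet exposure combined with sensitive data.
    if internet_facing and sensitive_data:
        idx = min(idx + 1, len(order) - 1)
    # Lower-severity rule: a verified compensating control, one step at most.
    if compensating_control:
        idx = max(idx - 1, 0)

    return order[idx]
```

In practice the function's inputs and the final severity should be written to the ticket along with the override approval record, so the rationale is reconstructible later.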
3) Embed severity into the vulnerability intake and triage workflow
Make it operational by forcing every finding through the same lifecycle states.
Recommended workflow states:
- New (untriaged)
- Triaged (validated or rejected as false positive)
- Rated (severity assigned with rationale)
- Assigned (owner + due date based on severity)
- Remediated (code fix/patch merged)
- Verified (retest confirms closure)
- Closed (with closure reason)
Key point for auditors: your system must show when severity was set, by whom, and why. (CIS Controls v8)
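The lifecycle states above can be enforced as an explicit transition map, so tooling rejects out-of-order moves (for example, closing a finding that was never verified). The state names follow the list above; which transitions you permit is a policy choice, and the map here is illustrative.

```python
# Allowed next states for each lifecycle state.
TRANSITIONS = {
    "New":        {"Triaged"},
    "Triaged":    {"Rated", "Closed"},   # false positives close from triage
    "Rated":      {"Assigned"},
    "Assigned":   {"Remediated"},
    "Remediated": {"Verified"},
    "Verified":   {"Closed"},
    "Closed":     set(),
}

def advance(state: str, new_state: str) -> str:
    """Move a finding to new_state, rejecting transitions the policy forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```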
4) Tie severity to remediation expectations (SLAs) and escalation
Define expected remediation timelines by severity and environment (production vs non-production), and define escalation paths when SLAs are missed.
Your SLA policy should cover:
- Default remediation target per severity
- Triage time expectations (how quickly findings get rated)
- Escalation steps (engineering manager → product owner → security leadership)
- Pause conditions (e.g., release freeze) and how they’re documented
Output artifact: Vulnerability Remediation Standard with an SLA table and escalation contacts. (CIS Controls v8)
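An SLA table like the one in your Remediation Standard can drive due-date assignment automatically at the "Rated" step. The day counts below are example values only; set your own targets per severity and environment.

```python
from datetime import date, timedelta

# Illustrative remediation targets, in days, per (severity, environment).
SLA_DAYS = {
    ("Critical", "production"): 7,
    ("High", "production"): 30,
    ("Medium", "production"): 90,
    ("Low", "production"): 180,
    ("Critical", "non-production"): 30,
    ("High", "non-production"): 90,
    ("Medium", "non-production"): 180,
    ("Low", "non-production"): 365,
}

def due_date(rated_on: date, severity: str, environment: str) -> date:
    """Compute the SLA due date from the rating date."""
    return rated_on + timedelta(days=SLA_DAYS[(severity, environment)])
```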
5) Implement exceptions with governance, not informal deferrals
You need an exception process for cases where you accept risk temporarily (or permanently), and exceptions must be searchable and reviewable.
Define:
- Acceptable exception reasons (vendor patch unavailable, compensating controls)
- Required fields (affected asset, severity, rationale, expiry, approver)
- Re-approval cadence and expiry behavior (auto-reopen ticket)
Evidence to retain: an exception register plus approvals. (CIS Controls v8)
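The exception register and its expiry behavior can be sketched as follows. Entries carry the required fields from the list above; field names and sample data are illustrative, and in production the "reopen" action would update the ticketing system.

```python
from datetime import date

def expired_exceptions(register: list[dict], today: date) -> list[dict]:
    """Return entries whose expiry has passed, so their tickets auto-reopen."""
    return [e for e in register if e["expiry"] < today]

# Hypothetical register entries with the required fields.
register = [
    {"asset": "billing-api", "severity": "High",
     "rationale": "vendor patch unavailable", "approver": "appsec-lead",
     "expiry": date(2024, 6, 1)},
    {"asset": "intranet-portal", "severity": "Medium",
     "rationale": "compensating WAF rule in place", "approver": "appsec-lead",
     "expiry": date(2025, 1, 1)},
]
```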
6) Verify fixes and ensure closure quality
A rating system fails if you never confirm remediation. Require verification for higher severities and define acceptable verification methods:
- Rescan in CI/CD or scanning platform
- Manual validation for logic flaws
- Unit/integration tests that prevent regression
Evidence: before/after findings, retest results, and closure notes. (CIS Controls v8)
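A closure-quality audit over a ticket export can flag higher-severity items closed without retest evidence. This sketch assumes each exported ticket carries a verification field; the field names are illustrative.

```python
def closures_missing_verification(tickets: list[dict]) -> list[str]:
    """Return IDs of Critical/High findings closed without retest evidence."""
    return [
        t["id"] for t in tickets
        if t["state"] == "Closed"
        and t["severity"] in ("Critical", "High")
        and not t.get("verification_evidence")
    ]

# Hypothetical export sample.
tickets = [
    {"id": "VULN-101", "state": "Closed", "severity": "High",
     "verification_evidence": "rescan-2024-05-02"},
    {"id": "VULN-102", "state": "Closed", "severity": "Critical",
     "verification_evidence": None},
    {"id": "VULN-103", "state": "Closed", "severity": "Low",
     "verification_evidence": None},
]
```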
7) Report metrics that prove the process operates
Pick a small set of recurring metrics that show control operation:
- Open vulnerabilities by severity and age
- SLA compliance by severity and team
- Exceptions by severity and expiry
- Time-to-triage and time-to-close trends
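Two of these metrics can be computed directly from a ticket export. This is a minimal sketch; field names and the sample data are illustrative.

```python
from collections import Counter
from datetime import date

def open_by_severity(tickets: list[dict]) -> Counter:
    """Count open (non-closed) findings per severity."""
    return Counter(t["severity"] for t in tickets if t["state"] != "Closed")

def sla_compliance(tickets: list[dict]):
    """Fraction of closed findings remediated on or before their due date."""
    closed = [t for t in tickets if t["state"] == "Closed"]
    if not closed:
        return None
    on_time = sum(1 for t in closed if t["closed_on"] <= t["due"])
    return on_time / len(closed)

# Hypothetical export sample.
tickets = [
    {"severity": "High", "state": "Open",
     "due": date(2024, 5, 1), "closed_on": None},
    {"severity": "High", "state": "Closed",
     "due": date(2024, 5, 1), "closed_on": date(2024, 4, 20)},
    {"severity": "Critical", "state": "Closed",
     "due": date(2024, 4, 1), "closed_on": date(2024, 4, 10)},
]
```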
Daydream fits well here by mapping Safeguard 16.6 to documented control operation and recurring evidence capture so you can produce consistent monthly evidence without scrambling during audits. (CIS Controls v8; CIS Controls Navigator v8)
Required evidence and artifacts to retain
Keep evidence that demonstrates design (your rules) and operating effectiveness (your records). Minimum set:
- Severity Rating Policy/Standard (definitions, criteria, override rules) (CIS Controls v8)
- Workflow documentation (states, roles, handoffs, escalation) (CIS Controls v8)
- Sample of vulnerability records showing:
- finding source (SAST/DAST/SCA/pentest)
- severity + rationale
- owner assignment
- due date/SLA
- remediation commits/patch references
- verification results
- closure date and closure reason (CIS Controls v8)
- Exception register with approvals and expiry dates (CIS Controls v8)
- Recurring metrics reports and evidence of review (security steering notes, ticket exports) (CIS Controls v8)
- Change log for severity model updates (who changed what and when) (CIS Controls v8)
Common exam/audit questions and hangups
Expect assessors to probe consistency and traceability:
- “Show me your documented severity rating criteria for application vulnerabilities.” (CIS Controls v8)
- “Pick three recent findings. Walk me from detection through triage, rating, assignment, remediation, verification, and closure.” (CIS Controls v8)
- “How do you adjust a scanner’s severity for business context, and who approves overrides?” (CIS Controls v8)
- “How do exceptions work, and how do you ensure they expire and get re-reviewed?” (CIS Controls v8)
- “How do you ensure the process is maintained across multiple product teams?” (CIS Controls v8)
Typical hangup: teams can show scanner outputs but cannot show a controlled decision trail for severity overrides and SLA exceptions.
Frequent implementation mistakes and how to avoid them
| Mistake | Why it fails Safeguard 16.6 | Fix |
|---|---|---|
| Using scanner severity as “the system” | Scanner scores ignore environment exposure and business impact | Document context modifiers and require rationale on final severity (CIS Controls v8) |
| No consistent ownership model | Tickets bounce or age without escalation | Define app/component owners and escalation path tied to SLA (CIS Controls v8) |
| Exceptions handled in email/Slack | No audit trail; exceptions never expire | Use a formal exception register with approver, rationale, and expiry (CIS Controls v8) |
| Closure without verification | Findings reappear; no proof risk reduced | Require rescan/retest evidence for closure, especially high severities (CIS Controls v8) |
| Policy exists but no operating evidence | Assessors evaluate operation, not intentions | Schedule recurring metrics review and retain samples each period (CIS Controls Navigator v8) |
Enforcement context and risk implications
CIS Controls v8 is a framework, not a regulator. Your real risk shows up indirectly: if you cannot rate and prioritize application vulnerabilities consistently, you can miss exploit paths in production, fail internal risk reporting, and struggle to demonstrate control operation to customers, auditors, and oversight functions that rely on CIS mappings. (CIS Controls v8)
Practical 30/60/90-day execution plan
First 30 days (stabilize and standardize)
- Publish severity taxonomy and definitions for application vulnerabilities. (CIS Controls v8)
- Decide the scoring method and document override rules and approvers. (CIS Controls v8)
- Inventory current sources of app vuln findings and pick the system of record (ticketing or vuln platform). (CIS Controls v8)
- Start capturing evidence: export a weekly sample of triaged findings with severity rationale. (CIS Controls Navigator v8)
Days 31–60 (operate the workflow and enforce accountability)
- Implement workflow states and required fields (severity, rationale, owner, due date). (CIS Controls v8)
- Roll out SLA targets by severity and an escalation path; socialize with engineering managers. (CIS Controls v8)
- Stand up an exception register and require approvals for SLA breaches or risk acceptance. (CIS Controls v8)
- Begin a recurring metrics review with Security and Engineering leadership; retain minutes/screenshots. (CIS Controls v8)
Days 61–90 (prove consistency and maintainability)
- Run calibration sessions: have multiple raters score the same findings and refine criteria. (CIS Controls v8)
- Add verification requirements and closure standards; audit a sample of closed items for proof of retest. (CIS Controls v8)
- Formalize “maintain” activities: schedule periodic review of the severity model and workflow changes. (CIS Controls v8)
- Map Safeguard 16.6 to documented control operation and recurring evidence capture in Daydream (or equivalent GRC workflow) so audits become evidence retrieval, not a fire drill. (CIS Controls v8; CIS Controls Navigator v8)
Frequently Asked Questions
Do we have to use CVSS for the severity rating system?
No specific model is mandated by CIS Controls v8, but you need a documented, repeatable method that you apply consistently to application vulnerabilities. If you use CVSS, document how you adjust it for exposure and business impact. (CIS Controls v8)
How do we handle “informational” findings from scanners?
Treat them as a defined severity category with clear handling rules, such as backlog grooming and trend reporting. The key is that they still move through intake, triage, and closure with evidence. (CIS Controls v8)
What’s the minimum evidence an auditor will accept for Safeguard 16.6?
You need documented rating criteria plus operating records that show severity assignment, remediation tracking, verification, and exceptions where applicable. Keep recurring snapshots so you can prove the process runs over time. (CIS Controls v8; CIS Controls Navigator v8)
How should we rate vulnerabilities in third-party components embedded in our applications?
Rate them using the same system, but include component reachability and actual use in your environment in the rationale. If a fix is blocked by a third party, document the exception, compensating controls, and re-review trigger. (CIS Controls v8)
Who should own the severity rating decision, AppSec or engineering?
AppSec should own the standard and calibration, while engineering owns remediation execution. Many teams use shared triage with AppSec approval for overrides to keep consistency across products. (CIS Controls v8)
We have multiple tools (SAST, DAST, SCA). Do we need one unified severity?
Yes, you need one severity taxonomy so leadership reporting and SLA enforcement are consistent across finding sources. You can keep tool-native scores as inputs, but the final severity should be normalized. (CIS Controls v8)
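Normalizing tool-native labels into the shared taxonomy can be as simple as a per-tool lookup table. The mappings below are illustrative; derive yours from each tool's documentation and record them in the Severity Standard.

```python
# Hypothetical per-tool label mappings into the shared taxonomy.
TOOL_MAP = {
    "sast": {"error": "High", "warning": "Medium", "note": "Low"},
    "sca":  {"critical": "Critical", "high": "High",
             "moderate": "Medium", "low": "Low"},
}

def normalize(tool: str, native_label: str) -> str:
    """Translate a tool-native severity label into the shared taxonomy."""
    return TOOL_MAP[tool][native_label.lower()]
```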
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream