Safeguard 16.14: Conduct Threat Modeling

Safeguard 16.14 requires you to perform threat modeling on your systems so you can identify likely attacker actions, prioritize security requirements, and drive concrete design and engineering changes before release. Operationalize it by defining scope triggers, running a repeatable workshop method, tracking threats to mitigations, and retaining evidence that the process happens and changes outcomes. 1

Key takeaways:

  • Define when threat modeling is mandatory (new systems, major changes, exposed interfaces) and make it a release gate. 1
  • Record threats, assumptions, mitigations, owners, and residual risk, then link them to tickets and architecture decisions. 1
  • Evidence is the control: keep the model outputs, meeting records, and proof that fixes shipped. 1

Threat modeling is one of the fastest ways to turn “security by intent” into security by design. For a Compliance Officer, CCO, or GRC lead, the Safeguard 16.14 (Conduct Threat Modeling) requirement is less about selecting a specific methodology and more about proving a consistent practice: you routinely analyze how a system can be attacked, choose mitigations, and track those mitigations to completion. 1

Most programs fail this requirement for predictable reasons: threat modeling is treated as optional, happens only in security teams (not engineering), or produces documents that never change implementation. Auditors and internal stakeholders will look for repeatability (a defined trigger and cadence), coverage (systems that matter are included), and traceability (threats map to requirements, design decisions, and work items). 1

This page gives requirement-level implementation guidance you can put into motion quickly: who must participate, how to run the workflow end-to-end, what artifacts to retain, and what questions you should expect during an assessment. It also includes a practical execution plan and common failure modes so you can avoid building a “paper threat modeling” program. 1

Regulatory text

Framework requirement: “CIS Controls v8 safeguard 16.14 implementation expectation (Conduct Threat Modeling).” 1

Operator interpretation: You need a defined, repeatable process to evaluate credible threats against systems (applications, services, infrastructure patterns), document the results, and drive mitigations into engineering work. Passing implementation means you can show threat modeling is performed for in-scope systems and changes decisions or backlog items in a traceable way. 1

Plain-English interpretation (what this means in practice)

Threat modeling means answering four operational questions for a system you build or materially change:

  1. What are we building and how does data move? (trust boundaries, entry points, sensitive data)
  2. What can go wrong if an attacker targets it? (abuse cases, likely attack paths)
  3. What are we doing about it? (prevent/detect/respond controls, design changes)
  4. Did we do it and is the remaining risk acceptable? (testing, monitoring, sign-off)

For compliance, your job is to make this predictable: the organization knows when threat modeling happens, who runs it, what “done” looks like, and where the evidence lives. 1

Who it applies to (entity and operational context)

Safeguard 16.14 applies broadly to enterprises and technology organizations implementing CIS Controls v8. 1

Operationally, treat these as in scope:

  • Product and application engineering: new apps, major feature releases, API changes, auth changes.
  • Cloud and platform teams: new cloud accounts/projects, new network patterns, new identity integrations.
  • Data platforms: new data pipelines, new data sharing pathways, new analytics environments.
  • Security engineering: provides the method, facilitation, and approval gates.
  • Third-party delivered systems (where you configure or integrate): identity providers, payment processors, customer data platforms. You may not control their internals, but you do control integration design and compensating controls.

What you actually need to do (step-by-step)

Step 1: Define scope triggers and a “threat model required” rule

Write a short standard that states threat modeling is mandatory for:

  • New internet-facing services or new public endpoints
  • Significant architecture changes (new trust boundary, new auth path, new datastore)
  • New handling of sensitive or regulated data
  • New third-party integrations that add a security boundary

Make it enforceable by embedding it into your SDLC intake: architecture review, change management, or release readiness. The control fails in practice when it’s “encouraged” but not required. 1
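The trigger rule above can be sketched as a simple intake check. A minimal sketch only: the `ChangeRequest` field names are assumptions to rename against your actual SDLC intake form.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    # Hypothetical intake fields; align names with your change request form.
    internet_facing: bool = False
    new_trust_boundary: bool = False
    auth_path_changed: bool = False
    handles_sensitive_data: bool = False
    new_third_party_boundary: bool = False

def threat_model_required(cr: ChangeRequest) -> bool:
    """Any single trigger from the written standard makes modeling mandatory."""
    return any([
        cr.internet_facing,
        cr.new_trust_boundary,
        cr.auth_path_changed,
        cr.handles_sensitive_data,
        cr.new_third_party_boundary,
    ])
```

Encoding the rule this way (rather than prose alone) lets you wire it into a release-gate check and makes "encouraged but not required" impossible to drift into.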

Step 2: Choose a method your teams can repeat

Pick one approach and document it so a facilitator can run it consistently. Common approaches include STRIDE-style brainstorming, abuse-case driven modeling, or attack-tree mapping. CIS does not require a specific method; your requirement is consistent execution and evidence. 1

Minimum required inputs:

  • System context diagram and/or data flow diagram
  • Trust boundaries
  • Asset inventory references (service name, repo, cloud account/project)
  • Data classification for key flows
  • Authentication/authorization approach and key dependencies

Step 3: Run a structured threat modeling session

Set expectations for attendance:

  • System owner/tech lead
  • Someone who knows identity and access flows
  • Security facilitator (or trained engineer if your program scales)
  • Optional: SRE/operations for deployment and monitoring realities

A practical agenda:

  1. Confirm diagrams and assumptions (what is true today vs planned).
  2. Identify entry points and trust boundaries (APIs, queues, admin consoles, CI/CD).
  3. Enumerate threats (spoofing, tampering, privilege abuse, data exposure, supply chain risks).
  4. Prioritize (likelihood × impact, qualitatively; define what “high” means for you).
  5. Decide mitigations and acceptance criteria (design controls + detection/response).
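For the prioritization step, a small qualitative matrix keeps scoring consistent across facilitators. A sketch under stated assumptions: the level weights and band cutoffs here are illustrative and should be calibrated to your own definition of “high.”

```python
# Hypothetical qualitative scale; calibrate weights and bands to your program.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def priority(likelihood: str, impact: str) -> str:
    """Map qualitative likelihood x impact to a priority band."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 6:       # e.g. high/high or high/medium
        return "high"
    if score >= 3:
        return "medium"
    return "low"
```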

Step 4: Convert threats into trackable requirements and work items

Your primary operational deliverable is traceability:

  • For each prioritized threat, assign a mitigation owner.
  • Create tickets in your normal system (Jira/Azure DevOps/GitHub Issues).
  • Reference the threat model record in each ticket.
  • Define “done” evidence (config change, code change, test, alert, runbook update).

A common hangup: threat models that end as PDFs. Auditors will ask what changed in engineering as a result. Make the backlog linkage mandatory. 1
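One way to make the backlog linkage checkable is a small traceability gate over the threat register. A minimal sketch; the `Threat` record shape and severity labels are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    # Illustrative register entry; field names are assumptions.
    description: str
    severity: str                                 # "high" | "medium" | "low"
    owner: str = ""
    tickets: list = field(default_factory=list)   # e.g. ["PROJ-123"]

def traceability_gaps(register: list) -> list:
    """High-severity threats that still lack an owner or a linked work item."""
    return [
        t.description
        for t in register
        if t.severity == "high" and (not t.owner or not t.tickets)
    ]
```

Running a check like this before sign-off turns “PDF-only” threat models into a visible gap list instead of an audit surprise.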

Step 5: Add governance (approval, exceptions, and residual risk)

Define who can accept residual risk when a mitigation is deferred. Keep it simple:

  • Security recommends severity and mitigations.
  • Product/engineering owns delivery timelines.
  • A named risk owner approves exceptions, with expiration and compensating controls.

If you already run an exception process (for vulnerabilities, firewall rules, or access), reuse it so you have one consistent risk acceptance record set. 1
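A risk acceptance record with a built-in expiry keeps exceptions from becoming permanent. A sketch only, assuming a simple record shape; align field names with your existing exception log.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    # Illustrative exception record; fields are assumptions.
    threat_id: str
    approver: str                  # a named risk owner, not a team alias
    compensating_controls: list
    expires: date                  # every exception carries an expiry

    def is_expired(self, today: date) -> bool:
        """Expired exceptions must be re-approved or remediated."""
        return today >= self.expires
```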

Step 6: Make it repeatable (cadence + change triggers)

Threat models become stale. Define refresh triggers:

  • Major architectural changes
  • Significant incidents or near-misses involving the system
  • Material changes in dependencies (identity provider change, new data store, new third party)

Tie refresh to an architecture review board or change advisory process so it happens without heroic effort. 1
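The refresh logic above can be reduced to a small staleness check fed by your coverage tracker. A minimal sketch; the 365-day fallback cadence is an assumption, since many programs pair event triggers with an annual floor.

```python
from datetime import date, timedelta

# Assumed fallback cadence; adjust to your standard.
REFRESH_AFTER = timedelta(days=365)

def needs_refresh(last_modeled: date, major_change: bool,
                  incident: bool, today: date) -> bool:
    """Refresh on a defined trigger, or when the fallback cadence elapses."""
    return major_change or incident or (today - last_modeled) > REFRESH_AFTER
```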

Required evidence and artifacts to retain

Auditors and internal reviewers will look for proof of both performance and outcomes. Keep these artifacts in a controlled repository (GRC system, wiki with access control, or SDLC evidence store):

Core artifacts 2

  • Threat modeling record (template output) including:
    • System name, owner, date, participants
    • Diagram(s): context + data flows + trust boundaries
    • Threat register: threat description, affected components, qualitative risk, mitigations
    • Residual risk decisions and approver
  • Meeting notes or workshop agenda and attendance evidence
  • Links to engineering tickets created from the model
  • Evidence of closure: merged PRs, config changes, test results, deployment notes

Program-level artifacts

  • Threat modeling standard/procedure and scope triggers
  • Training or enablement materials for facilitators
  • Coverage view (inventory of systems with last threat model date)
  • Exception/risk acceptance log tied to threat model items

Daydream can help by mapping Safeguard 16.14 to a documented control operation and recurring evidence capture, so you have consistent artifacts across teams and can answer “show me” requests quickly. 1

Common exam/audit questions and hangups

Use these as your pre-audit checklist:

  1. “Which systems are in scope, and how do you know you covered them?”
    Have an inventory list and a coverage report that ties to your service catalog. 1

  2. “Show me a recent threat model and the resulting changes.”
    Be ready with one strong example that includes tickets and shipped mitigations. 1

  3. “What triggers a refresh?”
    Point to your SDLC/change triggers and show at least one refresh example. 1

  4. “Who can accept risk, and where is it documented?”
    Produce the risk acceptance record with approvals and expiry expectations. 1

  5. “How do you handle third-party components?”
    Show integration threat modeling: key risks, compensating controls, monitoring, and contract/security review tie-ins.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Threat modeling happens only for “critical” apps with no definition.
    Why it fails: Coverage becomes arbitrary and hard to defend.
    Instead: Define triggers (internet exposure, sensitive data, auth changes) and enforce via release gates. 1
  • Mistake: Diagrams are missing or outdated.
    Why it fails: You can’t reason about trust boundaries.
    Instead: Make a minimal diagram a prerequisite; keep it versioned with the repo or architecture docs.
  • Mistake: Output is a document, not work.
    Why it fails: No measurable risk reduction.
    Instead: Require ticket creation and closure evidence for prioritized threats. 1
  • Mistake: Teams treat it as a one-time exercise.
    Why it fails: Models become stale after changes.
    Instead: Trigger refresh from change management and architecture review. 1
  • Mistake: Risk acceptance is informal (“we’ll accept it”).
    Why it fails: No accountability.
    Instead: Require named risk owner approval and a recorded decision.

Risk implications (why operators care)

Threat modeling is a force multiplier because it catches design flaws earlier than testing and incident response. If you cannot show a consistent process and evidence trail, you carry avoidable risk in high-impact areas: authentication design, authorization boundaries, data exposure paths, and third-party integration choke points. Assessors also treat missing evidence as a control failure even if teams do “some security thinking” informally. 1

Practical 30/60/90-day execution plan

First 30 days (establish the control)

  • Publish a one-page standard: scope triggers, roles, required artifacts, and where evidence is stored. 1
  • Create a threat model template and a facilitation checklist.
  • Pick a system inventory source of truth (service catalog, CMDB, or cloud/app inventory) and tag in-scope systems.
  • Run two pilot threat models: one internet-facing service, one internal data pipeline, to test the workflow.

Days 31–60 (make it repeatable)

  • Embed the “threat model required” check into SDLC intake (architecture review, change request form, or release checklist).
  • Train facilitators (security champions or platform architects) and set office hours.
  • Start a coverage tracker: system, owner, last model date, next trigger event.
  • Define and publish the risk acceptance workflow for deferred mitigations.

Days 61–90 (scale and prepare for assessment)

  • Expand to additional teams using the same template and evidence repository.
  • Sample-check quality: verify each threat model has tickets and closure evidence.
  • Run an internal audit-style walkthrough: pick a system and trace from diagram to threats to mitigations to deployment.
  • If evidence sprawl is an issue, centralize collection in Daydream so you can show consistent control operation and recurring evidence capture without chasing teams across tools. 1

Frequently Asked Questions

Do we need a specific threat modeling framework (STRIDE, PASTA, attack trees) to meet Safeguard 16.14?

CIS does not require a specific method; it expects you to conduct threat modeling and be able to show repeatable execution and evidence. Pick one approach and standardize inputs, outputs, and traceability. 1

What counts as “conducted” for audit purposes?

“Conducted” means there is a dated record with diagrams, identified threats, prioritized mitigations, and proof those mitigations were implemented or formally accepted as residual risk. If you cannot tie outputs to tickets or design decisions, expect pushback. 1

How do we handle agile teams that ship continuously?

Define threat modeling triggers around material changes (new endpoints, auth changes, sensitive data flows) and make the threat model a gate for those change types. For smaller changes, use lightweight updates to an existing model rather than a full workshop. 1

Do we need to threat model third-party SaaS tools?

You cannot model their internal code, but you should model your integration: data shared, auth model, admin access, failure modes, and compensating controls like logging and access restrictions. Keep the integration threat model with your system’s records. 1

What’s the minimum artifact set if teams refuse “documentation”?

Require a minimal diagram, a threat/mitigation table, and linked tickets with owners and closure evidence. Keep the format lightweight and consistent so teams can produce it without slowing delivery. 1

Who should approve residual risk from an unmitigated threat?

The approver should be a named risk owner with authority over the system and accountability for outcomes, typically a product or engineering leader with security input. Record the decision and any compensating controls in your exception log. 1

Footnotes

  1. CIS Controls v8; CIS Controls Navigator v8

  2. CIS Controls v8

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream