SA-11(2): Threat Modeling and Vulnerability Analyses
SA-11(2) requires you to ensure that developers perform threat modeling and vulnerability analyses during development and again during subsequent testing and evaluation, and to retain evidence that the work happened and drove fixes. To operationalize it, embed threat modeling into your SDLC gates and tie vulnerability analysis results to tracked remediation and release approvals. 1
Key takeaways:
- Make threat modeling a required SDLC activity with defined triggers (new system, major change, high-risk features).
- Run vulnerability analyses during build and during test/evaluation, and prove findings were triaged and fixed.
- Keep assessment-ready artifacts: models, findings, remediation tickets, retest results, and release sign-offs.
The SA-11(2) Threat Modeling and Vulnerability Analyses requirement is an engineering control that compliance owns operationally through governance: you must ensure developers do the work, at the right time, with the right scope, and you must be able to prove it in an assessment. SA-11(2) sits in the System and Services Acquisition (SA) family, so it applies even when development is performed by a third party, a cloud service provider, a systems integrator, or an internal product team.
For a CCO or GRC lead, the fastest path is to treat SA-11(2) as an SDLC “gate with receipts.” You define when threat models and vulnerability analyses are mandatory, what “done” looks like, who approves exceptions, how results feed remediation, and what evidence must be retained in the system security plan (SSP) package or equivalent control repository.
This page gives requirement-level, implement-now guidance: applicability, steps, artifacts, audit questions, common failure modes, and a practical execution plan you can hand to Engineering and Security without turning it into a research project.
Regulatory text
Requirement excerpt: “Require the developer of the system, system component, or system service to perform threat modeling and vulnerability analyses during development and the subsequent testing and evaluation of the system, component, or service that: […]” 1
Operator interpretation (what you must do):
- You must contractually and operationally require whoever develops your system (internal teams and third parties) to:
  - perform threat modeling during development,
  - perform vulnerability analyses during development, and
  - perform vulnerability analyses again during subsequent testing and evaluation activities. 1
What auditors care about is not the method you pick; it’s whether the activity is repeatable, scoped, performed at the right points, and produces actionable findings that are remediated or formally accepted.
Plain-English interpretation
SA-11(2) means: before you ship and while you test, developers must systematically think like an attacker (threat model) and systematically look for weaknesses (vulnerability analysis), then fix what they find or document risk acceptance. 1
Threat modeling answers: What can go wrong, how, and what controls prevent it?
Vulnerability analysis answers: What weaknesses exist in code, dependencies, configuration, design, or deployed components, and what is the plan to remediate and verify?
Who it applies to (entity and operational context)
This control commonly applies where you implement NIST SP 800-53 Rev. 5, including:
- Federal information systems and programs assessed against NIST controls. 2
- Contractor systems handling federal data, where your customer flows NIST requirements down to your environment and development practices. 2
Operationally, it applies to:
- New application development and major rewrites
- Infrastructure-as-code and cloud platform builds
- Commercial off-the-shelf (COTS) integration and configuration (you may not “develop” the code, but you develop the implementation)
- System components and services, including microservices, APIs, CI/CD pipelines, and shared libraries 1
Third-party development: If a third party builds, customizes, or significantly configures a system or component for you, your obligation becomes “require the developer” via contract clauses, statements of work, acceptance criteria, and delivery artifacts.
What you actually need to do (step-by-step)
1) Assign ownership and define scope (one owner, many contributors)
- Control owner: usually AppSec or Product Security; accountable executive: CISO/CTO; GRC is responsible for evidence and the assessor interface.
- Define the in-scope “systems/components/services” and map them to repos, CI pipelines, and product boundaries.
Practical tip: If you can’t enumerate the in-scope software units, you can’t prove coverage.
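One lightweight way to make coverage provable is to keep a machine-readable inventory of in-scope units and flag any unit that lacks a recorded threat model. A minimal sketch, assuming a simple list-of-dicts inventory (the unit names and field names here are illustrative, not from the control text):

```python
# Hypothetical in-scope inventory: each software unit maps to a repo and,
# once modeled, to a threat-model artifact ID. All values are examples.
inventory = [
    {"unit": "payments-api", "repo": "git/payments-api", "threat_model": "TM-2024-014"},
    {"unit": "admin-portal", "repo": "git/admin-portal", "threat_model": None},
]

def coverage_gaps(inventory):
    """Return the units that cannot demonstrate threat-model coverage."""
    return [u["unit"] for u in inventory if not u.get("threat_model")]

print(coverage_gaps(inventory))  # units with no recorded threat model
```

Running a check like this on every assessment cycle turns "can we prove coverage?" into a query instead of a scramble.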
2) Establish SDLC triggers (when threat modeling is required)
Define explicit triggers such as:
- New system or new externally exposed service
- Major architecture change (new data store, new auth pattern, new network boundary)
- New sensitive data flow
- High-risk features (admin functions, payments, identity, key management)
Make the triggers part of your engineering workflow (template in your ticketing system or as a required field in your change intake).
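The triggers above can be encoded directly in change intake so the gate fires automatically. A sketch under the assumption that your intake form captures trigger flags per change (the `flags` field and flag names are assumptions for illustration):

```python
# Illustrative trigger set mirroring the SDLC triggers defined in policy.
TRIGGER_FLAGS = {
    "new_system",
    "new_external_service",
    "major_architecture_change",
    "new_sensitive_data_flow",
    "high_risk_feature",
}

def threat_model_required(change: dict) -> bool:
    """A change requires a threat model if any defined trigger applies to it."""
    return bool(TRIGGER_FLAGS & set(change.get("flags", [])))
```

A change tagged `new_sensitive_data_flow` would return `True` and block approval until a model is attached; a copy-only UI change would pass through.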
3) Standardize the threat modeling method and minimum outputs
Pick a method Engineering will follow consistently (examples: STRIDE-based, data-flow-diagram based, abuse cases). The method is less important than repeatability and outputs.
Minimum outputs to require:
- System context and trust boundaries (diagram)
- Key assets and security objectives (confidentiality/integrity/availability)
- Threats and mitigations mapped to components
- Residual risks and required follow-up work items
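The minimum outputs above can double as a completeness check on each threat-model record, so "done" is testable rather than argued. A sketch using a dataclass; the field names are illustrative, not mandated anywhere:

```python
from dataclasses import dataclass, field

@dataclass
class ThreatModelRecord:
    """Minimum required outputs for one system; all field names are illustrative."""
    system: str
    diagram_ref: str  # system context and trust boundaries (diagram)
    assets: list = field(default_factory=list)              # assets + C/I/A objectives
    threat_mitigations: list = field(default_factory=list)  # threat -> mitigation -> component
    residual_risks: list = field(default_factory=list)
    followup_tickets: list = field(default_factory=list)    # required follow-up work items

    def is_complete(self) -> bool:
        # "Done" means every minimum output exists; residual risks may legitimately be empty.
        return all([self.system, self.diagram_ref, self.assets, self.threat_mitigations])
```

Reviewers then approve a record only when `is_complete()` holds, which keeps templates consistent across teams.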
4) Define “vulnerability analyses” across build and test/evaluation
To meet the “during development” and “subsequent testing and evaluation” language, implement vulnerability analysis in two bands:
During development (build-time):
- Static analysis and code scanning where applicable
- Dependency and container analysis for known vulnerable components
- Infrastructure-as-code scanning for misconfigurations
- Secure code review requirements for high-risk changes
During testing and evaluation (pre-release / post-build):
- Dynamic testing against running services (authenticated where relevant)
- Penetration-test style evaluation for major releases or high-risk systems
- Configuration review of deployed environment and runtime controls
You do not need every technique for every system, but you do need a defined baseline plus risk-based escalation.
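That "defined baseline plus risk-based escalation" can be written down as a small lookup table so the required activities per system are unambiguous. A sketch; the tier names and technique labels are assumptions chosen for this example:

```python
# Illustrative baseline: which analyses are mandatory in each band, per risk tier.
BASELINE = {
    "development": {
        "standard": ["sast", "dependency_scan"],
        "high_risk": ["sast", "dependency_scan", "iac_scan", "secure_code_review"],
    },
    "test_evaluation": {
        "standard": ["dast"],
        "high_risk": ["dast", "pen_test", "runtime_config_review"],
    },
}

def required_analyses(band: str, risk_tier: str) -> list:
    """Minimum analyses for a band ('development' or 'test_evaluation') and tier."""
    return BASELINE[band][risk_tier]
```

With the table in one place, an assessor question like "why no pen test here?" is answered by the system's tier, not by recollection.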
5) Wire results to remediation and release controls
This is where programs fail. Require:
- Findings are logged in a system of record (ticketing tool, vuln management platform)
- A defined triage process (severity, exploitability, exposure)
- Remediation SLAs or timeframes set by your risk program (your policy decision)
- Retest evidence prior to closure
- Release approval rules: what blocks a release, what requires exception approval
Decision matrix (example you can adopt):
| Finding type | In internet-facing scope? | Default release action | Exception path |
|---|---|---|---|
| Auth bypass / access control failure | Yes | Block release | Written risk acceptance by system owner + security sign-off |
| Dependency critical vulnerability | Depends on reachability | Block if reachable/exploitable | Compensating controls documented + fix scheduled |
| Low-risk informational | N/A | Do not block | Track for backlog grooming |
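The matrix above translates almost directly into a release-gate rule. A sketch, assuming finding records carry the illustrative keys shown (the action labels are examples, not from the control text):

```python
def release_action(finding: dict) -> str:
    """Default release action per the example matrix; keys and values are illustrative."""
    ftype = finding["type"]
    if ftype == "auth_bypass" and finding.get("internet_facing"):
        return "block"  # access control failures in internet-facing scope block release
    if ftype == "dependency_critical":
        # Block only when the vulnerable path is reachable/exploitable;
        # otherwise document compensating controls and schedule the fix.
        return "block" if finding.get("reachable") else "schedule_fix"
    if ftype == "informational":
        return "backlog"  # do not block; track for backlog grooming
    return "manual_triage"  # anything unclassified gets a human decision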
6) Flow down requirements to third parties
If a third party develops any part of the system:
- Put threat modeling and vulnerability analysis deliverables in the SOW
- Require delivery of artifacts (models, scan summaries, findings list, remediation status)
- Add acceptance criteria: “no open critical/high findings without approved exception”
- Add audit rights and evidence retention expectations
7) Make it assessable: map, store, and refresh evidence
Create a single control narrative for SA-11(2) with:
- The procedure (how it runs)
- The tools used
- The cadence/trigger logic
- The roles and approvals
- Where evidence lives and retention period (your program decision)
If you use Daydream, treat SA-11(2) as a control record that links directly to each system’s threat model, scan outputs, and remediation tickets so you can answer assessor requests without scrambling.
Required evidence and artifacts to retain
Store artifacts per system/component/service. Typical evidence set:
- SDLC policy or secure engineering standard that requires threat modeling and vulnerability analyses 1
- Threat model package:
- Architecture and data flow diagram with trust boundaries
- Threat enumeration and mitigation mapping
- Review/approval record and date
- Vulnerability analysis outputs (build-time):
- Scan configurations and execution logs (or reports)
- Findings export with timestamps
- Testing/evaluation vulnerability analysis outputs:
- DAST results, penetration test report, or evaluation notes
- Evidence the target was the right environment/version
- Remediation evidence:
- Tickets with severity, owner, due date, fix commit/PR links
- Retest/verification results
- Risk acceptance memos for deferred items, with approver and expiration
- Release gate evidence:
- Change record showing security approval or exception (if your process uses CAB/approvals)
Evidence quality rule: An auditor should be able to trace from “requirement exists” → “activity executed” → “findings handled” → “system shipped with known risks documented.”
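That traceability chain can itself be spot-checked mechanically before an assessment. A minimal sketch over a per-system evidence index; the link names are assumptions mapping to the four stages in the rule above:

```python
# Hypothetical per-system evidence index; each link holds an artifact reference or None.
CHAIN = ["requirement", "activity_record", "findings_disposition", "release_record"]

def missing_links(evidence: dict) -> list:
    """Return the broken links in requirement -> executed -> handled -> shipped."""
    return [link for link in CHAIN if not evidence.get(link)]
```

Run it per system during the internal dry-run; any nonempty result is an evidence gap to close before the assessor asks.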
Common exam/audit questions and hangups
Expect these questions:
- “Show me the threat model for this system and the last major change.”
- “What triggers a threat model update?”
- “How do you ensure developers, including third parties, perform vulnerability analysis during development?”
- “Show evidence of vulnerability analysis during testing and evaluation, not only code scanning.”
- “How are findings tracked to closure? Show retest.”
- “Who can accept risk, and how do you document exceptions?”
Common hangups:
- Threat models exist but are outdated relative to the deployed architecture.
- Scans run, but no one can prove triage and remediation decisions.
- Third-party deliverables are missing because the contract didn’t require them.
Frequent implementation mistakes and how to avoid them
- Treating threat modeling as optional documentation. Fix: make it a gate for defined triggers; no model, no release approval.
- Running one scanner and calling it “vulnerability analysis.” Fix: define a minimum set across code/dependencies/config, plus a testing/evaluation activity appropriate to the system.
- No linkage to remediation. Fix: require tickets, owners, and retest evidence. Evidence beats screenshots.
- Ignoring system components and services. Fix: include shared libraries, CI/CD, IaC modules, and managed services configuration in scope where they affect security outcomes. 1
- Third-party development with no flow-down. Fix: add SOW deliverables and acceptance criteria; require artifacts at milestones.
Enforcement context and risk implications
No public enforcement cases were provided in the supplied sources for this requirement, so this page does not list enforcement examples.
Risk implications still matter operationally:
- Weak or missing threat models correlate with design flaws that bypass downstream controls (authz gaps, trust boundary mistakes).
- Weak vulnerability analysis practices leave known weaknesses in code and dependencies that can become incidents, audit findings, and customer escalations.
For regulated customers, failure typically surfaces as an assessment deficiency: “control not implemented” or “implemented but not effective,” driven by missing artifacts and inconsistent execution.
Practical 30/60/90-day execution plan
First 30 days (stand up the control mechanics)
- Name control owner, define in-scope systems/components/services, and publish the SA-11(2) procedure.
- Add SDLC triggers and a threat model template (diagram + threat/mitigation table).
- Define minimum vulnerability analysis activities for development and for test/evaluation.
- Update third-party development contract templates/SOW language to require deliverables.
By 60 days (operate on real systems and produce evidence)
- Complete threat models for highest-risk systems first (internet-facing, sensitive data, privileged functions).
- Turn on or formalize vulnerability analyses (build-time and test/evaluation) and start storing reports centrally.
- Establish triage workflow and release gating rules; start capturing exception approvals.
By 90 days (prove repeatability and close gaps)
- Run an internal assessment dry-run: pick a system and trace artifacts end-to-end.
- Fix evidence gaps (missing approvals, missing retest proof, unclear scope boundaries).
- Stand up basic KPIs (coverage by system, backlog age by severity) using what your tools can actually report; don’t invent metrics you can’t sustain.
- Implement a recurring refresh mechanism: threat model updates tied to architecture changes and major releases.
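A KPI like backlog age by severity can be computed straight from a ticket export, with no new tooling. A sketch assuming each exported ticket carries `status`, `severity`, and an `opened` date (field names are assumptions):

```python
from datetime import date

def backlog_age_by_severity(tickets: list, today: date) -> dict:
    """Oldest open-finding age in days per severity, from a ticket export."""
    ages: dict = {}
    for t in tickets:
        if t["status"] != "open":
            continue  # closed findings don't count toward backlog age
        age_days = (today - t["opened"]).days
        ages[t["severity"]] = max(ages.get(t["severity"], 0), age_days)
    return ages
```

Reporting the oldest open item per severity is usually more actionable in a review than an average, because it surfaces the finding that has been stuck longest.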
Frequently Asked Questions
Does SA-11(2) require a specific threat modeling framework (STRIDE, PASTA, etc.)?
No specific method is mandated in the provided text; the requirement is that developers perform threat modeling during development and you can show the outputs and follow-through. Pick one approach and standardize templates so execution is consistent. 1
What counts as “vulnerability analyses” for SA-11(2)?
Treat it as systematic identification of weaknesses across code, dependencies, configuration, and deployed behavior, performed during development and again during testing/evaluation. Your evidence should show both phases occurred and produced tracked findings. 1
If we buy a SaaS product, are we still on the hook?
If the SaaS is a system service in your authorization boundary, you still need assurance that the developer performs threat modeling and vulnerability analyses. Practically, you meet this by contracting for security attestations and delivery artifacts, or by obtaining equivalent evidence through due diligence. 1
How do we handle “we can’t fix it before launch” findings?
Use documented risk acceptance with explicit approvers, scope, and a remediation plan, then retest once fixed. Auditors usually accept deferral only when the decision is recorded and governed, not informal.
How often do threat models need to be updated?
Tie updates to triggers: architecture changes, new trust boundaries, new sensitive data flows, and major releases. A fixed calendar cadence can miss the real risk driver, which is change.
What’s the minimum evidence set to satisfy an assessor quickly?
For one representative system, be ready to produce the threat model, vulnerability analysis outputs from development and testing/evaluation, remediation tickets with closures, and the release approval or exception record that references those results. 1
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream