Developer Testing and Evaluation | Threat Modeling and Vulnerability Analyses
To meet the Developer Testing and Evaluation requirement for Threat Modeling and Vulnerability Analyses, you must contractually require the system developer (including third parties) to perform threat modeling and vulnerability analysis during development and again during testing and evaluation, then collect evidence that the work occurred and was acted on. This is a supplier-deliverable control, not a purely internal SDLC aspiration.
Key takeaways:
- Put threat modeling and vulnerability analysis into developer SOWs, acceptance criteria, and test gates, not just policy.
- Demand artifacts: models, findings, remediation tracking, and retest results tied to versions/builds.
- Auditors look for repeatability: defined triggers, accountable owners, and proof findings changed the build.
SA-11(2) sits in the acquisition and development lane: you are responsible for ensuring the developer of your system, component, or service performs threat modeling and vulnerability analyses during development and during subsequent testing and evaluation. The practical implication for a CCO, GRC lead, or compliance owner is simple: if you cannot show that the developer did this work and you reviewed the outputs, you will struggle to justify that the requirement is met, even if your internal security team does great testing after the fact.
This requirement also changes how you manage third parties. Many FedRAMP-relevant systems are built from a mix of internal engineering and external providers (commercial software, managed services, open-source dependencies maintained by others, and contracted development shops). SA-11(2) pushes you to turn secure engineering into a deliverable: threat models and vulnerability analysis outputs need to be contractually required, scheduled at the right time in the SDLC, and linked to remediation and retesting before you accept the system into an authorized environment.
The rest of this page gives you requirement-level implementation steps you can put into your SDLC and third-party agreements immediately, plus the exact artifacts and audit talking points that typically matter.
Regulatory text
Requirement: “Require the developer of the system, system component, or system service to perform threat modeling and vulnerability analyses during development and the subsequent testing and evaluation of the system.” (NIST Special Publication 800-53 Revision 5)
What the operator must do
You must (1) identify who “the developer” is for each in-scope system/component/service, (2) require that party to perform threat modeling and vulnerability analysis at two points: during development and during later testing/evaluation, and (3) maintain evidence that the activities occurred and that findings were handled through remediation and retesting. The word “require” is the operative term; it implies enforceable expectations (contract language, SDLC gates, acceptance criteria) rather than optional best efforts.
Plain-English interpretation
SA-11(2) means: no build ships without a documented view of likely attacker paths (threat modeling) and a documented review of weaknesses (vulnerability analysis), plus proof you tested and evaluated the system with those findings in mind. If development is outsourced, this becomes a third-party deliverable with review and acceptance by your organization.
Threat modeling answers: What can go wrong, how, and where are the trust boundaries? Vulnerability analysis answers: Given the design and implementation, what weaknesses exist (or are likely), and how do we find and fix them? SA-11(2) expects both, and it expects them to happen early enough to influence design, not just at the end.
Who it applies to
Entity types: Cloud Service Providers and Federal Agencies.
Operational context: Any system, system component, or system service you are building, integrating, or acquiring where a “developer” exists. That includes:
- Internal engineering teams building application code, infrastructure-as-code, and platform components.
- Third-party developers (contracted development, product vendors delivering custom modules, integrators building deployments).
- Teams producing “system services” (for example, shared identity, logging pipelines, CI/CD services) if they are developed rather than purely configured.
A common scoping mistake is to treat this as “appsec for the main app only.” If a component is security-relevant (authn/z, key management, update mechanism, admin interface), treat it as in-scope by default.
What you actually need to do (step-by-step)
1) Name the developer(s) and bind responsibility
Create a simple RACI per system: who is the developer, who approves threat models, who owns remediation, and who can grant an exception. Do this for internal and third-party developers. If you cannot name “the developer” clearly, you cannot “require” anything consistently.
Operational tip: For third parties, “developer” can mean the product team of a software provider, a contracted engineering firm, or an internal dev team delivering code that runs in your boundary. Map it explicitly in your supplier inventory and system boundary documentation.
2) Put the requirement into contract language and acceptance gates
For third-party development, add clauses to SOW/MSA/purchase terms that require:
- Threat modeling during design/development (minimum: per major feature or architectural change).
- Vulnerability analysis during development (secure design review, code-focused analysis, dependency risk review).
- Vulnerability analysis during subsequent testing and evaluation (for example, security test execution and results).
- Delivery of artifacts (see “Evidence” section) tied to a specific release/build.
- Remediation SLAs or prioritization rules you define, plus retest expectations.
- Your right to review results, request clarification, and reject deliverables if security gates fail.
For internal development, implement the same requirements as SDLC gates (Definition of Done / release criteria), enforced by change control and CI/CD policies.
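As a sketch only (the control text does not prescribe tooling), an internal SDLC gate like this can be expressed as a simple pipeline check that blocks release approval when required deliverables are missing for a build. The artifact filenames here are hypothetical, not a standard:

```python
# Hypothetical CI/CD gate: fail the release step if the required SA-11(2)
# deliverables are not attached to the build under review.
# Artifact names are illustrative only.
REQUIRED_ARTIFACTS = {
    "threat_model.md",          # threat model for this release
    "vuln_analysis_dev.md",     # vulnerability analysis during development
    "vuln_analysis_test.md",    # vulnerability analysis during test/evaluation
}

def release_gate(artifacts_present):
    """Return (passed, missing) for the set of artifact filenames on a build."""
    missing = sorted(REQUIRED_ARTIFACTS - set(artifacts_present))
    return (len(missing) == 0, missing)

# Example: a build missing the test-phase analysis fails the gate.
passed, missing = release_gate({"threat_model.md", "vuln_analysis_dev.md"})
# passed is False; missing == ["vuln_analysis_test.md"]
```

The same check works for third-party deliverables: run it at acceptance time against the artifacts the supplier actually submitted.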
3) Define triggers: when threat modeling and vulnerability analyses must occur
Write down objective triggers auditors can understand. Examples:
- New system or new major component.
- New external interface (API, admin portal, webhook).
- New authentication flow, authorization model, or privilege boundary.
- Significant data flow change (especially regulated data).
- New cryptography, key storage, or secrets handling pattern.
- Major dependency/platform change (runtime, container base image, service mesh).
The audit hangup is “it depends.” You can keep discretion, but you must document the triggers so the process is repeatable.
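One way to keep that discretion documented and repeatable is to encode the triggers as data, so the decision "does this change require a threat model update?" is answered the same way every time. The trigger names and change-tag vocabulary below are hypothetical, chosen to mirror the example triggers above:

```python
# Illustrative: documented triggers encoded as data, mirroring the bullet
# list above. Tag names are hypothetical, not from the control text.
TRIGGERS = {
    "new_component",            # new system or major component
    "new_external_interface",   # API, admin portal, webhook
    "auth_change",              # authn/z flow or privilege boundary
    "data_flow_change",         # significant data flow change
    "crypto_change",            # cryptography, keys, or secrets handling
    "major_dependency_change",  # runtime, base image, service mesh
}

def requires_threat_model(change_tags):
    """Return (required, matched_triggers) for a change's tags."""
    hits = sorted(TRIGGERS & set(change_tags))
    return (len(hits) > 0, hits)
```

A change tagged only with cosmetic work matches no trigger and can skip the update; a change that touches authentication matches `auth_change` and cannot.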
4) Standardize the deliverables (templates) so outputs are reviewable
Create templates that force consistency across teams and third parties:
Threat model minimum content
- System overview and assumptions.
- Trust boundaries and data flows (diagram required).
- Assets and attacker goals.
- Threat enumeration (methodology is your choice; consistency matters).
- Mitigations mapped to threats.
- Residual risk and explicit acceptance where mitigations are deferred.
Vulnerability analysis minimum content
- What was analyzed (repos, services, IaC modules, container images).
- Methods used (design review, static analysis, dependency analysis, manual review).
- Findings list with severity/priority, affected components, and recommended fixes.
- Evidence of triage decisions (fix, mitigate, accept) and owner assignment.
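To make those minimums checkable rather than aspirational, the template can be backed by a lightweight completeness check at intake. This is a sketch under assumed field names that mirror the bullets above; it is not a standard schema:

```python
from dataclasses import dataclass

# Illustrative minimum-content check for a vulnerability analysis deliverable.
# Field names mirror the template bullets above; they are hypothetical.

@dataclass
class VulnAnalysis:
    scope: list        # what was analyzed: repos, services, IaC, images
    methods: list      # design review, static analysis, manual review, ...
    findings: list     # each with severity, affected component, recommended fix
    triage_owner: str  # who owns triage decisions

    def is_reviewable(self):
        """Scope, methods, and an accountable owner must be present before
        the deliverable is accepted for review. An empty findings list is
        valid: a clean analysis is still an analysis."""
        return bool(self.scope and self.methods and self.triage_owner)
```

Rejecting incomplete deliverables at intake is cheaper than discovering the gap during an assessment.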
5) Connect findings to remediation, retesting, and release decisions
SA-11(2) is often failed in practice because findings exist but do not influence release. Create a single workflow:
- Findings logged in a system of record (ticketing, GRC, or vulnerability management).
- Each finding tied to a version/build and an owner.
- Decision recorded (fix now, fix later with compensating controls, or risk accept).
- Retest evidence for fixed items.
- Release approval references the threat model and vulnerability analysis completion.
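The last step of that workflow, release approval, can be sketched as a disposition check: every finding tied to the build must carry a recorded decision before approval. The status vocabulary here is assumed for illustration, not taken from the control text:

```python
# Illustrative release check: every finding tied to this build must be in a
# recorded end state. Status names are hypothetical.
ACCEPTED_STATES = {
    "fixed_retested",  # fixed, with retest evidence
    "mitigated",       # fix deferred with compensating controls
    "risk_accepted",   # explicit, approved risk acceptance
}

def can_release(findings):
    """findings: list of dicts with 'id' and 'status'.
    Returns (ok, blocker_ids): ok is False while any finding lacks a
    recorded disposition."""
    blockers = [f["id"] for f in findings if f["status"] not in ACCEPTED_STATES]
    return (len(blockers) == 0, blockers)
```

An open finding blocks the release; a risk-accepted one does not, because the decision and approver are already on record.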
Where Daydream fits naturally: If you manage many third parties and multiple engineering teams, Daydream can track which suppliers owe which security deliverables, collect threat modeling and vulnerability analysis artifacts, and keep an audit-ready trail that ties each artifact to a system, release, and approval.
6) Review and challenge: your organization must actively evaluate outputs
“Require the developer” does not mean “file it away.” Assign a reviewer (security architect, appsec lead, or delegated GRC reviewer) to:
- Confirm the threat model matches the real architecture and current data flows.
- Spot missing trust boundaries (admin paths, CI/CD, logging/telemetry, backup/restore).
- Validate that mitigations are implemented or tracked.
- Check that vulnerability analysis is scoped to the real attack surface, not just a tool report.
Document the review outcome and required follow-ups.
Required evidence and artifacts to retain
Keep artifacts tied to a specific system/component/service and release:
- Contractual requirement evidence
- SOW/MSA clauses or internal SDLC policy/gate language requiring threat modeling and vulnerability analysis.
- Threat modeling artifacts
- Threat model document(s) and data flow/trust boundary diagrams.
- Meeting notes or sign-off showing review and approval.
- Mapping of threats to mitigations and residual risk decisions.
- Vulnerability analysis artifacts
- Vulnerability analysis report(s) during development (design/code/dependency focus).
- Vulnerability analysis outputs during testing and evaluation (security test results, triage records).
- Tool outputs are acceptable only if contextualized: scope, configuration, and interpretation.
- Remediation and retest trail
- Ticket list with statuses, owners, and dates.
- Retest results or evidence of verification.
- Risk acceptance records for deferred issues, with approver and rationale.
- Release gating evidence
- Change control record or release checklist referencing completion of the activities.
Common exam/audit questions and hangups
- “Show me where you require developers to do threat modeling and vulnerability analysis.” Auditors want enforceable language: contract clauses or SDLC gates.
- “How do you know the threat model is current?” Expect scrutiny around architectural changes and new interfaces.
- “Prove the vulnerability analysis happened during development, not just right before go-live.” Versioned artifacts and dated approvals help.
- “How do findings impact release?” If you cannot show a gate or documented risk acceptance, the process looks optional.
- “What about third-party components?” You need a position: require deliverables, assess supplier outputs, or document compensating controls and acceptance.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating threat modeling as a one-time exercise. Fix: Define triggers and require updates on major changes; link to change management.
- Mistake: Substituting scanning outputs for “vulnerability analysis.” Fix: Keep tool results, but also produce a human-reviewed analysis with scope and conclusions.
- Mistake: No contractual teeth for third-party developers. Fix: Add deliverables, rejection rights, and acceptance criteria tied to security artifacts.
- Mistake: Findings tracked in too many places. Fix: Use one system of record and reference it from the threat model and test evidence.
- Mistake: No evidence of review. Fix: Record reviewer sign-off, questions raised, and decisions made.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite any. Practically, SA-11(2) gaps create predictable risk: blind spots at trust boundaries, late discovery of design flaws, and third-party development that ships without security expectations. In authorization and assessment contexts, these gaps often surface as missing artifacts, inconsistent SDLC gates, or inability to demonstrate that the developer performed the required analyses (NIST Special Publication 800-53 Revision 5).
Practical 30/60/90-day execution plan
First 30 days (Immediate)
- Inventory in-scope systems/components/services and identify the developer for each.
- Add SA-11(2) deliverable language to your standard third-party development SOW template.
- Publish internal SDLC gate language: threat model + vulnerability analysis required before release approval.
- Create templates for threat models and vulnerability analyses; assign reviewers.
Next 60 days (Near-term)
- Pilot on one high-impact system: produce threat model, run vulnerability analysis during development, and repeat during testing/evaluation.
- Stand up the workflow: findings intake, triage, remediation ownership, retest evidence, and risk acceptance.
- Train engineering and procurement on the new contract and gate requirements.
By 90 days (Operationalize)
- Expand coverage to remaining in-scope systems and key third parties.
- Add checks to release/change processes so artifacts are required for approvals.
- Run an internal audit: select recent releases and confirm the full evidence chain exists end-to-end.
- Centralize collection and tracking (for example, in Daydream) so you can answer assessor questions without chasing teams.
Frequently Asked Questions
Does SA-11(2) apply if we buy a commercial product and don’t modify code?
It can, if the “system component or system service” is developed by a third party and is part of your authorized boundary. If you cannot get developer-performed artifacts, document your compensating approach (your own threat modeling and vulnerability analysis) and record the risk decision.
What counts as “vulnerability analysis” versus routine vulnerability scanning?
Scanning outputs can be inputs, but the expectation is an analysis with defined scope, methods, and conclusions that inform remediation and retesting. Keep evidence that the developer interpreted results and made engineering decisions.
Who should approve a threat model?
Assign an accountable security reviewer (security architect, appsec lead, or designated control owner) and require a recorded sign-off. For third parties, require delivery plus your internal approval before acceptance.
How do we handle agile teams shipping frequently?
Use triggers and tiered threat modeling: update the model for material changes, and require vulnerability analysis integrated into CI/CD with a clear exception process. Keep versioned artifacts tied to release tags.
Can we accept risk instead of fixing every finding?
Yes, but the decision must be explicit, approved by the right authority, and tied to compensating controls where applicable. Auditors look for a consistent risk acceptance workflow rather than informal waivers.
What’s the minimum evidence assessors usually ask for?
A threat model with diagrams, a vulnerability analysis report from development, a second set of results from testing/evaluation, and a remediation/retest trail tied to a release. They also ask where the requirement is imposed (contract or SDLC gate).
Authoritative Sources
- NIST Special Publication 800-53 Revision 5, SA-11(2): Developer Testing and Evaluation | Threat Modeling and Vulnerability Analyses.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream