SA-15(3): Criticality Analysis
SA-15(3) requires you to formally require the developer of your system, component, or service to perform a criticality analysis, then use the results to drive security design, testing priorities, and supply chain risk decisions. To operationalize it quickly, define “developer,” set analysis scope and depth, embed it in SDLC and third-party deliverables, and retain evidence.
Key takeaways:
- You must make criticality analysis a developer deliverable, not an internal afterthought.
- Treat the output as a living input to architecture, secure engineering, and supplier controls across the lifecycle.
- Audit success depends on contract language, repeatable procedure, and retained artifacts tied to releases and changes.
SA-15(3): Criticality Analysis sits in the System and Services Acquisition (SA) family, so it is about how you govern development and supply, not just how you operate a production system. The requirement is simple on its face: you must require the developer to perform a criticality analysis. In practice, teams fail this control for two related reasons: they cannot prove they made it a formal, repeatable developer obligation, and they cannot show how the analysis changed engineering and risk decisions.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat SA-15(3) like a procurement and SDLC control with clear acceptance criteria. You decide who qualifies as “the developer” (internal engineering, a systems integrator, a SaaS provider building custom features, an OEM building a component), define the required analysis output, and enforce it via contracts/SoWs and release gates. Then you store the evidence in a place assessors can follow end-to-end: requirement → deliverable → review → decisions → implementation.
This page gives requirement-level implementation guidance on the SA-15(3) criticality analysis requirement that you can put into a control procedure and operationalize immediately.
Regulatory text
Control requirement (excerpt): “Require the developer of the system, system component, or system service to perform a criticality analysis:”
What the operator must do:
You must (1) identify who the “developer” is for the in-scope system/component/service, (2) make criticality analysis an explicit requirement for that developer, and (3) collect and act on the analysis as part of your acquisition and development governance. The control is satisfied by demonstrable obligation plus demonstrable performance and review, not by informal discussion.
Plain-English interpretation
A criticality analysis identifies which parts of a system matter most to mission/business outcomes and security objectives, and therefore deserve the strongest protections, the most scrutiny, and the tightest supply chain controls. Under SA-15(3), you are not only allowed to ask for this analysis; you are required to mandate it from whoever builds the system or component for you.
If you operationalize it well, criticality becomes a decision input:
- Which components are “crown jewels” and must be hardened first
- Which dependencies (libraries, services, hardware) require stricter provenance and monitoring
- Which test depth and assurance activities are mandatory before release
Who it applies to
Entity scope
- Federal information systems implementing NIST SP 800-53 controls.
- Contractor systems handling federal data where NIST SP 800-53 is flowed down through contractual or program requirements.
Operational context (when it triggers)
Apply SA-15(3) when you:
- Build or significantly change a system (new application, major refactor, cloud migration)
- Acquire a system, system component, or system service from a third party (including managed services and SaaS where the provider develops and maintains the service)
- Introduce or change critical dependencies (identity provider, key management, payment processor, logging pipeline, container base images)
If engineering is internal, “developer” still applies; you enforce it through your SDLC policy and release governance. If engineering is external, you enforce it through the contract/SoW plus acceptance criteria.
What you actually need to do (step-by-step)
Step 1: Assign ownership and define “developer”
- Name a control owner (often AppSec, Engineering Governance, or Third-Party Risk) and a compliance owner (GRC) responsible for evidence quality.
- Define developer categories: internal dev teams, systems integrators, SaaS providers delivering an in-scope service, OEMs building components.
Deliverable: “SA-15(3) applicability and developer designation” note per system/component/service.
Step 2: Define the criticality analysis standard (your minimum bar)
Create a one-page Criticality Analysis Standard that answers:
- Scope: system boundary, major subsystems, data flows, trust boundaries
- Criticality criteria: mission impact, confidentiality/integrity/availability impact, privilege level, exposure (internet-facing vs internal), blast radius, recoverability dependencies
- Component taxonomy: identity/auth, crypto/key management, data stores, control plane, build pipeline, monitoring/telemetry, admin interfaces, third-party services, firmware/hardware where relevant
- Output format: a ranked component list with rationale and required controls per tier
Keep it simple enough that developers can execute, but specific enough that auditors can judge consistency.
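One way to make the tiering repeatable is a simple scoring rule that developers and reviewers apply the same way every time. The criteria names, weights, and thresholds below are illustrative assumptions for a sketch, not anything SA-15(3) prescribes:

```python
# Illustrative tiering rule for a Criticality Analysis Standard.
# Criteria, weights, and thresholds are examples only.
from dataclasses import dataclass


@dataclass
class Component:
    name: str
    mission_impact: int   # 1 (low) .. 3 (high)
    cia_impact: int       # 1 .. 3, worst of confidentiality/integrity/availability
    privilege_level: int  # 1 .. 3 (3 = admin or control plane)
    internet_facing: bool


def tier(c: Component) -> str:
    score = c.mission_impact + c.cia_impact + c.privilege_level
    if c.internet_facing:
        score += 1  # exposure raises criticality
    if score >= 8:
        return "Tier 1"
    if score >= 5:
        return "Tier 2"
    return "Tier 3"


components = [
    Component("identity-provider", 3, 3, 3, True),
    Component("marketing-site", 1, 1, 1, True),
]
for c in sorted(components, key=tier):
    print(c.name, tier(c))
```

The point of encoding the rule, rather than leaving tiering to judgment per system, is the consistency auditors look for: two teams scoring the same component should land on the same tier.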
Step 3: Flow down the requirement into contracts and SDLC gates
For third parties, include in:
- SoW/contract clause: “Developer shall perform and deliver criticality analysis for the system/component/service and updates upon major changes.”
- Acceptance criteria: delivery required before design sign-off and before production go-live.
- Right to review: your organization reviews and approves analysis and updates.
For internal teams, implement:
- SDLC gate: architecture/design review cannot close without the criticality analysis artifact.
- Change gate: major change tickets require an updated analysis.
Tip: Make the requirement event-driven (new system, major change, new dependency) so it stays current without requiring arbitrary calendar churn.
Step 4: Execute the analysis with the developer and validate it
Ask the developer to produce an analysis that includes:
- Component inventory for the scoped boundary
- Criticality ranking (e.g., High/Medium/Low or Tier 1/2/3)
- Justification per component
- Mapped security expectations per tier (more testing, stricter code review, stronger access controls, higher logging)
Your validation checks:
- Does it identify the real high-impact parts (identity, keys, admin paths, build pipeline)?
- Does it include third-party dependencies that could fail or be tampered with?
- Are the “most critical” components tied to concrete actions (design controls, testing depth, monitoring, supplier constraints)?
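The validation checks above can be partly automated. This sketch assumes the developer delivers the analysis as a list of records; the field names and keyword list are hypothetical, chosen to mirror the checks in this step:

```python
# Sketch of automated sanity checks on a delivered criticality analysis.
# Field names ("name", "tier", "justification", "controls") and the
# high-impact keyword list are illustrative assumptions.
REQUIRED_FIELDS = {"name", "tier", "justification", "controls"}
HIGH_IMPACT_KEYWORDS = {"identity", "key", "admin", "build", "pipeline"}


def validate(analysis: list[dict]) -> list[str]:
    """Return a list of findings; an empty list means the checks passed."""
    findings = []
    for comp in analysis:
        missing = REQUIRED_FIELDS - comp.keys()
        if missing:
            findings.append(f"{comp.get('name', '?')}: missing {sorted(missing)}")
        elif comp["tier"] == "Tier 1" and not comp["controls"]:
            # A top-tier component with no required controls is shelfware.
            findings.append(f"{comp['name']}: Tier 1 with no required controls")
    names = " ".join(c.get("name", "").lower() for c in analysis)
    if not any(k in names for k in HIGH_IMPACT_KEYWORDS):
        findings.append("no identity/key/admin/build components identified")
    return findings
```

Automated checks only catch structural gaps; a security or architecture reviewer still has to judge whether the rankings and justifications are credible.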
Step 5: Convert results into enforceable engineering actions
Criticality analysis is only defensible if it changes work. Translate it into:
- Security requirements: e.g., “Tier 1 components require MFA for admin access, signed artifacts, and strict change control.”
- Testing requirements: e.g., threat modeling depth, code scanning coverage, manual review expectations for Tier 1
- Supplier controls: stricter due diligence for third parties supporting Tier 1 functions; evidence requests and contract addenda
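A tier-to-requirements mapping like the one these bullets describe can be kept as data so gates and ticket templates all read the same source. The specific requirement strings below are examples, not a prescribed baseline:

```python
# Illustrative mapping from criticality tier to enforceable engineering
# actions. Requirement text is an example, not a mandated control set.
TIER_REQUIREMENTS = {
    "Tier 1": [
        "MFA for admin access",
        "signed build artifacts",
        "threat model reviewed each major release",
        "manual code review for every change",
    ],
    "Tier 2": ["code scanning in CI", "peer review for changes"],
    "Tier 3": ["code scanning in CI"],
}


def requirements_for(tier: str) -> list[str]:
    # Fail closed: an unrecognized tier gets the strictest requirements
    # until a reviewer classifies it.
    return TIER_REQUIREMENTS.get(tier, TIER_REQUIREMENTS["Tier 1"])
```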
Step 6: Operationalize updates
Define update triggers:
- Major architecture change
- New privileged pathway or admin interface
- Key management or identity provider change
- New third-party dependency providing critical function
- Material data classification change
Tie triggers to your change management process so updates are automatic, not optional.
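If your change tickets carry labels, the triggers above reduce to a set-membership check that intake tooling can run automatically. The label names are hypothetical; use whatever taxonomy your change management system already has:

```python
# Sketch of event-driven update triggers wired into change intake.
# Label names mirror the trigger list above and are illustrative.
UPDATE_TRIGGERS = {
    "major-architecture-change",
    "new-admin-interface",
    "key-management-change",
    "identity-provider-change",
    "new-critical-dependency",
    "data-classification-change",
}


def requires_updated_analysis(change_labels: set[str]) -> bool:
    """True if any label on the change ticket matches an update trigger."""
    return bool(change_labels & UPDATE_TRIGGERS)
```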
Step 7: Evidence packaging for audits (make the story easy to follow)
Bundle evidence by system:
- Contract/SoW (or SDLC policy) requiring criticality analysis
- Latest criticality analysis
- Review/approval record
- Resulting security requirements and tickets
- Proof those tickets shipped (release notes, change approvals)
If you use Daydream to manage third-party risk and control evidence, map SA-15(3) to a named owner, an implementation procedure, and recurring evidence artifacts so you can answer assessor questions without reconstructing history from tickets and email.
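A per-system completeness check over that bundle makes gaps visible before an assessor finds them. The artifact keys below mirror the bundle list and are illustrative assumptions about how you store the evidence:

```python
# Sketch of an evidence-completeness check per system, mirroring the
# bundle above. Artifact keys are illustrative.
EVIDENCE_BUNDLE = [
    "developer_obligation",   # contract/SoW clause or SDLC policy
    "criticality_analysis",   # latest version, tied to the release
    "review_approval",        # security/architecture sign-off
    "derived_requirements",   # security requirements and tickets
    "implementation_proof",   # release notes, change approvals
]


def missing_evidence(artifacts: dict[str, str]) -> list[str]:
    """Return the bundle items that have no stored artifact reference."""
    return [k for k in EVIDENCE_BUNDLE if not artifacts.get(k)]
```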
Required evidence and artifacts to retain
Keep artifacts in an audit-friendly repository (GRC tool, secure wiki, or evidence vault) and link them to the system boundary.
Minimum evidence set:
- Policy/procedure that defines criticality analysis, scope, and triggers
- Developer obligation: contract clause/SoW deliverable or internal SDLC requirement
- Completed criticality analysis per system/component/service
- Review and approval record (security/architecture sign-off)
- Derived actions: security requirements, architecture decisions, test plans tied to criticality
- Change records showing updates when triggers occurred
Common exam/audit questions and hangups
Auditors/assessors commonly press on:
- “Who is the developer?” If you have multiple builders (internal team plus integrator plus SaaS), show how obligations are allocated.
- “Show me the requirement.” They will ask for the contract clause or SDLC gate wording. The artifact must be explicit.
- “Show me the analysis.” They will look for a consistent method, not a one-off spreadsheet.
- “What did you do differently because of it?” Expect to show downstream tickets, design changes, or test scope changes tied to critical components.
- “How do you keep it current?” You need triggers and change linkage.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | Fix |
|---|---|---|
| Treating criticality as “asset criticality” only | SA-15(3) focuses on the system/component/service developer and development outcomes | Rank components and dependencies inside the system boundary, not just the system as a whole |
| No contract/SoW flow-down for third parties | You cannot prove you “required the developer” to do it | Add explicit deliverables and acceptance criteria before go-live |
| Analysis exists but doesn’t drive action | Assessors view it as shelfware | Tie each Tier 1 item to required controls and test depth, then show completed work |
| One-time analysis with no update triggers | Becomes stale after architecture changes | Put triggers into change management and design review checklists |
| Over-scoping into an unfinishable exercise | Teams stall and produce nothing | Start with top-level components and refine only for Tier 1/most critical paths |
Enforcement context and risk implications
No public enforcement cases are cited here for SA-15(3). Treat this as an assessment-readiness and risk-reduction control: the practical risk is that high-impact components and dependencies receive the same assurance level as low-impact ones, raising the chance of security failures in privileged pathways, key management, identity, build integrity, and other high-blast-radius areas.
Practical 30/60/90-day execution plan
Use phases rather than date promises. The goal is to stand up a repeatable control, then expand coverage.
First 30 days (stand up the control)
- Assign control owner and define “developer” categories for your environment.
- Draft the Criticality Analysis Standard (template + minimum fields).
- Update one contract/SoW template clause (or SDLC policy gate) to require the analysis deliverable.
- Pilot on one in-scope system with an active change or upcoming release.
Days 31–60 (operationalize and integrate)
- Integrate the requirement into architecture review and change management intake.
- Train security/architecture reviewers on how to accept/reject analyses.
- Build an evidence checklist per system so audits do not turn into archaeology.
- Expand to your highest-risk systems and third parties that provide critical services.
Days 61–90 (scale and harden)
- Make criticality tiers drive test depth, supplier diligence, and monitoring requirements.
- Add update triggers to change templates and third-party onboarding workflows.
- Conduct a table-top review: pick a Tier 1 component and prove you can trace from analysis → controls → testing → release evidence.
- Implement centralized tracking (Daydream or equivalent) so SA-15(3) evidence is complete and consistently packaged.
Frequently Asked Questions
Does SA-15(3) apply if we develop everything internally?
Yes. You still must “require the developer” to perform the analysis; for internal development, the requirement is enforced through SDLC policy, architecture gates, and release approvals.
What counts as a “system component” for criticality analysis?
Use a practical definition: major subsystems, privileged services, security control points (identity, keys, logging), and third-party dependencies inside your system boundary. Keep it consistent across systems so reviewers can compare results.
How detailed does the criticality analysis need to be?
Detailed enough to rank components and drive specific security and testing actions. If the analysis cannot justify why a component is Tier 1 and what extra controls it gets, it will not support assessment or engineering decisions.
How do we handle SaaS providers where we cannot see internal architecture?
Require the provider to deliver a criticality analysis at the service-component level they can support (control plane vs data plane, admin paths, key management, logging), plus how those critical areas are protected. Make it a contractual deliverable with review rights.
What is the minimum evidence an assessor will accept?
A documented requirement imposed on the developer (contract clause or SDLC gate), the completed analysis, and proof of review and downstream actions. If any one of those is missing, the control commonly fails in practice.
How often should we update the analysis?
Update it when meaningful change happens: architecture shifts, new critical dependencies, new privileged access paths, identity or key management changes, or material data sensitivity changes. Tie updates to change management so you do not depend on memory.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream