Development Process, Standards, and Tools | Criticality Analysis

To meet the Development Process, Standards, and Tools | Criticality Analysis requirement, you must contractually require your system developer (including third-party builders) to perform a criticality analysis at a defined breadth and depth and at specific SDLC decision points, then use the results to drive design, security controls, testing, and supply-chain decisions (NIST Special Publication 800-53 Revision 5).

Key takeaways:

  • Write down your “breadth, depth, and decision points” and make them mandatory in developer SOWs.
  • Treat criticality analysis as an engineering gate that affects architecture, threat modeling, test scope, and component sourcing.
  • Keep evidence that the analysis happened at the required points and that findings changed requirements, design, or acceptance criteria.

NIST SP 800-53 Rev. 5 SA-15(3) is easy to misunderstand because it sounds like a purely technical exercise. It is a governance requirement aimed at preventing “unknown critical components” from entering your system without proper engineering scrutiny. The control enhancement expects you to (1) define what “criticality analysis” means for your environment, (2) define when it must occur in your SDLC, and (3) require the developer to execute it and produce usable outputs.

For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing SA-15(3) is to treat it like a procurement and SDLC gating requirement. Your job is to make the expectations explicit, ensure developers perform the analysis at the right times, and ensure engineering and security teams can show auditors a clear trail: critical components identified, rationale documented, risks assessed, mitigations assigned, and decisions tracked through to implementation and testing.

This page gives requirement-level guidance you can drop into your SDLC, secure development policy, and third-party development contracts, with a focus on the evidence auditors ask for most.

Regulatory text

Requirement (verbatim): “Require the developer of the system, system component, or system service to perform a criticality analysis at organization-defined breadth and depth and at organization-defined decision points in the system development life cycle.” (NIST Special Publication 800-53 Revision 5)

Operator meaning: You must define:

  1. Breadth and depth of the analysis (what is in scope and how rigorous it must be), and
  2. Decision points in the SDLC (when the analysis must be performed or refreshed), and
  3. A mechanism to require the developer to do it (policy + contractual language + SDLC gates).

Auditors will look for two things: (a) the analysis artifacts exist at the defined points, and (b) the results influenced engineering decisions (requirements, design, sourcing, security controls, and testing).

Plain-English interpretation (what SA-15(3) is really asking for)

A criticality analysis is a structured way to identify the parts of your system that are most essential to confidentiality, integrity, and availability, and most likely to cause outsized harm if they fail, are compromised, or are tampered with. SA-15(3) requires you to force that thinking into the development process, rather than discovering “critical” components after an incident or late in an assessment (NIST Special Publication 800-53 Revision 5).

In practice, this means you define what counts as “critical” (data flows, identity, encryption, privileged paths, control plane, build pipeline, update mechanism, key dependencies), then you require developers to analyze those areas early and at change points, and to document risk-driven decisions.

Who it applies to (entity + operational context)

Applies to:

  • Cloud Service Providers building or operating systems assessed against FedRAMP-aligned baselines.
  • Federal agencies that build systems in-house or contract for development (NIST Special Publication 800-53 Revision 5).

Operational contexts that trigger it:

  • New system builds, major feature releases, and re-architecture work.
  • New or changed system components (e.g., auth service, logging pipeline, encryption module).
  • New or changed system services (e.g., managed databases, CI/CD services, external identity providers).
  • Third-party development (integrators, contractors, product vendors) where you don’t directly control day-to-day engineering, but you still must require and evidence the analysis.

What you actually need to do (step-by-step)

Step 1: Define “criticality analysis” for your environment (breadth + depth)

Create a one-page standard that answers:

  • Breadth (what is in scope): at minimum include system components/services that handle sensitive data, identity and access, cryptographic key management, administrative interfaces, control plane functions, audit logging, backup/restore, update mechanisms, and CI/CD supply chain.
  • Depth (how rigorous): specify the minimum analysis content, such as:
    • Component/service description and boundaries
    • Data classification and key data flows
    • Trust boundaries and privilege paths
    • Dependency map (including third parties and open-source)
    • Failure modes and security abuse cases
    • Required security controls and test expectations
    • Residual risk and approval path

Deliverable: “Criticality Analysis Standard” (short, enforceable, and referenced by SDLC and contracts).
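The minimum analysis content above can also be captured as a machine-readable record, which makes gate checks and evidence retrieval easier. The sketch below is illustrative only: the field names, tiers, and completeness rule are assumptions, not content mandated by SA-15(3).

```python
from dataclasses import dataclass

# Illustrative sketch of a minimal criticality-analysis record.
# Field names mirror the "depth" content list above; all are assumptions.

@dataclass
class CriticalityAnalysis:
    component: str                # component/service name and boundary
    data_classification: str      # e.g., "sensitive", "internal", "public"
    trust_boundaries: list[str]   # trust boundaries and privilege paths
    dependencies: list[str]       # third-party and open-source dependencies
    failure_modes: list[str]      # failure modes and security abuse cases
    required_controls: list[str]  # required controls and test expectations
    residual_risk: str = "none"   # residual risk and approval path
    approved_by: str = ""         # sign-off owner

    def is_complete(self) -> bool:
        """A record is gate-ready only if every required section is filled in."""
        return bool(self.trust_boundaries and self.dependencies
                    and self.failure_modes and self.required_controls)

record = CriticalityAnalysis(
    component="auth-service",
    data_classification="sensitive",
    trust_boundaries=["internet -> API gateway -> auth-service"],
    dependencies=["openssl", "postgres"],
    failure_modes=["token forgery", "key compromise"],
    required_controls=["mTLS", "key rotation test", "abuse-case pen test"],
)
print(record.is_complete())  # True
```

Keeping the standard this small is deliberate: a template engineers can fill in minutes is far more likely to be used at every gate than a long-form document.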

Step 2: Define SDLC “decision points” where the analysis is mandatory

Write these as gates with clear entry/exit criteria. Common decision points you can define without overcomplicating it:

  • Architecture approval / design review
  • Pre-implementation (before coding begins for a high-risk feature)
  • Pre-production / release readiness
  • Post-change review for material changes to critical components (e.g., major dependency swap, new admin interface, new auth flow)

Make it explicit that “decision point” includes significant change events, not just calendar milestones.

Deliverable: SDLC procedure update and an engineering checklist that shows “Criticality Analysis required? Y/N” with rationale.
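The "Criticality Analysis required? Y/N" checklist can be expressed as a simple gate function, which is useful if you want to automate the check in a CI or ticketing workflow. This is a hedged sketch: the scope and decision-point names are illustrative assumptions drawn from the lists above.

```python
# Sketch of the gate decision as a function. The trigger sets below are
# assumptions; your Criticality Analysis Standard defines the real lists.

CRITICAL_SCOPE = {"auth", "crypto", "admin-interface", "logging",
                  "backup", "update-mechanism", "ci-cd"}

DECISION_POINTS = {"architecture-approval", "pre-implementation",
                   "release-readiness", "post-change-review"}

def analysis_required(decision_point: str, touched_areas: set[str]) -> bool:
    """Return True when this SDLC gate must demand a criticality analysis."""
    if decision_point not in DECISION_POINTS:
        return False  # not a defined gate; no requirement triggered here
    return bool(touched_areas & CRITICAL_SCOPE)

# A change touching the auth flow at release readiness triggers the gate;
# a cosmetic change does not.
print(analysis_required("release-readiness", {"auth"}))      # True
print(analysis_required("release-readiness", {"ui-theme"}))  # False
```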

Step 3: Make it a contractual requirement for developers (including third parties)

SA-15(3) is explicit: you must require the developer to perform the analysis (NIST Special Publication 800-53 Revision 5). Implement this by adding to:

  • Master services agreements / development SOWs
  • Product security requirements for internal teams
  • Acceptance criteria for deliverables

Contract language should specify:

  • The developer must perform criticality analysis per your standard
  • Timing aligned to your decision points
  • Artifact format and delivery (e.g., documented analysis + updated architecture diagrams)
  • Obligation to update analysis when critical components change
  • Right to review and to reject releases that fail the gate

Step 4: Operationalize with an intake workflow and ownership

Assign owners:

  • Engineering owns producing the analysis.
  • Security architecture (or AppSec) owns review and required mitigations.
  • GRC owns gate definition, evidence collection, and audit readiness.

Create a lightweight workflow:

  1. System/feature intake identifies potential critical scope.
  2. Developer completes analysis template.
  3. Security reviews and issues required actions.
  4. Release/architecture board approves with documented residual risk acceptance where needed.

Step 5: Tie outputs to control selection, testing, and supplier decisions

A criticality analysis that does not change anything will not survive assessor scrutiny. Hardwire outputs to:

  • Threat modeling depth for critical areas
  • Security test scope (SAST/DAST, dependency scanning, code review intensity)
  • Hardening requirements and logging/auditing requirements
  • Third-party dependency approvals (including whether a component is allowed at all)
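One simple way to hardwire these outputs is a lookup from criticality tier to mandatory test scope, so the analysis result mechanically changes what engineering must do. The tier names and control sets below are illustrative assumptions, not a prescribed baseline:

```python
# Sketch of deriving test scope from the criticality determination.
TEST_SCOPE_BY_TIER = {
    "critical": {"threat-model", "sast", "dast", "dependency-scan",
                 "manual-code-review", "hardening-check", "audit-log-check"},
    "high":     {"threat-model", "sast", "dast", "dependency-scan"},
    "standard": {"sast", "dependency-scan"},
}

def required_tests(tier: str) -> set[str]:
    """Map a component's criticality tier to its mandatory test scope."""
    # Unknown tiers fall back to the baseline rather than to nothing.
    return TEST_SCOPE_BY_TIER.get(tier, TEST_SCOPE_BY_TIER["standard"])

print(sorted(required_tests("critical")))
```

Because higher tiers are supersets of lower ones, raising a component's criticality can only add requirements, never silently drop them.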

Step 6: Centralize evidence for audits (don’t rely on tribal knowledge)

Store artifacts in a system of record:

  • GRC repository, ticketing system, or controlled document store
  • Link analysis to architecture diagrams, epics, and release tickets
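A system-of-record entry that links the analysis to its approvals and mitigations might look like the sketch below. The keys are illustrative assumptions, not a Daydream or GRC-tool schema; the point is a retrievability check that fails fast when a linked artifact is missing.

```python
# Hedged sketch of an evidence record linking analysis, gate, and mitigations.
evidence_record = {
    "requirement": "SA-15(3)",
    "system": "payments-platform",          # illustrative system name
    "analysis_doc": "DOC-1042",             # hypothetical document ID
    "architecture_diagram": "DIAG-77",
    "gate_approval_ticket": "REL-2310",
    "mitigation_tickets": ["SEC-501", "SEC-502"],
    "change_trigger": "auth dependency swap",
}

REQUIRED_LINKS = {"requirement", "analysis_doc",
                  "gate_approval_ticket", "mitigation_tickets"}

def audit_ready(record: dict) -> bool:
    """Fast-retrieval check: every key artifact is linked and non-empty."""
    return all(record.get(k) for k in REQUIRED_LINKS)

print(audit_ready(evidence_record))  # True
```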

If you use Daydream, treat SA-15(3) as a requirement with a mapped evidence checklist so teams can attach the analysis, gate approvals, and change triggers in one place. The goal is fast retrieval during assessments, not perfect prose.

Required evidence and artifacts to retain

Keep evidence that proves requirement + timing + action:

Core artifacts

  • Criticality Analysis Standard (defines breadth and depth)
  • SDLC gate procedure showing decision points
  • Completed criticality analyses per system/component/service
  • Architecture diagrams and data flow diagrams referenced by the analysis
  • Dependency inventories for critical components (including third parties)
  • Review/approval records (tickets, meeting minutes, sign-offs)
  • Mitigation tracking (security requirements, backlog items, exceptions)

Change-trigger evidence

  • Release records showing analysis performed/updated before release
  • Change requests showing re-analysis when critical components change
  • Risk acceptance records for residual issues (who approved, what conditions)

Common exam/audit questions and hangups

Expect assessors to probe:

  • “How did you define breadth and depth?” Show your standard and a filled example.
  • “What are your SDLC decision points?” Show gates and at least one release where the gate was enforced.
  • “How do you know developers followed it?” Show contractual language (if third party) and a workflow record.
  • “How do results affect engineering?” Show a requirement or design change that came from the analysis.
  • “How do you handle changes?” Show a trigger mechanism and an updated analysis tied to a change ticket.

Hangup: teams often produce a one-time document at project start. SA-15(3) expects analysis at defined decision points, which typically includes change-driven refresh.

Frequent implementation mistakes (and how to avoid them)

  1. No organizational definition of breadth/depth.
    Fix: publish a minimum standard and template. Keep it short enough that engineers will use it.

  2. Criticality analysis confused with asset inventory.
    Fix: require decision outputs: what is critical, why, what controls/tests are required, and what changed as a result.

  3. No SDLC gate enforcement.
    Fix: make release readiness or architecture approval contingent on completing the analysis for in-scope work.

  4. Third-party developers not bound to the requirement.
    Fix: add explicit SOW language and acceptance criteria that require delivery of the analysis artifacts.

  5. Findings don’t flow into work tracking.
    Fix: every required mitigation becomes a ticket with an owner and due date; link tickets back to the analysis.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. From a risk standpoint, SA-15(3) reduces the chance that high-impact components (identity, cryptography, control plane, build pipeline, update mechanisms, and critical third-party dependencies) are designed or changed without proportional scrutiny, which can lead to outages, unauthorized access paths, and supply-chain exposure (NIST Special Publication 800-53 Revision 5).

Practical 30/60/90-day execution plan

First 30 days (get it enforceable)

  • Publish a “Criticality Analysis Standard” with minimum scope and required content.
  • Define SDLC decision points and add them to your engineering governance (architecture review, release readiness, significant change).
  • Update third-party development templates (SOW / acceptance criteria) to require criticality analysis deliverables.

Days 31–60 (run it on real work)

  • Pilot on one system or one major feature release.
  • Train engineering leads and security reviewers on the template and what “good” looks like.
  • Implement evidence capture: link analysis docs to tickets, approvals, and mitigation backlog items.

Days 61–90 (scale and make it auditable)

  • Expand to all systems/components/services in scope for your compliance boundary.
  • Add a periodic QA check: sample recent releases to confirm the gate was followed and artifacts are retrievable.
  • Build an audit-ready evidence package per system: standard, gates, last analyses, approvals, and mitigation status.

Frequently Asked Questions

What counts as a “criticality analysis” for SA-15(3)?

A documented evaluation that identifies which components/services are critical, why they are critical, and what security and resilience requirements follow from that determination. Your organization must define the required breadth, depth, and SDLC decision points (NIST Special Publication 800-53 Revision 5).

Do we need to do this for every minor code change?

No, but you do need clear change triggers. Define what “significant change” means for critical components, then require an update to the analysis at those decision points.

We buy a SaaS component; we are not the developer. How do we comply?

SA-15(3) expects you to require the developer to perform the analysis, so you should incorporate it into procurement and third-party contracting where feasible. If you cannot obtain it, document the gap, compensating review steps, and the risk decision.

How detailed does “breadth and depth” need to be?

Detailed enough that reviewers can identify critical components, trace dependencies, and derive specific control and test requirements. Keep a minimum standard, then allow deeper analysis for higher-risk components.

What evidence is most persuasive to auditors?

A completed analysis tied to a specific SDLC gate, plus proof the output drove action (security requirements, design change, test scope changes, or a documented risk acceptance). Auditors also want to see a repeatable template and consistent storage.

Can we manage this in tickets instead of long documents?

Yes, as long as the ticketing record includes the required content (scope, rationale, dependencies, decisions, approvals) and is easy to retrieve. Many teams store a short analysis doc and link it to the ticket for durability.


Authoritative Sources

  • NIST Special Publication 800-53 Revision 5, SA-15(3), Development Process, Standards, and Tools | Criticality Analysis

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream