Criticality Analysis
The RA-9 criticality analysis requirement means you must identify which system components and functions are “critical” and document that determination at defined points in your SDLC, so you can apply stronger supply chain, architecture, testing, and monitoring controls where compromise would cause outsized mission or security impact. The output is a repeatable method plus traceable artifacts, not a one-time narrative. 1
Key takeaways:
- Define “critical” for your environment, then apply it consistently across systems, components, and services. 1
- Build decision points into the SDLC so criticality is reassessed when designs, dependencies, or deployments change. 1
- Keep auditable artifacts: scope, method, dependency map, critical component list, rationale, approvals, and change triggers. 1
Criticality analysis is the practical bridge between “we have a system boundary” and “we know exactly which parts must not fail, must not be tampered with, and must be protected more aggressively.” Under FedRAMP Moderate, RA-9 requires you to identify critical system components and functions by performing a criticality analysis for defined systems, components, or services at defined decision points in the SDLC. 1
For a CCO, GRC lead, or compliance officer, the fastest path to operationalizing RA-9 is to treat it as an engineering-integrated governance requirement: you set scope, a consistent method, and SDLC gates; engineering produces dependency evidence and technical rationale; security validates and ties the output to control selection (for example, stricter supplier controls or additional monitoring for critical elements). Your goal is a defensible record that you know what is critical, why it is critical, and what you changed because of it.
This page gives you a requirement-level playbook: who owns what, how to run the analysis step-by-step, what artifacts auditors expect, and where teams get stuck.
Regulatory text
RA-9 states: “Identify critical system components and functions by performing a criticality analysis for organization-defined systems, system components, or system services at organization-defined decision points in the system development life cycle.” 1
What the operator must do: you must (1) decide which systems/components/services are in scope, (2) define SDLC decision points where criticality is evaluated, (3) run a documented analysis that identifies critical components and functions, and (4) retain evidence that the analysis occurred and informs downstream security actions. 1
Plain-English interpretation (what RA-9 is really asking)
You need a repeatable way to answer three audit-grade questions:
- What parts of the system are most “mission/security sensitive”? Think identity, trust anchors, cryptographic key handling, authorization paths, control planes, update mechanisms, and data movement paths.
- How did you decide that? A written method with factors and thresholds you apply consistently.
- When do you re-check it? Defined SDLC points and change triggers so the list stays current as architecture and third parties change. 1
RA-9 does not require a specific scoring model. It does require that you can show your work.
Who it applies to
Entity types: Cloud Service Providers and Federal Agencies operating systems aligned to the FedRAMP Moderate baseline. 1
Operational contexts where RA-9 shows up:
- New system builds, major feature releases, re-architecture, or boundary changes.
- Procurement and third-party onboarding for components that become critical dependencies.
- Platform changes (for example, new CI/CD tooling, new secrets management, new identity provider).
- “Inherited” services and shared responsibility areas where you still need clarity on which parts are critical within your responsibility. 1
What you actually need to do (step-by-step)
Step 1: Set scope and name decision points (make it schedulable)
Create a one-page RA-9 scope statement:
- In-scope objects: which systems, system components, and system services you will analyze. 1
- Decision points in the SDLC: choose explicit gates such as architecture review, pre-production security review, major change approval, and periodic revalidation. RA-9 requires organization-defined decision points. 1
- Change triggers: events that force a re-run outside the normal cadence (for example, introducing a new third party into an auth path, changing encryption/key handling, adding a new admin plane). Keep triggers concrete so teams can follow them.
Owner: GRC defines; Engineering/Security approve; change management enforces.
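One way to keep the scope statement enforceable is to encode it as data that tooling (change management, CI checks) can reference. The sketch below is illustrative only; the system names, decision points, and trigger strings are hypothetical placeholders for your own.

```python
# Illustrative RA-9 scope statement as data. All names here are
# hypothetical examples, not a prescribed schema.
RA9_SCOPE = {
    "in_scope": ["payments-api", "identity-service", "build-pipeline"],
    "decision_points": [
        "architecture-review",
        "pre-production-security-review",
        "major-change-approval",
        "annual-revalidation",
    ],
    "change_triggers": [
        "new third party in an auth path",
        "change to encryption or key handling",
        "new administrative plane or privileged integration",
    ],
}

def requires_reassessment(event: str) -> bool:
    """Return True if an event matches a defined change trigger."""
    return event in RA9_SCOPE["change_triggers"]
```

Encoding triggers as explicit strings keeps them concrete, which is the point of Step 1: a change ticket either matches a trigger or it does not.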
Step 2: Define a criticality method your engineers can apply
Write a short standard (2–4 pages) that answers:
- Criticality definition: what “critical” means in your environment (mission impact, confidentiality/integrity/availability impact, blast radius, irreversibility, privileged access, and exposure to untrusted inputs).
- Assessment factors, for example:
- Does this component enforce authentication/authorization?
- Does it store or process sensitive data?
- Does it manage secrets/keys?
- Does it control deployment, configuration, or policy (control plane)?
- Is it a single point of failure?
- Is compromise hard to detect or hard to recover from?
- Classification output: a small number of categories (for example: critical / non-critical, or criticality tiers). Keep it simple enough that teams use it consistently.
Tip: tie factors to things you can evidence (design docs, diagrams, configs), not opinions.
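The factor checklist above can be made mechanical so teams apply it consistently. The following is a minimal sketch assuming a three-tier output (critical / elevated / non-critical); the factor names and thresholds are illustrative, not prescribed by RA-9.

```python
# Sketch of a factor checklist mapped to a criticality tier.
# Factor names mirror the questions in the standard; the thresholds
# are illustrative assumptions, not an RA-9 requirement.
FACTORS = [
    "enforces_authn_authz",
    "handles_sensitive_data",
    "manages_secrets_or_keys",
    "control_plane",
    "single_point_of_failure",
    "hard_to_detect_or_recover",
]

def classify(answers: dict) -> str:
    """Map yes/no factor answers to a simple criticality tier."""
    hits = [f for f in FACTORS if answers.get(f)]
    # Example policy: identity enforcement and key handling are
    # always critical; otherwise two or more factors escalate.
    if "enforces_authn_authz" in hits or "manages_secrets_or_keys" in hits:
        return "critical"
    if len(hits) >= 2:
        return "critical"
    if hits:
        return "elevated"
    return "non-critical"
```

A deterministic mapping like this also produces the "factors triggered" checklist you will record in Step 5.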
Step 3: Build the dependency picture (you cannot classify what you cannot see)
For each in-scope system/service, require a current:
- Architecture diagram(s) with trust boundaries and data flows.
- Inventory of components (services, libraries, platforms, agents, appliances).
- Third-party dependency list where external providers touch critical flows (identity, logging, CI/CD, CDN/WAF, email, ticketing with privileged integrations, managed databases). Use “third party” broadly, not just “vendors.”
If your diagrams are stale, RA-9 will fail in practice because “criticality” becomes guesswork.
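A component inventory that treats "third party" as a flag rather than a vendor category keeps managed services and SaaS in view. The entries below are hypothetical examples of what such an inventory might look like.

```python
# Minimal component inventory sketch. "third_party" is a flag, not a
# vendor category, so managed services and SaaS count too. All entries
# are illustrative.
INVENTORY = [
    {"name": "identity-service", "kind": "service", "third_party": False},
    {"name": "managed-postgres", "kind": "system service", "third_party": True},
    {"name": "cdn-waf", "kind": "system service", "third_party": True},
]

def third_party_components(inventory: list) -> list:
    """List components supplied by external providers."""
    return [c["name"] for c in inventory if c["third_party"]]
```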
Step 4: Identify critical functions first, then map components that implement them
Operators move faster if they start with functions:
- Identity proofing, authentication, authorization, session management
- Policy decision and enforcement points
- Key generation, storage, rotation, and cryptographic operations
- Audit logging, log integrity, time sync, and alerting paths
- Administrative access and privileged workflows
- Update mechanism and build pipeline integrity (who can ship code/config)
Then map each function to:
- Owning component(s)
- Data touched
- Privileges required
- Downstream dependencies
- Failure/compromise outcome
This creates a traceable chain: function → component → dependency → control requirements.
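That chain can be captured as a simple structure, one entry per critical function. The field and component names below are assumptions for illustration, not an RA-9 schema.

```python
# Illustrative traceability chain: function -> component -> dependency
# -> compromise outcome. Names and fields are hypothetical.
CRITICAL_FUNCTIONS = {
    "token issuance": {
        "components": ["identity-service"],
        "data": ["credentials", "session tokens"],
        "privileges": ["signing key access"],
        "dependencies": ["hsm", "user-directory"],
        "compromise_outcome": "tenant-wide lateral movement",
    },
}

def components_for(function: str) -> list:
    """Resolve a critical function to the components implementing it."""
    return CRITICAL_FUNCTIONS[function]["components"]
```

Starting from functions and resolving to components, rather than the reverse, keeps the analysis anchored on mission impact rather than on whatever happens to be in the asset list.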
Step 5: Classify components and record rationale
For each component/service, record:
- Criticality category
- Factors triggered (checklist)
- Rationale (plain language)
- Approver(s) and date
- Links to evidence (diagram, threat model, config repo, vendor docs where relevant)
Keep the rationale tight: “This service issues tokens used across the tenant; compromise enables lateral movement; all auth depends on it.”
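The fields above map naturally onto a worksheet record. A minimal sketch, assuming a flat record per component; the field names, dates, and links are illustrative.

```python
from dataclasses import dataclass, field

# Minimal worksheet record for one component, following the field
# list above. All values shown are hypothetical examples.
@dataclass
class CriticalityRecord:
    component: str
    category: str
    factors_triggered: list
    rationale: str
    approver: str
    approved_on: str  # ISO date
    evidence_links: list = field(default_factory=list)

rec = CriticalityRecord(
    component="identity-service",
    category="critical",
    factors_triggered=["enforces_authn_authz", "manages_secrets_or_keys"],
    rationale="Issues tokens used across the tenant; compromise enables lateral movement.",
    approver="system-owner",
    approved_on="2025-01-15",
    evidence_links=["diagrams/identity.svg", "threat-models/identity.md"],
)
```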
Step 6: Turn the output into security actions (where audits focus)
RA-9 is identification, but auditors will test whether identification changes behavior. Common tie-ins:
- Third-party due diligence: critical third parties get deeper assessment, stronger contractual security requirements, and tighter exit plans.
- Secure SDLC: critical components require stronger change control, peer review, signing, and testing depth.
- Monitoring and IR: critical components get higher-fidelity logging, alerting, and incident runbooks.
- Resilience: critical components get more rigorous availability design and recovery planning.
You do not need to claim “perfect coverage.” You do need a defensible linkage from “critical” to “we added controls.”
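One simple way to evidence that linkage is a lookup from criticality tier to required downstream controls. The control names below are placeholders for your own policy sections, not prescribed controls.

```python
# Example linkage from criticality tier to required downstream
# controls. Control names are placeholders for your own policies.
REQUIRED_CONTROLS = {
    "critical": [
        "enhanced supplier review",
        "signed builds and two-person change review",
        "high-fidelity logging and dedicated IR runbook",
    ],
    "elevated": ["standard supplier review", "peer review"],
    "non-critical": ["baseline controls"],
}

def controls_for(tier: str) -> list:
    """Resolve a tier to its required controls, defaulting to baseline."""
    return REQUIRED_CONTROLS.get(tier, REQUIRED_CONTROLS["non-critical"])
```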
Step 7: Embed into workflows (make it hard to skip)
Add RA-9 checkpoints into:
- Architecture review templates (“List critical functions/components; attach criticality worksheet”)
- Change request templates (trigger questions)
- Procurement intake (“Does this third party support a critical function?”)
- Risk register intake (new critical component = new tracked risk entry)
If you run Jira/ADO/ServiceNow, make the criticality worksheet a required attachment at defined gates.
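A gate of this kind can be sketched as a small check over an exported ticket. The ticket shape (a dict with `components` and `attachments`) and the attachment name are assumptions, not any ticketing system's actual API.

```python
# Sketch of a gate check: flag a change ticket that touches a
# critical component without an attached criticality worksheet.
# The ticket structure is a hypothetical export, not a real API.
CRITICAL_COMPONENTS = {"identity-service", "build-pipeline"}

def gate_check(ticket: dict) -> list:
    """Return a list of gate failures (empty list = pass)."""
    failures = []
    touches_critical = any(
        c in CRITICAL_COMPONENTS for c in ticket.get("components", [])
    )
    if touches_critical and "criticality-worksheet" not in ticket.get("attachments", []):
        failures.append("missing criticality worksheet for critical component")
    return failures
```

Wiring a check like this into the pipeline or workflow engine is what makes the gate "hard to skip" rather than a documented intention.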
Required evidence and artifacts to retain
Keep artifacts in a place auditors can access without tribal knowledge:
- RA-9 procedure/standard (scope, method, decision points) 1
- System/component/service scope list with owners 1
- SDLC gate definitions showing when criticality is assessed 1
- Architecture diagrams and data flow diagrams used in the analysis
- Criticality analysis worksheets 1, including:
- critical functions
- component inventory
- classification outcome
- rationale and evidence links
- approvals and dates 1
- Change triggers and re-assessment records (tickets, approvals, updated worksheet versions) 1
- Traceability to downstream controls (for example, “critical components require enhanced supplier review,” with references to your third-party risk process)
If you use Daydream to manage third-party risk and due diligence, store “criticality tags” on third parties and map them to required assessment depth, contract clauses, and review workflows. That makes the RA-9 output operational instead of a PDF that goes stale.
Common exam/audit questions and hangups
Expect these questions:
- “Show me your RA-9 method. Who approved it?” 1
- “What are your SDLC decision points for criticality analysis? Prove you ran it at those points.” 1
- “Pick a critical component. Walk me from diagram to rationale to controls you applied.”
- “How do you ensure re-analysis when architecture or third parties change?” 1
- “Are cloud-native managed services in your component inventory, or did you ignore them because they’re ‘inherited’?”
Hangup: teams confuse “criticality” with “asset inventory.” You need both. Inventory is the list; criticality is the prioritization with rationale.
Frequent implementation mistakes (and how to avoid them)
- Mistake: treating criticality as a one-time onboarding task.
  Fix: define change triggers and SDLC gates; require re-approval on major changes. 1
- Mistake: focusing only on infrastructure, missing functions.
  Fix: start with critical functions (authz, keys, control plane, logging integrity), then map components.
- Mistake: “everything is critical.”
  Fix: force tradeoffs. If everything is critical, nothing is prioritized. Set objective factors and require rationale.
- Mistake: no evidence trail from the analysis to action.
  Fix: add a column in the worksheet for “required downstream controls” and link to the ticket or policy section.
- Mistake: excluding third parties and managed services.
  Fix: include system services and external dependencies explicitly, because RA-9 covers “system services,” not just servers you run. 1
Enforcement context and risk implications
No public enforcement cases are cited for this requirement, so treat audit and authorization risk as the primary external driver.
Practically, weak RA-9 execution creates predictable risk:
- Supply chain blind spots where a third party supports a critical function but receives standard due diligence.
- Architecture decisions that increase blast radius because the “critical path” was never mapped.
- Incident response delays because teams cannot quickly identify which components must be isolated, restored, or forensically preserved.
Practical 30/60/90-day execution plan
First 30 days: Stand up the minimum viable RA-9 program
- Publish the RA-9 procedure: scope approach, method, SDLC decision points, change triggers. 1
- Pick a pilot system in your FedRAMP boundary and complete one end-to-end worksheet with engineering.
- Add a criticality section to architecture review and change request templates.
By 60 days: Expand coverage and connect to third-party risk
- Complete analyses for remaining high-impact systems/services in scope.
- Create a list of critical components and critical third parties, with owners.
- Update third-party intake so “supports a critical function” triggers deeper due diligence and contract review.
- Train engineering and procurement on how to apply the method and when reassessment is required.
By 90 days: Make it repeatable and auditable
- Audit your own process: sample recent changes and confirm RA-9 triggers fired when they should.
- Add reporting: critical component list, last review date, pending reassessments, exceptions.
- Tighten traceability: each critical component shows links to diagrams, controls, monitoring, and IR runbooks.
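The "pending reassessments" report above reduces to a date comparison against your revalidation cadence. A minimal sketch, assuming an annual cadence and an inventory exported as dicts; all names and dates are illustrative.

```python
from datetime import date

# Illustrative report: flag critical components whose last review is
# older than the revalidation cadence. Cadence and dates are examples.
REVIEW_CADENCE_DAYS = 365

def overdue(components: list, today: date) -> list:
    """Return names of components past their review cadence."""
    return [
        c["name"]
        for c in components
        if (today - date.fromisoformat(c["last_review"])).days > REVIEW_CADENCE_DAYS
    ]

inventory = [
    {"name": "identity-service", "last_review": "2023-01-10"},
    {"name": "build-pipeline", "last_review": "2025-06-01"},
]
print(overdue(inventory, date(2025, 7, 1)))  # -> ['identity-service']
```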
Frequently Asked Questions
Do we need a quantitative scoring model for criticality analysis?
No specific model is required by RA-9; you need a documented method that consistently identifies critical components and functions. The key is repeatability and evidence at SDLC decision points. 1
What counts as a “decision point in the SDLC” for RA-9?
RA-9 lets you define the decision points, but they must be explicit and provable in practice, such as architecture review gates or major change approvals. Document them and retain records showing analyses were completed at those points. 1
Are managed cloud services and SaaS dependencies in scope?
They can be, because RA-9 applies to “systems, system components, or system services.” If a managed service supports a critical function, include it in the dependency map and criticality determination. 1
How do we connect RA-9 to third-party due diligence without overloading procurement?
Tag third parties that support critical functions and apply a higher due diligence “track” only to that subset. Keep the trigger question simple on intake forms and push the heavier work to security/GRC.
Who should approve the criticality analysis output?
Assign accountability to the system owner with security review, then document approvals in the worksheet or ticketing system. Auditors mainly care that ownership is clear and repeatable. 1
What evidence is “enough” for an assessor?
Provide the procedure, the completed analyses, diagrams/data flows used, and records showing the work occurred at defined SDLC decision points. Add traceability to actions taken for critical components to prevent the analysis from reading as theoretical. 1
Footnotes
1. NIST Special Publication 800-53 Revision 5 (control RA-9, Criticality Analysis).
Authoritative Sources
- NIST Special Publication 800-53 Revision 5