RA-9: Criticality Analysis

The RA-9 criticality analysis requirement means you must identify which system components and functions are truly “critical,” then use that determination to drive stronger protection, monitoring, and supply chain decisions for those elements. To operationalize it quickly, define your scope and locations, run a structured criticality analysis workshop, document the rationale, and tie the results to architecture, change management, and third-party requirements. 1

Key takeaways:

  • Produce an explicit list of critical components/functions with defensible criteria and rationale, not a vague “Tier 0” label. 1
  • Bind the output to operational controls (configuration, logging, resilience, supplier requirements) so the analysis changes how you run the system. 2
  • Keep evidence that the analysis is performed for the defined scope and locations, and that it is maintained as systems and dependencies change. 1

Criticality analysis is one of the fastest ways to reduce “security-by-spreadsheet” problems. Most programs say everything is high priority, then nothing is. RA-9 fixes that by forcing you to name the components and functions that, if degraded or compromised, cause mission failure, safety impact, major financial exposure, or regulatory breach, and then treat them differently in engineering and operations. 2

For a CCO or GRC lead, the practical goal is assessment-ready documentation that an auditor can follow and an engineering leader can act on. RA-9 is also a forcing function for third-party risk: if a critical function depends on a third party (cloud, SaaS, managed service, OEM hardware, open-source maintainer), your supplier due diligence and contracting posture must match that dependency’s criticality. 2

This page gives requirement-level implementation guidance: who must do it, what decisions you must make (scope and locations), a step-by-step procedure you can assign to owners, the artifacts to retain, and the exam questions that commonly stall teams. The target outcome is simple: a criticality register that stays current and changes real controls. 1

Regulatory text

Control requirement (excerpt): “Identify critical system components and functions by performing a criticality analysis for {{ insert: param, ra-09_odp.01 }} at {{ insert: param, ra-09_odp.02 }}.” 1

How to read the placeholders (what you must decide):

  • “for [scope]”: you must define which system(s), platform(s), or environment(s) the analysis covers (for example: a specific federal information system boundary, a product line, or a shared service). 1
  • “at [locations]”: you must define where the analysis applies (for example: data centers, cloud regions, facilities, enclaves, or development/production tenants). 1

Operator expectation: produce a repeatable method that identifies and records critical components and functions, then keep it current as the system and dependencies change. The test is whether you can show (1) the analysis happened for the defined scope and locations, and (2) it drives downstream security and resilience decisions. 1

Plain-English interpretation

RA-9 asks one question: what parts of this system must not fail, must not be altered, and must be recoverable first? Your answer cannot be “the whole system.” You need a defensible way to isolate the “crown jewels” at the component/function level (identity, key management, payment capture, safety interlocks, routing, build pipeline signing, privileged access paths). 2

Then you operationalize the output: critical elements get tighter hardening, stricter change control, better monitoring, stronger third-party requirements, and higher resilience targets than noncritical elements. If your RA-9 output sits in a PDF and nothing changes, you will struggle in an assessment. 2

Who it applies to (entity and operational context)

RA-9 is commonly expected in:

  • Federal information systems implementing NIST SP 800-53 controls. 2
  • Contractor systems handling federal data where NIST SP 800-53 is flowed down contractually or used as the security control baseline. 2

Operationally, this applies to:

  • Systems with meaningful dependency graphs: cloud services, microservices, ERP stacks, OT/ICS, identity and access layers, CI/CD and artifact signing, and any environment with significant third-party or open-source dependencies. 2

What you actually need to do (step-by-step)

Step 1: Set scope and locations (make the placeholders real)

Create a short RA-9 scope statement that names:

  • System boundary or product/service boundary
  • Environments (prod, staging, DR, dev) if in scope
  • Physical/cloud locations (regions, facilities, enclaves)
  • Exclusions with rationale (what you are not analyzing and why)

Tip: auditors will ask you to reconcile this to your system inventory and boundary diagrams. Keep it consistent with your SSP or equivalent system description. 2

Step 2: Define criticality criteria your operators will accept

Pick criteria that create clear outcomes. Common criteria categories:

  • Mission/business impact: outage stops a regulated or mission-essential process.
  • Security impact: compromise enables privilege escalation, data exfiltration, fraud, or widespread lateral movement.
  • Safety impact: physical harm or unsafe operating state (OT/medical/industrial).
  • Recoverability: long restore time, hard-to-rebuild components, or unique state.
  • Concentration risk: single point of failure, high blast radius, shared service across many apps.
  • External dependency risk: depends on a third party or sole-source component.

Document the definitions, thresholds, and who approves them (typically system owner + security architect + operations lead; compliance facilitates). 2
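To make the approved criteria auditable and machine-checkable, some teams keep them as a small data definition with a validator. The criterion names below mirror the category list above; the "High" wording and the validation rule are illustrative assumptions, not prescribed values.

```python
# Illustrative criteria catalog: what qualifies as a "High" score for each
# criterion. Names follow the category list in this step; wording is an example.
CRITERIA = {
    "mission_impact": "Outage stops a regulated or mission-essential process",
    "security_impact": "Compromise enables privilege escalation, exfiltration, fraud, or lateral movement",
    "safety_impact": "Failure creates physical harm or an unsafe operating state",
    "recoverability": "Long restore time, hard-to-rebuild components, or unique state",
    "concentration_risk": "Single point of failure or high blast radius across shared services",
    "external_dependency": "Depends on a third party or sole-source component",
}


def validate_scores(scores):
    """Reject score sheets that use unknown criteria or values outside H/M/L."""
    unknown = set(scores) - set(CRITERIA)
    if unknown:
        raise ValueError(f"Unknown criteria: {sorted(unknown)}")
    bad = {k: v for k, v in scores.items() if v not in {"H", "M", "L"}}
    if bad:
        raise ValueError(f"Scores must be H/M/L: {bad}")
    return True
```

Keeping the catalog in one place means the workshop spreadsheet, the register, and the written procedure can all reference the same definitions.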

Step 3: Build the component and function inventory for the scope

You cannot analyze what you cannot name. Pull from:

  • Architecture diagrams and CMDB/inventory
  • Cloud resource inventory (accounts/projects/subscriptions)
  • Data flow diagrams
  • Identity and key management components
  • CI/CD and build/release pipeline components
  • Third-party dependency list (SaaS, MSP, OEM, critical libraries)

Output: a working list of components/functions to score. Keep it “engineering-readable.” 2
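A minimal sketch of the merge step: pull component names from several sources, deduplicate on a normalized name, and keep track of which sources mentioned each component (useful for reconciling to the CMDB later). The source names and component names are hypothetical examples.

```python
# Merge component names from multiple inventory sources into one working list.
# In practice the inputs come from CMDB exports, cloud inventory APIs,
# data flow diagrams, and dependency manifests.
def build_inventory(*sources):
    seen = {}
    for source_name, components in sources:
        for name in components:
            key = name.strip().lower()  # normalize so "Payments API" == "payments api"
            entry = seen.setdefault(key, {"name": name.strip(), "sources": []})
            entry["sources"].append(source_name)
    return list(seen.values())


inventory = build_inventory(
    ("cmdb", ["IdP Tenant", "KMS Key Hierarchy", "Payments API"]),
    ("cloud", ["payments api", "Artifact Signing Service"]),
)
```

Components that appear in only one source are worth a second look: they are often either stale CMDB records or shadow infrastructure.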

Step 4: Run the criticality analysis workshop (structured, time-boxed)

Use a table-driven method to score and classify each component/function. Example fields:

  • Component/function name
  • Owner (person/team)
  • Dependency mapping (upstream/downstream)
  • Criteria scoring (H/M/L or 1–5; pick one and stick to it)
  • Criticality designation (Critical / Important / Standard)
  • Rationale (one paragraph max; specific failure/compromise scenario)

Decision rule: define what combination of criteria makes something “Critical” and get approval. Avoid subjective “feels critical” decisions. 2
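The register fields and the decision rule can be sketched together. The field names below mirror the example fields above; the "two or more High criteria makes something Critical" rule is an assumption your approvers would replace with their own agreed threshold.

```python
from dataclasses import dataclass, field

# Hypothetical register entry matching the example fields in this step.
@dataclass
class RegisterEntry:
    name: str
    owner: str
    scores: dict            # criterion name -> "H" | "M" | "L"
    rationale: str = ""
    dependencies: list = field(default_factory=list)
    designation: str = "Standard"


def designate(scores):
    """Example decision rule: 2+ Highs => Critical; 1 High or 2+ Mediums =>
    Important; otherwise Standard. Replace with your approved rule."""
    highs = sum(1 for v in scores.values() if v == "H")
    mediums = sum(1 for v in scores.values() if v == "M")
    if highs >= 2:
        return "Critical"
    if highs == 1 or mediums >= 2:
        return "Important"
    return "Standard"
```

Encoding the rule removes the “feels critical” problem: the same score sheet always produces the same designation, and the rule itself becomes an approvable, versionable artifact.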

Step 5: Tie outputs to required operational changes (the part auditors probe)

For every “Critical” item, specify minimum control consequences. Examples:

  • Change management: mandatory peer review, maintenance windows, rollback plans, tighter approval chain.
  • Configuration hardening: stricter baselines, drift detection, limited administrative paths.
  • Logging/monitoring: higher-fidelity telemetry, alerting, and retention aligned to incident response needs.
  • Access control: least privilege, stronger MFA, PAM enforcement for admin actions.
  • Resilience: explicit recovery sequence and DR priority for critical components.
  • Third-party risk: enhanced due diligence, contractual security addenda, stronger SLAs, notification requirements, right-to-audit where feasible.

Put this mapping in writing. This is where RA-9 becomes operational rather than descriptive. 2
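The mapping in writing can be as simple as a lookup from designation to minimum control consequences. The requirement strings below paraphrase the examples above and are placeholders to swap for your own standard and ticket references.

```python
# Hypothetical designation -> minimum control consequences mapping.
# Replace the strings with references to your own standards/tickets.
CONTROL_CONSEQUENCES = {
    "Critical": [
        "Mandatory peer review, maintenance window, and rollback plan for changes",
        "Hardened baseline with drift detection",
        "High-fidelity telemetry and alerting with extended retention",
        "PAM enforcement and stronger MFA for administrative actions",
        "Explicit position in the DR recovery sequence",
        "Enhanced third-party due diligence and contractual security addenda",
    ],
    "Important": [
        "Peer review for changes",
        "Standard baseline with periodic configuration audit",
    ],
    "Standard": [
        "Default change, logging, and access policy",
    ],
}


def required_controls(designation):
    """Unknown designations fall back to the Standard tier by design."""
    return CONTROL_CONSEQUENCES.get(designation, CONTROL_CONSEQUENCES["Standard"])
```

An assessor can then trace any “Critical” register entry straight to the concrete requirements it must meet, which is exactly the probe described in this step.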

Step 6: Integrate into governance (so it stays current)

Define “refresh triggers,” such as:

  • Major architecture change
  • New third party supporting a critical function
  • New cloud region/facility added to the defined locations
  • Material incident or near-miss
  • Significant change in data sensitivity or mission need

Also define a periodic review cadence that fits your change velocity, and assign a control owner accountable for completion and evidence capture. 2
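Refresh triggers are easiest to enforce if change-management intake screens ticket summaries against them. The trigger phrases below follow the list above but are assumptions to tune to your own intake form wording.

```python
# Illustrative refresh-trigger screen for change-management intake.
# Phrases mirror the trigger list in this step; tune them to your tickets.
REFRESH_TRIGGERS = (
    "architecture change",
    "new third party",
    "new region",
    "new facility",
    "incident",
    "data sensitivity change",
)


def needs_refresh(change_summary):
    """Return True if a change-ticket summary matches any refresh trigger."""
    text = change_summary.lower()
    return any(trigger in text for trigger in REFRESH_TRIGGERS)
```

A keyword screen will over-flag, which is the safe failure mode here: the control owner dismisses false positives, but a missed trigger silently lets the register drift.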

Step 7: Make it assessment-ready (mapping, owners, evidence)

Minimum operationalization checklist:

  • Named RA-9 control owner
  • Written procedure for performing criticality analysis
  • Recurring evidence artifacts and repository location
  • Traceability: criticality register links to system inventory/architecture artifacts

Daydream fit: many teams struggle with evidence consistency across systems and third parties. Daydream becomes the system of record to assign RA-9 ownership, store the criticality register, and track recurring evidence so you can answer assessor questions without scrambling. 1

Required evidence and artifacts to retain

Keep artifacts that prove method, execution, and operational impact:

  • RA-9 policy/procedure (scope, criteria, method, approval)
  • Scope and locations statement (what the analysis covered) 1
  • Component/function inventory used as input (export from CMDB/cloud inventory + diagrams)
  • Criticality register (the scored list with owners and rationale)
  • Dependency map for critical components (can be diagram or table)
  • Approval evidence (sign-off in ticketing system, GRC tool, or meeting minutes)
  • Mapping of “Critical” items to downstream control requirements (change control, monitoring, access, DR)
  • Evidence of refresh (change-trigger tickets, periodic review record, updated register diff)

Common exam/audit questions and hangups

Expect these questions and prepare crisp answers:

  1. “What is the scope and where was it performed?” Provide the RA-9 scope/locations statement and show the analyzed system boundary. 1
  2. “Show me your criteria and how you prevent bias.” Provide definitions, scoring method, and approval trail.
  3. “Prove it changed operations.” Show the mapping from “Critical” items to concrete requirements in change management, access controls, monitoring, and DR plans.
  4. “How do third parties factor in?” Demonstrate that critical dependencies have enhanced due diligence and contractual controls aligned to criticality.
  5. “How do you keep it current?” Show triggers, review records, and evidence of updates after material changes.

Frequent implementation mistakes and how to avoid them

  • Mistake: labeling entire systems as “critical” with no component detail. Why it fails: it doesn’t support targeted protections or recovery sequencing. Fix: classify at the component/function level and record dependencies. 2
  • Mistake: no explicit scope/locations. Why it fails: assessors can’t tell what you covered. Fix: write scope and locations into the procedure and the register header. 1
  • Mistake: analysis not tied to controls. Why it fails: it becomes a one-time document. Fix: require control consequences for each “Critical” item and link them to tickets and standards. 2
  • Mistake: ignoring third-party and supply chain dependencies. Why it fails: critical functions often depend on external services. Fix: add a dependency column and a “third-party required controls” section.
  • Mistake: no refresh triggers. Why it fails: the register drifts from reality. Fix: define triggers and make updates part of change management intake.

Risk implications (why RA-9 shows up in real incidents)

Critical components are where compromises cascade: identity, key management, update/signing pipelines, network control planes, and shared logging. If you misclassify these as “standard,” you will underinvest in monitoring and hardening, and response teams will lack a recovery priority order. RA-9 reduces that risk by forcing explicit prioritization and documented rationale. 2

Practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Assign an RA-9 owner and approver(s); publish a one-page scope/locations statement. 1
  • Draft criteria and scoring method; get engineering and operations buy-in.
  • Build the component/function inventory from existing diagrams and inventories.
  • Schedule the analysis workshop with system owner, security architect, SRE/ops, and key third-party owners.

By 60 days (Near-term)

  • Complete the criticality workshop; produce the criticality register with owners and rationale.
  • Identify third-party dependencies supporting critical items; open due diligence and contracting actions for gaps.
  • Create the “control consequences” mapping for critical items (change control, logging, access, DR sequencing).
  • Put the register and approvals in a controlled repository (or Daydream) and link it to the system record. 1

By 90 days (Ongoing readiness)

  • Bake refresh triggers into change management intake.
  • Run a tabletop: “critical component failure/compromise” to validate recovery order and monitoring coverage.
  • Close the loop: verify critical items actually have the required guardrails (tickets completed, standards updated, monitoring enabled).
  • Establish recurring evidence capture (review record, register updates, exceptions with expiry).

Frequently Asked Questions

What counts as a “critical component” versus a “critical function”?

A critical function is an outcome the system must perform (e.g., authentication, transaction authorization). A critical component is the specific element that enables it (e.g., IdP tenant, KMS key hierarchy, signing service). Your register should include both where it helps responders and engineers.

Do we need a formal scoring model?

No single model is mandated in the RA-9 text, but you need consistent criteria and a repeatable method that leads to the identification of critical items for your defined scope and locations. 1

How does RA-9 connect to third-party risk management?

If a critical function depends on a third party, your due diligence, contracting, monitoring, and exit planning should reflect that dependency’s criticality. Capture the dependency in the register and link it to third-party review artifacts.

How often should we refresh the criticality analysis?

Use change-driven triggers (architecture changes, new third parties, new locations) plus a periodic review that matches your system change rate. The key is evidence that updates happen when reality changes. 2

We have multiple cloud regions and environments. Do we need separate analyses?

You can use one analysis if the scope and locations are explicit and differences are captured (region-specific dependencies, environment-specific components). If regions materially differ, document the variance and treat it as separate location coverage. 1

What’s the minimum evidence an assessor will accept?

A written method, a scoped and location-specific register of critical components/functions with rationale and owners, approval evidence, and proof that critical items receive stronger operational controls than noncritical ones. 1

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream