SA-22: Unsupported System Components

SA-22 requires you to replace any system component (hardware, software, firmware, or service dependency) once the developer, vendor, or manufacturer no longer provides support, unless you have an approved, time-bound exception with compensating controls and a funded replacement plan. Operationalize it by inventorying components, tracking end-of-support dates, and enforcing a “no unsupported in production” rule.

Key takeaways:

  • You need an authoritative list of components plus verified support/EOL dates tied to production systems.
  • Unsupported components must be replaced, or formally excepted with compensating controls, deadlines, and executive risk acceptance.
  • Audits fail on evidence: show detection, decisioning, remediation tracking, and closure proof.

The SA-22 (Unsupported System Components) requirement is a simple rule that becomes messy in real environments: if the maker will not support it, you must replace it (or treat it as an exception you can defend). “Support” is not a vague concept in audits. It’s a concrete vendor posture: security patches stop, bug fixes stop, and you lose a credible path to remediate newly discovered vulnerabilities. That quickly turns into preventable exposure, especially for externally facing systems, privileged infrastructure, and components embedded inside “bigger” platforms.

Most programs stumble for non-technical reasons: nobody owns the requirement end-to-end; end-of-support dates live in spreadsheets with no change control; or exceptions become permanent. Your goal as a Compliance Officer, CCO, or GRC lead is to convert SA-22 into a repeatable operational control with clear triggers, decision rights, and evidence that stands up in a 3-hour audit interview.

This page gives you a runbook: who it applies to, the minimum steps to implement, what artifacts to retain, and the exam questions you should be ready to answer.

Regulatory text

Requirement (excerpt): “Replace system components when support for the components is no longer available from the developer, vendor, or manufacturer; or” 1

Operator meaning: You must have a mechanism to (1) know when a component becomes unsupported and (2) ensure it does not remain in use without an approved, documented path to replacement. The excerpt ends with “or”, which signals that NIST allows an alternative treatment path in the full control text (for example, through risk-managed exceptions). Your implementation should assume replacement is the default outcome and exceptions are constrained, documented, and time-bound. 2

Plain-English interpretation (what SA-22 expects)

SA-22 expects you to prevent “unsupported” components from quietly becoming permanent parts of production systems. In practice:

  • Unsupported means the supplier no longer provides security patches, hotfixes, or technical support for that version/model.
  • System components include operating systems, database versions, network devices, hypervisors, libraries, firmware, appliances, and managed service dependencies that are part of your system boundary.
  • Replace means upgrade, migrate, re-platform, or retire the component so the running environment returns to a supported state.

A good SA-22 implementation answers two operational questions:

  1. Detection: How do you reliably learn that a component is approaching end-of-support (EOS) or is already end-of-life (EOL)?
  2. Enforcement: What forces remediation to happen (change management gates, risk exception process, and tracked completion)?

Who it applies to

SA-22 applies wherever you are expected to align with NIST SP 800-53, especially:

  • Federal information systems
  • Contractor systems handling federal data 1

Operationally, this requirement touches multiple teams:

  • IT infrastructure (endpoints, servers, network/security appliances)
  • Engineering / platform (runtime images, base containers, CI runners)
  • Security (vulnerability management, configuration management)
  • Procurement / third-party management (supplier lifecycle and contracts)
  • Change management / CAB (upgrade execution and scheduling)
  • GRC (exceptions, evidence, audit readiness)

What you actually need to do (step-by-step)

Use this as your minimum viable runbook. Keep it short, enforceable, and auditable.

1) Assign ownership and define the control boundary

  • Name a control owner (often Infrastructure or Security Engineering) and a GRC owner (policy/exceptions/evidence).
  • Define scope: production systems, supporting infrastructure, and “shared services” that could affect the confidentiality, integrity, or availability of federal data. 2

Deliverable: A one-page “control card” that states objective, owner, triggers, frequency, exception rules, and required evidence. This aligns with the practical expectation that teams can show who owns the requirement and how it runs. 1

2) Build (or fix) the system component inventory

You cannot replace unsupported components you cannot name.

Minimum fields to capture:

  • Component name (product + edition), version, deployment location
  • Asset/system mapping (what business service depends on it)
  • Environment (prod/non-prod)
  • Supplier (developer/vendor/manufacturer)
  • Support status and source of truth for EOS/EOL date (vendor bulletin, contract terms, or support portal record)
  • Owner (team) and replacement path (upgrade target)

Practical tip: Inventory must connect to real systems. If it’s only a procurement list, audits will find unsupported open-source libraries and runtime images that never pass through procurement.
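The minimum fields above can be sketched as a small record type. This is an illustrative schema, not a mandated one; the field names and the rule that an unverified EOS date counts as unsupported are assumptions of this sketch.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Component:
    """Illustrative inventory record; field names are assumptions."""
    name: str                 # product + edition
    version: str
    location: str             # deployment location
    system: str               # business service that depends on it
    environment: str          # "prod" or "non-prod"
    supplier: str             # developer/vendor/manufacturer
    eos_date: Optional[date]  # verified end-of-support date; None = unverified
    eos_source: str           # vendor bulletin, contract, or support portal record
    owner: str                # accountable team
    replacement_path: str     # upgrade target, or "" if not yet planned

    def is_supported(self, today: date) -> bool:
        # Unknown EOS dates are treated as unsupported until verified,
        # which forces teams to confirm status rather than let gaps linger.
        return self.eos_date is not None and today < self.eos_date
```

Defaulting unknown dates to "unsupported" is a design choice: it makes verification work visible instead of hiding it.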

3) Establish EOS/EOL monitoring and trigger events

Define trigger events that start remediation work:

  • Vendor announces EOS/EOL for a version/model.
  • Your support contract changes (coverage lapses).
  • A system is discovered running an out-of-support version during scanning or incident response.

Common monitoring sources:

  • Vendor lifecycle bulletins and security advisories
  • CMDB/asset tooling plus vulnerability scanning results
  • Third-party managed service notices (SaaS/PaaS deprecation schedules)

Control design goal: You should be able to show that new EOS/EOL items become tracked work, not “tribal knowledge.”
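The trigger logic above can be sketched as a check over the inventory that emits tracked work items. The 180-day lead time and the dictionary shape are assumptions for illustration, not NIST requirements.

```python
from datetime import date, timedelta

def eos_triggers(components, today, lead_days=180):
    """Return (name, status) pairs that should become remediation tickets.

    `components` is assumed to be a list of dicts with "name" and an
    optional "eos_date" (a datetime.date, or None when unverified).
    """
    window = today + timedelta(days=lead_days)
    triggers = []
    for c in components:
        eos = c.get("eos_date")
        if eos is None:
            triggers.append((c["name"], "unknown-support-status"))
        elif eos <= today:
            triggers.append((c["name"], "already-eol"))
        elif eos <= window:
            triggers.append((c["name"], "approaching-eos"))
    return triggers
```

Feeding this output into a ticketing system is what turns EOS/EOL news into tracked work instead of tribal knowledge.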

4) Enforce a “no unsupported in production” gate

Put a hard control in the path of change:

  • Change management requires verification of support status for major upgrades, new deployments, and rebuilds.
  • Build pipelines require supported base images and supported language runtimes.
  • Procurement intake flags purchases that will be unsupported within the expected deployment window.

If you cannot enforce gates everywhere, enforce them where risk concentrates: internet-facing services, identity systems, logging, and key management infrastructure.
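A build-pipeline gate can be as simple as checking the base image against an approved list with verified EOS dates. The image names and dates below are invented for illustration; a real gate would read its allow-list from a controlled source rather than hard-code it.

```python
from datetime import date

# Hypothetical allow-list: image name -> verified end-of-support date.
SUPPORTED_BASE_IMAGES = {
    "example-base:12": date(2027, 6, 30),
    "example-base:11": date(2025, 3, 31),
}

def gate_base_image(image, today):
    """Return (passed, reason); fail closed on unknown or expired images."""
    eos = SUPPORTED_BASE_IMAGES.get(image)
    if eos is None:
        return False, f"{image}: not on the approved image list"
    if today >= eos:
        return False, f"{image}: past end of support ({eos.isoformat()})"
    return True, f"{image}: supported until {eos.isoformat()}"
```

Failing closed (unknown image means blocked build) is what prevents unsupported components from re-entering production through rebuilds.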

5) Create a replacement workflow with deadlines and funding visibility

Your workflow should produce:

  • A remediation ticket with system impact, target version, and planned change window
  • Testing and rollback steps (operational risk matters)
  • Dependencies (application compatibility, drivers, licensing)
  • Approval path (CAB and service owner)

Track remediation items to validated closure with due dates and evidence. This is one of the easiest ways to demonstrate sustained operation versus one-time cleanup. 1

6) Run an exceptions process that auditors will accept

Unsupported components sometimes persist due to technical constraints. Auditors still expect you to control the risk.

Define an exception standard:

  • Business justification (why replacement cannot happen now)
  • Compensating controls (segmentation, restricted access, additional monitoring, virtual patching where applicable)
  • Time limit and a committed replacement plan (project, target state, resourcing)
  • Formal risk acceptance by an accountable executive

Make exceptions expensive: require periodic re-approval and evidence that the replacement plan is moving. If exceptions are easy, they become permanent.
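A periodic check over the exception log can enforce these rules mechanically. The field names below are assumptions about how the log is stored, not a prescribed format.

```python
from datetime import date

def exception_findings(exceptions, today):
    """Flag exceptions that violate the standard: every entry needs an
    expiration, a named executive risk acceptor, and re-approval once expired.

    `exceptions` is assumed to be dicts with "id", "expires" (date or None),
    and "risk_acceptor" (name, or empty when missing).
    """
    findings = []
    for e in exceptions:
        if e.get("expires") is None:
            findings.append((e["id"], "no-expiration"))
        elif e["expires"] <= today:
            findings.append((e["id"], "expired-needs-reapproval"))
        if not e.get("risk_acceptor"):
            findings.append((e["id"], "missing-executive-risk-acceptance"))
    return findings
```

Running this on a cadence, and reporting the findings, is part of what makes exceptions "expensive" rather than permanent.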

7) Perform recurring control health checks

Set a recurring review cadence that produces an auditable output:

  • Report of unsupported components by system and severity
  • Status of remediation tickets
  • Open exceptions and their expiration dates
  • Management reporting and decisions

This specifically addresses the common audit gap: teams cannot show how often the requirement runs or what evidence proves it operated. 1
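One auditable output from this health check is a count of unsupported components per system, sorted so the worst offenders surface first. A minimal sketch, assuming each inventory row carries a `system` name and a boolean `supported` flag:

```python
from collections import Counter

def unsupported_by_system(components):
    """Summarize unsupported components per system, worst first.

    Rows missing a "supported" flag are counted as unsupported,
    matching the verify-before-trust posture used elsewhere.
    """
    counts = Counter(
        c["system"] for c in components if not c.get("supported", False)
    )
    # Sort by descending count, then name, for a stable report layout.
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
```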

Required evidence and artifacts to retain

Auditors rarely argue about the principle. They argue about proof. Retain an “evidence bundle” that shows detection → decision → action → closure.

Minimum evidence bundle (recommended):

  • SA-22 control card/runbook (owner, triggers, steps, exception rules)
  • Component inventory export (with version + support status fields)
  • EOS/EOL source records (vendor notices, support portal screenshots, contract extracts)
  • Remediation tickets (creation date, approvals, implementation notes, closure)
  • Change records (CAB approvals, deployment logs, rollback plans)
  • Exception approvals (risk acceptance, compensating controls, expiration)
  • Periodic reporting (monthly/quarterly decks, metrics, and action items)

Store evidence in a system with retention controls and consistent naming, so you can produce it quickly.

Common exam/audit questions and hangups

Be ready for direct, artifact-driven questions:

  • “Show me how you know what components are in scope and who owns each.”
  • “Pick one system. Prove all major components are supported today.”
  • “How do you learn about EOS/EOL events?”
  • “Show the last time you found an unsupported component and what you did.”
  • “How do you prevent re-introducing unsupported components via rebuilds or new deployments?”
  • “Show your exception log and an example exception with compensating controls and expiration.”

Hangups that trigger findings:

  • Inventory exists but cannot map components to production services.
  • Exceptions have no expiration or no executive risk acceptance.
  • Replacement tickets exist but no closure evidence (no change record, no validation).

Frequent implementation mistakes (and how to avoid them)

  • Treating SA-22 as “patching” — Why it fails: unsupported means no patches exist, so patch SLAs don’t solve it. Fix: make upgrade/migration the default path.
  • Inventory stops at servers and laptops — Why it fails: unsupported middleware, libraries, and images remain invisible. Fix: include runtime images, platforms, and key software dependencies.
  • No trigger mechanism — Why it fails: you find EOS/EOL only during audits. Fix: subscribe to vendor bulletins and tie lifecycle checks to scanning/CMDB.
  • Exceptions become permanent — Why it fails: unsupported tech becomes “accepted risk” indefinitely. Fix: add expirations, re-approval, and proof of replacement progress.
  • No enforcement gate — Why it fails: teams reintroduce unsupported versions in builds. Fix: add CI/CD and change management checks for support status.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific actions.

Operational risk is still concrete: unsupported components remove the supplier’s patch path and can force you into compensating controls that are harder to operate and explain. During incidents, unsupported components also slow containment because containment options (vendor fixes, supported mitigations) may not exist. 2

Practical 30/60/90-day execution plan

Use staged phases so you can show progress quickly without committing to calendar dates you cannot defend.

First 30 days (stabilize the control)

  • Publish the SA-22 control card: owner, scope, triggers, exception rules, evidence list. 1
  • Produce a first-pass inventory for production systems and critical infrastructure.
  • Identify currently unsupported components and open remediation tickets.
  • Stand up an exception log with required approvals and expirations.

By 60 days (enforce and instrument)

  • Connect EOS/EOL monitoring to a repeatable intake (vendor notices, scanning findings, managed service notices).
  • Add a change management check: deployments and rebuilds must confirm supported versions.
  • Build the first recurring report: unsupported list, remediation status, open exceptions, upcoming EOS/EOL.

By 90 days (make it sustainable)

  • Expand inventory coverage to remaining environments and less visible components (images, runtimes, key libraries where feasible).
  • Demonstrate at least one completed replacement end-to-end with closure evidence.
  • Run a control health check and document outcomes and remediation decisions. 1
  • If you use Daydream, map SA-22 to a control card, attach the minimum evidence bundle, and track exceptions and remediation items to validated closure in one place.

Frequently Asked Questions

What counts as a “system component” for SA-22?

Treat any hardware, software, firmware, or platform dependency that your system relies on as a component, including managed services inside your boundary. If its supplier can withdraw support, you should track its lifecycle. 2

Does “unsupported” mean no security patches, or no contract?

Either can create an unsupported condition. If the developer/vendor/manufacturer no longer provides support for that version/model, or your entitlement ends, you lack a reliable patch and fix path. 1

Are exceptions allowed under SA-22?

NIST’s excerpt indicates an alternative path beyond immediate replacement, but you should treat exceptions as temporary risk decisions. Require compensating controls, an expiration, and executive risk acceptance.

How do auditors test SA-22 in practice?

They pick a system, trace its components, confirm support status, then ask for evidence of monitoring and remediation. They also review your exception log for governance and time limits.

What if an application cannot run on the supported version yet?

Open a formal exception and document the technical constraint, compensating controls, and a funded migration plan. Tie it to tracked work with clear ownership and periodic re-approval.

How do we prevent unsupported components from coming back after we clean them up?

Add gates in change management and CI/CD so new deployments must meet support requirements. Recurring health checks catch drift and create evidence that the control operates continuously. 1

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5
