SR-11(2): Configuration Control for Component Service and Repair

SR-11(2) requires you to keep strict configuration control over system components while they are out for service/repair and again after they return but before they go back into production. Operationally, that means chain-of-custody plus “known-good” configuration baselines, tamper checks, and a controlled reintroduction process that verifies integrity and re-applies approved settings. 1

Key takeaways:

  • Treat “awaiting service/repair” and “awaiting return to service” as controlled states with explicit gates and logging.
  • Prove config integrity with baselines, authorized change records, and post-repair validation before reconnecting to your environment.
  • Your evidence needs to connect asset identity → custody → repair event → configuration state → approval to return to service.

The SR-11(2) requirement is a supply chain and operations control disguised as a maintenance workflow. Your exam risk is usually not the repair itself; it’s the gap in configuration governance while hardware (or other components) leaves your physical control or moves between teams. Assessors will look for two things: (1) whether you can show that components awaiting service/repair remain in a controlled, tracked state, and (2) whether repaired components are prevented from reentering production until you confirm their configuration and integrity match what you approved.

This control becomes urgent in environments with shared spares, depot repair, third-party maintenance, or high-turnover field operations. It also shows up in cloud-adjacent contexts (appliances, HSMs, network devices) where a “component” can carry secrets, configs, or firmware that meaningfully changes system risk.

Your goal is to turn repairs into a gated process with clear decision points, required checks, and audit-ready artifacts. If you can demonstrate controlled custody and “return-to-service” validation, you can usually satisfy SR-11(2) even if you rely on third parties for repair. 2

Regulatory text

Requirement (quoted): “Maintain configuration control over the following system components awaiting service or repair and serviced or repaired components awaiting return to service: {{ insert: param, sr-11.02_odp }}.” 1

What the operator must do

You must define which components are in scope (the “organization-defined parameter” in the control text), then ensure those components remain under configuration control in two high-risk windows:

  1. Awaiting service/repair (pre-repair holding state)
  2. Serviced/repaired and awaiting return to service (post-repair quarantine state)

“Configuration control” here should be interpreted as: only authorized, documented, and verifiable configuration changes are permitted, and the component’s identity, firmware/software level, and security-relevant settings can be confirmed before it touches production again. 2

Plain-English interpretation

If a component leaves its normal managed state, assume it can come back different. SR-11(2) expects you to prevent that difference from becoming an unreviewed change in production.

That means:

  • You track the component (asset identity, owner, location, status).
  • You freeze or capture the approved configuration before it leaves controlled operations.
  • You control custody while it’s waiting for repair and while it’s waiting to be reintroduced.
  • You validate and re-baseline after repair (firmware, settings, security controls, keys/secrets handling).
  • You require approval to return to service, backed by evidence.

Who it applies to

Entity scope

  • Federal information systems implementing NIST SP 800-53 controls. 2
  • Contractor systems handling federal data where NIST SP 800-53 is flowed down contractually or through an authorization boundary. 2

Operational contexts where SR-11(2) usually bites

  • Network gear (routers, switches, firewalls) shipped for repair or swapped via RMA.
  • Servers, storage, and hyperconverged nodes repaired by a manufacturer or third party.
  • End-user devices in regulated enclaves (especially where imaging and crypto are controlled).
  • Security components like HSMs, smart cards, or trusted platform modules (TPMs), where a repair event can invalidate assumptions about keys and firmware.
  • Shared spares programs where repaired items reenter a pool.

What you actually need to do (step-by-step)

Step 1: Define “components in scope” and document them

Because the control includes an organization-defined parameter, you must explicitly name the component categories you will govern under SR-11(2). 1

Minimum practical approach:

  • Create an SR-11(2) scope statement listing component types (e.g., “network devices, servers, storage controllers, HSMs, laptops used for admin access”).
  • Tie scope to your asset inventory categories and your system boundary.

Decision tip: Start with components that (a) store sensitive data, (b) enforce security policy, or (c) can introduce network access if compromised.

Step 2: Establish controlled states and gates in your workflow

Create two explicit lifecycle states in your ITSM/asset system:

  • Hold – Awaiting Service/Repair
  • Quarantine – Repaired, Awaiting Return to Service

For each state, define:

  • Who can change the status
  • Where the component can physically reside
  • Whether it can connect to any network
  • What checks must be completed to exit the state

This is where many programs fail: they have “repair tickets,” but no enforced “do not reconnect until verified” gate.
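The gated workflow above can be sketched as a small state machine. This is a hypothetical illustration, assuming a ticketing system that stores per-asset state and completed checks; the state names, gate names, and `transition` helper are all illustrative, not part of the control text.

```python
# Illustrative sketch of SR-11(2) lifecycle states as an enforced state
# machine. State and check names are assumptions, not mandated terminology.
ALLOWED_TRANSITIONS = {
    "in_service": {"hold_awaiting_repair"},
    "hold_awaiting_repair": {"at_repair"},
    "at_repair": {"quarantine_awaiting_return"},
    "quarantine_awaiting_return": {"in_service"},
}

# Exit gates: checks that must be complete before an asset may leave a state.
EXIT_GATES = {
    "hold_awaiting_repair": {"baseline_captured", "custody_logged"},
    "quarantine_awaiting_return": {"validation_passed", "return_approved"},
}

def transition(asset, new_state):
    """Move an asset to new_state only if the transition is allowed and
    every exit gate for the current state has been completed."""
    current = asset["state"]
    if new_state not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"{current} -> {new_state} is not an allowed transition")
    missing = EXIT_GATES.get(current, set()) - asset["completed_checks"]
    if missing:
        raise ValueError(f"cannot leave {current}: missing checks {sorted(missing)}")
    asset["state"] = new_state
    return asset
```

The point of the sketch is the failure mode the step describes: marking a quarantined device “in service” without the validation and approval checks raises an error instead of silently succeeding, which is the enforced gate many programs lack.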

Step 3: Capture the “known-good” configuration baseline pre-repair

Before shipping or handing off internally:

  • Record asset identifiers (serial, asset tag, logical hostname).
  • Export or capture current configuration relevant to security and operations (device config, firmware version, installed modules, enabled services).
  • Confirm the component is approved for repair and that the ticket references the baseline artifact.

If you cannot capture a full baseline (some devices are limited), capture what you can: firmware version, secure boot status, and a signed configuration export if supported.
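A minimal baseline record can be as simple as identity fields, the firmware level, and a hash of the configuration export. The sketch below is illustrative; the field names are an assumed schema, not a standard, and hashing the export (rather than storing it raw) is one way to handle exports that may contain secrets.

```python
import hashlib
from datetime import datetime, timezone

def capture_baseline(serial, asset_tag, hostname, firmware, config_text):
    """Record a pre-repair snapshot: asset identity, firmware level, and a
    SHA-256 of the configuration export so the post-repair state can be
    compared later. Field names are illustrative, not a mandated schema."""
    return {
        "serial": serial,
        "asset_tag": asset_tag,
        "hostname": hostname,
        "firmware": firmware,
        # Hash instead of storing the raw export if it may contain secrets.
        "config_sha256": hashlib.sha256(config_text.encode()).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }
```

Attach the resulting record to the repair ticket so the post-repair validation step has a known-good reference for this specific asset.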

Step 4: Maintain chain-of-custody while awaiting service/repair

You need a custody record that ties the component to a person/team/location at all times until it is handed to the repair function or third party.

Operational controls to implement:

  • Tamper-evident packaging procedures where feasible.
  • Shipping/transfer logs.
  • Access controls for “repair cage” or storage area.
  • A requirement that third-party repair shipments reference your ticket and asset identifiers.

Third-party involvement does not remove your responsibility for configuration control. Treat it as an extension of your supply chain risk problem.

Step 5: Control what the repair process is allowed to change

Repairs can legitimately change components (boards, drives, firmware). SR-11(2) pushes you to make those changes explicit and reviewable.

Implement:

  • Repair ticket categories (no-config-change repair vs. component replacement vs. firmware update).
  • A requirement that any firmware/software updates be documented and approved through your change process (or documented as emergency/break-fix with after-the-fact approval, if your governance allows).
  • A rule that replaced parts that contained storage or secrets follow your media handling / key management process.

Step 6: Quarantine and validate post-repair before return to service

When the component returns:

  • Keep it in Quarantine – Repaired, Awaiting Return to Service.
  • Perform integrity and configuration checks appropriate to the component type:
    • Verify that the firmware version and configuration match the approved baseline or approved change record.
    • Reapply hardening settings if the device returns factory-reset.
    • Run functional and security checks (e.g., disable default accounts, confirm management plane restrictions).
    • If keys/certs could have changed, follow your key rotation or re-provisioning procedures.

Operator reality: Many organizations reintroduce repaired components directly to production “to restore service.” Build an expedited validation path, but keep the gate.
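The quarantine check can be sketched as a comparison of the returned component against the pre-repair baseline, where any drift must be covered by an approved change record. This is a minimal illustration, assuming baseline records shaped like the earlier snapshot; the `approved_changes` parameter is an assumed stand-in for fields an approved change record permits to differ.

```python
def validate_return(baseline, returned, approved_changes=frozenset()):
    """Compare a returned component against its pre-repair baseline.
    Any field that differs must be listed in approved_changes (fields an
    approved change record says may differ); anything else is a finding.
    An empty findings list means the component passes the quarantine gate."""
    findings = []
    for field in ("serial", "firmware", "config_sha256"):
        if baseline[field] != returned[field] and field not in approved_changes:
            findings.append(
                f"{field}: expected {baseline[field]}, got {returned[field]}"
            )
    return findings
```

A documented firmware update passes when "firmware" is in the approved set; undocumented configuration drift produces a finding and keeps the asset in quarantine, which supports the expedited-but-gated validation path described above.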

Step 7: Formal approval to return to service and re-baseline

Require an approval step (control owner delegate is fine) that confirms:

  • The component is the correct asset (identity match)
  • The repair outcome is documented
  • The configuration state is approved
  • Evidence is attached to the ticket

Then:

  • Mark the component In Service
  • Update the configuration baseline repository to reflect approved changes
  • Close the repair/change records

Step 8: Make it auditable (map owner, procedure, recurring evidence)

SR-11(2) fails most often due to weak evidence. Assign a control owner, write an implementation procedure, and define what evidence is produced each time. This is also the fastest way to make the requirement assessable in a GRC tool like Daydream, where you can map SR-11(2) to the workflow artifacts you already generate. 1

Required evidence and artifacts to retain

Keep evidence that proves configuration control across the two states.

Evidence artifact | What it proves | Where it usually lives
SR-11(2) scope statement (components in scope) | You defined the ODP scope | Policy/standards repo, GRC system
Repair ticket with asset identity | Traceability of the event | ITSM
Pre-repair configuration snapshot/baseline | Known-good starting point | Config mgmt repo, device backup system
Chain-of-custody / shipping logs | Controlled custody | Shipping system, physical security log, ITSM attachments
Repair report / RMA documentation | What changed during repair | Third-party portal, email, ITSM attachments
Post-repair validation checklist results | Verification before production | ITSM task list, runbook output
Return-to-service approval record | Enforced gate | ITSM change approval, e-signature
Updated baseline / exception record | Approved delta is controlled | CMDB/config repo, change record

Common exam/audit questions and hangups

  • “Which components are covered by SR-11(2), and why?” Expect pushback if the scope is “everything” without operational support, or “only servers” without rationale. 1
  • “Show me a sample repair from start to finish with config evidence.” Auditors want a single thread: baseline → custody → repair docs → validation → approval.
  • “How do you prevent a repaired device from being connected before validation?” If your answer is “we tell techs,” expect a finding. Build workflow gates and network controls where possible.
  • “What happens if repair requires firmware updates?” They will look for change control alignment and an updated baseline.

Frequent implementation mistakes and how to avoid them

  1. No explicit quarantine state post-repair.
    Fix: enforce a quarantine status in ITSM/CMDB and block reconnect until a validation task is complete.

  2. Baseline exists, but not per asset.
    Fix: store per-asset config backups or per-model hardened templates plus a documented delta for the specific device.

  3. Chain-of-custody is “somewhere in email.”
    Fix: attach shipping labels, tracking IDs, or handoff forms to the ticket as required fields.

  4. Third-party repair is treated as out of scope.
    Fix: require third-party repair documentation, and treat post-repair validation as mandatory regardless of repair source.

  5. Return-to-service approval is informal.
    Fix: make approval a workflow step with a named approver role and a time-stamped record.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so you should treat SR-11(2) primarily as an assessment and authorization readiness requirement under NIST SP 800-53. 2

Risk-wise, this control addresses:

  • Integrity risk (unexpected firmware/config changes)
  • Supply chain and repair channel tampering risk
  • Unauthorized access paths introduced by default configs after repair
  • Loss of traceability for security-relevant changes

If you operate under an ATO or customer security assessment, SR-11(2) gaps often show up as “configuration management” findings even though the root cause is operational maintenance.

Practical 30/60/90-day execution plan

First 30 days: Define scope, owners, and workflow gates

  • Assign a control owner (typically IT operations or security engineering) and a process owner (ITSM/asset management).
  • Draft the SR-11(2) scope list (the components you will control per the ODP).
  • Add the two states to ITSM/CMDB and require a repair ticket for any movement into/out of those states.
  • Create a one-page post-repair validation checklist per component class (network, server, endpoint, security appliance).

Days 31–60: Implement baselines and evidence capture

  • Standardize pre-repair baseline capture (config export or template+delta).
  • Standardize evidence attachments in ITSM (required fields for shipping, repair report, validation results).
  • Train the teams that physically handle equipment (field services, data center ops).
  • Pilot the process with a small set of components and run an internal “audit” by pulling two end-to-end samples.

Days 61–90: Scale, test, and make it assessor-ready

  • Expand scope to additional component types based on repair frequency and risk.
  • Add technical enforcement where possible (e.g., NAC/quarantine VLAN for returned devices pending validation).
  • Define exceptions (urgent restore) with documented compensating controls and after-action evidence.
  • Map SR-11(2) in Daydream (or your GRC system) to control owner, procedure, and recurring evidence so assessments become a retrieval exercise, not a scramble. 1

Frequently Asked Questions

Does SR-11(2) apply to cloud services?

SR-11(2) is about “system components” that undergo service or repair, so it most directly applies to components you can take out of service and return. In cloud-heavy environments, it often maps to appliances, customer-managed hardware, or dedicated connectivity/security devices in your boundary. 2

What counts as “configuration control” for a repaired device?

You need a known approved configuration, a record of what changed during repair, and validation that the returned component matches an approved state before production use. If you cannot verify configuration, treat the device as untrusted until it is reimaged/reprovisioned to a known-good baseline. 1

We do advanced exchange RMAs. How do we stay compliant?

Treat the incoming replacement as “repaired component awaiting return to service” and quarantine it until you validate firmware/config and apply your hardened template. Keep the chain-of-custody and validation evidence on the exchange ticket. 2

Can we meet SR-11(2) with a standard operating procedure and tickets, or do we need technical controls?

A procedure plus enforced workflow gates can satisfy the requirement if it is consistently followed and evidenced. Technical controls (quarantine VLAN, blocked ports, device enrollment requirements) reduce reliance on human process and typically reduce audit friction. 2

What evidence do auditors ask for first?

They usually ask for one or two repair examples and want to see: baseline before repair, custody/shipping record, repair documentation, post-repair validation, and an approval to return to service. If any link is missing, the control reads as “not operating.” 1

How should we handle components that might contain secrets (drives, HSMs, admin laptops)?

Add a repair-specific checklist step for secrets handling: confirm whether keys/certs were exposed or reset, and follow your key management and media handling processes before returning to service. Document the outcome in the repair record so you can show configuration control across the repair boundary. 2

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream