SC-13(1): FIPS-validated Cryptography

SC-13(1) requires you to use FIPS-validated cryptographic modules (not just “strong encryption”) for the system’s cryptographic functions where the control applies. To operationalize it fast, inventory every place you encrypt, sign, hash, generate keys, or terminate TLS, then prove each function is handled by a FIPS-validated module and retain evidence that maps modules, configurations, and environments to the validation boundary 1.

Key takeaways:

  • “FIPS-validated” is about the cryptographic module’s validation status, not your cipher-suite preferences.
  • The hard part is scoping: find every crypto touchpoint across infrastructure, apps, endpoints, and third-party services.
  • Audits fail on evidence: you need a defensible mapping from each crypto use case to a specific validated module and configuration 1.

The SC-13(1) FIPS-validated cryptography requirement is a frequent audit friction point because teams think in terms of “we use TLS” or “data is encrypted at rest,” while assessors test whether the cryptographic module performing those functions is actually FIPS-validated and operated within its validated boundary. SC-13(1) is not a vague preference for modern algorithms; it is an implementation constraint that affects product selection, deployment architecture, and day-to-day operations.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat SC-13(1) like an engineering inventory and evidence problem: (1) enumerate all cryptographic functions in scope, (2) identify the module that performs each function, (3) confirm the module is FIPS-validated for the environment you run, (4) document the configuration that keeps it within the validated boundary, and (5) retain proof that this stays true through change management. Your goal is simple: be able to answer, for any system crypto claim, “which validated module did it, and how do we know?” 1.

Requirement: SC-13(1) FIPS-validated cryptography (operator implementation guide)

Control intent: Ensure cryptographic protections rely on validated cryptographic modules rather than ad hoc or non-validated implementations 1.

Plain-English interpretation

SC-13(1) means: when your system uses cryptography (encryption, digital signatures, hashing for security purposes, key generation, key establishment, TLS termination), the cryptographic module doing that work must be FIPS-validated, and you must be able to prove it. If a component cannot operate with a FIPS-validated module in your deployment model, it becomes a compliance exception that needs risk acceptance and compensating controls 1.

Who it applies to (entity and operational context)

This requirement commonly applies to:

  • Federal information systems implementing NIST SP 800-53 controls 2.
  • Contractor systems handling federal data where NIST SP 800-53 is flowed down contractually or through an authorization boundary 2.

Operationally, SC-13(1) touches:

  • Cloud workloads (IaaS, PaaS, managed databases, managed HSM/KMS)
  • Network security (TLS inspection, VPNs, load balancers, API gateways)
  • Application stacks (language runtimes, crypto libraries, JWT signing, mTLS)
  • Endpoints (full disk encryption, secure boot chains, credential storage)
  • DevOps pipelines (signing artifacts, verifying signatures, secret encryption)
  • Third parties that perform crypto on your behalf (SaaS encrypting customer data, managed key services)

If you can’t clearly describe your system boundary, you can’t scope SC-13(1). Make boundary definition the first gating item 2.

Regulatory text

The source for this requirement is NIST SP 800-53 control SC-13(1) 1.

What the operator must do: implement SC-13(1) so that cryptographic functions used by the system are performed by FIPS-validated cryptography and maintain assessment-ready evidence that the implemented cryptographic modules are validated and deployed/configured consistently with the validated boundary 1.

What you actually need to do (step-by-step)

Step 1: Define scope and crypto “touchpoints”

Create a “crypto register” for the system boundary. For each item, capture:

  • Component/service name (e.g., API gateway, RDS/SQL, object storage, service mesh sidecar)
  • Crypto function (TLS termination, at-rest encryption, signing, hashing, password storage, token signing)
  • Data classification and sensitivity driver (why crypto is required)
  • Where keys live and who administers them

Practical tip: start from architecture diagrams + a packet capture/TLS inventory + KMS/HSM inventory + application dependency manifests. Most misses are hidden in “platform glue” like ingress controllers, service meshes, and CI signing steps.
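A crypto register can start as a simple structured list. The sketch below is illustrative only (component names, fields, and the `UNKNOWN` sentinel are assumptions, not part of any standard), but it shows the shape that makes gaps visible:

```python
from dataclasses import dataclass

@dataclass
class CryptoTouchpoint:
    """One row in the crypto register (field names are illustrative)."""
    component: str           # e.g. "api-gateway"
    crypto_function: str     # e.g. "TLS termination"
    data_classification: str # why crypto is required
    key_custodian: str       # who administers the keys
    module: str = "UNKNOWN"  # filled in during Step 2

register = [
    CryptoTouchpoint("api-gateway", "TLS termination", "CUI", "platform-team"),
    CryptoTouchpoint("object-storage", "at-rest encryption", "CUI", "cloud-kms"),
]

# Gaps are the entries still missing a module mapping.
gaps = [t.component for t in register if t.module == "UNKNOWN"]
```

Even a spreadsheet works; the point is that every touchpoint is a row, and every row without a named module is an open work item.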

Step 2: Identify the cryptographic module for each touchpoint

For every crypto function, name the module doing the crypto:

  • OS crypto module (for example, a platform crypto provider)
  • Application library module
  • Hardware module (HSM)
  • Managed service module (cloud KMS/HSM, managed database encryption layer)

Your evidence goal is a 1:1 mapping: crypto use case → module → deployment location.
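That 1:1 mapping can be checked mechanically. A minimal sketch, with entirely hypothetical component and module names:

```python
# Map (component, crypto function) -> (module, deployment location).
mapping = {
    ("api-gateway", "TLS termination"): ("openssl-fips-provider", "ingress nodes"),
    ("ci-pipeline", "artifact signing"): ("cloud-kms-hsm", "primary region"),
}

def unmapped(use_cases, mapping):
    """Return use cases that have no module assigned yet."""
    return [uc for uc in use_cases if uc not in mapping]

todo = unmapped(
    [("api-gateway", "TLS termination"), ("service-mesh", "mTLS")],
    mapping,
)
# ("service-mesh", "mTLS") has no module assigned and needs investigation.
```

Running this kind of check as part of the register review keeps “which module did it?” answerable at all times.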

Step 3: Verify FIPS validation status and boundary fit

For each module, confirm:

  • The module is FIPS-validated (not “FIPS compliant,” not “FIPS mode available”)
  • The validation applies to the version/build you run
  • The validation applies to the operating environment you run (OS, hardware, cloud service boundary)
  • You have a method to detect drift (patching can silently change the module version)

If a third party performs cryptography (SaaS encrypting stored data, signing, key management), treat it as a dependency that must provide FIPS validation evidence within your due diligence package.
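Drift detection can be as simple as comparing what is running against a pinned, validated configuration. The sketch below assumes you can observe the module name, version, and environment (the values shown are invented):

```python
def check_module(observed: dict, approved: dict) -> list:
    """Compare an observed module against the pinned validated
    configuration; return human-readable drift findings."""
    findings = []
    for key in ("module", "version", "environment"):
        if observed.get(key) != approved.get(key):
            findings.append(
                f"{key} drift: running {observed.get(key)!r}, "
                f"validated {approved.get(key)!r}"
            )
    return findings

approved = {"module": "openssl-fips", "version": "3.0.9", "environment": "rhel9"}
observed = {"module": "openssl-fips", "version": "3.0.12", "environment": "rhel9"}
findings = check_module(observed, approved)  # flags the version drift
```

Wiring a check like this into patch pipelines turns “the validation applies to the version you run” from a policy statement into an alert.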

Step 4: Enforce approved configurations

Validated modules often require specific settings to operate in a validated mode. Operationalize this as a configuration standard:

  • TLS settings and crypto policies (what libraries/providers are permitted)
  • Key sizes and algorithms permitted (tie to your crypto standard)
  • Runtime flags and OS-level crypto policy settings
  • Restrictions on “fallback” libraries in containers or language runtimes

Make enforcement real:

  • Infrastructure-as-code checks for cryptographic settings
  • Golden images/baselines for hosts
  • Admission controls for Kubernetes images that bring their own crypto stacks
  • SDLC guardrails to prevent developers from pulling non-validated crypto libraries
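One concrete SDLC guardrail is a dependency scan that flags crypto libraries not on your approved list. This sketch assumes a pip-style requirements file; the allowlist and the “known crypto library” names are illustrative, not a vetted catalog:

```python
# Illustrative lists -- maintain your own from the crypto standard.
APPROVED_CRYPTO_LIBS = {"cryptography"}
KNOWN_CRYPTO_LIBS = {"cryptography", "pycryptodome", "pynacl"}

def crypto_violations(requirements):
    """Flag dependencies that are crypto libraries but not approved."""
    names = [line.split("==")[0].strip().lower()
             for line in requirements if line.strip()]
    return [n for n in names
            if n in KNOWN_CRYPTO_LIBS and n not in APPROVED_CRYPTO_LIBS]

violations = crypto_violations(["requests==2.31.0", "pycryptodome==3.19.0"])
```

Run it in CI so a developer pulling an unapproved crypto stack fails the build instead of surfacing at assessment time.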

Step 5: Build change management triggers

SC-13(1) breaks during routine work. Add explicit change triggers:

  • OS patching or base image updates
  • Language runtime/library updates (OpenSSL, Java crypto providers)
  • Load balancer / ingress updates
  • Enabling new regions, instance types, or FIPS endpoints in cloud
  • Vendor/service plan changes that alter encryption implementation

Your change process should require a “crypto impact” check and evidence refresh when triggers occur.
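A crude but effective first implementation of the crypto-impact check is keyword matching on change descriptions, mirroring the trigger list above (the trigger strings here are examples, not an exhaustive set):

```python
# Illustrative trigger list mirroring the change triggers above.
CRYPTO_TRIGGERS = ("base image", "openssl", "runtime", "ingress",
                  "load balancer", "fips", "kms", "tls")

def needs_crypto_review(change_description: str) -> bool:
    """True if a change ticket matches any crypto-impact trigger."""
    text = change_description.lower()
    return any(t in text for t in CRYPTO_TRIGGERS)

needs_crypto_review("Bump base image to latest")  # -> True
needs_crypto_review("Update marketing copy")      # -> False
```

False positives are fine here; the cost of a reviewer saying “no crypto impact” is far lower than a silent module change.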

Step 6: Document exceptions and compensating controls

If any crypto function cannot be backed by a FIPS-validated module:

  • Record the exception with clear scope (component, data types, environments)
  • Document business justification and risk acceptance path
  • Add compensating controls (segmentation, additional monitoring, stronger key custody, reduced exposure)
  • Put a remediation plan on a trackable timeline (avoid “temporary” exceptions that never close)
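To keep “temporary” exceptions from living forever, give every exception a remediation date and report on the ones that have slipped. A minimal sketch with invented exception records:

```python
from datetime import date

def overdue_exceptions(exceptions, today):
    """Return exception IDs whose remediation deadline has passed."""
    return [e["id"] for e in exceptions if e["remediate_by"] < today]

exceptions = [
    {"id": "EXC-7", "component": "legacy-vpn", "remediate_by": date(2024, 1, 31)},
    {"id": "EXC-9", "component": "batch-signer", "remediate_by": date(2026, 6, 30)},
]
overdue = overdue_exceptions(exceptions, date(2025, 1, 1))  # -> ["EXC-7"]
```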

Required evidence and artifacts to retain

Assessors typically want proof that is specific, current, and traceable. Build an evidence pack per system:

Inventory and mapping

  • Crypto register (system crypto touchpoints and module mapping)
  • System boundary description and architecture diagrams
  • Data flow diagrams that show encryption points and key custody

Validation proof

  • Vendor/module FIPS validation references and version/build identifiers
  • Cloud provider or third-party attestation artifacts if crypto is performed by a managed service (retain in third-party due diligence records)

Configuration proof

  • Baseline configuration standards (crypto policy, TLS policy, key management standard)
  • Screenshots/exports/config files showing FIPS mode enabled where applicable
  • IaC snippets or policy-as-code rules enforcing approved crypto modules/settings

Operational proof

  • Change tickets showing crypto impact review
  • Patch records tied to module version tracking
  • Periodic reviews (internal control checks) confirming no drift
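Evidence goes stale faster than teams expect. A simple freshness check against your refresh cadence (the 90-day cadence and artifact names below are assumptions, not a regulatory requirement) keeps the pack assessment-ready:

```python
from datetime import date, timedelta

def stale_artifacts(artifacts, today, max_age_days=90):
    """Flag evidence artifacts older than the refresh cadence."""
    cutoff = today - timedelta(days=max_age_days)
    return [a["name"] for a in artifacts if a["collected"] < cutoff]

artifacts = [
    {"name": "fips-mode-screenshot-prod", "collected": date(2024, 1, 15)},
    {"name": "module-version-export", "collected": date(2025, 2, 1)},
]
stale = stale_artifacts(artifacts, date(2025, 3, 1))
```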

If you use Daydream to manage controls, structure this as a control record with: owner, implementation procedure, and recurring evidence artifacts so evidence refresh becomes routine rather than a pre-audit scramble 1.

Common exam/audit questions and hangups

Expect these, and pre-answer them in your evidence pack:

  • “Where exactly is TLS terminated, and what module performs the cryptography there?”
  • “Show that the crypto module version in production matches the validated version.”
  • “How do you prevent teams from introducing non-validated crypto libraries in containers?”
  • “If you rely on a third party for encryption, what proof do you have that their cryptographic module is FIPS-validated?”
  • “What is your process when a module is upgraded or patched?”

Hangup pattern: teams provide a policy statement (“we require FIPS”) but cannot show runtime proof, version pinning, or change control linkage.

Frequent implementation mistakes (and how to avoid them)

  1. Confusing algorithms with validation. “AES-256” is not evidence of FIPS validation. Track the module and its validated boundary.
  2. Assuming “FIPS mode” equals “FIPS validated.” Many products can run in a FIPS-like configuration without being validated for your exact version/environment.
  3. Ignoring managed services. If your database, messaging platform, or SaaS encrypts data, you still need due diligence artifacts that support your control claim.
  4. No drift detection. Container rebuilds, AMI updates, and automated patching can change crypto modules silently. Tie module verification to release and patch pipelines.
  5. Overlooking internal-only crypto. JWT signing, password hashing, service-to-service mTLS, and artifact signing often sit outside “data at rest/in transit” checklists. Put them in the crypto register.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific actions. Practically, the risk is not theoretical: if you cannot show validated cryptography, assessors may treat the control as not implemented, which can drive authorization conditions, contract findings, or remediation mandates in federal-aligned programs 2.

Practical execution plan (30/60/90)

Speed matters here. Treat the day counts as phase labels and sequencing guidance, not hard commitments.

First 30 days (Immediate stabilization)

  • Assign a single accountable owner for SC-13(1) evidence quality (GRC + security engineering pairing).
  • Build the crypto register for the highest-risk paths: external TLS termination, key management, and data stores.
  • Identify obvious gaps: non-standard crypto libraries, unknown TLS termination points, unmanaged keys.
  • Stand up an evidence repository and a naming convention so artifacts are retrievable during an assessment.

By 60 days (Control hardening)

  • Expand the crypto register to cover internal service-to-service crypto, signing, and CI/CD cryptographic functions.
  • Implement baseline configurations (FIPS mode settings where required, approved libraries/providers list).
  • Add change triggers to your change management process and release pipeline checks for crypto-impact changes.
  • Formalize third-party evidence collection for any external services performing cryptography.

By 90 days (Assessment-ready operations)

  • Run an internal “tabletop audit”: pick a crypto claim (e.g., “TLS 1.2+ everywhere”) and trace it to the module, version, configuration, and validation evidence.
  • Close or formally accept exceptions with compensating controls and a remediation plan.
  • Schedule recurring evidence refresh aligned to patch cycles and major releases.
  • If you use Daydream, convert the crypto register into recurring tasks tied to change events so SC-13(1) stays current without heroics 1.

Frequently Asked Questions

Does SC-13(1) mean every encryption setting must be “FIPS mode”?

It means cryptographic functions must be performed by FIPS-validated modules where the requirement applies. “FIPS mode” can be part of meeting that requirement, but you still need to prove the module is validated and deployed within its validated boundary 1.

If our cloud provider says “encryption at rest is enabled,” is that enough?

Not by itself. You still need documentation that identifies the cryptographic module/service boundary and supports that it is FIPS-validated for the service and deployment context you use 1.

What’s the fastest way to find all cryptography in an environment?

Start with TLS termination points, KMS/HSM inventories, and data stores, then expand to application signing/hashing and CI/CD. Treat “crypto discovery” as an architecture and dependency mapping exercise, not a policy exercise.

How do we handle third parties that perform cryptography (SaaS, managed platforms)?

Put them in scope as third-party dependencies and collect evidence in due diligence records. Your assessor will still ask how you validated the crypto claim, even if you don’t operate the module directly.

What evidence fails most often in practice?

Teams cannot tie production module versions to validation evidence, or they cannot show that approved crypto settings are enforced through configuration management and change control.

Can we claim partial compliance if only some components are FIPS-validated?

You can document a scoped implementation with explicit exceptions, but you should expect findings if in-scope cryptographic functions rely on non-validated modules. Keep exceptions tightly bounded and backed by compensating controls and a remediation plan 1.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

See Daydream