SC-6: Resource Availability
To meet the SC-6 (Resource Availability) requirement, you must define which system resources are critical to keep available (for example, bandwidth, CPU, storage, connections) and implement allocation controls that prevent any user, process, or third party workload from exhausting them. Operationalize SC-6 by setting resource thresholds, enforcing quotas/rate limits, monitoring saturation, and retaining evidence that allocation is configured and reviewed.
Key takeaways:
- SC-6 is about preventing resource exhaustion that causes outages or denial-of-service conditions, including self-inflicted incidents.
- You need explicit allocation rules (quotas, limits, reservations, rate limits) tied to measurable thresholds and monitored in production.
- Auditors look for repeatable evidence: configs, dashboards, alerts, incident records, and periodic reviews mapped to a named control owner.
SC-6: Resource Availability sits in the NIST SP 800-53 System and Communications Protection (SC) family and focuses on a practical availability problem: shared resources get exhausted. That exhaustion can be malicious (DoS) or accidental (a runaway job, noisy neighbor, misconfigured autoscaling, unbounded API calls, log growth, connection pool depletion). Either way, the system becomes unavailable to legitimate users and dependent services.
For a Compliance Officer, CCO, or GRC lead, SC-6 is easiest to implement by treating it as an engineering control with compliance wrappers: (1) define which resources matter for each in-scope system, (2) set enforceable allocation boundaries, (3) monitor and alert on approaching limits, and (4) keep the evidence that proves these boundaries exist and are maintained.
This page gives requirement-level implementation guidance you can hand to an infrastructure or platform owner and then assess quickly. It also flags the common audit hangups: undefined “resources,” controls that exist only on paper, and missing proof that allocation is enforced in the live environment.
Regulatory text
NIST excerpt (SC-6): “Protect the availability of resources by allocating {{ insert: param, sc-06_odp.01 }} by {{ insert: param, sc-06_odp.02 }}.” 1
Operator interpretation: NIST is requiring you to (a) identify resource types that must remain available, then (b) allocate them in a defined way to prevent exhaustion. The placeholders in the OSCAL text represent organization-defined parameters; your job is to fill those in with specific resource categories and allocation mechanisms appropriate to your system. 2
What an assessor expects: A documented decision about which resources are protected, plus implemented technical controls that enforce allocation, plus monitoring/response that shows you will detect and correct resource pressure before it becomes an outage.
Plain-English interpretation (what SC-6 is really asking)
SC-6 requires guardrails on shared capacity so one workload cannot starve others. “Resources” includes more than servers. In most environments it includes:
- Compute: CPU, memory, container/pod limits, VM reservations.
- Storage: disk space, inode exhaustion, object store quotas, log volume growth.
- Network: bandwidth, egress limits, NAT gateways, load balancer capacity.
- Service-level constraints: database connections, thread pools, message queue depth, API request rate, authentication rate limits.
- Third party dependencies: SaaS API quotas, payment gateway rate limits, managed database IOPS limits.
A practical test: if the resource hits a meaningful percentage (or a hard platform limit), do you have an enforced control that prevents one actor from taking the whole thing?
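The practical test above can be sketched as a small check. This is an illustrative helper, not anything SC-6 prescribes: the function name, the 80% warning threshold, and the "one actor holds half the usage" heuristic are all assumptions you would tune per resource.

```python
# Toy "practical test" for one shared resource: are we near the limit,
# and could a single consumer plausibly take the whole thing?
def exhaustion_risk(capacity: float, used: float, top_consumer: float,
                    warn_pct: float = 0.8) -> dict:
    """Summarize exhaustion risk for a finite resource (connections, disk, rate)."""
    utilization = used / capacity
    return {
        "utilization": round(utilization, 2),
        "near_limit": utilization >= warn_pct,          # approaching saturation
        "single_actor_dominant": (top_consumer / used >= 0.5) if used else False,
    }

# Example: a 500-connection DB pool with 420 in use, one service holding 260.
print(exhaustion_risk(500, 420, 260))
```

If both flags come back true, you have exactly the situation SC-6 targets: a saturating resource with no allocation boundary stopping the dominant consumer.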
Who it applies to (entity and operational context)
Applies to:
- Federal information systems and contractor systems handling federal data implementing NIST SP 800-53 controls in an authorization, assessment, or customer-driven flowdown context. 2
Operational contexts where SC-6 is commonly assessed:
- Multi-tenant services (internal or customer-facing) with “noisy neighbor” risk.
- Cloud/container platforms where teams self-provision resources.
- Internet-facing APIs (risk of volumetric spikes and application-layer abuse).
- Data processing pipelines with bursty jobs and long-running workloads.
- Systems with critical external rate limits (IdP, payment processors, KYC providers).
What you actually need to do (step-by-step)
Use this as an implementation checklist you can assign to a control owner.
Step 1 — Set control ownership and scope boundaries
- Name a control owner (usually Platform/SRE, Infrastructure, or Service Owner).
- Define in-scope services (those in your system boundary or customer-required boundary).
- Define critical user journeys (what must remain available) to prioritize resources.
Deliverable: SC-6 control record in your GRC system mapping owner, scope, and evidence cadence. This is also where Daydream fits cleanly: track SC-6 as a requirement with an owner, procedure, and recurring evidence tasks so audits are repeatable instead of ad hoc.
Step 2 — Identify “resources” for each in-scope service
For each service, list:
- Resource type (CPU, memory, DB connections, API rate, disk, queue).
- Where it is enforced (Kubernetes, cloud service quotas, WAF/API gateway, database, OS).
- Failure mode (what happens when exhausted).
- Dependency owner (internal team or third party).
Tip for speed: Start with production incident history and top alerts. If you have no history, start with the platform limits you already know are finite: DB connections, disk, and API rate.
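The Step 2 inventory is easiest to keep current as structured data in version control next to the SC-6 control record. A minimal sketch, with illustrative field names and example rows (nothing here is mandated by the control):

```python
# Assumed shape for the Step 2 resource inventory; the services and
# enforcement points below are placeholders, not real systems.
from dataclasses import dataclass, asdict

@dataclass
class ResourceEntry:
    service: str            # in-scope service name
    resource: str           # CPU, memory, DB connections, API rate, disk, queue
    enforced_at: str        # Kubernetes, cloud quota, gateway, database, OS
    failure_mode: str       # what happens when exhausted
    dependency_owner: str   # internal team or third party

inventory = [
    ResourceEntry("billing-api", "DB connections", "PostgreSQL max_connections",
                  "new requests error out", "Platform/SRE"),
    ResourceEntry("billing-api", "API rate", "API gateway token bucket",
                  "clients receive 429s", "Platform/SRE"),
]

# Render rows for the evidence table an assessor will ask for.
for row in inventory:
    print(asdict(row))
```

Exporting this as a table doubles as the "resource inventory" evidence artifact described later on this page.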
Step 3 — Define allocation rules (the heart of SC-6)
Pick allocation mechanisms that match the resource:
| Resource | Common allocation control | Practical example |
|---|---|---|
| CPU / memory | Requests/limits, reservations, cgroups | Per workload container limits; namespace quotas |
| Database connections | Pool caps, per-role limits | Max connections per app identity; pool size upper bounds |
| API requests | Rate limits, quotas, WAF rules | Token bucket per API key/tenant/IP |
| Disk/log growth | Quotas, retention, log rotation | Max volume size; retention windows; alert on growth |
| Queues/streams | Quotas, backpressure, max lag | Consumer group lag alerts; publish rate throttles |
Your allocation rules must be explicit (numbers exist somewhere) and enforced (not a guideline). SC-6 does not require a specific technology; it requires that the availability risk is controlled by allocation. 2
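To make the "token bucket per API key/tenant/IP" row concrete, here is a minimal token-bucket sketch. The rate and burst numbers are illustrative; in production this enforcement normally lives in a gateway or WAF policy, not application code.

```python
# Minimal token bucket: refill at a fixed rate, spend one token per request,
# refuse (HTTP 429 territory) when the bucket is empty.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller should reject the request

bucket = TokenBucket(rate_per_sec=10, burst=5)
results = [bucket.allow() for _ in range(7)]
print(results)  # the first 5 pass on the initial burst; later calls depend on refill
```

Keeping one bucket per API key or tenant is what turns this from a global cap into the per-actor allocation SC-6 is after.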
Step 4 — Implement monitoring and alerting for saturation
For each protected resource:
- Choose a metric (for example: CPU throttling, memory OOM kills, disk %, connection count, request rate, 429 rates).
- Define an alert strategy (warning vs critical).
- Define who receives alerts and expected response path (on-call, incident channel).
Keep it tight: a small set of high-signal saturation alerts beats dozens of low-quality ones.
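The warning/critical split above can be expressed as a simple threshold table. Metric names and thresholds here are assumptions for illustration; real alert rules would live in your monitoring system, not an ad hoc script.

```python
# Illustrative saturation thresholds: (warning, critical) per metric,
# expressed as fractions of capacity or of total requests.
THRESHOLDS = {
    "disk_used_pct":      (0.80, 0.90),
    "db_connections_pct": (0.70, 0.85),
    "api_429_rate":       (0.01, 0.05),  # share of requests rejected
}

def severity(metric: str, value: float) -> str:
    warn, crit = THRESHOLDS[metric]
    if value >= crit:
        return "critical"  # page on-call
    if value >= warn:
        return "warning"   # notify incident channel
    return "ok"

print(severity("db_connections_pct", 0.88))  # critical: connections near the cap
```

Note the two-tier design: warnings buy time to act before the hard limit, which is exactly the "detect and correct before outage" behavior assessors look for.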
Step 5 — Add runbooks and operational response
Write short runbooks that answer:
- How to confirm resource exhaustion.
- How to identify the top consumer (tenant/workload/user).
- How to apply mitigation safely (throttle, scale, shed load, disable feature, block abusive IPs, expand capacity).
- How to prevent recurrence (add tighter quotas, fix unbounded queries, change defaults).
Step 6 — Prove it stays in place (governance and review)
Build a lightweight recurring review:
- Confirm quotas/limits exist for new services.
- Review changes to allocation defaults.
- Review recent saturation events and whether thresholds were adequate.
This is where many programs fail: controls exist at launch, then drift. Map SC-6 to recurring evidence artifacts and reviews so you can show continued operation.
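A drift check is the cheapest way to catch that failure mode: diff the limits currently deployed against the approved baseline from your inventory. Both inputs below are illustrative stand-ins for real config exports.

```python
# Step 6 sketch: flag any limit that changed or disappeared since the
# last review. Keys and values are placeholders for real config exports.
baseline = {"billing-api/max_connections": 50, "billing-api/rate_limit_rps": 100}
deployed = {"billing-api/max_connections": 50, "billing-api/rate_limit_rps": 500}

def find_drift(baseline: dict, deployed: dict) -> list:
    """Return (setting, expected, actual) tuples for every drifted limit."""
    drift = []
    for key, expected in baseline.items():
        actual = deployed.get(key)
        if actual != expected:
            drift.append((key, expected, actual))
    return drift

print(find_drift(baseline, deployed))  # flags the rate limit that was loosened
```

Running a check like this on a schedule, and keeping its output, is itself review evidence.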
Required evidence and artifacts to retain
Keep evidence that answers two questions: What is allocated? and How do you know it works?
Minimum evidence set (retain per in-scope system/service):
- SC-6 procedure describing identification of resources, allocation methods, and monitoring responsibilities.
- Resource inventory (table) mapping services → resource types → allocation mechanism → enforcement point.
- Configuration evidence (screenshots, exports, IaC snippets, policy-as-code):
  - Kubernetes quotas/limits, namespace quotas.
  - API gateway/WAF rate limit policies.
  - Database max connections / parameter groups.
  - Cloud quotas and service limit settings where configurable.
- Monitoring evidence:
  - Dashboard screenshots showing resource saturation metrics.
  - Alert definitions (rule configs) and notification routing.
- Operational evidence:
  - Incident tickets/postmortems for resource exhaustion events.
  - Change tickets showing quota adjustments.
- Review evidence:
  - Periodic control check results (sign-off, meeting notes, control attestation).
Daydream (as a workflow layer) is a practical way to assign evidence tasks to owners and keep the artifacts attached to SC-6 over time, which reduces the “scramble” during assessments.
Common exam/audit questions and hangups
Expect these questions, and pre-answer them in your evidence:
- “What resources are covered by SC-6 for this system boundary?”
- “Where are the allocation settings configured, and who can change them?”
- “Show me enforcement in production, not a policy document.”
- “How do you prevent one tenant or one job from exhausting shared capacity?”
- “What alerts tell you a resource is approaching exhaustion?”
- “Show a ticket where you tuned thresholds or quotas after an event.”
Common hangup: teams provide autoscaling as the only control. Autoscaling helps, but SC-6 is about allocation and protection. You still need boundaries (rate limits, quotas, connection caps) because some exhaustion modes scale poorly (DB connections, third party API quotas, disk).
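The connection-cap point can be shown with a toy pool: even when app replicas autoscale, each identity keeps a hard cap and sheds load instead of queuing forever. This is a sketch; real caps belong at the pool or database layer.

```python
# Toy capped connection pool: refuse immediately at the cap so callers
# can shed load, rather than piling up waiters until the DB falls over.
import threading

class CappedPool:
    def __init__(self, max_connections: int):
        self._slots = threading.BoundedSemaphore(max_connections)

    def try_acquire(self) -> bool:
        # blocking=False: fail fast at the cap instead of waiting.
        return self._slots.acquire(blocking=False)

    def release(self) -> None:
        self._slots.release()

pool = CappedPool(max_connections=2)
grabbed = [pool.try_acquire() for _ in range(3)]
print(grabbed)  # [True, True, False]: the cap holds no matter how many replicas run
```

Autoscaling would add more callers to this pool; only the cap itself protects the database behind it.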
Frequent implementation mistakes (and how to avoid them)
- Treating “resources” as only infrastructure capacity.
  Fix: include application and dependency constraints (DB connections, API rate limits, queue depth).
- Quotas defined but not enforced.
  Fix: require evidence from the enforcement point (gateway policy, cluster quota object, DB parameter).
- One global limit that breaks priority services.
  Fix: tier limits by service criticality and tenant type; reserve capacity for admin/control-plane functions.
- Alerting without an action path.
  Fix: attach runbooks and make ownership explicit; test paging and escalation paths.
- No proof of ongoing operation.
  Fix: schedule periodic reviews and collect artifacts automatically where possible (config exports, monitoring snapshots).
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for SC-6, so this page does not list specific cases.
Risk still matters operationally: weak resource allocation increases outage probability, can convert minor abuse into a denial-of-service condition, and can cause cascading failures across dependent services. For regulated environments, repeated availability incidents also create assessment risk because SC-6 is straightforward for assessors to validate through configuration and monitoring evidence. 2
Practical 30/60/90-day execution plan
Use this plan to operationalize quickly without boiling the ocean.
Days 1–30 — Establish scope and baseline controls
- Assign SC-6 control owner per system.
- Build the resource inventory for top critical services.
- Implement “must-have” allocation controls where missing: API rate limits, DB connection caps, container limits, disk/log retention.
- Stand up a saturation dashboard and a small set of high-signal alerts.
- Create first-pass runbooks for resource exhaustion.
Days 31–60 — Expand coverage and harden governance
- Extend inventory and allocation controls to remaining in-scope services.
- Add tenant/workload attribution so you can identify the top consumer fast.
- Add change control guardrails for quota/limit modifications (approval, peer review).
- Run a tabletop exercise for resource exhaustion response and update runbooks.
Days 61–90 — Prove ongoing operation and assessment readiness
- Start a recurring SC-6 review cycle (evidence capture, sign-offs).
- Validate enforcement with spot checks (attempt to exceed quota in a test environment).
- Tie incidents and changes back to SC-6 evidence for traceability.
- In Daydream, convert the above into recurring tasks with owners and attach artifacts so the next audit is evidence retrieval, not evidence creation.
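The "attempt to exceed quota" spot check can be scripted as a small harness. The `send` callable is injected so the sketch stays self-contained; in practice it would be an HTTP call against a rate-limited test-environment endpoint, and the quota value comes from your inventory.

```python
# Hypothetical spot-check harness: fire requests past the configured quota
# in a TEST environment and confirm the enforcement point starts rejecting.
def spot_check_quota(send, quota: int, attempts: int) -> bool:
    """True if requests within quota succeed (200) and requests beyond it
    are rejected (429) — i.e., enforcement actually works."""
    statuses = [send(i) for i in range(attempts)]
    within, beyond = statuses[:quota], statuses[quota:]
    return all(s == 200 for s in within) and all(s == 429 for s in beyond)

# Stub standing in for a rate-limited test endpoint with quota = 3.
fake_endpoint = lambda i: 200 if i < 3 else 429
print(spot_check_quota(fake_endpoint, quota=3, attempts=5))  # True
```

Archive the run output with a timestamp: a passing spot check is direct "enforcement in production-like conditions" evidence.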
Frequently Asked Questions
What counts as a “resource” under SC-6?
Any finite capacity that, if exhausted, would impair availability. Include infrastructure (CPU, memory, disk, bandwidth) and application/dependency limits (DB connections, API request rate, queue depth, third party quotas). 2
Is autoscaling enough to satisfy SC-6?
Usually no. Autoscaling helps with compute pressure, but it does not prevent exhaustion of hard limits like database connections, disk capacity, or third party API quotas. Pair scaling with enforceable allocation controls like rate limits and caps.
How do we handle third party SaaS limits under SC-6?
Treat third party quotas as protected resources: document the limit, implement client-side throttling/backoff, monitor error rates tied to quota exhaustion, and keep runbooks for failover or degraded mode.
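The throttling/backoff half of that answer is commonly implemented as capped exponential backoff. A minimal sketch, with illustrative base/cap values and the sleep left as a comment so the example stays self-contained:

```python
# Client-side backoff for third party quota errors (e.g., HTTP 429).
def backoff_delays(attempts: int, base: float = 0.5, cap: float = 30.0) -> list:
    """Exponential backoff schedule: base * 2^n seconds, capped."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

def call_with_backoff(call, max_attempts: int = 5):
    for delay in backoff_delays(max_attempts):
        status = call()
        if status != 429:  # quota not exhausted; return the result
            return status
        # Real code would sleep here (delay plus random jitter) before retrying.
    raise RuntimeError("quota still exhausted after retries")

print(backoff_delays(5))  # [0.5, 1.0, 2.0, 4.0, 8.0]
```

Jitter matters in production: without it, all clients retry on the same schedule and re-exhaust the quota in synchronized waves.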
What evidence is most persuasive to an assessor?
Configuration outputs from enforcement points (gateway policies, cluster quotas, DB settings) plus monitoring/alert definitions and a record of periodic reviews. Incident tickets showing you tuned limits after a saturation event are strong operational proof.
We run Kubernetes. What are the fastest wins for SC-6?
Set namespace quotas, define CPU/memory requests and limits for all workloads, cap concurrency where applicable, and ensure cluster monitoring alerts on CPU throttling, OOM kills, and storage pressure. Retain the YAML/IaC and screenshots/exports as evidence.
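The namespace quota mentioned above looks like the following manifest, shown here in dict form so it can be rendered by whatever IaC tooling you use. The name, namespace, and quota values are placeholders to tune per team.

```python
# Kubernetes ResourceQuota manifest (dict form) capping a namespace's
# aggregate CPU, memory, and pod count. Values are illustrative.
resource_quota = {
    "apiVersion": "v1",
    "kind": "ResourceQuota",
    "metadata": {"name": "team-a-quota", "namespace": "team-a"},
    "spec": {
        "hard": {
            "requests.cpu": "4",       # total CPU requested across all pods
            "requests.memory": "8Gi",
            "limits.cpu": "8",         # total CPU limit across all pods
            "limits.memory": "16Gi",
            "pods": "20",
        }
    },
}
print(resource_quota["metadata"]["name"])
```

The committed manifest plus a live export of the applied object is exactly the "configuration evidence from the enforcement point" an assessor asks for.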
How should a GRC team “own” SC-6 without writing engineering specs?
Own the requirement definition, evidence standard, and review cadence. Assign engineering owners to implement the technical controls, then use a system like Daydream to map SC-6 to owners, procedures, and recurring evidence artifacts so operation stays provable.
Footnotes
1. NIST SP 800-53 Rev. 5, SC-6 control statement (OSCAL parameterized text).
2. NIST SP 800-53 Rev. 5, System and Communications Protection (SC) control family.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream