SA-11(6): Attack Surface Reviews

SA-11(6) requires you to obligate the developer of your system, component, or service (contractually, or via internal SDLC policy if the developer is your own team) to perform attack surface reviews, and to collect evidence that those reviews actually happened. To operationalize it fast, define “developer” per your delivery model, set trigger points (build/release/major change), and retain review outputs tied to remediation and sign-off. 1

Key takeaways:

  • Treat SA-11(6) as a supplier/developer obligation plus your verification duty, not an internal “nice-to-have” security activity. 1
  • Make attack surface reviews release-gated and change-triggered, with a documented method, scope, and accountable approver.
  • Evidence must connect review findings to fixes (or accepted risk) and show the developer performed the work.

SA-11(6): Attack Surface Reviews is a build-and-delivery control that often fails in practice for one reason: teams run security scans, but they cannot prove the developer performed an attack surface review as a defined activity with scope, outputs, and accountability. NIST’s requirement is short, but your assessor will look for operational signals: contract language (or internal SDLC requirements if the “developer” is your own team), repeatable procedures, and artifacts that show review results were acted on. 1

For a CCO, GRC lead, or security compliance owner, the fastest path is to implement SA-11(6) as a small set of guardrails embedded in procurement and the SDLC: define who the “developer” is for each system/service, define when reviews are required, standardize what “attack surface” means in your environment, and set evidence requirements that engineering can produce without heroics. Done well, SA-11(6) reduces preventable exposure from untracked endpoints, unmanaged interfaces, misconfigured network paths, and undocumented dependencies, especially when delivery relies on third parties or distributed internal teams. 2

Regulatory text

Requirement excerpt: “Require the developer of the system, system component, or system service to perform attack surface reviews.” 1

What that means operationally: you must (1) put an obligation on the developer, and (2) be able to show that the developer performed attack surface reviews for the relevant scope. NIST’s phrasing focuses on who performs the review (the developer) rather than only the security team. For most organizations, this becomes a hybrid: developers perform (or commission) the review as part of delivery, and security/GRC verifies that the activity occurred and that findings were dispositioned. 1

Plain-English interpretation (what SA-11(6) is really asking for)

An attack surface review is a structured review of “what’s exposed” and “what could be reached” for a system/component/service, followed by confirmation that those exposures are intended, documented, protected, and minimized. In practice, auditors expect you to cover at least:

  • External exposure: internet-facing hosts, domains, ports, and SaaS endpoints.
  • Interfaces and entry points: APIs, auth endpoints, admin consoles, VPNs, partner connections.
  • Dependencies that expand exposure: third-party services, libraries, containers/images, CI/CD and build artifacts.
  • Configuration and topology drift: changes that introduced new exposure since the last release.

SA-11(6) is satisfied when you can show a repeatable review happened, the scope matched the release/change, and issues were fixed or formally risk-accepted.

Who it applies to (entity and operational context)

SA-11(6) commonly applies where you have to align to NIST SP 800-53 Rev. 5 for:

  • Federal information systems, including agency-operated systems. 2
  • Contractor systems handling federal data, including systems operated by third parties on behalf of agencies. 1

Operational contexts where SA-11(6) becomes high-friction:

  • You buy or subscribe to a system/service and the “developer” is a third party. You need contractual hooks and ongoing evidence.
  • You build internally but have multiple product lines and platform teams; ownership of “attack surface” is unclear.
  • You assemble components (commercial software, open source, infrastructure-as-code) and release frequently; reviews must be lightweight but consistent.

What you actually need to do (step-by-step)

1) Define the “developer” for each covered system/service

Create a simple RACI per system:

  • If third-party developed, the developer is the supplier (or their designated integrator).
  • If internally developed, the developer is the engineering org responsible for code/configuration changes.
  • If mixed, define which party owns which part of the surface (for example, your team owns IaC and cloud config; the supplier owns application code).

Output: “Developer of record” field in your system inventory and/or SSP mapping to SA-11(6). 1
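The “developer of record” field can be as simple as a typed inventory record with a lookup that fails loudly when the field is missing. The sketch below is illustrative only: the record fields and system names are hypothetical, not a NIST-prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class SystemRecord:
    system_id: str
    delivery_model: str        # "third_party", "internal", or "mixed"
    developer_of_record: str   # supplier name or internal engineering org

# Hypothetical inventory entries for illustration.
INVENTORY = [
    SystemRecord("crm-saas", "third_party", "ExampleVendor Inc."),
    SystemRecord("billing-api", "internal", "Platform Engineering"),
]

def developer_for(system_id: str) -> str:
    """Resolve who carries the SA-11(6) obligation for a given system."""
    for rec in INVENTORY:
        if rec.system_id == system_id:
            return rec.developer_of_record
    # A missing developer of record is itself a control gap; surface it.
    raise KeyError(f"no developer of record set for {system_id}")
```

Failing hard on a missing entry matters: in shared-responsibility models, an unassigned developer of record is exactly the gap assessors probe for.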

2) Define what an “attack surface review” must include (your minimum standard)

Write a one-page standard that specifies:

  • Scope dimensions: network exposure, APIs, identity/admin paths, data ingress/egress, dependencies.
  • Methods allowed: architecture review, threat modeling focused on entry points, inventory-based review, configuration review, and verification against actual reachable endpoints.
  • Outputs required: findings list, severity/priority, owner, due date, and disposition (fix/accept/transfer).

Keep it implementable. If you require outputs that engineering cannot produce, the control becomes paper-only.
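One way to keep the standard implementable is to encode the required outputs as a checkable structure, so an incomplete review is caught mechanically rather than at audit time. This is a minimal sketch; the field names mirror the bullets above but are otherwise an assumption, not a mandated format.

```python
from dataclasses import dataclass
from typing import Literal

# Disposition options from the one-page standard: fix, accept, or transfer.
Disposition = Literal["fix", "accept", "transfer"]

@dataclass
class Finding:
    finding_id: str
    severity: str       # e.g. "high", "medium", "low"
    owner: str
    due_date: str       # ISO date, e.g. "2026-03-31"
    disposition: Disposition

REQUIRED_FIELDS = ("finding_id", "severity", "owner", "due_date", "disposition")

def validate_finding(f: Finding) -> list[str]:
    """Return the required output fields that are missing (empty list = complete)."""
    return [name for name in REQUIRED_FIELDS if not getattr(f, name)]
```

A finding with an empty owner or due date fails validation, which is the kind of objective check engineering can run without heroics.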

3) Set trigger events (when reviews are required)

Define objective triggers in your SDLC/change process, such as:

  • New system/service onboarding.
  • Major feature release that adds a new endpoint, API, or integration.
  • Infrastructure or network boundary change.
  • Authn/authz model changes (SSO, IAM roles, privileged access paths).
  • Any new third-party connection or data egress path.

Tie triggers to your change categories so the review is not optional.
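Tying triggers to change categories can be reduced to a simple rule: if a change request carries any trigger tag, the review is mandatory. The tag names below are hypothetical labels for your change tooling, not fields from a specific ITSM product.

```python
# Trigger tags corresponding to the objective triggers above (illustrative names).
TRIGGERS = {
    "new_system_onboarding",
    "new_endpoint_or_api",
    "boundary_change",
    "authn_authz_change",
    "new_third_party_connection",
}

def review_required(change_tags: set[str]) -> bool:
    """An attack surface review is mandatory if any trigger tag is present."""
    return bool(change_tags & TRIGGERS)
```

Because the rule is a set intersection, adding a new trigger later is a one-line policy change rather than a process rewrite.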

4) Build it into release gates and third-party delivery milestones

For internal development:

  • Add an “Attack Surface Review complete?” gate in your release checklist.
  • Require sign-off from a named role (AppSec lead, security architect, or delegated reviewer).

For third parties:

  • Add contract language that requires attack surface reviews and delivery of specified artifacts.
  • Add acceptance criteria to SOWs and release acceptance.

If you use Daydream to run third-party due diligence workflows, set SA-11(6) as a required control with recurring evidence requests and escalation when artifacts are stale or incomplete.
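The release-gate item can also be enforced in CI with a small check that blocks when the review or the named approver is missing. This is a sketch against an assumed release-checklist record; the key names (`attack_surface_review`, `complete`, `approver`) are hypothetical, not from a real CI API.

```python
def gate_blockers(release: dict) -> list[str]:
    """Return the reasons the release gate should block (empty list = pass)."""
    review = release.get("attack_surface_review") or {}
    blockers = []
    if not review.get("complete"):
        blockers.append("attack surface review not complete")
    if not review.get("approver"):
        blockers.append("no named approver recorded")
    return blockers
```

A CI job would call this and fail the pipeline if the returned list is non-empty, making the gate non-optional in practice as well as on paper.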

5) Run the review using a consistent template

A pragmatic template section list:

  • System diagram/topology and trust boundaries (as-built, not aspirational).
  • Inventory of exposed endpoints (DNS, IPs, ports, services).
  • API inventory (public/private), auth methods, rate limiting, admin endpoints.
  • Data flows (ingress/egress), third-party integrations, secrets handling.
  • Findings and remediation plan.

Focus on “what is reachable” and “what can be abused” from realistic attacker paths.
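A completeness check over the template keeps reviews consistent: every section in the list above must be present and non-empty before sign-off. The section keys below simply mirror the template bullets and are assumptions about how you would structure the review document.

```python
# Section keys mirroring the pragmatic template above (illustrative names).
REQUIRED_SECTIONS = [
    "topology_and_trust_boundaries",
    "exposed_endpoint_inventory",
    "api_inventory",
    "data_flows_and_integrations",
    "findings_and_remediation",
]

def missing_sections(review_doc: dict) -> list[str]:
    """List template sections that are absent or empty in a review document."""
    return [s for s in REQUIRED_SECTIONS if not review_doc.get(s)]
```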

6) Track remediation to closure (or formal risk acceptance)

Auditors will look for closure discipline:

  • Findings logged in a ticketing system with owners and target dates.
  • Evidence of fix (PR, config change, retest note).
  • If risk accepted, documented approval and rationale.
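Closure discipline can be checked mechanically: a finding is resolved only if it is closed with fix evidence (a PR or ticket link) or risk-accepted with a named approver. The record fields here are an assumed ticketing export format, shown for illustration.

```python
def unresolved(findings: list[dict]) -> list[str]:
    """Return IDs of findings that lack closure evidence or formal acceptance."""
    bad = []
    for f in findings:
        closed = f.get("status") == "closed" and f.get("evidence")       # PR/ticket link
        accepted = f.get("status") == "risk_accepted" and f.get("approver")
        if not (closed or accepted):
            bad.append(f["id"])
    return bad
```

Running this over the findings register before a release (or before an audit) turns “closure discipline” from a hope into a report.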

7) Verify and retain evidence (you are accountable for proof)

Security/GRC should sample reviews periodically to confirm:

  • The developer performed the review.
  • The scope matches triggers.
  • Remediation was completed or accepted.

This closes the loop: “required” plus “verified.”
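Periodic sampling is easier to defend when the sample is reproducible, so the test population can be documented alongside the results. A minimal sketch, assuming review IDs are pulled from your tracking system:

```python
import random

def sample_reviews(review_ids: list[str], k: int = 3, seed: int = 0) -> list[str]:
    """Pick a reproducible sample of completed reviews for GRC verification.

    Seeding the RNG makes the selection repeatable, so the same sample can
    be re-derived and documented in the control-test workpaper.
    """
    rng = random.Random(seed)
    return rng.sample(review_ids, min(k, len(review_ids)))
```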

Required evidence and artifacts to retain

Maintain a clean evidence set per system/service and per review cycle:

Governance / design evidence

  • Attack Surface Review standard/procedure (versioned).
  • System inventory entry showing “developer of record.”
  • RACI showing who performs and approves reviews.

Execution evidence 1

  • Completed review document or exported template.
  • Current architecture/topology diagram used for the review.
  • Endpoint inventory snapshot used (or equivalent evidence of enumeration).
  • Findings register with IDs and severities/priorities.
  • Tickets/PRs showing remediation, plus retest notes.
  • Risk acceptances with approver, date, and scope.

Third-party evidence (when applicable)

  • Contract/SOW clause requiring attack surface reviews.
  • Supplier-provided review report and findings.
  • Your acceptance sign-off and remediation tracking.

Your goal is to “tell the story” in one folder: trigger → review → findings → fix/accept → approval.
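That story can be verified per folder with a check that every link in the chain is present. The chain steps and folder keys below are illustrative labels for how you might index the evidence set, not a prescribed layout.

```python
# The evidence chain from the "tell the story" goal above (illustrative keys).
EVIDENCE_CHAIN = ["trigger", "review", "findings", "disposition", "approval"]

def chain_gaps(folder: dict) -> list[str]:
    """Return the links missing from the trigger-to-approval evidence story."""
    return [step for step in EVIDENCE_CHAIN if not folder.get(step)]
```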

Common exam/audit questions and hangups

Expect questions like:

  • “Who is the developer for this SaaS and how did you require them to do the review?” 1
  • “Show the last attack surface review for this release and the evidence it was completed before production.”
  • “How do you know the review reflects the actual exposure (not an outdated diagram)?”
  • “Where are findings tracked, and who approves risk acceptance?”
  • “How do you handle microservices where the surface changes weekly?”

Hangup patterns:

  • Reviews exist but aren’t tied to a release/change record.
  • Findings exist but no closure evidence exists.
  • Third-party attestation is vague (“we do security testing”) and not an attack surface review with artifacts.

Frequent implementation mistakes (and how to avoid them)

  1. Calling vulnerability scanning an attack surface review.
    Scanning helps, but SA-11(6) expects a review of exposure and entry points. Fix: require a structured inventory-and-interface review plus scan results as supporting evidence. 1

  2. No definition of “developer.”
    In shared responsibility models, everyone assumes someone else did it. Fix: assign developer of record per system/component and reflect it in contracts/SOWs or internal SDLC policy.

  3. Reviews done “when we remember.”
    Fix: triggers in change management and release checklists.

  4. Third-party reviews are non-verifiable.
    Fix: specify minimum artifacts in the contract and require delivery on a cadence tied to releases or material changes.

  5. Findings disappear into chat.
    Fix: require ticket IDs in the review output and sample them during control testing.

Enforcement context and risk implications

No public enforcement cases are tied specifically to SA-11(6), so treat it as a control-quality and assessment-readiness issue rather than anchoring your business case to a specific penalty narrative.

Risk-wise, weak attack surface governance creates avoidable exposure: undocumented endpoints, unintended admin access, and uncontrolled integrations. For federal or federal-adjacent environments, the operational consequence is often assessment findings, ATO friction, delayed releases, and heightened monitoring requirements.

Practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Assign an SA-11(6) control owner (AppSec or Security Architecture) and a GRC counterpart for evidence.
  • Publish a one-page “Attack Surface Review Standard” and a review template.
  • Identify highest-impact systems (internet-facing, high data sensitivity) and set the “developer of record.”
  • Update one contract/SOW template clause for third-party developers to require attack surface reviews and artifacts. 1

Days 31–60 (Near-term)

  • Embed triggers into change management/release management (release checklist item plus approver).
  • Pilot the review on a small set of systems and refine the template based on engineering feedback.
  • Stand up a findings workflow (tickets, owners, closure, risk acceptance path).
  • In Daydream, map SA-11(6) to an evidence checklist per system/third party and set review requests aligned to releases.

Days 61–90 (Operationalize)

  • Expand coverage across the system inventory based on risk tiering.
  • Start sampling and internal testing: pick recent releases and verify review → findings → closure.
  • Formalize metrics qualitatively (for example: “reviews completed for all high-impact releases”) without inventing numeric targets.
  • Prepare an audit-ready evidence package per system for the last review cycle.

Frequently Asked Questions

What counts as an “attack surface review” for SA-11(6)?

A documented review of exposed entry points and reachable components, with findings and dispositions. Scans can be inputs, but the review needs explicit scope, outputs, and accountability tied to a system/component/service. 1

If we buy SaaS, does SA-11(6) still apply?

Yes if the SaaS is in scope for your NIST-aligned boundary; you must require the developer (the provider) to perform the review and you must retain evidence. The cleanest path is contract language plus periodic artifact collection and review. 1

Can our security team perform the review instead of developers?

The requirement is to require the developer to perform the review. You can have security lead or verify it, but keep evidence that the developer performed or commissioned the activity as part of delivery. 1

How do we handle agile teams with frequent deployments?

Tie reviews to defined triggers such as new endpoints, new integrations, or boundary changes, and keep the review lightweight but consistent. For low-risk changes, require an update to the attack surface inventory and a quick reviewer sign-off.

What evidence is “enough” for an assessor?

A dated review artifact with scope, enumerated exposures/interfaces, findings, and approval, plus tickets/PRs showing remediation or documented risk acceptance. Assessors will also look for a repeatable procedure and clear ownership. 2

How do we operationalize SA-11(6) across third parties without creating procurement gridlock?

Standardize the clause and artifacts, then tier requirements by service criticality. Use a workflow tool like Daydream to request, track, and refresh artifacts so procurement and engineering aren’t chasing evidence manually.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream