SA-15(7): Automated Vulnerability Analysis

SA-15(7) requires you to ensure the developer of the system, component, or service you are acquiring or building performs automated vulnerability analysis, and to prove it happened with repeatable evidence. To operationalize it, bake automated scanning into the SDLC and supplier contract language, define scan cadence and gating rules, and retain scan outputs plus remediation records mapped to releases. 1

Key takeaways:

  • You must contractually and procedurally require automated vulnerability analysis by the developer, not treat it as optional “security testing.” 1
  • Passing audits depends on evidence: tool configuration, scan results, triage decisions, fixes, and release approvals tied together end-to-end.
  • The fastest path is SDLC integration: CI/CD scanning, ticketing workflows, and defined stop-ship criteria for high-severity findings.

SA-15(7), the automated vulnerability analysis requirement, is an SDLC control that focuses on developer-performed (or developer-managed) automated analysis, not ad hoc penetration testing or periodic infrastructure scans. The compliance outcome you want is simple to explain to an assessor: “For every build/release (or defined interval), the developer runs automated vulnerability analysis using approved tools; results are reviewed; findings are tracked to closure or formally risk-accepted; and releases follow documented gating rules.”

This control shows up in federal information system contexts and contractor environments handling federal data, where you often have a mixed delivery model: internal development teams, integrators, SaaS providers, and other third parties shipping components into your boundary. Your operational challenge is to standardize expectations across that supply chain without blocking delivery. That means: (1) define what “automated vulnerability analysis” means in your environment, (2) make it a requirement in contracts/SOWs and in your internal SDLC, (3) implement automation (SAST/SCA/container/IaC scanning as applicable), and (4) retain evidence that connects scan results to remediation and release decisions.

Daydream can help by turning SA-15(7) into a requirement record with a named control owner, a repeatable procedure, and a recurring evidence checklist so you can stay assessment-ready as tools and pipelines change.

Regulatory text

Excerpt (control requirement): “Require the developer of the system, system component, or system service [organization-defined parameter] to:” 1

What the operator must do with this text

  • Identify who counts as “the developer.” In practice, this includes your internal engineering teams and any third party building or customizing code, images, infrastructure-as-code, or deployable packages you run. Your procurement and SDLC need to treat both as in-scope. 1
  • Define the organization-specific parameter. The excerpt includes an organization-defined element. You must fill that in with your scope statement, such as: which systems/components/services, which development phases, which scan types, and what “required” means (gates vs. reporting). 1
  • Make it enforceable. “Require the developer” implies a formal mechanism: SDLC policy, engineering standards, pipeline controls, and third-party contract language with deliverables and acceptance criteria. 1

Plain-English interpretation

You need an automated way to find vulnerabilities in what’s being built or delivered, and you must ensure the developer runs it as a normal part of development and delivery. Then you must show evidence that:

  1. automated analysis happened,
  2. results were reviewed,
  3. findings were fixed or formally accepted,
  4. releases were approved with known risk.

Automated vulnerability analysis typically includes combinations of:

  • SAST (source code scanning),
  • SCA (open-source dependency vulnerability scanning),
  • Container image scanning,
  • IaC scanning (Terraform/CloudFormation/Kubernetes manifests),
  • DAST (where feasible for deployed apps).

Any of these can count, as long as you can demonstrate the analysis is automated, repeatable, and tied to developer activity.
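One way to make that list operational is to derive required scan categories from what a repository actually ships. A minimal sketch, assuming a simple mapping of artifact types to scan categories (all names here are illustrative, not from any tool or standard):

```python
# Hypothetical mapping from the artifact types a repo ships to the scan
# categories it must run. Extend or rename to match your stack.
SCAN_CATEGORIES = {
    "source_code": "SAST",
    "dependency_manifest": "SCA",
    "container_image": "container-scan",
    "iac_template": "IaC-scan",
    "deployed_web_app": "DAST",
}

def required_scans(artifact_types):
    """Return the scan categories a repo must run, given what it ships."""
    unknown = set(artifact_types) - SCAN_CATEGORIES.keys()
    if unknown:
        # Fail loudly: an unmapped artifact type means the scope one-pager
        # has a gap, not that the artifact is exempt.
        raise ValueError(f"no scan category defined for: {sorted(unknown)}")
    return sorted({SCAN_CATEGORIES[t] for t in artifact_types})

print(required_scans(["source_code", "dependency_manifest", "container_image"]))
```

Encoding the mapping in one place keeps the scope statement and the pipeline configuration from drifting apart.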

Who it applies to (entity and operational context)

Entities

  • Federal information systems and programs implementing NIST SP 800-53 controls. 2
  • Contractors and other third parties handling federal data or delivering systems/components/services into federal environments, where contracts flow down security requirements. 2

Operational contexts where SA-15(7) commonly bites

  • CI/CD-driven application delivery with frequent releases.
  • COTS/SaaS onboarding where the “developer” is a third party and you need attestable SDLC evidence.
  • Platform engineering delivering golden images, hardened baselines, container platforms, and shared libraries.
  • System integration where multiple component developers contribute to a final system and you need consistent scanning and evidence.

What you actually need to do (step-by-step)

1) Name an owner and define scope in one page

Create a control implementation note that answers, in plain terms:

  • In-scope systems/components/services.
  • In-scope development models (internal, outsourced, SaaS).
  • Required scan categories (pick what matches your stack).
  • When scans run (per build, per release, or at a defined interval). 1
  • Minimum evidence required for each release.

Map this to a single control owner (often AppSec or Product Security) with shared responsibilities across Engineering and Procurement.

Daydream tip: Track SA-15(7) as a requirement with an owner, procedure, and evidence artifacts so you can answer auditor requests fast and keep the control stable across tool migrations.

2) Translate the requirement into engineering standards (your “definition of done”)

Write SDLC requirements that engineers can follow without interpretation:

  • All repos must have automated scans enabled (by repo template or pipeline library).
  • Findings must flow into a ticketing system with an owner and due date (or sprint assignment).
  • Define what blocks a merge vs. what blocks a release.
  • Require a documented exception path (risk acceptance) with approver role(s).

Keep it short. Your goal is enforceability, not a textbook.

3) Implement automation in pipelines and build systems

For each artifact type, implement the appropriate scanner and connect it to your CI/CD:

  • Code: SAST on pull requests and main branch.
  • Dependencies: SCA on manifest changes and scheduled runs.
  • Containers: scan at build time and before deployment.
  • IaC: scan on PR and before apply.

Hard requirement for auditability: ensure scans are non-optional or require documented override. If teams can silently disable scans, you will fail the “require” test.
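The merge-gate vs. release-gate distinction from step 2 can be sketched as a small check your pipeline runs after scans complete. This is an illustrative sketch, assuming findings are exported as a list of dicts and approved exceptions are tracked by finding ID; the severity thresholds and field names are placeholders for your own policy:

```python
# Illustrative gate: fail the stage if any blocking-severity finding lacks
# an approved exception. Thresholds and field names are assumptions.
RELEASE_BLOCKING = {"critical", "high"}   # severities that stop a release
MERGE_BLOCKING = {"critical"}             # severities that stop a merge

def gate(findings, approved_exceptions, stage="release"):
    """Return (passed, blocking_findings) for a 'merge' or 'release' stage."""
    blocking_severities = RELEASE_BLOCKING if stage == "release" else MERGE_BLOCKING
    blocking = [
        f for f in findings
        if f["severity"] in blocking_severities
        and f["id"] not in approved_exceptions
    ]
    return (len(blocking) == 0, blocking)

findings = [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "medium"},
]
# Without an exception the release is blocked; with one it passes.
passed, blocking = gate(findings, approved_exceptions={"CVE-2024-0001"})
print("release gate passed:", passed)
```

The point of the sketch is that the override path is data (an approved exception record), not a pipeline flag a team can flip silently.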

4) Define triage, remediation, and acceptance rules

Automated scanning produces noise unless you operationalize triage:

  • Severity model (tool severity plus your business context).
  • SLAs or “fix-by” expectations (you can define these internally as policy guidance).
  • False positive handling (who can mark it and with what justification).
  • Risk acceptance workflow (who approves, for how long, and re-review triggers like major version changes).

A common, defensible pattern: allow exceptions, but require time-bound review and a compensating control note (for example, WAF rule, configuration change, feature flag, or segmentation).
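The time-bound review part of that pattern is easy to automate. A minimal sketch, assuming each exception record carries an approver, an expiry date, and a compensating-control note (the field names are assumptions for this example):

```python
from datetime import date

# Illustrative exception register: every entry is time-bound and carries a
# compensating-control note, so expired entries surface for re-review.
def expired_exceptions(exceptions, today=None):
    """Return exception records past their expiry date (re-review required)."""
    today = today or date.today()
    return [e for e in exceptions if e["expires"] < today]

exceptions = [
    {"id": "EXC-101", "finding": "CVE-2024-0001", "approver": "AppSec lead",
     "compensating_control": "WAF rule blocks exploit path",
     "expires": date(2024, 6, 30)},
    {"id": "EXC-102", "finding": "CVE-2024-0042", "approver": "CISO",
     "compensating_control": "component is network-segmented",
     "expires": date(2025, 12, 31)},
]
for e in expired_exceptions(exceptions, today=date(2024, 9, 1)):
    print(f"{e['id']} expired; re-review required for {e['finding']}")
```

Running a report like this on a schedule turns “exceptions expire” from policy text into something an auditor can watch happen.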

5) Make third-party development deliverables measurable

For third parties delivering code or services, add contract/SOW language that requires:

  • Automated vulnerability analysis results for the deliverable.
  • A summary of open findings at delivery time and a remediation plan.
  • Notification obligations for newly discovered vulnerabilities affecting delivered components.

If the third party will not provide raw scan outputs, require a structured attestation plus a right-to-audit clause and a way to validate through your own testing where feasible.

6) Prove “release decisions” incorporate scan results

This is where auditors linger. You need a release artifact that shows:

  • Scans executed for the release candidate.
  • Findings reviewed.
  • Open risks documented and approved (or confirmed none).
  • Approval captured (change record, CAB ticket, or release sign-off).
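The four bullets above can be checked mechanically before sign-off. A sketch of a release-packet completeness check, assuming the packet is a simple record whose required fields mirror the list (the field names are illustrative, not a standard):

```python
# Illustrative evidence-completeness check for one release packet.
# Field names are assumptions that mirror the release-artifact list.
REQUIRED_EVIDENCE = ("pipeline_run_id", "scan_reports", "findings_reviewed",
                     "open_risk_approvals", "release_approval")

def missing_evidence(packet):
    """Return the evidence fields a packet is missing or left empty."""
    return [k for k in REQUIRED_EVIDENCE if not packet.get(k)]

packet = {
    "pipeline_run_id": "build-4812",
    "scan_reports": ["sast.json", "sca.json"],
    "findings_reviewed": True,
    # Empty approvals flag the packet for review: either attach the
    # approval or record that open risks were confirmed none.
    "open_risk_approvals": [],
    "release_approval": "CHG-2291",
}
print(missing_evidence(packet))  # → ['open_risk_approvals']
```

Gating sign-off on an empty `missing_evidence` result is what lets you reconstruct a release story on demand.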

Required evidence and artifacts to retain

Store evidence so you can reconstruct a release story end-to-end:

Governance artifacts

  • SDLC/AppSec policy section that states automated vulnerability analysis is required.
  • Secure coding standard and pipeline requirements.
  • Third-party contract clauses / security addendum language requiring automated analysis. 1

Operational artifacts (high value in audits)

  • Tool configuration screenshots/exports (org settings, project settings, rulesets).
  • CI/CD logs showing scan jobs executed (build IDs, timestamps, repo/branch).
  • Scan reports (SAST/SCA/container/IaC) tied to commits or release tags.
  • Triage records: tickets, assignments, status changes, and closures.
  • Risk acceptance records with approver, rationale, scope, and expiration/review trigger.
  • Release/change approval records referencing scan status.

Traceability matrix (simple but powerful)

A table that maps: System → Repo/Image → Pipeline → Scan types → Evidence location → Owner.
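The matrix does not need tooling; plain CSV with one row per repo or image is enough to answer an assessor request in seconds. A sketch with illustrative values, following the column mapping above:

```python
import csv, io

# Illustrative traceability matrix as CSV: one row per artifact, columns
# follow the System -> Repo/Image -> ... -> Owner mapping. Values are examples.
COLUMNS = ["system", "artifact", "pipeline", "scan_types",
           "evidence_location", "owner"]
rows = [
    ["billing", "repo:billing-api", "ci/billing-api", "SAST;SCA",
     "s3://evidence/billing-api/", "appsec-team"],
    ["billing", "image:billing-api", "ci/billing-api", "container-scan",
     "s3://evidence/billing-api/", "platform-team"],
]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(COLUMNS)
writer.writerows(rows)
print(buf.getvalue())
```

Keeping this file under version control alongside your pipeline definitions means the matrix changes in the same review that changes the scans.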

Common exam/audit questions and hangups

Expect these questions and prepare the evidence path:

  1. “Show me that developers are required to run automated vulnerability analysis.”
    They want policy + a technical control that enforces scans (not just guidance).

  2. “Show me evidence for a specific release.”
    Pick one recent release and provide: pipeline run → scan output → tickets → closure/exception → release approval.

  3. “How do you handle false positives and exceptions?”
    If exceptions exist without approvals and expiry, auditors treat it as uncontrolled risk.

  4. “How do you ensure third-party developers do this?”
    They will look for contract language, onboarding checklists, and deliverable acceptance criteria.

Frequent implementation mistakes and how to avoid them

  • Mistake: treating this as quarterly infrastructure vulnerability scanning.
    Fix: make scanning developer-integrated and release-tied; infrastructure scanning can complement but rarely satisfies the “developer” focus alone. 1

  • Mistake: scans run, but nothing forces remediation.
    Fix: define gating and escalation. If you can ship with critical findings without a documented exception, you have automation without control.

  • Mistake: evidence lives in too many places and can’t be reconstructed.
    Fix: standardize evidence locations and name them in your traceability matrix; link tickets to pipeline run IDs and release tags.

  • Mistake: third-party “attestations” with no substance.
    Fix: request scan summaries tied to versions, plus open findings lists; negotiate right-to-audit or provide your own validation scans where possible.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for SA-15(7). Practically, your risk exposure is assessment failure (unable to show you “required” and verified automated analysis) and increased likelihood of shipping known vulnerable components. The most common failure mode is evidentiary: teams do the scans, but can’t prove consistency across systems, releases, and third parties. 1

A practical 30/60/90-day execution plan

First 30 days (establish “required”)

  • Assign a control owner and publish a one-page SA-15(7) implementation standard.
  • Inventory in-scope repos/images/IaC modules and identify the “developer” for each (internal or third party).
  • Select approved scanning categories per stack and set minimum pipeline requirements.
  • Draft third-party contract addendum language for automated vulnerability analysis deliverables.

Days 31–60 (make it operational and measurable)

  • Roll out CI/CD templates that enable required scanners by default.
  • Integrate findings into ticketing with ownership, status taxonomy, and exception workflow.
  • Define release gating and the documented override path.
  • Create an evidence checklist and a traceability matrix; test it on one pilot system.

Days 61–90 (scale and make it audit-ready)

  • Expand pipeline enforcement across all in-scope systems; eliminate “optional” scan stages.
  • Run an internal mini-assessment: pick multiple releases and validate evidence completeness.
  • Onboard third parties to the deliverable format; start collecting evidence at intake.
  • Operationalize recurring reporting: top open findings by system, exception inventory, and overdue remediation.

Frequently Asked Questions

Does SA-15(7) require penetration testing?

SA-15(7) is specific to automated vulnerability analysis performed by the developer as part of developing/delivering a system, component, or service. Pen testing can complement it, but it does not replace automated, repeatable analysis tied to the SDLC. 1

What counts as “automated vulnerability analysis” for this requirement?

Use automated scanners appropriate to what you ship, such as SAST for code, SCA for dependencies, container scanning for images, and IaC scanning for infrastructure definitions. Your implementation must be repeatable and produce retained outputs tied to builds or releases. 1

If we buy SaaS, how can we “require the developer” to do this?

Treat the SaaS provider as a third-party developer and flow the requirement into contracts and onboarding: require scan evidence or structured attestations tied to versions and remediation commitments. Where evidence is limited, document compensating validation steps you perform. 1

Are we allowed to ship with open findings?

You can, but only if you define a formal exception process with documented risk acceptance, scope, and review triggers. Auditors will focus on whether shipping with known issues is controlled and approved, not accidental.

What evidence is the fastest to produce during an audit?

A single release packet: CI/CD run logs showing scans executed, the scan reports, linked remediation tickets (or an approved exception), and the release/change approval referencing scan status. Package it per system so you can respond quickly.

How do we keep this from becoming a manual compliance chore?

Standardize repo templates and pipeline libraries so scans run automatically, and push findings into your normal engineering workflow (tickets and pull requests). Use Daydream to track the requirement, assign ownership, and maintain a recurring evidence checklist so gaps show up before the assessment.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream