SA-11(8): Dynamic Code Analysis

SA-11(8) requires you to ensure developers run dynamic code analysis (DAST and runtime testing) against your system, components, or services to find common flaws, and document the results. To operationalize it quickly, define when dynamic scans must run (by pipeline stage and release type), enforce it in CI/CD, and retain scan outputs plus remediation decisions. 1

Key takeaways:

  • SA-11(8) is a developer requirement: dynamic testing must happen and results must be documented. 1
  • Auditors look for repeatable execution (triggers, scope, thresholds) and traceable remediation (tickets, risk acceptance, retest). 1
  • The fastest path is to embed dynamic analysis into SDLC gates and standardize evidence capture per release. 1

SA-11(8) is an SDLC control: it expects dynamic testing to be part of how software is built and changed, not a one-time security exercise. Unlike static analysis, which reads code, dynamic analysis tests the running application, service, or component to expose issues that only show up at runtime, in real request flows, or under specific configurations.

For a Compliance Officer, CCO, or GRC lead, the operational challenge is rarely “Which tool is best?” It’s governance: defining required scan points, ensuring teams actually run them, setting expectations for triage and fixes, and producing evidence that an assessor can verify. SA-11(8) is also explicit about documentation. Even strong testing programs fail audits when results are not retained, are not attributable to a specific build/release, or cannot be tied to remediation outcomes.

This page gives requirement-level implementation guidance you can hand to engineering and security teams, then measure. It focuses on: scope and applicability, step-by-step operationalization, evidence to retain, and the audit questions that commonly break otherwise mature programs. 2

Regulatory text

Requirement (verbatim): “Require the developer of the system, system component, or system service to employ dynamic code analysis tools to identify common flaws and document the results of the analysis.” 1

What the operator must do (plain reading):

  1. Make dynamic code analysis mandatory for developers (internal teams and applicable third parties) working on the system, components, or services. 1
  2. Use tools that perform dynamic analysis (testing the running target, not just reading code). 1
  3. Identify “common flaws” through that dynamic analysis (you operationalize this as a flaw taxonomy and test policy). 1
  4. Document results in a way that supports review, remediation, and audit. 1

Plain-English interpretation

You must be able to prove, for each in-scope application/service (and for meaningful changes), that:

  • dynamic testing ran against a real running deployment (test/stage is acceptable if it represents production),
  • findings were captured, reviewed, and dispositioned (fix, false positive, risk accept, defer),
  • retesting occurred when fixes were made or when risk acceptance expires.

This is not satisfied by “we do occasional pen tests” unless you can show repeatability, coverage expectations, and documented outputs mapped to releases. SA-11(8) is a control you operationalize as part of your SDLC gates and release management. 1

Who it applies to (entity and operational context)

In-scope entities

  • Federal information systems implementing NIST SP 800-53 controls. 2
  • Contractor systems handling federal data where 800-53 is flowed down contractually or used as the governing control baseline. 2

In-scope work

  • Custom applications (web, mobile backends, APIs).
  • System components and shared services (authentication, gateways, message brokers) where dynamic testing is feasible.
  • CI/CD pipelines and release processes that can enforce scanning and evidence capture.

Third-party considerations

If a third party develops or materially changes an in-scope component, you still need the outcome: dynamic analysis performed and results documented. Operationally, this becomes a contract/SOW requirement plus an evidence delivery requirement (scan reports, defect logs, exception approvals). 1

What you actually need to do (step-by-step)

1) Assign ownership and define scope

  • Control owner: AppSec leader or Head of Engineering with GRC oversight.
  • System scope list: maintain an inventory of in-scope apps/services/components and their repos, owners, and environments where dynamic scans run.
  • Change scope: define what triggers required dynamic analysis (new app onboarding; major release; significant security-impacting change like auth flows, access control, deserialization, file upload, payment flows).

Deliverable: a one-page “Dynamic Analysis Standard” that names scope, triggers, tooling categories, required outputs, and exception path. 1

2) Define what “dynamic code analysis” means in your program

Operational definition you can use in policy and audits:

  • DAST against running web apps/APIs (authenticated and unauthenticated where applicable).
  • IAST or RASP-style runtime instrumentation where feasible for deeper runtime visibility.
  • Dynamic fuzzing for parsers/critical endpoints where risk justifies it.
  • Manual dynamic testing (structured security test scripts) when automation cannot cover a target.

Be explicit about environment: “Runs against staging that mirrors production configuration and is refreshed on a defined cadence,” or “runs against ephemeral review apps per pull request for internet-facing services.”

3) Build it into CI/CD and release gates

What auditors want is predictable enforcement:

  • Pre-merge / pre-release checks: run baseline DAST for high-risk services or at least per release candidate.
  • Nightly or scheduled scans: cover broader crawling and authenticated flows.
  • Release gate: require a documented triage of findings and a sign-off for exceptions.

Practical gating model:

  • Block releases for confirmed critical/high-risk findings in high-impact areas (auth, access control, injection) unless a time-bound exception is approved by the risk owner.
  • Allow releases for low-risk findings with documented backlog tickets.

Keep the gate rules simple enough that engineering follows them, then tighten over time.
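The gating model above can be sketched as a small release check. This is a minimal illustration, not any particular scanner's API: the finding fields, category names, and exception shape are all assumptions you would align with your own tooling and severity rubric.

```python
from datetime import date

# Hypothetical finding shape: severity, category, and an optional
# approved exception carrying an expiry date. Names are illustrative,
# not tied to any specific DAST tool's export format.
HIGH_IMPACT_CATEGORIES = {"auth", "access-control", "injection"}

def release_blocked(findings: list[dict], today: date) -> list[dict]:
    """Return the findings that should block this release candidate."""
    blockers = []
    for f in findings:
        severe = f["severity"] in {"critical", "high"}
        high_impact = f["category"] in HIGH_IMPACT_CATEGORIES
        exception = f.get("exception")  # e.g. {"approved_by": ..., "expires": date(...)}
        exception_valid = exception is not None and exception["expires"] >= today
        # Block only confirmed high-risk findings in high-impact areas
        # that lack a current, time-bound exception.
        if severe and high_impact and not exception_valid:
            blockers.append(f)
    return blockers

findings = [
    {"id": "F-101", "severity": "high", "category": "injection"},
    {"id": "F-102", "severity": "low", "category": "info-leak"},
    {"id": "F-103", "severity": "critical", "category": "auth",
     "exception": {"approved_by": "risk-owner", "expires": date(2099, 1, 1)}},
]
print([f["id"] for f in release_blocked(findings, date.today())])  # only F-101 blocks
```

Low-risk findings (F-102) pass through with backlog tickets, and a time-bound, owner-approved exception (F-103) lets a release proceed while the exception remains current.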

4) Establish triage, remediation, and exception workflows

Minimum workflow states you should enforce in your tracker:

  • New finding → Triage (confirm/false positive) → Remediate → Retest/verify → Close.
  • Exception path: Risk accept / Defer with compensating controls, owner approval, and an expiry/review trigger.

Map each finding to:

  • affected system and environment,
  • build/release identifier,
  • severity rationale (your rubric),
  • remediation owner and due date,
  • retest evidence.
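The workflow states and per-finding fields above can be enforced in code. The sketch below is a hypothetical tracker model (all names are illustrative) showing how a state machine keeps findings from being closed without retest evidence:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class State(Enum):
    NEW = "new"
    TRIAGE = "triage"
    REMEDIATE = "remediate"
    RETEST = "retest"
    RISK_ACCEPTED = "risk_accepted"
    CLOSED = "closed"

# Allowed transitions mirroring the minimum workflow: triage may close
# a false positive, retest failures loop back, and an expired risk
# acceptance returns to triage for re-review.
TRANSITIONS = {
    State.NEW: {State.TRIAGE},
    State.TRIAGE: {State.REMEDIATE, State.RISK_ACCEPTED, State.CLOSED},
    State.REMEDIATE: {State.RETEST},
    State.RETEST: {State.CLOSED, State.REMEDIATE},
    State.RISK_ACCEPTED: {State.TRIAGE},
    State.CLOSED: set(),
}

@dataclass
class Finding:
    finding_id: str
    system: str
    environment: str
    release_id: str      # build/release identifier the finding maps to
    severity: str        # per your severity rubric
    owner: str
    due_date: str
    state: State = State.NEW
    retest_evidence: Optional[str] = None  # link to rescan report, etc.

    def advance(self, new_state: State) -> None:
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        if new_state is State.CLOSED and self.state is State.RETEST \
                and not self.retest_evidence:
            raise ValueError("closure from retest requires retest evidence")
        self.state = new_state
```

For example, a finding that reaches RETEST cannot advance to CLOSED until `retest_evidence` is attached, which is exactly the traceability an assessor will probe.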

5) Document results in an audit-ready way

SA-11(8) explicitly requires documentation of analysis results. 1

Standardize the record per scan run:

  • scan date/time, tool name/version (if available), target URL/service identifier, environment,
  • scan policy profile used (e.g., “Authenticated API profile”),
  • summary of findings by severity,
  • full export (PDF/JSON) attached or linked with access controls,
  • link to tickets and remediation outcomes.

If you operate in regulated environments, treat scan outputs as controlled security records. Store them in a system with retention, access control, and immutable logs where feasible.
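One way to standardize that per-run record is to emit it as machine-readable JSON alongside the full report export. This is a sketch under assumed field names (align them with your own Dynamic Analysis Standard), not a prescribed schema:

```python
import json
from datetime import datetime, timezone

def scan_record(target: str, environment: str, profile: str, tool: str,
                findings_by_severity: dict[str, int],
                report_uri: str, ticket_ids: list[str]) -> str:
    """Serialize one scan run into a standardized evidence record.
    Field names are illustrative; adapt them to your own standard."""
    record = {
        "scan_time_utc": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # name/version, if the tool reports it
        "target": target,
        "environment": environment,
        "scan_profile": profile,           # e.g. "Authenticated API profile"
        "findings_by_severity": findings_by_severity,
        "full_report": report_uri,         # access-controlled link to the export
        "remediation_tickets": ticket_ids, # trace from finding to disposition
    }
    return json.dumps(record, indent=2)

print(scan_record(
    target="https://staging.example.internal/api",   # hypothetical target
    environment="staging",
    profile="Authenticated API profile",
    tool="example-dast/3.2",                         # hypothetical tool/version
    findings_by_severity={"critical": 0, "high": 1, "medium": 4, "low": 9},
    report_uri="s3://security-evidence/scans/2024-06-rc1.json",
    ticket_ids=["APPSEC-311"],
))
```

Keeping the record machine-readable makes it trivial to assemble the per-release evidence packet described later and to query "last scan per system" during an audit.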

6) Manage third-party developer deliverables

For outsourced development or purchased components with custom integration:

  • Contract clause: dynamic analysis is required for delivered code/services; results must be provided.
  • Acceptance criteria: provide scan artifacts and evidence of remediation or documented exceptions.
  • Verification: run your own dynamic scan against the delivered build in your staging environment to validate.

7) Run a quarterly control health review (operator practice)

Do a recurring control check focused on operation, not paperwork:

  • Are scans running on schedule and for releases?
  • Are there gaps (apps with no authenticated coverage, scans failing due to environment drift)?
  • Are exceptions expiring and being reviewed?
  • Are repeat findings recurring (signals missing root-cause fixes)?
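The first three checks in that quarterly review can be computed directly from a simple register. The sketch below assumes a hypothetical register shape (last scan date, authenticated-coverage flag, open exceptions with expiry dates); it is an illustration of the health check, not a required data model:

```python
from datetime import date

# Hypothetical register rows; in practice these come from your
# GRC system of record or a maintained inventory spreadsheet export.
register = [
    {"system": "billing-api", "last_scan": date(2024, 6, 1),
     "auth_coverage": True,
     "exceptions": [{"id": "EX-7", "expires": date(2024, 5, 1)}]},
    {"system": "admin-portal", "last_scan": date(2024, 1, 15),
     "auth_coverage": False, "exceptions": []},
]

def health_review(register: list[dict], today: date,
                  max_age_days: int = 90) -> dict:
    """Flag stale scans, missing authenticated coverage, and expired exceptions."""
    stale = [r["system"] for r in register
             if (today - r["last_scan"]).days > max_age_days]
    no_auth = [r["system"] for r in register if not r["auth_coverage"]]
    expired = [e["id"] for r in register for e in r["exceptions"]
               if e["expires"] < today]
    return {"stale_scans": stale, "no_auth_coverage": no_auth,
            "expired_exceptions": expired}

print(health_review(register, date(2024, 6, 15)))
```

Running this on a schedule turns the quarterly review into an operational check with evidence of its own, rather than a paperwork exercise.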

If you need a system of record for these recurring artifacts and mappings, Daydream is a practical place to track the control requirement, owners, procedures, and evidence requests without relying on scattered tickets and shared drives. 1

Required evidence and artifacts to retain

Maintain evidence that answers: “Did dynamic analysis occur, what did it find, and what did you do about it?” 1

Core artifacts (keep current + historical samples)

  • Dynamic Analysis Standard (policy/procedure) with scope, triggers, and exception process.
  • Tool configuration baselines (scan profiles, authenticated credential handling approach, safe test data rules).
  • Scan execution logs/reports (exports) tied to specific targets and time windows.
  • Findings register (or ticket links) with dispositions, remediation actions, and retest proof.
  • Exception approvals (risk acceptance) with owner, rationale, compensating controls, and review date.
  • Third-party deliverables: scan reports and remediation attestations for outsourced development.

Audit convenience artifacts

  • “Evidence packet” per major release: scan report + triage summary + top findings closure list + exception list.
  • Coverage map: system inventory crosswalk to “last dynamic scan date” and “auth coverage yes/no.”

Common audit questions and hangups

Auditors and assessors tend to probe these points under SA-11(8): 1

  • Scope: Which systems/components/services are required to have dynamic analysis? Why?
  • Frequency/triggers: What events force a scan (release, major change, emergency patch)?
  • Authentication: Do scans test authenticated flows? How do you manage credentials safely?
  • Documentation: Where are results stored, and can you show a complete trail from finding → fix → retest?
  • Exceptions: Who can accept risk, and how do you ensure exceptions expire?
  • Independence: Is there any review beyond the developer (AppSec review, peer review, QA security checks)?

Hangups you can preempt:

  • “We ran scans but cannot reproduce the report tied to a release.”
  • “We only scanned the landing page; critical APIs were missed.”
  • “Findings were exported but not tracked to closure.”

Frequent implementation mistakes and how to avoid them

  • Mistake: Treating DAST as a one-time annual activity.
    Why it fails SA-11(8): The control is framed as a developer practice with documented results, not an occasional test. 1
    Fix: Tie scans to SDLC triggers and releases; keep evidence per release.

  • Mistake: No authenticated scanning.
    Why it fails SA-11(8): Common flaws often sit behind login; auditors will question coverage gaps.
    Fix: Build a safe auth strategy: test users, scoped tokens, test data, and rate limits.

  • Mistake: “Results” are screenshots or summaries only.
    Why it fails SA-11(8): Documentation must support verification and traceability. 1
    Fix: Keep exports (machine-readable where possible) plus triage notes and tickets.

  • Mistake: Findings never get retested.
    Why it fails SA-11(8): A closed ticket is not proof the flaw is gone.
    Fix: Require retest evidence for high-risk findings and for recurring weakness classes.

  • Mistake: Exceptions are informal.
    Why it fails SA-11(8): Verbal approvals don’t survive audits.
    Fix: Use a standard exception template with approvals and expiry.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this control enhancement, so you should treat SA-11(8) primarily as an assessment and authorization readiness requirement under NIST SP 800-53 programs. 2

Risk implications if you under-implement:

  • Runtime vulnerabilities persist into production (auth bypass, injection, SSRF, insecure deserialization, business logic abuse) because they are not reliably detected by static review alone.
  • You fail control testing due to missing documentation even if some scanning occurred.
  • Third-party developed components become blind spots if you do not contract for scan evidence.

Practical 30/60/90-day execution plan

First 30 days (stand up governance + quick wins)

  • Name control owner(s) and publish the Dynamic Analysis Standard aligned to SA-11(8). 1
  • Build the in-scope inventory: apps/services, owners, environments, and release trains.
  • Pick initial toolchain category per target (DAST for web/API; IAST where feasible).
  • Run pilot scans for the highest-risk internet-facing apps and document results in a consistent template.

Day 31–60 (embed into delivery + fix evidence gaps)

  • Integrate dynamic scans into CI/CD for the pilot apps, with clear triggers and non-negotiable evidence capture.
  • Stand up the triage workflow (tickets, severity rubric, retest requirement).
  • Implement the exception process with approvals and expirations.
  • Add third-party contract language and evidence intake process for any outsourced development.

Day 61–90 (scale coverage + operationalize oversight)

  • Expand scanning coverage to remaining in-scope services based on risk and feasibility.
  • Add authenticated coverage for critical flows; document where it is not feasible and the compensating approach.
  • Create an audit-ready dashboard or register: last scan, open high-risk findings, exceptions, retest status.
  • Run an internal control test: select a recent release and prove end-to-end evidence from scan → remediation → retest. 1

Frequently Asked Questions

Does SA-11(8) require DAST specifically, or can we use IAST?

The control requires “dynamic code analysis tools” and documented results, so DAST and IAST can both fit if they test runtime behavior and produce auditable outputs. Your policy should define which tool types apply to which system types. 1

Can we run dynamic scans only in staging and still comply?

Yes if staging is representative and your procedure defines it as the approved test environment, but you must document scope and results per the requirement. If staging differs materially from production, document the gap and your compensating approach. 1

What counts as “document the results” for SA-11(8)?

Keep the scan output (export), the triage decision for each material finding, and the remediation or exception record tied to a release or time window. A summary without underlying report artifacts is a common audit failure. 1

How do we handle false positives without failing the control?

False positives are acceptable if you document the triage rationale and keep enough detail to reproduce the decision later (request/response evidence, configuration notes, or rule IDs). Auditors generally object to undocumented dismissals, not to false positives themselves. 1

Our third-party developer says they can’t share scan reports. What should we do?

Add contractual requirements for dynamic analysis evidence as a delivery condition, and allow secure sharing methods (redacted reports, controlled portals). If they still can’t provide evidence, run your own dynamic analysis against the delivered build and retain those results. 1

How do we prove scans are actually happening over time?

Retain execution logs/reports with timestamps, targets, and scan profiles, and maintain a simple register showing last scan date per in-scope system plus links to the artifacts. That pairing is usually enough to demonstrate ongoing operation. 1

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream