Developer Testing and Evaluation | Static Code Analysis

To meet the Developer Testing and Evaluation | Static Code Analysis requirement, you must contractually require the developer (internal team or third-party supplier) to run static code analysis to find common flaws and to document the results. Operationally, that means integrating SAST into your SDLC/CI pipeline, defining what “good” looks like (scope, thresholds, remediation), and retaining proof that scans ran and findings were addressed. (NIST Special Publication 800-53 Revision 5)

Key takeaways:

  • Make it a developer obligation (contracts/SOWs and SDLC requirements), not a security “nice-to-have.” (NIST Special Publication 800-53 Revision 5)
  • Run static code analysis on the right scope (system, components, and services) and retain scan output plus remediation evidence. (NIST Special Publication 800-53 Revision 5)
  • Auditors look for repeatability: defined cadence/triggers, documented results, and a clear path from finding → fix → verification. (NIST Special Publication 800-53 Revision 5)

Static code analysis is one of the fastest ways to prove you are proactively identifying common software flaws before release, rather than relying on incident response after deployment. In FedRAMP-aligned programs, SA-11(1) pushes you to make this a developer responsibility: the organization must require the developer of the system, system component, or system service to run static code analysis tools and document the results. (NIST Special Publication 800-53 Revision 5)

For a Compliance Officer, CCO, or GRC lead, the operational problem is rarely “which tool is best.” The problem is governance: setting the requirement in policy and contracts, ensuring engineering executes it consistently across repos and build pipelines, defining what evidence is acceptable, and making the output auditable. If you cannot show that scans ran, what they found, and how findings were handled, you effectively have no control, even if teams claim they scan “sometimes.”

This page translates SA-11(1) into an implementation playbook you can hand to Engineering, Product Security, and Procurement. It focuses on practical steps, artifacts to retain, and the audit questions that most often cause findings.

Regulatory text

Requirement (SA-11(1)): “Require the developer of the system, system component, or system service to employ static code analysis tools to identify common flaws and document the results of the analysis.” (NIST Special Publication 800-53 Revision 5)

Operator interpretation (what you must do):

  1. Impose the requirement on “the developer.” If development is internal, this becomes an SDLC/secure engineering standard. If development is performed by a third party, it must be embedded in contract language, statements of work, or supplier security requirements. (NIST Special Publication 800-53 Revision 5)
  2. Use static code analysis tools (SAST and related static analyzers) to identify common flaws (for example, injection risks, insecure cryptography usage, hard-coded secrets, unsafe deserialization, and other patterns your toolset supports). (NIST Special Publication 800-53 Revision 5)
  3. Document results in a form that supports independent review: what was scanned, when, by what tool/version/config, what was found, and what was done about it. (NIST Special Publication 800-53 Revision 5)

Plain-English requirement

You need a repeatable, provable process where code is scanned with static analysis, findings are recorded, and there is a defined way to remediate or formally accept risk. “Document the results” means you can hand an assessor a trail showing scans occurred and how you handled what they surfaced. (NIST Special Publication 800-53 Revision 5)

Who it applies to

Entity types: Cloud Service Providers and Federal Agencies implementing NIST SP 800-53 Rev. 5 controls (including FedRAMP-aligned environments). (NIST Special Publication 800-53 Revision 5)

Operational contexts where this shows up:

  • Internal software development (your engineers build/maintain the system or components).
  • Third-party development (a contractor develops a service, component, or feature).
  • Open-source and commercial components integrated into your system (you may not be the author, but you still need to manage how code in scope is evaluated within your delivery process).
  • CI/CD pipelines that build artifacts deployed into production or authorized environments.

What you actually need to do (step-by-step)

1) Define scope and ownership

Create a short “static analysis scope statement” that answers:

  • In-scope codebases: repos, services, infrastructure-as-code, build scripts, and shared libraries that constitute the system/component/service.
  • Ownership: who configures the tool, who triages findings, who approves exceptions, and who verifies closure.
  • Trigger points: scans on pull request, on merge to main, on release build, and/or on a scheduled cadence.

Practical rule: if the code can reach production (or an authorized boundary), it should be scannable and covered by your requirement language.
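The scope statement works best when it is kept as data rather than prose, so coverage gaps are detectable automatically. A minimal sketch (repo names, owners, and fields here are hypothetical, not from any specific tool):

```python
# Hypothetical scope inventory: each in-scope codebase mapped to an owner
# and the pipeline trigger(s) that run static analysis for it.
SCOPE = [
    {"repo": "payments-api", "owner": "team-payments", "triggers": ["pull_request", "release"]},
    {"repo": "infra-terraform", "owner": "team-platform", "triggers": ["merge"]},
    {"repo": "shared-auth-lib", "owner": "team-identity", "triggers": []},  # coverage gap
]

def coverage_gaps(scope):
    """Return repos that are in scope but have no scan trigger configured."""
    return [entry["repo"] for entry in scope if not entry["triggers"]]

# A non-empty result is an actionable gap list for the control owner.
gaps = coverage_gaps(SCOPE)
```

Reviewing this inventory on a cadence (and at repo creation) is what turns the "practical rule" above into something enforceable.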

2) Put the requirement into binding language (internal and third party)

You need two parallel implementations:

A. Internal development (policy/standard):

  • Add a control in your SDLC standard: “Static code analysis is required for all in-scope repositories; results are retained; findings are remediated or risk-accepted.” (NIST Special Publication 800-53 Revision 5)

B. Third-party development (contract/SOW):

  • Include a security clause requiring the developer to:
    • run static code analysis using an agreed tool or equivalent capability,
    • provide documented results for each release or agreed checkpoint,
    • remediate findings based on severity timelines or acceptance criteria you define,
    • disclose tool configuration exceptions that could reduce detection coverage. (NIST Special Publication 800-53 Revision 5)

If Procurement resists “tool specifics,” require outcomes: static analysis must be performed and results must be delivered in an auditable format. (NIST Special Publication 800-53 Revision 5)

3) Standardize tool configuration and rulesets

Audits fail when scans are inconsistent across teams.

Minimum configuration decisions to document:

  • tool name, versioning approach, and where configs live (repo, centralized template, or pipeline library)
  • enabled rule packs and language coverage
  • baseline handling (how you treat existing debt vs new findings)
  • criteria for “pass/fail” gating (for example, block merges for certain severities or for specific categories like hard-coded secrets)

Keep it simple. Consistency beats perfect tuning.
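One way to keep gating criteria consistent across teams is to express the pass/fail policy as versioned data that every pipeline consumes. A sketch under that assumption (severity names and categories are illustrative):

```python
# Hypothetical gating policy, stored centrally and referenced by all pipelines.
POLICY = {
    "blocking_severities": {"critical", "high"},
    "blocking_categories": {"hardcoded-secret"},  # blocks regardless of severity
}

def violates_policy(finding, policy=POLICY):
    """True if a single finding should fail the pipeline under the policy."""
    return (finding["severity"] in policy["blocking_severities"]
            or finding["category"] in policy["blocking_categories"])
```

Changing the policy in one place (rather than per-repo) is what makes the "consistency beats perfect tuning" rule practical to uphold.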

4) Integrate static analysis into engineering workflows

You need evidence that scanning happens as part of development, not as an occasional security task.

Common implementation patterns:

  • PR/merge checks: scan runs on pull request; blocks merge if policy fails.
  • Pipeline stage: scan runs during build; publishes artifacts to a central system.
  • Developer IDE hooks (optional): improves early detection but does not replace pipeline evidence.

If your environment includes multiple delivery paths, ensure each path triggers scanning for in-scope code.
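Many SAST tools can export results in the SARIF interchange format, which makes a tool-agnostic pipeline gate straightforward. A minimal sketch of such a gate (the blocking policy and sample rule IDs are assumptions, not a specific product's defaults):

```python
import json

# SARIF reports carry per-result severity in the "level" field.
BLOCKING_LEVELS = {"error"}  # which levels fail the pipeline (a policy decision)

def blocking_findings(sarif_text):
    """Return rule IDs of SARIF results whose level should block the build."""
    report = json.loads(sarif_text)
    blocked = []
    for run in report.get("runs", []):
        for result in run.get("results", []):
            if result.get("level", "warning") in BLOCKING_LEVELS:
                blocked.append(result.get("ruleId", "unknown"))
    return blocked

# In a pipeline step this would exit non-zero on blocking findings, e.g.:
#   sys.exit(1 if blocking_findings(open("report.sarif").read()) else 0)
sample = json.dumps({"runs": [{"results": [
    {"ruleId": "secrets/hardcoded", "level": "error"},
    {"ruleId": "style/naming", "level": "note"},
]}]})
```

Because the gate reads an exported report rather than a tool dashboard, the same report file doubles as retained evidence for the scan run.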

5) Define triage, remediation, and exception handling

Static analysis produces noise unless you define decision rights and closure criteria.

Operationalize with:

  • a triage workflow (assign to repo owner; validate true/false positive)
  • remediation requirements (fix, mitigate, or document risk acceptance)
  • exception process (who can approve, required rationale, expiry/review trigger, compensating controls)

Assessors will expect you to show that documented results lead to action, not just reports stored in a folder. (NIST Special Publication 800-53 Revision 5)
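The expiry/review trigger for exceptions is easy to automate if the exception register is machine-readable. A sketch with hypothetical register entries:

```python
from datetime import date

# Hypothetical exception register entries; real ones would also carry
# rationale, approver, and compensating controls.
EXCEPTIONS = [
    {"id": "EXC-12", "owner": "team-payments", "expires": date(2024, 6, 1)},
    {"id": "EXC-19", "owner": "team-identity", "expires": date(2099, 1, 1)},
]

def expired_exceptions(exceptions, today):
    """Exceptions past their review date; these must be re-approved or closed."""
    return [e["id"] for e in exceptions if e["expires"] <= today]
```

Running a check like this on your governance cadence is one way to demonstrate that exceptions do not "last forever" (mistake #4 below).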

6) Document results in an auditable way

“Document the results” should be satisfied by records that show:

  • scan execution (job logs, pipeline run IDs, timestamps)
  • scan output (findings list, severity, affected files, rule IDs)
  • disposition (fixed commit/PR, false positive justification, accepted risk ticket)
  • re-scan or verification evidence after remediation

Avoid storing only screenshots. Keep machine-readable outputs where possible, plus a human-readable summary for review.
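The four record types above can be tied together in one machine-readable evidence record per scan run. A sketch (field names and the example tool are hypothetical, not a prescribed schema):

```python
import json
from datetime import datetime, timezone

def evidence_record(run_id, tool, tool_version, findings, dispositions):
    """Machine-readable record tying a scan run to its findings and dispositions."""
    return {
        "run_id": run_id,                      # pipeline run ID / job log reference
        "tool": {"name": tool, "version": tool_version},
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "finding_count": len(findings),
        "findings": findings,                  # rule IDs, severity, affected files
        "dispositions": dispositions,          # finding id -> fixed / false-positive / accepted
    }

record = evidence_record(
    "ci-4812", "example-sast", "1.2.3",
    [{"id": "F-1", "rule": "hardcoded-secret", "severity": "high"}],
    {"F-1": "fixed:PR-991"},
)
archived = json.dumps(record, indent=2)  # store alongside the raw tool export
```

A record like this, generated per build and stored immutably, is what lets an assessor trace finding → fix → verification without relying on live dashboard access.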

7) Align with broader control expectations

SA-11(1) is part of “Developer Testing and Evaluation.” In practice, map static analysis to:

  • your secure SDLC standard,
  • vulnerability management workflows,
  • supplier/third-party security requirements.

If you use Daydream to run third-party due diligence and continuous control tracking, treat static analysis as a measurable supplier engineering requirement: collect attestations, sample scan outputs, and release-by-release evidence from developers delivering code into your boundary.

Required evidence and artifacts to retain

Retain artifacts that let an assessor independently validate execution:

Governance artifacts

  • SDLC/security engineering standard stating static analysis requirement (internal developers). (NIST Special Publication 800-53 Revision 5)
  • Third-party contract/SOW clauses requiring static analysis and documented results (external developers). (NIST Special Publication 800-53 Revision 5)
  • Defined exception/risk acceptance procedure tied to security findings.

Operational artifacts

  • CI/CD configuration showing SAST stage (pipeline config files, build logs).
  • Tool configuration/rulesets and any approved deviations.
  • Scan reports (exported results) tied to releases/builds.
  • Tickets/issues for findings with status history and closure evidence (PR links, commits).
  • Evidence of periodic review of outstanding findings and exceptions.

Audit-ready summaries

  • A per-release (or per-sprint) security testing summary that references scan run IDs and high-severity findings disposition.

Common audit questions and hangups

  • “Show me that static analysis is required.” Auditors want binding language: policy/standard and contracts for third parties. (NIST Special Publication 800-53 Revision 5)
  • “What exactly was scanned?” Expect requests for repo lists, service inventories, and proof that coverage matches your system boundary.
  • “Are results documented and retained?” They will ask for scan outputs from specific time windows and trace them to remediation work. (NIST Special Publication 800-53 Revision 5)
  • “How do you handle false positives and exceptions?” Weak exception governance is a frequent hangup.
  • “Is this performed by the developer?” If Security runs scans separately, show how the developer is still responsible and receives/acts on results. (NIST Special Publication 800-53 Revision 5)

Frequent implementation mistakes (and how to avoid them)

  1. Policy says “scan,” but pipelines don’t enforce it. Fix: add required checks in CI and require scan artifacts for release readiness.
  2. Only scanning one flagship repo. Fix: create an inventory of in-scope repositories/services and map each to a scan job.
  3. No documented results, only tool dashboards. Fix: export reports per build/release and retain them with immutable build records.
  4. Exceptions last forever. Fix: require an owner, rationale, and a review trigger tied to change events (new release, major refactor).
  5. Third-party developers are out of band. Fix: put requirements in SOWs, require delivery of results, and verify before accepting code. (NIST Special Publication 800-53 Revision 5)

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Practically, the risk is operational: if you cannot prove static analysis was performed and results were documented, assessors can treat the control as not implemented, which can delay authorization decisions or create POA&M items. (NIST Special Publication 800-53 Revision 5)

Practical execution plan (30/60/90)

Because source-backed timelines were not provided, treat these as phases you can execute in sequence.

Immediate phase

  • Assign control ownership (GRC + Product Security + Engineering).
  • Draft the static analysis requirement language for SDLC standards and third-party SOWs. (NIST Special Publication 800-53 Revision 5)
  • Inventory in-scope repos/services/components and identify current scanning coverage.

Near-term phase

  • Standardize tool configuration and rulesets.
  • Integrate static analysis into CI for in-scope repos; produce retained artifacts.
  • Stand up triage workflow, exception process, and evidence retention location.

Ongoing phase

  • Expand coverage to new repos automatically (templates/golden pipelines).
  • Review outstanding findings and exceptions on a defined governance cadence.
  • For third parties, require scan results as a delivery gate and store them in your evidence repository (Daydream can track these third-party obligations and evidence requests centrally).

Frequently Asked Questions

Does SA-11(1) require a specific static analysis tool?

The requirement is to employ static code analysis tools and document results, not to use a named product. Pick a toolset that supports your languages and produces exportable, reviewable results. (NIST Special Publication 800-53 Revision 5)

If we outsource development, who is responsible for running SAST?

You must require the developer to run static code analysis and document results, so the third party should run it and deliver the outputs. You still own oversight and must retain evidence that the requirement was met. (NIST Special Publication 800-53 Revision 5)

Do we need to block releases on SAST findings?

SA-11(1) does not mandate gating, but auditors will expect you to show findings lead to action. If you don’t gate, document your alternative enforcement method (tickets, risk acceptance, release criteria) and prove it works in practice. (NIST Special Publication 800-53 Revision 5)

What counts as “document the results” for audit purposes?

Keep scan outputs tied to builds/releases plus proof of disposition for material findings (fix, false positive rationale, or approved exception). Also retain evidence of scan execution (pipeline logs or job history). (NIST Special Publication 800-53 Revision 5)

How do we handle legacy code with a huge backlog of findings?

Baseline existing findings and focus enforcement on new/changed code, while tracking legacy remediation as managed risk. Document the baselining approach and show consistent scanning plus prioritized cleanup work.
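Baselining is usually implemented by fingerprinting each finding so known legacy debt is recognized across scans while new findings still get enforced. A minimal sketch of that approach (fingerprint fields and sample findings are illustrative; line numbers are deliberately excluded because they shift as code changes):

```python
import hashlib

def fingerprint(finding):
    """Stable fingerprint so a known legacy finding isn't re-flagged every scan."""
    key = f'{finding["rule"]}|{finding["file"]}|{finding["snippet"]}'
    return hashlib.sha256(key.encode()).hexdigest()

def new_findings(current, baseline_fingerprints):
    """Findings absent from the baseline — the set gating enforcement applies to."""
    return [f for f in current if fingerprint(f) not in baseline_fingerprints]

# Baseline captured once, at adoption time, and stored with the repo.
baseline = {fingerprint({"rule": "sql-injection", "file": "legacy/db.py", "snippet": "query = ..."})}
scan = [
    {"rule": "sql-injection", "file": "legacy/db.py", "snippet": "query = ..."},     # baselined debt
    {"rule": "hardcoded-secret", "file": "new/config.py", "snippet": "token = ..."},  # new, enforce
]
```

Retaining the baseline file itself (and the date it was captured) is the documentation of the baselining approach that assessors will ask for.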

Can we satisfy this with manual code review alone?

No. The text specifically requires static code analysis tools and documented results, so manual review can be complementary but does not replace the tool-based requirement. (NIST Special Publication 800-53 Revision 5)



Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream