SA-11(1): Static Code Analysis

To meet the SA-11(1) (Static Code Analysis) requirement, you must require developers (internal teams and relevant third parties) to run static code analysis tools on your system/components, identify common flaws, and document the results for review and audit. Operationalize this by embedding SAST into CI/CD, defining severity thresholds and remediation workflows, and retaining scan outputs and exception approvals. 1

Key takeaways:

  • You need both execution (run SAST) and proof (documented results, decisions, and remediation). 1
  • Scope includes systems, components, and services, including code delivered by third parties when they develop or provide components you run. 1
  • Audits most often fail on the absence of a repeatable procedure, inconsistent scan coverage, and weak exception handling.

SA-11(1) is a build-time and release-time discipline: code gets scanned automatically for common weakness patterns, and the organization keeps a clear record of what was found and what was done about it. The control text is short, but assessors will expect operational rigor: defined tooling, defined gates, defined roles, and evidence that the process runs consistently across in-scope repositories and delivery pipelines. 1

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat SA-11(1) as a “secure SDLC requirement” that can be tested like any other control. Your job is to make the requirement unambiguous for engineering, ensure it applies to third-party-developed code you rely on, and set expectations for documentation and retention. Engineering’s job is to implement and run the scans, triage findings, remediate or formally accept risk, and prove those steps happened. 1

This page gives requirement-level implementation guidance you can hand to an AppSec lead or platform owner and then audit against: scope, step-by-step actions, minimum artifacts, and the exam questions that commonly stall assessments.

Regulatory text

Requirement (verbatim): “Require the developer of the system, system component, or system service to employ static code analysis tools to identify common flaws and document the results of the analysis.” 1

What the operator must do:

  1. Make SAST mandatory for developers building or supplying code that becomes part of your system, system components, or system services. 1
  2. Run static code analysis tools in a defined, repeatable way to identify common flaws (think: insecure patterns, injection risks, unsafe deserialization, hardcoded secrets, weak crypto usage, etc.). 1
  3. Document results: keep scan outputs and the disposition of findings (fixed, false positive, accepted risk, deferred with tracking). 1

Plain-English interpretation

SA-11(1) means: “No significant code change ships without automated static scanning, and we can prove what the scanner found and how we handled it.” The control is satisfied by disciplined execution plus durable evidence, not by a policy statement alone. 1

Who it applies to

Entity scope

  • Federal information systems implementing NIST SP 800-53 controls. 1
  • Contractor systems handling federal data (including cloud/SaaS providers, integrators, and product teams delivering software used in federal contexts). 1

Operational scope (what software is in)

Apply SA-11(1) to:

  • First-party developed code (apps, APIs, microservices, infrastructure-as-code when it contains logic or policy-like statements).
  • System components and services where you control or can contractually require the developer to scan (custom modules, plugins, extensions, scripts, middleware). 1
  • Third-party developed deliverables when a third party is acting as “the developer” for your environment (outsourced dev shops, contractors, embedded consultants). Contract language matters here.

A practical scoping rule that auditors accept: “All production-deployed repositories owned by our organization, plus any externally developed code delivered for production use, are subject to SAST scans and documented results.”

What you actually need to do (step-by-step)

1) Assign ownership and write the requirement in engineering terms

  • Control owner (GRC): defines scope, minimum documentation, retention, and the exception process.
  • Control operator (AppSec/Platform): configures tools, enforces pipeline rules, runs reporting.
  • Engineering managers: ensure teams remediate or risk-accept findings.

Write one page of “SAST Standard” with:

  • In-scope repos and languages (or “all supported languages in production repos”).
  • Minimum scan events (PR, merge, release build, scheduled scans).
  • Required documentation fields for each significant finding (ID, severity, location, status, disposition rationale). 1
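The required documentation fields above can be expressed as a structured record so completeness is checkable rather than aspirational. This is a minimal sketch; the field names, status values, and `Finding` class are assumptions for illustration, not a specific tool's schema.

```python
from dataclasses import dataclass

# Hypothetical disposition statuses for a SAST finding record.
VALID_STATUSES = {"open", "fixed", "false_positive", "accepted_risk", "deferred"}

@dataclass
class Finding:
    finding_id: str  # scanner-assigned ID
    severity: str    # e.g. "critical", "high", "medium", "low"
    location: str    # file and line, e.g. "src/auth.py:42"
    status: str      # one of VALID_STATUSES
    rationale: str   # disposition rationale (required once status leaves "open")

    def is_complete(self) -> bool:
        """Audit-complete means a valid status, and a rationale for any
        disposition other than "open"."""
        if self.status not in VALID_STATUSES:
            return False
        if self.status != "open" and not self.rationale.strip():
            return False
        return True

f = Finding("SAST-101", "high", "src/auth.py:42", "false_positive", "")
print(f.is_complete())  # rationale missing -> False
```

A check like `is_complete` is cheap to run in CI or in a nightly report, which keeps the documentation requirement from silently decaying.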

2) Select and standardize static analysis tooling

Pick tools your teams will actually run. Your evidence will be judged on coverage and repeatability, not brand.

  • Define the tool-of-record for each major language/framework.
  • Standardize rulesets (baseline secure coding rules and any organization-specific checks).
  • Standardize output storage (central artifact repo, SIEM, ticketing system, or GRC evidence vault).

3) Embed SAST into CI/CD with enforceable gates

Your goal: prevent “scan drift” where teams stop scanning quietly.

  • Run scans automatically on pull requests and/or merges to main.
  • Define a quality gate: what severities block a merge or release, and what requires an approved exception.
  • Ensure scanners run in a consistent environment (containerized scanner runner or managed CI job template).

If you cannot block merges for all repos immediately, require scanning plus documented triage, then phase in gates repo-by-repo.
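A quality gate of this kind reduces to a small filter over exported findings. The sketch below assumes findings have already been normalized to a list of dicts with `id` and `severity` fields (real scanners emit formats like SARIF, which you would parse first); the field names and threshold are illustrative.

```python
# Severities that block a merge unless the finding has an approved
# exception ID on record. Threshold is an assumption, not a mandate.
BLOCKING = {"critical", "high"}

def gate(findings, exceptions):
    """Return blocking findings that lack an approved exception."""
    return [f for f in findings
            if f["severity"].lower() in BLOCKING and f["id"] not in exceptions]

findings = [
    {"id": "F-1", "severity": "High"},
    {"id": "F-2", "severity": "medium"},
    {"id": "F-3", "severity": "critical"},
]
blockers = gate(findings, exceptions={"F-3"})  # F-3 has an approved exception
print([f["id"] for f in blockers])  # → ['F-1']
```

In CI, a non-empty result would fail the job; the exception set should come from your approval system of record, not from a file a developer can edit in the same PR.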

4) Define triage and remediation workflow

Static tools generate noise. The control still requires documented results and action. Minimum workflow:

  1. Scanner runs and produces findings.
  2. Findings are triaged by engineering and/or AppSec (valid, false positive, needs investigation).
  3. Valid findings are remediated or risk-accepted with explicit approval and an expiry/review trigger.
  4. Work is tracked in tickets with traceability back to scan IDs and commits.

Make “document the results” real by requiring:

  • A ticket link or case record for each High/Critical (or your chosen threshold).
  • A reason code for “false positive” classifications.
  • Evidence of fix (commit/PR reference) or exception approval. 1
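Those documentation rules can be enforced mechanically before a disposition record is accepted. A hedged sketch, assuming hypothetical field names (`ticket`, `reason_code`, `fix_ref`) rather than any particular ticketing system's schema:

```python
def disposition_errors(record: dict) -> list[str]:
    """Return the documentation rules this disposition record violates."""
    errors = []
    sev = record.get("severity", "").lower()
    if sev in {"high", "critical"} and not record.get("ticket"):
        errors.append("High/Critical finding needs a ticket link")
    if record.get("status") == "false_positive" and not record.get("reason_code"):
        errors.append("False positive needs a reason code")
    if record.get("status") == "fixed" and not record.get("fix_ref"):
        errors.append("Fixed finding needs a commit/PR reference")
    return errors

rec = {"severity": "High", "status": "false_positive", "ticket": "SEC-42"}
print(disposition_errors(rec))  # → ['False positive needs a reason code']
```

Running this on every disposition change (for example, as a ticket-workflow validation) is what turns "document the results" from a policy sentence into evidence.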

5) Extend the requirement to third parties who develop code for you

SA-11(1) says “require the developer,” which includes external developers where you can impose terms. 1

Contract and intake controls:

  • Contract clause: third party must run static analysis with agreed rulesets (or provide results that meet your standard).
  • Delivery requirement: provide a scan report for the delivered build or commit range, plus a list of open findings and dispositions.
  • Verification: you re-scan delivered code in your pipeline where feasible.
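The verification step can be partly automated by diffing the supplier's report against your own re-scan. Finding IDs rarely match across tools, so the sketch below keys on rule and location instead; both field names are assumptions for illustration.

```python
def undisclosed(supplier_report, rescan):
    """Findings your re-scan surfaced that the supplier did not report."""
    key = lambda f: (f["rule"], f["location"])
    reported = {key(f) for f in supplier_report}
    return [f for f in rescan if key(f) not in reported]

supplier = [{"rule": "hardcoded-secret", "location": "cfg.py:10"}]
ours = [
    {"rule": "hardcoded-secret", "location": "cfg.py:10"},
    {"rule": "sql-injection", "location": "db.py:7"},
]
print(undisclosed(supplier, ours))  # → [{'rule': 'sql-injection', 'location': 'db.py:7'}]
```

A non-empty delta is a conversation with the vendor, not automatically a contract breach, but it should be documented either way.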

6) Produce repeatable reporting for audits

You need an audit-ready view that answers:

  • Which repos are in scope?
  • When was the last scan per repo?
  • What was found?
  • What’s the disposition and aging for open items?
  • What exceptions exist and who approved them?
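The first two questions reduce to a join of the repo inventory against last-scan dates, with a staleness flag. A minimal sketch; the inventory and scan-date structures, and the 30-day threshold, are assumptions you would replace with your own data sources and policy.

```python
from datetime import date

STALE_AFTER_DAYS = 30  # illustrative policy threshold

def coverage_report(inventory, last_scans, today):
    """One row per in-scope repo: last scan date and staleness flag."""
    rows = []
    for repo in sorted(inventory):
        scanned = last_scans.get(repo)
        age = (today - scanned).days if scanned else None
        rows.append({
            "repo": repo,
            "last_scan": scanned.isoformat() if scanned else "never",
            "stale": age is None or age > STALE_AFTER_DAYS,
        })
    return rows

inventory = {"payments-api", "web-frontend", "legacy-batch"}
last_scans = {"payments-api": date(2024, 6, 1), "web-frontend": date(2024, 5, 1)}
for row in coverage_report(inventory, last_scans, today=date(2024, 6, 10)):
    print(row)
```

Note that the report is driven by the inventory, not by the scanner: repos that have never been scanned show up as `"never"` instead of silently disappearing, which is exactly the coverage gap assessors probe for.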

This is where tools like Daydream help in practice: map SA-11(1) to a clear control owner, an implementation procedure, and recurring evidence artifacts so you are not reconstructing proof at audit time. 1

Required evidence and artifacts to retain

Keep evidence that shows mandate, execution, results, and disposition.

Minimum artifacts (audit-ready):

  • SAST standard/procedure (scope, tools, gating rules, triage workflow).
  • Current inventory of in-scope repos/components and responsible owners.
  • Tool configuration snapshots (rulesets, pipeline templates, gating settings).
  • Scan outputs (exported reports or immutable logs) showing findings. 1
  • Triage records (tickets/cases) for significant findings with status and rationale.
  • Remediation evidence (PRs, commits, release notes) tied to finding IDs.
  • Exception records: risk acceptance approvals, compensating controls, review date/trigger.
  • Third-party deliverable evidence: contractual requirement plus received scan reports (or your independent scan results).

Retention: align to your broader SDLC evidence retention policy; ensure you can produce evidence for the assessment period without depending on ephemeral CI logs.

Common exam/audit questions and hangups

Assessors commonly press on these points:

  • Coverage: “Show me that all in-scope repos are scanned, not just your flagship app.”
  • Recency: “When were scans last run, and what changed since?”
  • Consistency: “Are rulesets standardized, or does each team tune them to zero?”
  • Documentation quality: “Where are the documented results and dispositions?” 1
  • Exceptions: “Who can accept risk, and how do you ensure exceptions don’t last forever?”
  • Third parties: “If a contractor wrote this module, how did you require static analysis and receive results?” 1

Hangup to anticipate: teams often rely on a scanner dashboard that later changes or loses history. Export or snapshot what you need for the audit record.
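One lightweight way to snapshot scan output is to export the findings with a capture timestamp and a content hash, so the record is self-verifying even after the dashboard loses history. A sketch under the assumption that findings are JSON-serializable dicts:

```python
import hashlib
import json
from datetime import datetime, timezone

def snapshot(findings: list[dict]) -> dict:
    """Wrap exported findings with a UTC timestamp and a SHA-256 of the
    canonicalized payload, making later tampering or drift detectable."""
    body = json.dumps(findings, sort_keys=True)
    return {
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
        "findings": findings,
    }

snap = snapshot([{"id": "F-1", "severity": "high"}])
print(snap["sha256"][:12])
```

Store the snapshot in your artifact repository or GRC evidence vault with retention matching the assessment period; the hash lets an assessor confirm the export has not changed since capture.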

Frequent implementation mistakes (and how to avoid them)

  1. Policy-only compliance. A PDF that says “we do SAST” fails if you cannot show scan runs and documented results. Fix: require pipeline evidence and ticket linkage. 1
  2. Partial coverage hidden by a single report. One repo scans; others do not. Fix: build an inventory-to-scan coverage report.
  3. No documented disposition. Findings exist, but nobody can explain why they were closed. Fix: standardize disposition categories and require rationale fields.
  4. Ruleset drift. Teams disable rules to reduce noise. Fix: central ruleset governance with limited override path and logging.
  5. Third-party blind spot. Outsourced code arrives without scan evidence. Fix: add contractual requirements and re-scan on receipt. 1

Enforcement context and risk implications

NIST SP 800-53 is a control framework, not an enforcement agency. Your enforcement exposure usually comes from the authority or contract that incorporates NIST controls (for example, federal ATO decisions, contractual remedies, or audit findings). The practical risk is straightforward: static flaws that slip into production can become exploitable vulnerabilities, and inability to document results becomes an assessment deficiency even if your engineers did scan informally. 1

Practical execution plan (30/60/90)

Use this plan to get to “auditable” quickly, then mature.

First 30 days (baseline and proof)

  • Name control owner and operator; publish the SAST standard with scope and documentation rules. 1
  • Inventory in-scope repos/components; assign engineering owners.
  • Select tool-of-record per primary language and create standard CI job templates.
  • Turn on scans for the highest-risk repos first; start capturing exported reports.
  • Stand up a simple exception process (approval authority, ticket template, required rationale).

Days 31–60 (coverage and workflow)

  • Expand scanning coverage across all in-scope repos.
  • Implement triage workflow in the ticketing system with required fields and linkage to scan IDs.
  • Define gating thresholds for new findings on PR/merge for at least the most critical repos.
  • Add third-party requirement language to SOW/MSA templates; update intake checklist for externally developed code. 1

Days 61–90 (audit readiness and optimization)

  • Produce a standing “SA-11(1) evidence packet” that can be regenerated on demand: inventory, last-scan dates, sample reports, ticket samples, exception list.
  • Add scheduled scans (to catch drift and new rules) and reporting for scan failures.
  • Review false-positive rates and tune rulesets centrally without weakening baseline coverage.
  • Use Daydream (or your GRC system) to map SA-11(1) to owners, procedures, and recurring evidence artifacts so collection is routine, not heroic. 1

Frequently Asked Questions

Does SA-11(1) require a specific static analysis tool?

No tool is named. The requirement is that developers use static code analysis tools and that you document the results. 1

Does this apply to third-party developers and contractors?

Yes, if a third party is acting as the developer of a system, component, or service you rely on, you should contractually require static analysis and obtain documented results, and re-scan where feasible. 1

What counts as “document the results” for auditors?

Keep scan reports (or immutable exports) plus a record of triage and disposition for significant findings, including remediation links or approved exceptions. 1

Can we meet SA-11(1) if we only scan right before release?

You can start there, but auditors usually expect SAST to be integrated into the development workflow with repeatable runs and consistent evidence. Earlier scanning reduces rework and makes documentation easier to maintain.

How do we handle false positives without failing the control?

Create a documented disposition category for false positives, require rationale, and restrict who can mark items as false positive for higher-severity findings. Keep the record tied to the scan output.

What if our CI logs don’t retain scan history long enough?

Export scan results to a system with retention aligned to your audit needs (artifact storage, ticketing attachments, or a GRC evidence repository) so you can reproduce proof across the assessment period.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream