SA-13: Trustworthiness
The SA-13: Trustworthiness requirement means you must treat “trust” as an engineered property of the system and its components: define what must be trustworthy, set objective criteria, select trustworthy components/suppliers, and retain evidence that those criteria are met throughout the system lifecycle [1]. Operationalize it by assigning an owner, implementing a repeatable process, and producing recurring artifacts.
Key takeaways:
- SA-13 is easiest to pass when you define measurable trustworthiness criteria per system/component and tie them to procurement and engineering gates [1].
- Auditors look for evidence that “trustworthy” is not a slogan: documented decisions, supplier checks, acceptance criteria, and ongoing monitoring records [1].
- Your fastest path is a control map: owner, procedure, and a standing evidence set you can produce on demand [2].
Compliance teams usually stumble on SA-13 because “trustworthiness” sounds subjective. Treat it as a lifecycle control: you set explicit trustworthiness requirements for the system and its critical components, you ensure those requirements are met before deployment, and you keep proving they remain true after changes. SA-13 sits in the System and Services Acquisition (SA) family, so it connects directly to procurement decisions, supplier selection, integration, and acceptance testing, not just cybersecurity operations [1].
If you support federal information systems or you are a contractor operating systems that handle federal data, SA-13 tends to surface during system authorization work, continuous monitoring, third-party due diligence, and major technology refreshes [1]. Practically, the best way to implement the SA-13: Trustworthiness requirement is to define trustworthiness criteria that are testable and documentable, then enforce them through design reviews, purchase approvals, and release gates.
This page gives you requirement-level implementation guidance you can hand to a control owner tomorrow: who does what, what evidence to retain, what auditors ask, and how to avoid the common failure mode of having “good security” but no SA-13 trail.
Regulatory text
Requirement: “NIST SP 800-53 control SA-13.” [2]
Operator translation: You must implement SA-13 in a way that demonstrates that the system (and the components and services it depends on) is trustworthy enough for its intended use and risk level, and you must be able to show how you made that determination with defensible evidence [1].
Because the provided excerpt is short, implementation hinges on assessment expectations: your program should show (1) defined trustworthiness criteria, (2) a method to select/approve components and third parties against criteria, and (3) ongoing evidence that changes don’t erode trustworthiness [1].
Plain-English interpretation (what SA-13 is asking you to prove)
For audit purposes, “trustworthiness” needs to become a set of documented claims you can defend, such as:
- The system is composed of components you have evaluated and approved.
- The third parties who build, host, or support the system meet your security and integrity expectations.
- You have controls that reduce the chance of malicious, counterfeit, tampered, or poorly engineered components entering the environment.
- Engineering and procurement decisions are recorded and repeatable, not ad hoc.
Think of SA-13 as a join between security engineering and third-party risk management: you are proving you can trust what you buy, what you integrate, and what you run.
Who it applies to (entity and operational context)
Entity types
- Federal information systems (internal agency systems) [1].
- Contractor systems handling federal data (including systems operated by cloud/service providers, integrators, and managed service providers) [1].
Operational contexts where SA-13 becomes “real”
- New system builds and major modernization programs.
- Procurement of critical software/hardware, including managed services and cloud services.
- Supply chain changes: new subcontractors, new hosting regions, new build pipelines.
- System authorization / assessment cycles where you must provide control implementation details and evidence [1].
What you actually need to do (step-by-step)
1) Assign ownership and define scope
- Name a control owner (often Security Engineering, with Procurement/TPRM as key stakeholders).
- Define in-scope systems (start with systems handling federal data and high-impact business processes).
- Identify “critical components”: software components, build pipelines, hosting, identity providers, endpoint management, and any third party with privileged access.
Deliverable: SA-13 control statement with named owner, scope, and dependencies [1].
2) Define trustworthiness criteria you can test
Create a “Trustworthiness Criteria” standard that answers:
- What properties must be true? (examples: integrity, provenance, secure development practices, configuration control, incident reporting, access governance)
- What evidence proves each property?
- What is the approval gate (design review, procurement approval, release sign-off)?
Keep it concrete. Auditors accept “we require X and verify with Y artifact,” not “we trust vendors with good reputations.”
Deliverable: Trustworthiness Criteria Standard (system-level and component-level).
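The criteria standard above can be sketched as machine-readable records so each property is tied to the artifact that proves it and the gate where it is checked. This is a minimal illustration only; the `Criterion` fields and sample entries below are invented, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Criterion:
    prop: str      # property that must be true (e.g. "provenance")
    evidence: str  # artifact that proves it (e.g. "signing attestation")
    gate: str      # where it is verified (e.g. "release sign-off")

# Hypothetical sample entries from a Trustworthiness Criteria Standard.
CRITERIA = [
    Criterion("integrity", "signed build attestation", "release sign-off"),
    Criterion("provenance", "approved-supplier record", "procurement approval"),
    Criterion("secure development", "vendor SDLC report", "procurement approval"),
]

def criteria_for_gate(gate: str) -> list[Criterion]:
    """Return the criteria an approver must verify at a given gate."""
    return [c for c in CRITERIA if c.gate == gate]
```

Structuring the standard this way makes the "we require X and verify with Y artifact" pattern explicit, and lets a ticketing integration surface the right checklist at each gate.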
3) Embed criteria into acquisition and engineering workflows
Map the criteria to the places decisions happen:
- Procurement intake: require a security review for any new third party that will handle federal data or receive privileged access.
- Architecture review: require explicit component selection justification for identity, logging, encryption, CI/CD, and hosting.
- Change management: require reassessment when you swap critical components, add new third parties, or change build/deploy paths.
Deliverable: workflow checklists and approval gates (linked to ticketing/procurement systems).
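One way to make those approval gates enforceable rather than advisory is a small evidence check at each gate. The gate names and required artifacts below are hypothetical examples of how the checklists might be wired into a ticketing or procurement system.

```python
# Hypothetical mapping from approval gate to the evidence artifacts that
# must be attached before the gate can pass.
REQUIRED_EVIDENCE = {
    "procurement approval": {"third-party risk assessment", "contract security addendum"},
    "release sign-off": {"acceptance checklist", "dependency inventory"},
}

def gate_passes(gate: str, attached: set[str]) -> tuple[bool, set[str]]:
    """Return (passed, missing artifacts) for a gate decision.

    An unknown gate has no requirements in this sketch and passes trivially.
    """
    missing = REQUIRED_EVIDENCE.get(gate, set()) - attached
    return (not missing, missing)
```

A blocked gate then produces its own evidence: the ticket shows which artifact was missing and who supplied it before approval.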
4) Implement third-party trustworthiness checks (TPRM alignment)
For third parties supporting the system, define a minimum due diligence package aligned to your criteria, such as:
- Security controls documentation and audit reports (if available).
- Secure development and vulnerability handling practices for software suppliers.
- Access model and privileged access controls for service providers.
- Incident notification commitments and cooperation requirements.
Deliverable: third-party risk assessments and approval records tied to the system/component.
5) Implement component provenance and integrity controls (practical minimums)
Select controls that produce durable evidence:
- Approved supplier lists for critical categories.
- Controlled repositories and signed builds (where feasible).
- Bill of materials or dependency inventory for critical software components.
- Verification steps for hardware/software authenticity (where applicable).
Deliverable: provenance and integrity evidence tied to releases and procurement.
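A minimal sketch of an integrity check, assuming an approved-hash manifest is produced at approval time and stored with the provenance evidence. The manifest shape and artifact name are invented for illustration; real pipelines typically pair this with signature verification.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest used as the integrity fingerprint of an artifact."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical approved-hash manifest, normally written at procurement
# approval and retained with the provenance evidence for the release.
APPROVED = {"widget-1.2.0.tar.gz": sha256_of(b"known-good release bytes")}

def is_approved(name: str, data: bytes) -> bool:
    """True only if the artifact is listed and its hash matches the manifest."""
    return APPROVED.get(name) == sha256_of(data)
```

The same check run at repository ingest and again at deploy gives you two durable evidence points per component.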
6) Document acceptance and ongoing monitoring
- Acceptance: keep records of what criteria were checked at go-live and what exceptions were granted.
- Ongoing: trigger revalidation when there is a major change, a supplier incident, or a material vulnerability in a critical component.
Deliverable: acceptance memo, exception register, and monitoring triggers.
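The exception register stays auditable when every record carries an explicit expiration, so "we'll fix it later" becomes a dated entry you can query. This is a sketch under that assumption; all field names are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ExceptionRecord:
    component: str
    risk_statement: str
    compensating_controls: list[str]
    expires: date  # explicit revisit condition

def due_for_revalidation(register: list[ExceptionRecord], today: date) -> list[ExceptionRecord]:
    """Exceptions at or past their expiration date that must be revisited."""
    return [e for e in register if today >= e.expires]
```

Running this query on a schedule is one concrete way to implement the "monitoring triggers" deliverable for exceptions.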
7) Map SA-13 to a repeatable evidence set (assessment readiness)
Your fastest readiness win is to build the “one folder” auditors ask for:
- Owner
- Procedure
- Evidence cadence and storage location
- Last review date and exceptions
This is the recommended control packaging approach for SA-13: map the requirement to a control owner, implementation procedure, and recurring evidence artifacts [2]. If you use Daydream, implement SA-13 as a requirement page with an assigned owner, linked workflows, and an evidence checklist so you can produce proof without rebuilding the narrative every assessment cycle.
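The “one folder” idea can also be checked mechanically: an index entry per artifact with its review cadence and last review date lets you list what is overdue before an assessor asks. A sketch with invented artifact names and cadences:

```python
from datetime import date, timedelta

def stale_artifacts(index: list[dict], today: date) -> list[str]:
    """Names of artifacts whose last review is older than their cadence."""
    return [
        e["name"]
        for e in index
        if today - e["last_review"] > timedelta(days=e["cadence_days"])
    ]

# Hypothetical evidence index entries; real entries would also record a
# storage location and an owner.
EXAMPLE_INDEX = [
    {"name": "third-party risk assessment", "last_review": date(2023, 1, 1), "cadence_days": 365},
    {"name": "exception register", "last_review": date(2024, 5, 1), "cadence_days": 90},
]
```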
Required evidence and artifacts to retain
Keep artifacts tied to specific systems and time periods (what was true, when it was true, and who approved it):
Governance
- SA-13 control statement and RACI (owner, approvers, contributors).
- Trustworthiness Criteria Standard.
- System/component criticality classification.
Acquisition and third-party due diligence
- Third-party intake records and risk assessments.
- Contract/security addendum requirements that support your criteria (access, incident notice, subcontractor controls).
- Approval/exception decisions with rationale.
Engineering and technical integrity
- Architecture review records showing selected components and justification.
- Release/acceptance checklists that include trustworthiness criteria.
- Dependency inventory for critical applications, plus change history.
- Secure build/provenance artifacts if you have them (build logs, signing attestations).
Operations
- Change tickets that show reassessment triggers were followed.
- Exception register and compensating controls.
Common exam/audit questions and hangups
Expect these lines of questioning:
- “Define trustworthiness for this system. Where is it documented?” [1]
- “Which components are considered critical, and why?”
- “Show the approval trail for selecting this cloud service / identity provider / CI/CD tool.”
- “How do you prevent unapproved suppliers or components from entering production?”
- “What causes you to reassess a supplier’s trustworthiness after onboarding?”
- “Show exceptions and compensating controls.”
Hangup: teams can describe what they do, but cannot show the decision record. Fix that by making trustworthiness an explicit gate in procurement and architecture tickets.
Frequent implementation mistakes (and how to avoid them)
- Mistake: treating SA-13 as a pure vendor risk control.
  Fix: scope includes internal components and build processes, not only external third parties.
- Mistake: criteria are vague (“must be secure”).
  Fix: rewrite criteria into testable statements with required artifacts (policy, report, config, log, review record).
- Mistake: trustworthiness reviews happen once at onboarding.
  Fix: define reassessment triggers tied to change management and supplier events.
- Mistake: evidence exists but is scattered across teams.
  Fix: create a single SA-13 evidence index with pointers to systems of record (procurement tool, ticketing, GRC).
- Mistake: exceptions are informal.
  Fix: enforce a written exception record with expiration conditions and compensating controls.
Enforcement context and risk implications (practical)
No public enforcement cases were provided in the source catalog for SA-13, so focus on assessment and authorization risk: weak SA-13 evidence can stall an ATO package, delay procurement approvals, or force costly remediation during an assessment [1]. Operationally, SA-13 gaps also increase exposure to supply chain compromise and component integrity failures, which tend to have high blast radius because they affect many systems through shared dependencies.
Practical 30/60/90-day execution plan
First 30 days (establish control shape)
- Assign SA-13 owner and define in-scope systems and critical components list.
- Draft Trustworthiness Criteria Standard with evidence requirements.
- Create an SA-13 evidence index (where artifacts live, who updates, how often).
Days 31–60 (wire into workflows)
- Add trustworthiness checkpoints to procurement intake and architecture review.
- Create a third-party due diligence checklist aligned to criteria for in-scope systems.
- Stand up an exception process with approvers and documentation rules.
Days 61–90 (prove operation and close gaps)
- Run SA-13 through one real onboarding or major change to validate the workflow.
- Perform a mini internal assessment: pick one system and assemble the full SA-13 evidence pack.
- Fix the top evidence gaps (missing approvals, missing criteria mapping, missing reassessment triggers).
If you manage controls in Daydream, mirror these phases as tasks under SA-13 with assigned owners and required uploads so evidence is produced as work happens, not retrofitted.
Frequently Asked Questions
What counts as “trustworthiness” evidence for SA-13?
Evidence that you defined criteria, applied them to system/component decisions, and recorded approvals and exceptions. Typical proof includes documented criteria, third-party assessments, architecture reviews, and release/acceptance records [1].
Does SA-13 apply to cloud providers and SaaS tools?
Yes if the service is part of the system boundary or supports processing/handling of federal data, or has privileged access. Treat the provider as a critical component and apply your trustworthiness criteria through due diligence and approval gates [1].
How do we define “critical components” without over-scoping?
Start with components that can materially impact confidentiality, integrity, or availability: identity, logging, build/deploy, hosting, and any third party with admin access. Document the selection rationale so you can defend the boundary during assessment.
We have strong security controls already; why do auditors still flag SA-13?
SA-13 findings often come from missing decision records and missing linkage between acquisition/engineering choices and explicit criteria. Create a repeatable procedure and an evidence set that shows who approved what and why [2].
How should we handle exceptions (e.g., urgent procurement)?
Use a formal exception record with scope, risk statement, compensating controls, and an explicit revisit condition. Keep the exception tied to the system and component so it is discoverable during audits.
What is the simplest way to operationalize the SA-13: Trustworthiness requirement in a GRC tool?
Create one SA-13 control record with an owner, a step-by-step procedure, and a checklist of recurring artifacts (intake reviews, approvals, exceptions, monitoring triggers). Daydream works well when you attach those artifacts directly to the requirement so the evidence trail is continuous [2].
Footnotes
[1] NIST SP 800-53 Rev. 5.
[2] NIST SP 800-53 Rev. 5 (OSCAL JSON).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream