SA-5(3): High-level Design
To meet the SA-5(3) High-level Design requirement, you must produce and maintain a high-level system design that is consistent with security requirements and is usable for security review, change control, and assessment. Operationalize it by standardizing a design template, tying it to requirements and architecture decisions, and retaining versioned design evidence for each system release.
Key takeaways:
- High-level design is an auditable artifact, not an engineer’s slide deck.
- The design must be traceable to security requirements and kept current as the system changes.
- Your fastest path is a repeatable template + ownership + evidence rhythm tied to SDLC gates.
SA-5(3) sits in the System and Services Acquisition family and focuses on design-level accountability. For a CCO, GRC lead, or Compliance Officer, the practical objective is simple: ensure each in-scope system has a high-level design that a reviewer can use to understand security-relevant architecture choices, validate requirements coverage, and evaluate change impact without reverse-engineering the codebase.
Teams often “have architecture,” but it lives in tribal knowledge, whiteboards, or unversioned diagrams. Auditors do not accept that. They want a durable artifact with a clear owner, review history, and alignment to requirements. This is especially true in federal contexts or contractor environments handling federal data, where NIST SP 800-53 alignment is a core assessment expectation 1.
This page translates SA-5(3) into an implementation playbook you can run immediately: who owns it, what to produce, how to review it, what evidence to retain, and what exam teams typically challenge. You’ll also get a practical execution plan and FAQs for common edge cases like agile delivery, third-party platforms, and inherited architecture.
Regulatory text
Provided excerpt: “NIST SP 800-53 control SA-5.3.” 2
Operator interpretation for SA-5(3): High-level Design:
Treat “high-level design” as a required, reviewable description of the system’s architecture and security-relevant design decisions. Your obligation is to (1) create it for each in-scope system, (2) keep it aligned to security requirements and major changes, and (3) retain evidence that it was reviewed and used as part of governance 3.
Practical standard: If a knowledgeable reviewer cannot answer “how does security work here?” within a short read of your design package, your artifact is not audit-ready.
Plain-English interpretation (what SA-5(3) really demands)
SA-5(3) expects you to document the architecture at a level above implementation details:
- What the system is and does (scope, boundaries).
- Which components exist and how they interact (services, data stores, external dependencies).
- Where trust boundaries are and how data flows.
- Which security controls are addressed by the design (authn/authz, encryption, logging, segmentation, key management, admin access patterns).
- What design decisions were made and why, especially when tradeoffs exist.
A strong SA-5(3) program makes architecture review repeatable. A weak one relies on “we can explain it live,” which fails when staff changes, incidents occur, or assessors ask for point-in-time evidence.
Who it applies to (entity and operational context)
Entities commonly in scope:
- Federal information systems and programs assessed against NIST SP 800-53 1.
- Contractors and service providers handling federal data where NIST SP 800-53 controls are flowed down contractually or via system authorization requirements 1.
Operational contexts that trigger real scrutiny:
- New system authorizations or major system upgrades.
- Material architecture changes (new identity provider, new hosting model, new data store, new third-party integration).
- Boundary changes (adding/exposing APIs, onboarding a SaaS, connecting to new networks).
- Incident-driven reviews where assessors ask, “Show us the design as it existed before the event.”
What you actually need to do (step-by-step)
1) Assign ownership and define “in-scope system”
- Name a control owner (often Architecture, Engineering, or Security Engineering) and a compliance owner (GRC).
- Define which systems need a high-level design package (customer-facing apps, data platforms, identity systems, shared infrastructure, and any system storing or processing regulated/federal data).
Tip: Don’t scope only “applications.” Include shared services where security is implemented (identity, logging, key management).
2) Standardize a high-level design template (make it hard to do wrong)
Create a template that teams must complete. Minimum fields:
- System overview: purpose, users, environment(s).
- System boundary: what’s in/out; key third parties and dependencies (IdP, payment processor, CI/CD, monitoring).
- Architecture diagram(s): component diagram plus a trust boundary diagram.
- Data flows: key data types, entry/exit points, and where sensitive data is stored and transmitted.
- Security mechanisms: authentication/authorization approach, encryption approach, secrets and key management, admin access model, logging/monitoring, backup/recovery design assumptions.
- Assumptions and constraints: inherited controls, platform constraints, accepted risks.
- Design decisions log: architecture decision records (ADRs) or equivalent, linked.
Keep it lightweight enough for teams to complete, but structured enough for consistent assessment.
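A hypothetical markdown skeleton for that template (section names, ordering, and header fields are illustrative, not mandated by SA-5(3)):

```markdown
# High-Level Design: <system name>
Version | Date | Owner | Approvers | Release/change ID

## 1. System overview
Purpose, users, environments (dev/stage/prod).

## 2. System boundary
In scope / out of scope; key third parties and dependencies (IdP, payment processor, CI/CD, monitoring).

## 3. Architecture diagrams
Component diagram; trust-boundary diagram (linked or embedded, versioned).

## 4. Data flows
Key data types, entry/exit points, where sensitive data is stored and transmitted.

## 5. Security mechanisms
AuthN/AuthZ, encryption, secrets and key management, admin access model,
logging/monitoring, backup/recovery design assumptions.

## 6. Assumptions and constraints
Inherited controls, platform constraints, accepted risks.

## 7. Design decision log
Links to ADRs or equivalent records.
```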
3) Map the design to security requirements and review gates
Tie SA-5(3) to your SDLC governance:
- Design review gate: no production release for a new system or major change without an updated high-level design.
- Security review: AppSec or Security Architecture signs off that the design addresses security requirements.
- Change control: architecture-impacting changes must reference the design artifact and update it.
If you already have threat modeling or architecture review, make the high-level design the entry document that feeds those processes.
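One way to enforce the design review gate mechanically is a CI check. The sketch below is illustrative, not a standard implementation: the path patterns and the design-doc location (`docs/design/high-level-design.md`) are hypothetical and would be tuned to your repo.

```shell
# Hypothetical CI gate: given the list of files changed in a PR (one path per
# line on stdin), require a design-doc update whenever architecture-relevant
# paths change. Patterns below are illustrative examples only.
design_gate() {
  arch_paths='^(terraform/|src/auth/|src/gateway/)'  # security-relevant code paths
  design_doc='docs/design/high-level-design.md'      # the SA-5(3) design package

  changed=$(cat)
  if printf '%s\n' "$changed" | grep -Eq "$arch_paths" \
     && ! printf '%s\n' "$changed" | grep -qxF "$design_doc"; then
    echo "FAIL: architecture-impacting change without a design update"
  else
    echo "PASS"
  fi
}

# Example: a PR touching auth code but not the design doc fails the gate.
printf 'src/auth/login.go\nREADME.md\n' | design_gate
# prints: FAIL: architecture-impacting change without a design update
```

In CI, the changed-file list would typically come from something like `git diff --name-only` against the target branch, and a FAIL result blocks the merge until the design package is updated in the same change.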
4) Establish version control and evidence discipline
Your design must be:
- Versioned (Git, controlled document repository, or GRC evidence vault).
- Dated and associated to a release/change identifier (ticket, PR, change request).
- Approved (record who reviewed, what issues were found, and resolution).
A practical approach is “docs as code” in the same repo as the system, with mandatory pull request review by security/architecture.
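A minimal "docs as code" sketch of that discipline (file paths, change IDs, and doc contents are hypothetical) shows how version control answers the point-in-time evidence question directly:

```shell
# Sketch: the design doc lives in the system's repo and is tagged with each
# release, so any past release's design can be produced for an assessor.
set -eu
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email "grc@example.com"   # illustrative identity
git config user.name "Evidence Bot"

mkdir -p docs/design
printf 'Boundary: app + RDS; IdP: corporate SSO\n' > docs/design/high-level-design.md
git add -A; git commit -qm "design v1.0 approved (CHG-1042)"; git tag v1.0

# A later release changes the architecture; the old version stays retrievable.
printf 'Boundary: adds payments SaaS integration\n' >> docs/design/high-level-design.md
git add -A; git commit -qm "design v1.1 approved (CHG-1107)"; git tag v1.1

# "Show us the design as it existed at release v1.0":
git show v1.0:docs/design/high-level-design.md
```

The commit messages double as approval pointers (ticket/change IDs), and `git log -- docs/design/` reproduces the review history on demand.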
5) Make third-party architecture explicit (TPRM tie-in)
For systems that depend on third parties:
- Show the integration points in diagrams.
- Document what security controls you inherit vs. what you must implement (for example, SSO handled by a SaaS vs. local authorization checks still required).
- Keep references to third-party due diligence outcomes where they shape design assumptions.
This is where many assessments fail: the diagram omits the SaaS that processes sensitive data, but contracts and data maps show it exists.
6) Operationalize continuous maintenance (don’t treat it as a one-time deliverable)
Create triggers that force updates:
- New data type introduced.
- New external connection.
- AuthN/AuthZ change.
- Major infrastructure change (cloud account restructure, network segmentation change).
- Material change to logging, key management, or admin access.
You don’t need to update for every small refactor. You do need updates when security-relevant architecture changes.
Required evidence and artifacts to retain
Keep evidence in a single “SA-5(3) Design Package” per system:
- High-level architecture diagrams (component + trust boundaries).
- Data flow diagram(s) for sensitive data paths.
- High-level design document with system boundary definition.
- ADRs or design decision log entries tied to major decisions.
- Review/approval evidence (tickets, pull request approvals, meeting minutes with decisions captured).
- Change history: prior versions retained or accessible via version control.
- Crosswalk showing where the design addresses key security requirements (a simple table is enough).
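The crosswalk can be a three-column table; this fragment is illustrative (the requirements, section names, and evidence IDs are hypothetical):

```markdown
| Security requirement       | Where addressed in design              | Evidence         |
| -------------------------- | -------------------------------------- | ---------------- |
| MFA for privileged access  | Security mechanisms (admin access)     | ADR-012          |
| Encryption of data at rest | Data flows (storage tier)              | ADR-007          |
| Centralized audit logging  | Security mechanisms (logging/monitor)  | ADR-015, CHG-204 |
```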
Daydream can help by mapping SA-5(3) to a named owner, a repeatable procedure, and a recurring evidence checklist so design packages don’t become one-off documentation drills 2.
Common exam/audit questions and hangups
Assessors and internal audit commonly ask:
- “Show the system boundary and all external connections.”
- “Where is sensitive data stored, and how is it protected in transit and at rest?”
- “Where are trust boundaries, and what controls exist at each boundary?”
- “How does authentication and authorization work end-to-end?”
- “Prove this design was current at the time of the last major release.”
- “Show how design decisions were reviewed and approved.”
Hangups that stall exams:
- Diagrams with no date/version or unclear environment (dev vs prod).
- Missing third-party services that appear elsewhere in your inventory.
- High-level design that contradicts actual implementation (for example, diagram says “central logging,” but logs are per-service and not aggregated).
Frequent implementation mistakes (and how to avoid them)
- Treating a slide deck as the control.
  Fix: Require a structured design package with versioning, not the "latest architecture presentation."
- No traceability to requirements.
  Fix: Add a requirements-to-design crosswalk table. It can be short, but it must exist.
- Out-of-date diagrams after re-architecture.
  Fix: Add release/change triggers and make the design update part of the definition of done for architecture-impacting work.
- Ignoring third parties and SaaS dependencies.
  Fix: Require an "External Services and Data Exchanges" section that must be completed.
- No evidence of review.
  Fix: Require approvals in the same system you use for engineering work (PR approvals, tickets) and retain them as evidence.
Enforcement context and risk implications
No public enforcement cases were provided in the available source catalog for this requirement. Practically, SA-5(3) gaps create three recurring risk outcomes:
- Assessment failure risk: you cannot demonstrate design intent, control coverage, or change governance.
- Security risk: teams implement controls inconsistently across services because architecture decisions are implicit.
- Operational resilience risk: incident response and recovery slow down when system boundaries and dependencies are undocumented.
Practical 30/60/90-day execution plan
First 30 days (foundation)
- Name SA-5(3) control owner and approvers (Architecture + Security + GRC).
- Define in-scope systems list (start with highest-risk and externally connected systems).
- Publish the high-level design template and diagram standards.
- Choose the system of record for versioning (repo docs, controlled wiki, or evidence vault).
- Pilot the template on one system and tune it based on reviewer feedback.
Days 31–60 (rollout)
- Require design packages for new systems and architecture-impacting changes.
- Backfill design packages for the most critical existing systems.
- Build the evidence checklist and teach teams how to store artifacts consistently.
- Add a lightweight requirements-to-design crosswalk for each system.
Days 61–90 (operational rhythm)
- Add quality checks: periodic spot checks for diagram accuracy and third-party completeness.
- Connect design packages to your change management process and system inventory.
- Create an “assessment-ready” export per system (single folder or packet with the latest approved artifacts).
- Use Daydream (or your GRC system) to assign tasks, track owners, and collect recurring evidence so you have continuous readiness without chasing teams at audit time.
Frequently Asked Questions
Does SA-5(3) require detailed low-level design or code-level documentation?
SA-5(3) is focused on high-level design, so keep it architecture-centric: components, boundaries, flows, and security mechanisms. Low-level details can live elsewhere, but the high-level design must be sufficient for security review and assessment 1.
We’re agile and deploy frequently. How do we keep designs current without slowing delivery?
Tie updates to architecture-impacting change triggers, not every deployment. Store diagrams and the design doc in version control and update them through the same pull request process as code.
What counts as “evidence” that the design was reviewed?
Use durable records: pull request approvals on the design doc, security architecture review tickets with disposition, or signed review notes stored with the design package. The key is that a third party can validate who approved what and when.
Our system is mostly a SaaS third party. Do we still need a high-level design?
Yes. Your design should show the system boundary, integration points, data exchanged, and which controls are inherited versus implemented by you. Assessors still need to see how your environment connects to the third party and how you govern that risk.
Can we have one design document for a platform with many microservices?
You can, if it clearly describes shared services, common security mechanisms, and per-service variations that matter for security. Many teams keep a platform-level design plus short service addenda for services that handle sensitive data or have unique trust boundaries.
What’s the minimum acceptable diagram set?
Keep at least a component diagram and a trust boundary or data flow diagram that highlights sensitive paths. If you only have one, reviewers often cannot validate boundary controls or data protection assumptions.
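If diagrams live in version control alongside the design doc, even a small diagram-as-code sketch can cover both views. This Mermaid fragment is a hypothetical illustration (component names and boundaries are invented), not a required notation:

```mermaid
flowchart LR
  subgraph internet[Untrusted: Internet]
    user[User]
  end
  subgraph prod[Trust boundary: Production VPC]
    gw[API Gateway] --> app[App Service]
    app --> db[(Primary DB: sensitive data at rest)]
  end
  user -->|TLS| gw
  app -->|TLS| idp[Third party: Corporate IdP]
```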
Footnotes
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream