SA-12(8): Use of All-source Intelligence
SA-12(8) requires you to incorporate all-source intelligence into supply chain risk management so your third-party decisions reflect current, credible threat and risk information, not static questionnaires. Operationalize it by defining approved intelligence sources, embedding them into onboarding and ongoing monitoring workflows, and retaining evidence that intelligence changed (or validated) risk decisions.
Key takeaways:
- Define “all-source intelligence” for your program, including internal, government, commercial, and open-source inputs, with governance and quality checks.
- Embed intelligence checks into third-party lifecycle gates (intake, due diligence, contracting, renewal, incident response) and document decision impact.
- Retain repeatable evidence: source lists, review logs, risk memos, and ticketed outcomes tied to third parties and systems.
The SA-12(8) “Use of All-Source Intelligence” requirement is an enhancement to SA-12 (Supply Chain Protection) in the NIST SP 800-53 System and Services Acquisition (SA) family, and it is easiest to fail in practice for one reason: teams “do intelligence” informally, but cannot prove it is systematic, repeatable, and decision-relevant.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat SA-12(8) as a workflow requirement, not a tooling requirement. You need a defined set of intelligence sources, clear criteria for what “actionable” means, and points in the third-party lifecycle where intelligence is reviewed and recorded. Then you need evidence that your intelligence review actually affected outcomes (risk ratings, required controls, contract clauses, onboarding blocks, escalations), even when the decision is “no change.”
This page gives requirement-level implementation guidance you can hand to control owners. It prioritizes audit-ready execution: roles, steps, artifacts, and examiner questions, with minimal theory and no invented enforcement claims.
Regulatory text
Excerpt: “NIST SP 800-53 control SA-12.8.”
What the operator must do: Implement SA-12(8) by establishing a defined practice for using “all-source intelligence” as an input to supply chain risk management decisions. In operational terms: you must (1) identify and govern your intelligence sources, (2) make intelligence review a required step in third-party and supplier risk workflows, and (3) keep records showing intelligence was considered and influenced decisions.
Practical translation: “We check current threat and risk intelligence about a third party or product before we approve it, and we keep proof.”
Plain-English interpretation (what SA-12(8) is really asking)
SA-12(8) expects your supply chain risk decisions to reflect what is happening in the real world. That includes known exploitation activity, emerging threat actor behavior, geopolitical or legal developments that affect supplier reliability, and credible reporting of supplier compromise.
“All-source intelligence” means you are not relying on one channel (for example, only a security questionnaire, only a rating score, or only internal anecdotes). Instead, you pull from multiple categories of sources and reconcile them into a decision record that is traceable to a third party and a system.
Who it applies to (entity and operational context)
Applies to:
- Federal information systems implementing NIST SP 800-53 as their security control baseline.
- Contractor systems handling federal data, where NIST SP 800-53 controls are flowed down through contracts, authorizations, or program requirements.
Operational contexts where SA-12(8) shows up:
- Third-party onboarding for SaaS, managed services, hosting, security tools, and data processors.
- Supplier sourcing for hardware, firmware, embedded software, and critical components.
- M&A technology intake where inherited suppliers and products require rapid re-risking.
- Ongoing monitoring for critical third parties, including changes in ownership, location, control environment, or compromise signals.
What you actually need to do (step-by-step)
Use this as a control-owner runbook. Keep it tight and repeatable.
Step 1: Assign ownership and scope the decisions SA-12(8) must inform
- Name a control owner (often Third-Party Risk Management, Cyber Threat Intelligence, or Security Operations) and a GRC coordinator for evidence collection.
- Define which decisions require intelligence review:
- approve / reject onboarding
- assign inherent and residual risk ratings
- decide required compensating controls
- approve exceptions
- trigger offboarding or suspension
Audit hook: Examiners look for consistency across third parties, not heroics for one high-profile supplier.
Step 2: Define “all-source intelligence” categories and approved sources
Create a governed list with at least these categories (tailor to your environment):
- Internal intelligence: incident tickets, SOC detections tied to a supplier’s product, abuse reports, past third-party performance issues.
- Government / public-sector intelligence: advisories and alerts relevant to your stack and suppliers.
- Commercial intelligence: threat intel feeds you subscribe to, security ratings inputs (if used), breach intelligence services.
- Open-source intelligence (OSINT): reputable reporting, vendor disclosures, vulnerability databases, confirmed breach notifications.
Record, for each source: owner, access method, refresh cadence, and quality rules (what makes it credible enough to drive action). This is where programs break: “we Google it” is not governable.
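A governed source register can start as a simple structured record rather than a spreadsheet nobody owns. The sketch below is illustrative only: the field names, categories, and the `is_governable` rule are assumptions to tailor to your program, not anything SA-12(8) prescribes.

```python
from dataclasses import dataclass, field

# Illustrative schema for one approved-source register entry.
# Field names and categories are assumptions, not mandated by SA-12(8).
@dataclass
class IntelSource:
    name: str
    category: str              # "internal" | "government" | "commercial" | "osint"
    owner: str                 # accountable role, not an individual
    access_method: str         # e.g. portal, API, mailing-list subscription
    refresh_cadence_days: int
    quality_rules: list = field(default_factory=list)

    def is_governable(self) -> bool:
        """A source is governable only with an owner, a refresh
        cadence, and at least one credibility/quality rule."""
        return bool(self.owner and self.refresh_cadence_days > 0 and self.quality_rules)

register = [
    IntelSource(
        name="CISA advisories",
        category="government",
        owner="Cyber Threat Intelligence",
        access_method="email subscription + RSS",
        refresh_cadence_days=1,
        quality_rules=["official government publication"],
    ),
]

# Governance check: every approved source must pass before analysts may cite it.
assert all(s.is_governable() for s in register)
```

An unowned “we Google it” entry fails `is_governable`, which is exactly the gap auditors probe.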
Step 3: Embed intelligence checks into third-party lifecycle gates
Add required tasks to your TPRM workflow (ticketing system, GRC platform, or procurement intake form). Minimum gates:
- Pre-screen / intake: quick intelligence triage for criticality and red flags.
- Due diligence: intelligence review alongside questionnaires and technical testing.
- Contracting: intelligence-driven clauses (notification timelines, audit rights, subcontractor controls, incident cooperation).
- Go-live approval: confirm no unresolved intelligence flags.
- Ongoing monitoring: periodic intelligence refresh for critical third parties and for key products.
- Event-driven reviews: re-run intelligence when triggers occur (major vulnerability, credible breach report, geopolitical disruption, ownership change).
Decision record requirement: Each gate should end with a documented outcome: “no material intelligence findings” or “findings present; action taken.”
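The gate-outcome rule above can be enforced as a required field in the workflow itself, so a gate literally cannot close without a documented decision. This is a minimal sketch; the function, outcome labels, and record shape are hypothetical stand-ins for whatever your GRC or ticketing platform provides.

```python
# Each lifecycle gate must close with a recorded intelligence outcome.
# Outcome labels and record structure are illustrative assumptions.
ALLOWED_OUTCOMES = {"no_material_findings", "findings_action_taken"}

def close_gate(gate: str, third_party: str, outcome: str, memo: str) -> dict:
    """Refuse to close a gate without a valid outcome and a decision memo."""
    if outcome not in ALLOWED_OUTCOMES:
        raise ValueError(f"gate '{gate}' needs an outcome in {ALLOWED_OUTCOMES}")
    if not memo.strip():
        raise ValueError("a decision memo is required even when nothing changed")
    return {"gate": gate, "third_party": third_party,
            "outcome": outcome, "memo": memo}

record = close_gate(
    "due_diligence", "ExampleVendor",
    "no_material_findings",
    "Checked government advisories and breach intel; no credible findings.",
)
```

Note that “no change” still produces a record, which is the evidence pattern examiners ask for.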
Step 4: Define triage criteria and actions (make it executable)
Create a small decision matrix that analysts can apply consistently:
| Intelligence finding | Minimum triage question | Typical actions |
|---|---|---|
| Reported breach/compromise (credible) | Does it affect our data, environment, or dependency chain? | escalate to Security + Legal, require incident details, pause onboarding, contract addendum |
| Active exploitation of product used by third party | Is the third party exposed and patching? | require patch attestation, increase monitoring, add compensating controls |
| Sanctions/export/legal restriction risk | Do restrictions apply to our use or payment? | route to Legal/Procurement, alternate supplier plan |
| Financial/operational distress indicators | Is service continuity at risk? | business continuity review, exit plan, redundancy |
Keep the matrix short. You want consistent outcomes, not a thesis.
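The matrix above translates directly into a lookup analysts can apply the same way every time. The finding keys and action strings below are illustrative, not an exhaustive taxonomy; the safe default for an unlisted finding is escalation, not silence.

```python
# Triage matrix from the table above, encoded as a lookup.
# Keys and action names are illustrative assumptions.
TRIAGE = {
    "credible_breach": ["escalate_security_legal", "require_incident_details",
                        "pause_onboarding", "contract_addendum"],
    "active_exploitation": ["require_patch_attestation", "increase_monitoring",
                            "add_compensating_controls"],
    "legal_restriction": ["route_legal_procurement", "alternate_supplier_plan"],
    "operational_distress": ["continuity_review", "exit_plan", "redundancy"],
}

def triage(finding: str) -> list:
    """Return the minimum action set; unknown findings escalate by default."""
    return TRIAGE.get(finding, ["escalate_for_manual_review"])

assert "pause_onboarding" in triage("credible_breach")
assert triage("novel_rumor") == ["escalate_for_manual_review"]
```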
Step 5: Train reviewers and make evidence collection automatic
- Train TPRM reviewers on what counts as actionable intelligence, when to escalate, and how to write a decision memo.
- Build templates so evidence is produced as part of work (not after the auditor asks).
Where Daydream fits naturally: If you struggle to map SA-12(8) to a named owner, a defined procedure, and recurring evidence artifacts, Daydream can act as the control-to-operations layer: assign ownership, standardize the workflow, and generate an evidence checklist aligned to SA-12(8) review cycles. This is especially useful when procurement, security, and GRC each own part of the process.
Required evidence and artifacts to retain
Keep artifacts that prove: sources exist, reviews happened, and reviews influenced decisions.
Governance artifacts
- SA-12(8) control narrative (purpose, scope, roles, workflow gates)
- Approved “all-source intelligence” source register (with owners and access)
- Intelligence credibility/quality criteria
- Escalation and triage matrix
Operational evidence
- Intake or due diligence ticket showing intelligence review completion
- Screenshot/export of intelligence findings (or a reference ID to the intel platform record)
- Decision memo linking findings to risk rating changes or required controls
- Procurement/security approvals or exception records
- Contract language changes driven by intelligence (redlines or clause library references)
Ongoing monitoring evidence
- Periodic review logs for critical third parties
- Event-driven re-review tickets (trigger → review → outcome)
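The trigger → review → outcome pattern above can be wired as simple rules so critical suppliers are never refreshed only on a calendar. A minimal sketch, assuming illustrative trigger names and a 365-day fallback cadence you would tune per criticality tier:

```python
from datetime import date, timedelta

# Trigger-based re-review logic. Trigger names and the 365-day
# fallback cadence are assumptions to tailor per criticality tier.
EVENT_TRIGGERS = {"major_vulnerability", "credible_breach_report",
                  "geopolitical_disruption", "ownership_change"}

def needs_rereview(last_review: date, events: set,
                   cadence_days: int = 365, today: date = None) -> bool:
    """Re-review if any defined trigger fired, or the periodic cadence lapsed."""
    today = today or date.today()
    if events & EVENT_TRIGGERS:   # any trigger forces an immediate re-review
        return True
    return today - last_review > timedelta(days=cadence_days)

# An ownership change forces review even one day after the last one.
assert needs_rereview(date(2024, 6, 1), {"ownership_change"}, today=date(2024, 6, 2))
```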
Common exam/audit questions and hangups
Expect these lines of questioning:
- “Define all-source intelligence in your program.” Auditors want your categories and approved sources list. Vague definitions fail.
- “Show me where intelligence is required in the third-party lifecycle.” Point to workflow gates and required tasks, not a policy PDF.
- “Demonstrate that intelligence changes decisions.” Provide examples: onboarding paused, additional controls required, contract amended, monitoring increased.
- “How do you ensure intelligence quality and avoid rumor-driven decisions?” Show criteria, escalation rules, and documentation standards.
- “Who owns SA-12(8) day-to-day?” Name a role and show tickets/reviews completed by that role.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating SA-12(8) as ‘subscribe to a feed.’
  Fix: Tie intelligence to explicit lifecycle gates and decision records.
- Mistake: No documented source governance.
  Fix: Maintain an approved source register with owners and review cadence.
- Mistake: Intelligence review happens, but outcomes aren’t recorded.
  Fix: Require a short decision memo field in the workflow: finding summary, impact, action.
- Mistake: Analysts copy-paste news articles without credibility checks.
  Fix: Implement quality criteria (source reputation, corroboration, date relevance) and escalation thresholds.
- Mistake: Intelligence is siloed in the SOC and never reaches procurement/TPRM.
  Fix: Add routing rules: critical third parties trigger a shared queue or weekly triage meeting.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for SA-12(8), so you should treat this as an assessment-readiness and risk-exposure control rather than an enforcement-cited item in this write-up.
Operational risk is still real:
- You can approve a third party with known compromise indicators and fail to apply compensating controls.
- You can miss event-driven triggers (a supplier breach, active exploitation) and learn about it only after customer impact.
- You can fail an authorization or customer assessment because you cannot prove intelligence was used systematically.
Practical 30/60/90-day execution plan
Use this as a GRC delivery plan. Adjust sequencing to match procurement and security bandwidth.
First 30 days (foundation and scoping)
- Assign SA-12(8) control owner and backup; document RACI across TPRM, SOC/CTI, Procurement, Legal, and Business Owners.
- Draft the “all-source intelligence” source register and credibility criteria.
- Identify the lifecycle gates where intelligence review must occur and add them to your TPRM workflow as required steps.
- Create a one-page decision memo template for intelligence findings.
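The one-page decision memo from the 30-day list above can be a fixed template so every review produces the same fields. The section labels below are illustrative, not a required format:

```python
# One-page decision memo template; section labels are illustrative assumptions.
MEMO_TEMPLATE = """\
SA-12(8) Intelligence Decision Memo
Third party: {third_party}
Date: {date}
Sources checked: {sources}
Findings summary: {findings}
Impact assessment: {impact}
Decision: {decision}
Approver: {approver}
"""

memo = MEMO_TEMPLATE.format(
    third_party="ExampleVendor",
    date="2025-01-15",
    sources="Government advisories; commercial breach intel",
    findings="No credible compromise reporting in the last 12 months",
    impact="No change to inherent risk rating",
    decision="no material intelligence findings",
    approver="TPRM lead",
)
assert "Decision: no material intelligence findings" in memo
```

Even a filled-out “no change” memo is evidence, which keeps the pack repeatable across third parties.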
Days 31–60 (embed into operations)
- Pilot intelligence reviews on a subset of critical third parties and one high-change category (for example, SaaS or MSPs).
- Establish escalation paths and a weekly triage meeting for new intelligence affecting critical third parties.
- Update contract clause library so intelligence findings can drive standard amendments (notification, cooperation, subcontractor flow-down).
Days 61–90 (prove repeatability and audit readiness)
- Expand the workflow requirement to all in-scope third parties based on criticality tiers.
- Run a tabletop: “supplier breach rumor vs confirmed compromise” and test credibility criteria and escalation decisions.
- Produce an evidence packet: source register, sample tickets, decision memos, and examples of decisions changed by intelligence.
- If evidence is inconsistent, standardize through tooling or workflow enforcement; Daydream can help by mapping SA-12(8) to owners, procedures, and recurring evidence artifacts so your pack is complete and repeatable.
Frequently Asked Questions
What counts as “all-source intelligence” for SA-12(8)?
Define it as multiple categories of intelligence inputs (internal, government/public, commercial, OSINT) that are governed and used in decisions. The key is that sources are approved, reviews are repeatable, and outcomes are documented.
Do we need a dedicated threat intelligence team to comply?
No. You need an assigned owner and a repeatable workflow. Many programs satisfy SA-12(8) by routing intelligence checks to the SOC or security engineering with clear intake questions and a documented decision memo.
How do we prove “intelligence was used” during an audit?
Retain tickets or workflow records that show: intelligence sources checked, findings summarized, and an explicit decision (no change, increased controls, paused onboarding, contract change). Evidence must be tied to a third party and a date.
Can we rely on security ratings alone?
Ratings can be one input, but “all-source” implies you should not base decisions on a single source type. Pair ratings with internal signals, advisories, and credible external reporting, then record how you reconciled conflicts.
How often must we refresh intelligence for critical third parties?
NIST doesn’t provide a single fixed cadence in the provided excerpt, so set a cadence based on criticality and change rate, then enforce it. Auditors care more about consistency and trigger-based re-reviews than a specific interval.
What’s the minimum viable documentation for a small program?
A source register, a required intelligence check at onboarding and renewal, and a short decision record per in-scope third party. If you can show repeatability across several third parties, you have a defensible baseline.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream