ID.RA-03: Internal and external threats to the organization are identified and recorded

To meet ID.RA-03 (“Internal and external threats to the organization are identified and recorded”), you need a repeatable process that gathers threat inputs (internal telemetry and external intelligence), normalizes them into a single “threat register,” assigns owners, and keeps entries current enough to drive risk decisions and control priorities. This is an evidence-heavy requirement: documented identification plus durable records.

Key takeaways:

  • Build and maintain a threat register that captures internal and external threats, with ownership and traceability.
  • Define sources, cadence, and triage rules so threat identification is consistent, not ad hoc.
  • Retain evidence that threats were identified, recorded, reviewed, and acted on through risk assessment and control updates.

ID.RA-03 is one of those controls that sounds simple until an auditor asks, “Show me where you record threats, how you decide what counts as a threat, and how you keep it current.” The expectation is not a one-time brainstorm. You need an operational loop that continuously identifies both internal threats (for example, insider misuse patterns, misconfigurations, vulnerable assets, weak admin practices, and control gaps revealed by incidents) and external threats (for example, threat actor tactics relevant to your tech stack, sector targeting, third-party compromise patterns, and newly disclosed vulnerabilities with active exploitation). Then you must record them in a way that supports risk decisions.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat ID.RA-03 as a documentation and governance requirement with security inputs: define sources, implement a consistent recording format, assign owners, and connect recorded threats to risk assessments, exception handling, and remediation work. This page gives you a practical, requirement-level implementation you can stand up quickly and defend under exam conditions, aligned to NIST CSF 2.0 [1] and the CSF 1.1 to 2.0 transition materials [2].

Regulatory text

Requirement: “Internal and external threats to the organization are identified and recorded.” [3]

Operator meaning: You must (1) identify threats relevant to your organization from internal and external sources, and (2) record them in an auditable system of record. “Recorded” means more than a slide deck; it means a durable log/register with enough detail to show what the threat is, why it matters to you, who owns follow-up, and how you keep it current.

Plain-English interpretation (what “good” looks like)

A defensible ID.RA-03 implementation has these properties:

  • Defined sources: You can name the internal telemetry and external intelligence feeds you monitor.
  • Consistent triage: You have written criteria for what becomes a “recorded threat” versus noise.
  • Central record: A threat register (or equivalent) exists and is discoverable, permissioned, and retained.
  • Ownership and workflow: Each material threat has an owner and a documented disposition (accepted, mitigated, transferred, monitored).
  • Linkage: Threats connect to risk assessment outputs, vulnerability management, third-party risk, incident learnings, and control changes.

Who it applies to

Entity scope: Any organization running a cybersecurity program aligned to NIST CSF 2.0 [1].

Operational scope (where you must implement it):

  • Enterprise IT and corporate systems: identity, endpoints, SaaS, email, networks.
  • Product/engineering environments: CI/CD, cloud control planes, container platforms, code repositories.
  • Third-party ecosystem: critical third parties, fourth parties where visible, and key integrations.
  • Business processes with cyber impact: payment flows, customer data handling, support tooling, HR systems.

If you have multiple business units, treat ID.RA-03 as an enterprise requirement with local inputs. Centralize the register, decentralize collection.

What you actually need to do (step-by-step)

1) Name the system of record

Pick one primary record location and enforce it. Options:

  • GRC platform module (risk/threat register)
  • Ticketing system with a structured “Threat” issue type
  • A controlled spreadsheet in a governed repository (acceptable early, weak at scale)

Minimum fields to include in the threat record

Use a template so entries are comparable:

Field | What to capture | Why auditors care
Threat title | Short, specific name | Discoverability and reporting
Threat type | Internal / External (and subtype) | Shows you cover both halves of ID.RA-03
Description | What could happen; plausible scenario | Demonstrates you understand the threat
Affected assets/processes | Systems, data, third parties | Shows organizational relevance
Source | Where you learned it (internal alert, advisory, incident, etc.) | Traceability
Date identified / last reviewed | Timestamps | Proves recency and maintenance
Owner | Named person/team | Accountability
Initial disposition | Monitor / assess / mitigate / accept | Shows action, not just logging
Linkages | Risk ID, vuln ID, incident ID, third-party record | Proves integration into operations
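The template above can be sketched as a data structure. This is a minimal illustration in Python; the class name, field names, and completeness rule are assumptions for this sketch, not mandated by CSF:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ThreatRecord:
    """One row in the threat register (field names are illustrative)."""
    title: str                   # short, specific name
    threat_type: str             # "internal" or "external" (plus subtype)
    description: str             # what could happen; plausible scenario
    affected_assets: List[str]   # systems, data, third parties
    source: str                  # internal alert, advisory, incident, etc.
    date_identified: date
    last_reviewed: date
    owner: str                   # named person/team
    disposition: str             # monitor / assess / mitigate / accept
    linkages: List[str] = field(default_factory=list)  # risk/vuln/incident IDs

    def is_complete(self) -> bool:
        """Required fields populated; linkages may be empty at creation."""
        required = [self.title, self.threat_type, self.description,
                    self.source, self.owner, self.disposition]
        return all(required) and bool(self.affected_assets)

rec = ThreatRecord(
    title="Credential stuffing against customer SSO",
    threat_type="external",
    description="Reused credentials tested against the login portal",
    affected_assets=["customer-portal", "identity-provider"],
    source="SIEM anomalous-auth alerts",
    date_identified=date(2024, 5, 1),
    last_reviewed=date(2024, 5, 1),
    owner="Security Operations",
    disposition="assess",
)
```

Enforcing a structure like this (in a GRC platform, ticket schema, or spreadsheet validation) is what keeps entries comparable across business units.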

2) Define internal and external threat sources (and document them)

Create a one-page “Threat Source Catalog” that lists what you monitor and who owns each input.

Internal sources (examples)

  • Security monitoring and detection outputs (SIEM/EDR alerts, anomalous auth, DLP events)
  • Vulnerability scanning and configuration findings
  • Incident postmortems and near-miss reports
  • Access review findings (privilege creep, orphaned accounts)
  • Change management and architecture reviews
  • Internal audit and penetration test results

External sources (examples)

  • Vendor security advisories relevant to your stack
  • Threat intelligence reports you subscribe to (commercial or community)
  • Government/standards body cybersecurity publications relevant to your industry
  • Third-party breach notifications and security bulletins for key providers

Documenting sources is part of “identified,” because it proves the mechanism you use to find threats [1].

3) Set triage rules: what qualifies as a “recorded threat”

Write a short procedure that answers:

  • What signals create a new threat entry?
  • When do you merge into an existing entry?
  • When does something stay as a vulnerability finding instead of a threat?

Practical rule of thumb: record threats that are (a) plausible for your environment, and (b) could materially affect confidentiality, integrity, or availability if realized. Then document the rationale for exclusion when you decide not to record an item that an auditor might expect you to track (for example, “not applicable due to no exposure or compensating controls”).
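The rule of thumb above can be sketched as a decision function. The signal keys and outcome labels here are assumptions for illustration, not terms from the requirement:

```python
def triage(signal: dict, register: dict) -> str:
    """Decide whether a signal becomes a new threat entry, merges into an
    existing one, stays a vulnerability finding, or is excluded with rationale.
    Dictionary keys are illustrative, not a mandated schema."""
    # Not plausible for our environment -> document the exclusion rationale
    if not signal.get("plausible_here", False):
        return "exclude-with-rationale"
    # No material confidentiality/integrity/availability impact -> keep it
    # as a vulnerability finding, referenced as a source
    if not signal.get("material_cia_impact", False):
        return "vuln-finding-only"
    # Same scenario against the same assets already recorded -> merge
    key = (signal["scenario"], tuple(signal["assets"]))
    if key in register:
        return "merge-existing"
    return "new-threat-entry"

# Hypothetical register keyed by (scenario, assets)
register = {("supplier compromise", ("payments-gateway",)): "THR-001"}
s1 = {"plausible_here": True, "material_cia_impact": True,
      "scenario": "supplier compromise", "assets": ["payments-gateway"]}
s2 = {"plausible_here": True, "material_cia_impact": False,
      "scenario": "low-severity misconfig", "assets": ["test-env"]}
```

The point is not the code; it is that every branch in your written triage procedure should be this unambiguous.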

4) Establish workflow and ownership

Define roles so this doesn’t stall between Security and GRC:

  • Threat identification owner (Security): gathers inputs, drafts entries, attaches evidence.
  • Threat register custodian (GRC): enforces completeness, taxonomy, review cadence, reporting.
  • Risk owner (Business/IT): accepts/mitigates and funds remediation.
  • Approver (CISO/CCO delegate): signs off on acceptance for defined categories.

Create a simple RACI and attach it to the procedure. Also decide how “threats” translate into action:

  • Create/adjust a risk entry when the threat is material and persistent.
  • Create remediation work when a control gap exists.
  • Create monitoring tasks for evolving external threats.
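Those translation rules can be sketched as a small router. The flag names and action labels are assumptions chosen for this illustration:

```python
def route_threat(threat: dict) -> list:
    """Map a recorded threat to the follow-up work it should generate
    (flag and action names are illustrative)."""
    actions = []
    # Material and persistent -> create or adjust a risk entry
    if threat.get("material") and threat.get("persistent"):
        actions.append("create-or-adjust-risk-entry")
    # A control gap exists -> create remediation work
    if threat.get("control_gap"):
        actions.append("create-remediation-ticket")
    # Evolving external threat -> create a monitoring task
    if threat.get("threat_type") == "external" and threat.get("evolving"):
        actions.append("create-monitoring-task")
    # Nothing actionable yet -> keep it logged and watched
    return actions or ["log-and-monitor"]

t = {"material": True, "persistent": True, "control_gap": True,
     "threat_type": "external", "evolving": True}
```

A threat can legitimately generate all three outputs at once; the router just makes the decision explicit and repeatable.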

5) Connect ID.RA-03 to adjacent processes (this is where programs pass audits)

Auditors test whether the register drives decisions. Build explicit linkages:

  • Risk assessment: each material threat should map to at least one risk scenario or control evaluation.
  • Vulnerability management: external threats tied to exploited CVEs should map to scanning, prioritization, and patch/mitigation tickets.
  • Third-party risk: threats that involve supplier compromise should map to your critical third-party inventory, due diligence, and contractual controls.
  • Incident response: incidents should update the threat register (new threat, revised likelihood, new detection requirements).
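One way to test those linkages against a register export, assuming each row carries ID lists (the field names are hypothetical):

```python
def unlinked_material_threats(rows: list) -> list:
    """Return IDs of material threats with no risk, remediation, or
    third-party linkage; these are the entries an auditor flags first."""
    gaps = []
    for row in rows:
        if not row.get("material"):
            continue
        links = (row.get("risk_ids", [])
                 + row.get("ticket_ids", [])
                 + row.get("third_party_ids", []))
        if not links:
            gaps.append(row["id"])
    return gaps

rows = [
    {"id": "THR-001", "material": True, "risk_ids": ["RSK-14"]},
    {"id": "THR-002", "material": True},           # no linkage recorded
    {"id": "THR-003", "material": False},          # immaterial: not checked
]
```

Running a check like this before each triage meeting gives you a standing agenda item and a ready answer to “what changed after you identified threats?”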

6) Run a recurring review and show evidence of operation

Set a recurring forum (security/risk triage) with an agenda that includes:

  • New threats to record
  • Threats to close/merge
  • Threats requiring risk acceptance decisions
  • Threats requiring control changes

Record attendance, minutes, and decisions. The key risk flagged in the cited guidance is missing implementation evidence [3].

Required evidence and artifacts to retain

Keep artifacts that prove both design (your process exists) and operation (it runs):

  1. Threat Management Procedure (how you identify and record internal/external threats).
  2. Threat Source Catalog (internal systems, external feeds, owners).
  3. Threat Register export (current snapshot plus prior snapshots or change history).
  4. Sample threat records with completed fields and traceability (source evidence attached or referenced).
  5. Triage meeting minutes / decision log (including dispositions and assignments).
  6. Linkage evidence:
    • threat → risk entry
    • threat → remediation tickets
    • threat → third-party record (where applicable)

If you use Daydream, the cleanest approach is to map ID.RA-03 to a control owner, store the procedure, and schedule recurring evidence pulls (register export, meeting notes, and samples) so audit readiness is continuous rather than a scramble.

Common exam/audit questions and hangups

Expect these, and pre-answer them in your artifacts:

  • “Show me your list of internal and external threats.” Provide the register export and your source catalog.
  • “How do you know these threats are relevant to your environment?” Show asset/process mapping fields and rationale notes.
  • “Who reviews this and how often?” Show calendar invites, minutes, and last-reviewed timestamps in the register.
  • “What changed after you identified threats?” Show linked remediation tickets, risk entries, or control updates.
  • “How do you avoid duplicate or stale entries?” Show merge/closure criteria and a closed-items view.

Common hangup: teams treat vulnerability findings as “threats” without recording adversary/exposure context. Your register should support threat scenarios, not only technical findings.

Frequent implementation mistakes (and how to avoid them)

Mistake | Why it fails | Fix
Recording threats in slide decks | Hard to audit; no workflow | Use a register with required fields and change history
No differentiation between internal vs external | Misses half the requirement | Add a mandatory “Threat type” field and reporting
No owner or disposition | Becomes a graveyard | Require owner + disposition before closing triage
Threats not connected to risk decisions | Looks performative | Link each material threat to risk, remediation, or acceptance
External intel monitored but not recorded | Identification without records | Convert relevant items into threat entries with sources

Enforcement context and risk implications

NIST CSF is a framework, not a regulator; the cited materials attach no direct penalties to it [1]. Your practical risk is indirect but real: if you cannot show that you systematically identify and record threats, you weaken the defensibility of your risk assessment, security prioritization, and third-party oversight. In audits, that typically shows up as a governance and evidence gap, not a tooling gap.

A practical 30/60/90-day execution plan

First 30 days (stand up the minimum viable control)

  • Assign a control owner (GRC) and operational owner (Security).
  • Choose the system of record and publish the threat record template.
  • Draft and approve the Threat Management Procedure and Threat Source Catalog.
  • Backfill an initial set of threats from recent incidents, pen tests, and top external advisories relevant to your stack.

Next 60 days (make it operational and auditable)

  • Start recurring triage meetings with minutes and decision logs.
  • Implement linkage rules: how threat entries generate risk entries and remediation tickets.
  • Add quality checks (required fields, review timestamps, owner present).
  • Produce the first management report: “Top recorded threats and dispositions.”
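The quality checks in the list above can be sketched as a register lint. The field names and the 90-day staleness threshold are assumptions; set the threshold to match your documented review cadence:

```python
from datetime import date

def lint_register(rows: list, today: date, max_age_days: int = 90) -> list:
    """Flag entries missing required fields or past their review window
    (field names and threshold are illustrative)."""
    findings = []
    for row in rows:
        # Required-field check: owner, disposition, and threat type present
        for f in ("owner", "disposition", "threat_type"):
            if not row.get(f):
                findings.append((row["id"], f"missing {f}"))
        # Recency check: last review inside the cadence window
        reviewed = row.get("last_reviewed")
        if reviewed is None or (today - reviewed).days > max_age_days:
            findings.append((row["id"], "stale review"))
    return findings

today = date(2024, 6, 1)
rows = [
    {"id": "THR-001", "owner": "SecOps", "disposition": "mitigate",
     "threat_type": "external", "last_reviewed": date(2024, 5, 20)},
    {"id": "THR-002", "owner": "", "disposition": "monitor",
     "threat_type": "internal", "last_reviewed": date(2024, 1, 5)},
]
```

A clean lint run, exported with a timestamp, doubles as operation evidence for the register itself.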

By 90 days (make it durable)

  • Expand coverage to product engineering and critical third parties.
  • Add closure and merge criteria so the register stays usable.
  • Run an internal audit-style test: pick recorded threats and trace them end-to-end (source → register → action).
  • Set recurring evidence collection in Daydream (procedure, register export, meeting minutes, sample records) so you can answer exam requests quickly.

Frequently Asked Questions

Do we need a dedicated threat intelligence platform to meet ID.RA-03?

No. The requirement is that threats are identified and recorded [1]. A well-governed register plus documented sources and triage can meet the requirement without specialized tooling.

What’s the difference between a threat register and a risk register?

A threat register records threat conditions and scenarios; a risk register records evaluated risks with impact/likelihood and treatment decisions. In practice, you link them so material threats drive risk entries and remediation.

How do we prove we covered “internal” threats, not just external news?

Show internal sources (SIEM/EDR, incidents, audits, vuln scans) in the source catalog and include internal-origin entries in the register. Make “internal/external” a required field and report on it [1].

Can vulnerabilities be recorded as threats?

Sometimes. If a vulnerability represents a plausible adversary path in your environment, it can justify a threat entry with context (exposure, affected assets, exploitation signals). Otherwise, keep it in vulnerability management and reference it as a source.

How granular should threat entries be?

Granular enough to assign ownership and drive action, but not so granular that you create duplicates for every alert. Use merge rules (same scenario, same assets, same actor/path) and keep linkages to the underlying alerts or findings.
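The merge rule (same scenario, same assets, same actor/path) can be expressed as a normalized dedupe key; the tuple composition here is an assumption for illustration:

```python
def merge_key(entry: dict) -> tuple:
    """Two entries with the same scenario, asset set, and actor/path
    should merge into one threat record (keys are illustrative)."""
    return (
        entry["scenario"].strip().lower(),        # normalize casing/spacing
        tuple(sorted(entry["assets"])),           # order-insensitive assets
        entry["actor_path"].strip().lower(),
    )

a = {"scenario": "Supplier compromise", "assets": ["ERP", "billing"],
     "actor_path": "Third-party VPN"}
b = {"scenario": "supplier compromise", "assets": ["billing", "ERP"],
     "actor_path": "third-party vpn"}
```

Entries with equal keys merge; entries that differ on any component stay separate, with linkages back to the underlying alerts.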

What evidence is most persuasive to auditors?

A current register export with completed fields, timestamps, and owners; written procedures and a source catalog; and examples showing traceability from threat identification to decisions and remediation work [1].

Footnotes

  1. NIST CSWP 29

  2. NIST CSF 1.1 to 2.0 Core Transition Changes

  3. NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream