RA-5(3): Breadth and Depth of Coverage

RA-5(3) requires you to define vulnerability scanning coverage so it is explicit, reviewable, and repeatable: what assets are scanned (breadth) and how thoroughly you scan them (depth). Operationalize it by documenting your scanning scope tiers, authenticated vs. unauthenticated methods, frequency triggers, and exclusions, then proving execution with scan outputs and exception approvals. 1

Key takeaways:

  • Define “what gets scanned” and “how hard you scan it” in a way an assessor can test. 1
  • Tie coverage rules to asset inventory, network boundaries, and scan configurations, not just a policy statement.
  • Keep evidence that demonstrates coverage decisions, execution, and approved gaps over time.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

A lot of vulnerability programs fail audits for a simple reason: scanning happens, but coverage is not defined well enough to prove it is complete and risk-based. RA-5(3) fixes that by forcing you to define the breadth and depth of vulnerability scanning coverage so stakeholders can see exactly what is in scope, what level of scanning is required, and what exceptions exist. 1

For a Compliance Officer, CCO, or GRC lead, the work is mostly governance and evidence: you need a defensible definition of coverage that matches your environment (endpoints, servers, cloud, containers, network devices, applications, and third-party managed components) and that can be mapped to owners and recurring artifacts. The best implementations treat “coverage” as a measurable control objective backed by asset inventory, scanning configurations, and exception management, then demonstrate operations with reports that reconcile scanned assets to total assets.

This page gives requirement-level implementation guidance you can hand to security operations and infrastructure teams, then audit internally with a clear checklist.

Regulatory text

Requirement (excerpt): “Define the breadth and depth of vulnerability scanning coverage.” 1

What the operator must do:
Document and maintain a concrete definition of scanning coverage that answers, at minimum:

  • Breadth: which asset classes, environments, and network segments must be scanned (and which are excluded).
  • Depth: what scan methods are required per asset class (for example, authenticated scanning, configuration checks, web app scanning, container image scanning) and the minimum thoroughness expected.
  • Decision rules: when scans run (cadence and triggers), how credentials are handled, and how exceptions are approved and revisited.

Plain-English interpretation (what RA-5(3) is really asking)

The RA-5(3) breadth and depth of coverage requirement is about preventing “checkbox scanning.” An auditor wants to see that:

  1. You made deliberate choices about scope (not “we scan everything” with no proof).
  2. Your scan techniques match the assets (a network port scan is not the same as authenticated host scanning).
  3. You can demonstrate coverage and explain gaps with approvals and compensating controls.

If your program cannot produce an accurate list of in-scope assets and show how each category is scanned, you will struggle to prove RA-5(3), even if your scanners run daily.

Who it applies to

Entity types (common applicability):

  • Federal information systems
  • Contractor systems handling federal data 1

Operational context:

  • Enterprises using vulnerability scanners (infrastructure and cloud), web application scanners, container/image scanners, and SAST/DAST pipelines.
  • Organizations with mixed ownership boundaries: corporate IT, DevOps, subsidiaries, and third parties operating parts of your stack.

Control ownership (typical):

  • Primary owner: Vulnerability Management Lead / Security Operations
  • Accountable: CISO or Head of Security Risk
  • Key partners: Infrastructure, Cloud Platform, DevOps, AppSec, IT Asset Management, Third-Party Risk/Procurement (for managed services)

What you actually need to do (step-by-step)

Step 1: Define “breadth” using an asset coverage matrix

Create a table that becomes your single reference for coverage decisions. Example structure:

| Asset class | Examples | In-scope criteria | Out-of-scope criteria | Owner |
| --- | --- | --- | --- | --- |
| Servers (on-prem) | Windows/Linux | Managed by org, on corp network | Decommissioned, lab/dev sandbox (if approved) | Infra |
| Cloud compute | EC2/VMs | Tagged prod, handles federal data | Ephemeral test with no data (if approved) | Cloud |
| Network devices | Firewalls, routers | Any device in boundary | Vendor-owned edge (if contract says) | NetOps |
| Web apps/APIs | Customer portal | Internet-facing or processes sensitive data | Retired apps | App team |

This matrix is your “breadth” definition, and it should tie directly to your asset inventory sources (CMDB, cloud accounts, endpoint management). The requirement is to define coverage, so the matrix must be authoritative and version-controlled. 1
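Keeping the matrix as version-controlled data (alongside the human-readable standard) lets reconciliation scripts consume it directly. A minimal sketch in Python; the asset classes, criteria, and owner names below are illustrative examples, not prescribed values:

```python
# Illustrative coverage matrix kept as data so scripts can read it.
# Rows mirror the table structure: class, examples, scope criteria, owner.
COVERAGE_MATRIX = [
    {
        "asset_class": "servers_on_prem",
        "examples": ["Windows", "Linux"],
        "in_scope": "Managed by org, on corp network",
        "out_of_scope": "Decommissioned; approved lab/dev sandbox",
        "owner": "Infra",
    },
    {
        "asset_class": "cloud_compute",
        "examples": ["EC2", "VMs"],
        "in_scope": "Tagged prod, handles federal data",
        "out_of_scope": "Approved ephemeral test with no data",
        "owner": "Cloud",
    },
]

def owner_for(asset_class: str) -> str:
    """Return the accountable owner for a given asset class."""
    for row in COVERAGE_MATRIX:
        if row["asset_class"] == asset_class:
            return row["owner"]
    raise KeyError(f"Asset class not in coverage matrix: {asset_class}")
```

Storing this file in the same repository as the scanning procedure gives you the version history an assessor will ask for.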

Step 2: Define “depth” by mapping scan methods to asset classes

For each asset class, specify the minimum scanning depth. Keep it testable:

  • Unauthenticated network scanning (discovery, open ports, basic service detection)
  • Authenticated host scanning (OS patches, local configuration, installed packages)
  • Configuration/compliance scanning (benchmarks, insecure settings, hardening drift)
  • Web application scanning (DAST; include authenticated crawling where feasible)
  • Container image scanning (base image CVEs, package vulnerabilities)
  • Dependency scanning / SCA (application libraries)
  • Cloud posture checks (misconfigurations across accounts/projects)

Write “depth rules” as requirements, not aspirations. Example: “Production servers require authenticated scanning; unauthenticated scans alone do not satisfy depth.” This is the kind of statement an assessor can verify by looking at scanner settings and results. 1
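A depth rule written this way is directly checkable. A small sketch, with assumed asset-class and method names, that turns “minimum depth per asset class” into a testable condition:

```python
# Illustrative depth rules: minimum required scan methods per asset class.
# Class and method names are examples, not standard-mandated values.
DEPTH_RULES = {
    "servers_prod": {"authenticated_host"},      # unauthenticated alone fails depth
    "web_apps": {"dast"},
    "container_images": {"image_cve_scan"},
}

def depth_met(asset_class: str, observed_methods: set) -> bool:
    """True if the scan methods observed for an asset satisfy minimum depth."""
    required = DEPTH_RULES.get(asset_class, set())
    return required.issubset(observed_methods)
```

For example, `depth_met("servers_prod", {"unauthenticated_network"})` returns `False`, which is exactly the “discovery scans are not adequate depth” rule stated above.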

Step 3: Set execution triggers and coverage SLAs (internal)

The RA-5(3) requirement text does not prescribe a specific cadence, so avoid claiming a standards-mandated frequency. Define your own triggers that you can meet consistently:

  • Scheduled scans (by asset criticality tier)
  • Event-based scans (new asset onboarding, major change, exposure of new internet-facing endpoint, newly disclosed high-impact vulnerability relevant to your tech stack)

Document these triggers in the procedure that implements RA-5(3), and align them to change management and asset onboarding so scans happen by default.
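The scheduled-plus-event trigger logic above can be sketched as a single decision function. The tier names, intervals, and event labels are assumptions for illustration, not values RA-5(3) mandates:

```python
from datetime import date, timedelta

# Assumed internal cadences by criticality tier (not standard-mandated).
CADENCE_DAYS = {"tier1": 7, "tier2": 30, "tier3": 90}

# Event-based triggers from the procedure: onboarding, major change,
# new internet exposure, or a relevant high-impact vulnerability.
EVENT_TRIGGERS = {"new_asset", "major_change",
                  "new_internet_exposure", "relevant_high_impact_cve"}

def scan_due(tier: str, last_scan: date, today: date,
             events: frozenset = frozenset()) -> bool:
    """A scan is due when the tier cadence elapsed or a trigger event fired."""
    if events & EVENT_TRIGGERS:
        return True
    return today - last_scan >= timedelta(days=CADENCE_DAYS[tier])
```

Wiring a check like this into asset onboarding is what makes scans “happen by default” rather than by reminder.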

Step 4: Build an exceptions process that is tight and time-bound

Your breadth definition will always have edge cases. Auditors accept exclusions when you can show control:

  • Written exception request with business reason
  • Risk acceptance and compensating controls (for example, isolation, EDR controls, restricted access)
  • Approval by the right risk owner
  • Review/renewal workflow

Store exceptions in a central register and link each exception to the specific asset(s) and the coverage matrix row it affects.
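If the register is machine-readable, overdue or unapproved exceptions can be surfaced automatically. A minimal sketch; the field names (`approver`, `review_by`) are illustrative of what a central register might track:

```python
from datetime import date

def expired_exceptions(register: list, today: date) -> list:
    """Return exception records past their review date or missing an approver.

    Each record is a dict; an exception without an approver, or past its
    review-by date, should surface as a finding rather than sit silently.
    """
    return [e for e in register
            if e.get("approver") is None or e["review_by"] < today]
```

Running this on a schedule turns the “review/renewal workflow” bullet into an enforced check instead of a calendar reminder.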

Step 5: Prove coverage with reconciliation reporting

You need a repeatable way to answer: “Of all in-scope assets, which were scanned at the required depth?”

Minimum reporting set:

  • Asset inventory export (system of record)
  • Scanner asset list / targets
  • Scan results summary
  • Reconciliation report showing mismatches:
    • In inventory but not scanned
    • Scanned but not in inventory (shadow IT)
    • Scanned with insufficient depth (for example, missing credentials)

This reconciliation is the difference between “we run scans” and “we meet RA-5(3).” 1
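The three mismatch buckets are plain set operations once inventory and scanner data are normalized. A sketch, simplified to a single required-method set (a real implementation would look depth rules up per asset class):

```python
def reconcile(inventory: set, scanned: dict, required_methods: set) -> dict:
    """Compare inventory (system of record) against scanner coverage.

    `scanned` maps asset ID -> set of scan methods observed for that asset.
    Returns the three mismatch buckets an assessor will ask about.
    """
    scanned_assets = set(scanned)
    return {
        "in_inventory_not_scanned": inventory - scanned_assets,
        "scanned_not_in_inventory": scanned_assets - inventory,  # shadow IT
        "insufficient_depth": {a for a in inventory & scanned_assets
                               if not required_methods <= scanned[a]},
    }
```

Emptying (or formally excepting) all three buckets each cycle is what a passing reconciliation report looks like.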

Step 6: Map the requirement to an owner, procedure, and recurring evidence

Make it operational by assigning:

  • Control owner
  • Implementation procedure (how coverage is defined, updated, and reviewed)
  • Evidence cadence (what artifacts are retained each cycle)

If you use Daydream to manage your control library, treat RA-5(3) as a requirement with a named owner, linked procedures, and auto-collected evidence tasks (inventory export, scan summaries, exception register snapshot). That reduces the “missing evidence” failure mode that commonly drives findings.

Required evidence and artifacts to retain

Keep artifacts that show both definition and execution:

Definition artifacts

  • Vulnerability scanning coverage standard/procedure defining breadth and depth (version-controlled) 1
  • Asset coverage matrix (scope tiers, owners, inclusions/exclusions)
  • Tool configuration standards (credentialed scanning requirements, scan profiles, safe checks vs. deep checks guidance)

Execution artifacts

  • Scan schedules and job configurations (screenshots or exports from tooling)
  • Latest scan reports per asset class (or per tier)
  • Reconciliation report (inventory vs. scanned vs. required depth)
  • Exception register with approvals and compensating controls
  • Evidence of remediation workflow integration (tickets linked to findings)

Common exam/audit questions and hangups

Auditors and assessors typically push on these points:

  • “Show me your definition of coverage. Where is it documented?” 1
  • “How do you know you scanned all in-scope assets?” Expect reconciliation evidence, not a single dashboard screenshot.
  • “Which scans are authenticated? Prove credentials worked.” Be ready with scan logs that show credentialed checks succeeded.
  • “What about cloud, containers, and CI/CD?” If you define breadth to include them, prove depth with the right tools and outputs.
  • “Which systems are excluded and who approved it?” Missing approvals create immediate findings.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: “We scan everything” with no scope definition.
    Fix: Maintain an explicit coverage matrix tied to inventory sources and boundaries. 1

  2. Mistake: Counting discovery scans as adequate depth.
    Fix: Require authenticated scanning (or equivalent depth controls) for server/endpoint classes where credentials are feasible.

  3. Mistake: Tool output without inventory reconciliation.
    Fix: Produce a repeatable reconciliation report and track deltas as findings.

  4. Mistake: Exceptions handled in email or chat.
    Fix: Use a centralized exception register with approvals and review dates.

  5. Mistake: Third-party managed assets ignored.
    Fix: Define breadth rules for third-party operated components and contract for evidence (scan reports, attestations) where you cannot scan directly.

Enforcement context and risk implications

The source catalog cites no public enforcement cases for this requirement, so treat RA-5(3) primarily as a matter of assessment readiness and control effectiveness. The risk is practical: unclear breadth and depth leads to blind spots (unscanned assets, shallow scans, credential failures) and weak evidence during audits, authorizations, or customer security reviews. 1

Practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Assign a control owner and name backup owners for infra, cloud, AppSec.
  • Draft the asset coverage matrix (breadth) using current inventory sources.
  • Define depth requirements per asset class, including authenticated scanning expectations.
  • Stand up an exceptions register template and approval workflow.

By 60 days (Near-term)

  • Configure scanning profiles to match depth requirements (credentialed checks, web app auth, container pipelines where applicable).
  • Produce the first reconciliation report (inventory vs. scanned vs. depth met).
  • Triage mismatches: missing agents, unscannable segments, credential failures, unknown assets.
  • Start recurring evidence capture in your GRC system (Daydream or equivalent): coverage matrix version, scan summaries, exception snapshots.

By 90 days (Operationalized)

  • Make coverage updates part of onboarding and change management (new assets inherit scan requirements).
  • Establish recurring review of breadth and depth definitions with stakeholders (security, IT, cloud, AppSec).
  • Trend coverage gaps and exceptions, then drive them down through engineering work and contract changes with third parties.
  • Run an internal mock audit: sample assets from each class and prove required depth with artifacts.

Frequently Asked Questions

What does “breadth” mean in RA-5(3)?

Breadth is the scope of assets and environments you commit to scanning, including what is excluded and why. You prove it by tying the scope to an authoritative inventory and documented boundaries. 1

What does “depth” mean for vulnerability scanning?

Depth is how thorough the scanning is for each asset class, such as authenticated host scanning versus basic network probing. Depth should be written as minimum scan methods and validated through scan configurations and logs. 1

Do we have to scan every system the same way?

No. RA-5(3) asks you to define coverage; you can set different depth requirements by asset tier as long as the rules are explicit and consistently followed. 1

How do we handle assets run by a third party where we cannot scan directly?

Define them in your breadth matrix and specify what evidence you require from the third party (for example, scan reports or attestations). Track gaps as exceptions with compensating controls and approvals.

What evidence is most persuasive to auditors for RA-5(3)?

A version-controlled coverage matrix plus a reconciliation report that proves in-scope assets were scanned at the required depth. Add exception approvals for anything not meeting the defined coverage. 1

How should Daydream fit into RA-5(3) operations?

Use Daydream to assign ownership, schedule recurring evidence collection (inventory export, scan summaries, exception register), and keep the coverage definition and artifacts in one audit-ready record.


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream