CA-7(2): Types of Assessments
CA-7(2) requires you to define and run the right types of assessments as part of your continuous monitoring program, then keep evidence that those assessments happen, produce results, and drive fixes. Operationally, you must document your assessment mix (e.g., automated scans, manual tests, independent reviews), schedule it by system risk, and track findings through closure. 1
Key takeaways:
- Specify which assessment types you use for each system and why, then bake them into your continuous monitoring plan. 1
- Treat “assessment” as a portfolio: combine automated, manual, and independent methods so coverage matches real risk. 1
- Retain evidence that assessments ran as planned and that findings were triaged, remediated, and re-tested.
CA-7 is the NIST SP 800-53 control for continuous monitoring; CA-7(2) drills into a common failure mode: teams monitor, but they do not define a defensible set of assessment types, or they run one assessment repeatedly and call it “continuous monitoring.” Examiners and authorizing officials usually want to see that your monitoring program uses multiple assessment methods appropriate to the system’s technology, threat exposure, and impact level, and that those methods produce actionable security decisions.
For a Compliance Officer, CCO, or GRC lead, the fastest way to operationalize CA-7(2) is to treat it like a requirements-to-operations mapping exercise. You define the assessment types your program will use, assign owners, set a cadence based on risk, and standardize evidence. Then you connect outputs to your vulnerability management, POA&M (or equivalent corrective action tracking), and authorization decisions.
This page gives you a requirement-level interpretation and a practical implementation playbook focused on speed, audit defensibility, and repeatability.
Regulatory text
Excerpt (as provided): “NIST SP 800-53 control CA-7.2.” 2
What the operator must do (plain language):
You must explicitly define the types of assessments you will perform under continuous monitoring, apply them to the system based on risk, and be able to prove the assessments occurred and informed remediation and risk acceptance decisions. CA-7(2) is less about buying a tool and more about showing disciplined coverage: the right assessment methods, applied to the right components, with results that drive action. 1
Plain-English interpretation (what CA-7(2) is really asking)
A continuous monitoring program that relies on a single method (for example, only vulnerability scanning) leaves predictable gaps: configuration drift, insecure changes, weak IAM administration, and control failures that require human testing or independent review. CA-7(2) expects you to define a portfolio of assessment types and use them intentionally.
Think of it as answering three questions in a way an assessor can validate:
- What assessment types do we run? (automated and human-driven)
- Where do we run them? (which systems, environments, and components)
- What do we do with the results? (ticketing, remediation, retesting, risk acceptance)
Who it applies to
Entities
- Federal information systems and programs implementing NIST SP 800-53. 1
- Contractors and other third parties operating systems that handle federal data or provide services to federal agencies, where NIST SP 800-53 controls are flowed down contractually or via an authorization boundary. 1
Operational contexts where CA-7(2) becomes “real”
- Systems seeking or maintaining an Authority to Operate (ATO) or equivalent authorization decision.
- Cloud platforms and SaaS where frequent releases create continuous change.
- Environments with inherited controls (shared responsibility) where you must still show assessment coverage for what you own and how you validate what you inherit.
- Highly outsourced stacks, where assessment types must include third-party attestations and, where contractually allowed, customer-performed tests.
What you actually need to do (step-by-step)
1) Define your assessment type taxonomy (the minimum set you will defend)
Create a short, explicit list of assessment types your program will use. Keep the names simple and map each to what it covers and what it misses.
A practical taxonomy many teams can execute:
- Automated technical assessments: vulnerability scanning, configuration compliance checks, SAST/DAST where applicable.
- Manual technical assessments: penetration testing, targeted testing of high-risk paths, manual configuration validation.
- Control effectiveness assessments: evidence-based control testing (samples, walkthroughs, inquiry + inspection).
- Change-triggered assessments: post-change validation for major releases, architecture changes, or new integrations.
- Independent assessments: assessments performed by a party independent from system operators (internal audit, separate security team, or qualified external assessors).
Your goal is not maximum categories; it is defensible coverage tied to risk. Document the scope boundaries for each assessment type (prod vs non-prod, internal vs external, frequency logic, and which components are included). 1
2) Map assessment types to system components and control areas
Build a matrix that answers “what assesses what.” This becomes your CA-7(2) backbone.
Example matrix fields (recommended):
- System / boundary
- Component (network, endpoints, IAM, app, database, CI/CD, logging pipeline)
- Assessment type(s)
- Tool or method
- Owner
- Trigger (time-based, event-based, change-based)
- Output artifact
- Tracking mechanism (ticketing, POA&M)
This matrix is what auditors look for when they ask, “How do you know your monitoring is complete?” You are showing intentional design, not ad hoc testing.
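The matrix can live anywhere (spreadsheet, GRC platform, wiki). As a sketch only, the structure below shows a hypothetical Python representation with the recommended fields; the systems, tools, and team names are invented for illustration, and the useful part is the completeness check that flags components nothing assesses:

```python
# Hypothetical "what assesses what" matrix. Field names mirror the list
# above; all values are illustrative, not prescribed by CA-7(2).
matrix = [
    {
        "system": "payments-prod",
        "component": "IAM",
        "assessment_types": ["automated-config-check", "independent-review"],
        "tool_or_method": "benchmark scanner + internal audit",
        "owner": "iam-team",
        "trigger": "time-based: monthly",
        "output_artifact": "config-report",
        "tracking": "POA&M",
    },
    {
        "system": "payments-prod",
        "component": "CI/CD",
        "assessment_types": [],  # gap: no assessment type assigned yet
        "tool_or_method": None,
        "owner": "platform-team",
        "trigger": None,
        "output_artifact": None,
        "tracking": None,
    },
]

def coverage_gaps(matrix):
    """Return (system, component) pairs with no assessment type assigned."""
    return [(r["system"], r["component"]) for r in matrix if not r["assessment_types"]]

print(coverage_gaps(matrix))  # → [('payments-prod', 'CI/CD')]
```

A check like this is exactly the "how do you know your monitoring is complete?" answer in executable form: any row it returns is a documented gap, not an unknown one.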
3) Assign clear ownership and separation where independence is required
For each assessment type, assign:
- Control owner (accountable for results and remediation)
- Assessment operator (runs the scan/test)
- Reviewer (validates results, checks false positives, confirms closure)
For “independent” assessment types, document how independence is achieved (organizational separation, reporting line, or external party). Don’t hand-wave this; write it down in your monitoring plan and SOP. 1
4) Build the execution procedure (SOP) per assessment type
For each assessment type, write a one-to-two page SOP with:
- Preconditions (access, credentials, test windows, approvals)
- Steps to run the assessment
- Triage criteria (severity model, exploitability signals, asset criticality)
- How findings become tracked work (tickets/POA&M)
- Retest/verification steps and closure criteria
- Exception handling (risk acceptance, false positives, compensating controls)
This is where teams often fail: they have tool output but no repeatable procedure that proves control operation.
5) Integrate assessment outputs into remediation and risk decisions
CA-7(2) lives or dies on follow-through. Make the workflow explicit:
- Findings flow to a centralized backlog (vuln management queue, GRC issues register, or POA&M).
- Each finding has an owner, due date logic, and disposition (fix, mitigate, accept, transfer).
- Closure requires evidence plus retest or validation.
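The "due date logic" in the workflow above can be made explicit rather than tribal. A minimal sketch, assuming a severity-by-criticality SLA table; the tiers and day counts are assumptions for illustration, not CA-7(2) requirements:

```python
import datetime

# Hypothetical SLA table: (severity, asset criticality) -> days to remediate.
# Tiers and day counts are illustrative; substitute your own severity model.
SLA_DAYS = {
    ("critical", "high"): 7,
    ("critical", "low"): 15,
    ("high", "high"): 30,
    ("high", "low"): 60,
    ("medium", "high"): 60,
    ("medium", "low"): 90,
    ("low", "high"): 120,
    ("low", "low"): 180,
}

# Dispositions from the bullet above: fix, mitigate, accept, transfer.
DISPOSITIONS = {"fix", "mitigate", "accept", "transfer"}

def due_date(opened, severity, asset_criticality):
    """Remediation due date from the SLA table; a KeyError flags an unmodeled pair."""
    return opened + datetime.timedelta(days=SLA_DAYS[(severity, asset_criticality)])

print(due_date(datetime.date(2024, 1, 10), "critical", "high"))  # → 2024-01-17
```

Writing the table down, wherever it lives, is what lets a reviewer verify that due dates were assigned by policy rather than negotiated finding by finding.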
If you use Daydream, this is the moment to connect assessment outputs to control records so each assessment type has a mapped owner, procedure, and recurring evidence artifacts. That mapping is what keeps CA-7(2) “always ready,” rather than a scramble before an ATO package is due.
6) Prove you are running the planned mix (metrics without invented stats)
Avoid made-up percentages. Use operational indicators you can back with logs:
- “Assessments executed vs planned” by assessment type
- “Findings opened/closed” trend by source
- “Time to triage” and “time to validate closure” (use your own system-of-record timestamps)
Auditors accept simple counts and dated exports if they are consistent and tied to your plan.
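These indicators reduce to simple arithmetic over system-of-record exports. A sketch under stated assumptions: the record fields and assessment-type names below are hypothetical, and the point is that each number traces back to a dated record rather than an estimate:

```python
from datetime import date

# Hypothetical exported finding records; field names are illustrative.
findings = [
    {"source": "vuln-scan", "opened": date(2024, 3, 1), "triaged": date(2024, 3, 3), "closed": date(2024, 3, 20)},
    {"source": "pen-test", "opened": date(2024, 3, 5), "triaged": date(2024, 3, 6), "closed": None},
]

def executed_vs_planned(executed_runs, planned_runs):
    """Per assessment type: (runs executed, runs planned) as a simple count pair."""
    return {t: (executed_runs.get(t, 0), planned_runs[t]) for t in planned_runs}

def avg_days_to_triage(findings):
    """Mean days from opened to triaged, over findings that were triaged."""
    deltas = [(f["triaged"] - f["opened"]).days for f in findings if f["triaged"]]
    return sum(deltas) / len(deltas) if deltas else None

planned = {"vuln-scan": 12, "pen-test": 1}
executed = {"vuln-scan": 11, "pen-test": 1}
print(executed_vs_planned(executed, planned))  # → {'vuln-scan': (11, 12), 'pen-test': (1, 1)}
print(avg_days_to_triage(findings))  # → 1.5
```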
Required evidence and artifacts to retain
Retain artifacts that show planning, execution, and outcomes:
Planning
- Continuous monitoring strategy/plan that lists assessment types (CA-7 parent context) 1
- Assessment type matrix (system/component-to-assessment mapping)
- SOPs/runbooks per assessment type
- RACI chart (owners, operators, reviewers)
Execution
- Tool configurations and job schedules (scanner policies, config benchmarks, CI checks)
- Scan/test output reports with timestamps and scope identifiers
- Pen test scope statement and final report (where used)
- Control test workpapers (samples selected, evidence inspected, results)
Outcomes
- Tickets/POA&M entries linked to originating assessment output
- Risk acceptances with approver and expiration/reauthorization conditions
- Retest evidence showing remediation validation
- Management review notes (what was escalated, what decisions were made)
Common exam/audit questions and hangups
- “Show me the list of assessment types you run and where they are documented.”
  Hangup: teams point to tools, not documented assessment types.
- “How did you decide which assessment types apply to this system?”
  Hangup: no risk-based rationale; only “we always do this.”
- “Where is independence demonstrated?”
  Hangup: the same admin runs tests and signs off closure.
- “Prove assessments ran continuously, not just right before the audit.”
  Hangup: missing historical logs, overwritten reports, or inconsistent retention.
- “Show that findings drive remediation.”
  Hangup: scan results exist, but tickets are missing, unowned, or never retested.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Equating ‘assessment’ with ‘vulnerability scan.’
  Fix: define at least one manual and one independent assessment type for high-risk areas, and document why.
- Mistake: No scope control (what exactly was assessed).
  Fix: require each assessment artifact to include target inventory identifiers (hosts, repos, accounts, environments).
- Mistake: Treating exceptions as informal.
  Fix: require documented risk acceptance with approver, rationale, and planned revisit tied to the monitoring program.
- Mistake: Evidence is scattered across tools with no mapping.
  Fix: maintain a single CA-7(2) evidence map that points to the system-of-record location for each assessment type. Daydream can act as the control-to-evidence index so you can answer audits quickly without rebuilding context.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement examples.
Risk-wise, CA-7(2) gaps usually surface as: undetected control failure, delayed detection of misconfiguration, and inability to support an authorization decision with credible ongoing assurance. The operational impact is rework during ATO/annual reviews, control findings that expand in scope, and avoidable incident exposure.
Practical 30/60/90-day execution plan
First 30 days (stabilize and define)
- Inventory current assessment activities (tools, tests, audits) and group them into assessment types.
- Draft the CA-7(2) assessment type taxonomy and the system/component mapping matrix.
- Assign owners and reviewers; document how independence is achieved for independent assessments.
- Standardize evidence retention locations and naming conventions.
Next 60 days (operationalize and prove)
- Write or tighten SOPs for each assessment type with triage and closure criteria.
- Implement consistent workflows from assessment output to tickets/POA&M and retest.
- Run at least one full cycle per assessment type and store artifacts in the agreed system-of-record.
- Build an “assessment executed vs planned” tracker that you can export on demand.
By 90 days (audit-ready)
- Perform an internal readiness review: pick a system and walk an auditor through the matrix, artifacts, and remediation trail end-to-end.
- Close gaps where evidence is weak (scope clarity, timestamps, approvals, independence).
- Configure Daydream (or your GRC tool) so CA-7(2) is mapped to control owners, procedures, and recurring evidence artifacts, reducing manual effort each cycle.
- Establish a steady-state review cadence for the assessment portfolio to keep it aligned with system changes. 1
Frequently Asked Questions
What counts as an “assessment type” under CA-7(2)?
Treat an assessment type as a distinct method with a defined procedure and output, such as automated scanning, manual testing, control testing, or independent review. If it has a different purpose, scope, or operator profile, document it as its own type. 1
Do we need independent assessments for every system?
CA-7(2) pushes you to define the mix appropriate to risk. For higher-impact or higher-exposure systems, an independent assessment type is easier to defend than self-attestation alone. 1
Can third-party reports (SOC reports, pen tests) satisfy CA-7(2)?
They can be part of your assessment portfolio if you document how they map to your system boundary and which controls/components they cover. You still need a plan for gaps, follow-up findings, and evidence retention.
How do we show “continuous” if assessments are periodic?
Use a documented schedule and triggers that reflect ongoing monitoring, then retain dated execution evidence across multiple cycles. Auditors usually accept periodic methods if the plan is risk-based and consistently executed. 1
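A due check over dated execution evidence is one concrete way to demonstrate this. A minimal sketch, assuming per-type cadences in days plus an event trigger; the assessment-type names and cadence values are illustrative, not recommendations:

```python
from datetime import date, timedelta

# Illustrative cadences; set yours by risk, not by these numbers.
CADENCE_DAYS = {"vuln-scan": 30, "config-check": 90, "independent-review": 365}

def is_due(assessment_type, last_run, today, change_event=False):
    """Due if the cadence window has elapsed, or an event/change trigger fired."""
    if change_event:
        return True
    return today - last_run >= timedelta(days=CADENCE_DAYS[assessment_type])

print(is_due("vuln-scan", date(2024, 4, 1), date(2024, 5, 2)))  # → True (31 days elapsed)
print(is_due("config-check", date(2024, 4, 1), date(2024, 5, 2)))  # → False
print(is_due("config-check", date(2024, 4, 1), date(2024, 5, 2), change_event=True))  # → True
```

The same schedule data, retained across cycles with actual run dates, doubles as the "assessments executed vs planned" evidence.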
What evidence is most likely to fail an audit?
Missing linkage between findings and remediation is a common failure: scan output exists, but there is no owned ticket, no disposition, or no retest proof. The second common gap is unclear scope, where reports don’t identify what was assessed.
How should a GRC team operationalize CA-7(2) without becoming a bottleneck?
Keep GRC accountable for the assessment portfolio definition, mapping, and evidence standards, while engineering/security operations run the assessments and remediate. A system like Daydream helps by keeping the owner/procedure/evidence mapping current and exportable for audits.
Footnotes
1. NIST SP 800-53 Rev. 5, Security and Privacy Controls for Information Systems and Organizations.
2. NIST SP 800-53, control CA-7(2).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream