CMMC Level 2 Practice 3.6.3: Test the organizational incident response capability

To meet CMMC Level 2 Practice 3.6.3, you must regularly test your incident response capability and keep evidence that the tests occurred, produced findings, and drove corrective actions. A passing implementation looks like planned exercises (tabletop and technical), documented results, tracked remediation, and retesting tied to your CUI environment. 1

Key takeaways:

  • Tests must validate people, process, and tooling, not just that an IR plan exists. 1
  • Evidence matters as much as execution: scope, participants, scenarios, results, and remediation tracking. 1
  • Align testing to CMMC assessment expectations and your CUI data flows under the CMMC Program. 2

Footnotes

  1. NIST SP 800-171 Rev. 2

  2. 32 CFR Part 170

CMMC Level 2 Practice 3.6.3 (test the organizational incident response capability) is an operations requirement, not a documentation exercise. Assessors want proof that your team can detect, triage, contain, eradicate, and recover from realistic incidents affecting systems that store, process, or transmit CUI, and that you improve based on what you learn. This practice maps to NIST SP 800-171 Rev. 2 control 3.6.3, so you should treat NIST 800-171 terminology and artifacts as your baseline for what “good” looks like. 1

Most organizations fail this practice in two ways: they run ad hoc tabletops with no measurable outcomes, or they run technical tests (like an EDR drill) that are not connected to incident response roles, decision-making, communications, or post-incident improvement. Your goal is to create a repeatable program where tests are planned, scoped to your CUI boundary, executed with the right stakeholders (including key third parties where applicable), and closed with corrective actions and retesting.

This page gives you requirement-level steps you can implement quickly, plus the evidence package you should retain for a CMMC Level 2 assessment under the CMMC Program. 2

Regulatory text

Regulatory excerpt (as provided): “CMMC Level 2 practice mapped to NIST SP 800-171 Rev. 2 requirement 3.6.3 (Test the organizational incident response capability).” 1

Operator translation: You must run planned incident response tests that demonstrate your organization can execute its incident response procedures in practice. The tests need to be substantive enough to validate readiness (roles, playbooks, tooling, escalation, and decision-making) for incidents relevant to your environment, especially where CUI is involved. Retain evidence that the tests happened and that lessons learned were turned into improvements. 1

Plain-English interpretation

“Test the organizational incident response capability” means you can’t treat incident response as a binder on a shelf. You need to prove, with records, that:

  • Your incident response roles know what to do.
  • Your monitoring and response tools produce usable signals.
  • Your team can coordinate under pressure and make containment decisions.
  • You can restore operations and preserve relevant evidence.
  • You fix gaps found in testing and confirm the fix worked. 1

Think of this practice as a readiness check: can you run the incident lifecycle on demand, with your actual systems, actual escalation paths, and realistic scenarios tied to your CUI environment? 1

Who it applies to (entity and operational context)

This applies to organizations pursuing CMMC Level 2 that handle CUI for the DoD, including prime contractors and subcontractors in the Defense Industrial Base. 3

Operationally, it applies wherever your incident response capability touches the CUI boundary, including:

  • Security operations (SIEM/EDR monitoring, alert triage)
  • IT operations (identity, endpoint, network, backup/restore)
  • Legal/compliance (notification decisioning, contract obligations)
  • HR (insider threat and employee actions)
  • Key third parties that provide managed security services, incident response retainers, hosting, or critical SaaS used in the CUI environment (you still own the control; third parties can support execution). 1

What you actually need to do (step-by-step)

Use this as a practical build order. Treat it like an assessor-ready checklist.

1) Define the test scope around your CUI environment

  • Identify the “in-scope” systems, networks, and enclaves that store/process/transmit CUI.
  • List critical dependencies: identity provider, email, EDR, VPN, backups, ticketing, logging.
  • Document test boundaries: what you will test directly vs what a third party operates. 1

Output: IR Test Scope Statement (one page), mapped to your CUI boundary.

2) Establish an incident response test plan with a repeatable cadence

Your plan should specify:

  • Test types you will run (tabletop exercises; technical simulations; communication drills).
  • Who participates (IR lead, IT ops, security, comms, executive decision-maker, third party contacts).
  • Scenario selection criteria (threats relevant to your environment; prior incidents; new systems; major changes).
  • Success criteria (observable actions and time-bound checkpoints you can actually measure internally, like “EDR alert triaged and ticket created,” “containment decision made,” “forensic image request initiated”). 1

Practical tip: Write success criteria as “someone did X in system Y and produced artifact Z.” That makes evidence collection automatic.
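One way to make the “someone did X in system Y and produced artifact Z” pattern concrete is to record each criterion as a structured object you can score after the exercise. A minimal Python sketch, with field names that are illustrative rather than anything prescribed by 3.6.3:

```python
from dataclasses import dataclass

@dataclass
class SuccessCriterion:
    """One observable, evidence-backed exercise checkpoint."""
    actor: str     # who acted, e.g. "SOC analyst on call"
    action: str    # what they did, e.g. "triaged EDR alert"
    system: str    # where it happened, e.g. "EDR console"
    artifact: str  # proof produced, e.g. "ticket ID"
    met: bool = False

def unmet(criteria):
    """Return criteria that still lack evidence after the exercise; each becomes an AAR finding."""
    return [c for c in criteria if not c.met]

criteria = [
    SuccessCriterion("SOC analyst", "triaged EDR alert", "EDR console", "ticket ID", met=True),
    SuccessCriterion("IR lead", "approved containment", "incident bridge", "bridge notes"),
]
```

Scoring the list at the end of the exercise turns evidence collection into a mechanical pass over `unmet(criteria)` instead of a memory exercise.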

3) Build scenarios that map to your real playbooks and tools

Pick scenarios that force the team to execute key steps, such as:

  • Compromised account accessing CUI repository
  • Malware on an endpoint used for CUI processing
  • Suspected data exfiltration from a CUI enclave
  • Third party security incident affecting a system that touches CUI (SaaS outage with suspected compromise, MSSP missed alert) 1

For each scenario, pre-stage:

  • Injects (what “new information” arrives and when)
  • Required decisions (contain, isolate, disable account, block egress, engage third party)
  • Evidence expectations (tickets, logs, screenshots, emails, bridge notes)
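The pre-staged injects above can be kept as a simple timed schedule, so the facilitator always knows which “new information” should have landed by a given point and which decision it is meant to force. A hypothetical sketch (scenario content and timings are examples only):

```python
# Each inject: (minutes after exercise start, new information, decision it should force)
injects = [
    (0,  "EDR flags credential dumping on a CUI endpoint", "open incident ticket"),
    (15, "Same account logs into the CUI file repository", "disable account"),
    (40, "Outbound transfer spike from the enclave", "block egress, engage third party"),
]

def due_injects(elapsed_minutes, injects):
    """Injects the facilitator should have delivered by the given elapsed time."""
    return [i for i in injects if i[0] <= elapsed_minutes]
```

Keeping the forced decision alongside each inject also gives you a ready-made checklist for the AAR timeline.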

4) Run the exercise with real roles and real communications paths

During execution:

  • Use the actual paging/on-call process.
  • Use the real ticketing system.
  • Use the actual incident channel/bridge process.
  • Confirm escalation to management happens per your procedure.
  • If a third party is part of response (MSSP, IR firm, cloud provider), test the handoff: contact method, response time expectation, and data they need from you. 1

Hangup to avoid: A tabletop with only security staff is not an “organizational” capability test. Bring the functions that must act during a real incident.

5) Capture results in a structured after-action report (AAR)

Your AAR should include:

  • Date, duration, scenario, and scope
  • Participants and roles (including third party attendees or points of contact)
  • Timeline of key events and decisions
  • What worked / what failed, with specifics (missing logs, unclear authority, broken runbook step)
  • Corrective actions with owners and due dates
  • Retest plan for material issues 1

6) Track corrective actions to closure (and prove it)

Create a remediation tracker that ties each gap to:

  • A ticket number
  • Owner
  • Target date
  • Status
  • Evidence of completion (updated runbook, new alert rule, added logging, training record)
  • Retest evidence where applicable 1
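A minimal sketch of such a tracker, with a check for the two failure modes assessors flag most often: open items past their target date, and closures with no proof. The row fields mirror the list above; ticket IDs and dates are illustrative:

```python
from datetime import date

# Hypothetical tracker rows: ticket, owner, target date, status, closure evidence
tracker = [
    {"ticket": "IR-101", "owner": "SecOps", "due": date(2024, 5, 1),
     "status": "closed", "evidence": "updated runbook v2.3"},
    {"ticket": "IR-102", "owner": "IT Ops", "due": date(2024, 4, 15),
     "status": "open", "evidence": None},
    {"ticket": "IR-103", "owner": "SecOps", "due": date(2024, 4, 20),
     "status": "closed", "evidence": None},
]

def audit_gaps(rows, today):
    """Tickets an assessor would flag: overdue-and-open, or closed without evidence."""
    return [r["ticket"] for r in rows
            if (r["status"] == "open" and r["due"] < today)
            or (r["status"] == "closed" and not r["evidence"])]
```

Running a check like `audit_gaps` before your own internal review is a cheap way to catch the “closed on paper, no proof” problem described below.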

This is where many teams get assessed as “partially met.” The test happened, but the program didn’t produce measurable improvement.

7) Operationalize recurring evidence capture (assessment readiness)

Build a lightweight evidence routine:

  • Store exercise artifacts in a controlled repository (with access control).
  • Keep a rolling index of tests with links to AARs and remediation.
  • Map the artifacts explicitly to practice 3.6.3 for quick retrieval during assessment. 1
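The rolling index can be as simple as one structured entry per test package, with a completeness check before you file it. A hypothetical sketch in Python (the repository paths and field names are assumptions, not a required format):

```python
# One index entry per exercise, mapped to practice 3.6.3 for fast retrieval
index_entry = {
    "practice": "3.6.3",
    "test_date": "2024-03-14",
    "scenario": "compromised account accessing CUI repository",
    "artifacts": {
        "plan": "ir-tests/2024-03/plan.pdf",          # illustrative repo paths
        "aar": "ir-tests/2024-03/aar.pdf",
        "remediation": "ir-tests/2024-03/tracker.csv",
    },
}

def missing_artifacts(entry, required=("plan", "aar", "remediation")):
    """Flag required evidence types absent from a test package before it is filed."""
    return [k for k in required if k not in entry["artifacts"]]
```

An empty result from `missing_artifacts` means the package is retrievable on demand during an assessment; anything else tells you exactly what to chase before the exercise fades from memory.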

If you use Daydream to manage control operations, set 3.6.3 up as a recurring control with evidence tasks (exercise plan, attendance, AAR, remediation closure). The goal is simple: no last-minute scrambling for screenshots, calendars, and ticket exports.

Required evidence and artifacts to retain

Assessors typically want “show me” proof. Keep these artifacts ready:

| Artifact | What it proves | Good enough looks like |
| --- | --- | --- |
| IR Test Plan | Testing is planned and repeatable | Documented scope, participants, scenarios, success criteria 1 |
| Exercise materials | Scenario realism and structure | Injects, communications plan, decision points |
| Attendance/participation record | “Organizational” capability tested | Named roles, business/IT/security present; third party contacts noted |
| Tickets/alerts/log excerpts | Tooling and workflow worked | SIEM/EDR alert screenshot, ticket timestamps, containment action record |
| After-Action Report | You evaluated performance | Findings + corrective actions 1 |
| Remediation tracker | Improvement occurred | Closed items with evidence; retest notes |
| Updated playbooks/runbooks | Lessons learned implemented | Version history, change approval |

Common exam/audit questions and hangups

Expect questions like:

  • “Describe your incident response testing program and show the last test package.” 1
  • “How do you ensure tests cover the CUI environment specifically?” 1
  • “Show evidence you closed findings from the last exercise.” 1
  • “How do third parties participate in incident response, and have you tested that interaction?” 1

Common hangups:

  • Tests exist, but no formal success criteria.
  • Evidence is scattered across email and chat with no index.
  • Findings are not tracked to closure, or closures lack proof.
  • The test excluded IT operations, leadership decision-makers, or key third parties.

Frequent implementation mistakes and how to avoid them

  1. Running only a tabletop with generic discussion.
    Fix: Include at least one step that generates system evidence (ticket, alert, access disablement record). 1

  2. Testing tooling without testing decisions and communications.
    Fix: Add escalation injects: who approves containment that impacts operations, who contacts a third party, who decides on restoring from backup.

  3. No linkage to CUI scope.
    Fix: State explicitly which systems are in scope and why; choose scenarios where CUI access is implicated.

  4. Remediation tracked informally.
    Fix: Use a single tracker with owners and closure evidence. Tie it to your GRC workflow so you can produce it on demand.

  5. Third party dependency ignored.
    Fix: Build a scenario where your MSSP, cloud provider, or SaaS provider must provide logs, take action, or join the bridge. Record that the contact path works.

Enforcement context and risk implications

No public enforcement cases specific to this practice have been identified, so you should treat the risk primarily as assessment failure and contractual impact under the CMMC Program requirements. 4

Operational risk is straightforward: untested incident response tends to fail at handoffs (security to IT, company to third party) and at the “last mile” steps that matter for CUI, such as account containment, evidence preservation, and controlled restoration. Practice 3.6.3 exists to force you to prove those handoffs work. 1

A practical 30/60/90-day execution plan

First 30 days (stand up the minimum viable testing program)

  • Define CUI scope for testing and document system inventory dependencies.
  • Draft a one-page IR Test Plan template and an AAR template aligned to 3.6.3. 1
  • Schedule the first tabletop exercise and confirm attendance across security, IT, and leadership.
  • Set up a single evidence repository and an index for “3.6.3 Test Packages.”

Next 60 days (run the first test and close findings)

  • Execute the first exercise using real comms paths and ticketing.
  • Produce an AAR within a short internal turnaround and open remediation tickets for every material gap.
  • Validate third party contacts and escalation steps as part of the scenario.
  • Begin mapping artifacts directly to 3.6.3 for assessment retrieval. 1

Next 90 days (prove improvement and make it repeatable)

  • Close high-risk corrective actions with evidence (updated runbooks, new alerts, training completion).
  • Retest at least the most material failure points (example: containment authority, log availability, restore procedure).
  • Add testing to your steady-state GRC calendar and assign an owner accountable for evidence completeness.
  • If you use Daydream, convert the process into a recurring control workflow so evidence collection happens as part of operations, not as a one-time scramble.

Frequently Asked Questions

What counts as “testing” for CMMC Level 2 Practice 3.6.3?

A tabletop can count if it validates roles, decisions, and procedures and produces documented results and corrective actions. Many teams also include at least one technical drill that generates system artifacts (tickets, alerts, access changes) to show the capability is real. 1

Do we have to include third parties (MSSP, cloud, SaaS) in the test?

If a third party is part of your incident response workflow for in-scope systems, you should test the contact path and handoff steps. You still own the requirement and must retain evidence that the process works end-to-end. 1

How do we scope tests to CUI without revealing CUI in exercise documents?

Reference systems and repositories by name and sensitivity category rather than embedding CUI content in the exercise materials. Store artifacts in an access-controlled location and limit distribution to need-to-know roles. 1

What evidence will an assessor ask for most often?

Expect requests for the test plan, the most recent after-action report, and proof that corrective actions were tracked to closure. Also be ready to show raw artifacts like tickets, alert screenshots, and logs tied to the exercise timeline. 1

We had a real incident. Can that count as a “test”?

A real incident can support evidence of capability if you can show the same elements: execution records, timelines, decisions, and a documented lessons-learned process with corrective actions. You should still maintain a planned testing program rather than relying on incidents to occur. 1

How do we keep this from becoming a once-a-year paperwork drill?

Make testing an owned control operation with a standard package: scope, scenario, execution artifacts, AAR, and remediation closure. Tools like Daydream help by assigning recurring tasks and keeping evidence indexed to 3.6.3 for assessment readiness. 1

Footnotes

  1. NIST SP 800-171 Rev. 2

  2. DoD CMMC Program Guidance

  3. 32 CFR Part 170

  4. 32 CFR Part 170; DoD CMMC Program Guidance


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream