ID.IM-02: Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties

To meet ID.IM-02 (“Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties”), you must run security tests and exercises, capture findings (including those involving third parties), turn them into tracked improvement actions, and prove closure through retesting and governance reporting. The control fails when testing results do not reliably drive measurable remediation.

Key takeaways:

  • Convert every test/exercise result into a prioritized improvement backlog with owners, due dates, and verification steps.
  • Extend exercises to key third parties (and their touchpoints) so supply-chain findings produce concrete fixes.
  • Retain evidence that links “test performed” → “finding” → “action” → “closure” → “retest/validation.”

ID.IM-02 sits in the “Identify” function of NIST CSF 2.0 and is about disciplined learning. You are expected to treat security testing (technical testing, control testing, and operational exercises) as an input to continuous improvement, not as a checkbox. The operational bar is simple: when a test or exercise surfaces a weakness, your program reliably converts that insight into an implemented improvement with accountability and proof.

This requirement explicitly includes tests and exercises performed in coordination with suppliers and relevant third parties. For a CCO, GRC lead, or security risk owner, that phrase matters because a large fraction of real incidents involve outsourced technology, cloud hosting, managed services, and software supply chains. ID.IM-02 expects you to incorporate those relationships into your testing strategy and your improvement loop.

The fastest way to operationalize ID.IM-02 is to define (1) what counts as a “test or exercise,” (2) how findings are normalized and prioritized, (3) how improvement actions are governed and verified, and (4) what evidence you will show an examiner, customer auditor, or board committee.

Regulatory text

Excerpt (ID.IM-02): “Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties.” 1

What the operator must do:
You must maintain a repeatable process that takes outputs from security tests and exercises (including joint activities with third parties), identifies improvement opportunities, and drives them to closure with clear accountability and verification. This includes: documenting the test/exercise scope, capturing findings, deciding what changes to make (people/process/technology), tracking remediation, and confirming effectiveness after implementation.

Plain-English interpretation

  • Tests and exercises can include penetration tests, vulnerability assessments, purple-team events, tabletop exercises, incident response simulations, disaster recovery tests, phishing simulations, access reviews with negative testing, control effectiveness testing, and third-party joint incident simulations.
  • Improvements are identified means findings are not left as narrative notes. They become actionable changes: new controls, updated configurations, revised playbooks, better monitoring, contract/SLA changes, or revised third-party onboarding/offboarding steps.
  • Including those done with suppliers and relevant third parties means you need a method to incorporate third-party results into your improvement pipeline. If a third party declines to participate, you must document the decision, the compensating approach, and the residual risk.

Who it applies to

Entity scope: Any organization operating a cybersecurity program using NIST CSF 2.0 as an internal standard, customer requirement, or audit baseline. 2

Operational scope (where this becomes real work):

  • Organizations with cloud providers, SaaS platforms, MSP/MSSP support, payment processors, payroll/HR platforms, customer data processors, or outsourced development.
  • Teams running security testing programs and operational resilience exercises.
  • Regulated or customer-audited environments where you must demonstrate continuous improvement and third-party oversight.

Control ownership (typical):

  • Primary owner: Security/GRC (program governance, evidence, reporting).
  • Executors: Security engineering, IT ops, application teams, incident response, business continuity, procurement/vendor management.
  • Third-party participants: critical suppliers, key service providers, and data processors (as applicable).

What you actually need to do (step-by-step)

Use this as an implementation runbook. The goal is traceability from test to remediation.

1) Define “tests and exercises” and set your intake sources

Create a documented inventory of test/exercise types your organization runs (or receives results from), including third-party coordinated events. Examples to include:

  • Internal technical testing (vulnerability scans, configuration audits)
  • External testing (penetration tests, bug bounty reports if applicable)
  • Operational exercises (tabletops, incident simulations, DR tests)
  • Third-party testing signals (SOC reports, assurance reports, coordinated incident simulations, vendor-driven findings that affect your environment)

Operational decision: define which sources are mandatory intake into the improvement process and which are “informational only.” Document the rationale.
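
The intake decision above can be made explicit and auditable. The sketch below is a minimal illustration, assuming a hypothetical classification table; the source names and routing labels are examples, not a prescribed taxonomy, and your documented rationale should drive the actual mapping.

```python
# Illustrative intake policy: which test/exercise sources must enter the
# improvement pipeline versus which are informational only. The entries
# below are example classifications; document your own rationale.
INTAKE_POLICY = {
    "penetration_test": "mandatory",
    "vulnerability_scan": "mandatory",
    "tabletop_exercise": "mandatory",
    "dr_test": "mandatory",
    "third_party_soc_report": "mandatory",
    "bug_bounty_report": "mandatory",
    "vendor_newsletter": "informational",
}

def route(source_type: str) -> str:
    """Return the intake route for a result; unknown sources are not
    silently dropped, they default to a triage review."""
    return INTAKE_POLICY.get(source_type, "triage-review")
```

Defaulting unknown sources to a review queue (rather than discarding them) keeps new signal types from bypassing the process before anyone has classified them.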

2) Standardize the finding record (normalize so you can govern it)

Create a single “Finding” template in your GRC or ticketing system. Minimum fields:

  • Source (test/exercise name, date, facilitator)
  • Scope (systems, business process, third parties involved)
  • Finding statement (what failed, where, evidence)
  • Risk statement (impact path; include third-party dependency if relevant)
  • Recommended improvement(s)
  • Owner, due date, and dependencies
  • Validation method (how you will prove the fix worked)
  • Status and closure notes

If third parties are involved, add:

  • Third party name, service, and relationship owner
  • Contractual lever (SLA, security addendum, right-to-audit, notification duty)
  • Agreed action plan and how you will confirm completion
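
The finding template above can be modeled as a structured record so every field is enforced at intake rather than left to free text. This is a minimal sketch, assuming a Python-based GRC integration; the field names mirror the template but are illustrative, not a required schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ThirdPartyContext:
    """Extra fields captured when a supplier is involved in a finding."""
    name: str
    service: str
    relationship_owner: str
    contractual_lever: str          # e.g. SLA, security addendum, right-to-audit
    agreed_action_plan: str
    completion_evidence: Optional[str] = None  # how completion was confirmed

@dataclass
class Finding:
    """Normalized finding record; field names are illustrative."""
    finding_id: str
    source: str                     # test/exercise name, date, facilitator
    scope: str                      # systems, business process, third parties
    statement: str                  # what failed, where, evidence
    risk_statement: str             # impact path, incl. third-party dependency
    recommended_improvements: list[str]
    owner: str
    due_date: str                   # ISO date, e.g. "2025-09-30"
    validation_method: str          # how you will prove the fix worked
    status: str = "open"            # open / in-remediation / closed / risk-accepted
    closure_notes: str = ""
    third_party: Optional[ThirdPartyContext] = None
```

Because `validation_method` has no default, a record cannot be created without stating up front how closure will be proven, which supports the closure-gating practice described later in this runbook.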

3) Triage and prioritize improvements with a repeatable method

Define a prioritization approach that your auditors can understand:

  • Severity of exposure (what could happen)
  • Exploitability/likelihood (qualitative if you lack scoring)
  • Business criticality (system/process importance)
  • Third-party concentration (single points of failure, systemic dependencies)

Write the rule, then follow it. In practice, most audit issues arise when prioritization amounts to “whoever shouts loudest” and the record does not show consistent, repeatable decision-making.

4) Convert improvements into governed remediation work

For every accepted finding, create a linked remediation ticket or project work item with:

  • Implementation steps
  • Change management linkage (if you use CAB/ITIL)
  • Testing/rollback plan (as applicable)
  • Target completion date and stakeholder approvals

For third-party-driven improvements, use a coordinated plan:

  • Send a formal notice to the third party relationship owner.
  • Agree on deliverables and evidence (screen captures, configuration attestations, updated runbooks, joint retest).
  • Track it the same way you track internal remediation, with clear ownership on both sides.

5) Verify effectiveness (don’t stop at “done”)

ID.IM-02 expects improvements to be identified and driven. Operationally, you should also prove they worked.

  • Retest the control or rerun the failed step in the exercise.
  • Update detection/response content if the gap involved monitoring or playbooks.
  • For third-party fixes, request evidence and, where feasible, perform a joint validation (for example, a reconducted exercise step).
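
A closure gate makes the “prove it worked” step enforceable rather than aspirational. The sketch below is a hypothetical workflow check, assuming findings are represented as simple dictionaries; the field names are illustrative.

```python
def can_close(finding: dict) -> tuple[bool, str]:
    """Gate finding closure on attached validation evidence, not just a
    'done' status. Field names are illustrative; returns (eligible, reason)."""
    if finding.get("status") != "remediated":
        return False, "remediation not marked complete"
    if not finding.get("validation_evidence"):
        return False, "no retest/validation evidence attached"
    if finding.get("third_party_involved") and not finding.get("third_party_evidence"):
        return False, "supplier evidence outstanding"
    return True, "eligible for closure"
```

Wiring a check like this into the ticketing workflow (as a closure precondition) directly addresses the common mistake, covered below, of closing items without validation.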

6) Feed the outcomes back into your program governance

Turn patterns into program-level improvements:

  • Update policies/standards/runbooks when a finding indicates a systemic gap.
  • Update training when exercises reveal role confusion or poor escalation.
  • Update third-party requirements when supplier coordination is weak (for example, clarify incident comms, contacts, and timelines).

This is where Daydream fits naturally: many teams can run tests but struggle to retain consistent evidence and show traceability across third-party findings, remediation, and validation. A requirement-mapped evidence calendar and control-owner mapping reduce the scramble during audits and customer security reviews.

Required evidence and artifacts to retain

Retain artifacts that show an end-to-end improvement loop.

Testing/exercise proof

  • Test plan or exercise charter (scope, participants, objectives)
  • Execution evidence (reports, logs, facilitator notes, attendance)
  • Final report with findings and recommendations

Improvement tracking

  • Finding register entries with unique IDs
  • Remediation tickets/projects linked to each finding
  • Risk acceptance records for deferred items, with approvals and rationale

Validation

  • Retest results (scan output, pen test verification, exercise rerun notes)
  • Change records (configuration diffs, approved change tickets)
  • Updated procedures/playbooks/policies (redlines or version history)

Third-party coordination

  • Communications showing coordination and agreed actions
  • Third-party provided evidence (attestations, deliverables)
  • Updated contract/security addendum language if a gap required it

Common exam/audit questions and hangups

Expect questions like:

  • “Show me your last exercise. What improvements were identified and closed?”
  • “How do third-party findings enter your remediation workflow?”
  • “Where is your evidence that the fix was validated, not just implemented?”
  • “Who owns remediation when the issue spans internal teams and a supplier?”
  • “How do you handle a third party that won’t participate in coordinated testing?”

Hangups typically occur when evidence is fragmented across email, spreadsheets, and multiple ticketing queues, with no clean linkage.

Frequent implementation mistakes and how to avoid them

  1. Mistake: Treating reports as the deliverable.
    Avoid it by requiring a remediation record for every in-scope finding, even if you later accept risk.

  2. Mistake: No third-party path.
    Avoid it by adding a third-party field set to the finding template and defining who engages suppliers and how evidence is collected.

  3. Mistake: Closing items without validation.
    Avoid it by making “validation method” mandatory and blocking closure until validation evidence is attached.

  4. Mistake: Exercises don’t test real dependencies.
    Avoid it by building at least one scenario that forces coordination with a critical third party (communications, access, logging, recovery steps).

  5. Mistake: Repeating the same failures.
    Avoid it by running trend reviews and turning recurring findings into program changes (standards, baselines, training, contractual requirements).

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific actions or penalties. Practically, weak ID.IM-02 execution increases:

  • Incident impact due to repeated control failures that were previously observed.
  • Supply-chain exposure when third-party weaknesses remain untracked or unverified.
  • Audit and customer due diligence friction because you cannot show a closed-loop improvement process with evidence.

Practical 30/60/90-day execution plan

First 30 days (stand up the mechanism)

  • Assign a control owner and backup; document responsibilities.
  • Define what counts as a test/exercise and what sources feed ID.IM-02. 2
  • Implement a standardized finding template and a single register (GRC tool or ticketing system).
  • Pilot with the most recent test/exercise report; convert findings into tracked actions.

Days 31–60 (extend to third parties and governance)

  • Identify critical third parties to include in coordinated exercises based on service criticality and data access.
  • Add third-party coordination steps: relationship owner engagement, evidence requirements, and escalation path.
  • Establish a recurring governance review (security steering committee or risk committee) that reviews open improvements and overdue actions.

Days 61–90 (prove closure and make it repeatable)

  • Run a coordinated exercise step with at least one critical third party (incident comms, recovery dependency, or logging/forensics handoff).
  • Validate closure on a meaningful subset of improvements via retest and attach proof.
  • Publish an “ID.IM-02 evidence pack” checklist so audits become retrieval work, not reconstruction.

Frequently Asked Questions

What counts as a “security exercise” for ID.IM-02?

Any planned activity that tests how your controls and teams perform under realistic conditions counts if it produces findings you can act on. Tabletop exercises, incident simulations, and DR tests qualify if you capture gaps and track improvements to closure. 2

Do we have to run joint tabletop exercises with every vendor?

No. Focus on suppliers and relevant third parties where the service is critical, the third party has sensitive access, or you depend on them for response or recovery. Document selection criteria and what you do when a third party will not participate.

How do we handle a third party that refuses to provide evidence of remediation?

Record the refusal, escalate through procurement and the relationship owner, and evaluate contractual options. If you must accept residual risk, document the risk decision and compensating controls, then revisit at renewal.

Can we close findings based on a third party’s attestation alone?

Sometimes, but you should define when attestation is sufficient versus when you require validation (for example, joint testing, technical proof, or updated runbooks). Auditors tend to challenge closures with weak verification for high-impact dependencies.

What artifacts do auditors ask for most often?

They typically ask for the most recent test/exercise report, the corresponding remediation tickets, and proof of retest or validation. For third-party items, they ask for coordination records and evidence the supplier completed the agreed actions.

How should we map ID.IM-02 into our control library?

Map it to a policy statement, an operating procedure, a named control owner, and a recurring evidence collection cadence so you can consistently produce proof. This mapping approach aligns with common program design expectations. 3

Footnotes

  1. NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes

  2. NIST CSWP 29

  3. NIST CSF 1.1 to 2.0 Core Transition Changes; NIST CSWP 29


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream