ID.IM-01: Improvements are identified from evaluations

To meet ID.IM-01 (“Improvements are identified from evaluations”), you must run documented evaluations of your cybersecurity program (assessments, audits, tests, incident reviews) and convert the results into tracked improvement items with owners, due dates, and verified closure. Keep evidence that findings are prioritized, approved, implemented, and re-tested.

Key takeaways:

  • Treat every evaluation output as input to a single, managed improvement backlog.
  • Assign ownership, timelines, and acceptance criteria; “noted” findings do not satisfy ID.IM-01.
  • Preserve end-to-end evidence: evaluation → finding → remediation plan → implementation → validation.

ID.IM-01 sits in the “Identify” function of NIST CSF 2.0 and is deceptively simple: you must identify improvements from evaluations. Operators usually fail this requirement in one of two ways. First, they do evaluations (internal audits, penetration tests, risk assessments, tabletop exercises) but don’t translate results into controlled, owned remediation work. Second, they do remediation work but can’t prove it started from a formal evaluation and was validated after completion.

For a Compliance Officer, CCO, or GRC lead, the practical goal is audit-ready traceability. You need a repeatable mechanism that turns evaluation outputs into decisions and actions: what will be fixed, when, by whom, with what success criteria, and how you will confirm the fix worked. ID.IM-01 does not require a specific tool, cadence, or maturity model. It requires operational discipline and evidence.

This page gives requirement-level implementation guidance you can execute quickly: scope, roles, step-by-step workflow, artifacts to retain, common audit questions, and a pragmatic execution plan. All requirement interpretation here anchors to NIST CSF 2.0 source text and transition materials. 1

Regulatory text

Requirement excerpt: “Improvements are identified from evaluations.” 1

Plain-English interpretation

You must prove that your organization learns from formal evaluations of cybersecurity and turns that learning into concrete improvements. “Evaluations” can include internal audits, external audits, risk assessments, penetration tests, vulnerability assessments, control testing, incident post-mortems, disaster recovery tests, and third-party assessments of your environment. ID.IM-01 expects an operator-visible pipeline from evaluation results to an improvement plan, tracked execution, and validation that the improvement actually resolved the issue.

What the operator must do

  • Define what counts as an “evaluation” in your program.
  • Capture evaluation outputs in a consistent format (findings, recommendations, observations).
  • Convert outputs into actionable improvement items in a tracked system.
  • Prioritize and approve work based on risk and business impact.
  • Assign owners, timelines, and acceptance criteria.
  • Verify completion (re-test, re-assess, or collect objective evidence) and document closure rationale.

Who it applies to

Entity scope

  • Any organization operating a cybersecurity program and using NIST CSF 2.0 as a framework reference, including regulated and non-regulated entities. 2

Operational context (where it shows up in real programs)

ID.IM-01 is relevant anywhere you perform or receive evaluations, including:

  • Security assurance: penetration tests, red team exercises, vulnerability scans, configuration reviews.
  • Risk management and GRC: control assessments, risk assessments, policy compliance reviews.
  • Incident response: post-incident reviews, root cause analysis, lessons learned.
  • Business continuity: tabletop exercises, disaster recovery and backup restore tests.
  • Third-party risk management: SOC report reviews, third-party security assessments, contract compliance reviews of security obligations.

If you rely on third parties for critical services, ID.IM-01 also applies to how you respond to evaluation signals coming from those third parties (for example, a SOC report exception that requires compensating controls or contract changes).

What you actually need to do (step-by-step)

The fastest way to operationalize ID.IM-01 is to implement a closed-loop “evaluation-to-improvement” workflow. Below is a practical sequence you can stand up with existing tools (ticketing + GRC tracker + document repository).

Step 1: Define “evaluation” and create an intake rule

Create a short procedure that lists approved evaluation sources and how outputs enter your system of record. Examples:

  • Internal audit report
  • External audit report
  • Penetration test report
  • Vulnerability scan summary
  • Incident post-mortem
  • Tabletop exercise after-action report
  • Third-party assessment results relevant to your environment

Operational rule: every evaluation must produce either (a) at least one tracked improvement item or (b) an explicit “no findings” record with approver sign-off.
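The intake rule above can be sketched as a small check. This is an illustrative sketch, not a prescribed implementation; the class and field names (`EvaluationIntake`, `no_findings_approver`) are assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EvaluationIntake:
    """One evaluation entering the system of record (hypothetical schema)."""
    source: str                                  # e.g. "2025 external pen test"
    findings: List[str] = field(default_factory=list)
    no_findings_approver: Optional[str] = None   # sign-off for a "no findings" record

    def is_compliant(self) -> bool:
        # Rule (a): at least one tracked improvement item, or
        # rule (b): an explicit "no findings" record with approver sign-off.
        return bool(self.findings) or self.no_findings_approver is not None
```

Any evaluation that satisfies neither branch should be flagged at intake rather than filed away.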

Step 2: Standardize findings into a minimum data set

Normalize evaluation outputs into consistent fields so you can sort, prioritize, and report. Minimum fields that auditors expect to see in some form:

  • Finding title and description
  • Source evaluation (name, date, evaluator)
  • Affected systems/processes
  • Risk statement (what could happen)
  • Recommended remediation (or improvement intent)
  • Severity/priority rating (your scale is fine; be consistent)
  • Owner (person/team)
  • Target completion date
  • Dependencies (budget, change window, third-party action)
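The minimum data set above can be expressed as a record schema with an intake quality check. The field names here are assumptions for illustration, not a NIST-mandated format; use whatever names your backlog tool supports, but keep them consistent.

```python
from dataclasses import dataclass, asdict
from typing import List

@dataclass
class Finding:
    """Normalized finding record (illustrative field names)."""
    title: str
    description: str
    source_evaluation: str   # name, date, evaluator
    affected_assets: str
    risk_statement: str      # what could happen
    recommendation: str
    severity: str            # your scale is fine; be consistent
    owner: str               # person or team
    target_date: str         # ISO date, e.g. "2025-09-30"
    dependencies: str = ""   # budget, change window, third-party action

def missing_fields(finding: Finding) -> List[str]:
    """Names of required fields left blank, for intake QA."""
    optional = {"dependencies"}
    return [k for k, v in asdict(finding).items()
            if k not in optional and not str(v).strip()]
```

Running `missing_fields` at intake catches records that would later fail an audit sample (for example, a finding logged with no owner).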

Step 3: Create a single improvement backlog (system of record)

Pick one authoritative backlog. Common patterns:

  • GRC platform issues register
  • Ticketing system (with a GRC linkage)
  • Risk register with remediation tasks

The control objective is traceability. A spreadsheet can work temporarily, but you must control versioning and approvals. Daydream typically fits here as the control-and-evidence layer: map ID.IM-01 to a control owner, define the recurring evidence request, and keep the evaluation-to-remediation linkage in one place so you can answer auditors quickly.

Step 4: Triage and prioritize using explicit criteria

Write down your prioritization criteria and apply them consistently. Example criteria:

  • Exploitability and exposure (internet-facing, privileged access)
  • Data sensitivity (regulated or confidential data)
  • Control impact (does it break a key control)
  • Operational criticality (tier-0 systems)
  • Third-party dependency (requires vendor change)

Record the decision. If you accept risk or defer, document the rationale and approval. ID.IM-01 is satisfied by identification plus managed disposition, but only if you can show the decision trail and planned next step.
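A written-down rubric can be as simple as weighted criteria. The weights and criterion names below are placeholders, not recommended values; the point is that the rubric is explicit and applied the same way to every finding.

```python
from typing import Dict

# Example weights per triage criterion (assumed values, tune to your program).
CRITERIA_WEIGHTS: Dict[str, int] = {
    "internet_facing": 3,      # exploitability and exposure
    "regulated_data": 3,       # data sensitivity
    "breaks_key_control": 2,   # control impact
    "tier0_system": 2,         # operational criticality
    "vendor_dependency": 1,    # requires third-party action
}

def priority_score(flags: Dict[str, bool]) -> int:
    """Sum the weights of the criteria that apply to a finding."""
    return sum(w for name, w in CRITERIA_WEIGHTS.items() if flags.get(name))

def priority_band(score: int) -> str:
    """Map a score onto the program's priority scale (example thresholds)."""
    if score >= 6:
        return "critical"
    if score >= 3:
        return "high"
    return "standard"
```

Recording the input flags alongside the resulting band gives you the decision trail auditors ask for.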

Step 5: Convert findings into remediation plans with acceptance criteria

For each prioritized improvement item:

  • Define the remediation approach (technical fix, process change, training, contract update)
  • Set acceptance criteria (what evidence proves it’s fixed)
  • Assign implementation and validation owners (not always the same person)
  • Define a validation method (re-test, configuration evidence, control re-test)

Avoid vague closure like “patched” without proof. Closure should reference objective evidence.

Step 6: Execute and track status changes with timestamps and artifacts

Track state transitions such as: Open → In Progress → Pending Validation → Closed (or Deferred/Accepted Risk). For each state, capture:

  • Change ticket or implementation record
  • Approvals
  • Evidence attachments (screenshots, scan results, configs, meeting minutes)
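The state transitions above can be guarded so that illegal jumps (for example, Open straight to Closed with no validation step) are rejected. This is a minimal sketch of that lifecycle, assuming the states named in this step.

```python
from typing import Dict, Set

# Allowed next states per current state; Closed and Accepted Risk are terminal.
ALLOWED: Dict[str, Set[str]] = {
    "Open": {"In Progress", "Deferred", "Accepted Risk"},
    "In Progress": {"Pending Validation", "Deferred"},
    "Pending Validation": {"Closed", "In Progress"},  # a failed re-test reopens
    "Deferred": {"In Progress"},
    "Accepted Risk": set(),
    "Closed": set(),
}

def transition(current: str, new: str) -> str:
    """Return the new state, or raise if the jump would break the audit trail."""
    if new not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new
```

Ticketing and GRC tools usually let you enforce the same rule with workflow configuration instead of code.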

Step 7: Validate effectiveness and close the loop

Validation is where many programs fail ID.IM-01. Require one of:

  • Re-test result (pen test re-test letter, vulnerability scan delta)
  • Control test evidence (sample testing results)
  • Post-change monitoring output (alerts reduced, config compliance)
  • Updated policy/procedure plus training completion evidence (if people/process fix)

Close the item only after validation evidence is attached or linked.
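The closure rule can be enforced with a simple guard: no accepted validation artifact, no “Closed.” The evidence-type names below are examples drawn from the list above, and the record shape is an assumption.

```python
from typing import Dict, List

# Evidence types that satisfy validation (examples matching the list above).
ACCEPTED_VALIDATION = {
    "retest_report",        # pen test re-test letter, scan delta
    "control_test",         # sample testing results
    "monitoring_output",    # post-change monitoring evidence
    "policy_plus_training", # people/process fixes
}

def can_close(item: Dict) -> bool:
    """True only if at least one accepted validation artifact is linked."""
    evidence: List[Dict] = item.get("validation_evidence", [])
    return any(e.get("type") in ACCEPTED_VALIDATION for e in evidence)
```

An item closed with only an implementation ticket attached ("patched") fails this check by design.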

Step 8: Feed program-level improvements back into governance

On a defined governance cadence, summarize trends:

  • Recurring root causes (asset inventory gaps, access control drift)
  • Process bottlenecks (change management delays, third-party responsiveness)
  • Control areas needing redesign

This is where ID.IM-01 becomes a maturity driver: improvements are not just one-off fixes; they also adjust the system that produced the findings.
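The governance-cadence trend rollup can start as a simple count of findings by root-cause tag, surfacing recurring causes worth a program-level fix. The tag values are illustrative.

```python
from collections import Counter
from typing import Dict, List, Tuple

def root_cause_trends(findings: List[Dict]) -> List[Tuple[str, int]]:
    """Return (root_cause, count) pairs, most frequent first."""
    return Counter(f["root_cause"] for f in findings).most_common()
```

A root cause that tops this list quarter after quarter (say, “asset inventory gap”) is a signal to redesign the control, not just close more tickets.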

Required evidence and artifacts to retain

Use this checklist to be audit-ready:

Evaluation artifacts

  • Evaluation scope statement and dates
  • Final report (or executive summary) and supporting workpapers, if available
  • “No findings” attestation when applicable
  • For third parties: received assessment reports and your internal review notes

Improvement tracking artifacts

  • Central backlog export or dashboard view showing all items and statuses
  • Individual records with required fields (source, owner, priority, due date)
  • Risk acceptance or deferral approvals (with rationale)

Remediation execution artifacts

  • Change tickets, pull requests, configuration management records
  • Meeting minutes for remediation working group decisions
  • Communications to third parties when remediation depends on them

Validation artifacts

  • Re-test reports or scan results
  • Control re-test evidence
  • Closure memo for exceptions (why closed without re-test, who approved)

Mapping and ownership artifacts

  • Control statement and procedure referencing ID.IM-01
  • Named control owner and backups
  • Recurring evidence collection schedule and results

Common exam/audit questions and hangups

Auditors usually probe the “closed-loop” nature of the requirement:

  1. Show me your last evaluation and the improvements identified from it.
    Hangup: the report exists, but the remediation items are not linked.

  2. How do you ensure improvements are prioritized by risk?
    Hangup: prioritization is informal; no documented criteria.

  3. How do you know the improvement worked?
    Hangup: items are closed based on completion, not validation.

  4. What happens when a finding is deferred or accepted?
    Hangup: “accepted” is stated verbally; no approval trail.

  5. How do you ensure evaluations don’t fall through the cracks?
    Hangup: no intake mechanism; evaluations sit in email or shared drives.

Frequent implementation mistakes and how to avoid them

  • Treating evaluations as one-time documents. Why it fails ID.IM-01: you can’t prove improvements were identified and managed. Fix: implement an intake rule and backlog linkage for every evaluation.
  • Mixing remediation tracking across multiple tools with no linkage. Why it fails: the audit trail breaks. Fix: pick a system of record and link out to execution tickets.
  • Closing items without validation. Why it fails: no evidence of effectiveness. Fix: require re-test or objective closure evidence before the status “Closed.”
  • Over-scoping “improvements” into a wish list. Why it fails: the backlog becomes noise. Fix: only log actionable items with an owner and acceptance criteria.
  • Ignoring third-party-driven findings. Why it fails: supply chain risks persist. Fix: track third-party dependencies, escalation, and contractual remedies.

Enforcement context and risk implications

NIST CSF is a framework, not a regulator, so ID.IM-01 does not carry direct statutory penalties by itself. Your risk is indirect but real: many regulators and auditors expect continuous improvement evidence, and failures often surface as “governance” or “program effectiveness” gaps during examinations. Practically, if you cannot show a closed-loop improvement process, you will struggle to defend control effectiveness after an incident, a failed audit, or a material third-party issue. 2

Practical 30/60/90-day execution plan

Use this as an operator’s rollout plan. Adjust to your org’s change control pace.

First 30 days (stand up the mechanism)

  • Name the ID.IM-01 control owner and approver.
  • Publish a one-page procedure: what counts as an evaluation, how findings are logged, required fields, and closure rules.
  • Create the improvement backlog structure (fields, statuses, linkage conventions).
  • Import recent evaluation findings into the backlog and assign owners.
  • Define validation requirements for closure (re-test or objective evidence).

Days 31–60 (make it consistent and reviewable)

  • Add prioritization criteria and a triage meeting cadence with minutes retained.
  • Create a standard “evaluation intake” template for internal teams and third parties.
  • Build a basic management report: open items by priority, overdue items, and items pending validation.
  • Pilot end-to-end traceability for at least one evaluation type (for example, pen test to closure).
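The basic management report named above (open items by priority, overdue items, items pending validation) can be sketched as a single rollup function. The item field names are assumptions matching the backlog schema used earlier on this page.

```python
from datetime import date
from typing import Dict, List

def management_report(items: List[Dict], today: date) -> Dict:
    """Roll up backlog items into the three views managers review."""
    open_items = [i for i in items
                  if i["status"] not in ("Closed", "Accepted Risk")]
    by_priority: Dict[str, int] = {}
    for i in open_items:
        by_priority[i["priority"]] = by_priority.get(i["priority"], 0) + 1
    return {
        "open_by_priority": by_priority,
        "overdue": [i["id"] for i in open_items
                    if date.fromisoformat(i["target_date"]) < today],
        "pending_validation": [i["id"] for i in items
                               if i["status"] == "Pending Validation"],
    }
```

Most GRC platforms and ticketing tools can produce the same three views as saved filters or dashboards; the code is only to make the report's definition explicit.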

Days 61–90 (harden for audit and scale)

  • Run an internal audit-style spot check: pick several closed items and verify linkage and validation evidence.
  • Formalize risk acceptance workflow and approvals for deferred items.
  • Trend findings and propose program-level improvements (process fixes, control redesign).
  • In Daydream, map ID.IM-01 to policy, procedure, control owner, and recurring evidence collection so evidence requests and audit exports are standardized. 1

Frequently Asked Questions

What qualifies as an “evaluation” for ID.IM-01?

Any structured review that assesses security posture or control performance can qualify, including audits, penetration tests, vulnerability assessments, incident post-mortems, and tabletop exercises. Define your accepted evaluation sources in a procedure and apply it consistently. 2

Do we need to remediate every finding to meet ID.IM-01?

You need to identify improvements and manage disposition. If you defer or accept risk, keep documented rationale and approval, plus a plan to revisit if conditions change.

How do we prove the improvement was identified “from” the evaluation?

Maintain a link from each improvement item back to the originating evaluation (report name/date) and quote or summarize the specific finding. Auditors look for traceability from source to closure evidence.

Can a spreadsheet satisfy the requirement?

A spreadsheet can work if it is controlled, consistently updated, and includes approvals and evidence links. Most teams eventually move to a system that better preserves audit trails and ownership history.

What evidence is most commonly missing during audits?

Validation evidence. Teams often have the report and the remediation ticket, but no proof that the fix worked (re-test result, scan delta, or control re-test package).

How should we handle third-party findings that require the provider to act?

Track the item in your backlog with your internal owner, record the dependency, and retain communications and escalation steps. If timelines slip, document risk decisions and contractual or compensating-control actions.

Footnotes

  1. NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes

  2. NIST CSWP 29

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream