Evaluation

HIPAA’s Evaluation requirement means you must periodically test and review, both technically and operationally, whether your security policies and procedures still meet the Security Rule, and you must repeat the evaluation when meaningful environmental or operational changes could affect ePHI security (45 CFR Parts 160, 162, 164). Operationalize it by setting an evaluation trigger model, running scoped technical and nontechnical checks, documenting results, and tracking remediation to closure.

Key takeaways:

  • Run periodic evaluations plus event-driven evaluations tied to change.
  • Cover both technical controls (systems, configurations) and nontechnical controls (policies, workflows, training, governance).
  • Keep a tight evidence package: scope, methods, results, decisions, and remediation tracking.

“Evaluation” is one of the most misunderstood HIPAA Security Rule administrative requirements because it sounds like “do a risk assessment.” It overlaps, but it is not the same deliverable. The Evaluation requirement is about verifying, on a repeating basis and after significant change, whether the security policies and procedures you already implemented are still effective and still meet the Security Rule’s expectations for protecting electronic protected health information (ePHI) (45 CFR Parts 160, 162, 164).

For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalization is to treat Evaluation as a governed program with two lanes: (1) a scheduled evaluation cycle that tests a defined slice of controls, and (2) an event-driven evaluation process that activates when changes occur (new EHR module, cloud migration, acquisition, new third party handling ePHI, major workflow change, incident learnings). Your goal is to produce defensible, auditor-ready proof that you checked whether controls still work as intended, and that you corrected gaps in a timely, tracked way.

Regulatory text

Requirement (excerpt): “Perform a periodic technical and nontechnical evaluation… and, subsequently, in response to environmental or operational changes affecting the security of electronic protected health information… that establishes the extent to which a covered entity’s or business associate’s security policies and procedures meet the requirements of this subpart.” (45 CFR § 164.308(a)(8))

What the operator must do

You must (a) run evaluations on a recurring basis, and (b) run evaluations when change could affect ePHI security. The evaluation must include technical and nontechnical components, and it must explicitly measure whether your security policies and procedures still meet the HIPAA Security Rule requirements (45 CFR Parts 160, 162, 164).

Plain-English interpretation

“Evaluation” means you don’t get to set security controls once and forget them. You must periodically prove your HIPAA security program still matches reality:

  • Systems drift (patching, configurations, identity rules).
  • Workflows drift (new teams, new intake processes, remote work changes).
  • Third parties change (new subcontractors, new hosting models).
  • The environment changes (new threats, new locations, new connectivity).

Your output is not just a memo. It is a repeatable set of checks, results, and remediation actions that show you verified alignment between written policies, implemented controls, and how ePHI is actually handled.

Who it applies to

Entity scope: Covered Entities and Business Associates (45 CFR Parts 160, 162, 164).

Operational context: Any environment where ePHI is created, received, maintained, or transmitted. Common examples:

  • EHR/EMR platforms, patient portals, imaging systems, revenue cycle tools.
  • Identity and access management for workforce members and privileged users.
  • Endpoints used to access ePHI (VDI, laptops, mobile).
  • Cloud services and third parties that store/process ePHI under a BAA.

What you actually need to do (step-by-step)

1) Define your evaluation “system”

Write a short Evaluation Procedure that answers four questions:

  • Cadence: What “periodic” means for your organization (for example, annual enterprise evaluation plus quarterly targeted evaluations). HIPAA does not define a fixed interval; you must define it and follow it (45 CFR Parts 160, 162, 164).
  • Triggers: The changes that require an out-of-cycle evaluation.
  • Scope model: How you decide what gets tested (applications, infrastructure, workforce groups, locations, third parties).
  • Pass/fail logic: What counts as a deficiency, and how you document compensating controls.

Practical trigger list (use and tailor):

  • New system handling ePHI goes live or materially changes.
  • Cloud migration, network segmentation redesign, identity platform change.
  • M&A, new affiliate onboarding, major workforce model change.
  • New third party with ePHI access; existing third party changes hosting/subprocessors.
  • Security incident with ePHI exposure potential; major audit findings.
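One lightweight way to operationalize the cadence-and-triggers model above is to encode it as data so it can be embedded in change tickets and reviewed like any other procedure. The trigger keys and field names below are illustrative conventions, not regulatory terms; a minimal Python sketch:

```python
from dataclasses import dataclass

# Illustrative trigger catalog -- tailor the keys and descriptions to your
# environment; none of these labels come from the regulation itself.
EVALUATION_TRIGGERS = {
    "new_ephi_system": "New system handling ePHI goes live or materially changes",
    "infrastructure_change": "Cloud migration, segmentation redesign, identity platform change",
    "org_change": "M&A, new affiliate onboarding, major workforce model change",
    "third_party_change": "New third party with ePHI access, or hosting/subprocessor change",
    "incident_or_finding": "Security incident with ePHI exposure potential, or major audit finding",
}

@dataclass
class ChangeRecord:
    """Intake answers captured on a change ticket or third-party onboarding form."""
    description: str
    triggers: list[str]  # keys from EVALUATION_TRIGGERS that reviewers checked

def requires_out_of_cycle_evaluation(change: ChangeRecord) -> bool:
    """A change needs a targeted, event-driven evaluation if any catalog trigger applies."""
    unknown = set(change.triggers) - EVALUATION_TRIGGERS.keys()
    if unknown:
        raise ValueError(f"Unrecognized trigger(s): {sorted(unknown)}")
    return bool(change.triggers)
```

Keeping the catalog in one place means intake forms, CAB checklists, and the Evaluation Procedure all reference the same trigger list, which is easier to defend than ad hoc judgment calls.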

2) Build an evaluation plan that includes technical + nontechnical tests

Create an Evaluation Plan template with two workstreams:

Technical evaluation (examples of what to test)

  • Access controls: role design, MFA enforcement, privileged access, termination timeliness.
  • Audit controls: logging enabled, retention aligned to policy, alerting routes to responders.
  • Integrity controls: change control, configuration baselines, malware defenses.
  • Transmission security: encryption in transit, secure remote access, third-party connections.

Nontechnical evaluation (examples of what to test)

  • Policies and procedures: do they reflect current systems and responsibilities?
  • Governance: assigned owners, approvals, exceptions, and risk acceptance.
  • Workforce practices: onboarding/offboarding workflow reality vs. written process.
  • Third-party management: BAAs in place, security requirements tracked, access reviewed.

Tie each test back to an internal control statement, then record the evidence you reviewed and your conclusion. Keep the plan scoped; “everything everywhere” evaluations collapse under their own weight.
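A plan structured this way can be represented as simple records, which makes the “tie each test to a control statement” rule mechanical rather than aspirational. The test IDs, control statements, and workstream labels below are hypothetical examples, not HIPAA-defined identifiers:

```python
from dataclasses import dataclass, field

@dataclass
class EvaluationTest:
    """One planned test, tied back to an internal control statement."""
    test_id: str
    workstream: str          # "technical" or "nontechnical"
    control_statement: str   # the internal control the test verifies
    method: str              # interview, config review, ticket sampling, ...
    evidence_reviewed: list[str] = field(default_factory=list)
    conclusion: str = "not yet tested"

# Illustrative two-test plan covering both required workstreams.
plan = [
    EvaluationTest("T-01", "technical",
                   "MFA is enforced for all remote ePHI access", "config review"),
    EvaluationTest("NT-01", "nontechnical",
                   "Termination removes ePHI access within policy SLA", "ticket sampling"),
]

def workstream_counts(tests: list[EvaluationTest]) -> dict[str, int]:
    """Quick completeness check: the plan must include both workstreams."""
    counts: dict[str, int] = {}
    for t in tests:
        counts[t.workstream] = counts.get(t.workstream, 0) + 1
    return counts
```

A plan that comes back with zero entries in either workstream fails the Evaluation requirement’s “technical and nontechnical” condition before fieldwork even starts.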

3) Execute fieldwork and document results like an auditor will read it

For each evaluation area, document:

  • Scope: systems, locations, teams, and third parties reviewed.
  • Methods: interviews, config review, ticket sampling, log review, tabletop, etc.
  • Results: what you found, including partial compliance and exceptions.
  • Impact statement: why the gap matters for ePHI security.
  • Corrective actions: owner, target date, dependency, interim compensating control.

A clean format is a findings register (spreadsheet or GRC workflow) with unique IDs for each finding, mapped to the Security Rule control domain.
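The findings-register mechanics can be sketched in a few lines. The domain codes and ID format below are an internal naming convention of this example, not regulatory identifiers; the three domain names are the familiar Security Rule safeguard categories:

```python
import itertools
from dataclasses import dataclass

# Security Rule safeguard categories used to map findings; the short codes
# are an internal convention for readable IDs, not regulatory terms.
DOMAINS = {
    "ADM": "Administrative safeguards",
    "PHY": "Physical safeguards",
    "TEC": "Technical safeguards",
}

_counter = itertools.count(1)  # monotonic counter so IDs are never reused

@dataclass
class Finding:
    finding_id: str
    domain: str        # key in DOMAINS
    summary: str
    impact: str        # why the gap matters for ePHI security
    owner: str
    status: str = "open"

def new_finding(domain: str, summary: str, impact: str, owner: str) -> Finding:
    """Create a finding with a unique, domain-prefixed ID like 'TEC-003'."""
    if domain not in DOMAINS:
        raise ValueError(f"Unknown domain: {domain}")
    return Finding(f"{domain}-{next(_counter):03d}", domain, summary, impact, owner)
```

Whether this lives in a spreadsheet or a GRC tool, the properties that matter are the same: IDs are unique and stable, every finding carries an impact statement, and every finding maps to a control domain.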

4) Create a remediation workflow that actually closes the loop

Evaluation without remediation tracking fails in audits. Minimum mechanics:

  • Assign a single accountable owner per finding.
  • Require a remediation plan for high-risk findings (what changes, when, validation step).
  • Validate closure with evidence (screenshots, configs, tickets, test results).
  • Record risk acceptance separately with approver and rationale.

5) Make evaluation event-driven through change management

Plug evaluation triggers into:

  • IT change control (CAB) intake forms: “Does this affect systems that store/process ePHI?”
  • Third-party onboarding: “Will this third party access or host ePHI?”
  • Security incident postmortems: “Do we need a targeted evaluation based on lessons learned?”

This is how you prove you evaluate “in response to environmental or operational changes” (45 CFR Parts 160, 162, 164).

6) Report up and preserve the record

Produce an Evaluation Report or briefing packet that includes:

  • Executive summary of scope, major findings, remediation status.
  • Top risks requiring leadership decisions (budget, timing, risk acceptance).
  • Program improvements (policy updates, new monitoring, control redesign).

Required evidence and artifacts to retain

Keep an “Evaluation evidence pack” per cycle/event:

  • Evaluation Procedure (cadence, triggers, roles).
  • Evaluation Plan (scope, test list, methods).
  • Interview notes and attendance (who confirmed processes).
  • Technical evidence: configurations, access listings, log settings, encryption settings, screenshots, system reports.
  • Sample testing artifacts: tickets, approval records, termination samples, access review samples.
  • Findings register with owners and status.
  • Remediation evidence and closure validation.
  • Formal risk acceptances with approvals and time bounds.
  • Final Evaluation Report and leadership readout materials.
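A simple completeness check over the evidence pack catches missing artifacts before an auditor does. The category names below mirror the list above but are an assumed internal convention, not required labels:

```python
# Artifact categories expected in each cycle's evidence pack; names mirror
# the checklist in this guide and are an internal convention, not a mandate.
REQUIRED_ARTIFACTS = [
    "evaluation_procedure",
    "evaluation_plan",
    "interview_notes",
    "technical_evidence",
    "sample_testing_artifacts",
    "findings_register",
    "remediation_evidence",
    "risk_acceptances",
    "final_report",
]

def missing_artifacts(pack: dict[str, list[str]]) -> list[str]:
    """Return required artifact categories that are absent or empty in this pack."""
    return [name for name in REQUIRED_ARTIFACTS if not pack.get(name)]
```

Run the check when the cycle closes; an empty result is your signal that the pack can be archived as-is and reproduced on demand.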

If you use Daydream to manage HIPAA control testing or third-party due diligence, the most valuable output here is a single system of record: scoped test procedures, evidence requests, and remediation tracking in one place, so you can reproduce the evaluation history on demand.

Common exam/audit questions and hangups

Expect these questions:

  • “Show me your last periodic evaluation and any event-driven evaluations after major system changes.” (45 CFR Parts 160, 162, 164)
  • “How did you define ‘periodic’ and how do you ensure it happens?”
  • “What changes trigger an evaluation, and how do you detect those changes?”
  • “How do you know policies match actual workflows for ePHI?”
  • “Show evidence that findings were remediated or formally accepted.”

Hangups auditors focus on:

  • Evaluations that are just narrative write-ups with no test evidence.
  • No link between change management and evaluation triggers.
  • Findings that linger without owners, dates, or closure proof.

Frequent implementation mistakes (and how to avoid them)

  1. Treating Evaluation as a one-time risk assessment.
    Fix: Keep risk analysis separate. Use Evaluation to test whether implemented controls and procedures still meet HIPAA requirements (45 CFR Parts 160, 162, 164).

  2. Only doing technical testing.
    Fix: Add nontechnical checks: policy accuracy, training workflows, exception handling, third-party access governance.

  3. No trigger model, so “change-driven” evaluations never happen.
    Fix: Embed trigger questions in change tickets and third-party intake.

  4. Findings recorded, remediation not validated.
    Fix: Require closure evidence and a validation step (re-test, config check, or workflow walk-through).

  5. Scope creep makes the evaluation impossible to finish.
    Fix: Use a tiered scope model: enterprise baseline plus targeted, high-risk domains tied to recent changes.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this page, so this guidance avoids naming specific matters. From a risk standpoint, weak evaluation practices tend to show up indirectly: if an incident occurs or an audit starts, you will be asked to prove you reassessed controls after meaningful change and that your written program reflects real operations (45 CFR Parts 160, 162, 164). Gaps here often correlate with unmanaged system drift, stale access models, and third-party changes that were never re-reviewed.

A practical 30/60/90-day execution plan

First 30 days (stabilize and define)

  • Assign an Evaluation owner and backup.
  • Draft the Evaluation Procedure: cadence, triggers, roles, required outputs (45 CFR Parts 160, 162, 164).
  • Inventory “ePHI systems” at a workable level: major applications, hosting, key third parties.
  • Create templates: Evaluation Plan, Findings Register, Evaluation Report.

Days 31–60 (run a first evaluation you can defend)

  • Pick a bounded scope: one high-impact ePHI workflow (for example, patient portal access) plus one infrastructure slice (identity and logging).
  • Execute technical + nontechnical tests.
  • Publish findings with owners and target dates.
  • Start remediation on the highest-risk gaps; document compensating controls where needed.

Days 61–90 (operationalize triggers and repeatability)

  • Integrate triggers into change management and third-party onboarding.
  • Run one event-driven evaluation based on a recent change (even if small) to prove the mechanism works.
  • Present results and remediation status to leadership.
  • Store all evidence in a single repository with consistent naming and retention.

Frequently Asked Questions

What counts as “periodic” under the HIPAA evaluation requirement?

HIPAA does not set a specific interval; you must define a cadence and follow it (45 CFR Parts 160, 162, 164). Document your rationale and use targeted evaluations to keep scope manageable between larger cycles.

Is this the same as the HIPAA risk analysis requirement?

No. Evaluation checks whether your implemented security policies and procedures still meet the Security Rule and still fit your environment (45 CFR Parts 160, 162, 164). Risk analysis focuses on identifying and assessing risks; evaluation tests ongoing alignment and effectiveness.

What is a “technical” vs. “nontechnical” evaluation in practice?

Technical evaluation reviews systems and control configurations (access, logs, encryption). Nontechnical evaluation reviews governance and operational reality, such as whether policies match actual workflows, and whether responsibilities and approvals function as written.

Do I need to evaluate every third party each cycle?

You need evaluations that reflect where ePHI risk changes. Use triggers for new or changed third parties handling ePHI, and use periodic sampling for recurring oversight so you can show a rational, repeatable approach (45 CFR Parts 160, 162, 164).

What evidence is “enough” to satisfy an auditor?

Provide scope, methods, test steps performed, the evidence reviewed, findings, and remediation proof (45 CFR Parts 160, 162, 164). A short report without underlying artifacts rarely holds up.

How do we operationalize this with limited staff?

Use a narrow baseline evaluation and rotate focus areas tied to change. A tool like Daydream can reduce manual coordination by standardizing test procedures, collecting evidence, and tracking remediation in one workflow.

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
