Frequency of Malware Risk Evaluations
PCI DSS 4.0.1 requires you to set (and document) how often you will re-evaluate system components you’ve classified as “not at risk for malware,” and that frequency must come from a targeted risk analysis performed under Requirement 12.3.1. Your job is to make the evaluation cadence defensible, repeatable, and evidenced. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Key takeaways:
- You cannot pick an arbitrary review interval; the evaluation frequency must be defined by a targeted risk analysis. (PCI DSS v4.0.1 Requirement 5.2.3.1)
- The scope is the set of system components you’ve decided are “not at risk for malware,” not your whole environment. (PCI DSS v4.0.1 Requirement 5.2.3.1)
- Auditors will look for both the decision (why “not at risk”) and the operational follow-through (reviews performed on schedule with evidence). (PCI DSS v4.0.1 Requirement 5.2.3.1)
“Not at risk for malware” is a high-scrutiny claim in PCI programs. It can be valid for certain system types, architectures, or locked-down environments, but it creates a second obligation: you must periodically re-check whether that claim still holds. Requirement 5.2.3.1 makes the cadence non-negotiable. If you have system components in scope for PCI where you’ve determined anti-malware controls aren’t needed because they’re “not at risk,” you must define how often you will re-evaluate that determination, and you must define it through a targeted risk analysis performed according to Requirement 12.3.1. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Operationally, this is a governance-and-evidence requirement as much as it is a security requirement. You need (1) an inventory of components you’ve classified as not at risk, (2) a documented risk analysis that sets the evaluation frequency, and (3) a working process that hits that schedule and records results. If you miss any of these, you’ll struggle to defend the control during assessment, even if your environment is genuinely low malware risk.
Regulatory text
Requirement: “The frequency of periodic evaluations of system components identified as not at risk for malware is defined in the entity's targeted risk analysis, which is performed according to all elements specified in Requirement 12.3.1.” (PCI DSS v4.0.1 Requirement 5.2.3.1)
What the operator must do:
- Identify which system components you have designated as “not at risk for malware.”
- Perform a targeted risk analysis (following the elements required by Requirement 12.3.1) to set how often you will re-evaluate those components.
- Run those periodic evaluations at the frequency you defined, and keep evidence that you did them. (PCI DSS v4.0.1 Requirement 5.2.3.1)
This requirement is not asking you to prove malware never occurs. It is asking you to prove your “not at risk” classification stays current, and that the review cadence is risk-based and documented. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Plain-English interpretation
If you’ve decided some systems don’t need anti-malware controls because they’re not exposed to malware, PCI expects you to revisit that decision on a schedule you can defend. You set the schedule via targeted risk analysis, document it, then follow it with recurring reviews and artifacts. (PCI DSS v4.0.1 Requirement 5.2.3.1)
A good mental model: the “not at risk” determination is not permanent. It expires unless you renew it through periodic evaluation. Your targeted risk analysis defines the renewal interval. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Who it applies to (entity and operational context)
Applies to: merchants, service providers, and payment processors operating under PCI DSS 4.0.1. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Applies when: you have system components in the PCI environment (or connected-to environment, as applicable to your scoping) that you have identified as not at risk for malware. Typical real-world candidates include certain purpose-built appliances, highly restricted systems, or environments without user-browsing, email clients, or software installation pathways. The requirement triggers based on your classification, not on your technology brand. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Operational owners: CISO/Head of Security (control owner), IT Operations (asset owners), Security Operations (validation), GRC/Compliance (evidence and governance), and third-party management if any components are managed by a third party. Even if a third party runs the system, you still need your evaluation frequency defined and evidenced for your PCI program. (PCI DSS v4.0.1 Requirement 5.2.3.1)
What you actually need to do (step-by-step)
Step 1: Build the “not at risk” inventory
Create a list of all system components currently designated “not at risk for malware.” For each item, capture:
- Asset identifier (hostname, instance ID, serial, etc.)
- Business function and whether it touches CHD/SAD or the CDE boundary
- Owner (team) and operator (internal or third party)
- Rationale for “not at risk” (short statement you can defend)
- Date of last evaluation and next scheduled evaluation
Exam reality: if it’s not in a list, it doesn’t exist. Auditors will test completeness by sampling CMDB records, vulnerability scan targets, EDR exclusion lists, and network segments to find “missing” components.
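The inventory fields listed above can be sketched as a simple record with an overdue check. The `NotAtRiskComponent` class and its field names are illustrative assumptions, not a prescribed schema; adapt them to your CMDB.

```python
# Illustrative inventory record for a "not at risk for malware" component.
# Field names mirror the list above; values are hypothetical examples.
from dataclasses import dataclass
from datetime import date

@dataclass
class NotAtRiskComponent:
    asset_id: str            # hostname, instance ID, serial, etc.
    business_function: str
    touches_cde: bool        # touches CHD/SAD or the CDE boundary
    owner: str               # owning team
    operator: str            # "internal" or the third party's name
    rationale: str           # short, defensible "not at risk" statement
    last_evaluated: date
    next_evaluation: date

    def is_overdue(self, today: date) -> bool:
        """Overdue evaluations should flip status to 'pending reassessment'."""
        return today > self.next_evaluation

comp = NotAtRiskComponent(
    asset_id="appliance-01",
    business_function="network tap",
    touches_cde=True,
    owner="netops",
    operator="internal",
    rationale="No user interaction, immutable firmware, no install pathway",
    last_evaluated=date(2024, 1, 15),
    next_evaluation=date(2024, 7, 15),
)
print(comp.is_overdue(date(2024, 8, 1)))  # past the due date -> True
```

Keeping the record machine-readable makes the sampling tests described above (CMDB cross-checks, schedule queries) much easier to pass.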
Step 2: Define evaluation frequency via targeted risk analysis
Requirement 5.2.3.1 requires the frequency be defined in your targeted risk analysis, performed according to Requirement 12.3.1. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Your targeted risk analysis should produce:
- A clear decision on evaluation frequency
- The risk factors and assumptions behind the decision
- Approval by accountable leadership
- Review triggers that force an out-of-cycle evaluation
Practical risk factors to include (examples you can tailor):
- Exposure pathways: web browsing, email, file transfer, removable media, admin jump access
- Change rate: frequency of software/firmware changes, patching model, configuration drift
- Privilege model: interactive logons allowed, local admin exposure, service accounts
- Network placement: segmentation strength, inbound/outbound connectivity, egress controls
- Dependency changes: new integrations, new remote access tooling, new management agents
- Third-party operational control: who can install software or modify config
Output format: one page is fine if it is specific. The audit failure mode is a generic risk memo that never translates into an operational schedule.
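To show how the risk factors above might translate into an operational schedule, here is a minimal sketch that maps a count of exposure pathways to a re-evaluation interval. The factor names and interval values are hypothetical assumptions, not PCI-mandated numbers; your Requirement 12.3.1 analysis must justify whatever cadence you choose.

```python
# Hypothetical mapping from documented exposure pathways to an
# evaluation interval. Factor names and intervals are examples only.
HIGH_EXPOSURE_FACTORS = {
    "web_browsing", "email_client", "removable_media",
    "interactive_logon", "frequent_patching",
}

def evaluation_interval_days(factors: set) -> int:
    """Shorter re-evaluation interval as exposure pathways accumulate."""
    exposure = len(factors & HIGH_EXPOSURE_FACTORS)
    if exposure == 0:
        return 365  # e.g. locked-down appliance: annual review
    if exposure <= 2:
        return 180  # semi-annual
    return 90       # quarterly; or reconsider the "not at risk" claim

print(evaluation_interval_days({"interactive_logon", "frequent_patching"}))
```

The point of the sketch is traceability: each interval tier is tied to named risk factors, which is exactly the linkage a generic risk memo fails to provide.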
Step 3: Write the evaluation procedure (make it runnable)
Define what “periodic evaluation” means in your environment. Include:
- Who performs it (role, not a name)
- What systems are in scope (tie to the inventory)
- What checks are performed (minimum set)
- Pass/fail criteria and required remediation paths
- How results are recorded and where evidence is stored
Suggested minimum checks for each evaluation cycle:
- Confirm the asset still matches the “not at risk” rationale (no new user interaction pathways, no new software install capability, no new remote access exposure).
- Review recent changes: patching/firmware updates, new agents, configuration changes, exceptions.
- Validate security boundaries: segmentation and access paths that support the low-malware-risk assumption.
- Confirm ownership and support model (including third-party operational changes).
Step 4: Operate the cadence and capture evidence
Run the evaluations on the schedule defined in the targeted risk analysis. (PCI DSS v4.0.1 Requirement 5.2.3.1)
A clean operating approach:
- Maintain a recurring task queue (ticketing system) keyed off the inventory.
- Require completion artifacts before closure.
- Escalate overdue items as policy exceptions with documented approval and compensating actions.
If you use Daydream or another GRC system, map each “not at risk” component to a control/test object, attach the targeted risk analysis that sets the cadence, and collect each evaluation record as time-stamped evidence. The goal is simple retrieval during assessment: “show me the frequency decision, then show me you followed it.”
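One minimal way to drive that recurring queue is to compute, from the inventory's next-due dates, which evaluations should be ticketed now and which are overdue exceptions. This sketch assumes a plain list-of-dicts inventory and leaves the ticketing-system integration out.

```python
# Sketch of recurring-queue triage keyed off the inventory.
# The inventory shape and asset names are illustrative.
from datetime import date, timedelta

def triage(components, today, horizon_days=14):
    """Split components into (due_soon, overdue) by next_evaluation date."""
    due_soon, overdue = [], []
    for c in components:
        if c["next_evaluation"] < today:
            overdue.append(c["asset_id"])   # escalate as a policy exception
        elif c["next_evaluation"] <= today + timedelta(days=horizon_days):
            due_soon.append(c["asset_id"])  # open an evaluation ticket
    return due_soon, overdue

inventory = [
    {"asset_id": "appliance-01", "next_evaluation": date(2024, 8, 10)},
    {"asset_id": "kiosk-07", "next_evaluation": date(2024, 7, 1)},
]
print(triage(inventory, today=date(2024, 8, 1)))
```

Running a triage like this on a schedule (and requiring completion artifacts before ticket closure) is what turns the documented frequency into evidence you actually followed it.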
Step 5: Define out-of-cycle triggers (don’t wait for the calendar)
Even with a defined frequency, you need triggers that force reassessment when risk changes. Common triggers:
- New remote access method, admin tooling, or management plane
- New integration with a higher-risk network segment
- Change in third-party operator or support model
- Material change in allowed software, user interaction, or data flows
- Security incident or suspected malware event
Tie triggers to your change management process so security review automatically considers whether the “not at risk” status still holds.
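As a rough illustration of that change-management hook, the sketch below flags change records whose descriptions match the trigger categories above. The keyword list is a hypothetical first-pass filter, not a substitute for human security review.

```python
# Hypothetical intake filter: flag change records that may require an
# out-of-cycle "not at risk" reassessment. Keywords are examples only.
TRIGGER_KEYWORDS = (
    "remote access", "admin tooling", "management plane",
    "integration", "third-party", "software change", "incident",
)

def requires_reassessment(change_description: str) -> bool:
    """True if a change record should route to security for reassessment."""
    text = change_description.lower()
    return any(keyword in text for keyword in TRIGGER_KEYWORDS)

print(requires_reassessment("Enable new remote access agent on appliance-01"))
```

A coarse filter like this errs toward over-flagging; the security reviewer then decides whether the "not at risk" status actually needs renewal.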
Required evidence and artifacts to retain
Keep evidence that supports both (a) your defined frequency and (b) proof you executed evaluations.
Minimum artifact set to make audits painless:
- Targeted risk analysis document that defines evaluation frequency (and references Requirement 12.3.1 as the method used). (PCI DSS v4.0.1 Requirement 5.2.3.1)
- Inventory/list of “not at risk for malware” system components, with owners and evaluation schedule.
- Written procedure/SOP for periodic evaluations (what is checked, by whom, how recorded).
- Completed evaluation records for each cycle (tickets, checklists, sign-offs, exported system reports).
- Exception records for missed evaluations (approval, rationale, compensating actions, new due date).
- Change management linkages showing out-of-cycle reassessments when triggers occurred.
Common exam/audit questions and hangups
Expect assessors to probe these points:
- “Show me the targeted risk analysis that sets the frequency.” If you only have a policy statement, you will struggle. (PCI DSS v4.0.1 Requirement 5.2.3.1)
- “Which components are ‘not at risk,’ and how do you know the list is complete?” They will sample against CMDB, diagrams, and admin tool inventories.
- “Show me the last evaluation for this sampled system and the next scheduled date.” Missing timestamps are a common failure.
- “What would cause you to re-evaluate sooner than scheduled?” If you have no triggers, your program looks static.
- “How do you handle third-party-managed components?” Auditors will expect your governance to cover them, even if evidence comes from the third party.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating “not at risk” as permanent.
  Fix: Put an explicit expiration concept into the process. If an evaluation is overdue, the status becomes “pending reassessment” and exceptions are tracked.
- Mistake: One frequency for everything with no rationale.
  Fix: Segment by risk profile. For example, separate immutable appliances from systems with admin logon and frequent patching. Document why the cadence differs.
- Mistake: No operational mechanism to ensure the schedule happens.
  Fix: Drive the cadence through tickets with SLAs, escalation, and management reporting.
- Mistake: Evidence exists, but it’s scattered.
  Fix: Store artifacts in a single evidence repository keyed to the inventory entry (Daydream can function as the system of record for control evidence and audit-ready retrieval).
- Mistake: Ignoring changes introduced by third parties.
  Fix: Add contract language or operational runbooks requiring the third party to notify you of changes that affect malware exposure assumptions, and require them to supply evaluation artifacts on request.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Treat the risk as assessment-driven: if you claim “not at risk for malware” without a targeted risk analysis defining evaluation frequency, you create a likely audit finding because the requirement is explicit about how the frequency must be set. (PCI DSS v4.0.1 Requirement 5.2.3.1)
The operational risk is straightforward: environments change faster than documentation. Remote access pathways, management agents, and third-party support models routinely evolve. A stale “not at risk” classification becomes a blind spot that can undermine your broader malware controls and your PCI compliance posture.
Practical 30/60/90-day execution plan
First 30 days (stabilize scope and governance)
- Assign control ownership (Security) and evidence ownership (GRC).
- Build the initial inventory of “not at risk” components and confirm owners.
- Collect existing rationales and identify gaps where “not at risk” is asserted but undocumented.
- Draft the evaluation SOP (what checks, what outputs, where stored).
- Decide how you will track cadence (ticketing + evidence repository).
Days 31–60 (complete targeted risk analysis and set frequency)
- Perform the targeted risk analysis per Requirement 12.3.1 elements and document the frequency decision. (PCI DSS v4.0.1 Requirement 5.2.3.1)
- Define out-of-cycle triggers and embed them into change management intake.
- Pilot evaluations on a small sample, then tune the checklist so it produces consistent evidence.
Days 61–90 (operationalize and prove repeatability)
- Roll the process across all “not at risk” components.
- Run the first full evaluation cycle and close out findings or reclassify systems that no longer qualify.
- Create audit-ready reporting: inventory with last/next evaluation dates, exceptions, and links to artifacts.
- If using Daydream, map each asset category to the requirement, attach the targeted risk analysis, and standardize evidence collection so audits become export work, not a scramble.
Frequently Asked Questions
Do we have to evaluate every system for malware risk on a fixed schedule?
Requirement 5.2.3.1 focuses on periodic evaluations for components you identified as “not at risk for malware,” and it requires the frequency be defined via targeted risk analysis. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Can we set one evaluation frequency for all “not at risk” components?
You can, but you need a targeted risk analysis that supports that decision and shows the assumptions hold across the population. If different components have different exposure pathways or change rates, a single cadence is harder to defend. (PCI DSS v4.0.1 Requirement 5.2.3.1)
What does “periodic evaluation” need to include?
PCI DSS 4.0.1 Requirement 5.2.3.1 requires that you define the frequency via targeted risk analysis; it does not prescribe specific evaluation steps. Define checks that confirm the original “not at risk” rationale still holds and record results consistently. (PCI DSS v4.0.1 Requirement 5.2.3.1)
What evidence will an assessor ask for?
Expect requests for the targeted risk analysis that sets the frequency, your inventory of “not at risk” components, and records showing evaluations were completed on schedule. Keep artifacts tied to specific assets and dates. (PCI DSS v4.0.1 Requirement 5.2.3.1)
How do we handle “not at risk” systems managed by a third party?
Keep the systems in your inventory, define the evaluation frequency in your targeted risk analysis, and require the third party to provide the evaluation artifacts or inputs you need to complete the evaluation. Governance stays with you for PCI purposes. (PCI DSS v4.0.1 Requirement 5.2.3.1)
What if we miss an evaluation date?
Treat it as an exception: document why it was missed, record approval, complete the evaluation as soon as possible, and consider whether the “not at risk” status should be temporarily suspended pending reassessment. The key is a controlled, evidenced process. (PCI DSS v4.0.1 Requirement 5.2.3.1)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream