Security Patch Installation
PCI DSS 4.0.1 Requirement 6.3.3 requires you to install applicable security patches so systems stay protected from known vulnerabilities: critical/high patches must be installed within one month of release, and all other patches within an “appropriate” timeframe you define and can defend. Operationalize this with a complete asset inventory, risk-based patch SLAs, disciplined change control, and audit-ready evidence. (PCI DSS v4.0.1 Requirement 6.3.3)
Key takeaways:
- Patch critical/high vulnerabilities within one month of release, based on your risk-ranking method. (PCI DSS v4.0.1 Requirement 6.3.3)
- Define and document the “appropriate timeframe” for non-critical patches; auditors will test that you follow it. (PCI DSS v4.0.1 Requirement 6.3.3)
- Evidence matters as much as execution: you need logs, reports, exceptions, approvals, and coverage across all system components. (PCI DSS v4.0.1 Requirement 6.3.3)
“Security patch installation requirement” in PCI DSS has a simple core: you must keep system components protected from known vulnerabilities by installing applicable patches on time, with special urgency for critical/high patches. PCI DSS 4.0.1 Requirement 6.3.3 sets a fixed deadline for critical/high patches (within one month of release) and lets you set the deadline for everything else, as long as it’s “appropriate” and consistently executed. (PCI DSS v4.0.1 Requirement 6.3.3)
For a Compliance Officer, CCO, or GRC lead, the fastest path to operationalizing this is to treat patching as a governed, measured service: define scope (what systems), define prioritization (what counts as critical/high), define timelines (SLAs), define exception handling, and produce evidence that ties back to each of those decisions. Auditors tend to fail teams not because they never patch, but because they cannot prove completeness (missed assets), timeliness (missed deadlines), or governance (ad-hoc exceptions).
This page gives requirement-level implementation guidance for PCI DSS 4.0.1 Requirement 6.3.3 with steps you can hand to IT Operations and Security, plus the artifacts you should expect to collect and the audit questions you will get.
Regulatory text
PCI DSS 4.0.1 Requirement 6.3.3 (excerpt): “All system components are protected from known vulnerabilities by installing applicable security patches/updates as follows: critical or high-security patches/updates (identified according to the risk ranking process at Requirement 6.3.1) are installed within one month of release, and all other applicable security patches/updates are installed within an appropriate time frame as determined by the entity.” (PCI DSS v4.0.1 Requirement 6.3.3)
What the operator must do (plain reading):
- You must patch all system components in scope for PCI DSS.
- You must identify which patches are critical/high using your risk-ranking process referenced by the requirement.
- You must install those critical/high patches within one month of release.
- You must also patch everything else within a timeframe you define, document, and follow. (PCI DSS v4.0.1 Requirement 6.3.3)
Plain-English interpretation (what auditors expect you to mean)
This requirement is a governance test plus an execution test.
- Governance: You have a policy-backed patch program with defined timelines (including a defined timeline for “other” patches), clear ownership, and an exception process.
- Execution: Patch deployment data shows the organization actually hit the timelines, across the full population of in-scope assets, and remediated or formally accepted any misses. (PCI DSS v4.0.1 Requirement 6.3.3)
A practical way to explain “appropriate timeframe” to an auditor: it’s a documented service-level target based on your environment’s risk and operational constraints, consistently achieved and reviewed.
Who it applies to (entity and operational context)
Entity types: Merchants, service providers, and payment processors that store, process, or transmit cardholder data, or that impact the security of the cardholder data environment (CDE). (PCI DSS v4.0.1 Requirement 6.3.3)
Operational scope: “All system components” includes, in practice, anything in the CDE and connected or supporting systems in scope for your assessment, such as:
- Servers (physical/virtual), endpoints, and jump hosts used for CDE administration
- Network devices (firewalls, routers, switches), security tools, and appliances
- OS, firmware, and application components where patches are provided as “security patches/updates”
- Cloud workloads and managed services, where patch responsibility must be clear in your shared responsibility model
Third parties: If a third party manages systems in scope (managed hosting, managed firewalls, SaaS components that affect payment flows), you still need evidence that patching meets your timelines. Contract language should require timely patching and reporting aligned to Requirement 6.3.3. (PCI DSS v4.0.1 Requirement 6.3.3)
What you actually need to do (step-by-step)
1) Define patching scope using an authoritative system inventory
Your patch SLA only matters if you can prove coverage.
- Produce (or validate) a list of in-scope system components.
- Tag assets by environment (prod/non-prod), ownership, criticality, and patch mechanism (WSUS, MDM, Linux repo, appliance portal, cloud-managed).
- Confirm you can measure patch state for each class of asset with a system-of-record (endpoint manager, vuln scanner, cloud agent, etc.). (PCI DSS v4.0.1 Requirement 6.3.3)
Operator tip: Auditors commonly sample “forgotten” assets: golden images, appliance firmware, bastion hosts, and emergency break-glass accounts/systems.
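To make that coverage measurable, many teams keep the inventory tags above in a structured record that can be reconciled against scans and cloud accounts. A minimal sketch, with illustrative field names rather than a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Asset:
    """One in-scope system component; field names are illustrative, not a required schema."""
    asset_id: str
    hostname: str
    environment: str             # e.g. "prod" or "non-prod"
    owner: str                   # accountable team or individual
    criticality: str             # business criticality ranking
    patch_mechanism: str         # e.g. "WSUS", "MDM", "Linux repo", "appliance portal", "cloud-managed"
    patch_source_of_record: str  # tool that reports patch state for this asset class

# Example: a CDE jump host whose patch state is reported by the endpoint manager
jump_host = Asset(
    asset_id="A-0042",
    hostname="cde-jump-01",
    environment="prod",
    owner="infrastructure-ops",
    criticality="high",
    patch_mechanism="Linux repo",
    patch_source_of_record="endpoint-manager",
)
```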
2) Define severity mapping to “critical/high” based on your risk ranking process
Requirement 6.3.3 keys off your risk ranking method: how you label vulnerabilities/patches as critical/high. (PCI DSS v4.0.1 Requirement 6.3.3)
Minimum operator output:
- A documented severity scheme that determines which patches are critical/high and therefore must be installed within one month of release. (PCI DSS v4.0.1 Requirement 6.3.3)
- A rule for “release date” sources (vendor bulletin date, cloud provider advisory date, etc.) so the clock starts consistently.
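A minimal sketch of how the severity mapping and the release-date clock can be made explicit in tooling. The CVSS threshold, the 30-day approximation of “one month,” and the 90-day window for “other” patches are placeholder assumptions, not PCI DSS values; your documented risk ranking process and policy govern the real mapping and numbers.

```python
from datetime import date, timedelta

# Illustrative mapping only: your documented risk ranking process (Requirement 6.3.1)
# determines what actually counts as critical/high.
def patch_class(cvss_base_score: float, vendor_severity: str) -> str:
    if vendor_severity.lower() in {"critical", "high"} or cvss_base_score >= 7.0:
        return "critical/high"
    return "other"

def due_date(release_date: date, patch_cls: str, other_sla_days: int = 90) -> date:
    """Start the clock from the documented vendor release date.

    Critical/high: one month from release, approximated here as 30 days
    (some programs use a calendar month instead).
    Other: the entity-defined "appropriate timeframe"; 90 days is only a placeholder.
    """
    if patch_cls == "critical/high":
        return release_date + timedelta(days=30)
    return release_date + timedelta(days=other_sla_days)

# Example: a vendor bulletin published 2024-06-04, scored CVSS 8.1
cls = patch_class(8.1, "Important")
print(cls, due_date(date(2024, 6, 4), cls))  # critical/high 2024-07-04
```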
3) Set patch timelines (SLAs) for each patch class and platform
You must meet:
- Critical/high patches: installed within one month of release. (PCI DSS v4.0.1 Requirement 6.3.3)
- All other patches: installed within your defined “appropriate timeframe.” (PCI DSS v4.0.1 Requirement 6.3.3)
Make “appropriate timeframe” defendable by defining:
- Different timelines by asset class (servers vs endpoints vs network devices) if needed
- Normal patch windows and emergency patch processes
- Treatment for compensating controls when immediate patching is not feasible (with formal exception approval)
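If you differentiate timelines by asset class, a simple lookup table keeps the SLAs unambiguous for operators and for reporting. The day counts below are placeholder examples of entity-defined values, except the fixed one-month (shown here as 30 days) deadline for critical/high patches:

```python
# Placeholder SLA table: "critical/high" reflects the fixed one-month deadline;
# the "other" values are examples of an entity-defined timeframe, not PCI DSS numbers.
PATCH_SLA_DAYS = {
    ("server",         "critical/high"): 30,
    ("server",         "other"):         90,
    ("endpoint",       "critical/high"): 30,
    ("endpoint",       "other"):         60,
    ("network-device", "critical/high"): 30,
    ("network-device", "other"):         120,
}

def sla_days(asset_class: str, patch_cls: str) -> int:
    """Look up the applicable SLA for a given asset class and patch classification."""
    return PATCH_SLA_DAYS[(asset_class, patch_cls)]
```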
4) Implement a repeatable patch workflow (intake → test → deploy → verify)
A workable operating cadence looks like this:
- Patch intake: Subscribe to vendor security advisories and consolidate into a ticketing queue mapped to assets.
- Triage: Mark patches critical/high vs other based on your risk ranking.
- Testing: Validate in a staging environment or controlled pilot ring.
- Change control: Record approvals, rollout plan, and rollback steps (especially for production and payment systems).
- Deployment: Automated where possible; controlled manual for appliances where required.
- Verification: Prove installation (agent state, package version, KB installed, firmware version) and close tickets only with evidence.
- Reporting: Track SLA attainment and exceptions. (PCI DSS v4.0.1 Requirement 6.3.3)
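To make each step of that workflow auditable, every patch ticket should carry the dates and evidence the assessor will ask for. A minimal sketch of such a record, with illustrative field names rather than any specific ticketing product's schema:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class PatchTicket:
    """Minimal patch ticket record; field names are illustrative assumptions."""
    ticket_id: str
    patch_id: str                 # vendor KB / advisory identifier
    patch_cls: str                # "critical/high" or "other"
    release_date: date            # from the documented authoritative source
    due_date: date                # release_date plus the applicable SLA
    assets: list = field(default_factory=list)
    change_record: Optional[str] = None           # link to approval / rollback plan
    deployed_date: Optional[date] = None
    verification_evidence: Optional[str] = None   # e.g. agent report, package or firmware version

    def is_closed_with_evidence(self) -> bool:
        """Tickets should only close when deployment is verified with evidence."""
        return self.deployed_date is not None and self.verification_evidence is not None

    def met_sla(self) -> Optional[bool]:
        """True/False once deployed; None while the ticket is still open."""
        if self.deployed_date is None:
            return None
        return self.deployed_date <= self.due_date
```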
5) Build an exception process that does not break the requirement
Exceptions will happen; unmanaged exceptions break compliance.
Your exception record should include:
- Asset(s) covered, patch ID(s), and severity
- Business justification and risk statement
- Compensating controls (segmentation, virtual patching rules, WAF signatures, restricted access)
- Expiration date and reassessment trigger (e.g., vendor fix availability, maintenance window)
- Approver (risk owner) and security sign-off
Auditors will ask whether exceptions are rare, time-bound, and reviewed.
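A sketch of what one exception register entry could look like, plus a check that surfaces expired exceptions before an assessor does. Field names are illustrative assumptions, not a prescribed register format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PatchException:
    """One entry in the exception register; field names are illustrative."""
    exception_id: str
    assets: list
    patch_ids: list
    severity: str
    justification: str
    compensating_controls: list   # e.g. segmentation, virtual patching rules, restricted access
    approver: str                 # risk owner
    security_signoff: str
    expires_on: date
    reassessment_trigger: str     # e.g. "vendor fix available", "next maintenance window"

    def is_expired(self, today: date) -> bool:
        return today >= self.expires_on

def overdue_exceptions(register: list, today: date) -> list:
    """Return exceptions past their expiry date so they get reviewed, renewed, or closed."""
    return [e for e in register if e.is_expired(today)]
```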
6) Validate third-party patching responsibilities
Where a third party patches systems that affect your CDE:
- Ensure contracts/SOWs require patching within your timelines for critical/high patches and defined timelines for other patches. (PCI DSS v4.0.1 Requirement 6.3.3)
- Require periodic patch compliance reporting and the right to obtain evidence during assessments.
- Confirm responsibility boundaries for cloud services (what you patch vs what the provider patches).
Where Daydream fits: Daydream can help you run third-party due diligence and ongoing monitoring for patching obligations by standardizing evidence requests, tracking SLAs in workflows, and keeping an audit-ready record tied to each third party and system scope.
Required evidence and artifacts to retain
Maintain artifacts that prove coverage, timeliness, and governance:
Core governance
- Patch management policy and standard (includes critical/high within one month; defines “appropriate timeframe” for others). (PCI DSS v4.0.1 Requirement 6.3.3)
- Documented risk ranking process that identifies critical/high items. (PCI DSS v4.0.1 Requirement 6.3.3)
- Roles and responsibilities (RACI) for OS, application, network device, and cloud patching
Operational records
- In-scope asset inventory (including network devices and security appliances)
- Patch/vulnerability scan reports or endpoint management compliance reports showing patch state
- Change records for representative patch cycles (approvals, test results, rollout evidence, rollback plan)
- Patch deployment logs (MDM/endpoint manager output, package manager logs, configuration management run logs)
- Exception register with approvals and compensating controls
- Third-party attestation or reporting for in-scope managed components
Metrics (for management review)
- SLA compliance reporting for critical/high (within one month) and for “other” patches per your defined timeframe. (PCI DSS v4.0.1 Requirement 6.3.3)
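A sketch of how SLA attainment could be computed from closed patch tickets, assuming records shaped like the PatchTicket sketch above; adapt the grouping and fields to whatever your tooling actually exports:

```python
def sla_attainment(tickets: list) -> dict:
    """Percent of closed patch tickets deployed on or before their due date,
    split by classification. Returns None for a class with no closed tickets."""
    results = {}
    for cls in ("critical/high", "other"):
        closed = [t for t in tickets if t.patch_cls == cls and t.deployed_date is not None]
        if not closed:
            results[cls] = None
            continue
        on_time = sum(1 for t in closed if t.deployed_date <= t.due_date)
        results[cls] = round(100 * on_time / len(closed), 1)
    return results
```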
Common exam/audit questions and hangups
Expect these questions in a PCI DSS assessment:
- “Show me your definition of critical/high and how it maps to your risk ranking process.” (PCI DSS v4.0.1 Requirement 6.3.3)
- “How do you know you patched all in-scope system components, including appliances and cloud workloads?”
- “Pick a critical/high vendor patch from the last period. Prove it was installed within one month of release.” (PCI DSS v4.0.1 Requirement 6.3.3)
- “What is your ‘appropriate timeframe’ for other patches, and can you show consistent adherence to it?” (PCI DSS v4.0.1 Requirement 6.3.3)
- “How do you handle patch failures, reboots, and deferred maintenance windows?”
- “Show exceptions, who approved them, and how you reduced risk during the exception period.”
Audit hangup pattern: teams show vulnerability scan results but cannot link them cleanly to patch release dates and installation dates, which is the core timing proof Requirement 6.3.3 demands. (PCI DSS v4.0.1 Requirement 6.3.3)
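One way to close that gap is to join scanner findings back to patch tickets so every finding resolves to a release date and a verified installation date. A sketch, assuming the illustrative PatchTicket records above and scanner findings keyed by advisory/patch ID:

```python
def timing_evidence(finding_ids: set, tickets: list) -> dict:
    """For each scanner finding ID, report the release date and deployment date
    from the matching ticket, or flag the missing linkage for follow-up."""
    by_patch = {t.patch_id: t for t in tickets}
    report = {}
    for fid in finding_ids:
        ticket = by_patch.get(fid)
        if ticket is None or ticket.deployed_date is None:
            report[fid] = "no timing evidence - investigate"
        else:
            report[fid] = (ticket.release_date, ticket.deployed_date)
    return report
```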
Frequent implementation mistakes (and how to avoid them)
- No defensible definition of “appropriate timeframe.” Fix: Put specific timelines in policy/standard, align by asset class if needed, then measure and report adherence. (PCI DSS v4.0.1 Requirement 6.3.3)
- Asset blind spots (especially network/security appliances). Fix: Reconcile inventory against network scans, CMDB, cloud accounts, and firewall rule bases that reference “unknown” IPs.
- Treating “release date” inconsistently. Fix: Define the system-of-record for vendor release dates (security bulletin publication date, cloud advisory date) and store it in the patch ticket.
- Exceptions handled in email or chat. Fix: Centralize exceptions in a register with explicit approvals and expiry, and tie each exception to affected assets and patch IDs.
- Third-party patching assumed, not evidenced. Fix: Require periodic reports and keep them; contractually require cooperation during PCI assessments. Use Daydream to standardize requests and track compliance across third parties.
Enforcement context and risk implications
No public enforcement cases are cited here for this requirement, so treat enforcement risk as indirect: failure here typically shows up as a control breakdown that increases the likelihood of compromise through known vulnerabilities. In a PCI DSS assessment, missed critical/high patch timelines, or the inability to prove them, can lead to findings and remediation obligations. (PCI DSS v4.0.1 Requirement 6.3.3)
Practical 30/60/90-day execution plan
First 30 days: make the requirement measurable
- Confirm in-scope asset inventory for all system components tied to the CDE.
- Document severity mapping and what counts as critical/high per your risk ranking process. (PCI DSS v4.0.1 Requirement 6.3.3)
- Write/refresh patch standard: critical/high within one month; define “appropriate timeframe” for others. (PCI DSS v4.0.1 Requirement 6.3.3)
- Stand up an exception register and approval workflow.
Next 60 days: operationalize and start producing evidence
- Implement intake/triage workflow and ensure each patch has a tracked release date and deployment date.
- Align change control templates to include patch evidence (test, approval, rollout, verification).
- Build compliance reporting (by asset class and by critical/high vs other) and start monthly reviews. (PCI DSS v4.0.1 Requirement 6.3.3)
- For third parties: update contract language and begin collecting patch compliance reports.
By 90 days: close gaps and harden the program
- Run an internal audit-style sample: pick recent critical/high patches and prove installation within one month across multiple platforms. (PCI DSS v4.0.1 Requirement 6.3.3)
- Remediate inventory gaps and bring straggler platforms into tooling coverage.
- Review exceptions for expiry, compensating controls, and repeat offenders.
- Prepare an assessor-ready evidence package (policy, reports, sample tickets, exceptions, third-party artifacts).
Frequently Asked Questions
Does PCI DSS require patching within one month for every patch?
No. The one-month deadline applies to patches/updates you classify as critical or high according to your risk ranking process. All other applicable patches must be installed within a timeframe you define as appropriate and then follow. (PCI DSS v4.0.1 Requirement 6.3.3)
What counts as “release” for the one-month clock?
PCI DSS 6.3.3 measures from “release,” but you must define what source you treat as authoritative (vendor security bulletin date, cloud advisory date) and apply it consistently in your workflow and evidence. (PCI DSS v4.0.1 Requirement 6.3.3)
Can we meet the requirement with vulnerability scanning alone?
Scanning helps prove patch state, but it does not automatically prove installation timing from release date. Keep patch tickets or records that link release date, classification (critical/high vs other), and verified installation date. (PCI DSS v4.0.1 Requirement 6.3.3)
How should we handle systems that cannot be patched quickly due to uptime constraints?
Use a formal exception with a risk owner approval, compensating controls, and a time-bound plan to patch at the next feasible window. Auditors will look for documented rationale and evidence that the exception is managed. (PCI DSS v4.0.1 Requirement 6.3.3)
Are firmware and appliance updates in scope?
If the device is a system component in scope for PCI DSS, applicable security patches/updates include the relevant updates for that component, which often includes firmware. Treat these as first-class citizens in your inventory and patch reporting. (PCI DSS v4.0.1 Requirement 6.3.3)
How do we manage patching when a third party hosts or operates part of the environment?
Define patch responsibilities contractually, require reporting that demonstrates timelines, and retain evidence for assessment. Daydream can help track third-party obligations and store patch compliance artifacts in an audit-ready workflow. (PCI DSS v4.0.1 Requirement 6.3.3)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream