Security Policies and Procedures for Security Testing
PCI DSS 4.0.1 Requirement 11.1.1 requires you to document, maintain, and operationalize the security policies and procedures that govern security testing, and to make sure the right people know and follow them. To implement it fast, publish an approved “Requirement 11 Security Testing Policy + Procedures” set, map each Requirement 11 activity to an owner and cadence, and retain evidence that teams execute the procedures. 1
Key takeaways:
- Your assessor will look for written, approved procedures for every Requirement 11 testing activity, not just a high-level policy. 1
- “In use and known” means you need operational proof: tickets, schedules, results, exceptions, and acknowledgments or training records. 1
- Keep documents current with a defined review/approval workflow and version history tied to your security testing program. 1
Requirement 11 in PCI DSS is where security testing lives: vulnerability scanning, penetration testing, detection of unauthorized wireless access points, change-detection mechanisms, and other testing activities that validate your controls. Requirement 11.1.1 is the “paper and practice” requirement that makes the rest of Requirement 11 auditable. It does not ask you to invent new testing activities; it asks you to document the policies and operational procedures you rely on for those activities, keep them current, run them consistently, and ensure the relevant teams actually know what to do. 1
For a CCO or GRC lead, the fastest path is to treat this as a governance control with hard operational hooks: a controlled document set, clear ownership, embedded workflows, and repeatable evidence. If your security testing is partly performed by third parties (ASVs, pen test firms, MSSPs, cloud providers), Requirement 11.1.1 still lands on you: you must be able to show your internal procedures for engaging them, reviewing results, tracking remediation, and handling exceptions. 1
Regulatory text
PCI DSS 4.0.1 Requirement 11.1.1: “All security policies and operational procedures that are identified in Requirement 11 are documented, kept up to date, in use, and known to all affected parties.” 1
What the operator must do (plain-English interpretation)
You need a controlled set of documents that covers every security testing activity required by PCI DSS Requirement 11, and you must prove four things:
- the documents exist (documented),
- they’re current and approved (kept up to date),
- teams follow them in real work (in use), and
- everyone involved understands their responsibilities (known to all affected parties). 1
This is where many programs fail assessments: the testing may be happening, but the organization cannot show consistent procedures, approvals, or repeatable evidence across teams and environments.
Who it applies to
Entities
- Merchants that store, process, or transmit payment card account data.
- Service providers whose people, processes, or systems can affect the security of the cardholder data environment (CDE).
- Payment processors and similar payment ecosystem entities. 1
Operational context (where it shows up)
- You have a defined CDE and supporting/connected systems in scope for PCI DSS.
- Multiple teams touch security testing (security, infrastructure, cloud, appsec, network, IT operations).
- Some testing is performed by a third party (e.g., penetration testing firm), but you still need internal procedures for selection, oversight, and remediation tracking. 1
What you actually need to do (step-by-step)
Step 1: Inventory “Requirement 11 procedures” you must have
Create a single list of all security policies and operational procedures you rely on to meet Requirement 11. Keep it simple: a table with document name, owner, scope, and where it’s stored.
Minimum expectation: your inventory covers each Requirement 11 activity your environment uses. Requirement 11.1.1 is explicit that the policies and procedures “identified in Requirement 11” must be documented and maintained. 1
Practical output: “PCI Requirement 11 Security Testing Document Register.”
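As a rough sketch of what the register enables, you can model it as structured data so completeness is checkable rather than eyeballed. Everything below is illustrative: the activity names, owners, and wiki paths are hypothetical examples, not taken from the PCI DSS text.

```python
# Hypothetical "Requirement 11 document register" completeness check.
# Activity labels, documents, owners, and locations are illustrative only.

REQUIRED_ACTIVITIES = {
    "wireless-access-point-detection",
    "vulnerability-scanning",
    "penetration-testing",
    "change-detection",
}

register = [
    {"document": "Wireless AP Detection SOP", "owner": "SecOps",
     "activity": "wireless-access-point-detection", "location": "wiki/sec/wireless-sop"},
    {"document": "Quarterly Vulnerability Scan Runbook", "owner": "Infra",
     "activity": "vulnerability-scanning", "location": "wiki/sec/vuln-scan"},
    {"document": "Annual Pen Test Procedure", "owner": "GRC",
     "activity": "penetration-testing", "location": "wiki/sec/pentest"},
]

def coverage_gaps(register, required=REQUIRED_ACTIVITIES):
    """Return Requirement 11 activities with no documented procedure."""
    covered = {row["activity"] for row in register}
    return sorted(required - covered)

print(coverage_gaps(register))  # → ['change-detection']
```

Running the check surfaces the gap an assessor would otherwise find for you: change-detection has no documented procedure yet.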
Step 2: Write (or standardize) the documents so they are assessable
Assessors commonly reject documents that are aspirational or too generic. Your procedures should include:
- Purpose and scope (CDE, segmented environments, cloud accounts/subscriptions, on-prem).
- Roles and responsibilities (RACI works well).
- Testing cadence and triggers (for example: “after significant change,” “on a defined schedule,” “on onboarding of new systems”). Keep these aligned to your actual Requirement 11 implementation.
- Tooling and method (what scanners, what pen test approach, what change-detection mechanism).
- Evidence produced (scan reports, pen test reports, tickets, attestations).
- Exception handling (risk acceptance workflow; compensating controls where applicable).
- Escalation path (who signs off, who gets notified, what constitutes overdue). 1
Tip: Separate policy (what you require) from procedures/runbooks (how to do it). Both can satisfy 11.1.1 as long as all required Requirement 11 procedures are documented and used, but assessors tend to test at the runbook level. 1
Step 3: Put documents under formal control (versioning + approval)
Implement a lightweight document control workflow:
- Named document owner (Security Testing Program Owner or GRC Control Owner).
- Review/approval steps (security leadership; include IT/app owners when needed).
- Review cadence tied to change triggers (tool changes, org changes, new CDE segments, new third-party providers).
- Version history and change log. 1
Your goal is to eliminate “stale procedure” findings where the written process references old tools, old networks, or an org chart that no longer exists.
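One way to catch stale documents before an assessor does is to track each document's last approved review against its cadence. This is a minimal sketch under assumed data: the document names, dates, and 365-day cadence are hypothetical.

```python
# Hypothetical staleness check: flag procedures whose last approved review
# is older than the defined review cadence.
from datetime import date, timedelta

documents = [
    {"name": "Vulnerability Scan Runbook", "last_review": date(2025, 1, 10), "cadence_days": 365},
    {"name": "Pen Test Procedure",         "last_review": date(2023, 6, 1),  "cadence_days": 365},
]

def overdue_reviews(documents, today):
    """Return names of documents whose review window has lapsed."""
    return [d["name"] for d in documents
            if today - d["last_review"] > timedelta(days=d["cadence_days"])]

print(overdue_reviews(documents, today=date(2025, 3, 1)))  # → ['Pen Test Procedure']
```

A report like this, run on a schedule, doubles as evidence that the review cadence itself is operating.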
Step 4: Operationalize “in use” with workflow hooks
This is where most teams should spend time. Convert each procedure into something that leaves a trail:
- Scheduling artifact: a testing calendar, recurring tasks, or control run schedule.
- Execution artifact: tickets/work items for each run (scan execution, pen test coordination, wireless checks, file integrity/change-detection reviews).
- Results artifact: reports, dashboards, or exported results.
- Remediation artifact: findings triage, assignment, retest/closure evidence.
- Exception artifact: documented risk acceptance with expiry and approver. 1
If you use Daydream for third-party risk and control oversight, treat security testing as a control family with linked evidence requests: you can standardize “what evidence counts,” assign owners, and keep version history and artifacts in one place for audit readiness. That directly supports “documented,” “kept up to date,” and “in use” expectations. 1
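The artifact list above can be made checkable with an evidence map: each procedure declares the artifact types it must produce, and a testing run only demonstrates “in use” when all of them are present. The procedure keys, artifact types, and ticket references below are assumptions for illustration.

```python
# Hypothetical evidence map: required artifact types per procedure,
# checked against the artifacts actually attached to one testing run.

EVIDENCE_MAP = {
    "vulnerability-scanning": {"schedule", "execution-ticket", "report", "remediation"},
}

def missing_evidence(procedure, artifacts, evidence_map=EVIDENCE_MAP):
    """Return artifact types still missing for one testing run."""
    required = evidence_map.get(procedure, set())
    return sorted(required - {a["type"] for a in artifacts})

run_artifacts = [
    {"type": "schedule", "ref": "CAL-2025-Q1"},
    {"type": "execution-ticket", "ref": "SEC-1042"},
    {"type": "report", "ref": "scan-2025-02.pdf"},
]
print(missing_evidence("vulnerability-scanning", run_artifacts))  # → ['remediation']
```

The same map works as the specification for “what evidence counts” whether you implement it in a GRC tool or a spreadsheet.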
Step 5: Make it “known to all affected parties”
Choose an approach your organization can sustain:
- Role-based training for security testing operators (AppSec, Infra, SecOps).
- Targeted policy acknowledgment for teams that approve exceptions or own remediation SLAs.
- Onboarding checklist items for new engineers or new third-party engagements. 1
Keep “knowledge” measurable. A Slack post is not a control. Acknowledgments, training completion records, and documented SOP sign-offs are auditable.
Step 6: Add management oversight and periodic validation
Run a recurring control check that answers:
- Are procedures current with the environment and tooling?
- Did required testing runs occur?
- Were findings tracked to closure or exception?
- Are owners still correct?
- Can you produce evidence quickly for the assessor? 1
Required evidence and artifacts to retain
Use this as your evidence checklist for Requirement 11.1.1:
| Evidence type | What it proves | Examples to retain |
|---|---|---|
| Approved policies + procedures | Documented, approved, current | Policy PDFs/wiki pages with approvals, revision history, owner, effective date 1 |
| Document register | Completeness across Requirement 11 | Requirement 11 procedure inventory mapped to owners/systems 1 |
| Operating records | “In use” | Tickets, job runs, scan execution logs, meeting notes for results review 1 |
| Results + remediation | Testing drives action | Scan/pen test reports, triage records, fix validation, closure notes 1 |
| Awareness records | “Known to affected parties” | Training completion, acknowledgments, SOP sign-offs 1 |
| Exceptions | Controlled deviations | Approved exception forms, compensating control descriptions, expiry dates, review outcomes 1 |
Common exam/audit questions and hangups
Expect questions like:
- “Show me all procedures that support Requirement 11.” If you cannot produce a register, you will waste the assessor’s time and raise sampling risk. 1
- “How do you know people follow this procedure?” Be ready with tickets and run evidence for the sampled period. 1
- “How do you keep procedures current?” Show approvals, review logs, and change triggers. 1
- “Who is ‘affected’ and how are they informed?” Have a role map and proof of training/acknowledgment. 1
- “What happens when remediation is delayed?” Auditors want to see defined exception handling and escalation. 1
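For the delayed-remediation question, the exception record itself should be queryable: every exception carries an approver and an expiry date, and anything past expiry is escalated rather than silently extended. The exception IDs and dates below are hypothetical.

```python
# Hypothetical exception-expiry check: expired risk acceptances should be
# escalated, not quietly left open.
from datetime import date

exceptions = [
    {"id": "EX-7", "approver": "CISO", "expires": date(2025, 2, 1)},
    {"id": "EX-9", "approver": "CISO", "expires": date(2025, 9, 30)},
]

def expired_exceptions(exceptions, today):
    """Return IDs of exceptions whose expiry date has passed."""
    return [e["id"] for e in exceptions if e["expires"] < today]

print(expired_exceptions(exceptions, today=date(2025, 3, 1)))  # → ['EX-7']
```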
Frequent implementation mistakes (and how to avoid them)
- Generic security policy with no procedures. Fix: add runbooks per testing activity, with inputs/outputs and owners. 1
- Procedures exist but are not followed consistently across teams. Fix: drive execution through a single system of record (ticketing/GRC), and sample monthly for drift. 1
- Third-party testing with no internal oversight procedure. Fix: document how you select providers, define scope, receive results, track remediation, and validate closure. 1
- Stale documents after reorganizations or cloud migrations. Fix: tie reviews to change management triggers and update ownership fields as part of offboarding/onboarding. 1
- “Known to affected parties” treated as informal communication. Fix: require acknowledgments or role-based training, and keep records. 1
Enforcement context and risk implications
Public enforcement sources are not provided for this specific requirement in the supplied materials, but the operational risk is straightforward. An undocumented or inconsistently executed security testing program increases the chance of missed control failures, and it creates an assessment failure mode in which you cannot demonstrate a defined operating standard during scoping, testing, or remediation follow-up. 1
Practical execution plan (30/60/90-day)
First 30 days (stabilize)
- Build the Requirement 11 document register and name owners for each artifact. 1
- Collect current policies/runbooks; identify gaps where procedures are missing or outdated. 1
- Stand up document control: repository, approval workflow, version history. 1
Next 60 days (operationalize)
- Convert each procedure into a repeatable workflow that generates evidence (tickets, schedules, reports, remediation tracking). 1
- Formalize third-party engagement procedures for security testing providers and internal review/acceptance steps. 1
- Launch role-based “Requirement 11 procedures” acknowledgment/training for affected parties. 1
By 90 days (prove it works)
- Run an internal mini-assessment: sample recent testing events and trace from procedure → execution → results → remediation → closure/exception. 1
- Fix evidence gaps (missing tickets, missing approvals, unclear ownership). 1
- Lock the steady-state rhythm: periodic reviews, reporting, and audit-ready evidence packaging (Daydream or your GRC system). 1
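The mini-assessment trace in the 90-day step can be sketched as a chain check: for each sampled testing event, walk procedure → execution → results → remediation → closure and report the first broken link. The field names and references are illustrative assumptions.

```python
# Hypothetical mini-assessment trace: find the first missing step in the
# evidence chain for one sampled testing event.

CHAIN = ["procedure", "execution", "results", "remediation", "closure"]

def first_gap(event):
    """Return the first missing step in the evidence chain, or None if complete."""
    for step in CHAIN:
        if not event.get(step):
            return step
    return None

sampled_event = {
    "procedure": "wiki/sec/vuln-scan",   # the documented runbook
    "execution": "SEC-1042",             # ticket for the scan run
    "results": "scan-2025-02.pdf",       # retained report
    "remediation": None,                 # finding assigned, no fix validation retained
}
print(first_gap(sampled_event))  # → remediation
```

Sampling a handful of events this way before the assessor arrives tells you exactly which evidence gaps to fix first.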
Frequently Asked Questions
Does Requirement 11.1.1 require new security tests, or just documentation?
It requires that the security policies and operational procedures for the Requirement 11 security testing activities are documented, current, followed, and known by the right people. The testing obligations are elsewhere in Requirement 11; 11.1.1 makes them governable and auditable. 1
What counts as “known to all affected parties”?
Treat “known” as something you can prove: training completion, policy acknowledgments, or documented SOP sign-off for roles that execute, review, approve, or remediate findings. Map “affected parties” to roles, not individuals. 1
If a third party performs pen testing or scanning, do we still need procedures?
Yes. You still need documented internal procedures for selecting the provider, defining scope, receiving and reviewing results, tracking remediation, and closing findings or exceptions. The assessor will test your governance trail. 1
Can we meet this with a wiki page instead of a formal policy document?
Format matters less than control. A wiki can work if it has clear ownership, version history, approval evidence, and is actually used by teams. If you cannot show approvals and change history, expect audit friction. 1
What’s the fastest way to make this audit-ready?
Create a Requirement 11 document register and an evidence map that ties each procedure to the artifacts it produces (tickets, reports, remediation records). Then run a sample-based internal check to confirm “in use” is demonstrable. 1
How do we keep procedures “up to date” without creating process drag?
Tie updates to specific triggers: tool changes, environment changes, and ownership changes. Keep reviews lightweight, but enforce approvals and versioning so you can show controlled change over time. 1
Footnotes
1. PCI DSS v4.0.1, Requirement 11.1.1.