Safeguard 13.4: Perform Traffic Filtering Between Network Segments
To meet Safeguard 13.4 (Perform Traffic Filtering Between Network Segments), you must enforce and document allowlisted network flows between defined segments so that only approved protocols, ports, and sources/destinations can traverse segment boundaries. Operationalize it by defining segmentation zones, implementing filtering controls (firewalls, ACLs, microsegmentation), validating rule effectiveness, and retaining recurring evidence of review and change control 1.
Key takeaways:
- Define segments and “approved flows” first; filtering rules come second 1.
- Enforce default-deny at segment boundaries, with explicit, justified exceptions tied to owners and tickets.
- Treat evidence as part of the control: configs, rule reviews, test results, and change records must be repeatable.
“Perform Traffic Filtering Between Network Segments” is a requirement-level control that examiners and internal auditors interpret through one question: can systems in one network zone talk to systems in another zone only when you have explicitly approved the traffic, implemented the enforcement, and can prove it stays that way? Safeguard 13.4 in CIS Controls v8 sets the expectation that segmentation is not only a diagram. It is an enforced policy at network boundaries 1.
For a CCO, GRC lead, or security assurance owner, the fastest path is to treat 13.4 as a finite inventory-and-enforcement problem: (1) enumerate your segments and boundary points, (2) define business-justified flows that must cross boundaries, (3) implement technical controls that block everything else, (4) test and monitor the boundaries, and (5) retain evidence that the filtering remains effective through changes.
This page is written to help you turn safeguard 13.4 into an auditable control with clear ownership, a stable operating cadence, and artifacts that reduce back-and-forth during assessments 1.
Regulatory text
Framework requirement: “CIS Controls v8 safeguard 13.4 implementation expectation (Perform Traffic Filtering Between Network Segments).” 1
Operator interpretation: You must implement technical traffic filtering at the boundaries between network segments so that only explicitly approved traffic can pass between segments. In practice, assessors will look for (a) defined segments, (b) enforced filtering rules (not “tribal knowledge”), (c) a default-deny posture with documented exceptions, and (d) recurring evidence that rules are reviewed, tested, and changed through governance 1.
Plain-English interpretation (what the requirement means)
- You have more than one network “zone” (segments such as user LAN, server VLAN, prod VPC, PCI environment, OT network, dev/test, SaaS connectivity edge).
- Any traffic that crosses from one zone to another must be controlled by a filtering mechanism (firewall rules, network ACLs, security groups, microsegmentation policy, service mesh policy, host firewall policy).
- “Controlled” means you can state what is allowed (source, destination, protocol/port, direction), why it is allowed (business need), who owns it, and how changes are approved.
- Everything else is blocked by design, not blocked “because nobody configured it yet.”
Who it applies to
Entity types: Enterprises and technology organizations implementing CIS Controls v8 1.
Operational contexts where 13.4 usually becomes a finding:
- Hybrid networks: on-prem segments plus cloud VPC/VNet plus remote access.
- Flat internal networks: few VLANs, broad east-west connectivity, minimal internal firewalling.
- M&A environments: inherited networks with undocumented trust relationships.
- Third-party connectivity: direct links (VPNs, private circuits) into internal segments.
- Production vs non-production: weak boundaries that allow dev/test to reach prod.
What you actually need to do (step-by-step)
Step 1: Define your segments and boundary control points
Create a segmentation register that lists:
- Segment name (e.g., “Prod-App,” “Prod-DB,” “Corp-User,” “Shared-Services,” “3rd-Party-Extranet”).
- Where it lives (on-prem, cloud account/subscription, colo).
- Boundary enforcement point(s) (firewall pair, cloud NVA, security groups + NACLs, SDN policy controller).
- Owner (network/security team, cloud platform team).
- High-level data/system sensitivity (use your internal classification labels, but be consistent).
Evidence output: segmentation register + current network diagrams showing boundary locations.
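The register can live as structured data rather than a spreadsheet, which makes later mapping from segments to rules and owners mechanical. A minimal sketch, assuming hypothetical segment names, enforcement points, and owners (none of these fields are prescribed by CIS 13.4):

```python
# Hypothetical segmentation register entries; names, locations, and
# owners below are illustrative examples, not a mandated schema.
SEGMENTS = [
    {
        "name": "Prod-App",
        "location": "on-prem DC1",
        "enforcement_points": ["fw-core-pair"],
        "owner": "Network Security",
        "sensitivity": "High",
    },
    {
        "name": "Corp-User",
        "location": "on-prem campus",
        "enforcement_points": ["fw-core-pair"],
        "owner": "Network Security",
        "sensitivity": "Moderate",
    },
]

def find_segment(name):
    """Look up a segment record by name; None means it is unregistered."""
    return next((s for s in SEGMENTS if s["name"] == name), None)
```

A lookup that returns None for an unknown name is a useful early warning: traffic sourced from an unregistered segment is itself a segmentation gap.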
Step 2: Define “allowed flows” between segments (the allowlist)
For each pair of segments that must communicate, document:
- Source segment(s) and destination segment(s).
- Protocol/port (and app identity where possible).
- Directionality (initiator and responder).
- Business justification (system function, dependency).
- Data sensitivity impact (if the flow touches regulated data, call it out).
- Rule owner and approver.
- Expiration or review trigger (at minimum: review on material change).
Practical tip: Start with a small set of high-risk boundaries (user-to-server, dev-to-prod, third-party-to-internal). Expand once the operating model works.
Evidence output: an “Inter-Segment Flow Matrix” table (see example below).
Example (flow matrix excerpt):
| From | To | Allowed traffic | Why | Owner | Ticket/approval |
|---|---|---|---|---|---|
| Corp-User | Prod-App | HTTPS to app VIP only | User access to web app | App Owner | CHG- / SEC- |
| Prod-App | Prod-DB | DB port to DB cluster | App-to-DB dependency | Platform Owner | CHG- / SEC- |
| 3rd-Party-Extranet | Shared-Services | SFTP to hardened endpoint | File exchange | Integration Owner | TPRM exception |
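The matrix above can be carried as structured data so the same allowlist drives both enforcement checks and audit evidence. A minimal sketch, with hypothetical segment names and ports (5432 stands in for "DB port"; your real matrix supplies the values):

```python
# Hypothetical inter-segment allowlist mirroring the matrix excerpt;
# segments, ports, and owners are examples only.
ALLOWED_FLOWS = [
    {"src": "Corp-User", "dst": "Prod-App", "proto": "tcp", "port": 443,
     "why": "User access to web app", "owner": "App Owner"},
    {"src": "Prod-App", "dst": "Prod-DB", "proto": "tcp", "port": 5432,
     "why": "App-to-DB dependency", "owner": "Platform Owner"},
]

def is_allowed(src, dst, proto, port):
    """Default-deny semantics: a flow passes only if it matches an entry."""
    return any(
        f["src"] == src and f["dst"] == dst
        and f["proto"] == proto and f["port"] == port
        for f in ALLOWED_FLOWS
    )
```

Note the shape of `is_allowed`: there is no "deny list" at all. Anything not explicitly enumerated is rejected, which is exactly the posture an assessor will probe for.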
Step 3: Implement filtering with default-deny at each boundary
At each boundary enforcement point:
- Set an explicit baseline policy: deny inter-segment traffic by default.
- Add rules only for flows in your allowlist matrix.
- Avoid “any/any” and broad CIDR ranges unless you can defend them with system constraints.
- Use rule naming standards that map to the flow matrix and a change record.
Where controls live (typical patterns):
- On-prem: internal segmentation firewalls, VLAN ACLs, router ACLs.
- Cloud: security groups, network ACLs, cloud firewall services, microsegmentation tooling.
- Workloads: host-based firewalls (especially where network-level enforcement is limited).
Evidence output: configuration exports or screenshots, rulebase excerpts, policy objects, and mapping to allowlisted flows.
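Under default-deny, rule generation can be mechanical: every allow rule is derived from a matrix row, and the final rule denies everything else. A sketch that renders generic pseudo-config (the output format is not any vendor's syntax, and the flows and ticket IDs are placeholders):

```python
# Sketch: render the allowlist into ordered, named rules ending in an
# explicit default-deny. Flow values and ticket IDs are placeholders.
FLOWS = [
    {"src": "Corp-User", "dst": "Prod-App", "proto": "tcp", "port": 443,
     "ticket": "CHG-0001"},
    {"src": "Prod-App", "dst": "Prod-DB", "proto": "tcp", "port": 5432,
     "ticket": "CHG-0002"},
]

def render_rules(flows):
    """Emit allow rules named so each maps back to a matrix row and ticket."""
    rules = []
    for i, f in enumerate(flows, start=1):
        name = f"ALLOW-{f['src']}-{f['dst']}-{i:03d}"
        rules.append(
            f"{name}: permit {f['proto']}/{f['port']} "
            f"from {f['src']} to {f['dst']}  # {f['ticket']}"
        )
    # Everything not explicitly allowed is denied: the 13.4 baseline.
    rules.append("DENY-ALL: deny ip from any to any  # default-deny baseline")
    return rules
```

Embedding the change ticket in the rule name or comment is what makes the later "matrix-to-rule mapping" evidence cheap to produce.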
Step 4: Validate effectiveness (prove traffic is actually filtered)
Filtering that exists only on paper will fail an assurance review. Validate in two ways:
- Configuration validation: confirm rules match the allowlist and deny everything else across segment boundaries.
- Behavioral validation: test that disallowed traffic is blocked (for example, attempt connection from a workstation segment to a database segment outside the approved path).
Keep the validation lightweight but repeatable. Tie tests to your most sensitive boundaries first.
Evidence output: test plans, test results, packet capture excerpts where appropriate, and logging showing denies at the boundary.
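Behavioral validation can be scripted as a connection probe run from inside a source segment. A minimal sketch using only the standard library; the internal hostname in the comment is a placeholder, and in your environment the targets would come from the flow matrix:

```python
# Sketch of a behavioral check: attempt a TCP handshake that the boundary
# should block and classify the outcome for the test record.
import socket

def probe(host, port, timeout=3.0):
    """Return 'open' if a TCP handshake completes, else 'blocked/unreachable'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError:
        return "blocked/unreachable"

# Expected behavior at a default-deny boundary (placeholder target):
# probe("prod-db.internal", 5432) from a workstation segment should
# come back "blocked/unreachable" unless that flow is on the allowlist.
```

Run the same probe set each cycle and retain the classified results; a probe that flips from blocked to open between cycles is drift worth a ticket.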
Step 5: Put 13.4 into an operating cadence (reviews + change control)
Operationalize the control so it stays true after the initial cleanup:
- Require a change record for any rule creation/modification/removal at a segment boundary.
- Require justification and owner approval for exceptions.
- Perform recurring reviews of boundary rules against the allowlist matrix, and capture evidence each cycle.
- Monitor for drift: new rules added outside process, shadow IT networks, or new cloud security groups with broad ingress/egress.
Evidence output: change tickets, approval records, recurring review sign-offs, and drift reports.
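Drift detection largely reduces to a set comparison between approved flows and the rules actually deployed at a boundary. A sketch with placeholder flow tuples (in practice the deployed set would be parsed from a rulebase export):

```python
# Sketch: flag rules deployed on a boundary but absent from the approved
# allowlist (drift), and approved flows with no enforcing rule (gaps).
APPROVED = {
    ("Corp-User", "Prod-App", "tcp", 443),
    ("Prod-App", "Prod-DB", "tcp", 5432),
}
DEPLOYED = {
    ("Corp-User", "Prod-App", "tcp", 443),
    ("Corp-User", "Prod-DB", "tcp", 5432),  # added outside process
}

def drift_report(approved, deployed):
    """Both directions matter: unapproved rules and unenforced flows."""
    return {
        "unapproved_rules": sorted(deployed - approved),
        "unenforced_flows": sorted(approved - deployed),
    }
```

The two-sided report is deliberate: an unapproved rule is a security finding, while an unenforced approved flow usually surfaces as an outage or a stale matrix entry.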
Step 6: Map the requirement to control language and evidence capture (audit readiness)
CIS 13.4 becomes auditable when you can show “design + operating effectiveness.” Build a short control statement that describes:
- Scope (which segments/boundaries are in scope).
- The enforcement mechanisms.
- The review and change process.
- The evidence produced each cycle.
A practical approach is to map 13.4 to documented control operation and recurring evidence capture, then automate collection where possible 1. Daydream is typically introduced here as the system of record for the control narrative, the evidence checklist, and the recurring task schedule, so the control survives staffing changes without losing proof.
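The four elements of the control statement can be kept as one structured record so the narrative, evidence checklist, and cadence travel together. A sketch with illustrative values; this is not a Daydream or CIS-mandated schema:

```python
# Hypothetical control record for CIS 13.4; every field value below is
# an example, not prescribed language.
CONTROL_13_4 = {
    "id": "CIS-13.4",
    "scope": ["Corp-User/Prod-App boundary", "Prod-App/Prod-DB boundary"],
    "enforcement": ["internal segmentation firewalls", "cloud security groups"],
    "process": "default-deny baseline; rule changes via ticketed approval",
    "evidence_each_cycle": [
        "rulebase export mapped to flow matrix",
        "rule review sign-off",
        "behavioral test results",
    ],
    "review_cadence_days": 90,
}

def evidence_checklist(control):
    """Items an assessor should receive for one review cycle."""
    return list(control["evidence_each_cycle"])
```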
Required evidence and artifacts to retain
Keep artifacts that let an assessor trace: segments → allowed flows → enforced rules → validation → governance.
Minimum evidence set:
- Segmentation register and current network diagrams.
- Inter-segment flow matrix (allowlist) with owners and justifications.
- Boundary rulebase exports (or read-only auditor views) mapped to the flow matrix.
- Default-deny proof (policy screenshots/config showing denies and limited allows).
- Validation evidence: test results and logs showing blocked inter-segment attempts.
- Change control records for rule changes (tickets, approvals, implementation notes).
- Recurring review evidence (review checklist, findings, remediation tickets).
Common exam/audit questions and hangups
Auditors usually press on these points:
- “Show me your segments and where filtering is enforced.” (Diagrams must match reality.)
- “Is the default posture deny?” If you can’t articulate the baseline, expect deeper testing.
- “How do you prevent rule sprawl?” You need review cadence and naming/ownership.
- “How do cloud security groups and on-prem firewalls align?” Split ownership is a common gap.
- “Can you prove disallowed traffic is blocked?” Config reviews alone may not satisfy internal audit.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails 13.4 | Fix |
|---|---|---|
| “We have VLANs, so we’re segmented” | VLANs without enforcement can still permit broad routing | Put filtering at the L3 boundary and document default-deny |
| Allowlist exists, but rules don’t map cleanly | You can’t prove the requirement is implemented | Add rule naming standards and a matrix-to-rule mapping |
| Too many “temporary” any/any exceptions | Exceptions become permanent and expand blast radius | Require expirations and periodic exception review |
| Cloud is ignored | East-west traffic in cloud can be wide open | Treat security groups/NACLs as boundary controls and evidence them |
| No recurring evidence | You can’t show the control operates over time | Schedule recurring reviews and retain outputs in a system of record |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Practically, failing to filter between segments increases the blast radius of credential theft, malware propagation, and third-party access misuse. It also raises the likelihood that a single misconfigured system becomes a path into sensitive environments 1.
Practical 30/60/90-day execution plan
First 30 days: establish scope and baseline control design
- Name an owner for inter-segment filtering governance (network security or platform security).
- Build the segmentation register and identify boundary enforcement points.
- Inventory existing boundary rules and identify broad permissions.
- Draft the inter-segment flow matrix for the highest-risk boundaries.
- Create the control narrative and evidence checklist (store it in Daydream if you already manage controls there).
Days 31–60: implement and normalize change control
- Convert high-risk boundaries to default-deny with explicit allows aligned to the matrix.
- Implement rule naming/tagging that links to change records.
- Stand up a rule request workflow (ticket template with justification, owner, approver, expiry).
- Run the first formal rule review and capture evidence.
Days 61–90: validate, monitor, and harden
- Execute repeatable validation tests for key boundaries and store results.
- Add monitoring for denied inter-segment traffic spikes and unexpected new rules.
- Expand coverage to remaining segments and cloud environments.
- Hold a tabletop with incident response to confirm segmentation boundaries support containment.
Frequently Asked Questions
Do we need a “default deny” rule everywhere to meet safeguard 13.4?
You need a provable posture that only approved traffic crosses segment boundaries. Default-deny is the clearest way to demonstrate that, and it simplifies audits because exceptions become explicit.
Does microsegmentation count, or must this be a firewall?
Microsegmentation can satisfy the requirement if it enforces policy between segments/workloads and you can produce the same artifacts: allowed-flow definitions, enforced rules, testing evidence, and change governance 1.
How do we handle cloud security groups versus on-prem firewalls?
Treat both as boundary enforcement points and document them in one segmentation register. Keep the flow matrix technology-agnostic, then map each allowed flow to its specific control implementation (security group rule, firewall policy, ACL).
What level of documentation is “enough” for allowed flows?
Enough to trace each rule to a justified business dependency with an accountable owner. If you can’t explain why a port is open, it’s a candidate for removal or an exception with explicit approval.
What evidence is most likely to be missing during an audit?
Recurring evidence of operation: rule review outputs, validation tests, and change approvals. Teams often have configs but can’t show governance over time.
How do we operationalize this without slowing down engineering?
Use a standard request template, pre-approved patterns (for common app tiers), and automation for evidence collection. Keep the approval path short but require ownership, justification, and traceability for every boundary rule.
Footnotes
1. CIS Controls v8; CIS Controls Navigator v8.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream