CMMC Level 2 Practice 3.13.5: Implement subnetworks for publicly accessible system components that are physically or logically separated from internal networks

To meet CMMC Level 2 Practice 3.13.5 (implement subnetworks for publicly accessible system components that are physically or logically separated from internal networks), place any internet-facing components in a dedicated subnetwork (or equivalent segmented zone) so they are isolated from systems that store/process/transmit CUI, and tightly control traffic between zones. Operationalize it by defining your DMZ/subnet boundary, enforcing allow-listed flows, and keeping configuration evidence ready for assessment. (NIST SP 800-171 Rev. 2)

Key takeaways:

  • Internet-facing services belong in segmented subnetworks (e.g., DMZ/VPC subnet) with restricted paths to internal networks. (NIST SP 800-171 Rev. 2)
  • Assessors will expect both technical enforcement (routing/ACLs/firewalls) and clear documentation of the boundary and allowed flows. (NIST SP 800-171 Rev. 2)
  • Evidence quality often fails teams: you need configs, diagrams, rulesets, and proof the segmentation is maintained over time. (DoD CMMC Program Guidance)

CMMC Level 2 practice 3.13.5 is about reducing blast radius. If you expose a component to the public internet (a web server, VPN portal, email gateway, SFTP endpoint, DNS, reverse proxy, cloud load balancer, or even a SaaS connector you manage), you should assume it will be probed and eventually misconfigured. The control requires you to separate those public-facing components from internal systems by using subnetworks (or equivalent segmented network zones) and then strictly control any traffic that crosses that boundary. (NIST SP 800-171 Rev. 2)

For a CCO, GRC lead, or Compliance Officer, the fastest path is to translate this practice into three auditable outcomes: (1) you can point to where the “public zone” lives (diagram + inventory), (2) you can show enforced technical controls that prevent direct access from the internet-facing zone to the CUI environment except for explicitly approved flows, and (3) you can prove the segmentation stays in place through change control and recurring evidence capture. This page gives requirement-level guidance you can hand to network/security engineering and then verify yourself for assessment readiness under the CMMC Program. (32 CFR Part 170; DoD CMMC Program Guidance)

Regulatory text

Regulatory excerpt (provided): “CMMC Level 2 practice mapped to NIST SP 800-171 Rev. 2 requirement 3.13.5 (Implement subnetworks for publicly accessible system components that are physically or logically separated from internal networks).” (NIST SP 800-171 Rev. 2)

Operator meaning: You must implement network segmentation so that publicly accessible system components are placed on separate subnetworks (or equivalent segmentation constructs) from internal networks, especially from the environment that stores, processes, or transmits CUI. The intent is to isolate exposure and constrain inbound/outbound pathways using enforced network controls (routing, firewalls, ACLs, security groups, micro-segmentation) and to make that isolation demonstrable to a CMMC assessor. (NIST SP 800-171 Rev. 2; DoD CMMC Program Guidance)

Plain-English interpretation (what the requirement is really asking)

If something is reachable from the public internet, it should not sit “on the same network” as your internal business systems or your CUI enclave. Put it in a dedicated zone/subnet and only permit the minimum required connections to and from internal systems. Everything else is blocked by default.

A practical interpretation that most assessors accept aligns with these common patterns:

  • DMZ / screened subnet for internet-facing services in on-prem networks.
  • Dedicated VPC/VNet subnets in cloud with security groups/NACLs and controlled peering to private subnets.
  • Reverse proxy / WAF tier in front of apps, with app and data tiers on private subnets.
  • Dedicated management plane separate from public and user networks.

Your compliance goal: show this is designed intentionally, implemented technically, and kept current as systems change. (NIST SP 800-171 Rev. 2)

Who it applies to

Entities: Defense contractors and subcontractors handling CUI who must meet CMMC Level 2 requirements. (32 CFR Part 170; DoD CMMC Program Guidance)

Operational context (what triggers 3.13.5):

  • You host any public-facing application or service used by employees, customers, suppliers, or third parties (e.g., portals, ticketing front ends, support tools).
  • You operate perimeter access points (VPN, ZTNA gateways, remote access portals) that are reachable from the internet.
  • You run internet-facing infrastructure components (MX/email gateways, public DNS, externally accessible APIs).
  • You have cloud-hosted components with public IPs or internet-facing load balancers.

If none of your system components are publicly accessible, you still need to be able to explain that conclusion and show how you verified it (asset inventory + network exposure review). (NIST SP 800-171 Rev. 2)

What you actually need to do (step-by-step)

1) Identify “publicly accessible system components”

Build a list that includes:

  • Hostnames/domains, public IPs, cloud load balancers, VPN endpoints
  • Where they live (on-prem segment, cloud account/subscription/project)
  • Owners and business purpose
  • Whether they touch or can reach the CUI environment

Execution tip: Ask engineering for an export from external attack surface tooling (if you have it) or combine DNS inventory, firewall NAT rules, and cloud public IP listings. Your goal is a defensible inventory. (NIST SP 800-171 Rev. 2)
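The merge itself can be sketched in a few lines. Everything here (field names, sample records, source shapes) is hypothetical; adapt it to whatever exports your tooling actually produces:

```python
# Hypothetical sketch: union three exposure sources (DNS records, firewall
# NAT rules, cloud public IPs) into one inventory keyed by public IP, so
# nothing with a public address escapes the list. Sample data is illustrative.

dns_records = [
    {"hostname": "portal.example.com", "public_ip": "203.0.113.10"},
    {"hostname": "vpn.example.com", "public_ip": "203.0.113.20"},
]
nat_rules = [
    {"public_ip": "203.0.113.10", "internal_ip": "10.0.5.10", "port": 443},
]
cloud_public_ips = [
    {"public_ip": "203.0.113.30", "resource": "lb-app-prod", "account": "prod"},
]

def build_inventory(dns, nat, cloud):
    """Merge all sources; track which source(s) saw each public IP."""
    inventory = {}
    for rec in dns:
        entry = inventory.setdefault(rec["public_ip"], {"sources": []})
        entry["hostname"] = rec["hostname"]
        entry["sources"].append("dns")
    for rule in nat:
        entry = inventory.setdefault(rule["public_ip"], {"sources": []})
        entry["internal_ip"] = rule["internal_ip"]
        entry["sources"].append("nat")
    for ip in cloud:
        entry = inventory.setdefault(ip["public_ip"], {"sources": []})
        entry["resource"] = ip["resource"]
        entry["sources"].append("cloud")
    return inventory

inventory = build_inventory(dns_records, nat_rules, cloud_public_ips)
for ip, entry in sorted(inventory.items()):
    # Entries seen by only one source are prime candidates for follow-up:
    # e.g. a cloud IP with no DNS record may be shadow infrastructure.
    print(ip, entry["sources"])
```

Single-source entries are exactly the gaps assessors probe, so flag them for owner assignment first.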

2) Define required zones and boundaries

Document at least these zones:

  • Public zone / DMZ (internet-facing)
  • Internal private zone (corporate IT)
  • CUI enclave / controlled environment (if you use enclaving)

Then document the boundary controls between them (firewalls, cloud security groups, routing rules, segmentation gateways). (NIST SP 800-171 Rev. 2)
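One lightweight way to keep zones and boundary controls reviewable is to record them as data rather than only in diagrams, so rule reviews and diagrams start from one source of truth. A minimal sketch, with hypothetical zone names, CIDRs, and device IDs:

```python
# Illustrative zone model: each zone's address space plus the enforcement
# point (firewall/gateway) responsible for each zone pair. All names are
# hypothetical examples, not a prescribed design.

zones = {
    "public-dmz": {"cidrs": ["192.0.2.0/24"], "internet_facing": True},
    "internal": {"cidrs": ["10.0.0.0/16"], "internet_facing": False},
    "cui-enclave": {"cidrs": ["10.50.0.0/24"], "internet_facing": False},
}

boundaries = {
    ("public-dmz", "internal"): "fw-edge-01",
    ("public-dmz", "cui-enclave"): "fw-edge-01",
    ("internal", "cui-enclave"): "fw-enclave-01",
}

def enforcement_point(src, dst):
    """Return the device that must carry rules for a zone pair."""
    return boundaries.get((src, dst)) or boundaries.get((dst, src))

print(enforcement_point("public-dmz", "cui-enclave"))  # fw-edge-01
```

Kept under version control, this file also becomes change-control evidence: any edit to a zone or boundary shows up in history with an approver.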

3) Implement subnetworks (or equivalent) for internet-facing components

Choose a pattern per environment:

On-prem option:

  • Create a DMZ VLAN/subnet
  • Place web servers, reverse proxies, email gateways, or VPN concentrators in the DMZ
  • Ensure the DMZ cannot route freely into internal networks

Cloud option:

  • Place internet-facing components in a dedicated public subnet
  • Put app/data tiers in private subnets with no direct internet route
  • Use security groups/NACLs to enforce zone separation
  • Restrict peering/Transit Gateway routes so public subnets cannot reach the CUI enclave except through explicit controls
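A recurring check against exported route tables can catch the wide-peering mistake before an assessor does. This sketch assumes route tables have already been exported into simple dictionaries (for example, from your cloud CLI); the table names, targets, and CIDRs are illustrative:

```python
import ipaddress

# Hypothetical validation pass: flag any route in a public subnet's route
# table whose destination overlaps the CUI enclave CIDR. Data shapes here
# are assumptions about an export format, not a specific provider's API.

CUI_CIDR = ipaddress.ip_network("10.50.0.0/24")

public_route_tables = {
    "rtb-public": [
        {"destination": "0.0.0.0/0", "target": "igw-123"},
        {"destination": "10.0.0.0/8", "target": "tgw-456"},  # too broad
    ],
}

def routes_reaching_cui(route_tables, cui_cidr):
    findings = []
    for rtb, routes in route_tables.items():
        for route in routes:
            dest = ipaddress.ip_network(route["destination"])
            if dest.prefixlen == 0:
                continue  # default internet route; reviewed separately
            if dest.overlaps(cui_cidr):
                findings.append((rtb, route["destination"], route["target"]))
    return findings

for rtb, dest, target in routes_reaching_cui(public_route_tables, CUI_CIDR):
    print(f"REVIEW: {rtb} routes {dest} via {target} toward the CUI range")
```

Run on a schedule, the empty-findings output itself becomes recurring evidence that public subnets stay non-routable to the enclave.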

The assessor focus is less about the brand of technology and more about whether segmentation is enforced and evidenced. (NIST SP 800-171 Rev. 2; DoD CMMC Program Guidance)

4) Allow-list only the required traffic across the boundary

Create a simple rule: deny by default, then allow only:

  • Inbound to the public service ports you truly need (typically via a reverse proxy/WAF)
  • Outbound from DMZ to internal systems only when required (for example, to a specific app service port)
  • Admin access only from a dedicated management network or jump host, not from the open internet

Write down each permitted flow as: Source zone → Destination zone → Port/Protocol → Purpose → Owner → Ticket/approval.

This becomes both your security design and your audit narrative. (NIST SP 800-171 Rev. 2)
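The flow register above can double as a machine-checkable allow list. A minimal, deny-by-default sketch (zone names, ports, owners, and ticket IDs are hypothetical):

```python
from dataclasses import dataclass

# Sketch of the flow register: Source zone -> Destination zone -> Port/
# Protocol -> Purpose -> Owner -> Ticket, with a deny-by-default lookup.

@dataclass(frozen=True)
class Flow:
    src_zone: str
    dst_zone: str
    port: int
    protocol: str
    purpose: str
    owner: str
    ticket: str

ALLOWED_FLOWS = [
    Flow("internet", "public-dmz", 443, "tcp", "Customer portal", "web-team", "CHG-1042"),
    Flow("public-dmz", "internal", 5432, "tcp", "Portal DB access", "web-team", "CHG-1043"),
]

def is_allowed(src_zone, dst_zone, port, protocol):
    """Deny by default: only flows recorded in the register pass."""
    return any(
        f.src_zone == src_zone and f.dst_zone == dst_zone
        and f.port == port and f.protocol == protocol
        for f in ALLOWED_FLOWS
    )

print(is_allowed("public-dmz", "internal", 5432, "tcp"))    # True
print(is_allowed("public-dmz", "cui-enclave", 445, "tcp"))  # False
```

Because every entry carries an owner and a ticket, the same structure answers the assessor's "which flows, and why?" question directly.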

5) Separate management access from public access

Public-facing components usually need administration. That is a common failure point.

Minimum expectation:

  • Admin interfaces are not exposed publicly
  • Admin access comes from a controlled internal management segment or controlled remote access solution
  • Privileged access is logged and reviewable

Tie this to your broader access control and audit logging practices so the segmentation control is not a standalone island. (NIST SP 800-171 Rev. 2)
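A simple review pass over exported firewall rules can back the “admin interfaces are not exposed publicly” claim with evidence. The rule fields and management-port list below are assumptions; extend them to match your export format:

```python
# Illustrative check: flag any rule that exposes a common management port
# to the whole internet. The rules and port set are hypothetical samples.

MANAGEMENT_PORTS = {22, 3389, 5900, 8443}  # SSH, RDP, VNC, admin UIs

rules = [
    {"name": "allow-https", "source": "0.0.0.0/0", "port": 443},
    {"name": "allow-ssh-any", "source": "0.0.0.0/0", "port": 22},   # finding
    {"name": "allow-ssh-mgmt", "source": "10.10.0.0/24", "port": 22},
]

def exposed_admin_rules(ruleset):
    """Return rules that open a management port to any source."""
    return [
        r for r in ruleset
        if r["source"] == "0.0.0.0/0" and r["port"] in MANAGEMENT_PORTS
    ]

for r in exposed_admin_rules(rules):
    print(f"FINDING: {r['name']} exposes port {r['port']} to the internet")
```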

6) Put change control and recurring evidence capture around segmentation

Most teams implement segmentation once and then let it drift.

Operationalize:

  • Network/security group changes require a ticket and approval
  • Maintain a baseline of “known good” firewall/ACL/security group rules
  • Recurring review of internet exposure (public IPs, open ports, new domains)
  • Periodic validation that the CUI enclave remains non-routable from the public zone except for approved flows
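Baseline comparison is the cheapest drift detector for the “known good” ruleset. A sketch that represents rules as hashable tuples (the sample rules are illustrative):

```python
# Drift detection sketch: diff the current exported ruleset against a
# committed baseline and report additions and removals for review.

baseline = {
    ("internet", "public-dmz", 443, "tcp"),
    ("public-dmz", "internal", 5432, "tcp"),
}
current = {
    ("internet", "public-dmz", 443, "tcp"),
    ("public-dmz", "internal", 5432, "tcp"),
    ("public-dmz", "cui-enclave", 445, "tcp"),  # unapproved addition
}

added = current - baseline    # needs a change ticket or a rollback
removed = baseline - current  # was an approved flow silently dropped?

for rule in sorted(added):
    print("ADDED (needs approval):", rule)
for rule in sorted(removed):
    print("REMOVED (verify intent):", rule)
```

Archiving each run's diff alongside its change tickets gives you the "maintained over time" proof this step calls for.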

Daydream (when used for control operations) fits well here: map 3.13.5 to an owned control procedure and schedule recurring evidence capture so you are not rebuilding proof at assessment time. (DoD CMMC Program Guidance)

Required evidence and artifacts to retain

Assessors will ask for artifacts that show design + enforcement + operation. Keep these in a single “3.13.5 evidence packet”:

Design / documentation

  • Network segmentation policy/standard that states public components must be isolated
  • Current network diagram(s) showing public zone, internal zone, and CUI enclave boundaries
  • Data flow diagram for any app that bridges DMZ and internal/CUI systems
  • Inventory of publicly accessible components (with owners)

Technical enforcement proof

  • Firewall rule exports (or screenshots) showing deny-by-default posture and explicit allows
  • Cloud security group/NACL exports and routing tables
  • VPN/remote access configuration showing where it lands (not directly into CUI)
  • Reverse proxy/WAF configuration summary (where applicable)

Operational evidence

  • Change tickets for rule changes with approvals
  • Recurring review records (exposure scans, public IP review, firewall rule review)
  • Exception register for any required cross-zone connectivity that is higher risk, with compensating controls

Keep evidence aligned to how CMMC assessments are conducted under the program so your narrative matches what an assessor can verify. (32 CFR Part 170; DoD CMMC Program Guidance)

Common exam/audit questions and hangups

Expect these, and pre-answer them in your documentation:

  1. “Show me all publicly accessible components in scope.”
    Hangup: incomplete inventory or missing cloud assets.

  2. “Where are these hosted, and what subnetwork are they on?”
    Hangup: “flat network” designs or unclear subnet boundaries.

  3. “Prove the public zone cannot reach the CUI environment.”
    Hangup: overly broad allow rules, default routes, permissive security groups.

  4. “Which flows are allowed from DMZ to internal, and why?”
    Hangup: rules justified as “temporary” without expiry.

  5. “How do you prevent admins from managing DMZ hosts from the internet?”
    Hangup: open management ports, unmanaged jump boxes.

  6. “How do you keep this accurate over time?”
    Hangup: no recurring review cadence or missing change control artifacts.

These questions map directly to the intent of 3.13.5 under NIST SP 800-171 Rev. 2 and what CMMC assessments validate. (NIST SP 800-171 Rev. 2; DoD CMMC Program Guidance)

Frequent implementation mistakes (and how to avoid them)

  • Mistake: a public web server sits on the same subnet as internal servers. Why it fails 3.13.5: no isolation, so a compromise pivots easily. Fix: move it to a DMZ/public subnet; restrict routing and ACLs. (NIST SP 800-171 Rev. 2)
  • Mistake: “segmentation” exists only as a diagram. Why it fails 3.13.5: assessors need enforced controls. Fix: produce firewall/SG/NACL configs and route tables. (DoD CMMC Program Guidance)
  • Mistake: allow rules are too broad (“any-any” between zones). Why it fails 3.13.5: defeats the purpose of a subnetwork boundary. Fix: convert to explicit allow-list flows with owners and purpose. (NIST SP 800-171 Rev. 2)
  • Mistake: admin ports are exposed publicly. Why it fails 3.13.5: creates a direct attack path. Fix: force admin access through a management network/jump host; document it. (NIST SP 800-171 Rev. 2)
  • Mistake: in cloud, a public subnet can route to private/CUI subnets via wide peering routes. Why it fails 3.13.5: a common misconfiguration that bypasses the boundary. Fix: tighten route tables and security groups; validate effective routes. (NIST SP 800-171 Rev. 2)
  • Mistake: no ongoing review. Why it fails 3.13.5: drift breaks compliance. Fix: add change control and recurring evidence capture; track in Daydream. (DoD CMMC Program Guidance)

Enforcement context and risk implications

No public enforcement cases were provided in the source set for this requirement, so this page does not cite specific enforcement outcomes.

Risk-wise, this practice exists because internet-facing components are high-exposure by design. If they share networks with sensitive systems, one compromised service can become a stepping stone into CUI, expanding incident scope and triggering contractual and program-level consequences under CMMC expectations. Keep your discussion factual: segmentation reduces reachable attack paths and supports defensible boundary scoping for CUI environments. (NIST SP 800-171 Rev. 2; DoD CMMC Program Guidance)

Practical 30/60/90-day execution plan

First 30 days (stabilize the boundary)

  • Assign an owner for 3.13.5 (network/security engineering + GRC co-owner).
  • Produce the inventory of publicly accessible components and validate it with cloud/network exports.
  • Publish a “public zone” standard: what must go into DMZ/public subnet and what must not.
  • Draft current-state diagrams and list cross-zone flows that exist today. (NIST SP 800-171 Rev. 2)

Days 31–60 (implement and tighten controls)

  • Move or re-platform public components into dedicated subnetworks where needed.
  • Implement deny-by-default between public zone and internal/CUI networks.
  • Build allow-listed firewall/SG rules for required flows only, with documented rationale and approvals.
  • Restrict administrative access paths; remove public exposure of management interfaces. (NIST SP 800-171 Rev. 2)

Days 61–90 (evidence, repeatability, assessment readiness)

  • Create the evidence packet: diagrams, configs, exports, and flow register.
  • Stand up recurring reviews: public exposure check, firewall/SG rule review, route validation.
  • Add a change-control gate so segmentation-impacting changes cannot ship without review.
  • In Daydream, map CMMC Level 2 practice 3.13.5 to a documented control procedure and automate reminders for recurring evidence capture. (DoD CMMC Program Guidance)

Frequently Asked Questions

Does “subnetworks” mean I must build a traditional on-prem DMZ?

No. The requirement is satisfied by enforceable segmentation, which can be a DMZ, cloud subnets with security groups/NACLs, or equivalent network zoning, as long as isolation is real and provable. (NIST SP 800-171 Rev. 2)

Are SaaS applications “publicly accessible system components” under 3.13.5?

If the SaaS is operated by a third party, you usually can’t subnet it, but you can control connectivity from your environment to it and avoid direct network paths from internet-facing connectors into CUI systems. Document the boundary and compensating controls you control. (NIST SP 800-171 Rev. 2)

What’s the minimum evidence an assessor will accept?

A current diagram showing the segmented zones, plus configuration evidence (firewall/ACL/security group rules and routes) that enforces the separation, plus records showing you maintain it through change control or recurring review. (DoD CMMC Program Guidance)

We use ZTNA instead of VPN. Does 3.13.5 still apply?

Yes, if there are publicly reachable access brokers or gateways you manage: place those components in an isolated zone and tightly control any pathways from them into internal/CUI networks. Document how the access path terminates and what it can reach. (NIST SP 800-171 Rev. 2)

Can a reverse proxy in front of an internal app count as segmentation?

Only if the reverse proxy is in a separate subnetwork/zone and the network controls restrict traffic so the internet cannot directly reach the internal app tier. Provide the network path and the enforced rules. (NIST SP 800-171 Rev. 2)

How do I handle a legacy system that can’t be moved out of a flat network?

Treat it as an exception with a dated remediation plan, then add compensating controls you can enforce now (front it with a DMZ proxy, restrict routes, and lock down ports). Keep the exception approval and the technical controls as evidence. (NIST SP 800-171 Rev. 2)

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream