CIS AWS Foundations v1.2 4.1: Security groups should not allow ingress from 0.0.0.0/0 or ::/0 to port 22
To meet CIS AWS Foundations v1.2 requirement 4.1, you must ensure no AWS Security Group rule allows inbound SSH (TCP/22) from the public internet over IPv4 (0.0.0.0/0) or IPv6 (::/0). Operationalize it by continuously detecting violations (Security Hub control EC2.13), remediating or replacing offending rules, and retaining evidence of the control’s ongoing operation.
Key takeaways:
- Eliminate any Security Group inbound rule that exposes TCP/22 to 0.0.0.0/0 or ::/0.
- Prefer SSM Session Manager or tightly scoped administrative access (bastion, VPN, or fixed IP allowlists).
- Treat this as a continuous control: detect, remediate, prevent reintroduction, and document evidence.
This requirement targets a common and high-impact cloud misconfiguration: exposing SSH to the internet. CIS AWS Foundations v1.2 4.1 expects you to prevent Security Groups from allowing inbound access from “anywhere” to port 22, over both IPv4 and IPv6. In practice, auditors and internal security teams treat this as a baseline cloud hygiene control because it reduces brute-force attempts, credential stuffing against SSH keys/passwords, and the chance that opportunistic scanning turns into initial access.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to: (1) define what “compliant” means in your environment (no public ingress to TCP/22), (2) implement detection that covers all accounts/regions, (3) remediate current findings with a safe workflow, and (4) put guardrails in place so teams can’t accidentally re-open SSH to the world. AWS Security Hub already maps this to a control (EC2.13), which makes it easier to standardize evidence collection and reporting across accounts 1. The CIS Benchmark provides the baseline expectation and framing 2.
Regulatory text
Excerpt (as provided): “Implement CIS AWS Foundations Benchmark v1.2 requirement 4.1 as mapped in AWS Security Hub.” 3
Operator interpretation: You need a control that (a) detects any Security Group allowing inbound SSH from 0.0.0.0/0 or ::/0, (b) remediates violations promptly, and (c) prevents reoccurrence through technical guardrails and change control. “As mapped in AWS Security Hub” means you can use Security Hub’s CIS mapping (EC2.13) as your system-of-record for detection results and evidence packaging 1.
Plain-English interpretation (what this requirement really means)
- You may not expose SSH (port 22) to the public internet through a Security Group rule that allows inbound from:
- 0.0.0.0/0 (any IPv4 address), or
- ::/0 (any IPv6 address).
- The requirement is about Security Group ingress rules, not NACLs or host firewalls (though those can be compensating layers).
- “Should not allow” is operationally enforced as zero tolerance in most CIS-aligned programs: if any SG has a public SSH rule, you treat it as a finding to fix.
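The conditions above can be expressed as a small predicate over a single ingress permission. This is a sketch, assuming the dictionary shape returned by EC2’s `describe_security_groups` API (`IpPermissions` entries); the `"-1"` handling reflects that an “all traffic” rule implicitly covers port 22:

```python
def allows_public_ssh(perm: dict) -> bool:
    """Return True if one EC2 ingress permission (an entry from a
    Security Group's IpPermissions list) exposes TCP/22 to the world."""
    proto = perm.get("IpProtocol")
    if proto not in ("tcp", "-1"):  # "-1" means all protocols
        return False
    # "-1" carries no port range and implicitly includes port 22.
    if proto == "tcp":
        if not (perm.get("FromPort", 0) <= 22 <= perm.get("ToPort", 65535)):
            return False
    public_v4 = any(r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", []))
    public_v6 = any(r.get("CidrIpv6") == "::/0" for r in perm.get("Ipv6Ranges", []))
    return public_v4 or public_v6
```

Note that the check covers port *ranges* that include 22 (for example 20–25), not only rules pinned exactly to 22, and evaluates both address families.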
Who it applies to
Entities
- Any organization operating workloads in AWS and aligning to CIS AWS Foundations Benchmark v1.2 2.
- Teams using AWS Security Hub to assess CIS posture 1.
Operational scope (what systems are in-scope)
- All AWS accounts and regions where Security Groups are created or managed.
- All VPC Security Groups, including those attached to:
- EC2 instances
- Load balancers (less common for SSH, but still possible)
- Network interfaces used by appliances or container hosts
- Both IPv4 and IPv6 ingress rules.
What you actually need to do (step-by-step)
Step 1: Define the compliance rule (in writing)
Create a short control statement your engineers can implement without interpretation drift:
Control statement: “No Security Group may allow inbound TCP/22 from 0.0.0.0/0 or ::/0 in any AWS account/region.” Map it to CIS AWS Foundations v1.2 4.1 and Security Hub control EC2.13 4.
Also define the approved admin access patterns (pick at least one):
- Preferred: AWS Systems Manager Session Manager (SSH disabled or restricted).
- Alternative: Bastion host with restricted ingress (corporate VPN or fixed egress IPs).
- Exception-only: Temporary access with ticket, approval, auto-expiration, and narrow source IPs.
Step 2: Implement detection (continuous monitoring)
Use AWS Security Hub’s CIS mapping as your primary detection feed (EC2.13) 1. Operational tasks:
- Confirm Security Hub is enabled in all in-scope accounts and regions.
- Ensure findings are centralized (typically to a security account) and retained long enough to support audits.
- Create an internal severity rubric: even if benchmark severity is “medium,” treat internet-exposed SSH as high operational urgency in your triage queue.
Day-to-day output you want: a list of Security Groups, rules, and attached resources that violate the condition.
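For spot checks that supplement Security Hub, that triage list can be produced with a short sweep. This sketch assumes input shaped like boto3’s `ec2.describe_security_groups()["SecurityGroups"]` response; fetching the data is left out so the logic stays self-contained:

```python
PUBLIC_SOURCES = {"0.0.0.0/0", "::/0"}

def list_ssh_violations(security_groups: list[dict]) -> list[dict]:
    """Flatten describe_security_groups-style data into triage rows:
    one row per (Security Group, offending public SSH source)."""
    rows = []
    for sg in security_groups:
        for perm in sg.get("IpPermissions", []):
            proto = perm.get("IpProtocol")
            covers_22 = proto == "-1" or (
                proto == "tcp"
                and perm.get("FromPort", 0) <= 22 <= perm.get("ToPort", 65535)
            )
            if not covers_22:
                continue
            sources = [r.get("CidrIp") for r in perm.get("IpRanges", [])]
            sources += [r.get("CidrIpv6") for r in perm.get("Ipv6Ranges", [])]
            for src in sources:
                if src in PUBLIC_SOURCES:
                    rows.append({
                        "GroupId": sg["GroupId"],
                        "GroupName": sg.get("GroupName", ""),
                        "Source": src,
                        "Protocol": proto,
                    })
    return rows
```

Run the sweep per region and per account; correlate `GroupId` with attached network interfaces before remediating.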
Step 3: Triage findings safely (avoid outages)
Before you remove a rule, identify blast radius:
- Identify attached resources (ENIs, instances, autoscaling groups).
- Confirm the port is actually used (many “temporary” rules become permanent).
- Decide remediation path:
- If SSH isn’t needed: remove TCP/22 ingress entirely.
- If SSH is needed: replace 0.0.0.0/0 or ::/0 with a restricted source (VPN CIDR, jump host SG reference, or fixed admin IP list).
- If operationally sensitive: migrate to SSM Session Manager and remove inbound SSH over time.
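The decision tree above can be encoded so triage outcomes are consistent across reviewers. This is one possible rubric, not a prescribed workflow:

```python
def remediation_path(ssh_needed: bool, restricted_source_available: bool) -> str:
    """Map triage answers to a remediation action (sketch of one rubric)."""
    if not ssh_needed:
        return "remove TCP/22 ingress entirely"
    if restricted_source_available:
        return "replace 0.0.0.0/0 or ::/0 with the restricted source"
    return "migrate to SSM Session Manager, then remove inbound SSH"
```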
Step 4: Remediate (change with accountability)
Implement a consistent workflow:
- Open a change ticket that references “CIS AWS Foundations v1.2 4.1 / Security Hub EC2.13”.
- Make the Security Group change (remove or restrict).
- Validate connectivity via the approved admin path.
- Close the finding and document the before/after.
If you have infrastructure-as-code (IaC), fix it in code first (or immediately after the emergency change), then redeploy. Otherwise the rule will reappear.
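As a sketch of the “restrict, don’t just delete” change, the following builds the parameter pairs you would pass to boto3’s `revoke_security_group_ingress` and `authorize_security_group_ingress`. The API calls themselves are omitted (apply them through change control), and revocation must match the existing rule exactly, so adjust the `public_rule` shape to what the Security Group actually contains:

```python
def build_restriction_change(group_id: str, allowed_cidrs: list[str]) -> dict:
    """Return revoke/authorize parameters that swap public SSH ingress
    for an explicit IPv4 allowlist. Sketch only: the revoke permission
    must mirror the rule as it exists (IPv4 and/or IPv6)."""
    public_rule = {
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
        "Ipv6Ranges": [{"CidrIpv6": "::/0"}],
    }
    restricted_rule = {
        "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
        "IpRanges": [
            {"CidrIp": cidr, "Description": "CIS 4.1 remediation allowlist"}
            for cidr in allowed_cidrs
        ],
    }
    return {
        "revoke": {"GroupId": group_id, "IpPermissions": [public_rule]},
        "authorize": {"GroupId": group_id, "IpPermissions": [restricted_rule]},
    }
```

Populating rule descriptions with the ticket reference makes the later audit trail much easier to assemble.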
Step 5: Put preventive guardrails in place
Detection without prevention turns into recurring work. Common guardrails:
- Policy-as-code checks in CI for Terraform/CloudFormation to block public SSH rules.
- SCPs or config rules where feasible (be careful: overly broad blocks can break legitimate patterns).
- Change management gates for Security Group modifications by privileged roles.
If you use Daydream to run third-party risk and control evidence operations, treat guardrails as a control improvement item with an owner, a due date, and an evidence checklist so the control stays “operating,” not just “designed.”
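A policy-as-code CI check can be as simple as scanning the `terraform show -json` plan output before apply. This sketch assumes the standard `planned_values` layout and only inspects `aws_security_group_rule` resources at the root module; real pipelines also need to walk child modules and inline `aws_security_group` ingress blocks:

```python
def find_public_ssh_in_plan(plan: dict) -> list[str]:
    """Return addresses of planned aws_security_group_rule resources
    that would open TCP/22 to 0.0.0.0/0 or ::/0 (root module only)."""
    bad = []
    root = plan.get("planned_values", {}).get("root_module", {})
    for res in root.get("resources", []):
        if res.get("type") != "aws_security_group_rule":
            continue
        v = res.get("values", {})
        if v.get("type") != "ingress":
            continue
        covers_22 = v.get("protocol") == "-1" or (
            v.get("protocol") == "tcp"
            and v.get("from_port", 0) <= 22 <= v.get("to_port", 65535)
        )
        sources = (v.get("cidr_blocks") or []) + (v.get("ipv6_cidr_blocks") or [])
        if covers_22 and any(s in ("0.0.0.0/0", "::/0") for s in sources):
            bad.append(res.get("address", "<unknown>"))
    return bad
```

Fail the pipeline when the returned list is non-empty, and surface the offending resource addresses in the PR so the fix happens in code.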
Step 6: Handle exceptions explicitly (rare, time-bound)
If a business unit insists on public SSH (for example, a legacy appliance), require:
- Documented business justification and compensating controls.
- A fixed end date and migration plan.
- Enhanced monitoring and credential/key management review. CIS is a benchmark; your governance program can allow exceptions, but auditors will expect them to be controlled and visible 2.
Required evidence and artifacts to retain
Keep evidence that proves both current state and ongoing operation:
- Control narrative
  - Control description mapped to CIS AWS Foundations v1.2 4.1 and Security Hub EC2.13 4.
- Detection evidence
  - Screenshots or exports of Security Hub results for EC2.13 showing pass/fail trends.
  - Centralized finding retention settings and scope (accounts/regions) documentation.
- Remediation records
  - Change tickets/PRs for Security Group updates (before/after).
  - Incident or exception approvals if remediation was delayed.
- Preventive controls evidence
  - CI policy checks (sample pipeline run output).
  - Guardrail documentation (SCPs, config rules, or deployment standards).
- Asset-level proof (spot checks)
  - A periodic snapshot of Security Groups showing no TCP/22 from 0.0.0.0/0 or ::/0.
Common exam/audit questions and hangups
| Auditor question | What they’re probing | What to show |
|---|---|---|
| “How do you know no Security Group allows 0.0.0.0/0 to port 22?” | Completeness and monitoring | Security Hub EC2.13 scope + latest results 1 |
| “Does this include IPv6?” | Control coverage maturity | Evidence that ::/0 is included in checks and remediation |
| “What happens when engineers need emergency access?” | Exception handling and process control | Emergency access SOP, approvals, expiration approach |
| “How do you prevent reintroduction?” | Control sustainability | CI/IaC checks, guardrails, and change control evidence |
| “Are all accounts/regions covered?” | Control boundaries | Inventory and Security Hub enablement documentation |
Hangup to expect: engineering teams may argue “SSH is locked down by keys.” That does not satisfy CIS 4.1; the requirement is about network exposure, not authentication strength 2.
Frequent implementation mistakes (and how to avoid them)
- Fixing only IPv4 and forgetting IPv6 (::/0).
  Avoidance: include both address families in detection queries and change reviews.
- Remediating in the console while IaC reintroduces the rule.
  Avoidance: require an IaC PR as part of closure criteria, or implement drift detection.
- Leaving “temporary” 0.0.0.0/0 rules in place.
  Avoidance: enforce expirations through tooling and require ticket references in rule descriptions (where your process supports it).
- Breaking operations by removing SSH without an admin path.
  Avoidance: standardize on SSM Session Manager or a bastion pattern before mass remediation.
- Treating the Security Hub finding as evidence by itself.
  Avoidance: keep the finding plus change records plus guardrail evidence. Auditors look for “detect + respond + prevent,” not only “scan.”
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific regulator actions. Practically, public SSH exposure is a common initial access vector in cloud incident investigations. Your risk story to leadership is simple: reducing internet-exposed administrative ports reduces attack surface and limits the chance that a single misconfiguration becomes an intrusion pathway.
Practical 30/60/90-day execution plan
(These phases are guidance, not a claim about required timing.)
First 30 days (stabilize and stop the bleeding)
- Confirm Security Hub CIS checks are enabled and centralized for EC2.13 1.
- Triage all current EC2.13 findings: identify owners, affected assets, and operational constraints.
- Implement an emergency break-glass pattern: temporary allowlist-based access with approval and clear rollback steps.
- Start capturing evidence: findings exports, tickets, remediation PRs.
Days 31–60 (systematize remediation and standard patterns)
- Migrate teams to an approved admin access pattern (SSM or bastion + VPN).
- Remediate recurring offenders by fixing IaC modules and templates.
- Add CI checks to block public SSH rules before deployment.
- Establish an exceptions register with an approval workflow and review cadence.
Days 61–90 (prevention, metrics, and audit readiness)
- Add preventive guardrails (policy-as-code, SCP/config controls where appropriate).
- Define steady-state reporting: open findings aging, repeat offender services, and exception inventory.
- Run an internal audit-style walkthrough: pick samples, trace detection → ticket → fix → validation → evidence retention.
- Use Daydream (if in your stack) to track control ownership, store evidence artifacts, and keep exception records tied to the requirement so audits don’t become a scavenger hunt.
Frequently Asked Questions
Does this requirement ban SSH entirely?
No. It bans SSH exposed to the public internet via 0.0.0.0/0 or ::/0 on port 22. You can still use SSH from restricted sources (for example, a VPN CIDR or a bastion Security Group) consistent with your access model.
What if we need SSH for a vendor or third party to support a system?
Require a restricted source (their fixed IP ranges) or provide access through your controlled pathway (VPN or bastion). Record the third-party access approval and keep a time-bound exception if you cannot meet the baseline immediately.
Is using SSH keys a compensating control for public exposure?
Keys help, but CIS 4.1 focuses on network-level exposure. You still need to remove 0.0.0.0/0 and ::/0 from inbound TCP/22 rules 2.
How should we handle autoscaling groups or ephemeral hosts where engineers want quick access?
Standardize on SSM Session Manager for ephemeral fleets, or route access through a bastion pattern. The control goal is stable: no public inbound SSH regardless of host lifecycle.
Do we need to check every region even if we don’t deploy there?
Yes if Security Groups can be created there by your roles or automation. Scope is an audit flashpoint; document regional coverage and restrict unused regions where feasible.
What evidence is usually enough for an audit package?
A control narrative mapped to CIS 4.1, Security Hub EC2.13 detection outputs, remediation tickets/PRs, and proof of preventive guardrails. Auditors typically want to see the control operating over time, not a single point-in-time screenshot 1.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream