CIS AWS Foundations v1.2 3.10: Ensure a log metric filter and alarm exist for security group changes
To meet CIS AWS Foundations v1.2 3.10 (“Ensure a log metric filter and alarm exist for security group changes”), you must detect AWS security group changes in CloudTrail logs, match those events with a CloudWatch Logs metric filter, and trigger an alarm that notifies your security response channel. Operationally, this is a “signal pipeline”: CloudTrail → CloudWatch Logs metric filter → CloudWatch alarm → notification/ticketing.
Key takeaways:
- You need both detection (metric filter) and response routing (alarm actions) for security group change events.
- The control fails in audits when CloudTrail isn’t centrally logging, the filter pattern is wrong, or alarms notify nobody.
- Evidence must prove the pipeline works across accounts/regions and is monitored over time.
Security groups are one of the fastest ways to change effective network exposure in AWS. A single inbound rule change can turn a private workload into an internet-reachable one, or open a sensitive port to broad CIDRs. CIS AWS Foundations v1.2 control 3.10 addresses this operational risk by requiring you to detect security group changes and alert on them quickly enough for triage and rollback.
For a Compliance Officer, CCO, or GRC lead, the most practical framing is: “Do we reliably generate an alert whenever someone changes a security group, and can we prove it?” Your objective is not to prevent all changes; it’s to ensure changes are visible, attributable, and routed into an incident workflow.
This page translates the benchmark requirement into implementable steps, evidence to retain, and audit-ready talking points. The implementation is typically owned by Cloud Security/Platform Engineering, but the compliance owner needs to define the standard (what must be detected, where alerts go, and how it’s tested) and verify coverage across the AWS estate. Sources for mapping and benchmark context: [1] and [2].
Regulatory text
Requirement excerpt (provided): “Implement CIS AWS Foundations Benchmark v1.2 requirement 3.10 as mapped in AWS Security Hub.” [1][2]
Operator interpretation of the text:
You must implement the AWS Security Hub-mapped control for CIS AWS Foundations v1.2 3.10, which expects:
- CloudTrail events for security group changes are logged,
- a CloudWatch Logs metric filter detects those events, and
- a CloudWatch alarm triggers on that metric and routes to a monitored notification path (for example, SNS to email, ChatOps, or incident tooling). [2]
Plain-English interpretation (what the requirement really means)
You need an automated alert any time someone creates, deletes, or modifies a security group or its rules. The alert must be based on authoritative audit logs (CloudTrail), not an ad-hoc script on a single instance. If an engineer makes an emergency rule change, your security operations process should still see it, review it, and confirm it was authorized.
This is a “detect and notify” requirement. It does not mandate a specific remediation action, but your program should connect the alarm to a defined triage and documentation workflow.
Who it applies to
Entity scope
- AWS cloud operators and organizations running workloads in AWS accounts where security groups are used to control network access. [1]
Operational scope (where this matters)
- All production accounts and any non-production accounts that connect to production networks, handle regulated data, or host identity/security tooling.
- All enabled AWS regions where security groups can be changed and CloudTrail events can occur.
- Multi-account organizations (AWS Organizations) where central security teams need consistent detection across accounts.
Control owners (typical RACI)
- Engineering owner: Cloud Platform / Cloud Security (builds CloudTrail + metric filter + alarm + notification)
- Compliance owner: GRC (sets the requirement, validates coverage, retains evidence, ensures periodic verification)
- SOC/IR owner: Security Operations (receives alarms, triages, documents outcomes)
What you actually need to do (step-by-step)
Step 1: Confirm CloudTrail is capturing the right events
- Verify at least one trail is enabled and configured for management events (security group changes are management events).
- Verify CloudTrail is delivering to CloudWatch Logs (either directly, or via a centralized logging pattern that still results in a CloudWatch Logs group you can filter).
- Confirm coverage across regions you operate in, or implement an organization-level trail pattern for consistency.
Audit intent: If CloudTrail isn’t logging, metric filters and alarms are cosmetic.
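The prerequisites above can be sanity-checked programmatically. A minimal sketch, using the field names CloudTrail’s DescribeTrails and GetEventSelectors APIs return; the sample dicts are illustrative, and in practice you would fetch the real ones with boto3 (`describe_trails()`, `get_event_selectors()`):

```python
# Sketch: validate a CloudTrail trail against this control's prerequisites.
# The dict shapes mirror CloudTrail's DescribeTrails / GetEventSelectors
# API responses; fetch real ones with boto3 in practice.

def trail_supports_control(trail: dict, event_selectors: list) -> list:
    """Return a list of problems; an empty list means the trail can feed the filter."""
    problems = []
    if not trail.get("CloudWatchLogsLogGroupArn"):
        problems.append("trail does not deliver to CloudWatch Logs")
    if not trail.get("IsMultiRegionTrail"):
        problems.append("trail is single-region; changes elsewhere are missed")
    if not any(s.get("IncludeManagementEvents") for s in event_selectors):
        problems.append("management events are not captured")
    return problems

trail = {
    "Name": "org-trail",
    "IsMultiRegionTrail": True,
    "CloudWatchLogsLogGroupArn": "arn:aws:logs:us-east-1:111111111111:log-group:CloudTrail/logs:*",
}
selectors = [{"ReadWriteType": "All", "IncludeManagementEvents": True}]
print(trail_supports_control(trail, selectors))  # [] -> prerequisites met
```

Running the same check against every trail in every in-scope account turns “CloudTrail is logging” from an assumption into retained evidence.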
Step 2: Identify the security group change events to detect
At minimum, your detection should cover common EC2 security group API actions, such as:
- CreateSecurityGroup / DeleteSecurityGroup
- AuthorizeSecurityGroupIngress / RevokeSecurityGroupIngress
- AuthorizeSecurityGroupEgress / RevokeSecurityGroupEgress
Your engineers may expand to related events (for example, security group updates, VPC-related changes), but don’t dilute the core requirement. Keep a written “event coverage list” as part of your control statement.
Step 3: Implement the CloudWatch Logs metric filter
- Choose the CloudWatch Logs group that contains CloudTrail events (often named for the trail).
- Create a metric filter pattern that matches the security group change events from Step 2.
- Configure the filter to emit a custom metric into a dedicated namespace (for example, CIS/AWSFoundations), with a metric value increment for each match.
Practical tip: Get the pattern reviewed by someone who has parsed CloudTrail JSON before. Most failures come from filter patterns that never match real events.
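A rough pre-deployment check can confirm the pattern covers the event list from Step 2. The sketch below embeds the filter pattern commonly published for CIS v1.2 3.10 (verify against your copy of the benchmark) and simulates matching locally; it is not CloudWatch’s actual filter engine:

```python
# The metric filter pattern commonly published for CIS v1.2 control 3.10.
# Deploy it with CloudWatch Logs put-metric-filter; verify the exact text
# against your copy of the benchmark.
FILTER_PATTERN = (
    "{ ($.eventName = AuthorizeSecurityGroupIngress) || "
    "($.eventName = AuthorizeSecurityGroupEgress) || "
    "($.eventName = RevokeSecurityGroupIngress) || "
    "($.eventName = RevokeSecurityGroupEgress) || "
    "($.eventName = CreateSecurityGroup) || "
    "($.eventName = DeleteSecurityGroup) }"
)

# Crude local simulation: extract the eventName terms and test sample
# CloudTrail records against them. A pre-deployment sanity check only.
import re

WATCHED_EVENTS = set(re.findall(r"\$\.eventName = (\w+)", FILTER_PATTERN))

def matches_filter(cloudtrail_record: dict) -> bool:
    return cloudtrail_record.get("eventName") in WATCHED_EVENTS

print(matches_filter({"eventName": "AuthorizeSecurityGroupIngress"}))  # True
print(matches_filter({"eventName": "RunInstances"}))                   # False
```

Feeding real CloudTrail samples from several accounts through `matches_filter` is a cheap way to catch the “pattern never matches real events” failure mode before an auditor does.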
Step 4: Create a CloudWatch alarm on that metric
- Create an alarm that triggers when the metric is greater than a threshold in a short evaluation window (the goal is alerting on any change, not trending).
- Set alarm actions to an SNS topic or equivalent that is actively monitored.
- Include a runbook link in the alarm description: “What does the on-call analyst do next?”
Minimum bar: An alarm that goes to an unmonitored email inbox fails the spirit of the control and commonly fails operational testing.
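A minimal sketch of an alarm configuration that satisfies this intent, expressed as the parameter dict you would pass to CloudWatch’s `put_metric_alarm`; the namespace, metric name, and topic ARN are illustrative assumptions, not prescribed values:

```python
# Sketch: parameters for CloudWatch put_metric_alarm that implement
# "alert on any change": threshold >= 1 over a single short period.
# Names and ARNs below are illustrative.

def build_alarm_params(metric_name: str, sns_topic_arn: str) -> dict:
    return {
        "AlarmName": f"cis-3.10-{metric_name}",
        "AlarmDescription": "Security group change detected. Runbook: <link>",
        "Namespace": "CIS/AWSFoundations",   # must match the metric filter
        "MetricName": metric_name,
        "Statistic": "Sum",
        "Period": 300,                        # 5-minute evaluation window
        "EvaluationPeriods": 1,               # fire on the first breach
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "TreatMissingData": "notBreaching",   # no changes -> no alarm
        "AlarmActions": [sns_topic_arn],
    }

params = build_alarm_params("SecurityGroupChanges",
                            "arn:aws:sns:us-east-1:111111111111:sec-alerts")
# In practice: boto3.client("cloudwatch").put_metric_alarm(**params)
print(params["ComparisonOperator"])
```

`TreatMissingData: notBreaching` keeps quiet periods (no security group activity, so no metric datapoints) from flapping the alarm state.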
Step 5: Route notifications into a response workflow
Decide where these alerts go:
- SOC queue (ticket created)
- Pager/on-call rotation for high-risk environments
- ChatOps channel with ownership
Define triage expectations:
- Validate the actor (user/role), source IP, and time.
- Identify what changed (which security group, which rule, which ports/CIDRs).
- Confirm change approval (change ticket) or document emergency exception.
- Roll back if unauthorized.
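The triage fields above can be pulled mechanically from the CloudTrail record itself. A sketch, assuming the nested “items” shape that EC2 request parameters typically have in CloudTrail records; verify the shape against your own events before relying on it:

```python
# Sketch: extract triage fields (actor, source IP, what changed) from a
# CloudTrail record. The nested "items" wrappers follow how EC2 request
# parameters commonly appear in CloudTrail; confirm against real events.

def triage_summary(record: dict) -> dict:
    params = record.get("requestParameters", {})
    rules = []
    for perm in params.get("ipPermissions", {}).get("items", []):
        for rng in perm.get("ipRanges", {}).get("items", []):
            rules.append((perm.get("fromPort"), perm.get("toPort"),
                          rng.get("cidrIp")))
    return {
        "actor": record.get("userIdentity", {}).get("arn"),
        "source_ip": record.get("sourceIPAddress"),
        "time": record.get("eventTime"),
        "action": record.get("eventName"),
        "group": params.get("groupId"),
        "rules": rules,
    }

sample = {
    "eventTime": "2024-05-01T12:00:00Z",
    "eventName": "AuthorizeSecurityGroupIngress",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"arn": "arn:aws:iam::111111111111:role/deployer"},
    "requestParameters": {
        "groupId": "sg-0123456789abcdef0",
        "ipPermissions": {"items": [{
            "fromPort": 22, "toPort": 22,
            "ipRanges": {"items": [{"cidrIp": "0.0.0.0/0"}]},
        }]},
    },
}
print(triage_summary(sample)["rules"])  # [(22, 22, '0.0.0.0/0')]
```

Emitting this summary into the alert body gives the on-call analyst the actor, group, ports, and CIDRs without opening the console.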
Step 6: Test the control end-to-end
- Make a controlled security group change in a test account (or a sandbox VPC).
- Confirm CloudTrail records the event.
- Confirm the metric filter increments the metric.
- Confirm the alarm enters ALARM state.
- Confirm your notification arrives and results in an incident record (ticket/chat log).
Retain the test evidence. In audits, “we believe it works” is not evidence.
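One way to turn the end-to-end test into checkable evidence is to assert that the alarm entered ALARM state within a tolerance window after the controlled change. A sketch with illustrative timestamps and a hypothetical 15-minute tolerance (tune to your own pipeline latency):

```python
# Sketch: verify end-to-end test evidence - did the alarm enter ALARM
# state within a tolerance window after the controlled change?
# Timestamps and the 15-minute tolerance are illustrative assumptions.
from datetime import datetime, timedelta

def pipeline_fired(change_time: str, alarm_history: list,
                   within_minutes: int = 15) -> bool:
    changed = datetime.fromisoformat(change_time)
    deadline = changed + timedelta(minutes=within_minutes)
    return any(
        h["newState"] == "ALARM"
        and changed <= datetime.fromisoformat(h["timestamp"]) <= deadline
        for h in alarm_history
    )

history = [{"timestamp": "2024-05-01T12:06:30", "newState": "ALARM"}]
print(pipeline_fired("2024-05-01T12:00:00", history))  # True
```

Storing the change timestamp, the alarm history export, and this pass/fail result together gives you a dated, self-explanatory evidence artifact.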
Step 7: Operationalize ongoing verification
Set a lightweight verification cadence:
- Review Security Hub findings for this control and investigate failures. [2]
- Re-test after major logging changes (trail changes, org restructuring, region expansion).
- Monitor for alarm misconfigurations (disabled actions, deleted SNS subscriptions).
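The misconfiguration checks above lend themselves to a periodic scripted sweep. A sketch over dicts shaped like CloudWatch DescribeAlarms and SNS list-subscriptions output; names and ARNs are illustrative:

```python
# Sketch: drift check for the alarm and its notification path. Input
# shapes mirror CloudWatch DescribeAlarms and SNS ListSubscriptionsByTopic
# responses; fetch real ones with boto3 on a schedule.

def alarm_drift(alarm: dict, subscriptions: list) -> list:
    issues = []
    if not alarm.get("ActionsEnabled", False):
        issues.append("alarm actions are disabled")
    if not alarm.get("AlarmActions"):
        issues.append("alarm has no actions")
    confirmed = [
        s for s in subscriptions
        if s.get("SubscriptionArn") not in (None, "PendingConfirmation", "Deleted")
    ]
    if not confirmed:
        issues.append("SNS topic has no confirmed subscriptions")
    return issues

alarm = {"AlarmName": "cis-3.10-SecurityGroupChanges",
         "ActionsEnabled": True,
         "AlarmActions": ["arn:aws:sns:us-east-1:111111111111:sec-alerts"]}
subs = [{"SubscriptionArn": "arn:aws:sns:us-east-1:111111111111:sec-alerts:ab12"}]
print(alarm_drift(alarm, subs))  # []
```

Any non-empty result is itself a finding to ticket: the detection pipeline silently losing its notification path is the most common way this control decays.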
Required evidence and artifacts to retain
Retain evidence that proves design, implementation, and operation:
Design evidence (what you intended)
- Control statement: “We alert on security group changes via CloudTrail → CloudWatch metric filter → CloudWatch alarm.”
- Event coverage list (API actions included).
- Notification and triage runbook (who receives alerts, expected response actions).
Implementation evidence (what is configured)
- Screenshot or exported config of:
- CloudTrail trail settings and target CloudWatch Logs group
- Metric filter definition (pattern + metric namespace/name)
- Alarm configuration (metric, threshold logic, actions)
- SNS topic subscriptions / routing configuration
Operating evidence (proof it works)
- Test record showing a security group change and resulting alert, including timestamps.
- Recent alarm history or incident tickets tied to real events (redact as needed).
- Security Hub control status reports showing pass/fail and remediation tracking. [2]
Common exam/audit questions and hangups
Use these as a prep checklist:
- “Show me the alarm action.” Auditors often find alarms created with no action or actions disabled.
- “Which log group is filtered?” Teams point to a metric filter on the wrong log group (empty, old, or not CloudTrail).
- “Is this enabled in all accounts and regions?” One-account implementation is a common gap in AWS Organizations environments.
- “How do you know the filter pattern matches real events?” Expect a request for a test or event sample.
- “Who receives the alert and what do they do?” A technical control without an owned response path reads as incomplete control operation.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | How to avoid it |
|---|---|---|
| CloudTrail not connected to CloudWatch Logs | Metric filters can’t see events | Validate trail delivery and log ingestion first |
| Filter pattern too narrow | Misses rule changes (ingress/egress) | Start with the core event set; test with real changes |
| Alarm not routed to monitored channel | No response, no accountability | Route to SOC/on-call; verify subscription health |
| Single-region assumptions | Changes in other regions go undetected | Use org-wide patterns and confirm region coverage |
| No end-to-end test evidence | Hard to prove operation | Perform a controlled change and retain artifacts |
Enforcement context and risk implications
No public enforcement cases were provided for this specific CIS AWS Foundations requirement in the supplied sources. Practically, the risk is straightforward: undetected security group changes increase the chance that misconfigurations persist long enough to be exploited or to violate internal access standards. From a governance perspective, this control supports auditability and change accountability by tying network exposure changes to identities and timestamps in CloudTrail. [1]
Practical 30/60/90-day execution plan
Use phases rather than date promises. Your actual timing depends on how centralized your logging and AWS Organizations governance already is.
First 30 days: Establish minimum compliant detection
- Inventory AWS accounts/regions in scope and confirm CloudTrail logging path.
- Implement metric filters and alarms in priority accounts (production, shared services).
- Route notifications to an owned channel (SOC queue or on-call) and publish the triage runbook.
- Execute and store one end-to-end test per environment tier.
Days 31–60: Expand coverage and reduce false negatives
- Roll out via infrastructure-as-code across all in-scope accounts.
- Standardize naming conventions for metrics/alarms and tags for ownership.
- Validate filter patterns against real CloudTrail samples from multiple accounts.
- Add periodic review: Security Hub findings review and remediation tracking. [2]
Days 61–90: Prove operations and make audits easy
- Implement continuous compliance checks (for example, Security Hub-based monitoring plus configuration drift detection).
- Build an evidence bundle template: configs + alarm history + test record + runbook.
- Run a tabletop exercise: “unauthorized security group change” triage and rollback steps, with documentation retained.
Where Daydream fits naturally: once the control is deployed, Daydream helps you keep the requirement mapped to concrete AWS configurations, collect repeatable evidence, and schedule periodic verification without rebuilding the audit packet every cycle.
Frequently Asked Questions
Do we need one metric filter per account, or can we centralize it?
You can centralize if CloudTrail logs are centralized into a log group you control and the metric/alarm still reliably reflects all account activity. Auditors will focus on coverage and proof, not the org design.
What security group events should the filter match?
Cover create/delete and rule authorization/revocation for both ingress and egress as a baseline. Keep a written list of API actions you match and show a test that proves at least one event triggers the alarm.
Does Security Hub automatically satisfy this requirement?
Security Hub reports whether the CIS-mapped control is passing and helps you find gaps, but it does not implement the logging, metric filter, alarm, or notification routing by itself. Use the Security Hub mapping to validate coverage. [2]
How do we prove the alarm is “working” for an audit?
Provide configuration exports plus a dated end-to-end test showing a security group change event in CloudTrail and the corresponding alarm notification. Alarm history or an incident ticket tied to the alert strengthens the operating evidence.
What if we have a lot of expected security group changes and don’t want alert fatigue?
Route alerts to a triage queue with clear closure criteria, and tune response workflows before tuning detection away. If you suppress alerts, document the suppression logic and show you still detect risky changes.
Do we have to alert on changes made by automation (Terraform/CloudFormation)?
Yes, the requirement is about change visibility, regardless of actor. Your triage process can treat approved automation differently, but the detection should still fire and be attributable to the assumed role or pipeline identity.
Footnotes
[1] CIS AWS Foundations Benchmark v1.2 (Center for Internet Security).
[2] AWS Security Hub CIS AWS Foundations mapping table.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream