CIS AWS Foundations v1.2 3.12: Ensure a log metric filter and alarm exist for changes to network gateways

To meet CIS AWS Foundations v1.2 requirement 3.12 ("Ensure a log metric filter and alarm exist for changes to network gateways"), you must (1) log AWS API activity with CloudTrail into CloudWatch Logs, (2) create a CloudWatch Logs metric filter that matches network gateway change events, and (3) attach a CloudWatch alarm that notifies responders (typically via SNS) when those events occur. 1

Key takeaways:

  • You need both detection (metric filter) and response routing (alarm + notification) for network gateway changes.
  • Scope is AWS accounts where CloudTrail is expected to capture management events; gaps usually come from missing CloudTrail-to-CloudWatch integration.
  • Auditors will ask for proof of: filter pattern, alarm configuration, notification target, and a test or recent alert.

CIS AWS Foundations v1.2 control 3.12 focuses on a narrow but high-signal class of changes: modifications to network gateways. In AWS, gateway changes can reroute traffic, open paths between networks, or disrupt segmentation. Even in well-controlled environments, gateway changes are a common “blast radius” multiplier because they affect connectivity for many workloads at once.

This requirement is operational, not theoretical. You are expected to implement continuous monitoring that triggers quickly when certain API calls occur. AWS Security Hub maps this requirement as a CloudWatch-focused check, which usually implies the control is satisfied by the classic trio: CloudTrail management events → CloudWatch Logs → metric filter → CloudWatch alarm. 2

As the Compliance Officer, CCO, or GRC lead, your job is to make this repeatable across accounts and regions, with clean evidence. That means standardizing filter patterns, ensuring alarms route to an owned on-call path, and proving the control runs continuously (not “configured once and forgotten”).

Regulatory text

Excerpt (provided): “Implement CIS AWS Foundations Benchmark v1.2 requirement 3.12 as mapped in AWS Security Hub.” 1

Operator meaning: You must implement the CIS AWS Foundations v1.2 control that detects and alerts on changes to network gateways, using the mechanism AWS Security Hub expects for this benchmark mapping (CloudWatch metric filter + alarm). 2

What an assessor will expect you to demonstrate:

  1. CloudTrail is recording relevant management events.
  2. Those events are delivered to a CloudWatch Logs log group you control.
  3. A metric filter exists that matches gateway change activity.
  4. An alarm exists on that metric and sends a notification to a response channel. 1

Plain-English interpretation

If someone creates, attaches, detaches, deletes, or otherwise changes a network gateway-related resource, you should not find out during an outage or a post-incident review. You should get an alert in near real time, with enough context to start triage (account, region, principal, API action).

This is not a prevention control. It is a detection and escalation control. Your preventive controls (IAM boundaries, change management, infrastructure-as-code approvals) reduce likelihood, but 3.12 demands you can still detect the event if prevention fails or an emergency change happens.

Who it applies to

Entity scope: Any organization operating AWS accounts and claiming alignment with CIS AWS Foundations Benchmark v1.2. 3

Operational scope (what environments):

  • Production AWS accounts (always).
  • Shared services / networking accounts (especially).
  • Any account that can create or modify gateway resources (common in platform teams).
  • Multi-account organizations where networking is centralized but changes can be initiated from pipelines in other accounts.

Preconditions: This control assumes you have CloudTrail configured to capture management events and that you can deliver CloudTrail events to CloudWatch Logs for filtering and alarms. In practice, most “failed” implementations trace back to CloudTrail not sending to CloudWatch Logs, or sending to the wrong log group.

What you actually need to do (step-by-step)

Step 1 — Confirm CloudTrail logs management events to CloudWatch Logs

  1. Identify the CloudTrail trail(s) that cover your accounts and regions.
  2. Verify the trail includes management events (read/write management API calls).
  3. Confirm the trail is configured with a CloudWatch Logs log group destination (not only S3).
  4. Confirm the IAM role for CloudTrail has permissions to write to that log group.

Practical check: Pick a recent AWS Console action (for example, describing a VPC) and confirm a corresponding CloudTrail management event appears in the CloudWatch Logs log group. If nothing shows up, metric filters and alarms will never fire.
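A minimal sketch of this dependency check, written against the response shape of the CloudTrail `describe_trails` API (the field names below are the real response keys; the trail and bucket names are invented examples). Keeping the logic as a pure function over a trail dict makes it easy to test before wiring it to live credentials:

```python
# Sketch: verify a CloudTrail trail can deliver to CloudWatch Logs.
# The dict shape mirrors the CloudTrail describe_trails response.

def trail_delivers_to_cloudwatch(trail: dict) -> list[str]:
    """Return a list of problems; an empty list means the path looks healthy."""
    problems = []
    if not trail.get("CloudWatchLogsLogGroupArn"):
        problems.append("no CloudWatch Logs log group configured (S3-only trail)")
    if not trail.get("CloudWatchLogsRoleArn"):
        problems.append("no IAM role for CloudTrail to write to CloudWatch Logs")
    if not trail.get("IsMultiRegionTrail", False):
        problems.append("trail is single-region; other regions are blind spots")
    return problems

# Example: an S3-only trail fails all three checks.
s3_only = {"Name": "audit-trail", "S3BucketName": "audit-logs"}
for problem in trail_delivers_to_cloudwatch(s3_only):
    print(problem)
```

In a live account you would feed this from `boto3.client("cloudtrail").describe_trails()["trailList"]`; the point is that a trail without both the log group ARN and the writer role will never feed the metric filter.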

Step 2 — Define what “network gateways” means in your environment

CIS’s intent is to monitor changes that affect network egress/ingress or routing via gateway constructs. In AWS practice, teams often include events for:

  • Internet Gateways (IGW) attach/detach/create/delete
  • NAT Gateways create/delete
  • Virtual Private Gateways (VGW) and VPN-related gateway attachment changes
  • Transit Gateways (TGW) create/attach/detach/modify routes (common in enterprises)

Your exact API event list should be documented as part of your control narrative so an auditor can see the boundary you chose and why it is reasonable.
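One way to keep that documented boundary testable is to encode it as data. In the sketch below, the Internet Gateway and customer gateway event names are the ones the CIS v1.2 benchmark pattern uses; the NAT and Transit Gateway names are a common enterprise extension and should be treated as an assumption to validate against your own architecture:

```python
# Sketch: a documented event-name boundary for "network gateway changes".

GATEWAY_CHANGE_EVENTS = {
    # CIS 3.12 baseline events
    "CreateCustomerGateway", "DeleteCustomerGateway",
    "AttachInternetGateway", "CreateInternetGateway",
    "DeleteInternetGateway", "DetachInternetGateway",
    # common enterprise extensions (document these if you adopt them)
    "CreateNatGateway", "DeleteNatGateway",
    "CreateTransitGateway", "DeleteTransitGateway",
}

def is_gateway_change(cloudtrail_event: dict) -> bool:
    """True if a parsed CloudTrail event falls inside the documented boundary."""
    return cloudtrail_event.get("eventName") in GATEWAY_CHANGE_EVENTS

print(is_gateway_change({"eventName": "AttachInternetGateway"}))  # True
print(is_gateway_change({"eventName": "DescribeNatGateways"}))    # False
```

Note the boundary contains only change verbs; read-only `Describe*` calls stay out, which is what keeps the alarm signal usable.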

Step 3 — Create the CloudWatch Logs metric filter

  1. Choose a naming standard (example: cis-3-12-network-gateway-changes).
  2. In CloudWatch Logs, create a metric filter on the CloudTrail log group.
  3. Use a filter pattern that matches relevant eventName values for gateway changes.
  4. Emit to a dedicated CloudWatch metric namespace (example: CIS/AWS), with a metric name that makes reporting easy.

Implementation note (operator-focused):

  • Keep the filter focused on change events (creates, deletes, attaches, detaches, modifications). If you include high-volume read-only events, you will create alert fatigue and the control will get muted.
  • If you operate multi-region, confirm the log group receives events from all required regions, or deploy filters per region as your architecture requires.
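The steps above can be sketched as a request builder for the CloudWatch Logs `put_metric_filter` API. The filter pattern string is the one published for CIS v1.2 3.12; the log group, filter, namespace, and metric names are examples following this article's naming standard, not fixed requirements:

```python
# Sketch: build the CIS 3.12 metric-filter request.

CIS_3_12_PATTERN = (
    "{ ($.eventName = CreateCustomerGateway) || "
    "($.eventName = DeleteCustomerGateway) || "
    "($.eventName = AttachInternetGateway) || "
    "($.eventName = CreateInternetGateway) || "
    "($.eventName = DeleteInternetGateway) || "
    "($.eventName = DetachInternetGateway) }"
)

def metric_filter_request(log_group: str) -> dict:
    """Kwargs for put_metric_filter on the boto3 `logs` client."""
    return {
        "logGroupName": log_group,
        "filterName": "cis-3-12-network-gateway-changes",
        "filterPattern": CIS_3_12_PATTERN,
        "metricTransformations": [{
            "metricName": "NetworkGatewayChanges",
            "metricNamespace": "CIS/AWS",
            "metricValue": "1",  # each matching event counts as one
        }],
    }

# In a live account (requires credentials):
#   boto3.client("logs").put_metric_filter(**metric_filter_request("CloudTrail/logs"))
```

Because the request is a plain dict, the same definition can be asserted on in CI before it is applied, which supports the verification job described in Step 5.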

Step 4 — Create the CloudWatch alarm and notification path

  1. Create an SNS topic (or use an existing security alert topic) that routes to:
    • Security operations mailbox, ticketing integration, chat integration, or pager workflow (whatever your response model supports).
  2. Create a CloudWatch alarm on the metric generated by the filter.
  3. Set the alarm action to publish to the SNS topic.
  4. Write an internal runbook: what responders do when the alarm fires.
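A sketch of the matching alarm request for the CloudWatch `put_metric_alarm` API. The topic ARN and names are placeholders; the metric name and namespace must match whatever your metric filter actually emits:

```python
# Sketch: an alarm that fires on any gateway change and publishes to SNS.

def gateway_alarm_request(sns_topic_arn: str) -> dict:
    """Kwargs for put_metric_alarm on the boto3 `cloudwatch` client."""
    return {
        "AlarmName": "cis-3-12-network-gateway-changes",
        "MetricName": "NetworkGatewayChanges",   # must match the filter's metric
        "Namespace": "CIS/AWS",
        "Statistic": "Sum",
        "Period": 300,                           # evaluate 5-minute windows
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [sns_topic_arn],
        "TreatMissingData": "notBreaching",      # no gateway changes is normal
    }

# Live call (requires credentials):
#   boto3.client("cloudwatch").put_metric_alarm(
#       **gateway_alarm_request("arn:aws:sns:us-east-1:123456789012:security-alerts"))
```

`TreatMissingData: notBreaching` is a deliberate choice here: gateway changes are rare, so an empty evaluation window should read as healthy rather than as an alarm.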

Runbook minimums:

  • How to identify the triggering CloudTrail event(s).
  • How to validate whether the change was authorized (change ticket, pipeline run, emergency approval).
  • How to contain or roll back if unauthorized.
  • Who owns gateway infrastructure approval.

Step 5 — Make it repeatable across accounts (the difference between “configured” and “operationalized”)

For real programs, you need a rollout mechanism:

  • Infrastructure-as-code module for metric filters and alarms.
  • A baseline account blueprint for new accounts.
  • A periodic verification job that checks the filter and alarm exist and are enabled.
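The verification job's core check can be sketched as a function over the response shapes of the boto3 `describe_metric_filters` and `describe_alarms` calls, passed in as plain dicts so the logic is testable offline (the filter and alarm names are this article's example names):

```python
# Sketch: per-account drift check for the 3.12 filter + alarm pair.

def control_present(filters_resp: dict, alarms_resp: dict,
                    filter_name: str, alarm_name: str) -> dict:
    """Return component-level status for the control in one account/region."""
    filter_ok = any(f.get("filterName") == filter_name
                    for f in filters_resp.get("metricFilters", []))
    alarm = next((a for a in alarms_resp.get("MetricAlarms", [])
                  if a.get("AlarmName") == alarm_name), None)
    return {
        "filter_exists": filter_ok,
        "alarm_exists": alarm is not None,
        "alarm_has_actions": bool(alarm and alarm.get("AlarmActions")),
        "alarm_actions_enabled": bool(alarm and alarm.get("ActionsEnabled")),
    }
```

Running this on a schedule and retaining its output is exactly the "periodic verification job" evidence auditors ask for: any key that flips to False is drift, with a named component to fix.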

AWS Security Hub can help you detect drift by surfacing the mapped control status for accounts in scope. 2

Required evidence and artifacts to retain

Keep evidence that proves design, deployment, and ongoing operation:

Configuration evidence

  • Screenshot or export of CloudTrail trail settings showing CloudWatch Logs integration (log group name and role).
  • CloudWatch Logs metric filter definition (filter pattern, metric namespace/name).
  • CloudWatch alarm configuration (metric, threshold logic, actions).
  • SNS topic configuration and subscription list (showing where alerts go).

Operational evidence

  • A control narrative (one page) describing scope (accounts/regions), event coverage, and response path.
  • A test record (change a non-production gateway resource in a controlled way and capture the alert) or an incident/ticket generated from a real alert.
  • Periodic review record showing filters/alarms remain present and enabled.

Governance artifacts (what auditors ask for)

  • RACI: who owns network gateway changes, who owns alert triage.
  • Link to the runbook and on-call ownership.
  • Exceptions register for accounts that cannot yet meet the requirement, with compensating controls.

Common exam/audit questions and hangups

  • “Which gateway changes trigger an alert?” (tests coverage clarity): show your eventName list and filter pattern, and explain your boundary choices.
  • “How do you know CloudTrail events reach CloudWatch Logs?” (tests data pipeline integrity): show the CloudTrail config plus a sample CloudTrail event in the log group.
  • “Where do alerts go, and who responds?” (tests operational ownership): show SNS subscriptions plus the runbook and ticket/on-call evidence.
  • “Is this deployed across all accounts/regions?” (tests scope completeness): show IaC rollout evidence or per-account Security Hub results. 2
  • “What prevents alert fatigue?” (tests signal quality): show narrow change-only events and documented tuning decisions.

Frequent implementation mistakes (and how to avoid them)

  1. Metric filter exists, but CloudTrail is not sending to CloudWatch Logs.
    Fix: treat the CloudTrail→CloudWatch Logs path as a dependency; test it with a known management event.

  2. Alarm routes to an unmonitored SNS subscription.
    Fix: require an owned queue, ticketing connector, or monitored mailbox; review subscriptions during access reviews.

  3. Overbroad patterns that trigger on reads or unrelated network events.
    Fix: keep to gateway “change verbs” and document the list; tune in non-prod before rollout.

  4. Single-account deployment in a multi-account org.
    Fix: ship as an account baseline; verify in CI/CD and in periodic compliance checks.

  5. No evidence of operation (only “configured once”).
    Fix: retain a test alert record and recurring verification output.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this specific CIS requirement. What matters for risk is straightforward: undetected gateway changes can enable data exfiltration paths, break network segmentation, or cause outages. If you cannot show that alerts fire and reach responders, auditors often rate the control as ineffective even if the filter technically exists.

Practical 30/60/90-day execution plan

First 30 days (establish the control baseline)

  • Inventory: identify which accounts/regions are in scope for CIS AWS Foundations reporting. 3
  • Validate CloudTrail coverage and CloudWatch Logs delivery for each in-scope account.
  • Implement metric filter + alarm in a pilot account (shared services or networking account first).
  • Define response ownership and publish a short runbook.

Next 60 days (scale and standardize)

  • Roll out via infrastructure-as-code or account factory baseline.
  • Standardize naming and tagging so evidence collection is consistent.
  • Connect SNS to your incident workflow (ticket creation or paging) and document the handoff.
  • Create an internal “control check” that confirms: filter exists, alarm enabled, SNS has active subscribers.

By 90 days (prove ongoing operation)

  • Run a controlled test in non-production and retain the alert evidence.
  • Establish a recurring review cadence (for example, as part of cloud security posture review) to catch drift.
  • If you use Security Hub, operationalize reporting on the mapped control so exceptions are visible and owned. 2
  • Optional: centralize alerting to a security account while preserving per-account metric filters for clean boundaries.

How Daydream helps (without changing your AWS architecture)

Most teams struggle less with “creating a filter” and more with proving it stays deployed everywhere and that evidence is audit-ready. Daydream helps you map CIS AWS Foundations v1.2 3.12 to concrete AWS configurations, define the evidence you will retain, and run periodic verification so your audit package is current and consistent across accounts. This aligns with the recommended control approach of mapping the requirement to account controls, evidence queries, and periodic verification. 1

Frequently Asked Questions

Do I need one metric filter per account or can I centralize it?

Either model can work, but the filter must evaluate CloudTrail events from the accounts in scope. Many organizations deploy per-account filters and centralize notifications to a security account to keep ownership clear and evidence simple.

Does Security Hub automatically create the metric filter and alarm for CIS 3.12?

No. Security Hub reports findings for the CIS mapping, but you still implement the underlying CloudWatch and CloudTrail configurations. 2

What gateway events should I include in the filter pattern?

Include gateway “change” API actions relevant to your network architecture (IGW, NAT GW, VGW/VPN, TGW). Document your selected event list so an auditor can see your intended coverage boundary.

What if we manage gateways only through Terraform with approvals? Do we still need the alarm?

Yes for CIS alignment. Change controls reduce likelihood, but CIS 3.12 expects detection and alerting when the change occurs, even if it was authorized. 3

What evidence is strongest for auditors: screenshots or exported config?

Exported configuration (IaC, JSON, CLI output) plus a test alert record is usually stronger than screenshots alone. Screenshots help for quick review, but they are weaker for proving ongoing operation.

How do we test this without risking production?

Trigger a controlled gateway change in a sandbox or non-production account (for example, create and delete a NAT gateway where permitted), then retain the CloudWatch alarm state change and the underlying CloudTrail event as evidence.

Footnotes

  1. CIS AWS Foundations Benchmark; AWS Security Hub CIS AWS Foundations mapping table

  2. AWS Security Hub CIS AWS Foundations mapping table

  3. CIS AWS Foundations Benchmark

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream