CIS AWS Foundations v1.2 3.13: Ensure a log metric filter and alarm exist for route table changes

CIS AWS Foundations v1.2 3.13 requires you to detect and alert on AWS VPC route table changes by creating a CloudWatch Logs metric filter on CloudTrail logs and a CloudWatch alarm tied to that metric. Operationally, you must centralize CloudTrail, define the correct filter pattern, route alerts to an owned response channel, and retain evidence that the control continuously runs.

Key takeaways:

  • You need a CloudTrail-backed log metric filter for route table change events and a CloudWatch alarm on that metric.
  • “Exists” means it’s deployed in the right place (central logging account/region) and actually notifies responders.
  • Audits focus on proof: filter pattern, alarm configuration, notifications, and test alerts tied to CloudTrail events.

CIS AWS Foundations v1.2 3.13 is a detection control. It does not prevent a route table change; it ensures you can see one quickly and respond. Route table edits are high-signal because they can reroute traffic (including to unintended egress paths), expose private workloads, or break segmentation assumptions.

CIS maps this requirement through AWS Security Hub (control identifier CloudWatch.13) and expects you to implement it in a way that is durable across accounts and regions, not as a one-off console configuration in a single workload account 1. In practice, teams satisfy the letter of the requirement (a filter exists) but fail the operational outcome (nobody receives or acts on the alert). That gap is where incidents hide and where auditors probe.

This page translates the requirement into an operator-ready implementation: where to deploy it, what to configure, what evidence to retain, and how to avoid common design mistakes that cause false negatives (no alert when a change happens) or noisy false positives (alerts nobody trusts).

Regulatory text

Excerpt (provided): “Implement CIS AWS Foundations Benchmark v1.2 requirement 3.13 as mapped in AWS Security Hub.” 2

Operator meaning:
You must implement the CIS AWS Foundations Benchmark control for route table changes by using CloudTrail logs to create:

  1. a CloudWatch Logs metric filter that detects route table change API activity, and
  2. a CloudWatch alarm that triggers when the metric increments 2.

What an operator must do is straightforward: ensure CloudTrail events are landing in CloudWatch Logs (or another path that supports metric filters), define a filter pattern that matches route table change events, and create an alarm that notifies a monitored target with clear ownership.

Plain-English interpretation (what “good” looks like)

You are compliant when route table changes produce an alert your team can see and respond to, and you can prove it. Practically:

  • CloudTrail records the management event.
  • The event is delivered to a log group that your metric filter is attached to.
  • The metric filter increments a named metric in a known namespace.
  • The CloudWatch alarm evaluates that metric and sends a notification (typically SNS) to an owned destination (email, ticketing integration, on-call tooling).
  • You periodically validate that the alert still fires after org/account changes 1.

Who it applies to

Entity scope: AWS cloud operators responsible for security monitoring, baseline compliance, and audit readiness in AWS accounts that run VPCs 3.

Operational scope (where this matters):

  • Multi-account AWS Organizations with centralized logging/security accounts.
  • Any environment where network paths matter: segmented VPCs, shared services VPCs, inspection VPCs, hybrid connectivity, regulated workloads, or production environments with strict egress controls.
  • Teams using IaC and CI/CD to manage networking. Route changes can be frequent; you still need detection that distinguishes expected change windows from anomalies.

What you actually need to do (step-by-step)

Step 1 — Confirm CloudTrail management events are captured centrally

  1. Verify you have an org trail or equivalent coverage that records management events for all relevant accounts and regions 3.
  2. Ensure CloudTrail delivers logs to a destination you can attach metric filters to (commonly CloudWatch Logs). If you only store in S3, metric filters won’t apply directly.
  3. Establish a “single pane” location (often the logging or security account) where detection controls live, so you don’t have to replicate per workload account.
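A quick way to sanity-check step 1 is to look for trails that have no CloudWatch Logs destination. This sketch assumes the response shape of CloudTrail's DescribeTrails API (in practice you would feed it `boto3.client("cloudtrail").describe_trails()`); the trail names in the sample are hypothetical.

```python
def trails_without_log_group(describe_trails_response):
    """Return names of trails that lack a CloudWatch Logs destination,
    i.e. trails that cannot feed metric filters directly."""
    return [
        t["Name"]
        for t in describe_trails_response.get("trailList", [])
        if not t.get("CloudWatchLogsLogGroupArn")
    ]

# Hypothetical sample response for illustration:
sample = {
    "trailList": [
        {"Name": "org-trail",
         "CloudWatchLogsLogGroupArn":
             "arn:aws:logs:us-east-1:111111111111:log-group:org-trail:*"},
        {"Name": "legacy-s3-only-trail"},  # S3-only: metric filters will not apply
    ]
}

print(trails_without_log_group(sample))  # trails needing remediation
```

Any trail this returns needs either a CloudWatch Logs integration added or an explicit decision that it is out of scope for this control.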

Operator check: If Security Hub is flagging CloudWatch.13 failures, it often indicates the metric filter/alarm is missing in the account/region where CloudTrail logs are evaluated 1.

Step 2 — Identify the route table change events you want to detect

Route table changes occur through EC2/VPC API actions. Your filter should catch the core modifications and associations. Typical events to include:

  • CreateRoute / DeleteRoute / ReplaceRoute
  • CreateRouteTable / DeleteRouteTable
  • AssociateRouteTable / DisassociateRouteTable
  • ReplaceRouteTableAssociation

Design choice: Decide whether you want any route table change (broad) or only high-risk changes (narrow). CIS intent is broad detection of route table changes, so bias toward coverage first, then tune noise.
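The event list above translates mechanically into a CloudWatch Logs JSON filter pattern of the form `{ ($.eventName = X) || ... }`. A minimal sketch of building that pattern from the list, so the filter and your documentation stay in sync:

```python
# Route table change events to detect (the list from step 2).
ROUTE_TABLE_EVENTS = [
    "CreateRoute", "DeleteRoute", "ReplaceRoute",
    "CreateRouteTable", "DeleteRouteTable",
    "AssociateRouteTable", "DisassociateRouteTable",
    "ReplaceRouteTableAssociation",
]

def build_filter_pattern(event_names):
    """Join event names into a CloudWatch Logs JSON filter pattern."""
    clauses = " || ".join(f"($.eventName = {e})" for e in event_names)
    return "{ " + clauses + " }"

pattern = build_filter_pattern(ROUTE_TABLE_EVENTS)
print(pattern)
```

Generating the pattern from one list makes the "broad first, then tune" decision explicit: narrowing the filter means editing the list, which is easy to review.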

Step 3 — Create a CloudWatch Logs metric filter on the CloudTrail log group

  1. Go to the CloudTrail log group that receives management events.
  2. Create a metric filter with a pattern that matches the selected event names.
  3. Emit the metric to a dedicated namespace (example: CIS/CloudTrail) with a clear metric name (example: RouteTableChanges).
  4. Set metric value to 1 per match.

Practical guidance:

  • Include eventSource checks (for example, EC2 API events) to reduce unrelated matches.
  • If you run multiple trails/log groups, ensure the filter is attached to the authoritative one, not a legacy or partial log group.
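Steps 1-4 above can be captured as a single parameter set. The log group, filter name, and namespace below are illustrative, not prescribed; in practice this dict would be passed to `boto3.client("logs").put_metric_filter(**params)` or the equivalent `aws logs put-metric-filter` CLI call.

```python
FILTER_PATTERN = (
    "{ ($.eventName = CreateRoute) || ($.eventName = DeleteRoute) || "
    "($.eventName = ReplaceRoute) || ($.eventName = CreateRouteTable) || "
    "($.eventName = DeleteRouteTable) || ($.eventName = AssociateRouteTable) || "
    "($.eventName = DisassociateRouteTable) || "
    "($.eventName = ReplaceRouteTableAssociation) }"
)

params = {
    "logGroupName": "org-cloudtrail-logs",        # hypothetical: your authoritative log group
    "filterName": "cis-3-13-route-table-changes",
    "filterPattern": FILTER_PATTERN,
    "metricTransformations": [{
        "metricName": "RouteTableChanges",        # clear metric name
        "metricNamespace": "CIS/CloudTrail",      # dedicated namespace
        "metricValue": "1",                       # emit 1 per matching event
    }],
}
```

Keeping this as code (or IaC) rather than a console click-through is what makes step 3 reproducible in every account and region where it must exist.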

Step 4 — Create an alarm tied to the metric

  1. Create a CloudWatch alarm on the metric emitted by the filter.
  2. Set the alarm to trigger when the metric is >= 1 over an evaluation period appropriate for near-real-time detection (your choice; keep it simple and defensible).
  3. Configure an SNS topic (or equivalent) for notifications.
  4. Subscribe an owned destination: security operations queue, on-call, or ticketing integration.

Ownership requirement (operational): The alarm must notify a team that will act. Document who owns triage and what “expected vs unexpected” looks like.
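A defensible alarm configuration matching the steps above can be sketched as follows. The SNS topic ARN is hypothetical, and the 5-minute period is one reasonable choice, not a requirement; in practice this dict would go to `boto3.client("cloudwatch").put_metric_alarm(**alarm)`.

```python
alarm = {
    "AlarmName": "cis-3-13-route-table-changes",
    "Namespace": "CIS/CloudTrail",                # must match the metric filter's namespace
    "MetricName": "RouteTableChanges",            # must match the emitted metric name
    "Statistic": "Sum",
    "Period": 300,                                # 5-minute window: simple and defensible
    "EvaluationPeriods": 1,
    "Threshold": 1,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",  # fire on >= 1 match
    "TreatMissingData": "notBreaching",           # no events means no noise, not an alarm
    "AlarmActions": [
        "arn:aws:sns:us-east-1:111111111111:security-alerts",  # hypothetical owned topic
    ],
}
```

Note the `TreatMissingData` choice: route table changes are rare in steady state, so the metric will usually have no data, and you do not want that to look like a breach.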

Step 5 — Add response runbook and change correlation

Write a short runbook that answers:

  • How to find the CloudTrail event and extract: actor, source IP, user agent, request parameters, and the route table ID.
  • How to determine whether the change was deployed via IaC pipeline (expected) or console/CLI (higher risk).
  • Containment options: revert route, detach association, restrict IAM, or isolate workload depending on impact.
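The first runbook bullet is mechanical enough to script. This sketch pulls the triage fields out of a CloudTrail record using the standard record layout; the sample event is hypothetical.

```python
def triage_fields(event):
    """Extract actor, source IP, user agent, and route table ID for triage."""
    params = event.get("requestParameters") or {}
    return {
        "actor": event.get("userIdentity", {}).get("arn"),
        "source_ip": event.get("sourceIPAddress"),
        "user_agent": event.get("userAgent"),
        "event_name": event.get("eventName"),
        "route_table_id": params.get("routeTableId"),
    }

# Hypothetical CloudTrail record for a CreateRoute call:
sample_event = {
    "eventName": "CreateRoute",
    "sourceIPAddress": "203.0.113.10",
    "userAgent": "aws-cli/2.15.0",
    "userIdentity": {"arn": "arn:aws:iam::111111111111:user/alice"},
    "requestParameters": {"routeTableId": "rtb-0abc1234",
                          "destinationCidrBlock": "0.0.0.0/0"},
}

print(triage_fields(sample_event))
```

A `0.0.0.0/0` route added by a human IAM user via the CLI, as in this sample, is exactly the "console/CLI, higher risk" case the runbook should escalate.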

Step 6 — Validate with a test (and keep the proof)

Run a controlled change in a non-production environment: perform a route table association or add/remove a route. Confirm:

  • The CloudTrail event appears.
  • The metric increments.
  • The alarm fires.
  • The notification arrives and is tracked to closure.

This test artifact is often the fastest way to satisfy an auditor that the control is not “paper compliance.”

Required evidence and artifacts to retain

Keep evidence that proves design, implementation, and ongoing operation:

Configuration evidence (screenshots or exports):

  • CloudTrail trail configuration showing management event logging and destinations 3.
  • CloudWatch Logs metric filter: name, log group, filter pattern, metric namespace/name.
  • CloudWatch alarm: threshold, metric linkage, notification actions (SNS topic), alarm state history.
  • SNS topic subscriptions (who receives it).

Operational evidence:

  • Latest alert(s) for route table changes, including timestamps and ticket/incident references.
  • A short runbook and ownership mapping (team, rotation, escalation).
  • Periodic verification record (change ticket or scheduled control test) showing the alarm still works after account changes.

Audit tip: Evidence should be tied to the centralized logging location that Security Hub evaluates for the CIS mapping 1.

Common exam/audit questions and hangups

What auditors ask, what they mean, and what to show:

  • “Where is CloudTrail captured and monitored?” means coverage across accounts/regions; show trail settings and the log group where filters live 3.
  • “Prove the alarm would fire.” means they want operational validation; show alarm history plus a test event and notification proof.
  • “Who receives alerts and what happens next?” means ownership and response; show SNS subscriptions, the runbook, and a sample ticket.
  • “Is this deployed consistently?” means drift control; show IaC, org-level deployment, or a centralized security account design.

Frequent implementation mistakes (and how to avoid them)

  1. Filter exists, but CloudTrail logs are not in that log group.

    • Fix: confirm the exact CloudTrail integration path. Metric filters only work on the log group they’re attached to.
  2. Alarm created in the wrong region.

    • Fix: align the alarm with where the metric is emitted. If your CloudTrail/log group is regional, deploy regionally or centralize intentionally.
  3. Too-narrow filter pattern misses events.

    • Fix: start broad on eventName coverage, then tune. Validate by generating multiple route table actions.
  4. Alert goes to an unmonitored inbox.

    • Fix: route to a system with accountability (ticketing/on-call). Keep subscription evidence.
  5. No runbook, so responders guess.

    • Fix: document the minimum triage steps and rollback options, even if your network team owns the change.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this specific CIS requirement, so you should treat this as a benchmark and audit-baseline control rather than a directly enforced rule in itself 3. The real risk is indirect: route table changes can enable data exfiltration paths, break segmentation controls, or create outage conditions. During incident reviews, the absence of detection and alerting becomes a governance failure: “you didn’t know your network path changed.”

Practical 30/60/90-day execution plan

First 30 days (stabilize and get to “exists and works”)

  • Confirm CloudTrail coverage and identify the authoritative CloudTrail log group 3.
  • Implement the metric filter and alarm in the monitored account/region per Security Hub CIS mapping expectations 1.
  • Route notifications to an owned channel and write a one-page runbook.
  • Perform a test route table change in a safe environment and retain the evidence.

Next 60 days (scale and reduce operational risk)

  • Standardize deployment through IaC or org-level automation so new accounts inherit the control.
  • Add tagging or naming standards for alarms (so operators can find them fast).
  • Tune patterns if noise is high, but do not narrow to the point of missing core route table API events.
  • Add a lightweight review process for “expected network changes,” so alerts map cleanly to change records.

By 90 days (continuous control operation)

  • Implement periodic verification (scheduled test or review of alarm history) and retain records.
  • Add response quality checks: confirm alerts result in a closed ticket with the CloudTrail event attached.
  • If you use a compliance platform like Daydream, map this requirement to your AWS accounts, attach evidence queries/screenshots, and schedule recurring verification so audits don’t turn into console archaeology 1.

Frequently Asked Questions

Does this control require CloudWatch Logs, or can I do it with S3-only CloudTrail logs?

The requirement is specifically about a log metric filter and alarm, which in AWS typically means CloudWatch Logs metric filters tied to CloudTrail logs 1. If you only log to S3, you’ll need an additional path that supports metric-based alarming.

Which account should own the metric filter and alarm in a multi-account AWS Organization?

Put the control where the CloudTrail log group is monitored and centralized, commonly a logging or security account 1. The key is consistent coverage and clear ownership, not scattering alarms across workload accounts.

What route table actions should the filter match?

Match the common route and association change actions (create/delete/replace routes; associate/disassociate/replace route table associations) to cover the meaningful change surface. Validate by testing more than one type of change so you don’t miss an event variant.

How do I prove to an auditor that the alarm is effective?

Show the metric filter configuration, alarm configuration, and a test event that produced a notification and a tracked response artifact (ticket/incident record). Alarm history alone is weaker without proof the end-to-end notification path works.

We deploy routes through Terraform. Won’t this alert constantly?

It may alert frequently if your environment changes routes often. Keep the alert but route it into a workflow that correlates to approved change windows, or include runbook steps for responders to confirm the change came from the CI/CD role expected for IaC.
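That correlation step can be a one-liner in triage tooling: compare the CloudTrail actor ARN against the role your pipeline assumes. The role name below is a hypothetical example of an IaC deploy role.

```python
EXPECTED_IAC_ROLES = ("role/terraform-deploy",)  # hypothetical pipeline role(s)

def is_expected_iac_change(event):
    """True if the CloudTrail actor matches a known IaC pipeline role."""
    arn = event.get("userIdentity", {}).get("arn", "")
    return any(role in arn for role in EXPECTED_IAC_ROLES)

pipeline_event = {"userIdentity": {
    "arn": "arn:aws:sts::111111111111:assumed-role/terraform-deploy/ci-run-42"}}
console_event = {"userIdentity": {
    "arn": "arn:aws:iam::111111111111:user/alice"}}

print(is_expected_iac_change(pipeline_event))  # expected change
print(is_expected_iac_change(console_event))   # escalate for review
```

This keeps the alert broad (per CIS intent) while letting responders close expected-change alerts quickly instead of muting the alarm.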

What’s the minimum ongoing maintenance?

Re-verify after changes to CloudTrail, logging architecture, account vending, or region expansion. The common failure mode is drift: the alarm remains, but the log source moved.

Footnotes

  1. AWS Security Hub CIS AWS Foundations mapping table

  2. CIS AWS Foundations Benchmark; AWS Security Hub CIS AWS Foundations mapping table

  3. CIS AWS Foundations Benchmark


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream