CIS AWS Foundations v1.2 3.7: Ensure a log metric filter and alarm exist for disabling or scheduled deletion of customer managed keys
To meet CIS AWS Foundations v1.2 3.7, you must detect and alert on any attempt to disable a customer managed AWS KMS key or schedule it for deletion by creating a CloudWatch Logs metric filter on CloudTrail events and an associated CloudWatch alarm (often routed to SNS/on-call). This gives you rapid response when encryption protections are being removed.
Key takeaways:
- You need two things: a log metric filter (detection) and an alarm (notification/response trigger).
- The filter must cover DisableKey and ScheduleKeyDeletion events for customer managed keys via CloudTrail logs.
- Auditors will expect evidence of configuration, alert routing, and proof it works (test event + ticket/notification).
Disabling or scheduling deletion of a customer managed KMS key is a high-impact action: it can break decryption for data at rest, stop services that depend on encryption, and create an availability incident that looks like “data loss” even when data still exists. Attackers and careless admins both target key lifecycle actions because they are fast, privileged, and frequently under-monitored.
The CIS AWS Foundations v1.2 3.7 requirement (ensure a log metric filter and alarm exist for disabling or scheduled deletion of customer managed keys) focuses on one operational outcome: you get alerted when someone disables a customer managed key or sets it on a deletion path, so you can respond before the key becomes unavailable or is permanently destroyed.
This page is written for a Compliance Officer, CCO, or GRC lead who needs to operationalize the requirement quickly across one or more AWS accounts. It prioritizes implementable steps, audit-ready evidence, and the common places teams fail (wrong log source, missing regions, no alarm action, or alerts that never page anyone).
Regulatory text
Excerpt / requirement statement: “Implement CIS AWS Foundations Benchmark v1.2 requirement 3.7 as mapped in AWS Security Hub.” 1
Operator interpretation: You must implement the CIS control that maps (in AWS Security Hub) to detecting KMS key disablement and scheduled deletion events, then alerting on them. In practice, that means:
- CloudTrail is logging KMS management events into CloudWatch Logs,
- a CloudWatch Logs metric filter matches the relevant CloudTrail events, and
- a CloudWatch alarm exists and routes notifications to a monitored channel. 2
Plain-English interpretation (what the control is really asking)
You need a reliable “tripwire” for KMS key lifecycle actions that remove encryption capability. The tripwire is built from:
- Signal source: CloudTrail events for AWS KMS API calls.
- Detection: A metric filter that increments when a matching event appears in logs.
- Response hook: A CloudWatch alarm that fires when the metric is non-zero and notifies humans and/or an incident workflow.
This control is not asking you to prevent key deletion (that’s separate, via IAM/SCPs and key policies). It is asking you to detect and alert so you can respond quickly.
Who it applies to
Entity types: AWS cloud operators responsible for security monitoring and baseline controls 3
Operational scope:
- Any AWS account where you create and manage customer managed KMS keys (CMKs).
- Any environment where KMS keys protect regulated data, production workloads, backups, logs, or secrets.
- Especially relevant for multi-account orgs using AWS Organizations, where an action in one account can break shared services or encrypted artifacts copied across accounts.
Ownership model (who does what):
- Cloud/security engineering: Implements metric filter, alarm, and routing.
- Platform/operations: Ensures on-call or ticketing intake works and is staffed.
- GRC: Defines evidence standards, testing frequency, and exception handling.
What you actually need to do (step-by-step)
Step 1 — Confirm CloudTrail is capturing KMS management events into CloudWatch Logs
- Identify the CloudTrail trail(s) covering your accounts and regions.
- Verify the trail delivers logs to CloudWatch Logs (not only S3) so metric filters can run on a log group.
- Confirm KMS management events are included. Most teams rely on “Management events: Read/Write” coverage; the key point is that KMS API calls like `DisableKey` and `ScheduleKeyDeletion` appear in CloudTrail.
Evidence tip: Capture a screenshot or CLI output showing the trail, its log group, and that it is enabled.
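As a lightweight check for Step 1, a helper like the one below can be run over the `trailList` entries returned by `aws cloudtrail describe-trails` (or boto3's `describe_trails`). This is a minimal sketch, assuming you already have the trail descriptions in hand; the `CloudWatchLogsLogGroupArn` field is what CloudTrail populates when a trail delivers to CloudWatch Logs.

```python
def trail_supports_metric_filters(trail: dict) -> bool:
    """Return True if a CloudTrail trail description shows delivery to
    CloudWatch Logs, which is required before metric filters can run.
    `trail` is one entry from describe-trails' trailList."""
    return bool(trail.get("CloudWatchLogsLogGroupArn"))


# Example: an S3-only trail fails the check, a CW Logs-backed trail passes.
org_trail = {
    "Name": "org-trail",
    "CloudWatchLogsLogGroupArn": (
        "arn:aws:logs:us-east-1:111122223333:log-group:CloudTrail/logs:*"
    ),
}
s3_only_trail = {"Name": "s3-only-trail"}
```

Feeding every in-scope trail through this check is an easy way to catch the "logs only to S3" failure mode before you build filters.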
Step 2 — Create a CloudWatch Logs metric filter for key disablement and scheduled deletion
Create (or standardize) a metric filter on the CloudTrail log group that matches these event names:
- `DisableKey`
- `ScheduleKeyDeletion`
A common pattern is to match CloudTrail JSON fields such as:
`eventSource = kms.amazonaws.com` and `eventName` in (`DisableKey`, `ScheduleKeyDeletion`)
If your environment is mature, you can add additional scoping fields for investigation (for example: userIdentity.type, recipientAccountId, awsRegion). Keep the filter broad enough that you do not miss real events.
Operational note: This requirement specifically targets customer managed keys. CloudTrail events alone may not always distinguish key manager type cleanly at filter time. Most teams satisfy the intent by alerting on these event names for KMS and handling “AWS managed key” false positives during triage, then tuning later.
Evidence tip: Save the metric filter definition (JSON/text export) and the metric namespace/name you chose.
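The matching described above is commonly expressed as the filter pattern below (this mirrors the widely published CIS guidance for this control; verify it against events in your own log group). The small Python function is not part of the benchmark, just a local mirror of the same logic that is convenient for unit-testing triage tooling against sample CloudTrail records:

```python
# CloudWatch Logs metric filter pattern commonly used for CIS 3.7,
# applied to the CloudTrail log group via console, CLI, or IaC.
CIS_3_7_PATTERN = (
    "{ ($.eventSource = kms.amazonaws.com) && "
    "(($.eventName = DisableKey) || ($.eventName = ScheduleKeyDeletion)) }"
)


def matches_cis_3_7(event: dict) -> bool:
    """Offline mirror of the filter pattern's logic, for testing against
    parsed CloudTrail records before (or after) deployment."""
    return (
        event.get("eventSource") == "kms.amazonaws.com"
        and event.get("eventName") in ("DisableKey", "ScheduleKeyDeletion")
    )
```

Running a handful of real exported CloudTrail records through `matches_cis_3_7` is a cheap sanity check that the deployed pattern will see the events you expect.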
Step 3 — Create a CloudWatch alarm tied to the metric
- Create an alarm on the metric produced by the filter.
- Configure the alarm threshold so it triggers on the first matching event (typical design: alarm when metric is greater than zero within the evaluation window).
- Set alarm actions to notify a monitored destination:
- SNS topic subscribed by on-call email/paging
- ChatOps integration
- Incident/ticket automation
Design expectation: If the alarm fires but nobody receives it, auditors will treat the control as ineffective. Treat alert routing as part of the control, not “nice to have.”
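To make the "fires on the first event" design concrete, here is a sketch of the keyword arguments you might pass to boto3's `cloudwatch.put_metric_alarm()`. The alarm name, metric name, namespace, and topic ARN are placeholders, not prescribed values; substitute your own and review the evaluation settings against your noise tolerance.

```python
def build_cis_3_7_alarm(metric_name: str, namespace: str,
                        sns_topic_arn: str) -> dict:
    """Build kwargs for cloudwatch.put_metric_alarm() so the alarm
    triggers on the first matching event and notifies an SNS topic."""
    return {
        "AlarmName": "cis-3-7-kms-key-disable-or-delete",  # placeholder
        "MetricName": metric_name,
        "Namespace": namespace,
        "Statistic": "Sum",
        "Period": 300,                # 5-minute window
        "EvaluationPeriods": 1,
        "Threshold": 1,               # any event count >= 1 alarms
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "TreatMissingData": "notBreaching",  # no events => stay OK
        "AlarmActions": [sns_topic_arn],     # routing is part of the control
    }


params = build_cis_3_7_alarm(
    "KMSCustomerKeyDeletion", "CISBenchmark",
    "arn:aws:sns:us-east-1:111122223333:security-alerts",
)
```

`TreatMissingData: notBreaching` keeps the alarm quiet in the common case where no KMS lifecycle events occur in a period, while still firing immediately when one does.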
Step 4 — Route alerts into an incident workflow with clear triage steps
Define a lightweight runbook. Minimum viable runbook:
- Identify the key ID/ARN from the CloudTrail event.
- Identify who made the change (principal, role, source IP).
- Determine whether the action is approved (change ticket) or suspicious.
- If suspicious or unapproved:
- Re-enable the key (if disabled).
- Cancel the deletion (if scheduled); KMS allows `CancelKeyDeletion` at any point during the key's deletion waiting period.
- Contain the principal (disable access keys, revoke sessions, restrict role).
- Notify data/application owners if systems could be impacted.
Evidence tip: Store the runbook in your GRC repository or ops wiki and link it in your control record.
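The first two runbook steps (identify the key and the actor) can be partially automated. This sketch pulls the triage fields out of a parsed CloudTrail record; the field names follow the CloudTrail record format, but the output shape and key names are this example's own choices:

```python
def summarize_kms_event(event: dict) -> dict:
    """Extract the triage fields (key ARN, principal, source IP) from a
    parsed CloudTrail record, per the runbook steps above."""
    identity = event.get("userIdentity", {})
    return {
        "key_arn": (event.get("resources") or [{}])[0].get("ARN"),
        "action": event.get("eventName"),
        "principal": identity.get("arn"),
        "principal_type": identity.get("type"),
        "source_ip": event.get("sourceIPAddress"),
        "region": event.get("awsRegion"),
    }


sample = {
    "eventSource": "kms.amazonaws.com",
    "eventName": "DisableKey",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.10",
    "userIdentity": {"type": "IAMUser",
                     "arn": "arn:aws:iam::111122223333:user/alice"},
    "resources": [{"ARN": "arn:aws:kms:us-east-1:111122223333:key/1234abcd"}],
}
```

A summary like this can be attached to the alert or ticket automatically, so the responder starts with the who/what/where instead of digging through raw JSON.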
Step 5 — Validate by testing (and keep the proof)
Run a controlled test in a non-production account (or a test key in production with approval):
- Disable a test customer managed key (or schedule deletion with a long waiting period), then confirm:
- CloudTrail log entry exists
- Metric increments
- Alarm enters ALARM state
- Notification arrives
- Ticket/incident is created (if applicable)
Retain the test record (timestamp, event, alarm state change, notification/ticket ID). This is high-value audit evidence because it proves end-to-end operation.
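If you collect these test records programmatically, a small completeness check keeps the evidence package honest. The field names below are one possible schema (an assumption of this sketch, not an audit standard); adjust them to your own evidence template:

```python
# Hypothetical field names for an end-to-end test record.
REQUIRED_EVIDENCE_FIELDS = (
    "timestamp",
    "cloudtrail_event_id",
    "metric_datapoint",
    "alarm_state_change",
    "notification_or_ticket_id",
)


def missing_evidence(record: dict) -> list:
    """Return the names of any end-to-end artifacts the test record still
    lacks; an empty list means the evidence package is complete."""
    return [f for f in REQUIRED_EVIDENCE_FIELDS if not record.get(f)]
```

Running this check at evidence-collection time flags incomplete packages immediately, instead of discovering the gap during the audit.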
Step 6 — Scale across accounts and regions (standardization)
For multi-account setups:
- Standardize via infrastructure-as-code (CloudFormation/Terraform) and apply consistently.
- Ensure coverage for all relevant regions where keys exist and CloudTrail logs are generated.
- Consider a centralized logging account pattern, but confirm metric filters and alarms are created where the log group lives.
Where Daydream fits naturally: Many GRC teams struggle to keep evidence current across dozens of AWS accounts. Daydream can track control coverage by account, store the metric filter/alarm artifacts, and schedule periodic verification aligned to the AWS Security Hub CIS mapping 2.
Required evidence and artifacts to retain
Keep evidence that shows design, implementation, and operation:
Design artifacts
- Control narrative describing detection + alerting objective for KMS key disable/deletion.
- Runbook for triage and response.
- Alert routing diagram (even a simple one-page flow).
Implementation artifacts
- CloudTrail configuration showing delivery to the CloudWatch Logs log group.
- Metric filter definition (name, pattern, log group, metric namespace/name).
- Alarm configuration (threshold, evaluation settings, alarm actions, SNS topic).
Operational artifacts
- Alarm notification test (screenshots, CloudTrail event snippet, metric graph, alarm state change).
- Incident/ticket records for any real events, including disposition (approved change vs security incident).
- Exception records if some accounts cannot support the control yet, with compensating monitoring.
Common exam/audit questions and hangups
Auditors and assessors tend to focus on these points:
- “Show me the alarm and prove it notifies someone.” They will ask who receives it and whether that channel is monitored.
- “How do you know it covers all accounts/regions?” Expect to demonstrate rollout coverage and identify any exclusions.
- “What happens when it fires?” If you cannot produce a runbook or examples of triage, the control looks theoretical.
- “Is CloudTrail actually logging KMS events?” If your trail only logs to S3, metric filters won’t work. If the trail is mis-scoped, you miss events.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | How to avoid |
|---|---|---|
| Metric filter created on the wrong log group | You never see KMS events | Confirm the CloudTrail log group name and test with a real event |
| Alarm exists but no alarm action | Alert doesn’t reach responders | Require SNS (or equivalent) action as part of change acceptance |
| Only created in one region/account | Partial coverage creates blind spots | Use IaC and a rollout checklist tied to your account inventory |
| Filter too narrow (matches only one event field variant) | Missed detections | Start broad (eventSource + eventName), then tune after observing events |
| No proof of operation | Control cannot be substantiated | Run a test and retain timestamped evidence |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this specific CIS control. Treat this requirement as a baseline security monitoring expectation aligned to AWS Security Hub’s mapping of CIS AWS Foundations controls 2.
Risk-wise, disabling or deleting a customer managed KMS key can create:
- Availability risk: Encrypted resources can become unusable.
- Data access risk: Incident response and forensics become harder if key actions are not traceable and triaged quickly.
- Change control risk: A single privileged action can bypass slower governance paths unless monitoring is in place.
Practical execution plan (30/60/90)
Use phases rather than calendar promises. The goal is quick deployment with proof.
First 30 days (Immediate foundation)
- Inventory: list AWS accounts, regions, and CloudTrail trails that should be in scope.
- Confirm CloudTrail to CloudWatch Logs in each in-scope account (or central logging architecture with access to the log group).
- Implement the metric filter + alarm in a pilot account.
- Define alert routing and a short runbook.
- Perform a controlled test and store the evidence package.
By 60 days (Scale and standardize)
- Roll out IaC templates across in-scope accounts and regions.
- Centralize evidence collection: configuration exports, screenshots, or automated snapshots.
- Tune the alert to reduce noise without narrowing it so far that it misses real events.
- Train on-call responders on the runbook and escalation.
By 90 days (Operate and verify)
- Add periodic verification: confirm alarms are enabled, SNS subscriptions are active, and notifications are deliverable.
- Incorporate alerts into incident metrics and post-incident reviews when events occur.
- Formalize exceptions: document compensating controls and remediation dates where rollout is incomplete.
- Use a system of record (for example, Daydream) to keep artifacts current and audit-ready across accounts.
Frequently Asked Questions
Does this control require preventing KMS key deletion?
No. It requires a log metric filter and alarm to detect and alert on disablement or scheduled deletion actions 2. Prevention is handled with IAM, SCPs, and key policies as separate controls.
Do I need separate filters for DisableKey and ScheduleKeyDeletion?
You can do either. Many teams implement one combined filter for both event names to simplify operations, then route the same alarm to the same triage workflow.
What if CloudTrail logs only go to S3?
CloudWatch Logs metric filters require a CloudWatch Logs log group. If you only deliver to S3, you need to add CloudWatch Logs delivery (or implement an alternate detection mechanism and document an exception).
How do I prove the alarm works for an audit?
Keep a test record that shows the CloudTrail event, the metric increment, the alarm state change, and the notification or ticket created. Auditors prefer end-to-end proof over screenshots of configuration alone.
We use centralized logging in a security account. Where should the filter and alarm live?
Put the metric filter and alarm where the CloudTrail log group lives. If the log group is centralized, the detection controls will be centralized too; confirm access and ownership for response.
Will this generate noise for AWS-managed keys?
It can, depending on how your environment triggers KMS actions. Start with broad detection to meet the requirement intent, then tune based on observed alerts and documented triage outcomes.
Footnotes
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream