CIS AWS Foundations v1.2 3.6: Ensure a log metric filter and alarm exist for AWS Management Console authentication failures
To meet CIS AWS Foundations v1.2 requirement 3.6, you must (1) send CloudTrail logs to CloudWatch Logs, (2) create a CloudWatch Logs metric filter that detects console sign-in failures, and (3) attach a CloudWatch alarm that notifies responders (or triggers automation) when failures occur. This is a detection-and-response control mapped in AWS Security Hub. 1
Key takeaways:
- You need both a metric filter (detection) and an alarm (actionable alerting) for failed AWS Management Console logins.
- The control depends on CloudTrail log delivery into CloudWatch Logs; without that pipeline, the filter cannot work.
- Auditors will ask for proof of operation, not just configuration screenshots: alarm actions, notification routing, and recent test results.
Console authentication failures are an early signal for account compromise attempts: password spraying, stolen credentials, and misconfigured federated access often show up first as repeated sign-in errors. CIS AWS Foundations v1.2 requirement 3.6 expects you to convert those raw events into an actionable security signal by creating a log metric filter and alarm for AWS Management Console authentication failures. 2
Operationally, this requirement is less about “having logs” and more about shortening time-to-detection. Many organizations already enable CloudTrail, but they miss one of the two hard parts: (a) getting CloudTrail events into CloudWatch Logs in every relevant account/region, and (b) wiring the CloudWatch alarm to a real response path (ticket, pager, SOC queue, or an automated containment playbook). If the alarm goes to an unmonitored email inbox, you will struggle to defend it as an effective control in an audit.
AWS Security Hub maps this check to its CIS coverage (CloudWatch.6). Your fastest path is to implement the configuration, validate it with a controlled test, and retain evidence that shows the alert fires and reaches the right owners. 3
Regulatory text
Excerpt (provided): “Implement CIS AWS Foundations Benchmark v1.2 requirement 3.6 as mapped in AWS Security Hub.” 1
Operator interpretation: You must implement the CIS AWS Foundations v1.2 control that detects AWS Management Console authentication failures by creating:
- a CloudWatch Logs metric filter on the CloudTrail log group that matches failed console login events, and
- a CloudWatch alarm on that metric that notifies or triggers response.
The intent is consistent, automated detection of failed console sign-in attempts across your AWS footprint. 1
Plain-English interpretation (what the control is really asking)
You are expected to turn failed console sign-ins into an alert that a human (or automation) will see quickly enough to respond. That means:
- CloudTrail must record sign-in activity.
- Those logs must land in CloudWatch Logs where metric filters can evaluate them.
- A metric filter must count failures.
- An alarm must fire based on that count and route to a monitored channel.
This is a “baseline” control: it does not stop logins. It helps you catch brute-force attempts, stolen credentials, and broken SSO assumptions before an attacker succeeds.
Who it applies to
Entity types: AWS cloud operators and any organization running workloads in AWS accounts where people authenticate to the AWS Management Console. 2
Operational context where it matters most:
- Organizations with multiple AWS accounts (prod, dev, sandbox) where console access exists.
- Environments using AWS IAM users, AWS SSO/IAM Identity Center, or federated identity where sign-in failures can indicate misconfigurations or attacks.
- Regulated environments that must demonstrate detective controls over privileged access pathways.
Scope decisions you must make (document them):
- Which AWS accounts are in scope (ideally all).
- Which regions are in scope (ideally all regions where CloudTrail is enabled and where console sign-in events are recorded).
- Whether alarms should be centralized (security account/SIEM) or per-account.
What you actually need to do (step-by-step)
Step 1 — Confirm CloudTrail is capturing and delivering the right events
- Verify CloudTrail is enabled for management events (console sign-in events are management events).
- Deliver CloudTrail to CloudWatch Logs by configuring the trail to send events to a CloudWatch Logs log group.
- If you run multiple accounts, decide whether you will:
- configure this in each account, or
- centralize with an organization-level trail and a standardized log delivery pattern.
Practical checkpoint: If you cannot open CloudWatch Logs and see CloudTrail events arriving, stop. The metric filter will be a paper control.
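The Step 1 checkpoint can be automated. The sketch below is a pure-Python helper over the shape of a CloudTrail `DescribeTrails` response (in practice the list would come from `boto3.client("cloudtrail").describe_trails()["trailList"]`); the trail names and ARN are illustrative.

```python
# Sketch: flag trails that do not deliver to a CloudWatch Logs log group.
# Input follows the CloudTrail DescribeTrails response shape; in a real
# run it would come from boto3.client("cloudtrail").describe_trails().

def trails_missing_cwl_delivery(trail_list):
    """Return names of trails with no CloudWatch Logs log group configured."""
    return [
        t.get("Name", "<unnamed>")
        for t in trail_list
        if not t.get("CloudWatchLogsLogGroupArn")
    ]

# Hypothetical trail descriptions for illustration:
trails = [
    {"Name": "org-trail",
     "CloudWatchLogsLogGroupArn":
         "arn:aws:logs:us-east-1:111122223333:log-group:cloudtrail:*"},
    {"Name": "legacy-trail"},  # S3-only delivery; metric filters cannot see it
]
print(trails_missing_cwl_delivery(trails))  # any name here is a paper-control risk
```

Running this across every in-scope account gives you a quick, repeatable version of the "stop if logs aren't arriving" check.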
Step 2 — Create a CloudWatch Logs metric filter for console authentication failures
Create a metric filter on the CloudTrail log group that matches failed AWS console sign-in attempts.
Implementation notes that auditors care about:
- The filter must be attached to the correct log group (the one receiving CloudTrail events).
- The filter pattern should be specific enough to avoid noisy false positives but broad enough to catch the primary failure modes.
Example approach (describe, don’t overfit):
- Match events associated with console login attempts where the outcome indicates failure (for example, failed authentication).
Your security engineering team can implement the exact pattern your organization standardizes on, but you should require a documented pattern and change control around it.
Control design decision: Create one metric namespace and metric name standard across accounts (example: Security/CloudTrail and ConsoleAuthFailureCount). Consistency simplifies monitoring and audit evidence.
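As a concrete sketch of Step 2, the function below builds the keyword arguments for the CloudWatch Logs `put_metric_filter` call. The filter pattern shown is the one commonly published for this CIS control; validate it against sample events from your own trail before standardizing on it. The log group name is an assumption, and the metric name/namespace follow the standard suggested above.

```python
# Sketch: build kwargs for logs.put_metric_filter (boto3 CloudWatch Logs
# client). The pattern is the commonly cited CIS 3.6 pattern; confirm it
# against your own CloudTrail samples. Names below are illustrative.

CONSOLE_AUTH_FAILURE_PATTERN = (
    '{ ($.eventName = ConsoleLogin) && '
    '($.errorMessage = "Failed authentication") }'
)

def build_metric_filter_params(log_group_name):
    """Assemble the request for creating the CIS 3.6 metric filter."""
    return {
        "logGroupName": log_group_name,
        "filterName": "ConsoleAuthFailureFilter",
        "filterPattern": CONSOLE_AUTH_FAILURE_PATTERN,
        "metricTransformations": [{
            "metricName": "ConsoleAuthFailureCount",   # standard from Step 2
            "metricNamespace": "Security/CloudTrail",  # standard from Step 2
            "metricValue": "1",  # emit 1 per matching event
        }],
    }

params = build_metric_filter_params("cloudtrail")
# boto3.client("logs").put_metric_filter(**params) would create the filter.
```

Keeping the pattern in one change-controlled constant like this also gives auditors a single documented artifact to review.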
Step 3 — Create a CloudWatch alarm on the metric
- Create a CloudWatch alarm that evaluates the metric generated by the filter.
- Configure alarm actions:
- Notify an SNS topic that routes to your SOC/on-call distribution, and/or
- Trigger an automated response workflow (ticket creation, SOAR playbook, or Lambda-based containment).
Alarm threshold guidance (qualitative, because your environment varies):
- Set it low enough to detect brute-force behavior early, but not so low that a single user typo pages the team constantly.
- Use separate severities if needed (example: warning for low volume, critical for bursts), but keep the first implementation simple and measurable.
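A minimal sketch of the Step 3 alarm, expressed as kwargs for the CloudWatch `put_metric_alarm` call. The threshold, period, and SNS topic ARN are assumptions to tune for your environment, per the guidance above; the namespace and metric name must match whatever your metric filter emits.

```python
# Sketch: build kwargs for cloudwatch.put_metric_alarm (boto3). Threshold,
# period, and the SNS topic ARN are illustrative assumptions.

def build_alarm_params(sns_topic_arn, threshold=3, period_seconds=300):
    """Assemble the alarm request for the console-auth-failure metric."""
    return {
        "AlarmName": "ConsoleAuthFailureAlarm",
        "Namespace": "Security/CloudTrail",      # must match the metric filter
        "MetricName": "ConsoleAuthFailureCount",
        "Statistic": "Sum",                      # total failures per period
        "Period": period_seconds,
        "EvaluationPeriods": 1,
        "Threshold": threshold,                  # above the single-typo noise floor
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "TreatMissingData": "notBreaching",      # stay OK when no events arrive
        "AlarmActions": [sns_topic_arn],         # must be a monitored destination
    }

params = build_alarm_params("arn:aws:sns:us-east-1:111122223333:soc-alerts")
# boto3.client("cloudwatch").put_metric_alarm(**params) would create it.
```

Treating `threshold` and `period_seconds` as the only tunable knobs keeps the first implementation simple and measurable, as recommended above.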
Step 4 — Wire alert routing to an owned response process
Document who owns triage and what “done” means. Minimum viable response:
- Triage within your security monitoring function (SOC, on-call security engineer, or GRC-run security mailbox with an SLA you can defend).
- Identify the impacted principal (IAM user / federated role), source IP, and timeframe from CloudTrail.
- Validate whether failures are expected (new SSO rollout, password reset wave) or suspicious.
- Escalate for containment when suspicious (credential reset, MFA enforcement, session revocation, temporary access block per your process).
Step 5 — Validate with a controlled test and make it repeatable
Run a test that produces a failed console authentication event in a non-production account:
- Attempt sign-in with an invalid password (in an approved test identity).
- Confirm the event appears in CloudTrail logs.
- Confirm the metric increments.
- Confirm the alarm transitions state and the notification reaches the right place.
Capture the evidence once, then add a recurring validation (for example: after major IAM or logging changes, and during periodic control testing).
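To make the Step 5 test repeatable, you can replay sample CloudTrail events against a pure-Python predicate that mirrors what the metric filter matches. The event dictionaries below are abridged, representative ConsoleLogin records (field names follow CloudTrail's event format); this checks the pattern logic offline, it does not replace the end-to-end alarm test.

```python
# Sketch: offline check that mirrors the metric filter's match logic, so
# exported CloudTrail events from a controlled test can be replayed
# without touching AWS. Event shapes follow CloudTrail ConsoleLogin
# records (abridged).

def matches_console_auth_failure(event):
    """True if the filter pattern for CIS 3.6 would match this event."""
    return (event.get("eventName") == "ConsoleLogin"
            and event.get("errorMessage") == "Failed authentication")

failed = {"eventName": "ConsoleLogin",
          "errorMessage": "Failed authentication",
          "sourceIPAddress": "203.0.113.10"}   # illustrative test event
succeeded = {"eventName": "ConsoleLogin",
             "responseElements": {"ConsoleLogin": "Success"}}

assert matches_console_auth_failure(failed)
assert not matches_console_auth_failure(succeeded)
print("filter logic validated against sample events")
```

Keeping this alongside the controlled-test runbook lets you re-verify the pattern after IAM or logging changes without scheduling a full live test each time.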
Step 6 — Operationalize at scale (multi-account)
To keep this from drifting:
- Standardize via infrastructure as code (IaC) and/or account bootstrap.
- Use AWS Security Hub’s mapping view to track coverage for the control (CloudWatch.6) and treat failures as findings that require remediation. 3
- Define exception handling (acquisitions, isolated research accounts) with approvals and compensating controls.
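Drift tracking from Step 6 can be reduced to a set difference: compare the in-scope account inventory against accounts where the filter/alarm pair is confirmed deployed (from IaC state, a bootstrap report, or Security Hub control status). The account IDs below are illustrative.

```python
# Sketch: coverage gap check for multi-account deployment. Inputs would
# come from your account inventory and deployment records; IDs here are
# illustrative placeholders.

def coverage_gaps(in_scope_accounts, deployed_accounts):
    """Accounts that should have the control but show no deployment."""
    return sorted(set(in_scope_accounts) - set(deployed_accounts))

in_scope = ["111111111111", "222222222222", "333333333333"]
deployed = ["111111111111", "333333333333"]
print(coverage_gaps(in_scope, deployed))  # each gap is a remediation finding
```

Running this on a schedule and treating each gap as a finding matches the Security Hub remediation workflow described above.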
Required evidence and artifacts to retain
Keep evidence that shows design, implementation, and operation:
Design artifacts
- Control narrative: purpose, scope (accounts/regions), owner, and response steps.
- Metric filter pattern specification and rationale (change-controlled).
- Alarm configuration standard: routing targets, severity mapping.
Implementation artifacts
- CloudTrail configuration showing delivery to CloudWatch Logs (export or screenshots).
- CloudWatch Logs metric filter definition (name, log group, pattern, metric namespace/name).
- CloudWatch alarm definition (threshold logic, evaluation period settings, actions).
Operating effectiveness artifacts
- Alert routing proof: SNS topic subscriptions, ticketing integration configuration, or on-call routing configuration.
- Test record: timestamped failed login test, metric datapoint, alarm state change, and notification receipt.
- Evidence of periodic verification (runbooks, control test results, or Security Hub finding status history). 3
Tip for audit readiness: store these artifacts per account (or per organizational standard) in a single control evidence folder, and update it when the alarm/action routing changes.
Common exam/audit questions and hangups (and how to answer)
- “Show me that the alarm actually fires.” Provide the controlled test record plus CloudWatch alarm history for the state transition.
- “How do you know it covers all accounts?” Provide your account inventory and deployment method (IaC, org-level guardrails), plus Security Hub control status across accounts. 3
- “Who responds and how fast?” Point to the runbook, ownership (team/role), and incident intake mechanism (ticket queue/on-call rotation).
- “How do you prevent alert fatigue?” Explain threshold tuning, deduplication in your ticketing/SIEM, and escalation logic for bursts vs. single failures.
- “What happens if CloudTrail delivery breaks?” Mature answer: separate monitoring exists for log delivery health; if not, record it as a dependency risk and add a follow-up control.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Metric filter exists, but CloudTrail isn’t sending to CloudWatch Logs. Avoid by verifying log ingestion first and keeping the trail/log group relationship documented.
- Mistake: Alarm actions are empty or point to an unowned SNS topic. Avoid by requiring an “alert destination owner” field and testing end-to-end delivery.
- Mistake: Only deployed in one region/account. Avoid by standardizing deployment and tracking exceptions explicitly.
- Mistake: Pattern matches the wrong events (noise) or misses federated console failures. Avoid by testing with representative auth paths (IAM user, SSO, federation) and reviewing sample CloudTrail events during implementation.
- Mistake: No evidence of operation. Avoid by scheduling periodic tests and retaining alarm history exports.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this specific CIS requirement, so you should treat this as a benchmark-driven expectation rather than a cited regulatory enforcement trend. 2
Risk impact is still concrete: if you cannot detect repeated console authentication failures, you increase the chance that credential attacks go unnoticed until after successful access. That can cascade into data access, privilege escalation, and incident response costs. From a GRC standpoint, the control also acts as a “proof point” that security monitoring is wired to privileged access pathways.
Practical 30/60/90-day execution plan
First 30 days — Establish minimum viable coverage
- Inventory in-scope AWS accounts and confirm where console access exists.
- Confirm CloudTrail coverage and CloudWatch Logs delivery in those accounts.
- Implement the metric filter + alarm in a pilot account.
- Wire notifications to a monitored channel and run the first controlled test.
- Start an evidence folder with baseline exports/screenshots and the test record.
By 60 days — Scale and standardize
- Roll out via IaC or account bootstrap to all in-scope accounts.
- Standardize naming, namespaces, and alarm routing.
- Add a short runbook for triage (who, what to check in CloudTrail, escalation path).
- Use AWS Security Hub to track the mapped control status and drive remediation workflow. 3
By 90 days — Prove operating effectiveness
- Run a second control test and retain results (include alarm history).
- Review alert volume and tune thresholds/routing to reduce noise without losing signal.
- Formalize exception handling and compensating controls for out-of-scope accounts.
- If you use Daydream for control management, map this requirement to owners, collect evidence on a schedule, and keep an audit-ready trail of verification outcomes.
Frequently Asked Questions
Do we need this in every AWS region?
Scope should match where CloudTrail is enabled and where your organization permits console access. Auditors usually expect consistent coverage; if you exclude regions, document why and how you prevent console access or detect events elsewhere. 2
Does this work if we only send CloudTrail to S3 and not CloudWatch Logs?
Not by itself. The metric filter required here is a CloudWatch Logs feature, so CloudTrail events must be delivered to a CloudWatch Logs log group for the filter and alarm to function.
What alert destination is acceptable for the alarm action?
Use an alert path that is monitored and owned (SOC queue, on-call paging, or ticketing). “Email a shared mailbox” can pass only if you can prove monitoring and response; otherwise it is a common audit finding.
How do we avoid paging on normal user typos?
Tune the alarm threshold and evaluation logic to your environment, and consider routing low-volume failures to a ticket queue while reserving paging for bursts or repeated failures. Keep the initial implementation simple, then tune based on observed noise.
What evidence is strongest for an audit?
Alarm configuration plus alarm history that shows a real state change, along with a dated test record and proof the notification reached the right responders. Pair that with your runbook and ownership assignments.
Can AWS Security Hub satisfy this requirement on its own?
Security Hub can detect and report whether the control is passing (mapped as CloudWatch.6), but you still need to implement the underlying CloudWatch Logs metric filter and alarm and retain evidence of operation. 3
Footnotes
1. Provided requirement excerpt: “Implement CIS AWS Foundations Benchmark v1.2 requirement 3.6 as mapped in AWS Security Hub.”
2. CIS AWS Foundations Benchmark.
3. AWS Security Hub CIS AWS Foundations mapping table.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream