CMMC Level 2 Practice 3.5.11: Obscure feedback of authentication information
To meet the CMMC Level 2 Practice 3.5.11 (Obscure Feedback of Authentication Information) requirement, configure every system that authenticates users so it does not reveal whether a username exists or whether the password was incorrect, and so it masks secrets during entry. Operationalize it by standardizing login error messages, masking password/OTP fields, and retaining test evidence across all CUI in-scope applications and identity providers. 1
Key takeaways:
- Use generic authentication failure messages and consistent behavior to prevent account enumeration. 1
- Mask authentication inputs (passwords, PINs, OTPs) and prevent logs/UI from exposing sensitive auth data. 1
- Treat this as a control with recurring evidence: configuration proof plus periodic testing across all in-scope entry points. 2
CMMC Level 2 inherits NIST SP 800-171 Rev. 2 practices for protecting Controlled Unclassified Information (CUI), and authentication is a high-frequency attack surface. Practice 3.5.11 focuses on a narrow, testable behavior: the system must not “help” an attacker by giving away details during login attempts. If your login screen says “user not found,” you have created an account-enumeration oracle. If your help desk portal displays a partially revealed password reset token, you have created an interception opportunity.
For a CCO or GRC lead, the fastest path is to treat 3.5.11 as a standard you apply everywhere a user authenticates: IdP, VPN, VDI, email, privileged access, SaaS used for CUI workflows, custom apps, and any administrative consoles. Then you document two things assessors care about: (1) the control design (what the standard is), and (2) the operating evidence (screenshots, configs, and test results showing it actually behaves that way). CMMC assessments are evidence-driven under the CMMC Program framework. 3
Requirement: Plain-English interpretation
Practice 3.5.11 requires you to obscure feedback during authentication so users (and attackers) do not learn whether they guessed the username, password, MFA factor, or other secret correctly. In practical terms:
- The UI must mask secrets during entry (passwords, PINs, OTPs).
- Error messages must be generic (avoid “username does not exist” vs. “bad password”).
- System behavior should be consistent enough to avoid easy account enumeration (for example, similar error text and response patterns for invalid user vs. invalid password). 1
This is not about making authentication “secure” in a broad sense. It is about preventing information leakage from the login process that materially reduces the effort required for credential stuffing, brute force, or targeted phishing.
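The core behavior can be sketched in a few lines: one generic failure message no matter which check failed. This is a minimal illustration, not any product's API; the user store, salt handling, and function names are assumptions.

```python
import hashlib
import hmac

# Hypothetical in-memory user store: username -> salted password hash.
USERS = {"alice": hashlib.sha256(b"salt" + b"correct-horse").hexdigest()}

GENERIC_ERROR = "Invalid credentials."  # identical text for every failure mode


def login(username: str, password: str) -> str:
    stored = USERS.get(username)
    supplied = hashlib.sha256(b"salt" + password.encode()).hexdigest()
    # Hash and compare even when the user is unknown, so messaging (and,
    # roughly, timing) stay uniform across "no such user" and "bad password".
    ok = stored is not None and hmac.compare_digest(stored, supplied)
    return "Welcome." if ok else GENERIC_ERROR
```

With this shape, `login("nobody", "x")` and `login("alice", "wrong")` are indistinguishable to the caller, which is exactly the property 3.5.11 asks for.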
Regulatory text
CMMC Level 2 Practice 3.5.11 is “mapped to NIST SP 800-171 Rev. 2 requirement 3.5.11 (Obscure feedback of authentication information).” 4
What the operator must do: ensure that authentication mechanisms (applications, systems, and services) do not display or otherwise disclose authentication information (like passwords) and do not provide overly specific feedback that would help an unauthorized user validate accounts or credentials. Implement it as a consistent standard and keep objective evidence for assessor review under the CMMC assessment approach. 5
Who it applies to (entity + operational context)
Applies to:
- Defense contractors and subcontractors handling CUI that need CMMC Level 2. 3
Applies across:
- All authentication entry points in the CUI boundary, including:
  - Identity provider (SSO), MFA prompts, password reset flows
  - Remote access (VPN, VDI, bastion hosts)
  - Admin consoles, hypervisors, network devices, OT jump boxes (if in scope)
  - Custom web apps and APIs that authenticate users
  - Third-party SaaS where CUI is accessed or administered (still your responsibility to configure securely) 4
A common scope trap: teams fix the corporate SSO login page but miss local accounts on appliances, break-glass admin portals, or internally hosted tools used by engineering.
What you actually need to do (step-by-step)
1) Build an “authentication surface” inventory (scope map)
Create a list of every place a human (or service account operator) can authenticate in the CUI environment:
- SSO/IdP portals, local OS login, VPN, privileged access tooling
- Business apps touching CUI (ticketing, PLM, file sharing), admin consoles
- Password reset and account recovery flows (these often leak the most)
Output: “Authentication Entry Points Register” with owner, system, auth method, and where to configure error messaging/masking.
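One lightweight way to start the register is as structured data you can lint for completeness; every entry and field name below is illustrative.

```python
# Illustrative rows for an Authentication Entry Points Register.
ENTRY_POINTS = [
    {
        "system": "Corporate SSO (IdP)",
        "owner": "IAM team",
        "auth_method": "SAML + MFA",
        "error_config_location": "IdP admin console > Sign-in experience",
    },
    {
        "system": "VPN gateway",
        "owner": "Network team",
        "auth_method": "RADIUS + OTP",
        "error_config_location": "Gateway error/banner settings",
    },
]

REQUIRED_FIELDS = {"system", "owner", "auth_method", "error_config_location"}


def incomplete_entries(register: list[dict]) -> list[str]:
    # Flag rows missing any required field so the register stays assessor-ready.
    return [e.get("system", "?") for e in register if not REQUIRED_FIELDS <= e.keys()]
```

A spreadsheet works just as well; the point is that each row names an owner and the exact place where error messaging and masking are configured.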
2) Define the standard (one-pager control language)
Write a short control standard you can enforce consistently:
- Password and secret fields are masked on entry.
- Authentication failures return non-specific messages (example: “Invalid credentials.”).
- Password reset and account recovery do not confirm whether an account exists (example: “If the account exists, you will receive an email.”).
- Logs and monitoring tools must not store secrets (passwords, OTPs) in plaintext.
Keep it requirement-level, mapped to the practice, and attach it to your SSP/control narrative. 4
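The reset-flow rule in the standard can be sketched as a handler that does its work silently and returns one neutral message either way (all names here are illustrative):

```python
# Minimal sketch of a neutral password-reset flow; names are illustrative.
KNOWN_ACCOUNTS = {"alice@example.com"}

NEUTRAL_RESET_MESSAGE = "If the account exists, you will receive an email."


def send_reset_email(email: str) -> None:
    # Placeholder: enqueue a reset-token email out of band.
    pass


def request_password_reset(email: str) -> str:
    if email in KNOWN_ACCOUNTS:
        send_reset_email(email)  # real work happens silently
    # The caller sees the same response either way -- no account confirmation.
    return NEUTRAL_RESET_MESSAGE
```

The design choice is that account existence only changes what happens internally (an email is or is not sent), never what the requester observes.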
3) Implement configuration changes by platform (common patterns)
Use a platform-by-platform checklist:
Web applications (custom and COTS you configure)
- Replace distinct errors (“user not found” / “wrong password”) with a single message.
- Ensure password and OTP inputs use masked fields.
- Ensure front-end and back-end validation do not return different error codes that reveal which field failed.
- Ensure “forgot password” does not disclose account existence.
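For APIs, the checklist above means collapsing both failure modes into one status code and one body. A minimal sketch, with a hypothetical `hash_password` helper (a real system would use a proper KDF such as bcrypt or Argon2, and a constant-time comparison):

```python
import hashlib
import json


def hash_password(password: str) -> str:
    # Illustrative only; use a real KDF (bcrypt/argon2) in production.
    return hashlib.sha256(password.encode()).hexdigest()


USERS = {"alice": {"password_hash": hash_password("pw")}}


def authenticate(username: str, password: str) -> tuple[int, str]:
    user = USERS.get(username)
    if user is None or user["password_hash"] != hash_password(password):
        # Same status and body whether the user is unknown or the
        # password is wrong -- never 404 for unknown users.
        return 401, json.dumps({"error": "invalid_credentials"})
    return 200, json.dumps({"status": "ok"})
```

Detailed failure reasons can still be written to internal logs for diagnostics; they just never appear in the response.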
Identity provider / SSO
- Configure sign-in error messages and lockout notices to avoid account existence confirmation where possible.
- Confirm MFA failure messaging is generic (avoid “SMS code correct but password wrong”).
VPN/remote access
- Ensure pre-auth banners and errors do not reveal directory details or usernames.
- Validate authentication logs do not capture shared secrets.
Privileged systems
- Confirm admin portals don’t show “valid username” prompts.
- Confirm break-glass access procedures do not email sensitive secrets in cleartext.
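One way to keep secrets out of authentication logs, sketched with Python's standard `logging` module. The field names matched by the regex are assumptions; tune them to your own log schema.

```python
import logging
import re

# Redact likely secrets (passwords, OTP codes, tokens) before records are emitted.
SECRET_PATTERN = re.compile(r"(password|passwd|otp|token)=\S+", re.IGNORECASE)


class SecretScrubber(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = SECRET_PATTERN.sub(r"\1=[REDACTED]", str(record.msg))
        return True  # keep the record, just scrubbed


logger = logging.getLogger("auth")
logger.addFilter(SecretScrubber())
```

Attaching the filter at the logger (or handler) level means every code path that logs through it is scrubbed, which pairs well with a code-review rule against logging raw credentials in the first place.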
4) Test like an assessor (objective evidence)
For each entry point, run a small set of tests and capture evidence:
- Attempt login with a known-valid username + wrong password.
- Attempt login with a nonexistent username + any password.
- Compare error messages and visible behavior.
- Confirm password/OTP fields are masked.
- Check relevant logs for leakage (screenshots or exported log snippets with sensitive data redacted).
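The comparison step can be scripted. Below is a minimal sketch of a checker that flags enumeration signals between two captured failed-login responses; the response shape and the 100 ms timing threshold are illustrative, not a standard.

```python
def enumeration_signals(resp_bad_user: dict, resp_bad_password: dict) -> list[str]:
    """Compare two failed-login responses and report differences an
    attacker could use to tell a valid username from an invalid one.
    Each response is a dict like {"status": 401, "body": "...", "ms": 120}."""
    findings = []
    if resp_bad_user["status"] != resp_bad_password["status"]:
        findings.append("status codes differ")
    if resp_bad_user["body"] != resp_bad_password["body"]:
        findings.append("error messages differ")
    # Large timing gaps can also leak account existence; 100 ms is an
    # illustrative threshold, not a requirement.
    if abs(resp_bad_user["ms"] - resp_bad_password["ms"]) > 100:
        findings.append("response times differ noticeably")
    return findings  # empty list == no obvious enumeration signal
```

Running this per entry point and saving the dated output alongside screenshots gives you repeatable, assessor-friendly evidence.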
5) Operationalize: change control + recurring evidence
Make this stick:
- Add “authentication feedback obscuring” to secure configuration baselines.
- Add a test case to release checklists for any login UX changes.
- Re-test after major upgrades to IdP, VPN, web frameworks, or help desk portals.
If you use Daydream to manage control operations, track 3.5.11 as a recurring control with assigned system owners and an evidence schedule so you can produce assessor-ready proof without scrambling. 2
Required evidence and artifacts to retain
Assessors typically want objective evidence across in-scope systems. Retain:
- Policy/standard
  - Authentication Feedback Standard (the one-pager)
  - Secure configuration baseline sections referencing generic errors and masking
- System-level configuration evidence
  - IdP settings screenshots or exported configuration showing generic sign-in errors (where configurable)
  - Application config snippets (sanitized) for error handling
  - VPN/remote access configuration excerpts (sanitized)
- Test evidence
  - A test script (simple table is fine) and dated results per entry point
  - Screenshots of:
    - masked password/OTP fields
    - generic error messages for both invalid-user and invalid-password cases
    - password reset messaging that does not confirm account existence
- Change management linkage
  - Tickets/PRs showing implementation and approval
  - Release checklist item verifying 3.5.11 behavior
- SSP/control narrative
  - Mapping statement that identifies the systems in scope and where evidence lives 4
Common exam/audit questions and hangups
Expect these:
- “Show me, live, what a failed login looks like for an invalid username vs. wrong password.”
- “What about password reset? Does it confirm the account exists?”
- “Do any admin consoles still show ‘unknown user’?”
- “Do your logs capture credentials or MFA codes?”
- “How do you ensure new apps follow the same pattern?”
Hangup: teams present a policy but cannot produce system-by-system evidence. CMMC assessments are not satisfied by intent alone. 6
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails 3.5.11 | How to prevent it |
|---|---|---|
| Different error text for unknown user vs wrong password | Enables account enumeration | Standardize on one generic message across apps and IdP where possible 1 |
| “Forgot password” confirms account existence | Another enumeration channel | Use neutral messaging and consistent response behavior 1 |
| API returns distinct HTTP codes (“404 user”, “401 bad password”) | Enumeration via API | Normalize responses and error bodies; log detail internally only |
| Logging secrets (password/OTP) during debugging | Direct auth info exposure | Add log scrubbing rules and code review checks |
| Fixing only the primary SSO page | Leaves alternate entry points exposed | Maintain an authentication surface inventory and test each entry point |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so you should treat enforcement risk in practical terms: this control reduces the probability of credential-based compromise by removing “free signals” attackers use to validate usernames and tune attack paths. Under CMMC, failure is more likely to show up as an assessment finding due to missing evidence or inconsistent configuration across the environment. 7
Practical 30/60/90-day execution plan
First 30 days (stabilize scope + quick wins)
- Build the authentication surface inventory for all CUI in-scope systems.
- Publish the Authentication Feedback Standard and align app owners.
- Fix the highest-risk flows first: SSO login, VPN login, password reset portal, help desk portal.
By 60 days (coverage + evidence)
- Implement changes across remaining entry points, including admin consoles and internal tools.
- Run and document standardized tests for each entry point.
- Create a single evidence folder per system (config + test results + change tickets).
By 90 days (sustainment)
- Add checks to SDLC and change management for any login/auth changes.
- Set a recurring reassessment cadence tied to upgrades (IdP, VPN, app framework changes).
- Centralize evidence tracking in a GRC workflow (Daydream or equivalent) so the control stays “always ready.” 2
Frequently Asked Questions
Does 3.5.11 require the exact same error message everywhere?
It requires obscured feedback, so the safest approach is a single generic failure message per authentication channel. If a platform forces different text, document the constraint and show compensating consistency (for example, identical user-facing outcomes and no account confirmation). 1
Are password masking dots/asterisks enough to satisfy the requirement?
Masking is necessary but not sufficient. You also need to avoid informative feedback like “username not found” and prevent secrets from being exposed in logs, debug output, or reset flows. 1
What about account lockout messages that say “User is locked”?
Treat lockout messaging as authentication feedback. Where possible, keep user-facing responses generic and route detailed status to authenticated channels or internal admin views, then document the chosen configuration. 1
Does this apply to service accounts and API authentication?
Yes. If a human-facing or programmatic interface can be probed, distinct errors can reveal valid identifiers. Normalize API responses and ensure logs capture diagnostic detail without exposing secrets. 1
We use a third-party SaaS for a CUI workflow. Are we responsible for its login error messages?
You are responsible for the security of your CUI environment and for configuring third-party services appropriately where you have control. If you cannot configure it, document the limitation, the risk decision, and any compensating controls, and keep that in your assessment package. 2
What evidence is most persuasive in a CMMC assessment for 3.5.11?
Side-by-side test results (invalid user vs invalid password), screenshots of masked inputs, and exported configs/settings are typically the fastest to validate. Tie each artifact to the specific system in scope and keep it dated and repeatable. 7
Footnotes
- NIST SP 800-171 Rev. 2, requirement 3.5.11 (Obscure feedback of authentication information)
- DoD CMMC Program Guidance
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream