AT-2(3): Social Engineering and Mining
AT-2(3) requires you to deliver role-appropriate literacy training that teaches personnel how to recognize and report both attempted and successful social engineering and social mining. To operationalize it quickly, define “reportable” events, publish a simple reporting path, run targeted training plus exercises, and retain evidence that training happened and reporting works in practice. 1
Key takeaways:
- Train people to spot and report social engineering and social mining, not just “phishing.”
- Make reporting concrete: what to report, where to report it, and what happens next.
- Keep assessment-ready evidence: training content, attendance/completions, and incident/report workflow artifacts.
The AT-2(3) Social Engineering and Mining requirement is a narrow enhancement within NIST SP 800-53’s Awareness and Training (AT) family. It is easy to “check the box” with a generic security awareness course and still fail an assessment, either because you cannot prove the training was specific to social engineering and social mining, or because employees do not know how to report suspicious contact quickly.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat AT-2(3) as an operational behavior change requirement with evidence. You need (1) a defined training module that covers realistic pretexting and data-gathering tactics, (2) a clear reporting mechanism aligned to your incident handling process, and (3) recurring proof that the mechanism is used, tested, and reinforced.
This page gives requirement-level implementation guidance you can hand to a control owner and expect action: scope, steps, artifacts, audit questions, common mistakes, and a practical execution plan. The language is written for teams supporting federal information systems or contractor environments handling federal data where NIST SP 800-53 control alignment is expected. 1
Regulatory text
AT-2(3): Social Engineering and Mining — “Provide literacy training on recognizing and reporting potential and actual instances of social engineering and social mining.” 2
What the operator must do:
- Deliver training that builds recognition skills (identify suspicious approaches and data-mining behavior).
- Deliver training that builds reporting behavior (how to report, what to include, urgency).
- Cover both potential instances (attempts, suspicious outreach) and actual instances (confirmed compromise, verified impersonation, data disclosure). 2
Plain-English interpretation
AT-2(3) expects more than “don’t click links.” Your workforce must understand that social engineering includes phone calls, SMS, collaboration tools, social media outreach, in-person tailgating, fake IT support, and third-party impersonation. Social mining extends that threat: attackers gather details from public sources, internal documents, org charts, conference bios, and social platforms to build convincing pretexts.
You pass AT-2(3) when an assessor can see:
- your training explicitly addresses social engineering and social mining,
- people can explain the reporting path, and
- your security/IT process can intake these reports and respond consistently. 2
Who it applies to (entity and operational context)
Entity types
- Federal information systems implementing NIST SP 800-53 controls. 1
- Contractor systems handling federal data where NIST SP 800-53 alignment is required by contract, authorization boundary, or program requirements. 1
Operational scope (practical)
- All personnel with access to organizational systems, data, or facilities where a social engineering attempt could lead to unauthorized access or disclosure.
- Roles with higher exposure should receive deeper training and more frequent reinforcement (helpdesk, finance/AP, HR, executives, admins, customer support, security operations). This is not mandated by the excerpt, but it is the easiest way to make training credible and effective.
What you actually need to do (step-by-step)
1) Assign ownership and define the “minimum viable” control
Owner: Security awareness/training program owner (often Security, IT, or GRC), with Incident Response as a required partner.
Define the control outcome: “Personnel can recognize and report social engineering and social mining attempts through approved channels.” 2
Daydream fit (practical): In Daydream, map AT-2(3) to a named control owner, a written procedure, and a recurring evidence set so the requirement stays auditable as staff and tools change. 2
2) Write a tight reporting standard (one page)
Your training will fail in practice if reporting is vague. Publish a short standard that answers:
- What to report: suspicious messages, calls, in-person requests, credential prompts, unexpected MFA pushes, requests for sensitive info, “change of bank details” requests, requests to bypass process, and anything that looks like pretexting.
- Where to report: a dedicated email/portal button, ticket category, hotline, or chat channel monitored by Security/IT.
- What to include: screenshots, sender details, phone number, time, platform, what was requested, what action was taken.
- What happens next: triage, containment steps, and user guidance (e.g., do not continue the conversation; preserve evidence).
Tie this directly to your incident handling workflow so reports become trackable events, not inbox clutter.
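To make the “what to include” list concrete, here is a minimal sketch of a report intake record in Python. The class and field names are illustrative assumptions, not a mandated schema; the point is that a report becomes triage-ready once the channel, time, and request are captured.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical intake record mirroring the "what to include" list above.
# Field names are illustrative, not a required format.
@dataclass
class SocialEngineeringReport:
    channel: str                  # "email", "phone", "sms", "chat", "in_person"
    observed_at: str              # timestamp supplied by the reporter
    what_was_requested: str       # e.g., "password reset", "bank detail change"
    sender_details: Optional[str] = None   # address, caller ID, profile URL
    action_taken: Optional[str] = None     # e.g., "hung up", "clicked link"
    evidence: list = field(default_factory=list)  # screenshot refs, headers

    def is_triage_ready(self) -> bool:
        """A report is routable once channel, time, and the request are known."""
        return bool(self.channel and self.observed_at and self.what_was_requested)
```

Whatever ticketing tool you use, enforcing a small required-field set like this keeps phone and in-person reports as trackable as email ones.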
3) Build training content that explicitly covers social engineering + social mining
Your literacy training should include these modules:
- Social engineering patterns: impersonation (IT, HR, executives), urgency, authority pressure, out-of-band verification failures, “friendly” rapport-building.
- Social mining patterns: how attackers harvest information (LinkedIn-style profiles, org charts, conference talks, internal templates, email signatures) to craft believable pretexts.
- Reporting muscle memory: exact reporting steps, examples of a good report vs. a poor report, and when to escalate as an incident.
- Role-based scenarios: tailor scenarios for finance, HR, IT/helpdesk, executives, and customer-facing teams.
Keep it realistic. Use your own tools and communication channels in examples (email client, chat platform, ticketing system).
4) Deliver training and confirm completion
Operationalize delivery in a way you can prove:
- New hire onboarding includes AT-2(3) content before access is broadly granted (where feasible).
- Existing staff complete the module on a set cadence aligned to your AT-2 program.
- High-risk roles receive supplemental scenarios (short, frequent refreshers work well).
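One way to make completion “provable” is to check LMS exports against a role-based training matrix. The sketch below assumes a simple user-to-role mapping and illustrative module names; adapt both to your own LMS data.

```python
# Hypothetical role-based training matrix: role -> required modules.
# Role and module names are illustrative assumptions.
ROLE_MATRIX = {
    "finance": {"base", "payment_fraud"},
    "helpdesk": {"base", "vishing"},
    "staff": {"base"},
}

def incomplete_users(completions: dict, roles: dict) -> set:
    """Flag users missing any module required for their role.

    completions: user -> set of completed module names (e.g., from an LMS export)
    roles: user -> role name
    """
    return {
        user
        for user, role in roles.items()
        if not ROLE_MATRIX.get(role, {"base"}) <= completions.get(user, set())
    }
```

Running this on each completion export gives you an audit-ready gap list instead of a raw spreadsheet.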
5) Test reporting behavior with exercises
AT-2(3) calls for literacy training; it does not explicitly require phishing simulations. Still, exercises are the cleanest way to show the training produces reporting.
- Run controlled simulations (email, SMS, voice pretexting, physical access prompts) appropriate to your environment.
- Track whether staff reported via the right channel and whether the response team handled it consistently.
- Feed lessons learned back into training content and the reporting standard.
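A simple metric for these exercises is the fraction of targeted users who reported through the approved channel. This sketch is one possible measure, not a prescribed one:

```python
# Hypothetical exercise metric: of the users targeted in a simulation,
# what fraction reported via the approved channel?
def reporting_rate(targeted: set, reported_via_channel: set) -> float:
    """Return the share of targeted users who reported correctly."""
    if not targeted:
        return 0.0
    return len(targeted & reported_via_channel) / len(targeted)
```

Tracking this rate over successive exercises gives you trend evidence that training is changing reporting behavior, which is exactly what an assessor wants to see.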
6) Close the loop with incident handling
Train the receiving teams too:
- Helpdesk and SOC/IR must know how to tag and route suspected social engineering/social mining reports.
- Define severity criteria (e.g., credential disclosure, financial request, data shared, privileged account targeted).
- Track outcomes: false positive, contained attempt, confirmed compromise, training gap.
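The severity criteria above can be sketched as a small triage function. The labels and thresholds are assumptions to adapt to your own incident handling process:

```python
# Illustrative severity mapping for the criteria above; the labels and
# ordering are assumptions, not a mandated scheme.
def classify_report(credential_disclosed: bool,
                    financial_request: bool,
                    data_shared: bool,
                    privileged_target: bool) -> str:
    if credential_disclosed or data_shared:
        return "high"    # actual instance: something was already given up
    if financial_request or privileged_target:
        return "medium"  # high-impact attempt, nothing disclosed yet
    return "low"         # routine attempt; track for trends and training gaps
```

Encoding the criteria this explicitly, even just in a playbook, keeps helpdesk and SOC routing consistent across shifts.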
Required evidence and artifacts to retain
Keep evidence that proves both training occurred and reporting works.
Training artifacts
- Training policy/standard referencing social engineering and social mining (or training plan section).
- Training content (slides, LMS module outline, scripts, scenario library).
- Completion records (LMS exports, attestations, new hire checklist sign-offs).
- Role-based training matrix (who gets what content).
Reporting + operations artifacts
- Published reporting instructions (intranet page, handbook excerpt, onboarding docs).
- Ticket categories / mailbox configuration / “report phish” button configuration evidence.
- Sample (sanitized) reports and resulting tickets demonstrating triage and closure.
- Exercise plans and results (simulation summary, lessons learned, follow-up actions).
Control mapping artifacts
- Control statement, owner assignment, and evidence schedule mapping AT-2(3) to artifacts for each assessment cycle. 2
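An evidence index can be as simple as a mapping from the control to its artifact categories, with a check for empty categories before each assessment cycle. The structure and file names below are illustrative assumptions:

```python
# A minimal evidence-index sketch: one control, its owner, and the artifact
# categories listed above. Names and paths are illustrative, not required.
EVIDENCE_INDEX = {
    "AT-2(3)": {
        "owner": "security-awareness-program",
        "artifacts": {
            "training_content": ["lms_module_outline.pdf"],
            "completion_records": ["lms_export_q1.csv"],
            "reporting_workflow": ["report_phish_button_config.png"],
            "exercises": ["simulation_summary_q1.pdf"],
        },
    },
}

def missing_artifact_categories(control: str, required: set) -> set:
    """Return required artifact categories with no retained evidence."""
    entry = EVIDENCE_INDEX.get(control, {})
    have = {name for name, files in entry.get("artifacts", {}).items() if files}
    return required - have
```

Running a check like this before each cycle turns “evidence exists somewhere” into an assessment-ready index.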
Common exam/audit questions and hangups
Assessors tend to probe for specificity and proof:
- “Show me where social mining is addressed in training materials.”
- “How do users report suspicious phone calls or in-person requests?”
- “What is a ‘potential instance’ versus an ‘actual instance’ in your process?”
- “Provide evidence of training completion for a sample of users, including privileged users.”
- “Show that reported events are tracked and handled, not just collected.”
Hangup to expect: teams can show an LMS completion report but cannot show the reporting path, the ticket workflow, or how phone-based social engineering is handled.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating AT-2(3) as phishing-only training.
  Fix: Add voice, SMS, chat, physical access, and third-party impersonation scenarios; include social mining examples tied to your org’s public footprint.
- Mistake: “Report to IT” with no defined channel.
  Fix: Provide a single, memorable reporting method plus alternates for outages; document who monitors it and expected response steps.
- Mistake: No coverage of “actual instances.”
  Fix: Train what to do after a mistake (clicked link, shared info, approved MFA push): stop, preserve evidence, report immediately, cooperate with IR.
- Mistake: Evidence exists but is not assessment-ready.
  Fix: Store artifacts in one control evidence folder with a simple index: content, completion, reporting workflow, exercises, improvements.
- Mistake: Forgetting third-party and contractor populations.
  Fix: Include contractors and long-term third-party staff in training and completion tracking if they access systems or facilities.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement outcomes. Practically, AT-2(3) reduces operational risk tied to credential theft, fraudulent payment requests, sensitive data disclosure, and compromise via impersonation. Your strongest risk argument to leadership is simple: social engineering succeeds when reporting is slow or absent, and AT-2(3) is designed to shorten that gap. 1
Practical 30/60/90-day execution plan
First 30 days (stand up the control baseline)
- Name the control owner and backups; document responsibilities.
- Publish the reporting standard (what/where/how) and align it to incident handling.
- Inventory existing awareness content; identify gaps for social mining and non-email channels.
- Define evidence storage and an evidence index for AT-2(3).
Days 31–60 (deliver training + make reporting real)
- Launch the updated module to all users; include explicit social mining content and reporting drills.
- Add role-based add-ons for high-risk teams (finance, HR, helpdesk, exec assistants).
- Implement or tune reporting intake (button, mailbox, ticket category) and triage playbook.
- Start collecting sample artifacts (sanitized tickets, completion exports).
Days 61–90 (prove effectiveness and harden)
- Run exercises across multiple channels (at least one non-email scenario).
- Review outcomes with IR/helpdesk: response time, routing quality, repeat confusion points.
- Update training content with your own “near miss” examples (sanitized).
- Package the assessment-ready evidence set: policy/standard, content, completions, exercises, workflow.
Daydream operational tip: set recurring evidence requests and reminders tied to your training cycle so AT-2(3) stays current without manual chasing each assessment period. 2
Frequently Asked Questions
Does AT-2(3) require phishing simulations?
The excerpt requires literacy training on recognizing and reporting social engineering and social mining; it does not explicitly mandate simulations. Simulations are a practical way to demonstrate that the training changes reporting behavior and to generate strong evidence. 2
What counts as “social mining” in training terms?
Social mining is the collection and use of personal or organizational details to craft more convincing manipulation. Cover how attackers pull information from public profiles, org announcements, and internal context clues, then use it in pretexts. 2
Who must take the training?
Scope it to personnel who can access systems, data, or facilities where social engineering could cause harm, including contractors in scope for your system boundary. Add role-based depth where exposure is higher. 1
What’s the minimum evidence an auditor will accept?
Expect to show training content that explicitly addresses social engineering and social mining, completion records, and proof of a working reporting channel connected to triage/incident handling. Keep a few sanitized examples of reports and outcomes. 2
How do we handle reporting for phone calls and in-person attempts?
Give employees a script: stop the interaction, verify via a known channel, and report via the same centralized reporting path used for email. Train helpdesk/security to log these as trackable events with consistent categorization. 2
We have multiple business units. Can we decentralize training?
You can, but standardize the minimum content and evidence requirements so completions and materials roll up cleanly for assessment. Centralize the reporting channel or make escalation consistent across units. 2
Footnotes
1. NIST SP 800-53 Rev. 5.
2. NIST SP 800-53 Rev. 5 OSCAL JSON.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream