Literacy Training and Awareness | Social Engineering and Mining

To meet NIST SP 800-53 Rev. 5 AT-2(3) in a FedRAMP context, you must deliver and track literacy training that teaches personnel how to recognize and report both attempted and successful social engineering and social mining. Operationalize it by defining the audience, assigning training with deadlines, running realistic exercises, and retaining completion and comprehension evidence for assessors. 1

Key takeaways:

  • Train people to detect and report social engineering and social mining, not just “general security awareness.” 1
  • Make it auditable: assignments, due dates, completion tracking, and proof that learners understood the content. 1
  • Tie training to your incident reporting path so “reporting” is a practiced behavior, not a slide. 1

AT-2(3) is a targeted enhancement to your security awareness program: it expects literacy training focused on recognizing and reporting social engineering and social mining. In FedRAMP reviews, this control often fails for one of two reasons: the organization treats it as generic annual training, or it cannot produce evidence that the right populations completed training and knew how to report a suspicious contact.

For a CCO, GRC lead, or security compliance operator, the fastest path is to build a small, repeatable training “product” with clear scope: what social engineering looks like in your environment (email, voice, chat, support tickets, collaboration tools), what social mining looks like (information gathering from public and internal sources), and what the reporting workflow is (who to notify, what tool to use, what details to include). Then wrap it in assignment, tracking, escalation, and record retention.

This page gives requirement-level implementation guidance you can put into a FedRAMP System Security Plan (SSP) narrative and actually run month to month, using standard FedRAMP documentation expectations and assessor-ready artifacts. 2

Regulatory text

Requirement (AT-2(3)): “Provide literacy training on recognizing and reporting potential and actual instances of social engineering and social mining.” 1

What the operator must do

You need a defined training component (not just a policy statement) that:

  1. teaches personnel how to identify social engineering and social mining attempts in channels your organization uses, and
  2. teaches personnel exactly how to report both suspected attempts and confirmed events through your organization’s reporting mechanism. 1

Assessors will look for operational proof: training content, assignment logic (who gets it), completion evidence, and a reporting path that employees can describe and use. 1

Plain-English interpretation (what AT-2(3) means in practice)

  • Social engineering: someone manipulating a person to gain access, information, or action (phishing, pretexting, impersonation, “urgent finance request,” fake helpdesk calls).
  • Social mining: information gathering about your people, systems, or processes from public sources (or internal breadcrumbs) to make later social engineering more effective.

AT-2(3) expects literacy: people should recognize patterns and respond correctly. “Respond correctly” includes reporting. If your workforce cannot explain how to report a suspected phish or impersonation attempt, the requirement is not met even if they watched training. 1

Who it applies to (scope and operational context)

Entities

  • Cloud Service Providers (CSPs) operating a FedRAMP-authorized cloud service offering (CSO), within the authorization boundary. 1
  • Federal Agencies responsible for implementing and maintaining the authorized baseline where applicable to their roles. 1

People (practical scoping)

Include any workforce member (employee or contractor) who:

  • accesses boundary systems or handles FedRAMP-related data,
  • can receive external communications (email, calls, messaging),
  • can approve payments, create users, reset credentials, change MFA, modify configurations, or release data.

A common scoping miss: excluding customer support, IT helpdesk, finance/AP, HR, and sales. Those functions are frequent social engineering targets because they can be manipulated into bypassing normal controls.
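
The scoping rule above can be sketched as a simple set union: everyone with boundary access plus everyone in frequently targeted functions. This is a minimal illustration; the role names and data shapes are assumptions, not from any specific identity or LMS tool.

```python
# Hypothetical sketch: derive the AT-2(3) training population as the union of
# boundary-access users and members of frequently targeted functions.
# Role names and structures are illustrative; map them to your own systems.

TARGETED_FUNCTIONS = {"helpdesk", "finance", "hr", "customer_support", "sales"}

def training_population(boundary_users, role_assignments):
    """Return the set of user IDs in scope for AT-2(3) training.

    boundary_users: iterable of user IDs with access to boundary systems/data.
    role_assignments: dict mapping user ID -> set of functional roles.
    """
    targeted = {
        user for user, roles in role_assignments.items()
        if roles & TARGETED_FUNCTIONS  # any overlap with targeted functions
    }
    return set(boundary_users) | targeted

# carol is in scope via her finance role even without boundary access.
users = training_population(
    boundary_users=["alice", "bob"],
    role_assignments={"carol": {"finance"}, "dave": {"marketing"}},
)
```

Keeping the population definition in code (or a queryable report) makes it easy to show an assessor that scoping is consistent with how access is actually provisioned.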

What you actually need to do (step-by-step)

Step 1: Define “social engineering” and “social mining” for your environment

Create a one-page internal definition sheet that lists:

  • common attack channels you use (email, collaboration tools, ticketing, phone),
  • top pretexts relevant to your org (password reset, “executive urgent request,” “agency sponsor needs access,” “procurement update”),
  • examples of social mining sources (LinkedIn profiles, public documentation, Git repos, status pages, job postings).

Keep it concrete and role-based. This becomes both training input and audit support. 1

Step 2: Map the reporting workflow (and make it frictionless)

Document a single “how to report” flow:

  • Where to report (e.g., security mailbox, phishing-report button, ticket queue, hotline).
  • What to include (message headers, callback number, screenshots, request details, who was impersonated).
  • What to do immediately (do not click, do not reply, do not forward broadly, isolate device if needed).
  • What happens next (triage, incident handling path).

Then test it: send a benign test email and validate that it lands in the correct queue with enough metadata for triage. 1
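
When you run that benign test, you can also check mechanically that the report captured the metadata triage needs, per the "what to include" list above. A minimal sketch, assuming an illustrative field schema (your ticketing tool's actual field names will differ):

```python
# Hypothetical sketch: verify a test report captured the fields triage needs.
# Field names are illustrative assumptions, not a standard schema.

REQUIRED_FIELDS = {
    "reporter",            # who filed the report
    "channel",             # email, phone, chat, ticket
    "impersonated_party",  # who the attacker claimed to be
    "evidence",            # headers, screenshots, callback number, etc.
    "request_details",     # what the attacker asked for
}

def missing_fields(report: dict) -> set:
    """Return required fields that are absent or empty in a report ticket."""
    return {f for f in REQUIRED_FIELDS if not report.get(f)}

test_report = {
    "reporter": "alice@example.com",
    "channel": "email",
    "impersonated_party": "IT helpdesk",
    "evidence": ["headers.txt", "screenshot.png"],
    "request_details": "",  # left blank to simulate an incomplete report
}
# missing_fields(test_report) flags "request_details" as incomplete.
```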

Step 3: Build the training module and role variants

Minimum training content should cover:

  • recognition cues (urgency, authority pressure, payment changes, credential prompts),
  • verification steps (call-back to known numbers, secondary approval, out-of-band confirmation),
  • how social mining works and what oversharing looks like,
  • your reporting path and what “good reports” contain.

Add role-based micro-modules where it matters:

  • Helpdesk: identity verification before resets, MFA reset controls, handling angry callers.
  • Finance: invoice change requests, banking detail changes, executive payment demands.
  • Engineering/admins: OAuth consent scams, secrets in repos, impersonated vendors.

Keep versioning: module name, version, last updated date, and owner. It makes continuous monitoring easier. 1
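
The versioning fields named above can live in a simple metadata record attached to each module. A sketch, with illustrative values:

```python
# Hypothetical sketch: carry name, version, last-updated date, and owner with
# every training module. The specific values below are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TrainingModule:
    name: str
    version: str
    last_updated: date
    owner: str

module = TrainingModule(
    name="Social Engineering and Social Mining Literacy",
    version="2.1",
    last_updated=date(2024, 3, 1),
    owner="Security Awareness Lead",  # illustrative role title
)
```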

Step 4: Assign, track, and escalate completion

Operational requirements to implement:

  • Audience definitions (all personnel in scope; role-based add-ons for privileged or high-risk functions).
  • Assignment trigger (new hire onboarding; role change; annual refresh; ad-hoc after major incidents).
  • Completion timelines (set internal deadlines appropriate to your risk tolerance; the key is consistency and enforcement).
  • Escalation for overdue items (manager notification; access gating for high-risk roles if your governance permits).

Use your LMS or GRC workflow to generate an auditable completion report. 1
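
The escalation step above reduces to a simple query over that completion export: anyone past their deadline and not marked complete. A minimal sketch, assuming illustrative column names (`user`, `due_date`, `completed`) that you would map to your LMS's actual export format:

```python
# Hypothetical sketch: flag overdue assignments from an LMS completion export.
# Column names are illustrative assumptions; adapt them to your LMS.
import csv
import io
from datetime import date

def overdue(export_csv: str, today: date) -> list:
    """Return user IDs whose assignment is past due and not completed."""
    rows = csv.DictReader(io.StringIO(export_csv))
    return [
        r["user"] for r in rows
        if r["completed"].lower() != "yes"
        and date.fromisoformat(r["due_date"]) < today
    ]

export = """user,due_date,completed
alice,2024-01-15,yes
bob,2024-01-15,no
carol,2024-06-01,no
"""
# As of 2024-02-01, only bob is overdue; carol's deadline has not passed.
```

Running this (or an equivalent LMS report) on a schedule, and retaining the output, is exactly the kind of auditable completion evidence assessors ask for.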

Step 5: Add practical exercises (phishing and impersonation simulations)

AT-2(3) is “literacy,” so add at least one practical measurement:

  • phishing simulations (email),
  • vishing/tabletop drills for helpdesk or finance,
  • “spot the pretext” scenarios in chat/tickets.

Track outcomes: completion, failure modes, and whether people reported through the correct channel. You’re proving the “recognize and report” loop works. 1
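
The two numbers worth tracking per campaign are the failure rate (who clicked or complied) and the reporting rate (who used the correct channel). A sketch with an illustrative record shape:

```python
# Hypothetical sketch: summarize a simulation campaign into failure and
# reporting rates. The record shape below is an illustrative assumption.

def campaign_metrics(results):
    """results: list of dicts with 'failed' and 'reported' booleans."""
    total = len(results)
    failed = sum(1 for r in results if r["failed"])
    reported = sum(1 for r in results if r["reported"])
    return {
        "failure_rate": failed / total,
        "reporting_rate": reported / total,
    }

results = [
    {"user": "alice", "failed": False, "reported": True},
    {"user": "bob",   "failed": True,  "reported": False},
    {"user": "carol", "failed": False, "reported": True},
    {"user": "dave",  "failed": False, "reported": False},
]
# Here 1 of 4 failed and 2 of 4 reported through the defined channel.
```

A rising reporting rate over successive campaigns is the clearest evidence that the "recognize and report" loop is actually working.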

Step 6: Integrate with incident response and continuous monitoring

Tie training to:

  • incident reporting requirements (your internal IR plan),
  • lessons learned updates (refresh training content after real incidents),
  • monthly/quarterly metrics reporting for internal governance.

In a FedRAMP package, this linkage matters because assessors test operational maturity, not slide decks. 2

Required evidence and artifacts to retain (assessor-ready)

Maintain these artifacts in a single evidence folder mapped to AT-2(3):

  1. Training content
  • slides/module outline, scripts, screenshots, role-based variants, version history. 1
  2. Training policy or standard operating procedure (SOP)
  • who must take it, cadence, assignment method, escalation. 1
  3. Completion and acknowledgment records
  • LMS completion report, attendance logs, signed attestations where used. 1
  4. Comprehension proof
  • quiz results, minimum pass criteria (your internal standard), retake logs. 1
  5. Exercise/campaign results
  • phishing simulation metrics, scenario outcomes, reporting rates, corrective actions. 1
  6. Reporting workflow documentation and proof it works
  • screenshots of “report phish” tooling, ticket queue configuration, sample sanitized ticket showing required fields captured. 1
Tip for clean audits: store “point-in-time” snapshots (PDF exports) so you can prove what was true during the assessment period.
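
A quick pre-assessment check is to verify the evidence folder actually contains something in each artifact category above. A sketch, assuming illustrative subfolder names (these naming conventions are mine, not a FedRAMP requirement):

```python
# Hypothetical sketch: verify the AT-2(3) evidence folder has each artifact
# category populated before an assessment. Subfolder names are illustrative.
from pathlib import Path

ARTIFACT_DIRS = [
    "training-content",
    "policy-sop",
    "completion-records",
    "comprehension-proof",
    "exercise-results",
    "reporting-workflow",
]

def missing_artifacts(evidence_root: str) -> list:
    """Return artifact subfolders that are absent or empty."""
    root = Path(evidence_root)
    return [
        d for d in ARTIFACT_DIRS
        if not (root / d).is_dir() or not any((root / d).iterdir())
    ]
```

Run it as part of each continuous monitoring cycle so gaps surface months before an assessor asks.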

Common exam/audit questions and hangups

Expect these questions from assessors or agency reviewers:

  • “Show me the training content specific to social engineering and social mining.” Generic awareness may not satisfy AT-2(3). 1
  • “Who is required to take it, and how do you know they completed it?” They will ask for population lists and completion reports. 1
  • “How do personnel report suspected social engineering?” Interviewees should describe the workflow consistently. 1
  • “What happens when someone fails a simulation or quiz?” Have a defined remediation path (retraining, coaching, follow-up). 1
  • “How do you keep training current?” Provide version history and update triggers (incidents, new tooling, new threat patterns). 1

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Treating AT-2(3) as annual generic awareness.
    Fix: create a discrete module or add a clearly labeled section that covers social engineering/social mining recognition and reporting with your internal workflow. 1

  2. Mistake: No evidence of comprehension.
    Fix: require a short quiz or scenario-based check and retain results. Attendance alone is weak evidence. 1

  3. Mistake: Reporting path exists but is not practiced.
    Fix: run simulations that require reporting through the real channel and verify the queue receives actionable data. 1

  4. Mistake: Contractors and third parties fall through the cracks.
    Fix: define training obligations in onboarding and third-party access processes. If a third party has boundary access, include them in your training population or document equivalent controls contractually and operationally. 1

  5. Mistake: “Social mining” is ignored.
    Fix: add content on oversharing, public artifacts, job postings, support forums, and how attackers build context for impersonation. 1

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so rely on FedRAMP assessment realities: failure modes usually show up as SSP narrative gaps, incomplete training records, poor joiner/mover/leaver coverage, or inability of staff to explain reporting during interviews. The risk is practical: social engineering bypasses technical controls, and weak reporting delays containment. 1

Practical 30/60/90-day execution plan (operator-focused)

Use this as an internal rollout plan. Adapt timing to your authorization stage and workforce size.

First 30 days (stand up the minimum auditable program)

  • Name an owner for AT-2(3) content and evidence collection.
  • Define training scope: in-scope roles, contractors, privileged users.
  • Document the reporting workflow end-to-end and test it with a benign report.
  • Build or update the training module with org-specific examples and the “how to report” steps.
  • Configure LMS assignments and escalation rules; confirm you can export completion reports. 1

By 60 days (prove comprehension and operational behavior)

  • Launch the training assignment to the full in-scope population.
  • Add quiz/scenario checks and define remediation for failures.
  • Run one simulation (phishing or role-specific impersonation drill) that measures reporting behavior.
  • Collect artifacts in an assessor-ready folder mapped to AT-2(3). 1

By 90 days (make it durable for continuous monitoring)

  • Implement role-based modules for high-risk functions (helpdesk, finance, admins).
  • Add a recurring campaign cadence and a content update trigger tied to incidents and process changes.
  • Validate interview readiness: managers and random staff can explain how to report and what to do.
  • If you use Daydream for control operations, set AT-2(3) as a tracked requirement with automated evidence reminders and a standing evidence checklist so each monitoring cycle becomes a refresh, not a scramble. 2

Frequently Asked Questions

Does AT-2(3) require phishing simulations, or is training enough?

The text requires literacy training on recognizing and reporting; it does not explicitly mandate simulations. Simulations are a practical way to prove “recognize and report” works and to generate stronger evidence than completion alone. 1

What counts as “social mining” for training content?

Social mining is information gathering that enables later attacks, such as learning org charts, workflows, vendors, and approvers from public or internal breadcrumbs. Include examples relevant to your environment (public profiles, documentation, support tickets, code repos). 1

Who must take this training in a FedRAMP boundary?

Anyone with access to boundary systems or data, plus roles likely to be targeted for impersonation or process bypass (helpdesk, finance, HR, customer support). Document the population definition and keep it consistent with how you provision access. 1

What evidence is strongest for an assessor?

Versioned training content, LMS assignment/completion exports, and proof of understanding such as quiz results or scenario outcomes. Campaign results that show users reported suspicious activity through the defined channel are particularly persuasive. 1

How do we handle third-party personnel (contractors) who have access?

Treat them as part of the workforce for training purposes if they access boundary resources, or contractually require equivalent training and retain proof. Avoid relying on contract language without operational evidence. 1

How should this show up in the SSP?

Describe the training content scope (social engineering and social mining), the assignment and tracking mechanism, reporting procedures taught, and the evidence you retain. Align your narrative and attachments to FedRAMP documentation expectations. 2

Footnotes

  1. NIST Special Publication 800-53 Revision 5

  2. FedRAMP documents and templates

