Attack Vectors and Indicators
To meet the NIST SP 800-61 Rev. 2 “Attack Vectors and Indicators” requirement, you must (1) identify and categorize the attack vectors most relevant to your environment and (2) establish concrete detection mechanisms for indicators of compromise (IOCs) tied to each vector. Operationally, that means maintaining a living vector-to-detection mapping and proving your monitoring can actually surface those indicators in time to support incident handling 1.
Key takeaways:
- Maintain an explicit, owned catalog of attack vectors relevant to your business and systems 1.
- For each vector, define the indicators you expect to see and the telemetry/tools that will detect them 1.
- Keep evidence that detection is active and works in practice: configs, rules, log sources, and validation results 1.
“Attack vectors and indicators” sounds conceptual until you have to defend it in an assessment or rely on it during an incident. NIST SP 800-61 Rev. 2 Section 3.2.1 requires two outcomes: you understand how attacks commonly enter or impact your environment, and you have mechanisms to detect the indicators those attacks leave behind 1.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to convert the requirement into a small set of durable operational artifacts: a categorized list of attack vectors, a per-vector IOC/telemetry mapping, and proof that your monitoring stack is configured to detect those IOCs. This page gives you requirement-level implementation guidance you can hand to security operations while retaining governance over scope, approvals, exceptions, and evidence.
If you do only one thing, make it this: publish a “vector-to-indicator-to-detection” matrix that names owners, log sources, detection content, and escalation paths. That single artifact becomes your control plane for audits, incident readiness, and ongoing tuning.
Regulatory text
Requirement (NIST SP 800-61 Rev. 2, Section 3.2.1): “Identify and categorize common attack vectors and establish mechanisms to detect indicators of compromise for each vector.” 1
What the operator must do
- Identify and categorize common attack vectors that apply to your environment. NIST’s examples include web, email, removable media, attrition (availability attacks), impersonation, and loss/theft 1.
- Establish detection mechanisms for indicators associated with each category, so incidents can be discovered and triaged based on observable signs 1.
Plain-English interpretation
You need a practical map from “how attackers get in or cause harm here” to “what we will see when they try” to “where we detect it.” This is not a one-time list. It should evolve with your systems (cloud services, SaaS, endpoints, identity) and with the threats your incident team actually encounters.
A strong implementation produces repeatable outcomes:
- The organization can explain its top attack vectors in business terms.
- Security operations can show which signals are monitored for each vector.
- Incident handlers can quickly pivot from an alert to likely vectors and expected follow-on activity 1.
Who it applies to (entity and operational context)
Applies to: Organizations implementing incident handling practices aligned to NIST SP 800-61 Rev. 2, including federal agencies and other organizations adopting NIST guidance 1.
Operational owners typically include:
- SOC / Detection Engineering: builds and tunes detections, manages alerting.
- IR Lead / Incident Response: defines what “good detection” means for response readiness.
- IT / Cloud / Endpoint teams: ensure required logs and telemetry exist.
- GRC / Compliance: sets the requirement, ensures coverage, manages exceptions, and collects evidence.
Where it matters most:
- Environments with multiple log sources (cloud, SaaS, endpoints, network).
- Organizations with third parties that connect to systems, handle data, or administer infrastructure (because third-party access expands vectors such as impersonation and web/email entry points).
What you actually need to do (step-by-step)
Step 1: Define your attack vector taxonomy (start with NIST’s categories)
Create a short list of vector categories you will govern. A practical baseline aligns to NIST’s examples (web, email, removable media, attrition, impersonation, loss/theft) 1. Add subcategories only if you can assign ownership and detection coverage (for example, “identity-based impersonation” or “SaaS OAuth abuse”).
Deliverable: Attack Vector Catalog (owned document or GRC record), with:
- Vector name and definition
- Systems in scope
- Primary owners (SOC, IT)
- “Common scenarios we care about” (one line each)
Step 2: Build a per-vector “indicators” list you can detect
For each vector, define the indicators you expect. Keep indicators tied to observable events, not abstract tactics.
Example indicator types you can standardize:
- Identity indicators: suspicious logins, impossible travel (if you track it), MFA changes, new device registrations
- Endpoint indicators: unsigned binaries, persistence mechanisms, credential dumping signals
- Network indicators: unusual egress, DNS anomalies, connections to known-bad infrastructure (if you maintain threat intel)
- Application indicators: WAF blocks, repeated auth failures, abnormal API calls
- Data indicators: unusual downloads, bulk exports, encryption activity
Deliverable: Vector-to-Indicator register (table).
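The register usually lives in a spreadsheet or GRC record, but keeping it machine-readable makes later coverage checks easier. As an illustrative sketch (the field names, classes, and example entries below are hypothetical, not prescribed by NIST SP 800-61):

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorEntry:
    """One expected, observable indicator tied to an attack vector."""
    indicator: str          # an observable event, not an abstract tactic
    telemetry_source: str   # where the signal originates

@dataclass
class VectorRecord:
    """One row of the Vector-to-Indicator register."""
    vector: str                                   # e.g. "email" (a NIST category)
    owner: str                                    # accountable team
    indicators: list = field(default_factory=list)

# Hypothetical entries for the "email" vector
email = VectorRecord(
    vector="email",
    owner="SOC",
    indicators=[
        IndicatorEntry("new inbox forwarding rule created", "email security logs"),
        IndicatorEntry("repeated auth failures after phishing wave", "identity provider logs"),
    ],
)
print(len(email.indicators))  # → 2
```

The point of the structure is that every indicator carries its telemetry source with it, so gaps surface as empty fields rather than as surprises during an incident.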
Step 3: Map indicators to telemetry and detection mechanisms
This is where many programs fail: they list indicators but cannot prove collection and alerting.
For each indicator, document:
- Telemetry source: identity provider logs, EDR, email security logs, WAF, firewall, SaaS audit logs
- Collection path: where it lands (SIEM/data lake)
- Detection mechanism: rule/correlation, EDR analytic, email gateway policy, SOAR playbook trigger
- Alert routing and severity: who gets paged or ticketed, and what “high” means
- Response linkage: runbook/playbook reference for triage steps
Deliverable: Detection Coverage Matrix (vector → indicators → log sources → detections → owner).
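Because "listed but not detectable" is the most common failure, it helps to make the matrix checkable. A minimal sketch, assuming illustrative row fields (none of these names come from the guide):

```python
# Each row maps one indicator to its log source, detection content, and owner.
# Empty fields are coverage gaps that should become remediation tickets.
matrix = [
    {"vector": "email", "indicator": "new inbox forwarding rule created",
     "log_source": "email security logs", "detection": "SIEM rule FWD-001", "owner": "SOC"},
    {"vector": "web", "indicator": "abnormal API calls",
     "log_source": "WAF", "detection": "", "owner": "SOC"},  # gap: no detection content
]

def coverage_gaps(rows):
    """Return rows missing a log source, detection mechanism, or owner."""
    required = ("log_source", "detection", "owner")
    return [r for r in rows if any(not r.get(k) for k in required)]

for gap in coverage_gaps(matrix):
    print(f"GAP: {gap['vector']} / {gap['indicator']}")  # → GAP: web / abnormal API calls
```

Running a check like this before each review meeting turns the matrix from a static document into the "prove collection and alerting" evidence assessors ask for.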
Step 4: Validate that detection works (not just configured)
You need proof that the mechanism can surface the indicator. Validation options:
- Tabletop review with the SOC: “If this happens, where do we see it and what alert fires?”
- Controlled testing in a non-production environment (where feasible)
- Review of past incidents: map real alerts to vectors and indicators encountered
Deliverable: Detection Validation Record: date, participants, what was validated, gaps found, tickets created.
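Validation records age: a detection validated once and never revisited drifts out of date. One way to keep the record honest is a staleness check against your review cadence (the 90-day cadence, record fields, and detection names below are assumptions for illustration):

```python
from datetime import date, timedelta

CADENCE = timedelta(days=90)  # assumed review cadence; set to your own

validations = [
    {"detection": "SIEM rule FWD-001", "last_validated": date(2024, 1, 10)},
    {"detection": "EDR analytic CRED-DUMP", "last_validated": date(2024, 6, 2)},
]

def stale(records, today):
    """Return detections not validated within the cadence window."""
    return [r["detection"] for r in records if today - r["last_validated"] > CADENCE]

print(stale(validations, today=date(2024, 7, 1)))  # → ['SIEM rule FWD-001']
```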
Step 5: Operationalize change control and lifecycle
Attack vectors shift when you add a new SaaS app, expose a new web endpoint, or onboard a third party with admin access. Put the mapping under governance:
- Tie updates to security architecture reviews and major changes.
- Require a detection impact review for new systems and third-party integrations.
- Track exceptions when a vector is accepted temporarily (e.g., telemetry unavailable), with an owner and remediation plan.
Deliverable: A standing control procedure: “Attack Vector and Indicator Review,” including update triggers and approval workflow.
Step 6: Connect to incident handling workflows
NIST SP 800-61 is an incident handling guide; detection exists to support response 1. Ensure:
- Each vector has at least one triage playbook path.
- Alerts include enough context (asset, user, log excerpts) for first-responder decisions.
- Your incident classification aligns to vectors (so reporting and lessons learned improve the catalog).
Required evidence and artifacts to retain
Keep artifacts that prove both design and operation:
- Attack Vector Catalog with categories and scope 1.
- Vector-to-Indicator register showing expected indicators per vector 1.
- Detection Coverage Matrix mapping indicators to:
  - log sources
  - detection rules/policies
  - owners
  - alert routing
- Logging/telemetry enablement evidence
  - example: proof key audit logs are enabled in identity provider / cloud / SaaS
- Detection content evidence
  - rule exports, screenshots, configuration baselines, change tickets
- Validation records
  - tabletop notes, test plans/results, gap remediation tickets
- Ongoing review evidence
  - meeting notes, quarterly (or other cadence) review sign-offs, exception approvals
If you use a GRC platform like Daydream, store the matrix and validation records as the control’s “system of record” and link out to SIEM and configuration evidence. Audits move faster when evidence is traceable from requirement → mapping → proof.
Common exam/audit questions and hangups
Auditors and assessors tend to focus on “prove it” questions:
- “Show me your categorized attack vectors and how you decided what’s ‘common’ for your environment.” 1
- “For email/web/impersonation, what indicators do you monitor and where do those logs come from?” 1
- “Which systems are out of scope, and what is the risk acceptance?”
- “Demonstrate an alert and the associated triage steps. How do you know it would catch a real compromise?” 1
- “How do you keep this current when you add new apps or third parties?”
Common hangup: teams provide a threat list but cannot show the mechanism that detects the indicators (rule, policy, analytic, alert pipeline).
Frequent implementation mistakes and how to avoid them
- Catalog is generic and unowned.
  Fix: assign owners per vector and require reviews tied to change management.
- Indicators are not measurable.
  Fix: rewrite indicators as observable events ("new inbox forwarding rule created") with a specific log source.
- Telemetry gaps are discovered during an incident.
  Fix: treat "log source enabled + forwarded + parsed" as a prerequisite control, not an IR afterthought.
- Detections exist but aren't routed or triaged.
  Fix: document alert destinations, on-call expectations, and tickets/queues; test the full path.
- No linkage to incident handling.
  Fix: for each vector, define a minimum triage procedure and escalation threshold aligned to incident classification 1.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Practically, weak vector/indicator coverage increases the chance that incidents are detected late or mis-triaged, which can expand operational disruption and complicate incident handling activities referenced in NIST SP 800-61 1.
Practical 30/60/90-day execution plan
First 30 days (establish the minimum viable control)
- Publish the Attack Vector Catalog using NIST categories as the baseline 1.
- Build the first version of the Detection Coverage Matrix for your highest-risk systems (identity, email, endpoints, externally facing web).
- Identify top telemetry gaps and open remediation tickets with clear owners.
By 60 days (make it defensible)
- Complete vector-to-indicator mapping for all in-scope vectors.
- Confirm collection and alerting paths for each indicator category (identity, endpoint, network, app).
- Run a validation workshop with SOC/IR and document results and gaps.
By 90 days (make it operational)
- Put the catalog and matrix under change control with defined triggers (new SaaS, new third-party connection, major architecture changes).
- Add recurring review and metrics that your team already tracks (alert volume by vector, false positive themes, top gaps).
- Close the highest-impact telemetry and detection gaps, then re-validate.
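For the recurring metrics, "alert volume by vector" is often the easiest to produce because it only needs a vector tag on each alert. A minimal sketch (the alert records and field names are hypothetical):

```python
from collections import Counter

# Alerts tagged with the vector category assigned at triage
alerts = [
    {"id": 1, "vector": "email"},
    {"id": 2, "vector": "email"},
    {"id": 3, "vector": "web"},
    {"id": 4, "vector": "impersonation"},
]

volume_by_vector = Counter(a["vector"] for a in alerts)
for vector, count in volume_by_vector.most_common():
    print(vector, count)  # highest-volume vectors first
```

Tracking this over time shows whether detection coverage and alert volume line up with the vectors your catalog says matter most.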
Frequently Asked Questions
What counts as an “attack vector” for this requirement?
Use categories that describe how attacks reach or affect your environment, such as web, email, removable media, attrition, impersonation, and loss/theft 1. Keep it practical: a vector should map to specific systems, logs, and detections.
Do we need threat intelligence feeds to satisfy “indicators of compromise”?
No. The requirement is to establish mechanisms to detect indicators for each vector 1. Many indicators come from your own telemetry (identity logs, EDR events, email security logs) even without external feeds.
How do we prove “mechanisms to detect” during an audit?
Provide the vector-to-indicator-to-detection mapping plus evidence that logs are enabled, collected, and alerting is configured 1. Add validation notes that show you tested or reviewed detection paths end to end.
How do we handle third parties in this requirement?
Treat third-party access paths as part of your vectors, usually under impersonation and web/email depending on the integration. Document their connectivity (SSO, VPN, API, admin portals) and ensure the relevant audit logs and alerts are in your matrix.
What if we can’t detect an indicator for a vector (telemetry doesn’t exist)?
Record the gap as an exception with an owner and a remediation plan, and adjust your incident response assumptions accordingly 1. Auditors usually accept a managed gap more readily than an undocumented one.
How often should we update the attack vector and indicator mapping?
Update on meaningful change: new systems, new externally exposed services, major identity changes, and new third-party integrations. Also set a recurring review cadence so the catalog stays current, even if no major change occurs.
Footnotes

1. NIST SP 800-61 Rev. 2, Computer Security Incident Handling Guide.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream