PE-6(2): Automated Intrusion Recognition and Responses

PE-6(2) requires you to automatically recognize defined intrusion conditions and trigger predefined response actions using defined mechanisms (for example, integrated detection-to-response tooling in your facility and supporting systems). To operationalize it quickly, you need a written “intrusion automation” specification, instrumented detection sources, tested response playbooks that can execute automatically, and repeatable evidence that the automation ran and was reviewed. 1

Key takeaways:

  • Define exactly what “intrusion” conditions you will auto-recognize, and document the trigger logic and data sources.
  • Pre-approve automated response actions, with safety checks to prevent business-disrupting false positives.
  • Produce assessor-ready evidence: configs, alert→action logs, test results, and recurring reviews tied to an owner.

The PE-6(2) Automated Intrusion Recognition and Responses requirement sits in the Physical and Environmental Protection (PE) control family, but it touches security operations, facilities, and IT/OT engineering. The operator challenge is rarely the idea of detection or response; it’s making the recognition and response “automated” in a way that is specific, defensible, and safe.

This requirement expects you to (1) name the intrusion conditions you care about, (2) prove you can recognize them automatically, and (3) prove your system initiates the intended response actions automatically using defined mechanisms. The fastest path is to treat PE-6(2) like a closed loop: sensors and signals feed correlation/logic, which triggers actions, which produces logs, which get reviewed and tested.

If you’re a CCO, GRC lead, or control owner supporting a federal system or a contractor environment handling federal data, your goal is simple: convert PE-6(2) from “we have cameras and alarms” into a measurable control with clear scope, deterministic triggers, and evidence an assessor can replay. 2

Regulatory text

NIST SP 800-53 Rev. 5 PE-6(2): “Recognize [Assignment: organization-defined classes or types of intrusions] and initiate [Assignment: organization-defined response actions] using [Assignment: organization-defined automated mechanisms].” 1

What the operator must do

You must make three decisions and implement them end-to-end:

  1. Define the intrusion conditions you want the environment to recognize (examples below).
  2. Define the responses that must start automatically when those conditions are recognized.
  3. Define the mechanisms (tools, integrations, and systems) that perform recognition and initiate response, then retain evidence they worked. 1

Plain-English interpretation

PE-6(2) means you can’t rely only on a human guard noticing an issue and deciding what to do. You need a system that detects specified intrusion scenarios automatically and starts response steps automatically, based on pre-approved rules. The “automation” can be partial (for example, automatic alerting plus automatic door lockdown), but you must document what is automated and show it operates as designed.

Think of it as “detect-to-action without waiting for a person,” while still keeping humans in the loop for oversight and safety where needed.

Who it applies to

Entity types

  • Federal information systems
  • Contractor systems handling federal data (including hosted environments, colocation, and managed facilities) 1

Operational contexts where PE-6(2) shows up in audits

  • Data centers, comms rooms, and secure labs
  • Colocation cages and shared-building environments where you control some but not all physical layers
  • Facilities with badge access, visitor systems, mantraps, cameras, or alarms
  • OT/ICS sites where physical intrusion correlates to safety or process disruption

What you actually need to do (step-by-step)

Step 1: Set scope and ownership (make it assessable)

  • Name a control owner (often Facilities Security, Physical Security, or Corporate Security) and a technical owner (often SecOps or IT Infrastructure).
  • Define in-scope spaces (rooms, cages, floors) and supporting systems (badge system, video management, alarm panel, SIEM/SOAR if used).
  • Record dependencies on third parties (building management, guard force, monitoring center, colocation provider). Your control can still pass, but you need contracts/SLAs and evidence paths.

Artifact to create: PE-6(2) control implementation statement (one page) with scope, owners, and system list.

Step 2: Define “intrusion recognition” conditions (make them testable)

Create an “intrusion conditions catalog” with clear trigger logic. Examples you can tailor:

  • Door forced open / door held open beyond threshold
  • Access attempt with invalid credential, repeated failures, or access outside approved hours
  • Tailgating detected by mantrap sensors
  • Motion detected in a restricted room during closed hours
  • Alarm panel tamper, camera obstruction, or sensor offline in a restricted area

Write each condition as IF (signal) AND (context) THEN (intrusion condition). Avoid vague entries like “suspicious activity.” Assessors test determinism.

Artifacts to retain: intrusion conditions catalog; sensor/signal mapping; data flow diagram from sensors to correlation engine.
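The IF (signal) AND (context) THEN (condition) pattern above can be sketched as deterministic rule logic. This is a minimal illustration, not a real PACS integration: the event fields, event-type codes, and the closed-hours window are all assumptions you would replace with your own catalog entries.

```python
from dataclasses import dataclass
from datetime import time
from typing import Optional

# Hypothetical event shape; field names and event-type codes are illustrative,
# not taken from any real PACS vendor API.
@dataclass
class BadgeEvent:
    door_id: str
    event_type: str   # e.g. "DOOR_FORCED", "ACCESS_DENIED"
    local_time: time
    zone: str         # e.g. "restricted", "general"

def is_intrusion(event: BadgeEvent) -> Optional[str]:
    """Return the matched intrusion-condition name, or None if no rule fires.

    Each rule follows IF (signal) AND (context) THEN (condition), so an
    assessor can replay the same logic against a sampled log extract.
    """
    # Illustrative "closed hours" context: 20:00 through 06:00 local time.
    closed_hours = event.local_time >= time(20, 0) or event.local_time < time(6, 0)

    if event.event_type == "DOOR_FORCED":
        return "door-forced-open"
    if event.event_type == "ACCESS_DENIED" and closed_hours and event.zone == "restricted":
        return "after-hours-denied-access-restricted-zone"
    return None
```

Because the rule is a pure function of the event and its context, the same sample log always produces the same condition list, which is exactly the determinism assessors test for.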

Step 3: Define automated responses (pre-approved, safe, and staged)

For each condition, define:

  • Primary automated response (what starts automatically)
  • Secondary response (what escalates if not acknowledged)
  • Safety rails (what the automation must not do without human confirmation)

Common automated responses:

  • Generate an alarm and notify monitoring (on-site guard, SOC, or third-party monitoring)
  • Lock or unlock specific doors based on life-safety rules
  • Start/zoom camera recording on relevant zones
  • Create an incident ticket with required fields pre-populated (location, device, time, access logs)
  • Trigger a SOC runbook step (for example, correlate with logical access anomalies)

Practical decision matrix (use in design reviews):

| Intrusion condition | Automation allowed? | Default automated response | Requires human confirm? | Evidence source |
| --- | --- | --- | --- | --- |
| Door forced open | Yes | Alarm + camera focus + ticket | No (notify immediately) | Alarm logs, VMS logs, ticket |
| Repeated badge failures | Yes | Alert + correlate to user | Sometimes | Access logs, SIEM |
| Sensor offline/tamper | Yes | Alert + maintenance ticket | No | Health monitoring logs |

Step 4: Implement mechanisms (show the “using” part)

“Mechanisms” should be explicitly named and configured, such as:

  • Physical Access Control System (PACS) rules and alerts
  • Alarm panels and monitoring station integrations
  • Video Management System (VMS) analytics and event triggers
  • Central logging/SIEM ingestion of PACS/VMS/alarm events
  • SOAR or workflow tool that auto-creates incidents and pages on-call

Your assessor will ask: “Where is the rule configured, and what proves it fired?” So capture:

  • Configuration exports or screenshots
  • Integration diagrams (PACS→SIEM, VMS→ticketing)
  • Message formats and routing (email/SMS/pager, API calls, webhook logs)
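The ticket-automation mechanism can be as simple as building a structured payload and POSTing it to your ticketing system's webhook. The sketch below only builds the payload; the field names and the attachment list are illustrative assumptions, not any real ticketing API's schema.

```python
import json
from datetime import datetime, timezone

def build_incident_payload(condition: str, device_id: str, location: str,
                           event_time: datetime) -> str:
    """Build the auto-ticket body for a recognized intrusion condition.

    Field names are illustrative; map them to what your ticketing system's
    API actually expects, then POST the result to its webhook. The webhook
    delivery log of that POST becomes part of your evidence trail.
    """
    payload = {
        "title": f"[PE-6(2)] {condition} at {location}",
        "source": "pacs",
        "device_id": device_id,
        "location": location,
        # Normalize to UTC so alarm, VMS, and ticket timestamps reconcile.
        "detected_at": event_time.astimezone(timezone.utc).isoformat(),
        # Pre-populated list of evidence to attach, per the control design.
        "attach": ["alarm_log", "vms_clip", "access_log"],
    }
    return json.dumps(payload)
```

Pre-populating location, device, and time in the payload is what makes the ticket itself usable as operational evidence without manual enrichment.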

Step 5: Test the automation (prove recognition and response)

Design tests that mirror each defined condition:

  • Execute a controlled door-held-open event
  • Simulate tamper (where safe and permitted)
  • Use a test badge to trigger invalid attempts
  • Validate that the automated response occurred (alarm + ticket + notification) and that logs match timestamps

Record: test steps, expected results, actual results, issues, and fixes.
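When reconciling logs after a controlled test, the core check is that the automated response fired within an acceptable window of the trigger. A minimal sketch, assuming a 2-minute threshold (an illustrative value, not anything mandated by the control text):

```python
from datetime import datetime, timedelta

def response_within_window(trigger_ts: datetime, action_ts: datetime,
                           max_delay: timedelta = timedelta(minutes=2)) -> bool:
    """True if the automated response fired within the allowed window after
    the intrusion trigger.

    Used when reconciling alarm-log timestamps against the auto-created
    ticket or notification during a controlled test. A negative delta
    (action before trigger) signals a clock-sync or event-ordering problem
    and fails the check.
    """
    delta = action_ts - trigger_ts
    return timedelta(0) <= delta <= max_delay
```

Running this across every trigger/action pair from a test session gives you a pass/fail table you can file directly with the test record.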

Step 6: Operationalize recurring reviews (keep it working)

Automation drifts. Badge readers fail, cameras get replaced, integrations break.

  • Schedule periodic review of: top intrusion alerts, false positives, response times, and mechanism health.
  • Review third-party monitoring performance and escalation quality if outsourced.
  • Track changes: any remodel, PACS upgrade, VMS upgrade, or network segmentation change should trigger a re-test.

How Daydream fits without adding complexity: Daydream can map PE-6(2) to a named owner, a documented procedure, and a recurring evidence set so you stop rebuilding the audit package each cycle. That directly addresses the common risk factor: missing implementation evidence. 1

Required evidence and artifacts to retain (assessor-ready checklist)

Keep evidence that covers design, implementation, and operating effectiveness:

Design / governance

  • PE-6(2) implementation statement: scope, owners, systems
  • Intrusion conditions catalog with trigger logic and mappings
  • Response playbooks/runbooks with approvals (security + facilities + safety)

Technical implementation

  • PACS/VMS/alarm configuration exports or screenshots showing rules
  • Integration documentation (event routing, SIEM ingestion, ticket automation)
  • Mechanism inventory (systems and versions, where relevant to your environment)

Operational evidence

  • Alert/event logs showing recognized conditions and triggered responses
  • Tickets/incidents generated automatically with timestamps
  • Notification records (paging, email logs) as applicable
  • Test records and remediation notes
  • Review meeting notes or metrics summaries (false positives, sensor health)

Common exam/audit questions and hangups

  1. “What intrusion conditions did you define?” Auditors want a bounded list, not “any intrusion.”
  2. “Show me the automation.” They will trace one event from sensor to response initiation.
  3. “Is it truly automated, or just an email?” Email-only can pass if it is your defined automated response, but you must show it is automatic and reliable.
  4. “What happens on failure?” They look for monitoring of sensor health and integration failure.
  5. “How do you prevent unsafe lockouts?” Life-safety and egress controls must be addressed in the playbook approvals.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Vague definitions (“suspicious activity”).
    Fix: Write deterministic conditions tied to measurable signals (badge log event types, alarm codes). 1

  • Mistake: Automation exists, but no one can produce evidence quickly.
    Fix: Establish a standard evidence bundle: monthly event samples, config exports after changes, and test results after material updates.

  • Mistake: Single point of failure (integration breaks silently).
    Fix: Add mechanism health monitoring and alerting for “no events received,” device offline, or connector failure.

  • Mistake: Over-aggressive auto-responses that disrupt operations.
    Fix: Use staged response (alert first, then lock down only for high-confidence triggers), and document safety rails.

  • Mistake: Assuming the colocation provider “covers it.”
    Fix: Treat the provider as a third-party dependency. Contract for logs, incident notifications, and test participation, then retain those artifacts.
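The "no events received" health check called out above can be sketched as a simple silence detector over the last-seen timestamp per sensor. The 24-hour default is an illustrative threshold you would tune per device class:

```python
from datetime import datetime, timedelta
from typing import Dict, List

def silent_sensors(last_event: Dict[str, datetime], now: datetime,
                   max_quiet: timedelta = timedelta(hours=24)) -> List[str]:
    """Return IDs of sensors that have produced no events within max_quiet.

    A broken connector fails silently, so the absence of data must itself
    raise an alert; feed this result into the same alerting path as the
    intrusion conditions so a dead integration pages someone too.
    """
    return sorted(sid for sid, ts in last_event.items() if now - ts > max_quiet)
```

Scheduling this check (and logging its results) is itself evidence of mechanism health monitoring when the assessor asks “what happens on failure?”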

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement, so you should plan for assessment scrutiny rather than case-law driven expectations. The real risk is operational: delayed detection of physical intrusion and lack of provable, consistent response. In federal assessments, weak evidence often becomes a finding even when controls exist informally. 1

Practical 30/60/90-day execution plan

First 30 days (foundation)

  • Assign control owner and technical owner; confirm in-scope spaces and systems.
  • Draft intrusion conditions catalog and response matrix; get security/facilities sign-off.
  • Identify mechanisms currently in place (PACS, VMS, alarms, monitoring center) and gaps in integrations/logging.

Days 31–60 (build and document automation)

  • Configure or refine rules for each intrusion condition in PACS/VMS/alarm tools.
  • Implement alert routing and ticket automation (even basic auto-ticketing is acceptable if documented as the response).
  • Stand up a central evidence location: config exports, log samples, and runbooks with version control.

Days 61–90 (test, tune, and make it repeatable)

  • Run controlled tests for each condition; record results and fix failures.
  • Tune for false positives; add safety rails and human confirmations where needed.
  • Establish recurring review cadence and triggers for re-testing after changes; automate evidence collection where practical (for example, scheduled log exports).

Frequently Asked Questions

Does PE-6(2) require a SIEM or SOAR platform?

No specific tool is mandated; the control requires automated recognition and automated initiation of responses using your defined mechanisms. A SIEM/SOAR can help with correlation and evidence, but PACS/VMS/alarm-native automation can also meet the requirement if documented. 1

What counts as an “automated response” for PE-6(2)?

Any pre-defined action that starts automatically after the intrusion condition is recognized, such as triggering an alarm, starting camera recording, sending a page, or creating an incident ticket. The key is that it is automatic, defined, and evidenced. 1

We outsource monitoring to a third party. Can we still comply?

Yes, but you must show the automation chain still works end-to-end and that you can obtain logs and incident records from the third party. Put evidence access and notification expectations into the contract and keep the artifacts.

How do we handle false positives without failing the control?

Document thresholds and staged responses, then show tuning and review. Assessors react well to controls that are precise, tested, and continuously adjusted rather than overly broad rules that operators ignore.

Do we have to automatically lock doors on every intrusion event?

No. Your response must be defined and appropriate to the condition and safety constraints. Many programs start with automated alerting and ticketing, then add lockdown only for specific high-confidence scenarios.

What evidence is the fastest to produce during an audit?

A configuration snapshot of the rule, one or more event logs showing the trigger fired, and the corresponding automatically created ticket/notification. Package these by intrusion condition so the assessor can trace the chain quickly.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5 (OSCAL JSON)


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream