03.14.06: System Monitoring
To meet the 03.14.06 (System Monitoring) requirement, you must continuously monitor in-scope systems that process, store, or transmit CUI, detect suspicious events, and act on them with documented procedures and retained evidence. Operationalize it by defining monitoring scope, centralizing logs, tuning detections, triaging alerts, and proving ongoing operation through repeatable records. 1
Key takeaways:
- Scope monitoring to the CUI environment first, then expand to supporting services and identity layers. 1
- “Monitoring” must produce actions and evidence, not just raw logs. 1
- Build an audit-ready package: architecture, log sources, detections, triage workflow, and recurring proof of review. 1
The 03.14.06 system monitoring requirement is a practical control: can you see what is happening across the systems that handle CUI, and can you show that you regularly look at that telemetry and respond? Assessors typically fail teams here for two reasons: the monitoring stack is incomplete (key log sources are missing or not retained), or monitoring exists but cannot be demonstrated as an operating process (no alert handling records, no tuning history, no review cadence).
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat 03.14.06 as an evidence-driven operational requirement. Start with a crisp definition of the “system” you are monitoring (your CUI boundary and dependencies), then map required telemetry to concrete log sources, routes, and retention. After that, document how alerts become tickets, how investigation is performed, what “review” means, and how exceptions are handled.
This page gives requirement-level implementation guidance you can execute quickly: who owns what, what to deploy or configure, what evidence to retain, and how to avoid common assessment traps. It is written to support NIST SP 800-171 Rev. 3 alignment for nonfederal systems handling CUI. 1
Regulatory text
Excerpt (as provided): “NIST SP 800-171 Rev. 3 requirement 03.14.06 (System Monitoring).” 1
Operator interpretation of what you must do:
You must implement ongoing monitoring of in-scope systems to detect conditions that could indicate misuse, compromise, or control failure, and you must be able to prove that monitoring is configured, operating, and acted on. “System monitoring” is not satisfied by having logs available somewhere; it requires (1) defined scope, (2) collection and analysis, (3) response workflow, and (4) repeatable evidence. 1
Plain-English interpretation
- What it means: You need enough visibility across your CUI environment to detect suspicious activity and investigate it promptly. 1
- What auditors look for: Documented monitoring coverage (what you collect, from where), alerting/detection content, and proof that people review and respond. 1
- What “good” looks like in practice: Centralized telemetry (SIEM or managed detection platform), high-signal detections aligned to your environment, an owned triage process, and a steady stream of dated artifacts that show operation. 1
Who it applies to
Entity types: Federal contractors and other organizations operating nonfederal systems handling CUI. 1
Operational context (what’s in scope):
- Endpoints, servers, and network/security devices in the CUI enclave or boundary.
- Identity infrastructure used to authenticate to CUI systems (IdP, directory services, MFA, VPN).
- Cloud control planes and SaaS audit logs for any cloud services in the CUI environment.
- Shared services that can affect confidentiality or integrity of CUI (time sync, DNS, email gateways, bastions).
All scoping decisions must be explicit and defensible because “we didn’t think that was in scope” is a common assessment failure mode. 1
What you actually need to do (step-by-step)
1) Define monitoring scope and ownership
- Document the CUI boundary and list systems that store/process/transmit CUI, plus identity and admin paths into those systems. 1
- Assign accountable owners:
- Control owner: usually Security Operations (or IT if small org).
- Evidence owner: GRC or compliance operations.
- System owners: for each major platform (endpoints, cloud, network).
- Write a short System Monitoring Standard: what is monitored, minimum log sources, alert review expectations, and escalation path. Keep it implementable.
2) Build a log source inventory and close gaps
Create a table and treat it as a living artifact.
Minimum log source coverage (practical baseline):
- Identity: authentication successes/failures, MFA events, privileged role changes.
- Endpoints: EDR alerts, process execution, malware/quarantine events.
- Servers: OS security logs, admin access, service account behavior.
- Network/security: firewall allow/deny, VPN, IDS/IPS, proxy, DNS.
- Cloud/SaaS (if used for CUI): admin actions, API calls, storage access, key management events.
For each source, record:
- Where logs are generated
- How they are forwarded/collected
- Parsing/normalization status
- Retention location
- Responsible team
This inventory becomes your assessment map for the 03.14.06 system monitoring requirement. 1
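The inventory above is easiest to keep current when it lives as structured data rather than a static document, because coverage gaps can then be checked by script instead of by eye. A minimal sketch, assuming the five baseline categories listed earlier; the source names, field names, and entries are illustrative, not prescribed by the requirement:

```python
# Log source inventory as structured data, so coverage gaps can be
# detected by script. Entries below are illustrative examples only.

REQUIRED_CATEGORIES = {"identity", "endpoint", "server", "network", "cloud"}

inventory = [
    {"source": "IdP authentication log", "category": "identity",
     "collection": "API pull to SIEM", "retention": "SIEM hot storage",
     "owner": "IAM team"},
    {"source": "EDR alerts", "category": "endpoint",
     "collection": "vendor connector", "retention": "SIEM hot storage",
     "owner": "SecOps"},
    {"source": "Firewall allow/deny", "category": "network",
     "collection": "syslog forwarder", "retention": "log archive",
     "owner": "Network team"},
]

def coverage_gaps(entries, required=REQUIRED_CATEGORIES):
    """Return required log categories with no collected source."""
    covered = {e["category"] for e in entries}
    return sorted(required - covered)

# Gaps feed the exceptions register until remediated.
print(coverage_gaps(inventory))  # ['cloud', 'server']
```

Running the gap check on a schedule (and keeping its output) doubles as evidence that the inventory is a living artifact.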
3) Centralize collection and protect integrity
- Forward logs to a centralized platform (SIEM, MDR portal, or log analytics).
- Restrict access to logs (security team, limited admins).
- Ensure time synchronization across sources so correlations work.
- Define retention aligned to your risk and contract needs, then implement it consistently.
Assessors commonly ask whether logs can be altered or deleted by the same admins you are monitoring. Separate duties where practical and document compensating controls where not. 1
4) Define detections tied to real threats against CUI
Write “monitoring use cases” that turn data into action. Start with a small set you can operate well:
- Suspicious logins (impossible travel patterns, repeated failures, new geo, disabled MFA)
- Privilege escalation and new admin creation
- New or unusual remote access paths into the enclave
- Large data transfers or abnormal access to CUI repositories
- Endpoint compromise indicators on systems that access CUI
- Changes to logging configuration or agent removal (monitor the monitoring)
For each detection, document:
- Data sources required
- Trigger condition
- Severity
- Triage steps
- Escalation criteria
- False-positive tuning notes
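One way to keep the detection catalog consistent is to record each use case as structured data next to its trigger logic, so reviewers can see condition, severity, and triage steps in one place. A hedged sketch of a single use case (the threshold, field names, and event shape are illustrative assumptions, not part of the requirement):

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One documented monitoring use case from the catalog."""
    name: str
    data_sources: list
    severity: str
    triage_steps: list
    escalation: str
    tuning_notes: list = field(default_factory=list)

# Illustrative use case: repeated authentication failures per user.
repeated_failures = Detection(
    name="repeated-auth-failures",
    data_sources=["IdP authentication log"],
    severity="medium",
    triage_steps=["confirm the account", "check source IP and geo",
                  "check MFA state"],
    escalation="escalate if followed by a success from a new location",
)

FAILURE_THRESHOLD = 10  # illustrative; record changes in tuning_notes

def evaluate(events, threshold=FAILURE_THRESHOLD):
    """Return users whose failure count meets the trigger condition.
    Events are dicts like {"user": ..., "outcome": "failure"}."""
    counts = Counter(e["user"] for e in events if e["outcome"] == "failure")
    return sorted(u for u, n in counts.items() if n >= threshold)

events = ([{"user": "alice", "outcome": "failure"}] * 10
          + [{"user": "bob", "outcome": "failure"}] * 2)
print(evaluate(events))  # ['alice']
```

Whether the trigger runs in a SIEM query language or in code, keeping the documented condition this explicit makes tuning changes reviewable.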
5) Operationalize alert triage and investigation
Create a workflow that produces artifacts:
- Alert arrives in SIEM/MDR.
- Analyst (or IT on-call) classifies: benign, suspicious, confirmed incident.
- Create a ticket with required fields (system, user, timestamp, evidence links, disposition).
- If incident: follow incident handling playbook, preserve evidence, perform containment, document closure.
If your organization does not run a SOC, use an MDR or shared on-call rotation. The requirement is operational, not organizational. 1
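The workflow above hinges on every alert leaving a complete artifact behind. A minimal sketch of enforcing the required ticket fields at creation time, assuming a simple dict-based alert shape; the field set and disposition labels mirror the steps above but the function and names are hypothetical:

```python
from datetime import datetime, timezone

DISPOSITIONS = {"benign", "suspicious", "confirmed_incident"}
REQUIRED_FIELDS = {"system", "user", "timestamp", "evidence_links",
                   "disposition"}

def make_ticket(alert, disposition, evidence_links):
    """Turn a raw alert into a triage ticket, rejecting incomplete
    records so every triaged alert produces a usable artifact."""
    if disposition not in DISPOSITIONS:
        raise ValueError(f"unknown disposition: {disposition}")
    ticket = {
        "system": alert["system"],
        "user": alert.get("user", "n/a"),
        "timestamp": alert["timestamp"],
        "evidence_links": list(evidence_links),
        "disposition": disposition,
        "triaged_at": datetime.now(timezone.utc).isoformat(),
    }
    missing = REQUIRED_FIELDS - {k for k, v in ticket.items() if v}
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return ticket

alert = {"system": "cui-file-server", "user": "alice",
         "timestamp": "2024-05-01T12:00:00Z"}
t = make_ticket(alert, "suspicious", ["https://siem.example/alert/123"])
print(t["disposition"])  # suspicious
```

Most ticketing systems can enforce the same constraint with required fields and a fixed disposition picklist; the point is that incomplete triage records are rejected, not silently saved.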
6) Prove recurring review (this is where audits get real)
Define what “review” means and retain proof:
- Daily/regular alert queue review (tickets or MDR reports)
- Periodic review of coverage and detections (change log, tuning notes)
- Periodic health checks (agent coverage, ingestion failures, dropped logs)
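The health-check item above is the one most often skipped because ingestion failures are silent. A minimal sketch of a "no logs received" check, assuming you can query the newest event timestamp per source; the source names and silence thresholds are illustrative:

```python
from datetime import datetime, timedelta, timezone

# Maximum tolerated silence per source before raising an
# ingestion-health finding. Thresholds are illustrative; tune
# them to each source's normal event rate.
MAX_SILENCE = {
    "idp-auth": timedelta(minutes=30),
    "edr-alerts": timedelta(hours=1),
    "firewall": timedelta(minutes=15),
}

def stale_sources(last_seen, now=None, limits=MAX_SILENCE):
    """Return sources whose newest event is older than its limit:
    candidates for a 'no logs received' alert and a health ticket."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        src for src, ts in last_seen.items()
        if now - ts > limits.get(src, timedelta(hours=1))
    )

now = datetime(2024, 5, 1, 12, 0, tzinfo=timezone.utc)
last_seen = {
    "idp-auth": now - timedelta(minutes=5),
    "edr-alerts": now - timedelta(hours=3),   # quiet too long
    "firewall": now - timedelta(minutes=10),
}
print(stale_sources(last_seen, now=now))  # ['edr-alerts']
```

Retained output from a scheduled run of a check like this is exactly the "ingestion health evidence" assessors ask for.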
Daydream (as a GRC system) fits well here as the place to map 03.14.06 to your monitoring standard, log source inventory, and a recurring evidence schedule so you do not rebuild the audit package every cycle.
Required evidence and artifacts to retain
Maintain an audit-ready folder (or control record) with:
- System Monitoring Policy/Standard mapped to 03.14.06. 1
- Scope statement for CUI boundary and monitored assets.
- Log source inventory with collection method and owners.
- Architecture diagram showing log flow from sources to SIEM/MDR.
- Configuration evidence: screenshots/exports of enabled audit logs, forwarding rules, agent deployment status.
- Detection catalog (use cases) with change history.
- Alert/ticket samples showing triage, investigation notes, and closure rationale.
- Review evidence: queue review records, weekly/monthly summaries, ingestion health reports.
- Exceptions register: what is not monitored yet, risk acceptance, compensating controls, target remediation.
Common exam/audit questions and hangups
- “Show me what systems are in scope for monitoring and why.” Expect pushback if scope is implied rather than documented. 1
- “Which log sources are collected from endpoints, identity, network, and cloud?” Missing identity or cloud admin logs is a frequent gap.
- “How do you know monitoring is working?” You need ingestion health evidence, not only alert screenshots.
- “Show evidence you reviewed alerts and acted.” Raw SIEM dashboards without tickets rarely satisfy.
- “Who can disable logging, and how would you detect it?” Address this directly with access controls and a detection for logging changes.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating logging as monitoring.
  Fix: Define detections and a triage workflow with tickets and dispositions. 1
- Mistake: Monitoring excludes identity.
  Fix: Make IdP/directory logs first-class sources; many compromises start with credentials.
- Mistake: No proof of recurring operation.
  Fix: Schedule recurring evidence capture (queue review exports, MDR monthly reports, health dashboards) and store it consistently.
- Mistake: Over-alerting that trains the team to ignore the console.
  Fix: Start with fewer, higher-confidence detections; tune with documented rationale.
- Mistake: Ingestion breaks quietly.
  Fix: Add ingestion-failure alerts and a defined response for "no logs received" conditions.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this specific requirement. Practically, weak system monitoring increases the chance that a CUI incident becomes prolonged, harder to investigate, and harder to report with confidence because you cannot reconstruct timeline and impact. That becomes a contractual and assessment risk even where formal penalties are not specified in the provided materials. 1
Practical execution plan (30/60/90-day)
First 30 days (stabilize scope and telemetry)
- Define the CUI monitoring scope, owners, and written monitoring standard. 1
- Build the log source inventory and identify missing sources.
- Turn on and forward high-value logs: identity, endpoints, critical servers, perimeter/VPN.
- Stand up a basic triage ticket workflow and required fields.
By 60 days (detections and repeatable operations)
- Implement a small detection set aligned to common attack paths into CUI.
- Establish alert severity definitions and escalation criteria.
- Start recurring evidence capture: weekly alert review export, ingestion health report, and a tuning/change record.
- Run a tabletop test: simulate one alert through triage to closure and retain artifacts.
By 90 days (coverage, tuning, and assessment readiness)
- Expand coverage to remaining systems and cloud/SaaS admin logs in the CUI environment.
- Add monitoring for monitoring (agent removal, logging disabled, ingestion stopped).
- Review exceptions, document compensating controls, and set remediation commitments.
- Package evidence into an assessor-ready control binder in Daydream: mapping, artifacts, and an evidence calendar tied to 03.14.06. 1
Frequently Asked Questions
Does 03.14.06 require a SIEM?
NIST SP 800-171 Rev. 3 does not prescribe a specific tool in the provided excerpt, but you must centralize and operate monitoring in a way you can evidence. Many teams meet this with a SIEM or MDR portal plus a ticketing system. 1
What systems should be monitored first for CUI environments?
Start with identity, endpoints that access CUI, and the servers or repositories where CUI lives. Then add network security controls and cloud control plane logs for any cloud services in the CUI scope. 1
What evidence satisfies auditors that monitoring is “performed”?
Tickets or cases showing alerts, investigation notes, decisions, and closure are the most persuasive. Pair them with ingestion health records and a log source inventory that proves coverage. 1
We have an MSP. Can they own monitoring and response?
Yes, but you still need clear responsibility, SLAs/expectations, and direct access to evidence. Keep copies of MDR/MSP reports and a record of escalations and your internal decisions. 1
How do we handle systems that cannot forward logs?
Record the exception, document compensating controls (segmentation, restricted admin access, added endpoint telemetry), and track a remediation plan. Auditors mainly object when the gap is undocumented or indefinite. 1
What is the quickest way to get assessment-ready for 03.14.06?
Produce a single control package that ties scope, log sources, detections, and alert-handling evidence together, then keep it current with a recurring evidence schedule. Daydream helps by keeping the mapping and evidence collection in one place. 1
Footnotes
1. NIST SP 800-171 Rev. 3, requirement 03.14.06 (System Monitoring).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream