TSC-CC7.2 Guidance
To meet the TSC-CC7.2 guidance requirement, you must run and document monitoring that can detect anomalies across in-scope system components (infrastructure, applications, databases, and security tooling), then prove the monitoring actually operated during the audit period. Auditors will look for defined alert coverage, consistent review/triage, and retained evidence showing anomalies were identified and handled.
Key takeaways:
- Monitoring must cover system components and their operation, not just perimeter security logs.
- You need documented procedures + operational evidence (alerts, reviews, tickets, and tuning decisions).
- The fastest path is an asset-to-telemetry map, alert standards, and a lightweight review cadence with audit-ready records.
TSC-CC7.2 sits in the SOC 2 Common Criteria “System Operations” family and tests a simple operational truth: you cannot respond to what you do not observe. The requirement is narrower than “have a SIEM” and broader than “collect logs.” It expects you to monitor the components that make up your system and how those components behave, with enough signal quality to spot anomalies that could indicate failures, misuse, or security events.
For a CCO, compliance officer, or GRC lead, the operational challenge is usually not choosing tools. It’s translating a general requirement into: (1) clear monitoring expectations per component type, (2) ownership and review routines that actually happen, and (3) evidence that an auditor can test without reverse-engineering your environment.
This page breaks TSC-CC7.2 into implementable actions: define scope, map components to telemetry, set anomaly detection and alert handling standards, operationalize review and escalation, and retain artifacts that prove the control ran during the audit period. Source: AICPA Trust Services Criteria 2017 (TSC-CC7.2) (AICPA TSC 2017, 2017).
Regulatory text
Excerpt (TSC-CC7.2): “The entity monitors system components and the operation of those components for anomalies.” (AICPA TSC 2017, 2017)
Operator meaning (what you must do)
You must implement monitoring that:
- Covers in-scope system components (not just corporate IT, and not just security devices).
- Observes operation/behavior (availability, performance, errors, configuration drift, unusual access, and suspicious system activity).
- Detects anomalies (unexpected patterns that merit investigation).
- Produces evidence that monitoring ran, anomalies were surfaced, and your team reviewed and handled them.
Auditors typically test this by sampling alerts and tracing them to triage actions, or by selecting key components and asking you to show the telemetry and review process for each.
Plain-English interpretation of the requirement
TSC-CC7.2 expects an “early warning system” for your production environment. If a database starts throwing unusual errors, if a service begins restarting, if admin access spikes, or if an endpoint protection agent stops reporting, your monitoring should catch it and route it to a human (or an automated response) with documented follow-through.
“Anomalies” does not require machine learning. In practice, auditors accept rules, thresholds, baselines, and correlation logic if they are reasonable for your system and you can show they are reviewed and tuned.
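To make the "rules and thresholds are enough" point concrete, here is a minimal sketch of a baseline-deviation rule. Everything in it (the function name, the error-count example, the three-sigma threshold) is illustrative, not anything the criterion prescribes:

```python
from statistics import mean, stdev

def is_anomalous(history, value, sigmas=3.0):
    """Flag `value` if it deviates from the baseline (mean of `history`)
    by more than `sigmas` standard deviations. Needs >= 2 samples."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mu = mean(history)
    sd = stdev(history)
    if sd == 0:
        return value != mu  # any change from a flat baseline is anomalous
    return abs(value - mu) > sigmas * sd

# Example: error counts per minute, with one sudden spike
baseline = [4, 5, 6, 5, 4, 5, 6, 5]
print(is_anomalous(baseline, 5))   # within normal variation
print(is_anomalous(baseline, 40))  # spike worth investigating
```

A rule this simple is auditable precisely because it is explicit: the threshold, the baseline window, and any tuning changes can all be documented.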
Who it applies to (entity and operational context)
Applies to: Any organization undergoing a SOC 2 audit that includes the Common Criteria, for the systems in scope of the report. (AICPA TSC 2017, 2017)
Operational contexts where this control is heavily tested:
- SaaS and cloud-native services (AWS/Azure/GCP), especially production environments
- Regulated customer segments (financial services, healthcare, enterprise procurement) where monitoring maturity is a gating item
- Environments with shared responsibility boundaries (cloud provider, managed service providers, third parties)
Typical in-scope component categories:
- Cloud accounts/subscriptions/projects, IAM, and control plane logs
- Network components (VPC/VNET flow logs, firewalls, load balancers)
- Compute (VMs, containers, Kubernetes nodes/control plane)
- Applications and APIs (service logs, APM traces, error monitoring)
- Datastores (DB audit logs, performance metrics, replication health)
- CI/CD and configuration management (build/deploy logs, drift detection)
- Security tooling (EDR, vulnerability scanners, email security) with health/coverage monitoring
What you actually need to do (step-by-step)
1) Define monitoring scope that matches your SOC 2 system description
- List the production environments, major services, and supporting components in scope.
- Decide what is explicitly out of scope (and why).
- Assign owners: Security owns detection logic; Engineering/SRE owns service health; GRC owns evidence hygiene.
Deliverable: “Monitoring scope statement” aligned to your SOC 2 boundaries.
2) Build an asset-to-telemetry map (the control’s backbone)
Create a table that maps each component type to:
- Telemetry sources (logs/metrics/traces/events)
- Where data lands (SIEM, log platform, APM, cloud-native tools)
- Detection coverage (key alerts/use cases)
- Review owner (role/team)
- Retention location (where evidence can be exported)
Example (keep it short, but complete):
| Component | Telemetry | Example anomaly signals | Owner | Evidence |
|---|---|---|---|---|
| Cloud IAM | Cloud audit logs | Unusual admin role grants; denied auth spikes | Security | SIEM alerts + ticket |
| API service | APM + app logs | Error rate spike; latency regression | SRE | APM incident + postmortem |
| Database | DB metrics + audit logs | Replication lag; unusual queries | SRE/Data | Alert + investigation notes |
Why auditors care: this map proves you didn’t just monitor whatever the tools happened to collect; it shows an intentional coverage design.
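One way to keep the map audit-ready is to store it as structured data and check it for completeness automatically. A minimal sketch, assuming illustrative field names (nothing here is prescribed by the criterion):

```python
# Asset-to-telemetry map as structured data: every in-scope component
# must declare telemetry, anomaly signals, an owner, and an evidence location.
REQUIRED_FIELDS = ("telemetry", "anomaly_signals", "owner", "evidence")

telemetry_map = {
    "cloud-iam": {
        "telemetry": ["cloud audit logs"],
        "anomaly_signals": ["unusual admin role grants", "denied auth spikes"],
        "owner": "Security",
        "evidence": "SIEM alerts + ticket",
    },
    "api-service": {
        "telemetry": ["APM", "app logs"],
        "anomaly_signals": ["error rate spike", "latency regression"],
        "owner": "SRE",
        "evidence": "APM incident + postmortem",
    },
}

def coverage_gaps(mapping):
    """Return (component, missing_field) pairs for incomplete entries."""
    gaps = []
    for component, entry in mapping.items():
        for field in REQUIRED_FIELDS:
            if not entry.get(field):
                gaps.append((component, field))
    return gaps

print(coverage_gaps(telemetry_map))  # empty list when the map is complete
```

Running a check like this before each review cycle gives you a standing answer to "is the map still complete?" without manual inspection.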
3) Define “anomaly” in your environment with alert standards
Write a short procedure that specifies:
- What constitutes an anomaly (threshold, baseline deviation, known-bad patterns)
- Severity levels and routing rules (paging vs. ticket vs. daily review)
- Expected triage steps and documentation requirements
- False-positive tuning expectations
Keep the standard operational:
- Every alert has an owner and a destination (queue/channel/on-call).
- Every meaningful alert results in a record (ticket/incident) or a documented close reason.
Artifact: Monitoring & anomaly detection procedure (policy-level is fine; procedure is better).
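The routing rules in such a procedure are simple enough to encode directly, which also makes them testable. A hedged sketch, where the severity names and destinations are examples rather than requirements:

```python
# Route alerts by severity: page on-call for critical, open a ticket for
# high, and queue everything else for the daily review.
ROUTING = {
    "critical": "page-oncall",
    "high": "open-ticket",
    "medium": "daily-review-queue",
    "low": "daily-review-queue",
}

def route_alert(severity):
    """Every alert must have a destination; an unmapped severity fails
    loudly rather than being silently dropped."""
    try:
        return ROUTING[severity]
    except KeyError:
        raise ValueError(f"unmapped severity: {severity!r}")
```

Failing loudly on unmapped severities mirrors the procedural rule above: every alert has an owner and a destination, with no silent drops.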
4) Implement monitoring and review routines that are easy to prove
Audits reward consistency. Pick routines you can execute without heroics:
- Real-time alerting: paging/on-call for high severity service health and security alerts.
- Daily review: review a defined alert queue and log exceptions.
- Weekly review: trends, noisy rules, coverage gaps, and disabled telemetry checks.
- Periodic assessment: validate sensors are still deployed and logs are still flowing after major changes.
Practical tip: add “monitoring health” alerts (agent stopped reporting, log ingestion dropped, audit logs disabled). Those are anomalies too, and they are easy to evidence.
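A monitoring-health check can be as simple as comparing each agent's last heartbeat against a staleness threshold. An illustrative sketch, where the agent names and the 15-minute threshold are assumptions to tune for your environment:

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=15)  # assumed threshold; tune to your environment

def stale_agents(last_seen, now=None):
    """Return agents whose last heartbeat is older than STALE_AFTER.
    `last_seen` maps agent name -> last heartbeat timestamp (UTC)."""
    now = now or datetime.now(timezone.utc)
    return sorted(
        name for name, ts in last_seen.items() if now - ts > STALE_AFTER
    )

# Example: one healthy agent, one that stopped reporting an hour ago
now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
heartbeats = {
    "edr-host-01": now - timedelta(minutes=2),
    "edr-host-02": now - timedelta(hours=1),
}
print(stale_agents(heartbeats, now))  # ['edr-host-02']
```

The output of a check like this doubles as evidence: a dated list of stale agents, plus the ticket opened to restore coverage, is exactly the "monitoring the monitoring" artifact described above.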
5) Tie anomalies to your incident management and change management workflows
For each anomaly that crosses a defined threshold:
- Open an incident or investigation ticket.
- Link supporting data (alert ID, dashboards, log queries).
- Record disposition (benign, false positive, confirmed issue) and next steps.
- If the fix requires a change, link the change record.
This linkage is what turns “we have alerts” into “we operate a control.”
6) Make evidence exportable and retention predictable
Decide where auditors will pull proof from:
- SIEM alert exports
- Ticketing system records
- On-call logs
- APM incident timelines
- Post-incident reviews/postmortems
Set retention that meets your audit period needs and allows sampling across the full window. (AICPA TSC 2017, 2017)
7) Test control effectiveness (before the auditor does)
Run an internal mini-test:
- Pick a sample of alerts across the period.
- Confirm each has triage evidence, timestamps, and closure rationale.
- Identify gaps: missing tickets, missing review logs, unclear severities, disabled telemetry.
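The mini-test itself can be scripted: sample alerts across the period and flag any that lack the evidence fields an auditor will ask for. A sketch under assumed record fields (`ticket_id`, `triaged_at`, `closure_reason` are hypothetical names, not a required schema):

```python
import random

REQUIRED_EVIDENCE = ("ticket_id", "triaged_at", "closure_reason")

def audit_sample(alerts, sample_size, seed=0):
    """Randomly sample alerts and report which are missing evidence fields."""
    rng = random.Random(seed)  # fixed seed so the sample is reproducible
    sample = rng.sample(alerts, min(sample_size, len(alerts)))
    findings = []
    for alert in sample:
        missing = [f for f in REQUIRED_EVIDENCE if not alert.get(f)]
        if missing:
            findings.append((alert["id"], missing))
    return findings

alerts = [
    {"id": "A-1", "ticket_id": "T-9", "triaged_at": "2024-03-01",
     "closure_reason": "false positive"},
    {"id": "A-2", "ticket_id": None, "triaged_at": None, "closure_reason": None},
]
print(audit_sample(alerts, 2))  # A-2 is flagged for missing evidence
```

A fixed random seed keeps the sample reproducible, so the same selection can be re-run and shown to the auditor if questions arise.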
If you use Daydream to track control operation, store the mapping table, evidence links, and review attestations in one place so sampling takes minutes instead of days.
Required evidence and artifacts to retain
Auditors generally need design evidence (what you intended) and operating evidence (what happened). Keep both.
Design / configuration artifacts
- Monitoring and anomaly detection policy/procedure aligned to TSC-CC7.2 (AICPA TSC 2017, 2017)
- Asset-to-telemetry map for in-scope systems
- Alert catalogue (rules/use cases) with severity and routing
- Access controls for monitoring tools (who can disable alerts, edit rules)
Operating effectiveness artifacts
- Alert samples with timestamps and payloads (screenshots or exports)
- Ticket/incident records showing triage, investigation notes, and closure
- Daily/weekly review logs (even a lightweight checklist with sign-off)
- Postmortems for major incidents (include monitoring signal and detection time narrative)
- Evidence of tuning decisions (rule change records, false-positive rationale)
- Monitoring health proof (log ingestion dashboards, agent heartbeat reports)
Common exam/audit questions and hangups
Questions you should expect
- “Show us the monitoring coverage for the SOC 2 in-scope system components.” (AICPA TSC 2017, 2017)
- “How do you define an anomaly, and how are alerts prioritized?”
- “Who reviews alerts, how often, and where is that review documented?”
- “Provide samples of alerts and show the investigation workflow end-to-end.”
- “How do you know logs are complete and monitoring tools are working?”
- “What happens when alerts are noisy or false positives occur?”
Hangups that cause delays or findings
- Tool sprawl with unclear ownership (Security vs. SRE vs. IT)
- Monitoring exists, but no one can show review evidence for the entire audit period
- Critical telemetry disabled for parts of the environment (common after migrations)
- Informal triage in chat with no durable record
Frequent implementation mistakes and how to avoid them
- Mistake: Monitoring only security events (SIEM) and ignoring operational anomalies.
  Fix: Include service health (availability, latency, error rates), capacity, job failures, and dependency failures in the control narrative. (AICPA TSC 2017, 2017)
- Mistake: “We collect logs” treated as equivalent to “we monitor for anomalies.”
  Fix: Define alerting/detection logic and show review and disposition evidence.
- Mistake: No monitoring of monitoring.
  Fix: Add alerts for dropped log ingestion, disabled audit logs, expired certificates on logging endpoints, or agent inactivity.
- Mistake: Evidence trapped in dashboards that change.
  Fix: Export alert samples and preserve ticket links and immutable timestamps in your GRC repository (Daydream can serve as the index).
- Mistake: Alerts routed to shared inboxes with no accountable owner.
  Fix: Assign each detection to a team/role and define backup coverage.
Enforcement context and risk implications
SOC 2 is an audit framework rather than an enforcement regime, so the immediate “penalty” is typically commercial: audit exceptions, delayed reports, and customer friction in security reviews. Weak anomaly monitoring also increases the chance that incidents expand in scope before detection, which can trigger contractual notice obligations, regulatory reporting duties, or customer trust issues depending on your business model. (AICPA TSC 2017, 2017)
Practical 30/60/90-day execution plan
Days 0–30: Get to a defensible baseline
- Confirm SOC 2 scope boundaries and list in-scope components.
- Build the asset-to-telemetry map for the top services and security-relevant components.
- Write a short monitoring/anomaly procedure: severity definitions, routing, triage, documentation.
- Identify missing telemetry (no audit logs, no APM, no endpoint coverage) and open remediation tasks.
- Start a lightweight review log (daily or weekly) with named owners.
Exit criteria: you can point to each major component and show “we collect X, alert on Y, and Z team reviews it.”
Days 31–60: Prove operation and reduce audit friction
- Create an alert catalogue with owners and destinations.
- Ensure tickets/incidents are created for meaningful anomalies; standardize close codes.
- Add monitoring health alerts (log flow, agent heartbeat).
- Run an internal test: sample alerts and trace to triage evidence.
- Centralize evidence pointers in Daydream (control description, map, and evidence links).
Exit criteria: you can satisfy a typical auditor sample request quickly without reconstructing history.
Days 61–90: Harden and make it sustainable
- Tune noisy detections; document rationale and approvals for changes.
- Expand coverage to remaining components, including third-party dependencies that can fail your service.
- Add periodic assessment of monitoring coverage after major releases or infrastructure changes.
- Prepare your SOC 2 narrative: how anomalies are detected, reviewed, escalated, and recorded.
Exit criteria: monitoring and review routines survive vacations, reorganizations, and platform changes.
Frequently Asked Questions
What counts as an “anomaly” under TSC-CC7.2?
An anomaly is any unexpected condition in system component behavior that warrants investigation, such as unusual authentication failures, error spikes, or a logging pipeline stopping. The key is that you define what your organization treats as anomalous and show that alerts are reviewed and resolved. (AICPA TSC 2017, 2017)
Do we need a SIEM to satisfy the TSC-CC7.2 guidance requirement?
No specific tool is required. You need effective monitoring across in-scope components, plus evidence that alerts or reviews occur and anomalies are handled. A SIEM can help for security events, but it is not the entire control. (AICPA TSC 2017, 2017)
Our engineers triage in Slack. Is that acceptable evidence?
Chat can support context, but auditors usually need durable records that show review, investigation, and closure. Create a ticket or incident record and link relevant chat threads as supporting material.
How do we handle false positives without creating audit problems?
Track alert dispositions (false positive, expected behavior, confirmed issue) and document tuning decisions with a change record or rule update note. Auditors mainly want to see a controlled process rather than ad hoc disabling of detections.
Does TSC-CC7.2 require 24/7 monitoring?
The criterion does not prescribe a coverage model. Choose review and escalation practices appropriate to your risk and customer commitments, then document them and operate them consistently. (AICPA TSC 2017, 2017)
What’s the minimum evidence set we should have ready for the auditor?
Keep (1) your monitoring/anomaly procedure, (2) an asset-to-telemetry map, and (3) a set of alert-to-ticket samples that span the audit period. Add review logs and tuning records to reduce follow-up questions. (AICPA TSC 2017, 2017)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream