TSC-CC6.8 Guidance
TSC-CC6.8 requires you to implement and operate controls that prevent, detect, and respond to the introduction of unauthorized or malicious software across in-scope systems. To operationalize it fast, combine (1) controlled software introduction (allowlisting and change control), (2) malware prevention/detection (EDR/AV, email/web controls), and (3) documented response actions with retained evidence. 1
Key takeaways:
- Define “authorized software” and enforce it through technical controls plus change management for endpoints, servers, and cloud workloads.
- Prove ongoing operation: monitoring alerts, triage outcomes, exceptions, and periodic reviews matter as much as tooling.
- Auditors look for end-to-end traceability from policy to configuration to alerts to response tickets for the audit period.
TSC-CC6.8 is a SOC 2 Common Criteria requirement focused on malware and unauthorized-software risk: your systems should not accept unknown code silently, and when bad code appears, your team must detect it and act. This criterion is broader than “install antivirus.” It covers the full path by which software enters the environment: user endpoints, servers, CI/CD pipelines, cloud images, browser downloads, email attachments, removable media, and third-party tools. It also covers what you do next: isolate, eradicate, recover, and document.
For a CCO or GRC lead, the fastest path is to translate CC6.8 into three audit-ready control statements: (1) software is introduced through approved channels with guardrails; (2) you monitor for malicious or unauthorized software with defined investigation steps; (3) you retain evidence that the controls ran consistently throughout the audit period. If you already have security operations tooling, the work is usually control design clarity, scope mapping, and evidence packaging rather than new technology.
Criterion text
Excerpt (TSC-CC6.8): “The entity implements controls to prevent or detect and act upon the introduction of unauthorized or malicious software.” 1
What the operator must do:
You must (1) reduce the chance that unauthorized or malicious software can be introduced into in-scope environments, (2) detect when it happens anyway, and (3) take defined action. Auditors will expect this to be implemented as a repeatable process with technical enforcement, monitoring, response workflows, and evidence that it operated during the audit window. 1
Plain-English interpretation (what CC6.8 is really asking)
CC6.8 asks a simple question: “Can random code enter your environment without you noticing or stopping it, and if it does, do you respond consistently?” Your program should show:
- Prevention: You control what gets installed/executed and how it gets there (approved software, controlled admin rights, controlled deployment paths).
- Detection: You can see suspicious binaries/scripts, malware behaviors, and policy violations.
- Action: Alerts lead to triage, containment, eradication, and lessons learned, not ignored notifications.
Treat this as a lifecycle control: policy → configuration → monitoring → tickets → closure → periodic review.
Who it applies to (entity and operational context)
Applies to: organizations undergoing a SOC 2 audit using the AICPA Trust Services Criteria, where the Security category Common Criteria are in scope. 1
Typical in-scope operational areas:
- Endpoints: employee laptops and privileged workstations that access production systems or sensitive data.
- Servers and cloud workloads: production instances, containers, managed compute, and administrative bastions.
- Software delivery: CI/CD runners, artifact repositories, image registries, package management.
- User-facing ingestion paths: email, browsers, file uploads, collaboration tools.
- Third-party software: agents, remote support tools, and any externally sourced binaries/scripts introduced into environments.
If your SOC 2 scope includes a subset of systems, CC6.8 applies to that subset. Auditors will still ask how you prevent “scope creep,” such as engineers using unmanaged devices to administer scoped systems.
What you actually need to do (step-by-step)
Below is a pragmatic implementation sequence aligned to what auditors test: design, operation, and effectiveness. 1
1) Define “authorized software” and the approval path
- Write a short standard: what counts as authorized (IT-managed installs, approved packages, signed code, corporate app catalog).
- Define who can approve exceptions (IT, Security, Engineering leadership) and where approvals are recorded (ticketing system).
- Clarify treatment of scripts (PowerShell, Bash, Python): allowed locations, signing requirements, and execution policies where feasible.
Operational tip: auditors do not need a perfect global software list, but they do need a defensible mechanism that prevents ad hoc installs in sensitive environments.
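As a sketch of how an “authorized software” standard can be made checkable, the snippet below classifies an endpoint’s installed-software report against an approved list and a recorded exception. The package names, exception ticket, and report format are illustrative assumptions, not part of the criterion:

```python
# Hypothetical check of an installed-software report against an approved list.
# Package names, exception tickets, and report format are illustrative.
APPROVED = {"chrome", "slack", "vscode", "zoom"}   # corporate app catalog
EXCEPTIONS = {"wireshark": "TICKET-1042"}          # approved exception with its ticket

def classify(installed):
    """Return (authorized, excepted, unauthorized) sets for one endpoint."""
    names = {name.lower() for name in installed}
    authorized = names & APPROVED
    excepted = {n for n in names if n in EXCEPTIONS}
    unauthorized = names - APPROVED - set(EXCEPTIONS)
    return authorized, excepted, unauthorized

auth, exc, unauth = classify(["Chrome", "Slack", "Wireshark", "utorrent"])
```

Anything landing in the `unauthorized` bucket is exactly the population your exception process and monitoring should account for.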
2) Put technical guardrails around software introduction
Choose controls that fit your architecture; document what you selected and where it applies.
Common patterns:
- Endpoint controls: EDR/AV, host firewall, local admin restriction, application allowlisting for high-risk roles, device management enforcement.
- Server/workload controls: golden images, hardened base AMIs, restricted package repositories, immutable infrastructure practices where possible.
- Email/web controls: attachment scanning, link filtering, browser download restrictions for managed devices.
- CI/CD controls: restricted build agents, dependency controls (approved registries), artifact signing/verification where feasible.
Tie each pattern to an in-scope system inventory so you can answer “which systems are covered and which aren’t.”
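The inventory-to-coverage mapping above is a set reconciliation at heart. A minimal sketch, assuming the inventory comes from a CMDB export and enrollment from the EDR console (hostnames are made up):

```python
# Sketch: reconcile an in-scope asset inventory against EDR agent enrollment.
# Hostnames are illustrative; in practice these come from a CMDB export
# and the EDR console or its API.
inventory = {"web-01", "web-02", "db-01", "bastion-01"}
edr_enrolled = {"web-01", "db-01", "bastion-01", "legacy-99"}

missing_agent = sorted(inventory - edr_enrolled)   # in scope, not covered
out_of_scope = sorted(edr_enrolled - inventory)    # enrolled but not in inventory
coverage_pct = 100 * len(inventory & edr_enrolled) / len(inventory)
```

Both difference lists are worth retaining: `missing_agent` is your remediation queue, and `out_of_scope` often reveals inventory drift.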
3) Implement monitoring and triage with clear decision points
Create a short runbook that states:
- What alerts you monitor for unauthorized/malicious software (EDR detections, allowlist violations, suspicious execution).
- Who reviews alerts and how often (SOC, IT, on-call security).
- Severity levels and time-to-action expectations (your internal targets; don’t invent regulatory timelines).
- Required outcomes: false positive closure rationale, containment steps, escalation to incident response, and eradication confirmation.
Minimum viable triage workflow:
- Alert generated (tool or log source).
- Analyst validates (malicious, suspicious, policy violation, false positive).
- Action taken (quarantine/isolate host, kill process, remove software, block hash/domain, reset credentials if needed).
- Ticket updated with evidence and closure notes.
- Root cause identified for recurring issues (optional but strong).
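The triage workflow above can be sketched as a small record with enforced decision points, so every alert ends in a documented terminal state. The field names and rules are assumptions for illustration, not a required schema:

```python
# Minimal triage-ticket sketch: closure requires a classification, and true
# positives require a documented response action. Field names are assumptions.
from dataclasses import dataclass, field

VALID_CLASSIFICATIONS = {"malicious", "suspicious", "policy_violation", "false_positive"}

@dataclass
class TriageTicket:
    alert_id: str
    system: str
    classification: str = ""
    actions: list = field(default_factory=list)
    closed: bool = False
    closure_note: str = ""

    def classify(self, verdict):
        if verdict not in VALID_CLASSIFICATIONS:
            raise ValueError(f"unknown classification: {verdict}")
        self.classification = verdict

    def close(self, note):
        if not self.classification:
            raise ValueError("classify before closing")
        if self.classification != "false_positive" and not self.actions:
            raise ValueError("document a response action before closing")
        self.closed, self.closure_note = True, note

t = TriageTicket("EDR-7781", "web-01")
t.classify("malicious")
t.actions.append("host isolated; binary quarantined")
t.close("eradication confirmed by rescan")
```

Encoding the rules this way (in a ticketing workflow, not necessarily in code) is what makes “ignored notifications” structurally impossible rather than a training issue.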
4) Integrate with change management and access management
CC6.8 fails in practice when admins can bypass controls casually.
- Require changes for software deployment into production (or into a controlled endpoint software catalog).
- Restrict who can install software on endpoints and servers (role-based admin, just-in-time elevation, break-glass with logging).
- For emergency installs, require post-facto review and documented approval.
This creates an audit trail that connects “software was introduced” to “it was approved and reviewed.”
5) Maintain an audit trail that an auditor can sample
Set evidence expectations upfront:
- Centralize logs (EDR console exports, SIEM searches, ticketing).
- Standardize ticket fields for malware/unauthorized software events (system, detection source, classification, actions, timestamps, approver if exception).
- Keep configuration baselines and policy settings snapshots (MDM profiles, EDR policy exports).
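One way to make the standardized ticket fields above auditable is a completeness check run before a ticket is accepted for sampling. The field list mirrors the bullet above and is an assumption, not a mandated schema:

```python
# Sketch: flag malware/unauthorized-software tickets missing required fields.
# The field list is an assumption mirroring the evidence bullets above.
REQUIRED_FIELDS = ["system", "detection_source", "classification",
                   "actions", "opened_at", "closed_at"]

def missing_fields(ticket: dict) -> list:
    """Return required fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not ticket.get(f)]

ticket = {
    "system": "web-01",
    "detection_source": "EDR",
    "classification": "malicious",
    "actions": ["quarantine"],
    "opened_at": "2024-03-01T10:02:00Z",
    "closed_at": "",
}
gaps = missing_fields(ticket)
```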
6) Conduct periodic assessments of effectiveness
At a minimum, run periodic checks that your controls still work:
- Review EDR/AV coverage for in-scope assets (missing agents, stale devices).
- Review allowlisting/installation exception tickets for trends.
- Validate that alert routing is functioning (no broken integrations).
- Perform a tabletop or retrospective on at least one detection-to-response workflow if you have incidents to learn from.
This maps directly to the expectation to “prevent or detect and act upon” and supports “test control effectiveness.” 1
Required evidence and artifacts to retain (audit-ready)
Auditors typically test both design and operating effectiveness. Prepare evidence in these buckets:
Policies and procedures (design evidence)
- Malware protection / endpoint security policy referencing unauthorized software handling
- Secure configuration standard for endpoints/servers (as relevant)
- Incident response procedures covering malware events
- Change management procedure for software deployments
- Exception process (who approves, how documented)
Configuration and system evidence (implementation evidence)
- EDR/AV policy settings exports (scan, quarantine, tamper protection)
- MDM configuration profiles showing installation controls/admin restrictions
- Allowlisting rules (if used) and scope of enforcement groups
- Email/web security policy configurations (as applicable)
- Asset inventory or CMDB extract for in-scope systems mapped to control coverage
Operating evidence (operating effectiveness)
- Sample of alerts with corresponding tickets and closure notes
- Evidence of response actions (isolation/quarantine logs, remediation steps)
- Exception/waiver tickets for unauthorized software with approvals and expiry dates
- Periodic review records (meeting notes, coverage reports, action items)
Common exam/audit questions and hangups
Expect these questions in a SOC 2 fieldwork walkthrough for CC6.8:
- “What prevents employees from installing unapproved software?” Auditors want a mix of policy plus enforcement (admin rights, MDM, allowlisting for sensitive roles).
- “Show me that malware alerts are reviewed and acted on.” Be ready with a clean sample set: alert → ticket → action → closure evidence.
- “How do you know all in-scope systems are covered?” This is a scope-to-tool coverage reconciliation: asset inventory vs EDR agent enrollment.
- “How do you handle exceptions?” Missing expirations and missing approvals are common findings.
- “How do you control software introduced through CI/CD and images?” If production is built from pipelines, auditors may focus there rather than on workstation installs.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Buying tools but not defining “act upon.” Fix: require tickets for high/medium alerts and document containment/eradication steps.
- Mistake: Partial coverage with no visibility into gaps. Fix: maintain a coverage report that reconciles in-scope assets to EDR/MDM enrollment and review it periodically.
- Mistake: Exceptions become permanent. Fix: require expirations and re-approval, and report on overdue exceptions.
- Mistake: Relying only on endpoint AV for server and cloud workloads. Fix: document workload-level controls (image governance, runtime protection, restricted package sources) where endpoints are not the main risk.
- Mistake: Evidence is scattered and hard to sample. Fix: build an “audit binder” folder structure by control: policy, configuration exports, alert samples, review records.
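An overdue-exception report is cheap to produce and closes the “exceptions become permanent” gap. A sketch, with hypothetical ticket IDs and dates:

```python
# Sketch: flag exception tickets past their expiry so they get re-approved
# or removed. Ticket IDs, software names, and dates are illustrative.
from datetime import date

exceptions = [
    {"ticket": "EXC-101", "software": "wireshark", "expires": date(2024, 6, 30)},
    {"ticket": "EXC-102", "software": "vnc",       "expires": date(2025, 1, 15)},
]

def overdue(exceptions, today):
    """Return tickets whose expiry date has passed."""
    return [e["ticket"] for e in exceptions if e["expires"] < today]

late = overdue(exceptions, date(2024, 9, 1))
```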
Enforcement context and risk implications
SOC 2 is an audit framework rather than a regulatory enforcement regime, so CC6.8 is enforced through audit opinion outcomes and customer expectations, not direct regulator penalties. 1 The practical risk is still real: malware or unauthorized tools can enable credential theft, data exfiltration, service disruption, and downstream customer notification obligations depending on contracts and applicable laws.
Practical 30/60/90-day execution plan
This plan assumes you need an audit-ready implementation path for TSC-CC6.8 and that you will tune depth based on your environment complexity.
First 30 days (get to “defined and deployed”)
- Confirm SOC 2 scope: systems, endpoints, production accounts, CI/CD components.
- Publish or refresh the malware/unauthorized software policy and exception workflow.
- Validate EDR/AV deployment status for in-scope assets; fix obvious gaps.
- Stand up a single triage queue (ticket type + routing) for malware/unauthorized software alerts.
- Create an evidence checklist and start collecting configuration exports.
Days 31–60 (make it auditable and consistent)
- Implement controlled software introduction for key paths (admin restriction, app catalog, production deployment approvals).
- Write and socialize the triage runbook with severity definitions and required ticket fields.
- Run a coverage reconciliation: asset inventory vs EDR/MDM enrollment; document results and remediation.
- Do a small internal control test: pick a sample of alerts and verify end-to-end evidence exists.
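The internal control test in the last bullet can be sketched as sampling alerts and verifying each has a complete evidence chain (alert → ticket → action → closure). The record layout and IDs are illustrative:

```python
# Sketch: sample alerts and verify end-to-end evidence chains exist.
# Record layout and IDs are illustrative assumptions.
import random

def chain_complete(alert: dict) -> bool:
    """An alert's chain is complete if ticket, action evidence, and closure exist."""
    return all([alert.get("ticket_id"), alert.get("action_evidence"),
                alert.get("closure_note")])

alerts = [
    {"id": "A-1", "ticket_id": "T-10", "action_evidence": "quarantine.log", "closure_note": "false positive"},
    {"id": "A-2", "ticket_id": "T-11", "action_evidence": "",               "closure_note": "closed"},
    {"id": "A-3", "ticket_id": "T-12", "action_evidence": "isolate.log",    "closure_note": "eradicated"},
]
random.seed(0)                      # deterministic sample for the walkthrough
sample = random.sample(alerts, k=2)
failures = [a["id"] for a in alerts if not chain_complete(a)]
```

Any ID in `failures` is exactly what an auditor would flag, so catching it internally first is the point of the exercise.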
Days 61–90 (prove operating effectiveness)
- Perform a periodic review cycle and retain minutes/outputs (coverage, exceptions, alert trends).
- Tune alerting to reduce noise but retain high-signal detections; document tuning approvals.
- Run a malware-response tabletop or retrospective using your runbook; log action items.
- Package your audit evidence by control and time period, ready for sampling.
Where Daydream fits naturally: Daydream helps you track control ownership, standardize evidence requests (alerts, tickets, configuration exports), and keep an audit-ready trail across security and IT systems without chasing screenshots at the last minute.
Frequently Asked Questions
Do we need application allowlisting to meet TSC-CC6.8?
No single tool is required by the criterion; auditors assess whether your controls prevent or detect and act on unauthorized or malicious software. Allowlisting is a strong prevention control for high-risk roles, but many organizations meet the requirement with restricted admin rights, managed software distribution, and strong EDR plus response evidence. 1
What counts as “unauthorized software” in practice?
Define it in your policy as software not approved through your IT/security process or not deployed through controlled mechanisms (MDM, approved packages, CI/CD). Auditors will expect your definition to map to how you technically enforce and monitor it. 1
How do we show we “act upon” detections?
Maintain tickets that document validation, containment/eradication actions, and closure rationale, linked to the originating alert. If you quarantine a host or remove software, keep the console log/export or incident notes that show the action occurred. 1
Our EDR generates lots of false positives. Will that fail us?
Noise does not fail you; lack of triage discipline does. Document tuning changes, keep evidence that alerts are reviewed, and show that true positives and policy violations lead to consistent actions. 1
Does CC6.8 apply to third-party managed services and SaaS tools?
If a third party manages in-scope infrastructure or can introduce software into your environment, include them in your control design (contractual requirements, access limits, monitoring of their actions). For pure SaaS where you cannot control underlying endpoints, focus on your administrative access, integrations, and monitoring within your scope statement. 1
What evidence is easiest for auditors to sample for this control?
A small set of complete “chains” works best: EDR alert → ticket → response action evidence → closure notes, plus a periodic coverage/exceptions review record. Pair that with the policy and the relevant configuration exports for the same period. 1
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
1. AICPA Trust Services Criteria (2017), TSC-CC6.8.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream