TSC-CC3.2 Guidance

TSC-CC3.2 requires you to run a repeatable risk identification and analysis process tied to business objectives, then prove it operated during the SOC 2 audit period. Operationalize it by defining risk assessment scope and cadence, maintaining a risk register with documented scoring, and tracking treatment decisions with ownership and evidence. 1

Key takeaways:

  • You need a documented method to identify and analyze risks to objectives, not an ad hoc list. 1
  • Auditors look for traceability: objectives → risks → analysis → response → evidence of periodic operation. 1
  • The fastest path is a lightweight enterprise/security risk assessment program with clear governance, artifacts, and change triggers. 1

TSC-CC3.2 (COSO Principle 7) is the SOC 2 “show your work” requirement for risk assessment: you must identify risks to achieving your objectives and analyze those risks in a structured way. A policy alone rarely passes; your auditor will test whether the process actually ran, whether it covered relevant objectives and systems in scope, and whether the outputs drove decisions. 1

For most service organizations, this requirement becomes the backbone connecting your SOC 2 scope, your control design, and your exception handling. If your risk assessment is stale, purely qualitative, or disconnected from operational changes (new systems, new third parties, major releases), you will struggle to justify why your controls are appropriate and complete. 1

This page focuses on fast operationalization. You’ll get a plain-English interpretation, who it applies to, a step-by-step implementation playbook, the artifacts to retain, common audit questions, and a practical execution plan. The goal is simple: make CC3.2 easy to evidence and hard to break.

Regulatory text

Excerpt (TSC-CC3.2): “COSO Principle 7: The entity identifies risks to the achievement of its objectives and analyzes risks.” 1

What the operator must do:
You must (1) define objectives relevant to the services and systems in SOC 2 scope, (2) systematically identify risks that could prevent achieving those objectives, and (3) analyze those risks in a consistent way so management can decide how to respond. Your auditor will expect documented controls, evidence the controls operated, and some form of review/testing showing the process is effective. 1

Plain-English interpretation (what CC3.2 “means” in practice)

  • You maintain an inventory of meaningful objectives (security, availability, confidentiality, processing integrity, privacy, plus business/operational objectives that affect delivery).
  • You identify threats and failure modes that could block those objectives (people, process, technology, and third-party risks).
  • You score or rate risks using defined criteria (likelihood/impact or similar), then decide on treatment (mitigate, transfer, accept, avoid).
  • You revisit the analysis when conditions change and at a defined cadence, and you keep an audit trail. 1
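The scoring step above can be sketched as a simple likelihood × impact calculation. The 1–5 scales and the rating bands below are illustrative assumptions, not part of the TSC text; use whatever criteria your procedure documents, as long as they are written down and applied consistently.

```python
# Illustrative 5x5 likelihood x impact scoring.
# The scales (1-5) and band cutoffs are assumptions; document your own.
def rate_risk(likelihood: int, impact: int) -> str:
    """Map a 1-5 likelihood and 1-5 impact to a qualitative rating."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A likely (4) event with severe impact (5) scores 20 and rates "high".
print(rate_risk(4, 5))  # high
```

The point is not the math; it is that the same inputs always produce the same rating, which is exactly what an auditor tests for consistency.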

Who it applies to

Entity types: Any organization undergoing a SOC 2 audit that includes the Common Criteria. 1

Operational context (where this shows up):

  • SaaS and cloud services (production infrastructure, CI/CD, identity, data pipelines)
  • Managed services (customer-admin access, ticketing, runbooks, subcontractors)
  • Fintech, healthtech, and other regulated-adjacent services (data confidentiality and third-party dependency risks)
  • Any environment with material reliance on third parties (cloud providers, payment processors, customer support platforms) because those dependencies create risks to objectives you must analyze. 1

Control owners (typical):

  • CISO / Head of Security or GRC lead (process owner)
  • Engineering and Infrastructure leaders (technical risk inputs, remediation commitments)
  • Product and Operations (availability/processing integrity risks)
  • Procurement / Third-Party Risk (third-party and subcontractor risk inputs)

What you actually need to do (step-by-step)

Step 1: Define “objectives” for the assessment (and map to SOC 2 scope)

  1. Write a short list of objectives tied to your in-scope services (example: protect customer data from unauthorized access; maintain service availability; ensure changes are authorized and tested).
  2. Map each objective to in-scope systems/components (apps, databases, cloud accounts, identity provider, logging).
  3. Confirm ownership for each objective (an exec owner and an operational owner).

Execution tip: keep objectives stable; allow risks and controls to evolve around them. Auditors like stable anchors.

Step 2: Establish a documented risk assessment procedure

Create a procedure that states:

  • Scope (which products/services/environments are included)
  • Inputs (incident trends, vuln scans, customer requirements, third-party reviews, architecture changes)
  • Method (how you identify risks, how you score them, what “high/medium/low” means)
  • Governance (who reviews/approves, escalation, how risk acceptance works)
  • Cadence and triggers (scheduled reviews plus event-based reassessments like major releases, new third parties, or significant incidents)
  • Recordkeeping (where artifacts live, retention expectations)

This is where many teams fail CC3.2: the “method” is implied but not written, so the process cannot be tested reliably. 1

Step 3: Build and maintain a risk register with consistent analysis

Minimum viable fields that make an auditor comfortable:

  • Risk statement (cause → event → impact)
  • Objective impacted
  • Asset/system in scope
  • Inherent risk rating (before controls)
  • Key existing controls (control references)
  • Residual risk rating (after controls)
  • Risk owner
  • Treatment decision (mitigate/accept/transfer/avoid) with rationale
  • Target dates and status for mitigation work
  • Approval evidence for acceptance (who, when)

Practical scoring: you don’t need fancy quant. You do need consistent criteria and documented rationale for ratings.
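The register fields above map naturally onto a simple record type. This is a minimal sketch assuming a Python-based tracking script; the field names, and the example risk, are illustrative, not prescribed by the criterion.

```python
from dataclasses import dataclass, field
from typing import Optional

# Minimal risk register entry covering the fields listed above.
# Field names and treatment values are illustrative assumptions.
@dataclass
class RiskEntry:
    risk_id: str
    statement: str                  # cause -> event -> impact
    objective: str                  # objective impacted
    asset: str                      # asset/system in scope
    inherent_rating: str            # before controls
    controls: list[str] = field(default_factory=list)  # control references
    residual_rating: str = "unrated"                   # after controls
    owner: str = ""
    treatment: str = "mitigate"     # mitigate / accept / transfer / avoid
    rationale: str = ""
    target_date: Optional[str] = None
    approval: Optional[str] = None  # who/when; required for acceptance

r = RiskEntry(
    risk_id="R-001",
    statement="Stale IAM keys -> credential compromise -> customer data exposure",
    objective="Protect customer data from unauthorized access",
    asset="Cloud production account",
    inherent_rating="high",
    controls=["CC6.1-key-rotation"],
    residual_rating="medium",
    owner="Head of Security",
    rationale="Rotation control reduces likelihood; monitoring detects misuse",
)
```

A spreadsheet or GRC tool works just as well; what matters is that every entry carries the same fields so gaps are visible.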

Step 4: Link risks to controls and to real work

For each high or medium risk (based on your criteria):

  • Identify the control(s) that mitigate it.
  • Confirm the control is in your SOC 2 control set and that it has operating evidence.
  • If a control is missing or weak, open remediation work items (tickets) with owners and due dates.

This is where CC3.2 supports your broader SOC 2 story: “We designed these controls because these risks exist.”
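The Step 4 gap check can be automated with a few lines. This sketch assumes dict-shaped register rows and a set of control IDs known to have operating evidence; both shapes are illustrative assumptions.

```python
# Every high/medium risk must map to at least one control that has
# operating evidence; anything uncovered becomes a remediation candidate.
def find_control_gaps(risks, controls_with_evidence):
    gaps = []
    for risk in risks:
        if risk["residual_rating"] not in ("high", "medium"):
            continue
        covered = [c for c in risk["controls"] if c in controls_with_evidence]
        if not covered:
            gaps.append(risk["risk_id"])
    return gaps

risks = [
    {"risk_id": "R-001", "residual_rating": "medium", "controls": ["CTL-7"]},
    {"risk_id": "R-002", "residual_rating": "high", "controls": ["CTL-99"]},
    {"risk_id": "R-003", "residual_rating": "low", "controls": []},
]
# Only CTL-7 has evidence, so R-002 is a gap; R-003 is low and skipped.
print(find_control_gaps(risks, {"CTL-7"}))  # ['R-002']
```

Each ID this returns should become a ticket with an owner and a due date.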

Step 5: Run management review and document decisions

Hold a formal review meeting (security steering, risk committee, or equivalent). Capture:

  • Attendees and roles
  • Risks reviewed and decisions made
  • Approved exceptions and risk acceptances
  • Required follow-up actions

Keep minutes concise, but explicit enough that an auditor can trace “risk discussed → decision → action.”

Step 6: Prove operation during the audit period (and keep an audit trail)

Auditors test operating effectiveness. Plan evidence as you execute:

  • Version-controlled risk procedure/policy with approval history
  • Risk register change history (created/updated dates)
  • Tickets for mitigation work
  • Meeting notes / approvals for acceptances
  • Periodic assessment outputs and sign-offs
  • Any testing you do of the process (spot checks, internal audit reviews) 1
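One cheap self-test before fieldwork is confirming the register actually changed during the audit period. The record shape here is an illustrative assumption; the idea is to surface stale entries before the auditor does.

```python
from datetime import date

# Operating-effectiveness spot check: which register entries show activity
# inside the audit period? Record shape is an illustrative assumption.
def updates_in_period(records, period_start: date, period_end: date):
    """Return risk ids whose last update falls inside the audit period."""
    return [
        r["risk_id"]
        for r in records
        if period_start <= r["updated"] <= period_end
    ]

records = [
    {"risk_id": "R-001", "updated": date(2024, 2, 10)},
    {"risk_id": "R-002", "updated": date(2023, 6, 1)},  # stale: before period
]
print(updates_in_period(records, date(2024, 1, 1), date(2024, 12, 31)))  # ['R-001']
```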

Step 7: Add monitoring and “change triggers”

Your risk assessment cannot be a once-a-year exercise if the environment changes frequently. Add triggers such as:

  • Significant architecture changes
  • New critical third party or subcontractor
  • Major incident or near-miss
  • Entry into a new regulated market or customer segment
  • Material changes to data flows or data types

Document the triggers and keep evidence when they fire (e.g., an “event-driven risk review” record).
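The trigger workflow can be sketched as a lookup plus an evidence record. The event keys and the record shape are illustrative assumptions; the trigger descriptions are the ones listed above.

```python
# Event-driven trigger evaluation (Step 7). Event keys and the record
# shape are illustrative assumptions; descriptions match the list above.
TRIGGERS = {
    "architecture_change": "Significant architecture changes",
    "new_third_party": "New critical third party or subcontractor",
    "major_incident": "Major incident or near-miss",
    "new_market": "Entry into a new regulated market or customer segment",
    "data_flow_change": "Material changes to data flows or data types",
}

def review_required(event_type: str) -> bool:
    """True when the event should fire an event-driven risk review."""
    return event_type in TRIGGERS

def record_review(event_type: str, opened_on: str) -> dict:
    """Produce the 'event-driven risk review' record to retain as evidence."""
    return {"trigger": TRIGGERS[event_type], "date": opened_on, "status": "opened"}

if review_required("new_third_party"):
    rec = record_review("new_third_party", "2024-03-01")
```

Whether this lives in code, a ticketing automation, or a checklist, keep the record that the trigger fired and a review happened.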

Required evidence and artifacts to retain

Use this as your SOC 2 evidence request list for TSC-CC3.2:

Artifact | What “good” looks like | Owner
Risk assessment policy/procedure | Scope, method, cadence/triggers, governance, approvals | GRC / Security
Risk register | Complete fields, consistent scoring, mapping to objectives/systems | GRC with Engineering inputs
Management review evidence | Agenda, minutes, attendance, decisions, approvals | CCO/GC/Exec sponsor or GRC
Risk treatment plans | Tickets/projects tied to risks, due dates, status | Engineering/IT/Security
Risk acceptance records | Signed/approved acceptances with rationale and expiry/review date | Risk owners + approver
Audit trail | Change history, version control, evidence repository structure | GRC

Common exam/audit questions and hangups

Auditors commonly probe:

  • “Show me the methodology.” Where are scoring definitions documented?
  • “How do you know the assessment is complete?” How do you ensure all in-scope systems/objectives are covered?
  • “Prove this happened during the period.” Can you show timestamps, meeting notes, and updates?
  • “What changed because of the risk assessment?” Where did you add/modify controls or create remediation work?
  • “Who can accept risk?” Is there a defined approval authority and evidence they used it? 1

Hangups that cause SOC 2 exceptions:

  • Risk register exists, but no evidence of review/approval.
  • “High/medium/low” assigned without criteria.
  • Risks listed, but no linkage to objectives, systems, or controls.
  • Mitigation work not tracked to completion, or ownership is unclear.

Frequent implementation mistakes (and how to avoid them)

  1. A risk list with no analysis.
    Fix: define scoring criteria and require rationale text for each rating.

  2. No boundary between inherent and residual risk.
    Fix: record both. Residual risk forces you to identify actual controls and judge whether they work.

  3. Risk acceptance is casual (Slack approval, no record).
    Fix: standardize a risk acceptance template and store it with the risk record.

  4. No link to third parties.
    Fix: include critical third parties in the risk identification step, especially where service delivery depends on them (cloud hosting, support tooling, payment rails).

  5. Evidence scattered across tools.
    Fix: define a single evidence folder structure and a naming convention. If you use Daydream, map risks to controls and attach evidence directly to each control and risk record to reduce audit scrambles.
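For fix #5, a naming convention is easy to enforce once it is expressed as a function. The layout below (`<period>/<control>/<date>_<artifact>`) is an illustrative assumption; any convention works as long as it is single and documented.

```python
from pathlib import PurePosixPath

# One evidence naming convention, applied everywhere.
# The <period>/<control>/<date>_<artifact> layout is an illustrative assumption.
def evidence_path(period: str, control_id: str, artifact: str, dated: str) -> PurePosixPath:
    """Build the canonical storage path for one evidence artifact."""
    safe = artifact.lower().replace(" ", "-")
    return PurePosixPath(period) / control_id / f"{dated}_{safe}"

p = evidence_path("FY2024", "CC3.2", "risk register export", "2024-03-31")
print(p)  # FY2024/CC3.2/2024-03-31_risk-register-export
```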

Enforcement context and risk implications

SOC 2 is an audit framework, not a regulatory enforcement regime, and there is no public enforcement history tied to TSC-CC3.2. 1
Your practical risk is commercial and contractual: a weak CC3.2 implementation can lead to SOC 2 findings, delayed reports, scope disputes, and customer trust friction during security reviews.

Practical 30/60/90-day execution plan

Days 1–30: Stand up the foundation

  • Confirm SOC 2 scope boundaries and list service objectives tied to scope.
  • Draft the risk assessment procedure (method, cadence/triggers, governance, recordkeeping).
  • Create a risk register template and populate an initial set of risks from known inputs (incidents, pen test findings, architecture diagrams, third-party list).
  • Decide and document who can approve risk acceptances.

Days 31–60: Run the first full cycle and produce evidence

  • Facilitate cross-functional risk identification workshops (engineering, ops, security, product).
  • Score inherent and residual risk; document rationales.
  • Map top risks to controls; open remediation tickets where control gaps exist.
  • Hold a management review meeting; capture minutes and approvals.

Days 61–90: Operationalize and make it repeatable

  • Implement event-driven triggers (new third party, major change, incident) and document the workflow.
  • Add monitoring/review: periodic check that risks are updated, treatments progress, acceptances are reviewed.
  • Perform a lightweight internal test: sample risks and confirm evidence exists for scoring, review, and treatment.
  • If you use Daydream, build an audit-ready package: procedure, risk register export, review minutes, and traceability from risks to controls and evidence.
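The lightweight internal test in the Days 61–90 list can be sketched as a sampling check. The required evidence keys and record shape are illustrative assumptions; adjust them to match your own register fields.

```python
import random

# Days 61-90 internal test: sample risks and confirm each carries the
# evidence the process requires. REQUIRED keys are illustrative assumptions.
REQUIRED = ("scoring_rationale", "review_minutes", "treatment_ticket")

def spot_check(risks, sample_size=5, seed=0):
    """Sample risks and report any missing evidence fields per risk id."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    sample = rng.sample(risks, min(sample_size, len(risks)))
    findings = {}
    for r in sample:
        missing = [k for k in REQUIRED if not r.get(k)]
        if missing:
            findings[r["risk_id"]] = missing
    return findings

risks = [
    {"risk_id": "R-001", "scoring_rationale": "x", "review_minutes": "m", "treatment_ticket": "T-1"},
    {"risk_id": "R-002", "scoring_rationale": "", "review_minutes": "m", "treatment_ticket": "T-2"},
]
print(spot_check(risks, sample_size=2))  # {'R-002': ['scoring_rationale']}
```

Retain the output itself as evidence: it doubles as proof that you test the process, which auditors read favorably.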

Frequently Asked Questions

What does an auditor need to see for “analyzes risks” under TSC-CC3.2?

A documented method for rating risks and evidence that you applied it to in-scope objectives and systems. The risk register should show consistent scoring criteria and a rationale for key ratings. 1

Do we need a formal enterprise risk management (ERM) program to meet CC3.2?

No. You need a repeatable risk identification and analysis process with governance and evidence. A lightweight risk register plus periodic management review can satisfy the criterion if it is complete and operates as written. 1

How do we handle third-party risks under CC3.2?

Include critical third parties as risk sources and document how their failure or compromise affects your objectives. Then map those risks to controls such as due diligence, contractual requirements, monitoring, and incident coordination evidence. 1

What’s the difference between CC3.2 and vulnerability management?

Vulnerability management identifies technical weaknesses; CC3.2 requires you to analyze risks to objectives, which includes technical, operational, and third-party risks. Vulnerability findings should feed into the risk assessment, but they do not replace it. 1

If we accept a risk, does that fail SOC 2?

Risk acceptance is allowed if you document the decision, approver, rationale, and any conditions or review triggers. Auditors typically focus on whether acceptance follows your governance and whether residual risk is understood and owned. 1

What evidence is usually missing when CC3.2 fails?

Missing methodology, missing proof of periodic review, and missing traceability from risks to controls and remediation work are common gaps. Centralize artifacts and keep dated approvals to make operating effectiveness easy to test. 1

Footnotes

  1. AICPA TSC 2017

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream