TSC-CC2.1 Guidance

TSC-CC2.1 requires you to obtain or generate relevant, quality information and use it to run, monitor, and prove your internal controls. To operationalize it fast, define what “quality” means for each control, build reliable data sources and review checks, and retain evidence that control owners actually reviewed and acted on the information [1].

Key takeaways:

  • TSC-CC2.1 is an information-quality control requirement, not a specific security tool requirement [1].
  • Auditors will test whether your control decisions are based on complete, accurate, timely, and authorized information, with evidence [1].
  • The fastest path is an “information inventory” mapped to your SOC 2 controls, with defined owners, validation checks, and audit-ready retention.

The TSC-CC2.1 requirement (COSO Principle 13) sits underneath almost every SOC 2 program problem that shows up as “we had the control, but we can’t prove it.” The criterion is simple: internal control only works if the information feeding it is relevant and high quality, and if the organization actually uses that information to operate and monitor controls [1].

For a Compliance Officer, CCO, or GRC lead, the practical job is to turn “quality information” into a defined standard your teams can execute: what reports are used for access reviews, incident tracking, change management approvals, third-party risk decisions, and vulnerability remediation; who owns them; what makes them complete and accurate; and how you prove review occurred. This requirement is also where many organizations get tripped up by over-reliance on screenshots, informal Slack approvals, or dashboards with no documented data lineage.

This page gives requirement-level implementation guidance you can assign to control owners immediately, plus the evidence set your auditor will expect for a Type I or Type II SOC 2 report [1].

Regulatory text

Excerpt (TSC-CC2.1 / COSO Principle 13): “The entity obtains or generates and uses relevant, quality information to support the functioning of internal control” [1].

Operator meaning: you must (1) identify the information each control depends on, (2) ensure that information is fit for purpose (relevant and quality), and (3) show that teams actually use it to perform control activities and monitoring [1]. If your control says “Security reviews alerts weekly,” the auditor will look for the alert source, a definition of what “weekly” review means, proof the data is complete/accurate, and evidence someone reviewed and responded.

Plain-English interpretation (what you’re really being asked to prove)

TSC-CC2.1 is satisfied when you can demonstrate all of the following across your scoped SOC 2 system:

  • Relevance: the inputs to a control match the control objective. Example: access review decisions are based on authoritative identity and entitlement data, not an ad-hoc export with unknown filters.
  • Quality: the inputs are sufficiently complete, accurate, timely, and authorized for the control’s purpose.
  • Use: control owners review the information, make decisions, and record actions (or document why no action was needed).
  • Traceability: you can reconstruct “what we knew at the time” during the audit period via retained artifacts and logs.

Who it applies to (entity and operational context)

Applies to: any organization undergoing a SOC 2 audit using the AICPA Trust Services Criteria, because TSC-CC2.1 is part of the Common Criteria [1].

Operational context where it shows up most:

  • ITGC-style controls: user access provisioning/deprovisioning, periodic access reviews, change approvals, backup/restore monitoring.
  • Security operations: alert triage, incident response tracking, vulnerability and patch management reporting.
  • Third-party risk management: intake, due diligence, and ongoing monitoring decisions based on vendor/third-party information (questionnaires, attestations, contract terms, SOC reports).
  • Governance: KPI/KRI dashboards presented to leadership, risk register updates, exception reporting.

If your organization uses multiple systems (IdP, ticketing, cloud consoles, EDR, SIEM, HRIS), TSC-CC2.1 effectively requires a defensible story for which system is the “system of record” for each control input.

What you actually need to do (step-by-step)

Use the following sequence to operationalize quickly and make it auditable.

Step 1: Build an “information inventory” tied to each SOC 2 control

Create a table for each control (or each control family) with:

  • Control ID/name (your internal naming is fine)
  • Control owner
  • Control inputs (information used): reports, dashboards, tickets, logs, exports
  • Source system(s) and whether it is authoritative
  • Output artifact (what gets produced: approvals, tickets closed, sign-offs)
  • Retention location

Practical example: For quarterly access reviews, list the entitlement export source (IdP/SSO), HR roster source (HRIS), and the review record (ticket, GRC workflow, or signed report).
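As an illustration, the inventory table above can be captured as structured data so gaps (no owner, no authoritative source) are easy to spot. The field names, control ID, and system names here are assumptions for the sketch, not a required schema:

```python
from dataclasses import dataclass, field

@dataclass
class ControlInput:
    name: str             # e.g., "IdP entitlement export"
    source_system: str    # e.g., "Okta" (hypothetical)
    authoritative: bool   # is this the system of record for this data?

@dataclass
class InventoryEntry:
    control_id: str
    owner: str
    inputs: list[ControlInput] = field(default_factory=list)
    output_artifact: str = ""
    retention_location: str = ""

# Example entry for the quarterly access review described above
access_review = InventoryEntry(
    control_id="AC-03",  # hypothetical internal naming
    owner="IT Security Manager",
    inputs=[
        ControlInput("IdP entitlement export", "Okta", True),
        ControlInput("HR roster", "Workday", True),
    ],
    output_artifact="Signed quarterly access review report",
    retention_location="GRC tool, controls/AC-03/",
)
```

The same shape works in a spreadsheet or GRC tool; the point is that every control lists its inputs, their sources, and whether each source is authoritative.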

Step 2: Define “quality” for each input (quality attributes checklist)

For each control input, document the minimum quality conditions the control owner relies on:

  • Completeness: does it include the full population (all users, all prod systems, all vendors in scope)?
  • Accuracy: are fields correct (roles, group membership, termination dates), and are filters documented?
  • Timeliness: is the report recent enough for the decision (example: access review based on current entitlements, not a stale export)?
  • Authorization and integrity: who can generate/modify the report, and how do you prevent tampering?
  • Consistency: are definitions stable (what counts as “critical alert,” “high severity vuln,” “in-scope third party”)?

Write this as a short procedure section under the control narrative or as a separate “Information Quality Standard” referenced by controls [1].
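One way to make the checklist above testable is to encode it so that any input with an undocumented attribute is flagged before the control runs. The input name, criteria text, and helper function below are illustrative assumptions, a sketch rather than a required format:

```python
# The five attributes match the checklist above
ATTRIBUTES = ("completeness", "accuracy", "timeliness", "authorization", "consistency")

# Hypothetical quality standard for one control input
quality_standard = {
    "idp_entitlement_export": {
        "completeness": "row count reconciles to HRIS active headcount",
        "accuracy": "role/group fields populated; export filters documented",
        "timeliness": "generated no more than 7 days before the review",
        "authorization": "run by a read-only service account; hash retained",
        "consistency": "'privileged access' defined per Access Control Standard",
    },
}

def undocumented_attributes(standard: dict) -> dict[str, list[str]]:
    """Return, per input, any quality attribute with no documented criterion."""
    return {
        name: [a for a in ATTRIBUTES if not criteria.get(a)]
        for name, criteria in standard.items()
    }
```

An auditor can then test both the standard itself and whether each control input has all five attributes defined.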

Step 3: Implement validation checks that match the risk

Add lightweight checks that your auditor can test:

  • Population reconciliation: compare a report count to a known baseline (HR headcount vs. active accounts; CMDB assets vs. scanner targets).
  • Change detection: track when report logic changes (saved queries, filters, scripts).
  • Access restrictions: limit who can run/admin the reporting source; log report generation where feasible.
  • Peer review: a second person reviews the output for high-impact controls (access reviews, production changes).

These checks do not need to be heavy, but they must be consistent and evidenced.
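The first check above, population reconciliation, can be as simple as a set comparison between the HRIS active roster and the IdP account list. This is a minimal sketch under the assumption that you can export both populations as username lists; the names and tolerance logic are illustrative:

```python
def reconcile_population(hris_active: set[str], idp_active: set[str]) -> dict:
    """Compare HRIS active employees to IdP active accounts."""
    orphaned = idp_active - hris_active   # accounts with no active employee
    missing = hris_active - idp_active    # employees with no account
    return {
        "hris_count": len(hris_active),
        "idp_count": len(idp_active),
        "orphaned_accounts": sorted(orphaned),
        "missing_accounts": sorted(missing),
        "reconciled": not orphaned,  # orphans are the access-risk finding
    }

# Hypothetical exports; "svc-backup" would need a documented rationale
result = reconcile_population(
    hris_active={"avery", "blake", "casey"},
    idp_active={"avery", "blake", "svc-backup"},
)
```

Retain the output with the review package: the counts and discrepancy lists are exactly what an auditor asks for when probing completeness.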

Step 4: Make “use of information” auditable (decision + action trail)

For each control execution, require the control owner to capture:

  • What input was reviewed (link to report, query, dashboard snapshot, ticket list)
  • What decisions were made (approved, rejected, exception granted)
  • What actions were taken (tickets created, access removed, patch scheduled)
  • Who approved and when

Avoid “Reviewed ✅” with no context. Auditors look for enough detail to show the review was meaningful.
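A lightweight way to enforce decision-grade evidence is to reject any control-run record missing a required field. The field names below are assumptions about your evidence schema, shown to illustrate the gate, not a mandated format:

```python
# Required fields mirror the four bullets above: input, decision, action, approver
REQUIRED_FIELDS = ("input_link", "reviewer", "review_date", "decision", "actions")

def validate_evidence(record: dict) -> list[str]:
    """Return the missing/empty required fields (empty list means it passes)."""
    return [f for f in REQUIRED_FIELDS if not record.get(f)]

# A "Reviewed ✅" record fails; a decision-grade record passes
weak = {"reviewer": "j.doe", "decision": "Reviewed"}
strong = {
    "input_link": "https://idp.example.com/reports/q3-entitlements",  # hypothetical
    "reviewer": "j.doe",
    "review_date": "2024-10-02",
    "decision": "Approved 142 of 145 entitlements; 3 flagged for revocation",
    "actions": ["TICKET-1041", "TICKET-1042", "TICKET-1043"],
}
```

Wiring this check into the ticket or GRC workflow (as a required-fields rule) prevents incomplete evidence from ever being marked complete.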

Step 5: Establish monitoring and periodic review of the information supply chain

Create a cadence where you review:

  • Are the right inputs still available and correct?
  • Did any source systems change (tool migration, new SIEM, HRIS changes)?
  • Are control owners following the evidence standard?
  • Are there recurring exceptions and what do they imply?

This maps directly to “establish monitoring and review process” and “conduct periodic assessments” recommended for this criterion [1].

Step 6: Centralize evidence retention and make it searchable

Pick a retention pattern and stick to it:

  • GRC tool record (preferred where possible)
  • Ticketing system with required fields and attachments
  • Controlled folder with naming conventions and access controls

If you use Daydream, configure each SOC 2 control with required evidence fields (input link, reviewer, date, decision summary) and enforce completion before marking the control run “done.” That prevents the common failure mode: controls executed in practice, but evidence scattered across systems and not reconstructable during audit.

Required evidence and artifacts to retain

Auditors typically want proof of (a) design and (b) operating effectiveness. Maintain:

Policy/procedure artifacts (design)

  • Information Quality Standard (or equivalent section inside each control procedure)
  • Control narratives showing which inputs are required and why [1]
  • Data lineage notes: source system, report/query definition, owner, and access restrictions

Operational evidence (effectiveness)

  • Dated access review packages: input exports, reviewer notes, remediation tickets, completion proof
  • Monitoring logs: alert review records, incident tickets, vulnerability management workflow outputs
  • Change management approvals that show the approver saw relevant information (risk/impact, test results)
  • Audit trail logs where available (ticket history, system logs showing who ran exports)

Testing/assessment evidence

  • Internal control testing results demonstrating controls operated as designed [1]
  • Documented exceptions, root cause notes, and corrective actions

Common exam/audit questions and hangups

Expect your auditor to probe these areas [1]:

  1. “What information do you rely on to perform this control, and where does it come from?”
    Hangup: teams can’t name a single authoritative source.

  2. “How do you know the population is complete?”
    Hangup: access review excludes service accounts, contractors, or certain environments without documented rationale.

  3. “How do you prevent or detect manipulation of reports?”
    Hangup: shared admin accounts or editable spreadsheets with no change tracking.

  4. “Show me evidence of review and follow-up.”
    Hangup: screenshots with no timestamp, or approvals in chat with no durable record.

  5. “Did the input or process change during the audit period?”
    Hangup: tooling changes mid-period with no documented transition controls.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Treating a dashboard screenshot as “quality information”
    Why it fails TSC-CC2.1: No lineage, no population definition, hard to reproduce.
    Fix: Save the query/report definition and export metadata; record counts and filters.

  • Mistake: No documented criteria for “quality”
    Why it fails TSC-CC2.1: The auditor can’t evaluate whether the information was fit for purpose.
    Fix: Add a one-page standard or control-level checklist.

  • Mistake: Evidence stored in personal drives
    Why it fails TSC-CC2.1: Integrity and availability issues.
    Fix: Move to a controlled repository with access control and retention.

  • Mistake: Control owners “review” but don’t record decisions
    Why it fails TSC-CC2.1: No proof the information supported internal control.
    Fix: Require decision notes plus remediation links in tickets/GRC workflow.

  • Mistake: Missing periodic review of inputs
    Why it fails TSC-CC2.1: Inputs drift as systems change.
    Fix: Add a quarterly (or defined cadence) review of control inputs and report logic.

Enforcement context and risk implications

SOC 2 is an audit framework rather than a regulatory enforcement regime, and no public enforcement cases are provided for this criterion in the supplied sources [1]. Operationally, the risk is commercial and contractual: failed SOC 2 reports, qualified opinions, delayed sales cycles, and increased customer scrutiny of your control environment.

30/60/90-day execution plan

This plan prioritizes audit-readiness and minimal disruption. Timeboxes are guidance for execution planning, not a cited requirement.

Days 1–30: Define and map information

  • Confirm SOC 2 scope (systems, boundaries, control set) with audit lead.
  • Build the information inventory for each in-scope control.
  • Identify authoritative sources for identity, asset inventory, ticketing, logging, and third-party inventory.
  • Draft the Information Quality Standard (or add “quality criteria” sections to each procedure).
  • Pick your evidence retention pattern (GRC tool, ticketing, controlled repository) and document it.

Days 31–60: Implement validation checks and evidence workflows

  • Add population reconciliation checks to high-risk controls (access reviews, vuln management, changes).
  • Lock down report/query permissions and enable logging where feasible.
  • Update control runbooks so each execution produces a consistent evidence packet.
  • Train control owners on “decision-grade evidence” (inputs + decision + action).
  • Pilot internal testing on a small sample to confirm artifacts are sufficient [1].

Days 61–90: Operationalize monitoring and prove sustainability

  • Run controls for at least one full cycle using the new evidence standard.
  • Perform periodic assessment of inputs: confirm reports still match populations and definitions.
  • Document exceptions and corrective actions; ensure re-performance evidence exists.
  • Conduct a mock audit walkthrough for two to three critical controls focused on TSC-CC2.1: “show inputs, show quality checks, show use, show retention.”
  • If using Daydream, configure control checklists and required evidence fields, then produce an auditor-ready evidence index by control.
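An evidence index like the one in the last step can be generated from retained records by grouping them per control and sorting chronologically, so period coverage gaps are visible at a glance. The record fields (control_id, review_date, retention_path) are assumptions about your retained evidence schema:

```python
from collections import defaultdict

def build_evidence_index(records: list[dict]) -> dict[str, list[dict]]:
    """Group retained evidence records by control ID, sorted by review date."""
    index: dict[str, list[dict]] = defaultdict(list)
    for rec in records:
        index[rec["control_id"]].append(
            {"date": rec["review_date"], "location": rec["retention_path"]}
        )
    for runs in index.values():
        runs.sort(key=lambda r: r["date"])  # chronological period coverage
    return dict(index)

# Hypothetical retained records across two controls
records = [
    {"control_id": "AC-03", "review_date": "2024-07-01", "retention_path": "grc://AC-03/q3"},
    {"control_id": "CM-01", "review_date": "2024-05-14", "retention_path": "grc://CM-01/may"},
    {"control_id": "AC-03", "review_date": "2024-04-02", "retention_path": "grc://AC-03/q2"},
]
index = build_evidence_index(records)
```

Handing the auditor one index per control, with dated links into the retention location, is usually faster than fielding ad-hoc evidence requests.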

Frequently Asked Questions

What counts as “quality information” for TSC-CC2.1?

Information is “quality” when it is complete, accurate, timely, and protected from unauthorized change for the decision the control owner is making [1]. Document those criteria per control input so an auditor can test them.

Do we need a standalone “Information Quality Policy” to pass SOC 2?

No single document is required by the criterion, but you do need documented controls that define required inputs and quality expectations [1]. Many teams implement this as a short standard referenced by control procedures.

We rely on spreadsheets for access reviews. Is that automatically a finding?

Spreadsheets can pass if you control integrity (versioning/access), define population completeness, and retain the underlying source exports and review evidence [1]. Most findings happen when spreadsheets become the system of record with no lineage.

How do we show we “used” the information, not just collected it?

Your evidence should show a reviewer, date, what they reviewed, what decision they made, and what actions followed (tickets, approvals, changes). A simple sign-off without decision context is usually weak support.

What is the fastest way to operationalize this across many control owners?

Standardize evidence packets and require the same fields every time: input link/export, population count, reviewer, decision notes, and remediation links. A GRC workflow (including Daydream) helps enforce consistency and keep artifacts searchable.

Does TSC-CC2.1 require formal control testing?

The criterion expects information to support internal control functioning, and SOC 2 examinations commonly include testing of control effectiveness through evidence review [1]. Plan internal testing so you find evidence gaps before your auditor does.

Footnotes

  [1] AICPA Trust Services Criteria 2017


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream