TSC-PI1.1 Guidance
TSC-PI1.1 requires you to define what “processing integrity” objectives matter for your system, then make sure the organization consistently produces, uses, and shares accurate, timely information that proves those objectives are being met. To operationalize it, establish documented reporting, monitoring, and communication routines tied to each objective, and retain evidence that the routines ran as designed.
Key takeaways:
- Tie “relevant, quality information” to specific processing integrity objectives, not generic KPIs.
- Build repeatable reporting and escalation paths, with ownership and timestamps, and keep an audit trail.
- Expect auditors to test both design (is it defined?) and operation (did it happen, reliably, over time?).
A SOC 2 Processing Integrity opinion rises or falls on whether you can demonstrate that your system processes data completely, accurately, on time, and as authorized, based on what you promise in your commitments and system description. TSC‑PI1.1 is the “information layer” criterion: it focuses on whether the organization obtains or generates the right information, uses it to run the process, and communicates it to the right people to meet processing integrity objectives 1.
For a CCO, GRC lead, or control owner, the fastest way to implement the TSC‑PI1.1 requirement is to treat it like a governed measurement and communication program: define your objectives, define the information that proves performance, set who reviews it and how often, and document what happens when indicators breach thresholds. Then retain clean evidence.
This page gives requirement-level steps, artifacts to retain, and audit-ready checks you can implement without turning PI into a research project. The goal is simple: if a reasonable auditor asks “How do you know processing integrity is working?”, you can answer with documented controls, records, and review trails.
Regulatory text
Requirement (excerpt): “The entity obtains or generates, uses, and communicates relevant, quality information regarding the objectives” 1.
What the operator must do
You must (1) identify the processing integrity objectives that apply to your scoped system, (2) define what information is “relevant” to those objectives, (3) ensure the information is high quality (accurate, complete, timely, consistent, traceable), and (4) ensure it is actively used and communicated to drive decisions and corrective action.
Auditors typically look for two things:
- Design: documented definitions, ownership, sources of truth, review routines, and escalation paths mapped to each objective.
- Operating effectiveness: evidence that the routines ran across the audit period (tickets, logs, dashboards, meeting minutes, sign-offs, alerts, postmortems).
Plain-English interpretation (what this really means)
TSC‑PI1.1 is your “prove it” control family for Processing Integrity. It is not asking you to invent perfect metrics. It is asking you to show that the organization does not operate blind.
If your objective is “customer files are processed completely and on time,” then relevant, quality information might include job completion logs, queue depth, exception reports, retry rates, reconciliations between input and output counts, and incident records. If those artifacts exist but nobody reviews them, or reviews are informal with no trace, you will struggle to pass.
Who it applies to
Entity scope: Any organization undergoing a SOC 2 audit that includes the Processing Integrity category and has processing integrity objectives for an in-scope system 1.
Operational context where it matters most:
- High-volume or automated processing (ETL pipelines, billing runs, payroll, payments, claims, provisioning)
- Event-driven systems (queues, webhooks, asynchronous jobs)
- Data transformations or multi-step workflows where completeness and accuracy can break
- Customer-facing SLAs tied to processing timeliness
What you actually need to do (step-by-step)
Step 1: Define processing integrity objectives in auditor-ready language
Create 3–8 objectives that reflect what the system must achieve for Processing Integrity. Keep them specific enough to measure. Examples:
- “Transactions submitted through the API are processed completely and accurately.”
- “Batch jobs complete within defined processing windows.”
- “Exceptions are detected, routed, and resolved through an approved workflow.”
Map each objective to the system components and teams responsible.
Step 2: Build an “information register” per objective
For each objective, document:
- Information needed: metrics, reports, logs, reconciliations, control totals, exception lists
- Source of truth: tool/system generating the data (SIEM, data warehouse, job scheduler, application logs)
- Quality checks: how you know the information is reliable (access controls, immutability/log integrity, validation rules, reconciliations)
- Owner: role accountable for review and response
- Communication path: who receives the information (Ops, Product, Compliance), where (ticketing system, Slack channel, email distribution list), and what triggers escalation
A simple table works well:
| Objective | Key information | Source | Review owner | Review evidence | Escalation |
|---|---|---|---|---|---|
| Timely batch completion | Job run status + duration + failures | Scheduler logs | SRE on-call | Daily ticket or dashboard sign-off | Incident + RCA if recurring |
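For teams that keep the register in version control, the same table can be expressed as a small machine-readable structure with a completeness check. This is an illustrative sketch only; the field names simply mirror the table columns above and are not a required schema.

```python
# Illustrative information-register entry; values mirror the example
# table row and are placeholders, not prescribed content.
register = [
    {
        "objective": "Timely batch completion",
        "key_information": "Job run status + duration + failures",
        "source": "Scheduler logs",
        "review_owner": "SRE on-call",
        "review_evidence": "Daily ticket or dashboard sign-off",
        "escalation": "Incident + RCA if recurring",
    },
]

def validate_register(entries):
    """Return (index, missing-fields) pairs for entries with blank fields."""
    required = {"objective", "key_information", "source",
                "review_owner", "review_evidence", "escalation"}
    problems = []
    for i, entry in enumerate(entries):
        filled = {k for k, v in entry.items() if str(v).strip()}
        missing = required - filled
        if missing:
            problems.append((i, sorted(missing)))
    return problems

print(validate_register(register))  # [] when every field is populated
```

A check like this can run in CI so a new objective cannot land without an owner, evidence type, and escalation path attached.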
Step 3: Implement monitoring + review routines that create evidence by default
Pick a small number of routines you can run consistently:
- Operational monitoring: alerts on failures, backlog growth, missed schedules
- Exception management: automatic ticket creation for failed jobs or out-of-tolerance reconciliations
- Periodic review: recurring review of trends, recurring exceptions, and unresolved items
- Management reporting: summary to leadership when thresholds breached or recurring errors appear
Build the evidence into the workflow (tickets and dashboards with timestamps, required fields, and assignees). Avoid manual screenshots as your primary evidence.
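The exception-to-ticket routine above can be sketched in a few lines. The `create_ticket` helper below is a hypothetical stand-in for your ticketing system’s API (Jira, ServiceNow, and similar tools expose equivalents); the point is that every exception automatically produces a time-stamped, assigned record auditors can sample.

```python
from datetime import datetime, timezone

def create_ticket(summary, assignee, severity):
    """Hypothetical stand-in for a real ticketing API: returns a record
    with the timestamp, assignee, and required fields evidence needs."""
    return {
        "summary": summary,
        "assignee": assignee,
        "severity": severity,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "status": "open",
    }

def route_job_results(job_runs, on_call):
    """Open one ticket per failed or out-of-window job run."""
    tickets = []
    for run in job_runs:
        failed = run["status"] == "failed"
        late = run["duration_min"] > run["window_min"]
        if failed or late:
            tickets.append(create_ticket(
                summary=f"Processing exception: {run['job']}",
                assignee=on_call,
                severity="high" if failed else "medium",
            ))
    return tickets
```

Because the ticket is created by the workflow itself, the timestamp, owner, and severity exist by default, with no screenshots required.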
Step 4: Define what “quality information” means in your environment
Auditors will ask what makes the information reliable. Document controls such as:
- Access restrictions for dashboards/reports that drive decisions
- Change management for queries and dashboards (who can edit metrics definitions)
- Reconciliations or validations (input vs output counts, totals, hash checks)
- Log retention and integrity controls so records are available for the audit period
If you can’t show integrity of the underlying data, the “information” part of PI1.1 becomes a weak link.
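As one concrete example of such a validation, a minimal reconciliation might compare input and output record counts and content hashes. This is a sketch under the assumption that records can be serialized as strings; real pipelines would reconcile keys, totals, or row-level hashes.

```python
import hashlib

def reconcile(input_records, output_records, tolerance=0):
    """Compare input vs output counts and content hashes.

    Returns a result dict an exception workflow can act on; the hash
    check catches silent corruption that a bare count match would miss.
    """
    count_gap = abs(len(input_records) - len(output_records))
    in_hash = hashlib.sha256("".join(sorted(input_records)).encode()).hexdigest()
    out_hash = hashlib.sha256("".join(sorted(output_records)).encode()).hexdigest()
    return {
        "counts_match": count_gap <= tolerance,
        "hashes_match": in_hash == out_hash,
        "count_gap": count_gap,
    }
```

An out-of-tolerance result from a check like this is exactly the kind of event that should open a ticket automatically under Step 3.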
Step 5: Communicate and escalate consistently
Communication is part of the requirement 1. Define:
- Who must know about failures (on-call, engineering manager, compliance contact if customer impact)
- What constitutes a notifiable event internally (processing delays, data corruption, repeated retries)
- How you record the communication (incident channel transcript, ticket comments, postmortem)
Step 6: Test the control and close gaps before the audit window
Run a mini internal test:
- Select a sample of processing exceptions and trace from alert → ticket → resolution → post-incident learning.
- Select a sample of “normal” processing days and show evidence of review (dashboard access logs, daily checklist tickets, meeting notes).
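The exception-trace step above can be run programmatically, assuming each sampled exception record carries fields like `alert_at`, `ticket_id`, `resolved_at`, and `resolution_note` (illustrative names, not a required schema):

```python
def trace_gaps(exceptions):
    """For each sampled exception, verify the alert -> ticket -> resolution
    chain is complete; return the IDs that would fail an audit sample."""
    required = ("alert_at", "ticket_id", "resolved_at", "resolution_note")
    return [e["id"] for e in exceptions
            if any(not e.get(field) for field in required)]
```

Running this over a month of exception records before the auditor does tells you exactly which items need backfilled tickets or closure notes.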
If you use Daydream to manage control checklists and evidence collection, set up recurring tasks per objective and attach artifacts directly to the control activity so you can answer sampling requests fast, without scraping multiple tools.
Required evidence and artifacts to retain
Store evidence in a way that supports sampling and ties back to objectives.
Design artifacts (static or versioned):
- Processing Integrity objectives statement for the in-scope system
- PI1.1 control narrative (what info is produced, used, communicated; by whom; how often)
- Information register (sources, owners, thresholds, escalation paths)
- Definitions for key metrics and reports (data dictionary or runbook entries)
- Policies/procedures covering monitoring, incident management, and change management as applicable 1
Operating artifacts (time-stamped records):
- Alert history (from monitoring tool) showing triggers and timestamps
- Tickets for exceptions, failed jobs, reconciliations out of tolerance, including resolution notes
- Job execution logs and retry records where applicable
- Reconciliation outputs and sign-offs
- Meeting notes or weekly review records for recurring exceptions and trend analysis
- Postmortems / RCAs for material incidents, with action items tracked to closure
- Evidence of periodic assessments/testing of the control 1
Common exam/audit questions and hangups
Auditors tend to probe the same pressure points:
- “What are your processing integrity objectives?” Hangup: objectives are implied, not written, or they don’t match the system description.
- “Show me the information you rely on to know processing is complete/accurate/timely.” Hangup: you have dashboards but no defined owner, review cadence, or thresholds.
- “Prove the reviews happened over the audit period.” Hangup: verbal reviews, ad-hoc Slack messages, or missing records during holidays/on-call rotations.
- “How do you know the reports are correct?” Hangup: metrics definitions change without change control, or data pipelines are not validated.
- “What happens when something breaks?” Hangup: incident handling exists, but it’s not tied back to processing integrity objectives or doesn’t capture customer impact and remediation.
Frequent implementation mistakes (and how to avoid them)
- Mistake: Treating PI1.1 as generic “metrics exist.” Fix: map each metric/report to a specific objective and the decision it supports.
- Mistake: No evidence of use. Fix: require a ticket comment, checklist completion, or dashboard sign-off that shows review and action.
- Mistake: Overbuilding metrics nobody trusts. Fix: start with a small set that covers completeness, accuracy, timeliness, and exceptions; add only when there’s a clear control gap.
- Mistake: Logging exists but retention doesn’t cover the audit period. Fix: align retention and access to audit needs early; confirm you can retrieve historical records.
- Mistake: Exception handling is informal. Fix: standardize severity levels, ownership, and closure criteria; make root-cause analysis required for recurring issues.
Enforcement context and risk implications
SOC 2 is an audit framework rather than a regulatory enforcement regime, so you should frame risk as assurance, customer trust, and contractual exposure, not fines. Weak PI1.1 implementation commonly leads to:
- Qualified opinions or control exceptions because you can’t prove operation
- Customer escalations during due diligence because you can’t explain how you detect and correct processing failures
- Operational risk: silent failures, missed batches, inaccurate billing, or incomplete data propagation
Practical 30/60/90-day execution plan
Days 1–30: Define scope, objectives, and information sources
- Confirm the in-scope system boundaries and processing flows.
- Write processing integrity objectives and get engineering sign-off.
- Build the information register (metrics, logs, reconciliations, owners).
- Draft PI1.1 control narrative and align it to current tools and practices.
- Identify evidence gaps (no tickets, no sign-offs, missing retention).
Days 31–60: Operationalize monitoring, review, and communication
- Implement/standardize alerting and exception ticket workflows.
- Create recurring review tasks (daily/weekly) with named owners and required evidence fields.
- Document metric definitions and apply change control for dashboards/queries.
- Train owners on what must be recorded for audit evidence.
Days 61–90: Test, harden, and make it audit-ready
- Run an internal sampling test across multiple weeks of records.
- Fix weak evidence patterns (missing approvals, inconsistent closure notes).
- Add periodic assessment/testing results to your evidence set 1.
- Hold a mock audit walkthrough: objective → information → review → exception → remediation.
Frequently Asked Questions
What counts as “relevant, quality information” for TSC‑PI1.1?
It’s information that directly shows whether your processing integrity objectives are being met and is reliable enough to drive action. In practice, that means exception reports, reconciliations, job logs, and incident/ticket records tied to completeness, accuracy, and timeliness 1.
Do we need formal policies, or are runbooks enough?
You need documented guidance that is stable, reviewable, and attributable to owners. Many teams meet this with a policy plus operational runbooks, as long as the documents clearly define the control activities and responsibilities 1.
If we have dashboards, is that sufficient evidence?
Dashboards help, but auditors usually want proof they were reviewed and used. Add sign-offs, tickets, or recurring review records that show who looked, when, what they found, and what they did about it.
How do we handle PI1.1 if we outsource key processing to a third party?
Treat the third party’s outputs as an information source and define how you validate them (reconciliations, completeness checks, SLA monitoring). Keep third-party reports, your review records, and escalation tickets when performance deviates.
What if our metric definitions change during the audit period?
Track changes and approvals so you can show continuity and explain differences. Without change control, an auditor may question whether the information remained “quality” and comparable over time.
What’s the fastest way to make PI1.1 audit-ready without adding heavy process?
Standardize exception-to-ticket workflows and recurring reviews that automatically generate timestamps and ownership. A tool like Daydream can help by assigning control tasks, collecting evidence in one place, and simplifying sampling responses.
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
1. AICPA, Trust Services Criteria (2017).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream