The entity obtains or generates, uses, and communicates relevant, quality information regarding the objectives
To meet the requirement that “the entity obtains or generates, uses, and communicates relevant, quality information regarding the objectives,” you must define what “quality information” means for your processing integrity objectives, ensure that information is produced or collected from controlled sources, and prove it is communicated to the right owners in time to act. Auditors will look for repeatable workflows plus retained evidence.
Key takeaways:
- Define objective-linked information needs (what, who, when, quality criteria) and assign owners.
- Control information quality at the source (systems of record, validations, reconciliations, and change control).
- Prove communication and use (reviews, dashboards, alerts, meeting minutes, tickets) with durable audit evidence.
This SOC 2 Processing Integrity criterion (TSC-PI1.1) is a practical requirement about information flow: you cannot meet processing objectives if the organization lacks trustworthy inputs, metrics, and reporting. In a SOC 2 exam, this control often becomes a “show me” exercise. The auditor wants to see that you (1) know which information matters to your processing integrity objectives, (2) generate or obtain it consistently, (3) check that it’s accurate and complete enough to make decisions, and (4) communicate it to the people who must respond.
Operationalizing TSC-PI1.1 is less about writing a policy and more about building an evidence-ready information lifecycle: defining quality criteria, identifying systems of record, implementing validation and reconciliation, and documenting how exceptions are escalated and tracked to closure. Done well, it reduces both control risk (audit findings) and operational risk (silent failures, incorrect outputs, missed SLAs, and customer-impacting incidents). This page gives you a requirement-level implementation approach you can run as a CCO or GRC lead without turning it into a multi-quarter program.
Regulatory text
Requirement (SOC 2, Processing Integrity): “The entity obtains or generates, uses, and communicates relevant, quality information regarding the objectives.” 1
What the operator must do:
You must (a) identify the information needed to achieve and demonstrate your processing integrity objectives, (b) ensure that information is sufficiently accurate, complete, timely, and authorized for its purpose, and (c) ensure it is delivered to the right roles (process owners, control owners, and decision-makers) with a mechanism to act on exceptions. The audit burden is proof: you need artifacts that show the information exists, is reviewed/used, and drives follow-up.
Plain-English interpretation
If your objective is “process customer transactions correctly and on time,” you need reliable information that tells you whether transactions were processed correctly and on time. That includes operational metrics, exception logs, reconciliations, job completion status, incident trends, and customer-impact indicators. “Quality” means the data is fit for decision-making: it comes from authoritative sources, is protected from tampering, is updated when needed, and has checks that catch missing or incorrect values.
“Uses and communicates” is where teams fail. It’s not enough to generate dashboards. Someone must review them, interpret exceptions, create follow-up work, and escalate material issues. Auditors expect a closed loop: information → review → action → evidence.
Who it applies to (entity and operational context)
Applies to: Service organizations pursuing a SOC 2 report that includes the Processing Integrity category. 1
Operational scope (typical):
- Product and engineering teams that run processing pipelines (batch jobs, APIs, ETL/ELT, payments, provisioning, billing).
- Operations/SRE teams who monitor processing health and reliability.
- Support teams who receive customer-reported processing errors and route them for remediation.
- Compliance/GRC teams who define control expectations and collect evidence.
- Data/analytics teams if they own operational reporting used for control performance.
Where it shows up in practice:
- System-of-record selection (what’s authoritative for processing status and outcomes).
- Monitoring and alerting design for processing failures or anomalies.
- Reconciliations between inputs and outputs.
- Exception management and issue tracking (tickets, problem management).
- Management reporting to show processing objectives are being met.
What you actually need to do (step-by-step)
Step 1: Define your processing integrity objectives and map “required information”
Create a short list of processing objectives that are in-scope for SOC 2 (for example: completeness, accuracy, timeliness, validity/authorization of processing). For each objective, define:
- Decisions to be made: What actions should be taken when the objective is at risk?
- Information required: Metrics, logs, reconciliations, and reports needed to make those decisions.
- Quality criteria: Accuracy, completeness, timeliness, access restrictions, retention needs.
- Owners and audiences: Who produces it, who reviews it, and who receives exceptions.
Artifact to produce: “Processing Integrity Information Register” (a table works best).
Minimum table fields (recommended):
- Objective
- Key risk / failure mode
- Information item (metric/report/log/recon)
- System of record
- Frequency/trigger
- Reviewer + backup
- Acceptance criteria / thresholds (where applicable)
- Escalation path
- Evidence produced (screenshot/export/link + retention location)
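If you want the register to be machine-readable rather than a spreadsheet, one option is a structured record per information item. This is a minimal sketch; the field names mirror the table fields above and the example values are illustrative, not a required schema.

```python
from dataclasses import dataclass

@dataclass
class RegisterEntry:
    """One row of the Processing Integrity Information Register (illustrative schema)."""
    objective: str
    failure_mode: str
    information_item: str
    system_of_record: str
    frequency: str
    reviewer: str
    backup_reviewer: str
    acceptance_criteria: str
    escalation_path: str
    evidence_location: str

# Example entry: all values are hypothetical.
entry = RegisterEntry(
    objective="Timeliness: nightly billing batch completes by 06:00 UTC",
    failure_mode="Batch overruns or silently fails, delaying invoices",
    information_item="Batch completion report (job status + end time)",
    system_of_record="Job scheduler",
    frequency="Daily, after batch window",
    reviewer="Billing ops lead",
    backup_reviewer="SRE on-call",
    acceptance_criteria="Status = success AND end time <= 06:00 UTC",
    escalation_path="Open incident ticket; page on-call if customer-impacting",
    evidence_location="Evidence repo / PI1.1 / billing-batch / YYYY-MM",
)
```

A typed record like this makes the register easy to lint (no entry without a backup reviewer, for instance) and to export for the auditor.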
Step 2: Establish systems of record and data provenance
For each required information item, document the authoritative source and how it is generated:
- Monitoring system (for job success/failure, latency, queue depth)
- Application logs (for processing errors)
- Data warehouse reports (for completeness checks)
- Ticketing system (for customer-impact exceptions and remediation)
- CI/CD or change management records (if processing integrity depends on controlled changes)
Control intent: An auditor should be able to trace a reported metric back to a controlled source and understand how it is protected from unauthorized changes.
Step 3: Implement information quality checks (prevent and detect)
Choose checks that fit your processing architecture. Common patterns:
- Input validation: Reject malformed or unauthorized requests; log failures with reason codes.
- Completeness controls: Reconcile record counts between ingestion and output; flag missing batches.
- Accuracy controls: Hash totals, field-level validation, sampling verification, or dual-run comparisons after major changes.
- Timeliness controls: Track processing latency against internal targets; alert on breaches.
- Change integrity: Ensure metric definitions and queries are version-controlled and reviewed before changes.
Evidence goal: Show that the information itself is trustworthy, not just that it exists.
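The completeness and accuracy patterns above can be sketched in a few lines. This is an illustrative reconciliation over in-memory records; the `id` and `amount` field names are assumptions, and a real implementation would read from your ingestion and output stores and log the result as evidence.

```python
import hashlib

def reconcile(input_records, output_records, amount_key="amount"):
    """Completeness and accuracy checks between ingestion and output.

    Returns a dict of exceptions; an empty dict means the run reconciles.
    Field names ("id", "amount") are illustrative.
    """
    exceptions = {}

    # Completeness: record counts must match end to end.
    if len(input_records) != len(output_records):
        exceptions["count_mismatch"] = (len(input_records), len(output_records))

    # Accuracy: control totals on a key numeric field must agree.
    in_total = sum(r[amount_key] for r in input_records)
    out_total = sum(r[amount_key] for r in output_records)
    if in_total != out_total:
        exceptions["control_total_mismatch"] = (in_total, out_total)

    # Hash total over the ID set catches substituted or dropped records
    # that happen to leave counts and totals unchanged.
    def id_hash(records):
        ids = sorted(str(r["id"]) for r in records)
        return hashlib.sha256("|".join(ids).encode()).hexdigest()

    if id_hash(input_records) != id_hash(output_records):
        exceptions["id_set_mismatch"] = True

    return exceptions

inp = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
out = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
assert reconcile(inp, out) == {}  # clean run: no exceptions
```

Persisting the returned exceptions dict (with a timestamp and run ID) gives you exactly the validation output an auditor will ask to see.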
Step 4: Prove “use and communication” with a review-and-action loop
Design a lightweight operating rhythm:
- Daily/weekly operational review: Owners review dashboards/alerts and document exceptions.
- Exception management: Create tickets for exceptions; assign owners; track to closure.
- Escalation: Define what triggers escalation to incident management, leadership, or customer communications.
- Management reporting: Summarize trends and material exceptions in recurring ops/compliance reviews.
What auditors typically accept as proof:
- Dated screenshots or exports of dashboards with reviewer annotations.
- Alert notifications with on-call acknowledgments and linked incident/ticket.
- Meeting minutes showing review of processing metrics and decisions made.
- Ticket history showing exceptions identified from monitoring and resolved.
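The closed loop from monitoring to ticket can be sketched as a pure function that turns a run record into exception tickets. The run fields, threshold, and ticket shape here are all assumptions; in practice the payload would be sent to your tracker's API (Jira, Linear, or similar) so the ticket is the durable evidence.

```python
from datetime import datetime, timezone

LATENCY_TARGET_MIN = 30  # illustrative internal target, in minutes

def evaluate_run(run):
    """Turn a processing-run record into zero or more exception ticket payloads.

    `run` is a dict like {"job": ..., "status": ..., "latency_min": ...};
    both the input shape and the ticket shape are hypothetical.
    """
    now = datetime.now(timezone.utc).isoformat()
    tickets = []
    if run["status"] != "success":
        tickets.append({
            "title": f"Processing failure: {run['job']}",
            "severity": "high",
            "source": "monitoring",  # ties the ticket back to the control
            "raised_at": now,
        })
    if run.get("latency_min", 0) > LATENCY_TARGET_MIN:
        tickets.append({
            "title": f"Latency breach: {run['job']} took {run['latency_min']} min",
            "severity": "medium",
            "source": "monitoring",
            "raised_at": now,
        })
    return tickets
```

Tagging each ticket with its `source` is what lets you later answer “show me exceptions identified from monitoring and resolved” without manual correlation.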
Step 5: Document the control design and retain operating evidence
Write a control narrative that matches how the business really runs:
- What information is produced/collected
- How quality is assured (checks, reconciliations, access controls)
- Who reviews and how often (or event-triggered)
- What happens when issues are found (ticketing/escalation)
- What evidence is retained and where
This aligns with the recommended control approach: document control design and retain operating evidence for this requirement. 1
Practical note: Daydream can help by standardizing the evidence request list, mapping each objective to specific artifacts, and keeping review proof and exceptions in one place so evidence collection does not depend on heroics.
Required evidence and artifacts to retain
Retain evidence that covers the full chain: obtain/generate → quality → communicate → use.
Core artifacts (recommended):
- Processing Integrity Information Register (objective-to-information mapping)
- Data/source documentation (systems of record, metric definitions, query logic, log sources)
- Access control evidence for reporting systems (role membership, least-privilege rationale where applicable)
- Reconciliations / validation outputs (reports, scripts output, run logs)
- Monitoring and alerting configuration (alert rules, routing, on-call schedules where applicable)
- Review evidence (screenshots/exports with dates, meeting minutes, sign-offs)
- Exceptions and remediation trail (tickets/incidents with timestamps, root cause notes, closure evidence)
- Change records for metric definitions/queries and monitoring rules (PRs, approvals)
Retention approach: Store in a dedicated audit evidence repository with consistent naming (control name + date + system). Make it searchable by period.
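A naming convention is easiest to enforce when it is generated, not typed. The helper below is a sketch of the “control name + date + system” pattern; the exact separator and casing choices are assumptions, and the point is only that every artifact is searchable by control and period without opening it.

```python
from datetime import date

def evidence_name(control, system, period=None, ext="pdf"):
    """Build a consistent evidence filename: control + period + system.

    Convention (lowercase, hyphenated, underscore-separated parts) is
    illustrative; pick one shape and generate it everywhere.
    """
    period = period or date.today().strftime("%Y-%m")
    safe = lambda s: s.lower().replace(" ", "-")
    return f"{safe(control)}_{period}_{safe(system)}.{ext}"

# With an explicit period the name is deterministic:
# evidence_name("PI1.1 batch review", "Job Scheduler", period="2025-01")
# -> "pi1.1-batch-review_2025-01_job-scheduler.pdf"
```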
Common exam/audit questions and hangups
Questions you should expect:
- “Show the objectives for processing integrity and the information you rely on to manage them.”
- “How do you know these dashboards/reports are accurate and complete?”
- “Who reviews this information, and how do you prove the review occurred?”
- “What happens when thresholds are breached? Show an example end-to-end.”
- “How do you control changes to metric definitions and alert rules?”
- “If a key system is down, how do you obtain the information needed to manage processing?”
Hangups that cause findings:
- Dashboards exist, but no documented reviewer or proof of review.
- Reports are manually edited before sharing, with no control over the edits.
- Exception tickets exist, but they are not clearly tied back to monitoring/reconciliations.
- Metric definitions drift over time without change control, creating inconsistent reporting across periods.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating “quality information” as a generic statement.
  Avoidance: Define quality criteria per information item (source, refresh cadence, validation, access).
- Mistake: Relying on a single person’s knowledge.
  Avoidance: Assign a primary and backup reviewer; document the review workflow.
- Mistake: Building metrics that cannot be reproduced.
  Avoidance: Version-control queries and metric logic; document data lineage at a high level.
- Mistake: Evidence that proves monitoring exists but not that it is used.
  Avoidance: Keep review logs, meeting notes, and tickets that show response and closure.
- Mistake: Over-collecting evidence.
  Avoidance: Identify the minimum artifacts that demonstrate operation each period. Standardize the monthly/quarterly evidence packet.
Enforcement context and risk implications
SOC 2 is an attestation framework, so the direct consequence is typically an adverse observation, a qualified opinion, or a report that raises uncomfortable customer questions. The operational risk is broader: weak information quality and weak communication loops increase the chance that processing errors persist undetected, customer outputs become incorrect, and incident response starts late. For regulated customers, this becomes a trust and contract issue fast because their due diligence depends on your evidence.
Practical 30/60/90-day execution plan
First 30 days: Define and map
- Identify in-scope processing integrity objectives and top failure modes.
- Build the Processing Integrity Information Register for each core workflow.
- Confirm systems of record and owners for each information item.
- Draft the control narrative for TSC-PI1.1 aligned to how teams operate today.
- Decide where evidence will be stored and who is responsible for monthly collection.
Days 31–60: Implement quality checks and review cadence
- Add or formalize reconciliations/validations for the highest-risk workflows.
- Standardize dashboard/report definitions; document logic and refresh cadence.
- Implement a review schedule (ops review, incident review) and define what gets recorded.
- Train reviewers on what constitutes an exception and how to open/route tickets.
- Run a “tabletop” audit: pick one exception and prove you can show evidence end-to-end.
Days 61–90: Prove operating effectiveness and tighten evidence
- Collect at least one full cycle of operating evidence (reviews, alerts, exceptions, remediations).
- Tune alert thresholds and reduce noise so reviewers take action consistently.
- Add change control expectations for monitoring rules and metric definitions.
- Perform a gap check against the Information Register; close missing evidence areas.
- If you use Daydream, configure recurring evidence requests and automated reminders so evidence capture becomes routine.
Frequently Asked Questions
What counts as “quality information” for TSC-PI1.1?
Information is “quality” when it is fit for decision-making about your processing integrity objectives: it is accurate enough, complete enough, timely enough, and protected from unauthorized changes. You prove this through controlled sources, validations/reconciliations, and consistent definitions.
Do we need a formal policy to satisfy this requirement?
A policy helps but does not carry the audit by itself. Auditors usually care more about a documented control description, assigned owners, and operating evidence that shows information is generated, reviewed, and acted on.
Can we satisfy “communicates” with a dashboard alone?
Only if you can prove the dashboard reaches the right people and they review it. Add review evidence such as meeting minutes, sign-offs, alert acknowledgments, or tickets created from exceptions.
We have automated alerts. Do we still need periodic reviews?
Often yes, because alerts cover known failure modes, while periodic reviews catch trends, silent degradations, and definition drift. If you rely solely on alerts, be prepared to show complete coverage and evidence of response for the full period.
How do we handle manual reports created in spreadsheets?
Treat them as high risk unless tightly controlled. Lock down the source data, document the transformation steps, restrict edit access, and retain the exact version shared along with reviewer sign-off and reconciliation to the system of record.
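One lightweight way to retain “the exact version shared” is to fingerprint the file at the moment of sign-off. This sketch computes a SHA-256 digest to store alongside the reviewer's approval; the function name and returned shape are illustrative.

```python
import hashlib
from pathlib import Path

def pin_version(path):
    """Record a SHA-256 fingerprint of the exact file version shared.

    Storing this hash with the reviewer sign-off lets you later prove
    the reviewed copy matches the retained copy byte for byte.
    """
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {"file": str(path), "sha256": digest}
```

Re-hashing the retained copy at audit time and comparing digests demonstrates the report was not edited after review.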
What’s the minimum evidence packet to keep each period?
Keep one or more samples that show the lifecycle: the report/metric output, proof of review, and at least one example of an exception with ticket/incident closure. Add configuration evidence for the monitoring/reconciliation that produced the information.
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream