TSC-PI1.3 Guidance

TSC-PI1.3 requires you to implement and operate documented policies and procedures that govern how your system processes data so processing is complete, accurate, timely, and authorized. To operationalize it fast, define “system processing” in scope, map each processing step to a control, run monitoring and reviews, and retain evidence that the controls ran as designed [1].

Key takeaways:

  • Write and approve a system processing policy/procedure set that matches your actual workflows, not a generic template [1].
  • Prove operation with audit-ready artifacts: logs, tickets, reconciliations, monitoring results, and review sign-offs [1].
  • Test effectiveness with periodic assessments and fix gaps with tracked remediation [1].

TSC-PI1.3 is a Processing Integrity criterion in the AICPA Trust Services Criteria that expects your organization to run system processing under defined, repeatable rules [1]. Auditors read this as: you can’t rely on tribal knowledge for how data enters, transforms, and exits your system. You need documented policies and procedures, implemented in daily operations, with evidence that they consistently produce intended results.

For a CCO, GRC lead, or compliance owner, the fastest path is to treat “system processing” like a production line: inputs, transformations, outputs, and exception handling. Then control each step. That means defining who can trigger processing, what validations exist, how failures are detected, who investigates, and how you prevent recurrence. It also means proving it. Policies that are not adopted into operational workflows fail in audits because you cannot show they actually governed processing during the audit period.

This page gives requirement-level implementation guidance you can assign to engineering, operations, and product owners, with the artifacts auditors routinely request and the hangups that slow SOC 2 examinations.

Criterion text

Excerpt (TSC-PI1.3): “The entity implements policies and procedures over system processing” [1].

Operator interpretation: You must (1) document how processing should work, (2) implement those expectations as controls embedded in systems and operations, and (3) retain evidence that controls operated and were reviewed [1]. For most organizations, this shows up as a combination of SDLC controls, configuration standards, job scheduling/runbooks, data validation rules, reconciliation checks, monitoring/alerting, incident handling, and post-incident corrective actions.

Plain-English interpretation (what the auditor is really testing)

Auditors typically evaluate whether:

  • Processing is authorized (only approved jobs, workflows, and changes run).
  • Processing is complete (no missing records or dropped events without detection).
  • Processing is accurate (validations, calculations, and transformations behave as intended).
  • Processing is timely (work is executed within defined windows; delays are detected and managed).
  • Exceptions are handled (failed jobs, retries, partial runs, and manual interventions are controlled and logged).

TSC-PI1.3 is satisfied when your written procedures match real operations and you can demonstrate, with evidence, that they ran during the audit period [1].

Who it applies to

Entity types: Organizations undergoing a SOC 2 audit that include the Processing Integrity category in scope [1].

Operational context (in scope examples):

  • SaaS platforms processing customer transactions, messages, or configuration changes
  • Data pipelines (ETL/ELT), batch processing, asynchronous queues, and scheduled jobs
  • Billing, invoicing, payment processing, and usage-based metering
  • Provisioning/deprovisioning workflows that “process” entitlements or access states
  • AI/ML pipelines where inputs are transformed into outputs customers rely on

Common scoping mistake: Treating “processing” as only batch jobs. In SOC 2, “system processing” can include real-time API flows, background workers, and human-in-the-loop steps if they affect customer outcomes [1].

What you actually need to do (step-by-step)

Use this as an execution checklist for TSC-PI1.3.

Step 1: Define “system processing” for your SOC 2 scope

  1. List in-scope products/services and the “system” boundary (apps, infrastructure, third parties, manual steps).
  2. Identify critical processing flows that create customer-impacting outputs (reports, invoices, messages, entitlements, exported files).
  3. For each flow, document: inputs → transformations → outputs → storage → downstream dependencies.

Deliverable: Processing inventory (table) aligned to audit scope [1].
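A processing inventory can live in a spreadsheet, but a versioned data file works just as well. A hypothetical row (the flow names and systems below are invented for illustration, not prescribed by the criterion) might look like:

```python
# Hypothetical processing-inventory row: one entry per customer-impacting flow,
# capturing inputs -> transformations -> outputs -> storage -> downstream dependencies.
inventory = [
    {
        "flow": "usage metering",
        "inputs": ["usage event stream"],
        "transformations": ["aggregate per account", "apply rate card"],
        "outputs": ["monthly invoice lines"],
        "storage": "billing database",
        "downstream": ["invoicing", "revenue reporting"],
    },
]
```

Keeping the inventory in version control gives you change history for free, which is itself useful audit evidence.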

Step 2: Write policies and procedures that match how you operate

Create a small set of documents people will follow:

  • System Processing Policy (requirements-level): authorization, validation, monitoring, exception handling, logging, change control expectations.
  • Procedures/Runbooks (how-to): job run procedures, on-call steps, reprocessing steps, reconciliation steps, manual override controls.

Keep it practical. Auditors penalize “paper controls” that don’t match reality because operations cannot produce consistent evidence [1].

Minimum content to include:

  • What “complete and accurate processing” means for your service
  • Validation rules (format checks, range checks, schema validation, idempotency)
  • Approval requirements for new/changed processing logic (tie to change management)
  • Monitoring requirements and alert thresholds (even if thresholds are internal)
  • Procedures for failures, retries, backfills, and manual interventions
  • Required logging and retention expectations for audit trails
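To make the validation bullet concrete, here is a minimal sketch of format, range, and idempotency checks. The field names, schema, and bounds are hypothetical; the criterion does not prescribe any particular implementation:

```python
import hashlib

# Hypothetical schema for one processing record: required fields and their types.
SCHEMA = {"invoice_id": str, "amount_cents": int, "currency": str}
AMOUNT_RANGE = (0, 10_000_000)  # illustrative range check bounds

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            errors.append(f"wrong type for {field}")
    amount = record.get("amount_cents")
    if isinstance(amount, int) and not AMOUNT_RANGE[0] <= amount <= AMOUNT_RANGE[1]:
        errors.append("amount_cents out of range")
    return errors

def idempotency_key(record: dict) -> str:
    """Derive a stable key so re-submitted records can be detected and de-duplicated."""
    raw = f"{record.get('invoice_id')}:{record.get('amount_cents')}"
    return hashlib.sha256(raw.encode()).hexdigest()
```

Checks like these double as evidence generators: the rejection log for failed validations is exactly the kind of artifact an auditor samples.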

Step 3: Map each processing risk to a control

Build a simple control matrix:

| Processing risk | Control activity | Owner | Tool/system | Evidence |
| --- | --- | --- | --- | --- |
| Dropped messages/events | Queue monitoring + dead-letter handling | SRE | Queue/monitoring tool | Alert history, DLQ metrics |
| Wrong calculations | Automated tests + peer review | Eng | CI system | Test results, PR approvals |
| Unauthorized changes | Change approval + restricted deploy | Eng Manager | Git/CI/CD | Change tickets, deploy logs |
| Silent job failures | Scheduled job monitoring + on-call runbook | SRE | Scheduler | Run logs, incident tickets |
| Data mismatch | Reconciliation report + review | Ops/Finance | BI tool | Reconciliation output + sign-off |

This matrix becomes your audit backbone for TSC-PI1.3 [1].

Step 4: Implement monitoring and review routines

TSC-PI1.3 expects more than “we have dashboards.” You need a defined review process [1].

Operationalize with:

  • Daily/weekly checks for critical jobs (automated where possible)
  • Alert routing and on-call ownership
  • Triage process that results in tickets
  • Management review for recurring processing issues (trend review)
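The daily/weekly check for critical jobs can be a small script against your scheduler’s run history. This is a sketch under assumptions: the run records and the flag format are hypothetical, and turning flags into tickets is left to your ticketing integration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical run records, as they might come from a scheduler's API.
runs = [
    {"job": "nightly-billing", "status": "success",
     "finished_at": datetime.now(timezone.utc) - timedelta(hours=2)},
    {"job": "usage-export", "status": "failed",
     "finished_at": datetime.now(timezone.utc) - timedelta(hours=1)},
]

def daily_check(runs, max_age_hours=24):
    """Flag jobs that failed or missed their window; each flag should become a ticket."""
    cutoff = datetime.now(timezone.utc) - timedelta(hours=max_age_hours)
    flags = []
    for run in runs:
        if run["status"] != "success":
            flags.append((run["job"], "failed run"))
        elif run["finished_at"] < cutoff:
            flags.append((run["job"], "stale: no run inside window"))
    return flags
```

The “stale” branch matters: a job that silently never ran is a completeness failure even though nothing errored.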

Tooling note: If you struggle to keep evidence organized across systems (CI, ticketing, logs, monitoring), Daydream can act as a control evidence workspace that ties each processing control to the exact artifact an auditor will sample, with ownership and review workflows.

Step 5: Maintain an audit trail of processing and interventions

Auditors need to see what happened, who did it, and when [1]. Configure systems so you can show:

  • Job run history (success/fail, timestamps)
  • Manual re-runs/backfills with approvals
  • Production changes that could affect processing
  • Access and privilege changes for processing components
  • Incident records tied to processing failures

Avoid storing key evidence only in ephemeral systems. Export or retain logs per your retention policy.
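A simple way to keep the what/who/when record consistent across systems is to emit structured, append-only audit events. The field names below are illustrative, not a required format:

```python
import json
from datetime import datetime, timezone

def audit_event(actor, action, target, ticket=None):
    """Build one append-only audit record: what happened, who did it, and when."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,   # e.g. "manual-backfill", "job-rerun"
        "target": target,   # job, dataset, or flow identifier
        "ticket": ticket,   # approval/incident reference, if any
    }, sort_keys=True)
```

Serializing with a stable key order and writing to a log store covered by your retention policy keeps the trail both searchable and exportable for fieldwork.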

Step 6: Conduct periodic assessments (and retain testing results)

Treat this as your internal control testing cycle:

  • Select a sample of processing controls (high-risk flows first)
  • Test design: does the control, if followed, prevent/detect the failure?
  • Test operation: did it run? Is evidence complete? Was review performed?
  • Track remediation with due dates and accountable owners

This aligns directly to the expectation to “test control effectiveness” and close the loop [1].
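Sample selection for the testing cycle can be made reproducible so a reviewer can re-derive exactly which controls were picked. This sketch assumes a hypothetical control register with risk ratings:

```python
import random

# Hypothetical control register: each entry names a control and its risk rating.
controls = [
    {"id": "PI-01", "name": "Queue DLQ monitoring", "risk": "high"},
    {"id": "PI-02", "name": "Billing reconciliation", "risk": "high"},
    {"id": "PI-03", "name": "Schema validation", "risk": "medium"},
    {"id": "PI-04", "name": "Deploy approvals", "risk": "low"},
]

def select_sample(controls, seed=2024):
    """Pick every high-risk control plus a random draw from the remainder."""
    high = [c for c in controls if c["risk"] == "high"]
    rest = [c for c in controls if c["risk"] != "high"]
    rng = random.Random(seed)  # fixed seed makes the selection reproducible
    return high + rng.sample(rest, k=min(1, len(rest)))
```

Testing all high-risk flows first, then sampling the rest, mirrors the “high-risk flows first” guidance above.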

Required evidence and artifacts to retain

Build an evidence checklist by control. Common artifacts include:

  • Approved System Processing Policy and supporting procedures/runbooks [1]
  • Processing inventory and data flow diagrams for in-scope systems
  • Change management records tied to processing logic (PR approvals, change tickets, release notes)
  • CI/CD test results relevant to processing accuracy (unit/integration tests)
  • Scheduler/job logs, batch run reports, queue metrics, and failure logs
  • Monitoring dashboard snapshots (where snapshots are needed) and alert history
  • Incident tickets for processing failures, with root cause and corrective action
  • Reconciliation outputs (billing vs. usage, input vs. output counts) and review sign-offs
  • Evidence of periodic assessments/control testing and remediation tracking [1]

Retention tip: define where each artifact “lives” and who can export it on request. Audits fail on retrieval friction as often as control gaps.

Common exam/audit questions and hangups

Expect auditors to ask:

  • “Show me the policies/procedures over processing for this in-scope service” [1].
  • “Pick a processing job. How do you know it ran completely and accurately last month?”
  • “What happens when a job fails? Show evidence of a failure and your response.”
  • “How do you control manual reprocessing or backfills?”
  • “How do you ensure changes to processing logic are approved and tested?”
  • “Who reviews monitoring results, and where is the sign-off?”

Hangups that slow audits:

  • Evidence scattered across tools with no mapping to controls
  • Policies written at too high a level (“we monitor systems”) without procedure-level proof
  • Logs exist but aren’t retained or searchable for the full audit period

Frequent implementation mistakes (and how to avoid them)

  1. Policy-only compliance. A policy without workflow adoption fails. Tie procedures to tickets, on-call rotations, and CI/CD gates [1].
  2. No defined “complete/accurate” criteria. Define measurable checks per flow (record counts, schema validation, reconciliation outputs).
  3. Manual interventions without controls. Require tickets/approvals for reprocessing and keep an immutable record of who initiated it.
  4. Monitoring without review evidence. Alerts alone don’t show governance. Add a review cadence and capture sign-offs or ticket outcomes [1].
  5. Testing that doesn’t cover processing risks. Unit tests are good, but auditors will ask about failure modes: partial runs, retries, duplicate events, and boundary conditions.
  6. Evidence retention gaps. If logs roll over quickly or dashboards don’t preserve history, you will scramble during fieldwork. Establish a retention and export plan.
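Mistake 3 can be guarded in code as well as in process. A minimal sketch (the function, parameters, and ticket format are hypothetical) that refuses unapproved backfills and records every run:

```python
def run_backfill(job, date_range, approved_ticket, execute, audit_log):
    """Gate manual reprocessing behind an approval ticket and record who/what ran."""
    if not approved_ticket:
        raise PermissionError("manual backfill requires an approved ticket")
    # Record the intervention before executing so even a crashed run leaves a trace.
    audit_log.append({"job": job, "range": date_range, "ticket": approved_ticket})
    return execute(job, date_range)
```

The point is not the code itself but the invariant: no manual intervention happens without a ticket, and every intervention leaves an immutable record.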

Enforcement context and risk implications

SOC 2 is an audit framework, not a regulatory enforcement regime, so you should not expect “TSC-PI1.3 fines” [1]. The risk is commercial and operational: a qualified/adverse SOC 2 opinion, control exceptions, customer trust impact, and higher scrutiny in security reviews.

Processing integrity failures also create downstream exposure: incorrect billing, missing notifications, wrong access states, or corrupted customer data. Those outcomes escalate quickly into contractual disputes and incident response work.

Practical 30/60/90-day execution plan

First 30 days (define and document)

  • Confirm in-scope systems and list top processing flows.
  • Draft the System Processing Policy and 2–4 high-impact procedures/runbooks.
  • Build a control-to-evidence matrix for each flow.
  • Identify evidence sources and retention gaps (logs, tickets, CI).

Days 31–60 (implement and instrument)

  • Add or tighten validations, monitoring, and alerting for critical flows.
  • Formalize exception handling: failure tickets, severity, escalation paths.
  • Add approval controls for manual reprocessing/backfills.
  • Start collecting evidence in a single place (folder structure or a tool like Daydream).

Days 61–90 (operate, test, and remediate)

  • Run the review cadence (ops reviews, reconciliation reviews) and capture sign-offs.
  • Perform an internal control test: sample artifacts and confirm they meet auditor expectations [1].
  • Remediate gaps with tracked issues and verify fixes.
  • Prepare an “auditor packet” per flow: policy/procedure + control matrix + evidence samples.

Frequently Asked Questions

Does TSC-PI1.3 require automated controls?

No. It requires implemented policies and procedures over processing [1]. Automation improves consistency and evidence quality, but a well-controlled manual step can pass if it is documented, approved, and evidenced.

What counts as “system processing” for a SaaS product?

Any workflow that transforms inputs into customer-relevant outputs: API requests, background jobs, billing calculations, provisioning, and exports. If a failure could create wrong or missing customer outcomes, treat it as in scope for TSC-PI1.3 [1].

How do we prove completeness and accuracy without heavy data reconciliation?

Start with lightweight checks tied to failure modes: record counts, idempotency controls, schema validation, and exception queues. Then add targeted reconciliations for the flows with the highest customer or financial impact.
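The record-count idea reduces to a few lines. In this sketch, “quarantined” stands for records intentionally routed to an exception queue; the names are illustrative:

```python
def reconcile_counts(input_count, output_count, quarantined=0):
    """Completeness check: every input record must be accounted for as output or quarantine."""
    missing = input_count - output_count - quarantined
    return {
        "input": input_count,
        "output": output_count,
        "quarantined": quarantined,
        "missing": missing,
        "ok": missing == 0,  # any unaccounted-for record fails the check
    }
```

Saving the returned summary per run, with a reviewer sign-off, is a lightweight reconciliation artifact that still satisfies auditors for most flows.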

What evidence is usually strongest for auditors?

Time-stamped, system-generated artifacts: CI test results, deploy logs, scheduler logs, monitoring alert history, and ticketing records. Pair them with documented procedures and a clear mapping from control to evidence [1].

Our processing relies on third parties (payments, messaging). How does TSC-PI1.3 apply?

You still need policies and procedures over the end-to-end process, including third-party dependencies. Document what you monitor (webhooks, retries, reconciliation), how you detect third-party failures, and how you respond.

Can we satisfy TSC-PI1.3 with one policy document?

Rarely. A single policy can define expectations, but auditors typically need procedure-level detail and operational evidence for the critical flows. Keep the policy short, and put “how-to” steps in runbooks that teams actually follow.

Footnotes

  [1] AICPA Trust Services Criteria (2017)

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream