TSC-PI1.2 Guidance

TSC-PI1.2 requires you to implement documented, consistently performed controls over system inputs so you can reasonably assure inputs are complete, accurate, authorized, and processed as intended within your SOC 2 scope [1]. Operationalize it by mapping every input path, defining validation and authorization checks, logging and monitoring input exceptions, and retaining evidence that the controls ran.

Key takeaways:

  • Inventory and classify all system input channels, including APIs, files, UI forms, integrations, and admin tooling.
  • Put in place input controls: authorization, validation, completeness checks, error handling, and reconciliations.
  • Retain evidence: control docs, configs, logs, exception tickets, and periodic review results [1].

The fastest way to fail Processing Integrity in a SOC 2 is to treat “inputs” as only UI form fields. Auditors usually see failures in the messy edges: batch imports, partner feeds, internal admin tools, and background jobs that bypass normal validation. The TSC-PI1.2 requirement is narrowly focused on one thing: do you have policies and procedures that control what goes into the system so the processing that follows has a reliable starting point [1]?

For a Compliance Officer, CCO, or GRC lead, the job is not to design each validator rule personally. The job is to ensure (1) every input path has an owner, (2) control expectations are written down, (3) engineering and operations can show the controls are actually operating, and (4) exceptions are handled in a way that prevents silent data corruption. The outcome you want is straightforward: if someone asks “how do you know bad or unauthorized data didn’t enter the system,” you can answer with a control story and evidence, not beliefs.

Regulatory text

Requirement (excerpt): “The entity implements policies and procedures over system inputs to provide reasonable assurance” [1].

What the operator must do:
You must define, implement, and operate controls that govern how data enters in-scope systems. “Policies and procedures” must translate into working mechanisms such as input validation, access restrictions, approvals, automated checks, reconciliations, error handling, and review of exceptions. Auditors will look for (a) documentation, (b) evidence of operation, and (c) some form of testing or review that demonstrates the controls are effective [1].

Plain-English interpretation (what “system inputs” means in practice)

“System inputs” are any data or commands entering your in-scope environment that affect processing results. Treat inputs broadly:

  • User-entered data via UI forms (customer portal, internal console)
  • API requests from customers, partners, mobile apps
  • File-based transfers (SFTP, uploads, EDI, CSV imports)
  • Event and message ingestion (queues, webhooks, streaming)
  • Admin actions that change configuration or data (back office tools, scripts)
  • Third-party integrations that push records or triggers into your system

A clean control design prevents or detects:

  • Unauthorized inputs (wrong actor, wrong permissions)
  • Invalid inputs (bad format, impossible values, missing required fields)
  • Incomplete inputs (partial file loads, truncated payloads)
  • Duplicate or out-of-order inputs (replays, retries, idempotency failures)
  • Inputs that bypass required approvals (manual overrides, support “quick fixes”)
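The duplicate/replay case above is commonly handled with idempotency keys. A minimal sketch, assuming the caller supplies a key per logical request (the in-memory set is illustrative; a real system would use a durable store with a TTL):

```python
class IdempotentIngest:
    """Reject replayed inputs by tracking previously seen idempotency keys."""

    def __init__(self):
        # Illustrative only: production systems persist this durably.
        self._seen: set[str] = set()

    def accept(self, idempotency_key: str, payload: dict) -> bool:
        """Return True if the payload should be processed, False if it is a replay."""
        if idempotency_key in self._seen:
            return False
        self._seen.add(idempotency_key)
        # ...hand payload to downstream processing here...
        return True
```

The same pattern covers retries from webhooks and message queues: the retry carries the same key, so the second delivery is detected rather than silently double-processed.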

Who it applies to (entity and operational context)

This applies to any organization undergoing a SOC 2 examination with Processing Integrity in scope, and to the systems, services, and workflows described in the SOC 2 system description [1].

Operationally, TSC-PI1.2 sits at the intersection of:

  • Product/engineering (validation rules, API gateways, schema enforcement)
  • IT/operations (batch jobs, import tooling, job monitoring)
  • Security (authN/authZ, service accounts, key management)
  • Data/finance operations (reconciliations, completeness checks)
  • Compliance/GRC (policy, evidence, control ownership, audit readiness)

If you rely on third parties to provide input data (payment processors, data brokers, integration partners), the requirement still lands on you to control ingestion into your system, even if upstream controls exist.

What you actually need to do (step-by-step)

Step 1: Build an input inventory (the “input register”)

Create a register that lists each input path and answers:

  1. Source (customer, internal user, third party system, scheduled job)
  2. Method (UI, API endpoint, webhook, queue topic, SFTP folder, import job)
  3. Data type (transactions, profile data, configuration, pricing, entitlements)
  4. System of record and downstream impact
  5. Owner (team + individual)
  6. Control points (authorization, validation, reconciliation, monitoring)

Practical tip: auditors accept a spreadsheet if it’s maintained, owned, and tied to evidence. Keep it aligned to your SOC 2 scope boundaries.
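One register row can be sketched as structured data, which also makes the register easy to lint and tie to evidence tasks. Every field value below is hypothetical:

```python
# One input-register row as a dict; values are illustrative, not prescriptive.
register_entry = {
    "source": "partner SFTP feed",            # who originates the input
    "method": "daily CSV drop",               # how it arrives
    "data_type": "transactions",
    "system_of_record": "billing",
    "downstream_impact": "invoicing, revenue reporting",
    "owner": "data-ops team",                 # team; add a named individual in practice
    "controls": [
        "source authentication",
        "schema validation",
        "record-count reconciliation",
    ],
    "last_reviewed": "2024-05-01",
}
```

Whether the register lives in a spreadsheet or a config file matters less than it having an owner and a review date per row.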

Step 2: Define required controls per input type (minimum control baseline)

For each input channel, document the minimum set of controls you expect. A workable baseline:

A. Authorization controls

  • Require authenticated identities (human or service account)
  • Enforce role-based access to input functions (create/update/import)
  • Restrict high-impact inputs (pricing, entitlements, payouts) to elevated roles with change tracking

B. Validation controls

  • Schema/format validation (types, required fields, allowed ranges)
  • Business rule validation (status transitions, referential integrity, cross-field checks)
  • Malware scanning for file uploads where applicable
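A minimal validation sketch combining schema checks and business-rule checks, assuming a hypothetical transaction-like record (field names, types, and rules are illustrative):

```python
# Hypothetical record shape: required fields and their expected types.
REQUIRED = {"customer_id": str, "amount_cents": int, "currency": str}
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

def validate_record(rec: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = []
    # Schema/format validation: required fields and types.
    for field, ftype in REQUIRED.items():
        if field not in rec:
            errors.append(f"missing required field: {field}")
        elif not isinstance(rec[field], ftype):
            errors.append(f"bad type for {field}: expected {ftype.__name__}")
    # Business-rule validation only runs once the basic shape is valid.
    if not errors:
        if rec["amount_cents"] <= 0:
            errors.append("amount_cents must be positive")
        if rec["currency"] not in ALLOWED_CURRENCIES:
            errors.append(f"unsupported currency: {rec['currency']}")
    return errors
```

Returning the full error list (rather than failing on the first problem) supports the “reject with clear errors” expectation in the error-handling baseline below.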

C. Completeness and integrity controls

  • Checksums / record counts for file loads
  • Idempotency keys / deduplication for APIs and event ingestion
  • Reconciliations between source totals and ingested totals for batch workflows
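The checksum/record-count idea can be sketched as follows, assuming the sender supplies a manifest (expected count and hash) alongside each batch:

```python
import hashlib
from typing import Iterable

def load_manifest(lines: Iterable[bytes]) -> tuple[int, str]:
    """Compute the record count and SHA-256 digest of a batch,
    for comparison against the sender's manifest before accepting the load."""
    h = hashlib.sha256()
    count = 0
    for line in lines:
        h.update(line)
        count += 1
    return count, h.hexdigest()

def verify_load(lines: Iterable[bytes], expected_count: int, expected_sha256: str) -> bool:
    """True only if both the count and the checksum match the manifest."""
    count, digest = load_manifest(list(lines))
    return count == expected_count and digest == expected_sha256
```

A truncated file fails both checks, which is exactly the “partial file load” failure mode the baseline is meant to detect.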

D. Error handling and exception management

  • Reject invalid inputs with clear errors
  • Quarantine suspicious files/messages
  • Create tickets for failed imports and require resolution with documented disposition

Step 3: Turn “policy” into runnable procedures (runbooks + SOPs)

Write short procedures that answer:

  • How new input channels are approved (design review gate)
  • How validation rules are changed (change management link)
  • How ingestion failures are triaged (on-call steps, severity, escalation)
  • How completeness is verified after batch jobs (who checks, what evidence is produced)

Keep procedures implementer-friendly. One page beats a twenty-page policy.

Step 4: Instrument logs and an audit trail you can actually retrieve

TSC-PI1.2 often fails on evidence, not intent. Ensure you can produce:

  • API gateway logs (request metadata, auth context, response codes)
  • Import job logs (start/end, counts, failures, file identifiers)
  • Admin action logs (who changed what, when)
  • Exception workflow logs (tickets, approvals, resolutions)

Decide where logs live, retention, and who can access them. Make retrieval part of the procedure.
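A sketch of a structured import-job log entry, using only the standard library; the field names are assumptions, but the point stands: evidence is far easier to retrieve when it is machine-searchable JSON rather than free text:

```python
import json
import logging

logger = logging.getLogger("ingest")

def log_import_result(job_id: str, file_id: str, accepted: int, rejected: int) -> dict:
    """Emit a structured record of a batch import: the kind of
    time-bound evidence an auditor can later sample and verify."""
    entry = {
        "event": "import_completed",
        "job_id": job_id,
        "file_id": file_id,
        "accepted": accepted,
        "rejected": rejected,
    }
    # Serializing to JSON keeps the entry queryable in any log platform.
    logger.info(json.dumps(entry, sort_keys=True))
    return entry
```

Logging rejected counts alongside accepted counts matters: auditors ask for evidence of rejects, not just successful runs.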

Step 5: Monitoring and periodic review (prove the controls stay healthy)

Set up a control operation rhythm:

  • Review ingestion error rates and failed jobs
  • Review exception tickets for root causes and recurring patterns
  • Confirm reconciliations were performed for high-risk feeds
  • Track changes to validation logic and confirm peer review occurred

If you use Daydream, teams commonly map each input channel to a control, assign an owner, and attach recurring evidence tasks (for example, monthly reconciliation sign-off and quarterly exception trend review) so audit requests don’t become a scramble.

Step 6: Test control effectiveness (lightweight, defensible)

Testing can be simple but must be real:

  • Sample ingestion events and confirm validation/authorization worked
  • Re-perform a reconciliation from source to ingested totals
  • Verify a known-invalid payload is rejected in a test environment
  • Inspect a sample of admin changes and confirm audit trail completeness

Document what you tested, the sample selection method, results, and any remediation [1].
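Re-performing a reconciliation can be as simple as comparing per-feed totals from the source manifest against what was ingested. A sketch, assuming totals keyed by feed name:

```python
def reconcile(source_totals: dict, ingested_totals: dict) -> dict:
    """Compare source totals against ingested totals per feed.
    An empty result means the feeds reconcile; any entries are
    discrepancies to investigate and document."""
    diffs = {}
    for feed in set(source_totals) | set(ingested_totals):
        src = source_totals.get(feed, 0)
        ing = ingested_totals.get(feed, 0)
        if src != ing:
            diffs[feed] = {"source": src, "ingested": ing, "delta": ing - src}
    return diffs
```

Retaining the diff output (even when empty) alongside a sign-off gives you the time-bound evidence of operation the criterion asks for.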

Required evidence and artifacts to retain (audit-ready list)

Keep evidence tied to each input channel. Typical artifacts:

  • Policy/SOPs for input governance and exception handling [1]
  • Input register with owners, control mapping, and last review date
  • Configuration evidence: API gateway settings, validation schemas, queue policies, file scanning settings
  • Logging/audit trail samples: representative logs showing auth context, validation failures, job outcomes
  • Reconciliation outputs: signed checklists, reports, screenshots, or exported results
  • Exception records: tickets, approvals, root cause notes, closure evidence
  • Control testing results: test plan, samples, findings, remediation tracking [1]
  • Periodic review evidence: meeting notes, dashboards, attestations, action items

Common exam/audit questions and hangups

Auditors commonly probe these areas:

  • “List all ways data enters the system in scope. How do you know the list is complete?”
  • “Show input validation for your highest-risk API endpoints.”
  • “How do you prevent support/admin tooling from bypassing controls?”
  • “What happens when an import fails halfway through?”
  • “Show evidence the control operated throughout the period, not just today.” [1]
  • “How do you detect duplicates and replays for asynchronous processing?”

Hangup pattern: teams demonstrate validation in application code but cannot show consistent monitoring, exception handling, and review evidence.

Frequent implementation mistakes (and how to avoid them)

  1. Only documenting UI inputs.
    Fix: require every engineering team to enumerate APIs, jobs, partner feeds, and admin tooling in the input register.

  2. Validation exists, but no proof.
    Fix: keep versioned schemas/configs and retain logs that show rejects, not just accepts.

  3. Batch imports without completeness checks.
    Fix: add record counts/checksums and a reconciliation sign-off for high-impact feeds.

  4. Exceptions handled in chat.
    Fix: route ingestion failures into a ticketing workflow with required fields (cause, impact, resolution, approver where needed).

  5. No periodic review.
    Fix: schedule recurring reviews of ingestion failures and reconciliations; attach outputs as evidence [1].

Risk implications (why auditors care)

Input weaknesses create processing integrity failures that don’t look like “security incidents” until customers notice incorrect results: wrong invoices, missing transactions, incorrect entitlements, broken reporting, or downstream compliance errors. Even if no data leaves the system, bad inputs can invalidate your service commitments and undermine the reliability claims your SOC 2 report supports.

30/60/90-day execution plan

Days 1–30: Establish scope and baseline controls

  • Confirm in-scope systems and data flows for Processing Integrity.
  • Build the input register and assign owners for each input channel.
  • Identify “high-impact” inputs (financial, entitlement, provisioning, customer-facing results).
  • Draft or refresh SOPs: input validation standards, batch import procedures, exception handling, logging expectations.

Days 31–60: Implement monitoring, evidence capture, and exception workflows

  • Ensure logs exist for each input channel and can be retrieved by the control owner.
  • Add or formalize reconciliations for batch and partner feeds.
  • Route ingestion failures to tickets with defined severity and SLA expectations.
  • Start periodic review cadences and capture first-cycle evidence (meeting notes, dashboards, sign-offs).

Days 61–90: Test, remediate, and harden for the audit period

  • Perform control tests for representative input channels and document results.
  • Remediate gaps: missing logs, incomplete validation, unmanaged admin tools, weak deduplication.
  • Run a mock evidence pull: can you produce artifacts across the full lookback period?
  • Finalize control narratives so each input path has a clear “control story” mapped to evidence [1].

Frequently Asked Questions

Does TSC-PI1.2 require validation on every single field?

It requires policies and procedures over inputs that provide reasonable assurance [1]. In practice, focus depth on high-impact inputs and enforce baseline schema validation everywhere.

Are third-party-provided inputs covered?

Yes. Even if a third party originates the data, you control ingestion into your system. Document how you authenticate the source, validate payloads, and handle exceptions.

What evidence is strongest for auditors?

Time-bound evidence of operation: logs showing rejects and failures, reconciliation outputs, and tickets that document exception handling. Pair that with written procedures and testing results [1].

How do we handle “manual fixes” by support or engineering?

Treat them as privileged inputs. Require access controls, approval where impact is high, and an audit trail that records who did what, when, and why.

We have microservices. Do we need input controls at every service boundary?

You need controls over system inputs within scope. Document where validation occurs (edge gateway vs. each service) and prove the approach prevents invalid or unauthorized data from entering processing.

What if we can’t reconcile certain event streams end-to-end?

Document the limitation and implement compensating controls such as idempotency, dead-letter queues with review, and monitoring of ingestion failure patterns. Auditors will expect an intentional design and evidence it runs.

Footnotes

  1. AICPA Trust Services Criteria 2017


See Daydream