TSC-PI1.5 Guidance
To meet the TSC-PI1.5 requirement, you must define and operate documented procedures for how your organization stores inputs, items in processing, and outputs across each in-scope system, including access controls, retention, integrity checks, backup/restore, and audit logging. Your SOC 2 auditor will look for both the written procedure and evidence that teams consistently follow it.
Key takeaways:
- Scope “inputs / processing / outputs” by system and data flow, then assign accountable owners.
- Write storage procedures that cover location, access, integrity, retention, backup, and deletion/disposal.
- Keep proof of operation (configs, logs, tickets, reviews) and test control effectiveness.
TSC-PI1.5 sits under the SOC 2 Trust Services Criteria for Processing Integrity and focuses on a deceptively simple question: where does data live while it enters your system, moves through it, and leaves it, and do you control that storage consistently? The requirement is not asking you to prove your application logic is correct. It asks you to show that storage of data at each stage is governed by policy and implemented in day-to-day operations.
For most teams, the audit risk is not “we store data badly.” It’s “we store data in more places than we admit,” like transient queues, caches, object storage buckets, CI/CD artifacts, analytics exports, support attachments, and third-party integrations. Those hidden stores often lack defined retention, access review, encryption settings, or logging. Auditors then conclude the control is not designed effectively or is not operating consistently.
This page translates the TSC-PI1.5 criterion into an implementation checklist a CCO, GRC lead, or control owner can run quickly: scope the storage points, document the procedure, wire it into engineering/IT operations, and retain audit-ready evidence.
Regulatory text
Excerpt (TSC-PI1.5): “The entity implements policies and procedures to store inputs, items in processing, and outputs.” 1
What the operator must do
You must be able to show, for each in-scope system and processing workflow, that you have:
- Defined storage expectations for data at three stages (inputs, processing, outputs).
- Implemented those expectations in technical configurations and operational routines.
- Maintained evidence that the procedures run as designed over the audit period.
All three elements are necessary for a SOC 2 conclusion under this criterion 1.
Plain-English interpretation (what auditors mean)
TSC-PI1.5 expects you to control “data at rest” across the processing lifecycle:
- Inputs: Data before and at ingestion (files uploaded, API requests, batch feeds, web forms, inbound messages).
- Items in processing: Data in intermediate states (staging tables, queues, temporary files, caches, job artifacts, ETL pipelines).
- Outputs: Data produced for customers or other systems (reports, exports, invoices, notifications, API responses, data extracts).
Auditors typically test whether storage is intentional and governed, not accidental. They look for:
- Approved storage locations (systems of record, buckets, databases, queues).
- Access controls (who/what can write, read, delete).
- Integrity safeguards (checksums, transactional controls, immutability where needed, controlled overwrites).
- Retention and disposal (how long, how deleted, and exceptions).
- Backup/restore and monitoring (recoverability supports processing integrity).
- Logging/audit trails (who accessed or modified stored data and when).
Who it applies to
Entities: Organizations undergoing a SOC 2 examination where Processing Integrity is in scope 1.
Operational context where it matters most:
- SaaS platforms processing customer transactions or workflows.
- Data platforms (ETL/ELT, analytics pipelines, data products).
- FinTech/payment-adjacent processing (even if not regulated, customers expect strong controls).
- Any service with meaningful batch jobs, asynchronous processing, or file-based ingestion.
Teams you will need:
- Engineering (application data stores, queues, caches, code-level storage practices)
- IT/Cloud platform (infrastructure storage, backups, IAM)
- Security (logging, encryption standards, access governance)
- Data/Analytics (exports, warehouses, transformation stages)
- Support/Operations (ticket attachments, manual uploads, ad hoc exports)
What you actually need to do (step-by-step)
Step 1: Define scope by data flow (make “storage points” explicit)
Create a simple inventory that maps each critical workflow to its storage locations:
- Input stores: upload bucket, inbound SFTP landing zone, API gateway logs, message queue topics.
- Processing stores: staging DB schema, temp object store path, cache layer, job-run artifacts.
- Output stores: customer export bucket, report database, outbound queue, emailed attachments archive.
Practical tip: If you can’t draw the data flow, you can’t defend the storage procedure in an audit.
Deliverable: “Processing Storage Map” (table is fine) with system owner and environment (prod/non-prod).
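One lightweight way to keep the Processing Storage Map auditable is to hold it as structured data and lint it for completeness. A minimal sketch in Python; every workflow, owner, and store name below is a hypothetical example, not a prescribed schema:

```python
# Sketch: a Processing Storage Map as structured data, plus a completeness
# check that every workflow names an accountable owner and documents stores
# for all three stages. All names below are hypothetical examples.

REQUIRED_STAGES = ("inputs", "processing", "outputs")

storage_map = [
    {
        "workflow": "invoice-generation",
        "owner": "billing-engineering",
        "environment": "prod",
        "stores": {
            "inputs": ["s3://example-uploads/invoices/"],
            "processing": ["staging_db.invoice_raw", "sqs:invoice-jobs"],
            "outputs": ["s3://example-exports/invoices/"],
        },
    },
]

def lint_storage_map(entries):
    """Return a list of problems; an empty list means the map is complete."""
    problems = []
    for entry in entries:
        name = entry.get("workflow", "<unnamed>")
        if not entry.get("owner"):
            problems.append(f"{name}: missing accountable owner")
        stores = entry.get("stores", {})
        for stage in REQUIRED_STAGES:
            if not stores.get(stage):
                problems.append(f"{name}: no {stage} store documented")
    return problems

print(lint_storage_map(storage_map))  # -> [] for a complete entry
```

Running this in CI against the map file gives you a recurring, time-stamped artifact that the inventory stays complete as workflows change.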
Step 2: Write the storage procedure (policy + runbook level)
Auditors want more than a one-line policy. Build a short standard with a runbook feel:
Minimum content your procedure should cover
- Approved storage locations per stage (inputs/processing/outputs).
- Data classification link (how sensitive data changes the storage rules).
- Access control rules
  - Human access (roles/groups; approval; periodic review).
  - Service access (least privilege; key/secret management).
- Integrity controls
  - How you prevent partial writes/corruption (transactions, idempotency controls, write-once patterns, validation checks).
  - How you handle reprocessing and overwrites.
- Retention and disposal
  - Default retention per store.
  - Deletion method and verification.
  - Exceptions and approval path.
- Backup/restore expectations
  - Which stores are backed up.
  - Restore testing expectations.
- Logging and traceability
  - What events are logged (read/write/delete, permission changes).
  - Log retention and access controls.
- Monitoring and review
  - What gets reviewed, by whom, and what triggers an incident or ticket.
Deliverable: “TSC-PI1.5 Storage Procedure” approved by the right owner (Security, Engineering, or GRC, depending on your governance).
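The integrity-controls bullet (preventing partial writes and corruption) can be made concrete with checksums recorded at ingestion and re-verified before processing. A minimal sketch using Python's standard library; the payload is a hypothetical input record:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Digest recorded when an input lands in the store."""
    return hashlib.sha256(data).hexdigest()

def verify_before_processing(data: bytes, recorded_digest: str) -> None:
    """Refuse to process an input whose stored bytes no longer match."""
    actual = sha256_of(data)
    if actual != recorded_digest:
        raise ValueError(f"integrity check failed: {actual} != {recorded_digest}")

payload = b"2024-06-01,ACME,1200.00\n"     # hypothetical input record
digest = sha256_of(payload)                # stored alongside the input
verify_before_processing(payload, digest)  # passes; raises on any corruption
```

The same pattern works for output exports: record the digest at write time and log each verification so the audit trail shows the check actually ran.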
Step 3: Implement guardrails in the platform (make the procedure real)
Translate the written procedure into enforced settings. Typical controls include:
- Standard bucket/container policies (deny public access, restrict cross-account access).
- Centralized IAM patterns for data stores (role-based access, break-glass).
- Default encryption settings where supported (align to your internal standard).
- Controlled paths/naming conventions for temporary processing artifacts.
- Automated lifecycle rules for retention/disposal in object storage.
- Backup policies for databases and critical object stores.
- Central logging for storage activity and permission changes.
Deliverable: Config evidence (screenshots/exports), plus change records showing implementation.
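One way to show that lifecycle rules stay aligned with the written procedure is a drift check comparing declared retention against a policy export. A sketch under assumptions: the export structure here is a simplified, hypothetical stand-in for whatever your cloud provider's API actually returns, and the store names are examples:

```python
# Sketch: detect drift between the retention the procedure declares and the
# lifecycle configuration actually exported from the platform.
# Store names and the export format are hypothetical.

declared_retention_days = {
    "example-uploads": 90,     # inputs
    "example-temp": 7,         # items in processing
    "example-exports": 365,    # outputs
}

def find_retention_drift(declared, exported_policies):
    """Return stores whose configured retention differs from the procedure."""
    drift = {}
    for store, expected_days in declared.items():
        actual = exported_policies.get(store, {}).get("expiration_days")
        if actual != expected_days:
            drift[store] = {"expected": expected_days, "actual": actual}
    return drift

exported = {
    "example-uploads": {"expiration_days": 90},
    "example-temp": {"expiration_days": 7},
    # "example-exports" has no lifecycle rule at all, so it is flagged
}
print(find_retention_drift(declared_retention_days, exported))
# -> {'example-exports': {'expected': 365, 'actual': None}}
```

Scheduling this check and ticketing its findings turns "retention is defined" into evidence that retention is enforced.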
Step 4: Operationalize the control (monitoring, review, and audit trail)
TSC-PI1.5 is easy to “document” and hard to “prove.” Build recurring operating activities:
- Access reviews for storage systems in scope.
- Review of lifecycle/retention jobs and failures.
- Exceptions tracking (who approved, why, expiration).
- Periodic assessment/testing of the control (walkthrough + sample checks).
Deliverable: Review tickets, sign-offs, automated reports, and exception register.
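Exception tracking is the piece that most often goes stale. A small script that flags lapsed entries gives each periodic review a concrete worklist. A sketch; the register fields and IDs are hypothetical:

```python
from datetime import date

# Sketch: flag exception-register entries whose approval has lapsed so the
# recurring review ticket has a concrete worklist. Fields are hypothetical.

exception_register = [
    {"id": "EXC-101", "store": "example-exports", "approved_by": "ciso",
     "expires": date(2024, 1, 31)},
    {"id": "EXC-102", "store": "example-temp", "approved_by": "ciso",
     "expires": date(2030, 12, 31)},
]

def expired_exceptions(register, today):
    """Return exception IDs whose approval has lapsed as of `today`."""
    return [e["id"] for e in register if e["expires"] < today]

print(expired_exceptions(exception_register, date(2024, 6, 1)))
# -> ['EXC-101']
```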
Step 5: Test effectiveness (before your auditor does)
Run an internal test aligned to how auditors sample:
- Pick a workflow, trace an input through processing to output.
- Verify each storage point matches the procedure (location, permissions, retention, logs).
- Pull evidence for a small sample of changes (IAM policy change, retention rule change, backup restore test).
Deliverable: Internal test worksheet + remediation tickets for gaps.
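The walkthrough step can be partially automated: compare the storage points a traced transaction actually touched against the approved list from your Processing Storage Map. A sketch with hypothetical store names; in practice the observed list would come from your logs or tracing tooling:

```python
# Sketch: check that a traced transaction only touched approved storage
# points. Store names are hypothetical; feed `observed` from real logs.

approved_stores = {
    "s3://example-uploads/",
    "staging_db.invoice_raw",
    "s3://example-exports/",
}

def trace_findings(observed_stores):
    """Return storage points touched during the trace but not approved."""
    return sorted(set(observed_stores) - approved_stores)

observed = [
    "s3://example-uploads/",
    "staging_db.invoice_raw",
    "s3://example-exports/",
    "/tmp/debug-dump/",   # an undocumented store: exactly the audit risk
]
print(trace_findings(observed))  # -> ['/tmp/debug-dump/']
```

Any finding becomes a remediation ticket, which doubles as evidence that the internal test operates.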
Required evidence and artifacts to retain
Use this list as your audit request packet for the TSC-PI1.5 requirement:
Core documentation
- Storage policy/procedure covering inputs, items in processing, outputs 1
- Processing Storage Map (data flow + storage points + owners)
- Data classification/handling standard cross-reference (if separate)
Configuration and operational evidence (examples)
- Access control configurations for key stores (IAM role/group listings, bucket policies, database grants)
- Retention/lifecycle configurations and proof they’re active (policy exports, screenshots, IaC snippets)
- Backup policy configs and restore test evidence (tickets, test logs)
- Logging configuration and sample logs for storage events
- Monitoring/alert definitions and evidence of review (weekly/monthly review tickets)
- Exception register with approvals and expirations
- Change management records for storage-related changes (tickets/PRs)
Testing evidence
- Control testing plan and results (internal or by an independent function)
- Remediation evidence for identified gaps
Common exam/audit questions and hangups
Auditors often ask:
- “Show me where inputs are stored before validation.” If you have an upload bucket or landing zone, it’s in scope.
- “What counts as ‘items in processing’ in your architecture?” Queues, caches, staging tables, temp files, ETL workspaces.
- “How do you prevent output exports from living forever?” They want retention and deletion proof.
- “Who can access production storage and how is that reviewed?” Expect sampling of access reviews.
- “Can you produce an audit trail of changes to storage permissions?” Change history and logs matter.
Hangups that stall audits:
- Teams treat “temporary” storage as out of scope.
- Retention is defined, but lifecycle jobs fail silently.
- Logging exists, but no one reviews alerts or exceptions.
- Procedures say “encrypted,” but configs differ across environments.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails in SOC 2 | How to avoid it |
|---|---|---|
| A single generic “Data Storage Policy” with no system mapping | Doesn’t show coverage of inputs/processing/outputs | Maintain a storage map per workflow/system and link it to the policy |
| Ignoring non-prod and support tooling | Data often gets copied to dev/test or tickets | Define rules for masked/synthetic data and restrict attachments/exports |
| No defined retention for outputs | Outputs sprawl into buckets, email, shared drives | Set default retention, automate lifecycle deletion, track exceptions |
| No evidence of operation | Auditors need proof across the period | Keep review tickets, config exports, log samples, and test results |
| No periodic assessment | Controls drift over time | Add a recurring control self-test and tie it to change management |
Risk implications (why operators should care)
TSC-PI1.5 sits in Processing Integrity, but breakdowns show up as:
- Incorrect outputs due to stale or overwritten intermediate files.
- Customer-impacting incidents when reprocessing uses wrong inputs.
- Data exposure when “temporary” stores are broadly accessible.
- Audit findings when retention/disposal isn’t consistent with customer commitments.
SOC 2 is an audit framework rather than a regulatory enforcement regime, so the immediate consequence is usually a control deficiency, scope carve-out, or a qualified opinion risk depending on severity and pervasiveness 1.
Practical 30/60/90-day execution plan
Days 0–30: Scope and document the control
- Confirm which products/workflows fall under Processing Integrity scope for your SOC 2.
- Build the Processing Storage Map for top workflows (start with the ones auditors will sample).
- Draft the TSC-PI1.5 storage procedure with owners, approvals, and exception path.
- Identify quick gaps: unknown buckets, unmanaged exports, missing retention rules.
Output: Approved procedure + scoped storage inventory.
Days 31–60: Implement guardrails and start generating evidence
- Standardize storage configurations (IAM patterns, retention/lifecycle, logging).
- Implement monitoring for lifecycle failures and permission changes.
- Establish the review cadence (access reviews, retention checks, exception review).
- Start collecting evidence in a consistent repository.
Output: Configs aligned to the procedure + first cycle of operational evidence.
Days 61–90: Test and harden for audit
- Run an internal walkthrough: trace sample transactions from input to output storage.
- Perform an internal operating effectiveness test (sample changes, reviews, logs).
- Remediate gaps; document compensating controls where needed.
- Package evidence by workflow/system for fast auditor response.
Output: Test results + remediation closure + audit-ready evidence packet.
Where Daydream fits (if you need speed and control traceability)
If you manage many storage points across cloud services and third parties, Daydream can help you keep the storage map, procedures, evidence, and testing results in one place so you can answer auditor samples without rebuilding context each time. The value is coordination and audit readiness, not replacing engineering work.
Frequently Asked Questions
Does TSC-PI1.5 require encryption at rest?
The criterion text focuses on having policies and procedures for storing inputs, items in processing, and outputs 1. Encryption is a common way to meet storage expectations, but your auditor will primarily test whether your documented procedure matches what you actually configured and operate.
What counts as “items in processing” in a modern cloud stack?
Include any intermediate storage that can affect correctness or traceability: queues/topics, staging schemas, caches, temp object storage prefixes, and job artifacts. If data can be reprocessed or modified there, auditors typically treat it as in scope for this requirement.
Do we need separate procedures for each system?
You can have one standard procedure with system-specific appendices. Auditors usually want to see that your written control clearly maps to each in-scope workflow and storage location.
How do we handle customer-requested exceptions to retention (longer or shorter)?
Maintain an exception register with documented approval, scope (which stores/paths), and an expiration or review date. Make sure the technical configuration matches the approved exception and keep evidence of periodic review.
What evidence is most persuasive for operating effectiveness?
Time-stamped artifacts: access review sign-offs, lifecycle policy exports, monitoring alerts and closures, change tickets/PRs for storage settings, and log samples showing storage events. Auditors tend to discount “we do this” statements without artifacts.
Our outputs include ad hoc exports created by Support. Are those in scope?
If ad hoc exports include customer data and are part of delivering the service, they frequently become “outputs” for TSC-PI1.5 purposes. Put a procedure around where exports may be stored, retention, and who can create them, then keep ticket-based evidence.
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
1. AICPA, Trust Services Criteria (2017).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream