The entity implements policies and procedures to store inputs, items in processing, and outputs
To meet the “store inputs, items in processing, and outputs” requirement, you must define where production data enters your systems, how it is stored at each processing stage, and how final outputs are retained, protected, and kept retrievable. Operationalize it by mapping data flows, standardizing storage controls, and keeping audit-ready evidence of consistent execution.
Key takeaways:
- Treat “inputs, in-process items, outputs” as a complete lifecycle, not three separate silos.
- Standardize storage locations, access controls, retention, integrity checks, and recovery expectations per data class.
- Auditors will ask for evidence of operation: tickets, logs, configurations, and runbooks tied to real production activity.
This SOC 2 Processing Integrity requirement is about storage discipline across the processing lifecycle. “Inputs” are the data you receive (from customers, integrations, internal systems). “Items in processing” are intermediate states (queues, staging tables, temporary files, cached objects, working directories, ETL jobs). “Outputs” are what your system produces (reports, exports, API responses, generated files, downstream datasets). Storage can be cloud object stores, databases, message queues, file shares, endpoint disks, or managed services.
A common control gap is assuming “we have encryption at rest” satisfies the requirement. Encryption helps, but PI1.5 expects policies and procedures that define approved storage locations, handling rules, retention, protection against unauthorized access or alteration, and how you prove data is stored as intended throughout processing. Another common miss is focusing only on the final database while ignoring transient storage (job runners, build artifacts, temp buckets, debug logs, dead-letter queues).
Your goal as a Compliance Officer, CCO, or GRC lead is to make storage predictable and auditable. That means: (1) define the storage standard, (2) implement it in systems and workflows, and (3) retain evidence that it operates consistently for the SOC 2 period.
Regulatory text
Requirement (SOC 2 Trust Services Criteria): “The entity implements policies and procedures to store inputs, items in processing, and outputs.” [1]
What the operator must do
You need documented, approved, and operational procedures that specify:
- Where inputs, intermediate data, and outputs are permitted to be stored (approved systems/services and environments).
- How they are protected (access control, change control, integrity safeguards, backup/recovery expectations).
- How long they are retained and how they are disposed of, aligned to business, legal, and customer commitments.
- How you prove the procedures run in practice (technical configurations, logs, tickets, and repeatable runbooks).
This is a Processing Integrity control. Auditors generally evaluate whether storage practices prevent loss, corruption, unauthorized changes, or inability to reproduce outputs from defined inputs.
Plain-English interpretation of the requirement
If your system processes customer data, you must control the storage of that data at every stage so it doesn’t end up in unapproved places, doesn’t get overwritten or corrupted without detection, and can be retrieved to support operations, incident response, and customer inquiries. “Policy” answers what should happen; “procedure” answers who does it, how, and with what evidence.
Who it applies to (entity and operational context)
Applies to service organizations pursuing or maintaining a SOC 2 report under the Trust Services Criteria, especially where Processing Integrity is in scope [1]. In practice, it touches:
- Product/Engineering: application data stores, pipelines, batch jobs, caching layers, queues.
- Infrastructure/IT: cloud storage accounts, IAM, endpoint storage, backups, retention tooling.
- Security: access control standards, key management expectations, logging, monitoring.
- Data/Analytics: ETL staging, warehouses, extracts, BI outputs and exports.
- Support/Operations: ad hoc exports, troubleshooting bundles, attachments in ticketing tools.
- Third parties: processors, managed databases, file storage, CDNs, integration platforms (where your data is stored during processing).
Scope this requirement to systems that store or handle production inputs, intermediate states, or outputs for in-scope services.
What you actually need to do (step-by-step)
Step 1: Define the lifecycle and storage “zones”
Create a simple dataflow inventory that answers:
- Inputs: sources (API, UI uploads, SFTP, webhooks), ingestion points, initial landing storage.
- In-process: queues, temp buckets, staging DB schemas, scratch volumes, job runner disks, caches.
- Outputs: customer-visible exports, reports, derived datasets, audit logs, downstream integrations.
Deliverable: Data Storage Flow Map per in-scope service (diagram is fine if it is clear).
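The flow map can also live as a small machine-readable file so it can be validated automatically instead of going stale. A minimal Python sketch, where the service, store, and owner names are purely illustrative:

```python
# Hypothetical storage flow map for one in-scope service. Store names,
# owners, and retention values are illustrative, not real resources.
FLOW_MAP = {
    "service": "billing-api",
    "inputs": [
        {"store": "s3://ingest-landing", "owner": "platform", "retention_days": 30},
    ],
    "in_process": [
        {"store": "sqs:billing-jobs", "owner": "billing", "retention_days": 14},
        {"store": "s3://billing-staging", "owner": "billing", "retention_days": 7},
    ],
    "outputs": [
        {"store": "s3://billing-exports", "owner": "billing", "retention_days": 365},
    ],
}

def validate_flow_map(flow_map: dict) -> list[str]:
    """Flag any store that is missing a field the Storage Standard requires."""
    problems = []
    for stage in ("inputs", "in_process", "outputs"):
        for entry in flow_map.get(stage, []):
            for field in ("store", "owner", "retention_days"):
                if not entry.get(field):
                    problems.append(f"{stage}: {entry.get('store', '?')} missing {field}")
    return problems
```

A check like this can run in CI, so the map cannot drift silently as pipelines change.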
Step 2: Standardize approved storage locations and rules
Write (or update) a Storage Standard that includes:
- Approved storage services (by environment) and prohibited locations (examples: engineer laptops, personal cloud drives, public buckets).
- Classification-based rules (customer data vs operational logs vs derived analytics).
- Required controls per storage type:
- Access model (RBAC, least privilege, break-glass).
- Change control expectations for schema/storage configuration.
- Integrity expectations (checksums, immutability/WORM where relevant, versioning).
- Backup and restore expectations for critical stores.
- Retention and disposal rules per data category.
Keep this short enough to be used by engineering. You want fewer exceptions, not more pages.
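One way to keep the approved/prohibited rules enforceable rather than aspirational is to encode them as data that tooling can check. A sketch with hypothetical classifications and locations:

```python
# Illustrative classification-to-storage rules; these names are assumptions
# for the sketch, not a definitive standard.
APPROVED = {
    "customer_data": {"s3://prod-customer", "rds:prod-main"},
    "operational_logs": {"s3://prod-logs"},
    "derived_analytics": {"s3://prod-warehouse", "s3://prod-customer"},
}
PROHIBITED_PREFIXES = ("laptop:", "personal-drive:", "public:")

def is_approved(classification: str, location: str) -> bool:
    """A location must be non-prohibited AND on the approved list for its class."""
    if location.startswith(PROHIBITED_PREFIXES):
        return False
    return location in APPROVED.get(classification, set())
```

The same table can back a provisioning review checklist and an automated policy check, so both reference one source of truth.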
Step 3: Turn the standard into procedures people follow
Create procedures that map to real workflows:
- Provisioning procedure: how new buckets/DBs/queues are requested, reviewed, and created with baseline controls.
- Data pipeline procedure: where staging data can live, how temp storage is cleaned up, how failures and retries are handled without duplicating/corrupting stored items.
- Export procedure: how customer exports are generated, where they’re stored, how long they stay available, and who can access them.
- Exception procedure: how teams request deviations (with time-bound approval) and how exceptions are tracked.
A good test: a new engineer should be able to choose the correct storage option without guessing.
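The exception procedure is easiest to enforce when approvals are time-bound by construction. A minimal sketch, assuming a simple record shape (the field names are illustrative, not from any specific GRC tool):

```python
from datetime import date

# Hypothetical exception record: an exception only counts while it is both
# approved and unexpired, so stale deviations surface automatically.
def exception_is_active(exc: dict, today: date) -> bool:
    return exc.get("approved", False) and today <= exc["expires"]
```

Running this over the exception register on a schedule gives you the "exceptions are tracked" evidence with no extra effort.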
Step 4: Implement technical guardrails (so policy is enforceable)
Pick guardrails that prevent drift:
- IAM baselines: groups/roles for read/write/admin; limit direct human write access to production stores.
- Environment separation: prevent dev/test from writing into prod stores; separate accounts/projects where feasible.
- Configuration baselines: encryption at rest where supported, bucket versioning where needed, database auditing/logging where possible, private networking defaults where applicable.
- Monitoring hooks: alert on public exposure, anomalous deletes, disabled versioning, unexpected data egress, or new storage resources outside approved templates.
- Retention automation: lifecycle rules for object stores, log retention configurations, scheduled cleanup for temp locations.
You do not need every control everywhere. You need a coherent baseline and proof it is applied to in-scope storage.
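Guardrails like these stay honest with a drift check over exported configurations. A sketch that assumes the configs have already been flattened to dictionaries (for example from IaC state or a cloud inventory export):

```python
# Illustrative baseline; the setting names are assumptions standing in for
# whatever your storage platform actually exposes.
BASELINE = {"encryption_at_rest": True, "versioning": True, "public_access": False}

def drift(resource: dict) -> dict:
    """Return {setting: actual_value} for every setting that deviates from baseline."""
    return {
        key: resource.get(key)
        for key, expected in BASELINE.items()
        if resource.get(key) != expected
    }
```

A nightly run of this over in-scope stores, with results logged, doubles as both a guardrail and audit evidence.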
Step 5: Build the evidence set as you operate
Design your evidence capture so it is low-friction:
- For each storage zone, identify: “What configuration proves compliance?” and “What operational record proves the procedure ran?”
- Tie evidence to change tickets, IaC pull requests, or deployment records so you can sample quickly during the audit period.
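A simple way to verify the "sample across the period" expectation before the auditor does is a coverage check over evidence timestamps. A sketch with illustrative dates:

```python
from datetime import date

# Sketch: does the evidence set include at least one time-stamped record
# for every month of the review period?
def missing_months(evidence_dates: list[date],
                   period_months: list[tuple[int, int]]) -> list[tuple[int, int]]:
    covered = {(d.year, d.month) for d in evidence_dates}
    return [month for month in period_months if month not in covered]
```

Running this quarterly tells you early if a control's evidence trail has gaps you can still close.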
Daydream (if you use it) fits naturally here: track storage controls, link systems to control statements, collect recurring evidence (PRs, screenshots, logs) by period, and keep exceptions and compensating controls in one place.
Required evidence and artifacts to retain
Keep artifacts that prove both design and operating effectiveness:
Policy/design artifacts
- Data Storage & Retention Policy (approved, versioned).
- Storage Standard / Engineering Requirements (approved).
- Data Flow / Storage Map for in-scope services.
- Data classification scheme (if used) and mapping to storage rules.
- Exception register template and approval workflow.
Operating artifacts (examples auditors accept)
- IaC repositories or change records showing baseline settings applied to storage resources.
- Access control evidence: role definitions, group membership change tickets, periodic access review outputs (if you perform them).
- Samples of export jobs or runbooks showing where outputs are stored and how access is granted.
- Lifecycle/retention configuration screenshots or configuration exports for key stores.
- Backup/restore job configuration evidence and at least one restore test record where required by your internal standards.
- Cleanup job logs for temp/intermediate stores (or run evidence from schedulers).
Aim for evidence that is repeatable and time-stamped within the SOC 2 review period.
Common exam/audit questions and hangups
Auditors commonly probe these areas for PI1.5 [1]:
- “Show me where inputs land and how you prevent them from being stored elsewhere.”
- “Where does intermediate processing data live (queues, temp buckets, worker disks)? Who can access it?”
- “How do you prevent or detect unintended modification or loss in storage?”
- “What’s your retention policy for outputs and exports? How do you enforce it?”
- “How do you handle failed processing and re-runs so you don’t create duplicate outputs?”
- “Prove this was followed during the audit period. Show samples across time.”
Hangup to expect: teams forget “items in processing” includes transient and operational data stores, not just the primary database.
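One common answer to the re-run question above is deterministic output naming: derive the output key from the job and the input's identity, so a retry overwrites the same object instead of creating a duplicate. A minimal sketch; the key scheme and the in-memory dict standing in for a real object store are assumptions:

```python
import hashlib

def output_key(job_name: str, input_checksum: str) -> str:
    """Same job + same input checksum -> same key, every run."""
    digest = hashlib.sha256(f"{job_name}:{input_checksum}".encode()).hexdigest()[:16]
    return f"exports/{job_name}/{digest}.csv"

def store_output(store: dict, job_name: str, input_checksum: str, payload: bytes) -> str:
    key = output_key(job_name, input_checksum)
    store[key] = payload  # a re-run replaces the object rather than duplicating it
    return key
```

This also gives you a clean audit story: the output location is reproducible from the recorded inputs.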
Frequent implementation mistakes and how to avoid them
- Mistake: No inventory of intermediate storage.
  Fix: Require every pipeline/service to document queues, staging, caches, and temp file locations in the dataflow map.
- Mistake: Policy says “store securely,” but procedures don’t exist.
  Fix: Write procedures aligned to how storage is provisioned, how exports are generated, and how exceptions are granted.
- Mistake: Storage sprawl across accounts/projects with inconsistent controls.
  Fix: Use templates/IaC modules and restrict creation of storage resources outside the pipeline.
- Mistake: Retention is defined but not enforced.
  Fix: Implement lifecycle policies and scheduled cleanup; keep evidence of configuration and execution.
- Mistake: Customer data copied into tickets/chats during troubleshooting.
  Fix: Add a support procedure and approved tooling for diagnostics; train and enforce redaction rules through review and spot checks.
Enforcement context and risk implications
No public enforcement cases were provided for this specific SOC 2 criterion in the supplied source catalog. That said, the business risk is straightforward: weak storage controls create paths for data loss, unauthorized access, accidental disclosure, or irreproducible processing results. For SOC 2, the practical consequence is a control deficiency that can drive exceptions in your report, increase customer scrutiny during security reviews, and create contractual friction if you cannot demonstrate controlled handling of customer inputs and outputs [1].
Practical 30/60/90-day execution plan
Day 0–30: Define scope, map flows, publish the standard
- Confirm in-scope services and environments for Processing Integrity.
- Build the Storage Flow Map for each in-scope service (inputs → in-process → outputs).
- Publish a Storage Standard with approved locations, required protections, retention, and disposal rules.
- Stand up an exceptions process (simple ticket form plus approval workflow).
Day 31–60: Implement guardrails and procedures
- Convert the standard into 3–5 procedures: provisioning, pipelines, exports, exceptions, troubleshooting.
- Implement baseline technical controls via IaC or templates for the major storage types you use.
- Identify highest-risk gaps (public access risk, unmanaged temp storage, uncontrolled exports) and remediate first.
- Define an evidence plan: what you will sample monthly/quarterly, and where it will be stored.
Day 61–90: Prove operating effectiveness and make audits easy
- Run a mock audit: pick samples of storage resources and show policy → config → change record → access model.
- Collect evidence across time (not all from one day) and store it in your GRC repository.
- Train engineering/support on the procedures; update onboarding checklists.
- If using Daydream, configure the control, map systems, and automate evidence requests/collection so audit prep becomes routine.
Frequently Asked Questions
What counts as an “item in processing” for this requirement?
Any intermediate state between input and output, including message queues, staging tables, temporary object storage, cache layers, and worker node disks. If data can persist long enough to be accessed, replayed, or lost, treat it as in-scope storage.
Do we need separate policies for inputs, in-process items, and outputs?
You can use one policy if it clearly covers all three stages and defines storage locations, protections, retention, and disposal for each. Auditors care more about clarity and consistent operation than document count.
How do we handle ad hoc customer exports without failing PI1.5?
Define an export procedure: approved tooling, approved storage location, time-bound access, and enforced retention. Prohibit sending exports through unmanaged channels and require ticketing/approval for nonstandard requests.
Does encryption at rest satisfy the requirement by itself?
No. Encryption is one protection, but PI1.5 expects end-to-end storage procedures: where data is stored, who can access it, how changes are controlled, retention, disposal, and evidence that the process is followed [1].
What evidence is strongest for auditors?
Time-stamped configuration evidence (IaC, screenshots/exports), access control records, and change tickets tied to storage provisioning and updates. Add samples showing retention settings and cleanup/backup jobs actually ran during the period.
How do we scope third parties under this requirement?
Identify where third parties store your inputs, intermediate items, or outputs (for example, managed databases or integration platforms). Document the storage responsibility split and retain due diligence evidence consistent with your SOC 2 scope and commitments.
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
1. AICPA, Trust Services Criteria (2017), Processing Integrity criterion PI1.5.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream