The entity implements policies and procedures over system processing

To meet the requirement that the entity implements policies and procedures over system processing, you need documented, approved, and operating procedures that control how your in-scope system receives inputs, processes data, produces outputs, and handles errors or exceptions. You must then prove those procedures actually run in practice, with repeatable evidence an auditor can test.

Key takeaways:

  • Write processing procedures that match your real data flows (inputs → processing → outputs → error handling).
  • Connect procedures to day-to-day control operation (monitoring, reconciliations, incident handling, change management).
  • Retain evidence that shows consistent operation across the SOC 2 examination period (not just “policy exists”).

This requirement sits in the SOC 2 Processing Integrity criteria and is routinely underestimated because it sounds like paperwork. In practice, it is a reliability control: you are defining how processing is supposed to work and then making that behavior testable. Auditors look for a tight chain from (1) system processing commitments and objectives, to (2) documented policies and procedures, to (3) operating evidence that the procedures were followed.

“System processing” is broader than batch jobs or ETL. For most service organizations, it includes API transactions, user-driven workflows, background workers, scheduled tasks, data imports/exports, report generation, and customer-facing calculations. If any of those break silently, you can have processing integrity failures: incomplete processing, inaccurate outputs, duplicate transactions, or delayed processing outside expectations.

A fast way to operationalize TSC-PI1.3 is to treat it like a production playbook plus governance: define the processing steps, define what “correct” looks like, define who checks it, define what happens when it is not correct, and keep the evidence. The rest of this page gives you a step-by-step implementation, the artifacts to retain, and the audit questions that tend to stall SOC 2 timelines.

Regulatory text

Requirement (SOC 2 Processing Integrity): “The entity implements policies and procedures over system processing” 1.

What the operator must do:
You must implement written policies and procedures that govern how the system processes data end-to-end, including how processing is initiated, validated, completed, and corrected when errors occur. “Implement” means the procedures are not aspirational: they are approved, communicated, and used, and you can produce evidence that they operated throughout the audit period 1.

Plain-English interpretation (what auditors are really testing)

Auditors are testing whether your processing is defined, controlled, and repeatable:

  • Defined: You can explain your system’s key processing flows and the expected results.
  • Controlled: You have checks to prevent or detect incomplete, inaccurate, unauthorized, or late processing.
  • Repeatable: The same controls run consistently, and you can prove it.

This requirement is satisfied when an independent person can read your documentation, understand how processing is supposed to work, and then trace evidence showing the controls ran as described during the period.

Who it applies to (entity and operational context)

Applies to: Service organizations pursuing SOC 2 where Processing Integrity is in scope 1.

Operational scope: Any production processing that can impact customer outcomes, including:

  • Transaction processing (API calls, UI actions, order/claim/payment workflows)
  • Data pipeline processing (imports, exports, transformations, scheduled jobs)
  • Reporting and analytics outputs promised to customers
  • System-to-system integrations (webhooks, queues, file transfers)
  • Exception handling (retries, dead-letter queues, manual reprocessing)

Teams involved: Engineering/SRE, product operations, support/incident response, security/compliance (for governance), and any business operations team that performs manual processing steps.

What you actually need to do (step-by-step)

Use this sequence to get to “audit-ready” quickly.

Step 1: Define “system processing” for SOC 2 scope

  1. List your in-scope products/services and the in-scope system boundary.
  2. Identify the “material” processing flows: the flows that, if wrong, would cause customer impact (wrong output, missing records, duplicates, late processing).
  3. For each flow, document:
    • Inputs (source, format, authorization)
    • Processing steps (services/jobs involved)
    • Outputs (where they land, who consumes them)
    • Failure modes (common errors, timeouts, partial writes)

Tip: Auditors respond well to a one-page data flow diagram per major workflow plus a short narrative.
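The per-flow documentation above can also be kept machine-readable so it lives in version control next to the system it describes. A minimal sketch; the schema and example values are illustrative, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class ProcessingFlow:
    """One record per material processing flow (hypothetical schema)."""
    name: str
    inputs: list          # sources, formats, authorization notes
    steps: list           # services/jobs involved, in order
    outputs: list         # destinations and who consumes them
    failure_modes: list   # common errors, timeouts, partial writes
    owner: str = "unassigned"

# Example entry for a hypothetical nightly export job:
nightly_export = ProcessingFlow(
    name="nightly-customer-export",
    inputs=["orders table (read-only service account)"],
    steps=["extract job", "transform worker", "object storage upload"],
    outputs=["export bucket, consumed by customer SFTP sync"],
    failure_modes=["partial extract on DB timeout", "duplicate upload on retry"],
    owner="data-platform",
)
```

A record like this doubles as the narrative an auditor walks through: inputs, steps, outputs, and failure modes in one place, with a named owner.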

Step 2: Write processing policies (governance-level)

Create or update a “System Processing Integrity Policy” that states:

  • Processing objectives (completeness, accuracy, timeliness, authorization)
  • Required processing controls (validation, logging, monitoring, reconciliations)
  • Exception handling expectations (triage, correction, customer notification criteria)
  • Ownership (process owners, approvers) and review cadence
  • Tooling expectations (ticketing, monitoring, runbooks)

Keep this policy short; put details in procedures and runbooks.

Step 3: Create procedures/runbooks for each major processing flow

For each flow, create a procedure that answers:

  • How processing starts (trigger, schedule, manual initiation)
  • What validations run (schema checks, required fields, auth checks, idempotency keys)
  • What “success” looks like (status codes, job completion states, record counts)
  • What monitoring exists (alerts, dashboards, error budgets if you use them)
  • What to do on failure (retry logic, rollback, manual reprocessing steps)
  • How you prevent unauthorized changes to processing logic (tie-in to change management)

Keep procedures operational. If a support engineer cannot execute a procedure during an incident, it is not a procedure.
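The "what to do on failure" bullet above can be made concrete in code as well as in the runbook. A hedged sketch of bounded retries with exponential backoff and dead-lettering, assuming a `handler` that raises on failure (names and shape are illustrative):

```python
import time

def process_with_retry(item, handler, max_attempts=3, base_delay=0.1):
    """Retry transient failures with exponential backoff; dead-letter the rest.

    `handler` raises on failure. Returns ("ok", result) on success or
    ("dead_letter", error) so callers can quarantine the item for the
    manual reprocessing steps described in the runbook.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return ("ok", handler(item))
        except Exception as exc:
            if attempt == max_attempts:
                return ("dead_letter", str(exc))
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off before retrying
```

The point for SOC 2 is that retry limits and dead-letter routing are defined behavior you can document and evidence, not ad hoc operator judgment.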

Step 4: Implement control activities that prove procedures are followed

Most organizations need a small set of repeatable control activities:

A. Input controls

  • Automated validation at ingest (format, required fields, auth)
  • Rejection handling (dead-letter queue, quarantine bucket, error reports)
  • Access control for who can submit/trigger processing
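As an illustration of the input controls above, here is a minimal sketch of automated validation at ingest with quarantine of rejected records; the field names and rules are hypothetical:

```python
def validate_record(record, required_fields=("id", "amount", "submitted_by")):
    """Return a list of validation errors; an empty list means the record is accepted."""
    errors = []
    for f in required_fields:
        if f not in record or record[f] in (None, ""):
            errors.append(f"missing required field: {f}")
    if "amount" in record and not isinstance(record.get("amount"), (int, float)):
        errors.append("amount must be numeric")
    return errors

def ingest(records):
    """Split a batch into accepted records and a quarantine list with reject reasons."""
    accepted, quarantined = [], []
    for r in records:
        errs = validate_record(r)
        if errs:
            quarantined.append({"record": r, "errors": errs})  # goes to DLQ/quarantine
        else:
            accepted.append(r)
    return accepted, quarantined
```

Capturing the reject reason alongside the record is what turns the quarantine bucket into auditable rejection-handling evidence.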

B. Processing controls

  • Idempotency and duplicate handling for event-driven flows
  • Job scheduling controls (ownership, change approval for schedules)
  • Segregation of duties for manual processing where feasible (maker-checker)
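Idempotent duplicate handling for event-driven flows can be as simple as tracking processed keys. An in-memory sketch; a production system would typically back this with a database unique constraint or a key-value store, and the names here are illustrative:

```python
class IdempotentProcessor:
    """Drop duplicate events by idempotency key (in-memory sketch)."""

    def __init__(self, handler):
        self.handler = handler
        self._seen = set()

    def handle(self, event):
        key = event["idempotency_key"]
        if key in self._seen:
            return "duplicate_skipped"
        result = self.handler(event)
        self._seen.add(key)  # mark processed only after the handler succeeds
        return result
```

Marking the key only after success means a crash mid-processing leads to a retry rather than a silently dropped event.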

C. Output controls

  • Reconciliations (record counts, totals, completeness checks)
  • Post-processing validations (sanity checks, referential integrity checks)
  • Customer-facing report validation where outputs are relied on
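A reconciliation of the kind listed above can be a small script whose report is attached to a review ticket. An illustrative sketch, assuming records carry an `id` key and a numeric `amount` field (both names are assumptions for the example):

```python
def reconcile(source_records, output_records, key="id", amount="amount"):
    """Compare source vs. output by record count, key coverage, and totals.

    Returns a small report dict an operator can attach to a review ticket.
    """
    src_keys = {r[key] for r in source_records}
    out_keys = {r[key] for r in output_records}
    report = {
        "source_count": len(source_records),
        "output_count": len(output_records),
        "missing_keys": sorted(src_keys - out_keys),      # completeness check
        "unexpected_keys": sorted(out_keys - src_keys),
        "source_total": sum(r[amount] for r in source_records),
        "output_total": sum(r[amount] for r in output_records),  # accuracy check
    }
    report["balanced"] = (
        not report["missing_keys"]
        and not report["unexpected_keys"]
        and report["source_total"] == report["output_total"]
    )
    return report
```

The report plus a documented sign-off is exactly the "performed and reviewed" evidence pairing auditors look for.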

D. Exception handling

  • Standard incident workflow (detect → triage → fix → verify → close)
  • Root cause analysis for recurring processing failures
  • Defined criteria for customer notification if outputs were incorrect or delayed

Tie each control activity back to a named procedure so auditors can map “policy → procedure → evidence.”

Step 5: Add documentation control and change control around processing procedures

To show “implemented,” you need basic governance:

  • Version control for procedures/runbooks (wiki with versioning, or Git-based docs)
  • Approval and periodic review workflow (ticket or document approval record)
  • Change management linkage for processing logic changes (release tickets, pull requests)

Step 6: Build an evidence plan (don’t wait for the auditor)

Create an “evidence register” for the SOC 2 period. For each processing control, define:

  • Evidence type (screenshot, export, ticket, log excerpt, report)
  • Collection method (automated export, monthly screenshot, ticket tag)
  • Owner and frequency (aligned to how the control operates)
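The evidence register itself can be tracked as structured data with a simple overdue check, so collection gaps surface before the auditor finds them. A sketch with hypothetical entries and cadences:

```python
from datetime import date

EVIDENCE_REGISTER = [
    # Each entry maps a control to its evidence type, owner, and cadence.
    {"control": "monthly-reconciliation", "evidence": "report export + ticket",
     "owner": "data-platform", "frequency_days": 30,
     "last_collected": date(2024, 1, 5)},
    {"control": "ingest-validation-alerts", "evidence": "alert rule export",
     "owner": "sre", "frequency_days": 90,
     "last_collected": date(2023, 10, 1)},
]

def overdue(register, today):
    """List controls whose evidence collection is past its cadence."""
    return [e["control"] for e in register
            if (today - e["last_collected"]).days > e["frequency_days"]]
```

Running a check like this on a schedule is one way to make "collect continuously" an operated control rather than an intention.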

If you have Daydream, this is a good place to track control design, assign owners, and standardize evidence requests so you do not rebuild the same packet every audit cycle.

Required evidence and artifacts to retain

Auditors test both design and operation. Keep evidence that shows consistency across the period.

Core artifacts

  • System Processing Integrity Policy (approved, versioned) 1
  • Processing flow documentation (diagrams + narratives)
  • Procedures/runbooks for key workflows (version history visible)
  • Monitoring and alerting definitions (alert rules, dashboard descriptions)
  • Incident and problem management records tied to processing failures
  • Reconciliation outputs (exports, signed checklists, job completion reports)
  • Change management evidence for processing logic (PRs, release approvals)

What “good evidence” looks like (examples)

  • A monthly reconciliation report export plus a ticket comment showing review and disposition of exceptions.
  • An incident ticket showing detection, response steps referencing the runbook, verification of corrected output, and closure approval.
  • A change ticket/PR referencing an updated processing procedure and showing review/approval before deployment.

Common exam/audit questions and hangups

Auditors tend to get stuck in the same places:

  1. “Show me your key processing flows.”
    Hangup: Teams provide a generic architecture diagram with no inputs/outputs or failure handling.

  2. “How do you know processing is complete and accurate?”
    Hangup: Heavy reliance on “we monitor errors” without reconciliations or defined acceptance criteria.

  3. “What happens when processing fails?”
    Hangup: Ad hoc heroics in Slack, no tickets, no post-incident verification evidence.

  4. “How do you control changes to processing?”
    Hangup: Procedures say one thing; engineering practice differs (hotfixes, undocumented job schedule edits).

Frequent implementation mistakes (and how to avoid them)

  • Writing a policy but no workflow procedures. Why it fails testing: auditors cannot trace policy to operations. Fix: write one runbook per material processing flow and map controls to it.
  • Treating logs as evidence without review. Why it fails testing: logs show events, not control performance. Fix: add a review step (reconciliations, exception queues, ticket sign-offs).
  • No defined “correct output” criteria. Why it fails testing: completeness and accuracy cannot be tested. Fix: define acceptance checks (counts, totals, state transitions).
  • Manual reprocessing with no controls. Why it fails testing: high risk of duplicate or inaccurate processing. Fix: use maker-checker where possible; require ticketed steps and verification.
  • Evidence collected only at audit time. Why it fails testing: gaps appear across the period. Fix: maintain an evidence register and collect continuously.

Risk implications (why this requirement matters operationally)

Processing integrity failures create customer harm fast: incorrect invoices, missed transactions, incomplete exports, inaccurate dashboards, or data inconsistencies that cascade into downstream systems. From a SOC 2 perspective, weak processing procedures usually lead to:

  • Control design exceptions (documentation exists but does not cover key flows)
  • Operating effectiveness exceptions (controls not performed consistently; no evidence)

Even if this requirement is rated “medium” severity in many internal risk mappings, it can become a report-level issue when your product’s value depends on correct processing outputs.

Practical 30/60/90-day execution plan

Days 1–30: Scope + minimum viable documentation

  • Confirm in-scope systems and identify the top processing workflows.
  • Draft/approve the System Processing Integrity Policy (keep it short).
  • Produce processing flow docs for the top workflows.
  • Create initial runbooks for failure handling and reprocessing.
  • Start an evidence register with owners and collection methods.

Days 31–60: Operationalize controls and evidence

  • Implement or formalize reconciliations for key outputs (even if manual at first).
  • Standardize incident tickets for processing failures (required fields, links to runbooks).
  • Ensure monitoring/alerting coverage exists for processing failure modes.
  • Align change management so processing changes link to PRs/releases and procedure updates.
  • Pilot evidence collection on a recurring cadence and fix gaps.

Days 61–90: Tighten, test, and audit-proof

  • Run a tabletop exercise for a processing failure and validate the runbook works.
  • Perform an internal control test: pick samples across the period and verify evidence is complete.
  • Reduce “tribal knowledge” steps by updating runbooks and training operators.
  • Package an auditor-ready walkthrough: one workflow traced from input → output → exception handling with evidence.

Frequently Asked Questions

What counts as “system processing” for this requirement?

Any in-scope workflow that transforms inputs into outputs customers rely on, including automated jobs, APIs, integrations, and manual processing steps in production. Document the flows that would create customer impact if they fail.

Do we need separate procedures for every microservice?

No. Group by customer-impacting workflows. Auditors want coverage of material processing, not a catalog of every component.

Are automated controls required, or can we start with manual checks?

Manual reconciliations can work if they are defined, performed consistently, and evidenced. Plan to automate where manual steps create recurring errors or weak evidence.

What evidence is strongest for operating effectiveness?

Evidence that shows a control was performed and reviewed, such as reconciliation outputs with documented sign-off, incident tickets referencing runbooks, and change records that tie releases to updated procedures.

How do we handle processing done by a third-party service?

Treat it as part of your system processing chain: define what you expect from the third party, how you validate their outputs, and how you handle failures. Keep evidence of your validation and exception handling.

How does Daydream fit into TSC-PI1.3?

Use Daydream to assign control ownership, standardize procedure-to-control mappings, and run a consistent evidence collection workflow so you can prove procedures operated throughout the audit period.

Footnotes

  1. AICPA TSC 2017

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream