COSO Principle 12: The entity deploys control activities through policies and procedures
COSO Principle 12 requires you to translate control expectations into written, approved policies and executable procedures, then prove people follow them consistently. To operationalize it for SOC 2, define each control activity, map it to a policy/procedure, assign owners, run it on a set cadence, and retain auditable evidence of both design and operation 1.
Key takeaways:
- Policies state the “what” and “why”; procedures define the “how,” “who,” and “when” for each control activity 1.
- Auditors will look for traceability: risk → control activity → policy/procedure → evidence of operation 1.
- The fastest path is a control-to-documentation map plus an evidence plan that makes compliance repeatable, not heroic.
A SOC 2 report does not pass on good intentions. It passes on documented controls that are executed the same way across teams and over time, with evidence you can hand to an auditor without a scramble. The COSO Principle 12 requirement, "the entity deploys control activities through policies and procedures," sits in the middle of that reality: you need written guidance that turns control design into day-to-day behavior.
For a Compliance Officer, CCO, or GRC lead, Principle 12 becomes practical fast. You are building a system where (1) control activities exist and are appropriate, (2) teams know exactly what to do, (3) changes are managed, and (4) proof accumulates as a byproduct of operations.
This page is requirement-level implementation guidance for SOC 2 service organizations using the AICPA Trust Services Criteria. It gives you a step-by-step way to document policies and procedures, connect them to control activities, and create an evidence trail that holds up in an examination 1.
Regulatory text
Requirement (excerpt): “COSO Principle 12: The entity deploys control activities through policies and procedures” 1.
What the operator must do:
You must (a) define control activities that address relevant risks, (b) deploy those control activities through policies (management expectations) and procedures (detailed steps), and (c) operate them consistently enough that an auditor can test them and reach a conclusion on design and operating effectiveness within the SOC 2 period 1.
Plain-English interpretation
- Policies are the rules of the road: what the organization requires (access must be approved, changes must be reviewed, incidents must be tracked).
- Procedures are the playbooks: who does the work, what tools they use, what approvals are required, what evidence is produced, and where it is stored.
- Principle 12 is “real” only when the procedure is runnable by the person assigned, and produces evidence without improvisation.
A common audit failure mode is having a policy that sounds right, but no procedure that proves it happens. Another is having procedures, but no control intent, ownership, or evidence standard. Principle 12 is where you close that gap.
Who it applies to
Entity scope: Service organizations preparing for or maintaining a SOC 2 examination under the Trust Services Criteria 1.
Operational context (where this bites):
- Fast-growing companies where processes live in Slack threads and individual knowledge.
- Distributed engineering and IT operations teams where change/access practices vary by squad.
- Regulated customers (financial services, healthcare, enterprise buyers) expecting consistent governance artifacts.
- Environments with multiple systems of record (ticketing + Git + cloud consoles) where evidence is fragmented.
Functions typically involved:
- Security/GRC (control ownership, documentation standards, evidence governance)
- Engineering and IT (change, access, vulnerability, configuration procedures)
- People/HR (onboarding/offboarding procedures tied to access)
- Finance/RevOps (if controls touch billing, customer commitments, or system access for revenue tools)
- Vendor/third-party owners (if third parties perform key control steps)
What you actually need to do (step-by-step)
Use this sequence to operationalize the requirement with minimal rework.
Step 1: Inventory control activities in scope
Create a list of the control activities you expect to be tested in SOC 2 (for example: access provisioning approvals, termination deprovisioning, change review, logging review, incident response). For each, capture:
- Control objective (what risk it addresses)
- System(s) in scope
- Frequency/cadence (event-driven, daily, weekly, per release)
- Control owner (accountable) and operators (doers)
- Evidence produced (what an auditor will inspect)
Output: Control register (even a spreadsheet works if it’s disciplined).
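Even a spreadsheet register benefits from an explicit schema. The fields from Step 1 can be sketched as structured data; this is a minimal illustration, and the field names and example values are assumptions, not anything prescribed by the Trust Services Criteria:

```python
from dataclasses import dataclass

@dataclass
class Control:
    """One row of the control register (illustrative fields only)."""
    control_id: str      # short handle, e.g. "AC-01" (hypothetical ID scheme)
    objective: str       # the risk this control activity addresses
    systems: list        # systems in scope
    cadence: str         # "event-driven", "daily", "weekly", "per-release"
    owner: str           # accountable role, not a person's name
    operators: list      # roles that perform the steps
    evidence: str        # artifact an auditor will inspect

register = [
    Control(
        control_id="AC-01",
        objective="Unauthorized access: provisioning requires manager approval",
        systems=["Okta", "AWS"],
        cadence="event-driven",
        owner="IT Manager",
        operators=["IT Admin"],
        evidence="Access request ticket with approval log attached",
    ),
]

# "Disciplined" means no blank cells: every row carries every field.
for c in register:
    assert all([c.control_id, c.objective, c.systems,
                c.cadence, c.owner, c.operators, c.evidence])
```

The point of the assertion at the end is the discipline itself: a register with empty cells is where traceability breaks first.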
Step 2: Map each control activity to policy and procedure
Build a one-row-per-control “traceability map”:
| Control activity | Policy section | Procedure doc | Tool/workflow | Evidence location |
|---|---|---|---|---|
Rules that reduce audit friction:
- Every control activity must point to one procedure that an operator can follow end-to-end.
- Every procedure must state the evidence artifact it produces (ticket ID, screenshot, report export, approval log).
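Both rules are mechanical enough to check automatically each time the map changes. A minimal sketch, assuming a one-dict-per-row map (the schema is illustrative, not a mandated format):

```python
def check_traceability(rows):
    """Return audit-friction findings for a traceability map.

    Each row is a dict with keys: control, policy, procedure, tool, evidence.
    (Assumed schema; adapt to your own register.)
    """
    findings = []
    for row in rows:
        if not row.get("procedure"):
            findings.append(f"{row['control']}: no end-to-end procedure mapped")
        if not row.get("evidence"):
            findings.append(f"{row['control']}: no evidence artifact named")
    return findings

rows = [
    {"control": "Access provisioning", "policy": "Access Control §2",
     "procedure": "PROC-ACC-01", "tool": "Okta + Jira",
     "evidence": "Jira ticket with approval log"},
    {"control": "Change review", "policy": "Change Mgmt §1",
     "procedure": "", "tool": "GitHub", "evidence": "PR approval"},
]

print(check_traceability(rows))
# → ['Change review: no end-to-end procedure mapped']
```

Running a check like this in CI, or before each control review meeting, turns the "every control points to one procedure" rule from a hope into a gate.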
Step 3: Write (or fix) the policy layer
Policy content should be stable and executive-approved. Keep it short, but specific enough that procedures can’t contradict it:
- Scope (systems/teams covered)
- Mandatory requirements (approvals, segregation expectations, retention expectations)
- Roles (who can approve, who can implement)
- Exceptions process (how you approve deviations and how you document them)
Practical tip: policies fail when they read like a blog post. Use “must” statements that can be tested.
Step 4: Write runnable procedures tied to real workflows
A good SOC 2 procedure answers five questions on its first page:
- Who performs it (role, not name)
- When it runs (trigger/cadence)
- Systems used (Jira, GitHub, Okta, AWS, etc.)
- Steps with approvals/checkpoints
- Evidence created and where saved
Procedure examples that map cleanly to testing:
- User access provisioning procedure: request intake → manager approval → admin provisioning → validation → ticket closure with approval log attached.
- Change management procedure: PR opened → peer review → required checks pass → approval → deploy → rollback plan documented → ticket/PR link retained.
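Because each step in those flows produces a field an auditor can inspect, a change record's completeness can be verified mechanically before it reaches sampling. A sketch with assumed field names (the PR URL and roles below are hypothetical):

```python
# Fields a tester would expect on a change record (illustrative list).
REQUIRED_CHANGE_FIELDS = [
    "pr_url", "peer_reviewer", "checks_passed", "approver",
    "deployed_at", "rollback_plan", "ticket_link",
]

def change_record_gaps(record):
    """Return the required fields that are missing or empty."""
    return [f for f in REQUIRED_CHANGE_FIELDS if not record.get(f)]

record = {
    "pr_url": "https://github.com/org/repo/pull/482",  # hypothetical PR
    "peer_reviewer": "role:senior-engineer",
    "checks_passed": True,
    "approver": "role:eng-manager",
    "deployed_at": "2024-03-05T14:02:00Z",
    "rollback_plan": "Revert PR and redeploy previous tag",
    "ticket_link": "",  # missing: exactly the gap that slows exams
}

print(change_record_gaps(record))
# → ['ticket_link']
```

A record that passes this check is "provable": every approval, timestamp, and link an auditor needs is already attached.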
Step 5: Add a change-control mechanism for the documents themselves
Auditors commonly ask: “How do you know people are following the current process?” Solve it with lightweight document governance:
- Versioning (document history or repository commits)
- Approval workflow for material changes (policy owner approval)
- Effective date and review cadence
- Distribution/availability (where operators find procedures)
Step 6: Define an evidence standard per control (what “good evidence” looks like)
Create an evidence guide that standardizes what you will retain. For each control:
- What counts as evidence (ticket, system log, report)
- Minimum fields (approver, date/time, scope)
- How to handle screenshots (include URL/identifier and timestamp context)
- Retention location and naming convention
This is where teams save time. Without a standard, operators submit inconsistent proof and audits turn into forensics.
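A naming convention is easiest to enforce when filenames are generated rather than typed. A minimal sketch; the `control_date_artifact` pattern is an assumption, not a standard:

```python
import re
from datetime import date

def evidence_name(control_id: str, run_date: date, artifact: str) -> str:
    """Build a consistent evidence filename: <control>_<YYYY-MM-DD>_<artifact-slug>."""
    slug = re.sub(r"[^a-z0-9]+", "-", artifact.lower()).strip("-")
    return f"{control_id}_{run_date.isoformat()}_{slug}"

name = evidence_name("AC-01", date(2024, 6, 3), "Access approval log")
print(name)  # → AC-01_2024-06-03_access-approval-log
```

With names like this, retrieving evidence "by date range and control" is a filename filter, not a search expedition.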
Step 7: Operationalize with ownership and QA
Assign:
- Control owner: accountable for design and ongoing performance
- Control operator(s): perform steps
- Evidence steward (often GRC): checks evidence quality before the audit
Add a monthly control check meeting or asynchronous review where owners confirm controls ran and evidence exists.
Step 8: Test like an auditor before the auditor does
Run an internal “sample test”:
- Pick a small set of control instances across the period.
- Verify the procedure was followed.
- Confirm evidence is complete, readable, and traceable.
- Document gaps as issues with remediation dates.
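Selecting the sample the way an auditor would, spread across the period and reproducible, can be sketched as below. The instance shape and seed are assumptions for illustration:

```python
import random
from datetime import date, timedelta

def pick_sample(period_start, period_end, instances, size, seed=42):
    """Pick a reproducible sample of control instances from within the period.

    `instances` is a list of (run_date, ticket_id) tuples (illustrative shape).
    A fixed seed makes the selection re-runnable, so the same sample can be
    pulled again when walking the auditor through it.
    """
    in_period = [i for i in instances if period_start <= i[0] <= period_end]
    rng = random.Random(seed)
    k = min(size, len(in_period))
    return sorted(rng.sample(in_period, k))

# Twelve monthly-ish control runs across the period (synthetic data).
instances = [(date(2024, 1, 1) + timedelta(days=30 * n), f"TICKET-{100 + n}")
             for n in range(12)]
sample = pick_sample(date(2024, 1, 1), date(2024, 12, 31), instances, size=4)
print(sample)
```

For each sampled instance, apply the checklist above: procedure followed, evidence complete, gap logged with a remediation date.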
If you use Daydream, this is where teams usually get relief: you can centralize the control narrative, map procedures to controls, and run an evidence collection workflow that prompts owners for the right artifacts on a schedule, instead of chasing them at quarter-end.
Required evidence and artifacts to retain
Retain artifacts that prove both design and operation:
Design evidence (shows controls are defined and deployed)
- Approved policies (security, access control, change management, incident response)
- Procedures/playbooks linked to each control activity
- Control register and traceability map (control → policy/procedure → evidence)
- Roles and responsibilities (RACI or ownership list)
- Document governance (version history, approvals, review logs)
Operating evidence (shows controls ran during the period)
- Access requests with approvals and provisioning logs
- Offboarding tickets showing deprovisioning steps
- Change records: PR reviews, CI results, approvals, deployment records
- Incident tickets: triage notes, timelines, post-incident reviews
- Monitoring/review outputs: reports, alerts, review sign-offs
- Exception approvals and compensating controls evidence
Evidence should be stored in a durable system of record with consistent naming so you can retrieve it quickly by date range and control.
Common exam/audit questions and hangups
Auditors tend to press on these areas:
- “Show me the procedure your team followed for this control instance.”
- “Where is management approval for the policy and for updates?”
- “How do you ensure the procedure is followed across teams?”
- “How do you handle exceptions, emergency changes, or break-glass access?”
- “What evidence is retained, and how do you ensure completeness for the full period?”
- “If a tool changed mid-period, how did you update the procedure and ensure continuity?”
Hangups that slow exams:
- Evidence spread across tools with no index.
- Procedures that describe intent but omit exact steps.
- Controls performed, but not provable because the ticket lacks approver/date/system identifiers.
Frequent implementation mistakes and how to avoid them
- One generic policy pretending to cover everything. Fix: keep policy high-level, but create separate procedures per workflow/system where execution differs.
- Procedures that don’t match reality. Fix: write procedures from the operator’s screen. Validate by having a new team member follow it.
- No explicit evidence requirement. Fix: add an “Evidence” section to every procedure and standardize what must be attached to tickets.
- Control ownership is vague. Fix: name a role as owner for each control, and publish it in the control register.
- Document changes without approval trail. Fix: require approvals for material changes and keep version history accessible.
- Exception handling is informal. Fix: define an exception workflow with required approvals, time bounds, and compensating steps.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite enforcement actions.
Operationally, Principle 12 is still high-impact: weak policy/procedure deployment increases the risk of inconsistent control execution, audit findings, and customer trust issues. For SOC 2, the practical risk is a gap between “control is designed” and “control is operating,” usually discovered through missing or low-quality evidence 1.
Practical 30/60/90-day execution plan
Days 0–30: Build the backbone (documentation + mapping)
- Inventory in-scope control activities and owners.
- Create the control-to-policy/procedure traceability map.
- Identify top audit-critical procedures (access, change, incidents) and draft or correct them first.
- Stand up an evidence folder structure or system-of-record convention.
- Define evidence standards (what to attach, where, naming).
Deliverables: control register, traceability map, initial policy set, top procedures, evidence guide.
Days 31–60: Operationalize and start producing clean evidence
- Train control operators on procedures and evidence expectations.
- Run controls under the new procedures; collect evidence as you go.
- Implement document governance: versioning and approvals.
- Perform an internal sample test and log issues.
Deliverables: training record, evidence samples per control, document approval logs, remediation tracker.
Days 61–90: Harden for audit and reduce variance
- Close gaps found in sample testing (missing approvals, inconsistent tickets, unclear steps).
- Add exception workflows and emergency scenarios to procedures.
- Run a second internal test across different teams/systems.
- Prepare an auditor-ready evidence index (by control and date range).
Deliverables: updated procedures, exception records, second test results, evidence index.
Frequently Asked Questions
Do we need both a policy and a procedure for every control?
For SOC 2 testing, you need documentation that states expectations and documentation that tells operators how to execute. Many teams use one umbrella policy with multiple procedures underneath; the key is that each control activity maps to runnable steps and evidence 1.
What’s the difference between a control, a policy, and a procedure in an audit?
A control is what you do to address a risk, a policy is the rule requiring it, and a procedure is the step-by-step workflow that produces evidence. Auditors test controls by inspecting whether procedures were followed and whether evidence supports operation 1.
How detailed should procedures be?
Detailed enough that a qualified backup can execute them without asking the author for missing steps. If the procedure can’t be followed to produce the expected evidence artifact, it is not audit-ready.
Can we rely on tool configuration as “the procedure”?
Tool configuration can be part of the control design, but auditors still expect documented steps for approvals, review, exceptions, and how you verify the configuration remains in place. Write a procedure that references the tool settings and defines who checks them and what evidence they retain.
What evidence format is most audit-friendly?
Evidence that is time-stamped, attributable (who approved/performed), and directly tied to the system or transaction in scope. Tickets linked to system logs and approvals are usually easier to validate than standalone screenshots.
How do we handle fast-changing engineering processes without rewriting procedures constantly?
Keep policy stable, write procedures around decision points and required approvals, then reference living runbooks or system configuration details in appendices. Use version control and a lightweight approval workflow so updates produce an auditable history.
Footnotes
1. AICPA Trust Services Criteria (2017).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream