COSO Principle 13: The entity obtains or generates and uses relevant, quality information to support the functioning of internal control
COSO Principle 13 requires you to identify the information your controls depend on, ensure that information is relevant and of high quality (accurate, complete, timely, and appropriately detailed), and prove it is produced, governed, and used in control operation. To operationalize it quickly for SOC 2, build an “information inventory,” assign owners, implement data-quality checks for key reports/feeds, and retain repeatable evidence.
Key takeaways:
- Treat “information” as a control dependency: reports, dashboards, tickets, logs, metrics, spreadsheets, and vendor attestations.
- Auditors expect report integrity: defined sources, transformations, access controls, review checks, and retained evidence that the report was used.
- Your fastest path is a scoped inventory + quality criteria + recurring validations tied to specific controls.
Compliance teams often document controls well, then fail the audit on something more basic: proving that the inputs to those controls were reliable. COSO Principle 13 addresses that gap. In SOC 2 work, it shows up when a control says “Security reviews X report weekly,” but no one can demonstrate where the report came from, whether it was complete, whether it could be altered, or whether the reviewer would notice if key data was missing.
For a CCO, Compliance Officer, or GRC lead, the practical job is to make “quality information” an operational system. That means (1) knowing which information assets directly support controls, (2) defining quality criteria for each (accuracy, completeness, timeliness, validity, authorization), (3) implementing lightweight but defensible checks and governance, and (4) keeping evidence that the information was generated and used as designed.
This page gives requirement-level implementation guidance for COSO Principle 13 (the entity obtains or generates and uses relevant, quality information to support the functioning of internal control), with SOC 2 operator realities in mind: report integrity, ticketing evidence, logging coverage, and audit-ready artifacts.
Regulatory text
Requirement excerpt (SOC 2 Trust Services Criteria): “COSO Principle 13: The entity obtains or generates and uses relevant, quality information to support the functioning of internal control” 1.
What the operator must do
You must be able to demonstrate, in a repeatable way, that:
- Relevant information exists to run your controls (you can name it, locate it, and explain why it matters).
- That information is quality information (it is accurate, complete, timely, and appropriately protected from unauthorized change).
- The information is actually used as part of control performance (not just produced and ignored).
- You can prove all of the above with evidence for the audit period 1.
Plain-English interpretation (what auditors are really testing)
Auditors are testing whether your internal control system runs on trustworthy inputs. If your controls rely on:
- a vulnerability scan report,
- an access review export,
- a SOC alert queue,
- a Jira ticket workflow,
- a cloud configuration baseline,
- an uptime/SLA dashboard,
- a third-party assurance report,
…then Principle 13 expects governance around those inputs. The common failure mode is “report-based controls” where the report can be edited, filtered incorrectly, generated from the wrong scope, or pulled inconsistently, with no validation steps and no retained proof.
Who it applies to (entity and operational context)
In-scope entities (SOC 2 context)
This applies to service organizations preparing for or maintaining a SOC 2 report under the AICPA Trust Services Criteria 1. Practically, that includes SaaS, cloud service providers, managed service providers, and any company delivering services where customers rely on your controls.
In-scope functions and teams
You will touch multiple owners because “information” is generated across the org:
- Security (SIEM, EDR, vuln management, alerting, access tooling)
- IT and Identity (IdP reports, joiner/mover/leaver records, device inventories)
- Engineering/SRE (change management evidence, deployment logs, monitoring)
- Compliance/GRC (risk registers, policy attestations, exception tracking)
- People/HR (security training completion exports)
- Third-party management/procurement (due diligence evidence, contract inventories)
What you actually need to do (step-by-step)
Step 1: Identify “control-supporting information” (build the inventory)
Create an inventory of information assets that are direct inputs/outputs to controls. Keep it scoped; do not try to inventory every dataset in the company.
Minimum fields to capture
- Control(s) supported (e.g., “quarterly access review”)
- Information name (e.g., “Okta user export,” “AWS Config compliance report”)
- System of record (tool + environment)
- Owner (person/team responsible for correctness)
- Generation method (manual export, automated job, API pull)
- Frequency/trigger (on-demand, weekly, monthly, per change)
- Storage location (where evidence is retained)
- Known transformations (filters, pivots, joins, scripts)
Operator tip: if a control narrative contains the word “report,” “export,” “dashboard,” “log,” or “ticket,” you almost always have a Principle 13 dependency to document.
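The minimum fields above map naturally to a simple record type. The following is an illustrative Python sketch, not a prescribed schema; the field names and the example entry are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class InfoAsset:
    """One control-supporting information asset in the inventory."""
    controls_supported: list  # e.g., ["quarterly access review"]
    name: str                 # e.g., "Okta user export"
    system_of_record: str     # tool + environment
    owner: str                # person/team accountable for correctness
    generation_method: str    # manual export, automated job, API pull
    frequency: str            # on-demand, weekly, monthly, per change
    storage_location: str     # where evidence is retained
    transformations: list = field(default_factory=list)  # filters, pivots, scripts

# Hypothetical example entry
okta_export = InfoAsset(
    controls_supported=["quarterly access review"],
    name="Okta user export",
    system_of_record="Okta (production tenant)",
    owner="IT Identity team",
    generation_method="API pull",
    frequency="quarterly",
    storage_location="GRC evidence repository /access-reviews/",
    transformations=["filter to active users in in-scope apps"],
)
```

Whether this lives in a GRC tool, a spreadsheet, or version-controlled YAML matters less than keeping the fields consistent across assets.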
Step 2: Define “quality” criteria per information asset
For each inventory item, define quality criteria that match how it is used. Keep the criteria practical and testable.
A workable set:
- Accuracy: data reflects the underlying system correctly.
- Completeness: includes all in-scope populations (all prod accounts, all employees, all endpoints).
- Timeliness: reflects the required period and is refreshed when needed.
- Integrity: protected from unauthorized modification.
- Traceability: you can show where it came from and how it was produced.
Document these criteria in an “Information Quality Standard” or attach them to each inventory record.
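One lightweight way to attach testable criteria to each inventory record is a small registry keyed by asset name. A hedged Python sketch follows; the criteria text is purely illustrative:

```python
# Hypothetical quality criteria attached to one inventory record.
QUALITY_CRITERIA = {
    "Okta user export": {
        "accuracy": "spot-check 3 users against the Okta admin console",
        "completeness": "row count reconciles to HR in-scope headcount",
        "timeliness": "export generated within 5 days of quarter end",
        "integrity": "stored in access-controlled evidence folder; digest recorded",
        "traceability": "API query parameters saved alongside the export",
    },
}

def criteria_for(asset_name: str) -> dict:
    """Return documented quality criteria for an asset; fail loudly if undefined,
    so an asset cannot silently enter control operation without criteria."""
    if asset_name not in QUALITY_CRITERIA:
        raise KeyError(f"No quality criteria documented for {asset_name!r}")
    return QUALITY_CRITERIA[asset_name]
```

The fail-loud lookup is the point: every asset in the inventory should have criteria, and a missing entry should surface as an error rather than a silent gap.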
Step 3: Implement report integrity controls (the audit hot spot)
Where information is a report or export, implement controls that make it defensible:
Design checklist (choose what fits):
- Source is authoritative (system of record documented).
- Role-based access limits who can generate and edit outputs.
- Where possible, generate reports in a way that is hard to tamper with (read-only views, system-generated PDFs, immutable exports, or access-controlled storage).
- If you transform data (scripts/spreadsheets), control the transformation:
- store the script in version control,
- restrict edit access,
- document parameters and filters,
- keep run logs or screenshots.
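For the tamper-resistance item, one simple and widely used technique is to record a cryptographic digest of each export at generation time; if the retained copy’s digest later differs, the file has changed. A minimal sketch using only the Python standard library:

```python
import hashlib
from pathlib import Path

def fingerprint_export(path: str) -> str:
    """Compute a SHA-256 digest of a generated export so the retained
    copy can later be compared against the original (tamper evidence)."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()
```

Record the digest in the evidence metadata (ticket comment, inventory record) at generation time, not later, so the baseline itself is trustworthy.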
Validation checklist (lightweight but real):
- Reconcile totals to a known population (example: user export count reconciles to HR headcount logic for in-scope users).
- Check time window (example: “last 30 days” vs “audit month”).
- Spot check a sample of entries back to source records (example: 3 terminated users show disabled in IdP).
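The reconciliation and time-window checks above are easy to script. A minimal Python sketch, assuming you can extract an ID set from the export and an authoritative in-scope population (e.g., from HR):

```python
from datetime import date

def reconcile_population(export_ids: set, in_scope_ids: set) -> list:
    """Completeness check: return in-scope IDs missing from the export.
    An empty list means the export covers the full population."""
    return sorted(in_scope_ids - export_ids)

def covers_window(export_start: date, export_end: date,
                  period_start: date, period_end: date) -> bool:
    """Time-window check: does the export fully cover the required period?"""
    return export_start <= period_start and export_end >= period_end
```

Retain the check’s output (even “0 missing”) with the report; the retained result is what turns a one-off check into audit evidence.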
Step 4: Embed the information into control operation (prove it’s used)
Auditors look for evidence that the information is not just produced, but used in the control.
For each control, ensure the operating evidence shows:
- the report/log/export identifier (name, link, timestamp),
- who reviewed it,
- what they looked for (criteria),
- the outcome (no issues / issues logged),
- follow-up tickets for exceptions.
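These fields can be enforced with a small helper that refuses to emit an incomplete review record. An illustrative Python sketch; the field names are assumptions, not a standard:

```python
from datetime import datetime, timezone

def build_review_record(report_id: str, reviewer: str, criteria: list,
                        outcome: str, exception_tickets: list) -> dict:
    """Assemble the minimum fields that show a report was actually used
    in control operation, not just generated."""
    if outcome not in {"no issues", "issues logged"}:
        raise ValueError("outcome must be 'no issues' or 'issues logged'")
    if outcome == "issues logged" and not exception_tickets:
        raise ValueError("issues logged but no exception tickets referenced")
    return {
        "report_id": report_id,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "criteria_checked": criteria,
        "outcome": outcome,
        "exception_tickets": exception_tickets,
    }
```

The second guard encodes a common auditor expectation: an “issues logged” outcome with no linked exception ticket is an incomplete evidence trail.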
Step 5: Retain evidence consistently (make it repeatable)
Create an evidence retention pattern that is consistent across teams:
- a single evidence repository (GRC tool, controlled drive, ticketing system),
- naming conventions,
- minimum metadata (control, date, owner),
- access controls.
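A naming convention is only useful if it is checked. Assuming a hypothetical convention of `<control-id>_<YYYY-MM-DD>_<owner>.<ext>`, a validator sketch:

```python
import re

# Hypothetical naming convention: <control-id>_<YYYY-MM-DD>_<owner>.<ext>
EVIDENCE_NAME = re.compile(
    r"^(?P<control>[a-z0-9-]+)_(?P<date>\d{4}-\d{2}-\d{2})_(?P<owner>[a-z]+)\.[a-z0-9]+$"
)

def valid_evidence_name(filename: str) -> bool:
    """Check that a file name carries the minimum metadata (control, date, owner)."""
    return EVIDENCE_NAME.fullmatch(filename) is not None
```

Running a check like this over the evidence repository each quarter catches “final_v2.xlsx”-style files before fieldwork does.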
Daydream (and similar GRC workflows) becomes useful here because it can tie each control to its information dependencies, schedule evidence requests, and keep a clean audit trail without teams hunting through shared drives right before fieldwork.
Step 6: Test it like an auditor (before the auditor does)
Run a “report integrity walkthrough” for high-risk controls:
- Can a new team member regenerate the report using your documented steps?
- Do the filters match the SOC 2 scope?
- Is there a tamper-resistant trail from source to retained evidence?
- Do screenshots show enough context (URL/tool name, date/time, scope filters)?
Required evidence and artifacts to retain
Maintain artifacts that prove both design and operation.
Design-time artifacts
- Information inventory (control-supporting information list)
- Data/report quality criteria (standard or per-asset)
- Procedures for generating key reports/exports (including filters/parameters)
- Access control description for systems producing key evidence (who can run/alter reports)
- Change control for scripts/queries used to produce compliance evidence (if applicable)
Operating evidence (examples)
- Dated exports/reports used in control execution
- Screenshots showing query parameters, scope, and timestamps
- Review sign-offs (ticket approvals, documented reviewer notes, attestations)
- Exception tickets and remediation evidence
- System logs demonstrating automated report generation (where used)
Common exam/audit questions and hangups
Auditors commonly ask:
- “Show me how you know this report is complete for the SOC 2 scope.”
- “Who can modify this spreadsheet/export after it’s generated?”
- “Walk me through how this metric is calculated.”
- “How do you know this log source covers all production systems?”
- “Where is the evidence retained, and can you show the audit-period instances?”
Hangups that slow audits:
- Reports produced manually with no documented steps.
- Screenshots without visible timestamps, scope filters, or system context.
- Multiple “versions” of the same report in different folders.
- Evidence that shows a report existed, but not that someone reviewed it.
Frequent implementation mistakes (and how to avoid them)
- Inventory too broad. Fix: inventory only information that supports controls in-scope for SOC 2 1.
- Treating spreadsheets as inherently acceptable evidence. Fix: if spreadsheets are required, lock them down, document transformations, and retain source exports alongside the spreadsheet.
- No ownership. Fix: every information asset needs an accountable owner who can explain source, logic, and quality checks.
- No linkage between “information” and “control.” Fix: each key report/log should map to at least one control procedure and appear in the evidence package.
- No proof of completeness. Fix: add a reconciliation step (population totals, scope lists, asset inventories) and retain that check.
Enforcement context and risk implications
No public enforcement cases were provided in the supplied source catalog for this requirement. In practice, the risk is operational and audit-driven: weak information quality causes SOC 2 exceptions, customer trust issues during security reviews, and control failures that cascade into incident response and contractual problems. Treat Principle 13 as a reliability requirement for your control system, not a documentation exercise 1.
Practical 30/60/90-day execution plan
Days 1–30: Scope, inventory, and quick wins
- Confirm SOC 2 scope boundaries (systems, environments, products).
- Build the control-supporting information inventory for the highest-risk controls (access, change, vulnerability, logging/monitoring).
- Assign owners and storage locations.
- Standardize evidence capture (what every screenshot/export must show).
- Patch obvious gaps: add timestamps, add reviewer notes, centralize retention.
Days 31–60: Quality criteria and integrity checks
- Define quality criteria per information asset (accuracy, completeness, timeliness, integrity, traceability).
- Add at least one validation method per key report/export (reconciliation or sampling).
- Document report generation procedures and parameters.
- Restrict access to evidence outputs and shared folders.
- Put scripts/queries used for compliance evidence under change control where feasible.
Days 61–90: Operationalize and test
- Run an internal “mock selection” of evidence across the audit period and re-perform the report generation steps.
- Confirm each control’s evidence package proves the report was used (reviewer, date, outcome, exceptions).
- Train control owners on the evidence standard and common pitfalls.
- If using Daydream, configure recurring evidence workflows tied to the inventory so the process runs without manual chasing.
Frequently Asked Questions
What counts as “information” for COSO Principle 13 in SOC 2?
Any input or output that a control depends on: system logs, dashboards, exports, tickets, alerts, scans, spreadsheets, and third-party assurance documents 1. If it shows up in your control procedure, treat it as in-scope information.
Do we need formal data quality tooling to satisfy this requirement?
No. You need defined quality criteria and repeatable checks that match the risk and the control’s reliance on the information 1. Simple reconciliations, sampling, access restrictions, and documented report steps often satisfy auditors.
How do we prove a report is complete for the audit scope?
Keep the scope definition close to the report: filters, included accounts/environments, and a reconciliation to a known population (for example, an asset inventory or authoritative account list). Retain the reconciliation evidence with the report.
Are screenshots acceptable evidence?
Usually, yes, if they show system context, date/time, scope parameters, and the result used in the control. Screenshots that only show a partial table without context often get challenged.
What if our control uses a spreadsheet that someone can edit?
Keep the original source export, restrict who can edit the spreadsheet, and document the transformation logic (formulas, pivots, filters). If possible, move transformations into scripted queries with version control for cleaner traceability.
How does Daydream help with Principle 13 specifically?
Principle 13 fails most often on missing or inconsistent evidence. Daydream can track each control’s information dependencies, schedule evidence collection, and keep an audit trail that shows the information was generated and used consistently across the audit period.
Related compliance topics
- 2025 SEC Marketing Rule Examination Focus Areas
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control management
Footnotes
1. AICPA Trust Services Criteria (2017).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream