Principle 13: Uses relevant, high-quality information

To meet Principle 13 (“uses relevant, high-quality information”), you must define what “decision-grade” information means for each key control, prove that data is complete, accurate, and timely, and show it reaches the right people in a usable form. Operationalize this by mapping controls to information inputs, setting data quality checks and ownership, and retaining evidence that the information actually supported control performance.

Key takeaways:

  • Tie each key control to specific information inputs, sources, and owners, not generic “reports.”
  • Build lightweight data quality standards (accuracy, completeness, timeliness, relevance) with documented checks and exceptions.
  • Keep audit-ready evidence: data lineage, control runbooks, quality test results, and issue remediation.

Principle 13 is where many internal control programs fail quietly: controls look fine on paper, but the information feeding them is stale, incomplete, or not relevant to the decision the control owner is supposed to make. Auditors and examiners rarely accept “the dashboard exists” as proof. They want to see that the right information exists, that it’s trustworthy, that it’s produced consistently, and that control owners actually use it to perform the control.

This requirement page is written for a Compliance Officer, CCO, or GRC lead who needs to implement Principle 13 quickly and defensibly. It focuses on what you need to define, who needs to own it, what workflows must exist, and what evidence you should retain so you can pass a walkthrough without scrambling. COSO frames this as using “relevant, high-quality information” to support internal control functioning [1]. The practical goal is simple: decision-makers get the right information at the right time, and you can prove it.

Regulatory text

Framework excerpt: “COSO internal control principle 13 implementation expectation.”
Principle summary: “Principle 13: Uses relevant, high-quality information” [2].

Operator interpretation of the text: you must implement internal controls so they are supported by information that is:

  • Relevant to the specific control objective (not “nice to have” metrics).
  • High quality enough for control decisions (complete, accurate, timely, consistent, and appropriately granular).
  • Available and understandable to the control performer and reviewer, in a form they can act on.

Auditors typically test this through design evidence (how you defined information needs and quality standards) and operating evidence (examples showing the information was produced, checked, and used for actual control execution).

Plain-English interpretation (what Principle 13 really demands)

Principle 13 requires you to treat information as a controlled input to your control environment. If a control depends on a report, a query, a ticket queue, or a third-party attestation, then that information becomes part of the control. You should be able to answer, without improvising:

  • What information does the control require?
  • Where does it come from (system/source of record)?
  • Who owns it and who certifies it?
  • How do you know it is complete and accurate?
  • What happens when it is wrong, late, or missing?

If you cannot answer those questions consistently, you have an “information control gap,” even if the control narrative reads well.

Who it applies to (entity and operational context)

Applies to: organizations adopting COSO Internal Control–Integrated Framework concepts for financial reporting controls, operational controls, compliance controls, or enterprise risk governance [1].

Operational contexts where Principle 13 is routinely tested:

  • SOX / ICFR environments: key reports supporting controls (KRCs), IPE (information produced by the entity), EUCs (end-user computing) such as spreadsheets.
  • Compliance monitoring: alerts, surveillance reports, QA sampling populations, case management data.
  • Third-party risk management: due diligence artifacts, SOC reports, SLAs, performance metrics, incident notifications.
  • Security/GRC reporting: vulnerability metrics, access reviews, logging coverage, incident response timelines.

If your controls rely on information generated outside your organization (for example, a third party’s SOC report), Principle 13 still applies: you must validate relevance and quality for your control objective, then retain evidence that validation occurred.

What you actually need to do (step-by-step)

Use the sequence below to stand up a workable Principle 13 implementation without boiling the ocean.

Step 1: Inventory “control-critical information”

Start with your control library and identify information dependencies for each key control:

  • Inputs (reports, extracts, dashboards, tickets, attestations, emails)
  • Systems of record (ERP, HRIS, GRC tool, ITSM, data warehouse)
  • Manual transformations (filters, pivot tables, copy/paste steps)

Output: an “Information Dependency Register” mapping control → information inputs → owner → system/source.

Practical tip: if a control performer says “I look at the dashboard,” require the exact dashboard name, the field list, the filter criteria, and the refresh cadence. Ambiguity is where quality fails.
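In practice, the Information Dependency Register can start as one structured record per control input. The schema below is a sketch of one workable shape; every name in it (control ID, report name, systems, owner) is hypothetical, not taken from any framework:

```python
from dataclasses import dataclass, field

@dataclass
class InformationDependency:
    """One row of an Information Dependency Register (illustrative schema)."""
    control_id: str            # e.g. a hypothetical "AC-07" access recert control
    input_name: str            # the exact report/dashboard name, not "the dashboard"
    source_system: str         # system(s) of record the input is drawn from
    owner: str                 # accountable information owner
    required_fields: list[str] = field(default_factory=list)
    filters: str = ""          # documented filter criteria
    refresh_cadence: str = ""  # e.g. "weekly, Monday 06:00 UTC"

register = [
    InformationDependency(
        control_id="AC-07",
        input_name="Entitlement Review Extract v3",
        source_system="HRIS + IAM warehouse",
        owner="Identity Governance Lead",
        required_fields=["user_id", "role", "entitlement", "manager",
                         "last_login", "termination_status"],
        filters="status = active OR terminated_in_period",
        refresh_cadence="weekly, Monday 06:00 UTC",
    ),
]
```

Even a spreadsheet with these columns works; the point is that every field a walkthrough will ask about has a named home.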

Step 2: Define “relevant” for each control (control-by-control)

Relevance is not a generic attribute. Document:

  • Control objective (what decision is being made)
  • Minimum data elements needed to make that decision
  • Required granularity (transaction-level vs summary)
  • Required population coverage (full population vs sample vs thresholds)

Output: a short “Information Requirements” section inside each control procedure/runbook.

Example: For a user access recertification control, relevance might require fields for user ID, role, entitlement, manager, last login, and termination status. A report missing termination status is not relevant enough to support the decision.
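A relevance check like this can be automated as a required-field gate before the control runs. A minimal sketch, using the hypothetical field list from the example above:

```python
# Minimum data elements for the (hypothetical) access recertification control.
REQUIRED_FIELDS = {"user_id", "role", "entitlement", "manager",
                   "last_login", "termination_status"}

def is_relevant(report_columns) -> tuple[bool, set]:
    """Return (ok, missing_fields). A report missing any required field
    is not relevant enough to support the recertification decision."""
    missing = REQUIRED_FIELDS - set(report_columns)
    return (not missing, missing)

# A report without termination status fails the gate:
ok, missing = is_relevant(["user_id", "role", "entitlement",
                           "manager", "last_login"])
# → ok is False; missing == {"termination_status"}
```

Run the gate before the control owner starts the review, and retain its result as part of the evidence packet.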

Step 3: Set information quality criteria and acceptance thresholds

Define quality dimensions that matter operationally:

  • Completeness: population coverage and missing-field tolerance.
  • Accuracy: reconciliation to a source of record or independent check.
  • Timeliness: data as-of date aligned to the control frequency.
  • Consistency: stable definitions (what “active user” means) and version control.
  • Integrity: controls over changes to report logic and access to modify it.

Output: an “IPE/Report Quality Standard” and per-report validation steps.

Keep it lightweight. The goal is repeatability and defensibility.
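These dimensions can be encoded as per-report acceptance thresholds so the checks are repeatable rather than ad hoc. The sketch below is illustrative; the threshold values and the row/field conventions are assumptions for one hypothetical report, not COSO requirements:

```python
from datetime import date

# Illustrative acceptance thresholds for one control-critical report.
QUALITY_STANDARD = {
    "max_missing_field_pct": 0.5,   # completeness: tolerated % of rows with blanks
    "max_count_variance_pct": 0.1,  # accuracy: report count vs source-of-record count
    "max_staleness_days": 7,        # timeliness: as-of date vs control run date
}

def evaluate_quality(report_rows, source_count, as_of, run_date,
                     standard=QUALITY_STANDARD):
    """Return pass/fail per quality dimension, plus the measured values
    (report_rows is a list of dicts, one per record)."""
    total = len(report_rows)
    missing = sum(1 for r in report_rows
                  if any(v in (None, "") for v in r.values()))
    missing_pct = 100.0 * missing / total if total else 100.0
    variance_pct = (100.0 * abs(total - source_count) / source_count
                    if source_count else 100.0)
    staleness = (run_date - as_of).days
    return {
        "completeness": missing_pct <= standard["max_missing_field_pct"],
        "accuracy": variance_pct <= standard["max_count_variance_pct"],
        "timeliness": staleness <= standard["max_staleness_days"],
        "measured": {"missing_pct": missing_pct,
                     "variance_pct": variance_pct,
                     "staleness_days": staleness},
    }
```

Storing the returned dict alongside the report output gives you the "measured vs threshold" evidence auditors ask for.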

Step 4: Implement repeatable validation checks (design + operate)

For each control-critical report or dataset, implement a validation routine proportional to risk:

  • One-time (design-time) validation: confirm logic, data sources, joins, filters, parameters, and field definitions.
  • Recurring validation: reconcile record counts, totals, or key fields; verify refresh date; confirm parameter settings; spot-check samples against source.

Output: a “Report Validation Checklist” with sign-off and exception handling.

If you have many reports, tier them:

  • Tier 1: supports key controls or high-risk decisions → stronger validation and change control.
  • Tier 2: supports monitoring only → basic checks.
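A recurring validation run can emit its own evidence record, so the checklist and the sign-off artifact are produced in one step. A minimal sketch, assuming count reconciliation, refresh-date, and parameter checks as the Tier 1 baseline (check names and fields are illustrative):

```python
from datetime import date

def run_recurring_validation(report_name, record_count, source_count,
                             refresh_date, expected_params, actual_params,
                             performed_by, run_date=None):
    """Produce an evidence record for one recurring validation run.
    Any failed check becomes an exception that must be dispositioned
    before the control owner relies on the report."""
    run_date = run_date or date.today()
    checks = {
        "record_count_reconciles": record_count == source_count,
        "refresh_current": refresh_date == run_date,
        "parameters_match": expected_params == actual_params,
    }
    return {
        "report": report_name,
        "run_date": run_date.isoformat(),
        "checks": checks,
        "exceptions": [name for name, ok in checks.items() if not ok],
        "performed_by": performed_by,
    }
```

A non-empty `exceptions` list should route into your exception workflow (impact assessment, remediation ticket, retest) rather than being silently overridden.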

Step 5: Put ownership and change control around information

Assign and document:

  • Information owner (accountable for definition and quality)
  • Report developer/maintainer (responsible for logic)
  • Control owner (responsible for using the information correctly)
  • Approver for report logic changes

Implement minimum change controls:

  • Versioning for report logic and EUCs.
  • Access controls on who can edit queries/spreadsheets.
  • Change log entries tied to tickets/approvals.

Output: RACI + change log evidence.

Step 6: Embed information checks into the control execution itself

Control procedures should explicitly require:

  • “Confirm report as-of date”
  • “Confirm parameters set to X”
  • “Perform completeness check (record count)”
  • “Retain report output and validation evidence”

Output: updated control narratives and operating evidence packets.

Step 7: Centralize evidence for auditability (don’t chase screenshots later)

Store artifacts in a consistent location with naming conventions. Daydream can help by structuring the evidence request list per control, mapping each artifact back to Principle 13, and keeping a clean audit trail of approvals and exceptions based on your control ownership model [2].
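A naming convention can be as simple as a deterministic path builder that every control owner uses. The folder layout, control ID, and file extension below are assumptions for illustration, not a prescribed scheme:

```python
def evidence_path(control_id, period, artifact, version=1, ext="pdf"):
    """Illustrative convention: one folder per control per period,
    versioned artifact names so reruns never overwrite evidence."""
    return f"evidence/{control_id}/{period}/{artifact}_v{version}.{ext}"

evidence_path("AC-07", "2024-Q1", "validation-checklist")
# → "evidence/AC-07/2024-Q1/validation-checklist_v1.pdf"
```

The specific scheme matters less than the fact that it is deterministic: anyone answering an audit request can compute where an artifact lives instead of searching for it.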

Required evidence and artifacts to retain

Retain evidence that shows both design adequacy and operating effectiveness of information quality.

Core artifacts (typical minimum set):

  • Information Dependency Register (control → inputs → systems → owners)
  • Control runbooks/procedures with embedded information checks
  • Report/data dictionary: definitions, fields, logic summary, refresh cadence
  • Initial validation package for each control-critical report (logic review, source confirmation)
  • Recurring validation checklists with sign-offs
  • Change logs and approvals for report logic/EUC updates
  • Exception records: identified data quality issues, impact assessment, remediation, and retest
  • Access control evidence for report editing rights (where applicable)

What auditors commonly accept: a clean, repeatable packet that shows “this is the report, this is how we know it’s correct enough, and here’s this period’s control execution using it.”

Common exam/audit questions and hangups

Expect these questions in walkthroughs and testing:

  • “Show me how you know this report is complete and accurate.”
  • “Who can change the logic of this report? How would you know?”
  • “What is the source of record, and how does the report reconcile to it?”
  • “How do you validate parameters/filters are correct each time?”
  • “Where is the evidence that the control owner reviewed the information and acted on exceptions?”
  • “If the report was wrong last month, how did you assess impact on prior control conclusions?”

Hangup patterns:

  • “We validated it once years ago.” Auditors often want evidence of ongoing validation or a rationale for why the original validation remains sufficient.
  • “It’s from System X, so it’s accurate.” Source systems still have data quality issues; you need at least basic checks.

Frequent implementation mistakes and how to avoid them

  1. Mistake: treating dashboards as self-evidently correct.
    Avoidance: document field definitions, refresh cadence, and validation checks. Retain evidence of each period’s output and checks.

  2. Mistake: control narratives don’t mention the information checks.
    Avoidance: add explicit steps for as-of date, parameters, completeness, and exception handling into the control procedure.

  3. Mistake: spreadsheet transformations with no controls.
    Avoidance: register EUCs, version them, restrict edit access, and document tie-outs to the source report.

  4. Mistake: unclear ownership (“IT owns data”).
    Avoidance: assign an information owner per dataset/report and a separate control owner per control. Document accountability.

  5. Mistake: no exception workflow.
    Avoidance: require impact assessment (which controls/periods), remediation ticketing, and retest evidence.

Risk implications (what goes wrong if you miss this)

Operationally, Principle 13 gaps create false assurance: controls “pass” because the inputs are wrong, incomplete, or not aligned to the control objective. In SOX contexts, weak IPE controls often lead to control failures during external audit because the auditor cannot rely on the report. In compliance monitoring, weak input quality produces missed alerts, biased samples, and unreliable KRIs, which can cascade into late issue detection.

Practical execution plan (30/60/90-day)

First 30 days (stabilize and scope)

  • Identify your key controls and list their information inputs.
  • Stand up the Information Dependency Register.
  • Pick the highest-risk reports/EUCs and draft validation checklists.
  • Update control procedures to include minimum information checks and evidence retention steps.

By 60 days (implement controls and produce operating evidence)

  • Complete initial validations for the highest-risk reports.
  • Implement recurring validation sign-offs for at least one operating cycle.
  • Add change control steps for report logic and EUCs (tickets, approvals, versioning).
  • Train control owners on the “what to save” evidence packet.

By 90 days (expand coverage and harden audit readiness)

  • Extend the approach to remaining control-critical information sources.
  • Run an internal walkthrough: ask audit-style questions and confirm evidence is retrievable.
  • Review exceptions found to date; confirm remediation and retesting artifacts exist.
  • Move evidence collection into a consistent workflow (Daydream can centralize request lists, ownership, and evidence mapping to Principle 13 for audit response).

Frequently Asked Questions

Does Principle 13 require formal data governance?

It requires governance over information that supports internal control functioning [1]. You can meet it with targeted ownership, definitions, validation checks, and change controls around control-critical information, without building an enterprise-wide program first.

What counts as “high-quality” information in practice?

Information is “high-quality” when it is complete, accurate, timely, consistent, and fit for the specific control decision. Define those attributes per report/dataset, then retain evidence that checks were performed and exceptions were handled.

If a report comes directly from a system of record, do we still need validation?

Yes. Auditors often expect evidence that the report output is complete and accurate for the purpose of the control, including correct parameters and refresh timing. Keep the validation proportional to risk.

How do we handle EUCs (spreadsheets) used in controls?

Treat the spreadsheet as part of the control. Register it, version it, restrict edit access, document the transformation steps, and perform tie-outs back to the source data so you can prove integrity.
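The tie-out back to source can be automated whenever both the EUC output and its source report can be exported. A minimal sketch using CSV exports and exact decimal totals (the file layout and `amount` column name are assumptions):

```python
import csv
from decimal import Decimal

def tie_out_total(euc_csv_path, source_csv_path, amount_col="amount"):
    """Tie a spreadsheet-derived extract back to its source report:
    totals must match exactly, or the EUC cannot support the control."""
    def total(path):
        with open(path, newline="") as f:
            # Decimal avoids float rounding noise in financial totals.
            return sum(Decimal(row[amount_col]) for row in csv.DictReader(f))
    euc_total, source_total = total(euc_csv_path), total(source_csv_path)
    return {"euc_total": euc_total, "source_total": source_total,
            "tied_out": euc_total == source_total}
```

Retain the returned record (or its printed form) with the period's evidence packet so the tie-out is provable, not just asserted.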

What evidence is most commonly missing during audits?

Teams often miss (1) initial report logic validation, (2) proof that parameters/filters were correct each run, and (3) change control evidence when report logic or spreadsheets change. Build those into the control procedure so evidence is created by default.

Can we centralize this in a GRC tool?

Yes, if the tool can map controls to information inputs, assign owners, and store validation and operating evidence in a consistent structure. Daydream is useful when you need Principle 13 mapped evidence expectations and a repeatable audit response workflow aligned to COSO [3].

Footnotes

[1] COSO Internal Control guidance page.

[2] COSO Internal Control guidance page; Weaver summary of COSO 17 principles.

[3] Weaver summary of COSO 17 principles.

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream