Relevant Information Use

To meet the relevant information use requirement, you must define what information your internal controls need, collect it from reliable internal and external sources, convert it into “quality” information (accurate, complete, timely, and usable), and prove it is actually used to run and test controls. This is an operational discipline, not a documentation exercise. (COSO IC-IF (2013))

Key takeaways:

  • Define control-specific information requirements, not generic “data needs.” (COSO IC-IF (2013))
  • Put ownership, quality checks, and traceability around key reports, metrics, and data feeds that drive controls. (COSO IC-IF (2013))
  • Retain evidence that information was used in control performance, oversight, issue management, and decision-making. (COSO IC-IF (2013))

“Relevant information use” is where internal control programs succeed or quietly fail. A control can be well-designed on paper and still break in practice if the inputs are wrong, late, incomplete, or ignored. COSO’s Principle 13 sets a simple expectation: the organization obtains or generates and uses relevant, quality information to support internal control. (COSO IC-IF (2013))

For a Compliance Officer, CCO, or GRC lead, the operational goal is to build a repeatable method to (1) identify the information each key control depends on, (2) ensure that information is fit for purpose, and (3) demonstrate consistent use of that information in control execution and oversight. This usually centers on a manageable set of “control-driving information assets”: key reports, exception dashboards, reconciliations, case queues, third-party monitoring outputs, access logs, and risk and compliance metrics.

If you operationalize Principle 13 correctly, audits get easier because you can trace: control objective → required information → source system → transformation → review/approval → action taken → retained evidence. That traceability is the difference between a control that “exists” and a control that works. (COSO IC-IF (2013))

Regulatory text

COSO Principle 13 – Information and Communication: “The organization obtains or generates and uses relevant, quality information to support the functioning of internal control.” (COSO IC-IF (2013))

Operator interpretation (what you must do):

  • Obtain or generate information needed to run internal controls (from internal systems and external sources). (COSO IC-IF (2013))
  • Ensure information quality so it is accurate enough, complete enough, and timely enough for the control’s purpose. (COSO IC-IF (2013))
  • Use the information as an input to control activities, supervision, and remediation; it cannot just be available in a system. (COSO IC-IF (2013))

Plain-English requirement interpretation (what “relevant, quality” means in practice)

For control execution, “relevant” means the information:

  • Maps to a specific control objective (for example, “detect policy exceptions” or “prevent unauthorized access”).
  • Covers the correct population (the right accounts, transactions, third parties, products, geographies, time period).
  • Has enough detail to enable a decision (pass/fail, exception handling, escalation).

“Quality” means the information is:

  • Accurate: values reflect reality; calculations are correct. (COSO IC-IF (2013))
  • Complete: no missing records that would change control conclusions. (COSO IC-IF (2013))
  • Timely: available when the control must be performed and when management needs it. (COSO IC-IF (2013))
  • Understandable/usable: the reviewer can interpret it and knows what actions to take. (COSO IC-IF (2013))

Who it applies to (entity and operational context)

Applies to: organizations implementing internal control consistent with COSO, including teams supporting management’s system of internal control and internal auditors assessing it. (COSO IC-IF (2013))

Operationally, it applies wherever controls depend on information inputs, including:

  • Financial reporting controls: reconciliations, journal entry monitoring, close reporting packages.
  • Compliance controls: surveillance reports, training completion, monitoring and testing results, regulatory change logs.
  • Security/IT controls: access review populations, privileged access logs, vulnerability scan outputs.
  • Third-party risk management: due diligence results, ongoing monitoring alerts, SLA/KPI reports, incident notifications from third parties.
  • Operational controls: QA defect reports, inventory variance, customer complaints and root cause metrics.

What you actually need to do (step-by-step)

1) Build an “information requirements” view of your key controls

Create (or update) a register for key controls with an Information Required field that is specific enough to test. For each key control, capture:

  • Control name and objective
  • Decision the control owner must make (approve, reject, investigate, escalate)
  • Inputs required (reports, logs, dashboards, data extracts)
  • Minimum necessary attributes (population, filters, data fields, time window)
  • Output evidence expected (signed report, ticket, approval record)

Practical example

  • Control: Quarterly third-party access recertification
  • Information required: list of active third-party accounts in scope, last login, privilege level, manager, system owner, termination status from HR feed, and exceptions queue for non-responses.
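The register row above can be sketched as a simple data structure. This is a minimal illustration, not a prescribed schema; the `ControlInfoRequirement` type and its field names are assumptions introduced here for clarity:

```python
from dataclasses import dataclass

@dataclass
class ControlInfoRequirement:
    """One row of a control-information register (illustrative fields only)."""
    control_name: str
    objective: str
    decision: str                 # approve / reject / investigate / escalate
    inputs_required: list[str]    # reports, logs, dashboards, extracts
    minimum_attributes: list[str] # population, filters, fields, time window
    output_evidence: list[str]    # signed report, ticket, approval record

# The quarterly third-party access recertification example from the text
recert = ControlInfoRequirement(
    control_name="Quarterly third-party access recertification",
    objective="Confirm or remove third-party access each quarter",
    decision="approve, revoke, or escalate each account",
    inputs_required=[
        "active third-party accounts in scope",
        "last login", "privilege level", "manager", "system owner",
        "termination status from HR feed",
        "exceptions queue for non-responses",
    ],
    minimum_attributes=["in-scope systems", "quarterly time window"],
    output_evidence=["signed recertification report", "revocation tickets"],
)
```

A register held this way is specific enough to test: a reviewer can check each listed input against what the control owner actually received.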

2) Inventory “control-driving” information assets and assign accountable owners

Identify the reports and data feeds that drive control performance. For each information asset, document:

  • Business owner (control owner) and technical owner (system/report owner)
  • Source system(s) of record
  • Transformations (joins, filters, calculations, manual edits)
  • Distribution (who receives it, where it’s stored)
  • Failure modes (late feeds, missing data, access issues)

This is where many teams find “shadow reporting” (spreadsheet extracts and manual merges). Don’t ban it by default; govern it.
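One way to make the inventory testable is to hold each information asset as a record and mechanically flag entries with no accountable technical owner or with manual edits in the pipeline. The record shape and the `needs_governance` rule below are a sketch, not a standard:

```python
# Illustrative key-report inventory; field names are assumptions.
inventory = [
    {"name": "Privileged access extract", "business_owner": "IAM lead",
     "technical_owner": "IAM engineering", "source": "IdP",
     "transformations": ["filter: privileged roles"], "manual_edits": False},
    {"name": "Vendor SLA merge", "business_owner": "TPRM analyst",
     "technical_owner": None, "source": "spreadsheet",
     "transformations": ["manual merge of vendor files"], "manual_edits": True},
]

def needs_governance(asset: dict) -> bool:
    """Flag shadow reporting: no technical owner, or manual edits in the flow."""
    return asset["technical_owner"] is None or asset["manual_edits"]

flagged = [a["name"] for a in inventory if needs_governance(a)]
```

Flagged assets are candidates for governance (ownership, versioning, change control) rather than an automatic ban.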

3) Define quality criteria and checks tied to control purpose

Quality checks should match the risk. Common control-relevant checks include:

  • Completeness checks: reconciliation to source totals; record counts; “in-scope population” validation.
  • Accuracy checks: spot checks to source records; formula validation; parameter checks (correct date range, correct business unit).
  • Timeliness checks: delivery SLAs; “as of” timestamps; escalation when late.
  • Integrity controls: access restrictions to report logic; change control for queries; versioning for critical spreadsheets.

Write these checks into the control procedure or into a supporting “key report” procedure. Keep the checks small, repeatable, and evidenceable. (COSO IC-IF (2013))
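The completeness, accuracy, and timeliness checks above can be kept small enough for a control owner to run and evidence each cycle. This is a minimal sketch assuming counts reconcile to source totals and the reviewer records a spot-check result; the function name and inputs are illustrative:

```python
from datetime import datetime

def run_quality_checks(source_count: int, report_count: int,
                       as_of: datetime, deadline: datetime,
                       sample_matches: bool) -> dict[str, bool]:
    """Fitness-for-purpose checks: completeness (record counts tie to source),
    timeliness (report 'as of' meets the control deadline), and a spot-check
    accuracy flag supplied by the reviewer."""
    return {
        "completeness": report_count == source_count,
        "timeliness": as_of <= deadline,
        "accuracy_spot_check": sample_matches,
    }

results = run_quality_checks(
    source_count=1200, report_count=1200,
    as_of=datetime(2024, 4, 3), deadline=datetime(2024, 4, 5),
    sample_matches=True,
)

# A control owner would only proceed when every check passes;
# a failed check becomes an exception with its own disposition.
fit_for_purpose = all(results.values())
```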

4) Prove “use,” not just availability

Auditors will ask, “How do you know this information was used to operate the control?” Build use into the workflow:

  • Require annotation of exceptions and disposition directly in the report, ticketing system, or GRC workflow.
  • Record who reviewed it, when, what they concluded, and what they did next.
  • Link the information to resulting actions: escalations, remediation tickets, approvals/denials.

Good evidence pattern: report generated → reviewer sign-off → exceptions logged → follow-up closure evidence.
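The evidence pattern above can be checked mechanically by comparing retained events against the required chain. The step names and event shape below are assumptions for illustration:

```python
# The four-step evidence chain from the text, as machine-checkable step names.
REQUIRED_CHAIN = ["report_generated", "reviewer_sign_off",
                  "exceptions_logged", "follow_up_closed"]

def missing_evidence(events: list[dict]) -> list[str]:
    """Return the chain steps for which no retained evidence event exists."""
    present = {e["step"] for e in events}
    return [step for step in REQUIRED_CHAIN if step not in present]

events = [
    {"step": "report_generated", "by": "system", "when": "2024-04-03"},
    {"step": "reviewer_sign_off", "by": "j.doe", "when": "2024-04-04"},
    {"step": "exceptions_logged", "by": "j.doe", "when": "2024-04-04"},
]

gaps = missing_evidence(events)  # follow-up closure evidence still outstanding
```

A non-empty `gaps` list is exactly what an auditor would find: the report exists and was reviewed, but operation is not fully proven until closure evidence is retained.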

5) Control access, retention, and traceability

Information used for controls should be:

  • Accessible to the right performers and reviewers
  • Protected from inappropriate edits (especially report logic and extract criteria)
  • Retained in line with your control testing and audit needs

Avoid storing critical control inputs only in personal drives or ephemeral tools.

6) Monitor and improve information quality as part of issue management

When information defects recur (late feeds, wrong populations, inconsistent definitions), treat them as control issues:

  • Log as issues with root cause
  • Assign ownership and remediation
  • Track to closure

This closes the loop that COSO expects: internal control improves based on information quality performance. (COSO IC-IF (2013))
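Turning recurring defects into tracked issues can be as simple as a count threshold over a defect log. The log shape, categories, and threshold below are illustrative assumptions:

```python
from collections import Counter

# Illustrative defect log: (information asset, defect type) per occurrence.
defect_log = [
    ("HR termination feed", "late delivery"),
    ("HR termination feed", "late delivery"),
    ("HR termination feed", "late delivery"),
    ("SLA report", "wrong population"),
]

def recurring_defects(log, threshold: int = 3):
    """Defects seen `threshold` or more times should be opened as control
    issues with root cause, an owner, and remediation tracked to closure."""
    counts = Counter(log)
    return [(asset, defect) for (asset, defect), n in counts.items()
            if n >= threshold]

issues_to_open = recurring_defects(defect_log)
```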

Required evidence and artifacts to retain

Maintain artifacts that let a reviewer reconstruct the control and its information inputs:

Core artifacts (most environments)

  • Information requirements mapping for key controls (control → inputs → sources)
  • Key report inventory (including system of record and owner)
  • Data lineage notes for critical inputs (high-level is acceptable if consistent)
  • Documented quality checks (what is checked, by whom, how often)
  • Samples of executed controls showing “use” (annotated reports, approvals, exceptions, tickets)
  • Access controls for report logic / data extracts (permissions evidence)
  • Change history for critical report definitions/queries/spreadsheets
  • Issue logs for recurring information-quality defects and remediation records

Third-party risk management artifacts (common)

  • External monitoring sources used (alerts, attestations, incident notifications)
  • Evidence of evaluation and action on third-party signals (triage notes, escalations, risk acceptance)

Common exam/audit questions and hangups

Expect questions like:

  • “Which reports are key to your control environment, and how do you know they are complete and accurate?” (COSO IC-IF (2013))
  • “Show me evidence that the reviewer used the report and resolved exceptions.”
  • “What changes were made to this report logic, and were they approved?”
  • “How do you validate externally sourced information used in controls?”
  • “If a data feed is late or wrong, what happens to the control? Do you have a fallback?”

Hangups that slow exams:

  • No clear definition of “key reports” versus convenience reporting.
  • Control owners cannot explain filters, joins, or report parameters.
  • Evidence proves the report was generated, not reviewed and acted upon.
  • Overreliance on spreadsheets without versioning, access control, or peer review.

Frequent implementation mistakes (and how to avoid them)

  1. Documenting data sources at a high level only
  • Fix: require control-by-control specificity (population, fields, time window).
  2. Treating “quality” as a generic data governance problem
  • Fix: implement “fitness for control purpose” checks that a control owner can perform and evidence.
  3. Relying on screenshots
  • Fix: retain the underlying report/export plus review notes and exception disposition.
  4. No ownership for key reports
  • Fix: assign a business owner and a technical owner; make them accountable for definition changes and access.
  5. Accepting external information without challenge
  • Fix: define acceptance criteria and sanity checks for third-party provided metrics, SOC reports, or attestations; record who reviewed and what they concluded.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Practically, the risk is operational and audit-facing: if you cannot show that relevant, quality information is used, auditors will challenge whether controls operate effectively, and management may miss issues that the control environment is supposed to detect. (COSO IC-IF (2013))

Practical 30/60/90-day execution plan

First 30 days: stabilize the highest-impact control inputs

  • Identify your most important controls (financial reporting, compliance monitoring, access governance, third-party oversight).
  • For each, document the exact information inputs and where they come from.
  • Create a short list of key reports/data feeds that drive those controls.
  • Add lightweight review evidence requirements (sign-off + exception disposition).

By 60 days: put quality checks and ownership in place

  • Assign business and technical owners for each key report/feed.
  • Define and implement minimum quality checks (completeness, accuracy, timeliness, integrity) tied to control purpose. (COSO IC-IF (2013))
  • Implement change controls for critical report logic and critical spreadsheets (versioning, approvals).
  • Centralize retention of control inputs and evidence.

By 90 days: make it testable and repeatable

  • Run an internal “mock audit” on a sample of controls: trace from control → input → quality checks → evidence of use.
  • Turn recurring information defects into tracked issues with root cause and remediation.
  • Standardize templates: key report profile, information requirements, review checklist.
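The mock-audit trace above (control → input → quality checks → evidence of use) can be sketched as a gap check over a sampled control record. The step names and record fields are illustrative, not from the COSO text:

```python
# The trace chain a reviewer should be able to reconstruct for each control.
TRACE_STEPS = ["control_objective", "required_information", "source_system",
               "quality_checks", "evidence_of_use"]

def trace_gaps(control_record: dict) -> list[str]:
    """Return the trace steps that cannot be reconstructed from the record."""
    return [step for step in TRACE_STEPS if not control_record.get(step)]

sampled_control = {
    "control_objective": "detect policy exceptions",
    "required_information": ["surveillance report"],
    "source_system": "trade surveillance platform",
    "quality_checks": [],  # nothing documented yet
    "evidence_of_use": ["annotated report", "escalation ticket"],
}

findings = trace_gaps(sampled_control)  # documented quality checks missing
```

Each finding becomes a remediation item before the real audit, which is the point of the 90-day exercise.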

Tooling note (where Daydream fits)

If you manage many controls and third parties, Daydream can serve as the system to map control-to-information dependencies, track ownership, collect evidence, and keep an audit-ready trail of “use” (review, exceptions, actions) without chasing files across email and shared drives.

Frequently Asked Questions

What counts as “information” under the relevant information use requirement?

Any input that a control depends on: reports, dashboards, logs, reconciliations, data extracts, monitoring alerts, and third-party provided metrics. If a reviewer uses it to decide pass/fail or to handle exceptions, it is in scope. (COSO IC-IF (2013))

Do we need formal data lineage documentation for every control report?

Focus on reports that drive key controls. For those, capture the source system(s), transformation logic at a practical level, owners, and change control so you can explain and defend the output. (COSO IC-IF (2013))

How do we prove the information was actually used?

Retain evidence that shows review and action: sign-offs with dates, annotated exceptions, tickets created, approvals/denials, and follow-up closure. A generated report alone rarely proves operation.

What if the control relies on a spreadsheet?

Allow it only with guardrails: locked formulas or protected ranges, version control, access restrictions, documented review steps, and a reproducible tie-out to the system of record.

How should we handle externally sourced information (for example, from a third party)?

Define acceptance criteria and basic validation steps (scope, time period, completeness, reasonableness) and document the reviewer’s conclusion. If external information is critical to a control, assign ownership and retain it with the control evidence. (COSO IC-IF (2013))

Who owns information quality: IT, data governance, or compliance?

Control owners own fitness for control purpose because they rely on the information to execute the control. IT/data teams often own technical correctness and pipelines, but accountability should be explicit in your key report inventory. (COSO IC-IF (2013))

Authoritative Sources

  • COSO, Internal Control – Integrated Framework (2013), Principle 13 (Information and Communication). Cited throughout as COSO IC-IF (2013).
