Ongoing and Separate Evaluations

“Ongoing and Separate Evaluations” requires you to run continuous monitoring and periodic independent reviews to confirm your internal control components exist and operate as intended, then to document and remediate what you find. You operationalize it by defining what gets monitored, who evaluates it, how often, what evidence proves it, and how issues get tracked to closure. (COSO IC-IF (2013))

Key takeaways:

  • Build an ongoing monitoring layer from day-to-day control signals (metrics, reconciliations, exceptions, supervision).
  • Schedule separate evaluations (independent testing, internal audit-style reviews) based on risk and change.
  • Treat findings as a closed-loop workflow: severity, owner, due date, remediation evidence, retest, and governance reporting.

A control environment that looked good at design time can drift quickly due to personnel changes, system releases, new third parties, process workarounds, and shifting risk. COSO Principle 16 puts a hard operational expectation on that reality: you must actively evaluate whether internal control components are “present and functioning,” using a mix of ongoing monitoring and separate evaluations. (COSO IC-IF (2013))

For a Compliance Officer, CCO, or GRC lead, the practical challenge is not understanding the words. It is building a monitoring system that produces reliable signals, is owned by the business, and stands up in audit. That means: (1) defining the control population and mapping monitoring to key risks, (2) establishing independent testing for higher-risk areas and recent changes, (3) maintaining evidence that is reviewable without heroics, and (4) ensuring exceptions flow into a remediation process with accountable owners and retesting.

This page gives requirement-level implementation guidance you can execute quickly: scoping, operating model, step-by-step buildout, artifacts to retain, common exam hangups, and a practical execution plan.

Regulatory text

COSO Principle 16 (Monitoring Activities): “The organization selects, develops, and performs ongoing and/or separate evaluations to ascertain whether the components of internal control are present and functioning.” (COSO IC-IF (2013))

Operator meaning: you must (a) choose monitoring methods, (b) design them so they can detect control breakdowns, and (c) execute them on a recurring basis, with documentation that shows what was evaluated, what was found, and what was fixed. The phrase “ongoing and/or separate” is not permission to pick only one in practice; serious programs use both: ongoing monitoring embedded in operations plus separate evaluations that are more independent and periodic. (COSO IC-IF (2013))

Plain-English interpretation (what the requirement is asking for)

You need a repeatable monitoring program that answers two questions for your key control areas:

  1. Are the controls still there? (present: implemented, assigned, and performed)
  2. Are they working? (functioning: operating effectively to manage the risk)

“Ongoing evaluations” are the checks already happening as part of running the business: supervisory review, automated alerts, variance analyses, exception queues, reconciliations, operational metrics, and ticket workflows. “Separate evaluations” are the more formal, independent reviews: internal audits, compliance testing, control self-assessments with independent validation, or targeted reviews after major change. (COSO IC-IF (2013))

Who it applies to (entity and operational context)

This requirement applies to any organization using COSO as its internal control framework, including:

  • Public and private companies using COSO-aligned internal control over financial reporting (ICFR) or broader internal control.
  • Regulated firms that align controls, risk, and assurance to COSO concepts for governance and oversight.
  • Internal audit, compliance, and risk functions that provide assurance, even if operational monitoring is owned by the first line. (COSO IC-IF (2013))

Operationally, it applies anywhere you have “key controls” or “key risk and control indicators,” including:

  • Financial close and reporting controls
  • Access management and change management controls
  • Third-party risk management controls (due diligence refresh, SLA monitoring, issue follow-up)
  • Privacy/security controls (logging, incident response exercises, vulnerability remediation tracking)
  • Operational resilience and business continuity controls

What you actually need to do (step-by-step)

Step 1: Define the control population you will monitor

  1. Inventory controls that matter for material risks (financial, operational, compliance, security).
  2. Identify key controls: the controls that, if they fail, create unacceptable risk exposure or could lead to material error.
  3. Record control attributes you will later test: owner, frequency, system(s), evidence type, dependencies (including third parties).

Deliverable: a control register that is specific enough to test.
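A control register like the one described above can be kept as structured records rather than free text, which makes it testable later. A minimal sketch, assuming Python dataclasses; all field names and the example control are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class ControlRecord:
    """One row in the control register; fields mirror Step 1's attributes."""
    control_id: str
    description: str
    owner: str
    frequency: str                 # e.g. "daily", "monthly", "quarterly"
    systems: list[str]
    evidence_type: str             # e.g. "system report", "recon sign-off"
    dependencies: list[str] = field(default_factory=list)  # incl. third parties
    is_key: bool = False           # failure creates unacceptable risk exposure

# Hypothetical example entry
register = [
    ControlRecord(
        control_id="AC-01",
        description="User access provisioning requires manager approval",
        owner="IAM Lead",
        frequency="daily",
        systems=["IAM platform"],
        evidence_type="approval ticket export",
        dependencies=["HR system"],
        is_key=True,
    ),
]

# Key controls are the subset that drives the separate-evaluation plan
key_controls = [c for c in register if c.is_key]
```

Keeping the register as data (rather than a narrative document) lets you filter key controls, detect missing owners, and feed the cadence logic in Step 3.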

Step 2: Split monitoring into “ongoing” vs “separate” on purpose

Create a simple mapping table:

| Control area | Ongoing evaluation (embedded) | Separate evaluation (independent) | Trigger for ad hoc separate review |
| --- | --- | --- | --- |
| Example: user access provisioning | Daily exception queue review; monthly access recertification metrics | Periodic sample-based access testing | IAM system change; org restructure |
| Example: third-party SOC report review | Intake workflow with completeness checks; SLA uptime monitoring | Independent review of SOC exceptions and mapping to your controls | New critical third party; major outage |

The goal is coverage without duplication: ongoing catches drift; separate evaluations validate the system and challenge assumptions. (COSO IC-IF (2013))

Step 3: Set evaluation cadence based on risk and change

Avoid arbitrary calendars. Instead:

  • Increase evaluation intensity where inherent risk is high, the control is manual, or the process is changing.
  • Reduce intensity where controls are automated, outcomes are stable, and change is low.

Document your rationale. Auditors routinely ask why you picked your frequencies.
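The risk-and-change logic above can be made explicit so the rationale is documented in one place. A minimal sketch; the scoring thresholds and frequency labels are illustrative assumptions you would calibrate to your own risk appetite:

```python
def evaluation_cadence(inherent_risk: str, automated: bool, recent_change: bool) -> str:
    """Derive a separate-evaluation frequency from risk and change attributes.

    Thresholds here are illustrative; record your own rationale per control.
    """
    score = {"low": 1, "medium": 2, "high": 3}[inherent_risk]
    if not automated:
        score += 1  # manual controls drift more easily
    if recent_change:
        score += 1  # system releases, reorgs, new third parties
    if score >= 4:
        return "quarterly"
    if score >= 3:
        return "semiannual"
    return "annual"
```

Encoding the rule also gives you a ready answer when auditors ask why you picked a frequency: the inputs and thresholds are the rationale.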

Step 4: Define evaluation procedures that produce decision-grade results

For each evaluation, specify:

  • Objective: what “present and functioning” means for this control
  • Population and scope: which systems, business units, third parties, time period
  • Method: inquiry, inspection, reperformance, observation, data analytics
  • Sampling approach: how you pick items; how you handle exceptions
  • Pass/fail criteria: what constitutes a deficiency vs a minor issue
  • Reviewer independence: how you avoid self-review for separate evaluations

Keep procedures short enough that a new tester can follow them without oral history.
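One way to enforce that every evaluation procedure covers the six elements above is a completeness check. A sketch, assuming procedures are stored as simple dicts; the example access test is hypothetical:

```python
# The six elements from Step 4, as required sections of every procedure
REQUIRED_FIELDS = {
    "objective",     # what "present and functioning" means for this control
    "population",    # systems, business units, third parties, time period
    "method",        # inquiry, inspection, reperformance, observation, analytics
    "sampling",      # how items are picked; how exceptions are handled
    "pass_fail",     # what constitutes a deficiency vs a minor issue
    "independence",  # who tests; how self-review is avoided
}

def missing_fields(procedure: dict) -> set[str]:
    """Return required sections absent from an evaluation procedure."""
    return REQUIRED_FIELDS - procedure.keys()

# Hypothetical procedure for a termination-access control
access_test = {
    "objective": "Terminated users lose system access within 1 business day",
    "population": "All Q3 terminations, IAM platform",
    "method": "Reperformance against the HR termination list",
    "sampling": "Full population via data extract; any miss is an exception",
    "pass_fail": "Access active >1 day after termination = deficiency",
    "independence": "Performed by a second-line compliance tester",
}
```

A procedure that passes this check still needs good content, but it cannot silently omit scope, criteria, or independence.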

Step 5: Build a findings-to-remediation workflow (closed loop)

Monitoring that produces issues but no closure will fail in audit. Minimum workflow fields:

  • Issue statement (condition, criteria, cause, impact)
  • Severity/rating (define categories internally)
  • Control/risk mapping
  • Owner and accountable executive
  • Corrective action plan and target date
  • Evidence of remediation
  • Retest results and closure approval
  • Trend reporting (repeat issues, aging)
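The minimum workflow fields above can be enforced in the system of record so a finding cannot close without remediation evidence and a passing retest. A minimal sketch, assuming a Python dataclass; the field set and closure rule are illustrative:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Finding:
    issue: str                      # condition, criteria, cause, impact
    severity: str                   # categories defined internally
    control_id: str                 # control/risk mapping
    owner: str                      # accountable owner
    target_date: date               # corrective action due date
    remediation_evidence: str = ""  # link or reference to closure evidence
    retest_passed: bool = False     # independent retest result

    def can_close(self) -> bool:
        """Closure requires both remediation evidence and a passing retest."""
        return bool(self.remediation_evidence) and self.retest_passed
```

Guarding closure this way turns the closed loop from a policy statement into a property the workflow cannot violate.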

Daydream can help here by centralizing evaluations, evidence, and issue management in one place so you can demonstrate traceability from monitoring activity to closure without chasing screenshots across teams.

Step 6: Report results to the right governance forum

Define where monitoring outcomes go:

  • Operational metrics to process owners and first-line leadership
  • Separate evaluation results to compliance/risk committees and the board/audit committee as appropriate
  • Escalation thresholds for significant deficiencies

Your documentation should show not only that you found issues, but that leadership saw them and made decisions.

Required evidence and artifacts to retain

Keep evidence that answers: “What did you test, what did you find, and what changed as a result?”

Core artifacts

  • Monitoring policy/standard describing ongoing vs separate evaluations and roles (COSO IC-IF (2013))
  • Control inventory/register with owners and frequencies
  • Annual/rolling monitoring plan (separate evaluations), with risk-based rationale
  • Ongoing monitoring dashboards/metrics definitions (what the metric means, data source, thresholds)
  • Test plans and workpapers for separate evaluations (procedures, population, samples, results)
  • Evidence repository (tickets, logs, recon sign-offs, approvals, reports)
  • Issues register with remediation plans, closure evidence, and retesting results
  • Governance materials: meeting agendas/minutes, committee packs, escalation records

Evidence quality rules (practical)

  • Evidence must be dated, attributable (who performed/reviewed), and complete (shows the conclusion).
  • Avoid “proof by screenshot” without context; pair screenshots with a short annotation or exported report that ties to the control objective.
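These quality rules can be checked automatically at evidence intake. A sketch, assuming evidence items are tracked as dicts with metadata; the field names are illustrative assumptions:

```python
def evidence_gaps(item: dict) -> list[str]:
    """Return quality gaps for an evidence item: dated, attributable, complete."""
    gaps = []
    if not item.get("date"):
        gaps.append("missing date")
    if not item.get("performed_by"):
        gaps.append("not attributable (no performer/reviewer)")
    if not item.get("conclusion"):
        gaps.append("incomplete (no documented conclusion)")
    # Screenshots need context tying them to the control objective
    if item.get("type") == "screenshot" and not item.get("annotation"):
        gaps.append("screenshot lacks context annotation")
    return gaps
```

Running this at intake means reviewers see gaps when the evidence is fresh, not months later during an audit.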

Common exam/audit questions and hangups

Expect these lines of questioning:

  • Coverage: “Show me how you know each key control is monitored.” Bring the mapping of controls to ongoing and separate evaluations. (COSO IC-IF (2013))
  • Independence: “How is separate evaluation independent from the operator?” Clarify second-line testing vs first-line ownership, and how you prevent self-review.
  • Rationale: “Why this frequency, why this scope?” Produce a risk/change-based rationale.
  • Exception handling: “What happens when you find a failure?” Walk the issue workflow, show time-to-remediation data, and show retest evidence.
  • Sustainability: “Can this run without the one person who knows it?” Provide procedures, templates, and a system of record.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: treating monitoring as a yearly audit event.
    Fix: formalize ongoing monitoring signals (exceptions, KPIs, reconciliations) and make them reviewable. (COSO IC-IF (2013))

  2. Mistake: “separate evaluation” equals a checklist with no testing.
    Fix: require at least inspection/reperformance for key controls; document population and results.

  3. Mistake: unclear pass/fail criteria.
    Fix: define what constitutes a deficiency, what requires escalation, and what evidence closes an issue.

  4. Mistake: monitoring without remediation governance.
    Fix: issue owners, due dates, retesting, and closure approvals are part of the control system, not admin overhead.

  5. Mistake: evidence scattered across tools with no traceability.
    Fix: centralize references in a system of record (for example, Daydream) and standardize naming and retention.

Enforcement context and risk implications

COSO Principle 16 is a framework expectation rather than a case-law-driven mandate; no public enforcement actions attach to it directly. (COSO IC-IF (2013)) The risk is still concrete: without credible ongoing and separate evaluations, control failures can persist unnoticed, and leadership cannot credibly assert that internal control is functioning.

Practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Confirm scope: which control domains are in (ICFR only, enterprise controls, third-party risk controls).
  • Build or refresh the control register with owners and evidence types.
  • Identify existing ongoing monitoring signals you can formalize (exception queues, metrics, reconciliations).
  • Draft the monitoring standard: definitions, roles, and minimum documentation. (COSO IC-IF (2013))

Next 60 days (Near-term)

  • Publish a separate evaluation plan (risk- and change-informed).
  • Write test procedures for the highest-risk key controls.
  • Stand up an issue workflow with required fields, ownership, and retesting expectations.
  • Pilot: run at least one separate evaluation end-to-end (test, issue, remediate, retest, close).

By 90 days (Operationalized)

  • Expand separate evaluations across remaining key areas based on the plan.
  • Convert ongoing monitoring into reviewable artifacts (dashboards with thresholds, supervision logs, exception closure reports).
  • Produce governance reporting that shows themes, repeat issues, and management action.
  • Perform a program health review: evidence quality, cycle times, and gaps in coverage. (COSO IC-IF (2013))

Frequently Asked Questions

Do we have to do both ongoing monitoring and separate evaluations?

COSO requires ongoing and/or separate evaluations to ascertain whether controls are present and functioning. In practice, programs that rely on only one approach struggle to detect drift and to demonstrate independent challenge. (COSO IC-IF (2013))

What counts as an “ongoing evaluation” in a real operation?

Ongoing evaluations are the operational checks embedded in execution, like exception review, supervisory approvals, reconciliations, and metric monitoring. They need to be documented well enough that an auditor can see what was reviewed, by whom, and what happened with exceptions. (COSO IC-IF (2013))

What makes an evaluation “separate”?

A separate evaluation is performed with more independence and formality than day-to-day operations, often by compliance testing, risk, or internal audit. The procedures should allow an independent reviewer to validate that a control is operating, not just that someone says it is. (COSO IC-IF (2013))

How do we prove controls are “present and functioning” without creating a paperwork factory?

Standardize evidence types per control (system report, ticket export, approval log) and store them consistently. Limit narrative to what ties evidence to the control objective and the tester’s conclusion.

How should third-party controls fit into Principle 16 monitoring?

Treat third-party risk controls as part of your internal control system: due diligence refresh, contract compliance checks, SLA monitoring, and issue follow-up should have ongoing signals plus periodic independent review. Record when third-party changes trigger a separate evaluation.

Can a control owner test their own control as a separate evaluation?

For separate evaluations, self-review weakens independence and invites audit pushback. If the first line performs self-assessments, add second-line validation or internal audit coverage for higher-risk areas. (COSO IC-IF (2013))

Authoritative Sources

  • COSO, Internal Control — Integrated Framework (2013), cited throughout as COSO IC-IF (2013)

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
