Separate Evaluations Scope

To meet the COSO “Separate Evaluations Scope” requirement, you must run periodic, independent evaluations (often internal audit or an independent compliance review) where the scope and frequency are explicitly risk-based and the results provide an objective view of internal control effectiveness (COSO IC-IF (2013)). Operationalize it by documenting a risk-ranked audit universe, a rationale for what gets reviewed and when, and evidence that evaluators are sufficiently independent of the controls they test.

Key takeaways:

  • Your separate evaluation plan must show risk-based scope and frequency, not a fixed calendar or legacy rotation (COSO IC-IF (2013)).
  • Independence is the point: separate evaluations must be objective and insulated from control ownership (COSO IC-IF (2013)).
  • Exams and audits look for traceability from risk assessment → evaluation plan → workpapers → issues → remediation validation.

Separate evaluations are one of the few control expectations that quickly expose whether your assurance program is real or performative. Management self-assessments, control-owner checklists, and “we reviewed it in a meeting” activities can support monitoring, but they do not satisfy the need for independent, periodic evaluations that provide an objective assessment of internal control effectiveness (COSO IC-IF (2013)).

For a CCO or GRC lead, the operational challenge is rarely “do we have internal audit?” It’s proving that (1) the evaluation scope is rational, (2) the cadence changes as risk changes, and (3) the evaluators are sufficiently independent from day-to-day control operation. This is also where third-party risk management programs get tested: if critical third parties support key controls, your separate evaluations need a clear position on whether and how you independently evaluate those control dependencies.

This page gives requirement-level implementation guidance you can execute quickly: define the evaluation universe, build a risk-based scoping method, set frequency rules, document independence, and retain artifacts that will survive audit scrutiny.

Regulatory text

Requirement (excerpt): “Separate evaluations are conducted periodically and provide objective assessments including scope and frequency based on risk.” (COSO IC-IF (2013))

Operator interpretation: You need a repeatable program of independent evaluations that is (a) periodic, (b) objective, and (c) explicitly risk-based in both what you review (scope) and how often you review it (frequency) (COSO IC-IF (2013)). The regulator or auditor expectation is less about a specific cadence and more about whether your rationale is defensible and your coverage matches your risk profile.

Plain-English interpretation (what this means in practice)

  • “Separate evaluations” means the evaluator is not the same person/team that operates the control day to day. Internal audit is the common model, but an independent compliance testing function or qualified external reviewer can serve the role if independence is real.
  • “Periodically” means evaluations happen on a planned basis, not only after incidents and not only during the annual SOX or year-end crunch.
  • “Objective assessments” means the evaluation includes evidence-based testing and a conclusion (effective / needs improvement), not informal opinions.
  • “Scope and frequency based on risk” means you can explain, on paper, why certain areas are reviewed more often and why lower-risk areas are reviewed less often (COSO IC-IF (2013)).

Who it applies to

Entity types: Organizations of all types; within them, the internal audit function or any assurance function tasked with independent testing (COSO IC-IF (2013)).

Operational contexts where this shows up fast:

  • Regulated or audit-heavy environments where management must demonstrate internal control effectiveness.
  • Organizations with material third-party reliance (cloud providers, payment processors, outsourced operations) where third-party control dependencies can be a key source of risk.
  • Organizations with rapid change (new systems, reorganizations, acquisitions) where legacy audit cycles no longer match actual risk.

What you actually need to do (step-by-step)

Step 1: Define your “evaluation universe” (what could be evaluated)

Create a complete list of control areas and control-dependent processes that merit independent evaluation. Keep it pragmatic. Typical buckets:

  • Entity-level governance and compliance oversight
  • Financial reporting controls (if applicable)
  • Information security and access controls
  • Change management and SDLC controls
  • Third-party risk management lifecycle controls (due diligence, contracting, monitoring, offboarding)
  • Privacy and data governance controls
  • Incident management and business continuity controls

Artifact: “Separate Evaluation Universe” register with owner, control objective, systems, and key third parties implicated.
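
If you keep the register as structured data (a spreadsheet export, GRC tool, or script), a minimal Python sketch of one entry might look like the following; the field names are illustrative assumptions, not a COSO or Daydream schema:

```python
from dataclasses import dataclass, field

@dataclass
class UniverseEntry:
    """One row in the Separate Evaluation Universe register; fields are illustrative."""
    domain: str                  # e.g., "Third-party risk management lifecycle controls"
    owner: str                   # accountable owner of the control area
    control_objective: str       # what "effective" means for this domain
    systems: list[str] = field(default_factory=list)            # in-scope systems
    key_third_parties: list[str] = field(default_factory=list)  # third-party dependencies

entry = UniverseEntry(
    domain="Information security and access controls",
    owner="CISO",
    control_objective="Access is provisioned, reviewed, and revoked per policy",
    systems=["IdP", "HRIS"],
    key_third_parties=["Cloud IAM provider"],
)
```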

Step 2: Establish a risk-scoring method you can defend

You need a method to decide scope and frequency based on risk (COSO IC-IF (2013)). Keep the model explainable. A practical method:

  • Inherent risk drivers: sensitivity of data, transaction volumes (qualitative is fine), customer impact, regulatory exposure, fraud potential.
  • Change drivers: new system implementations, high control turnover, prior audit issues, major third-party changes.
  • Control reliance/criticality: whether downstream processes rely on the control; whether the control is preventive vs detective.

Output a ranked list (High / Medium / Low) and document the rationale in plain language.
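
To make the ranking reproducible, a minimal scoring sketch in Python could look like this; the drivers mirror the buckets above, but the weights and tier cutoffs are assumptions you would calibrate and document, not COSO-prescribed values:

```python
# Illustrative scoring: each driver is rated 1 (low) to 3 (high). The weights
# and cutoffs below are assumptions to tune; document whatever you choose.
WEIGHTS = {"inherent": 0.5, "change": 0.3, "reliance": 0.2}

def risk_tier(ratings: dict[str, int]) -> str:
    """Combine driver ratings into a weighted score and map it to a tier."""
    score = sum(WEIGHTS[driver] * rating for driver, rating in ratings.items())
    if score >= 2.5:
        return "High"
    if score >= 1.75:
        return "Medium"
    return "Low"

# Example: high inherent risk, recent system change, heavy downstream reliance.
print(risk_tier({"inherent": 3, "change": 2, "reliance": 3}))  # -> "High"
```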

Artifact: Risk ranking worksheet for the evaluation universe, including the date and approver.

Step 3: Set frequency rules tied to the risk tiers

The requirement does not prescribe an exact cycle; it requires that frequency be risk-based (COSO IC-IF (2013)). Define rules such as:

  • Higher-risk domains are evaluated more frequently or with deeper testing.
  • Lower-risk domains are evaluated less frequently, or only upon material change triggers.
  • Any domain with repeated deficiencies gets escalated in frequency until stabilized.

Avoid hardcoding a single “everyone gets reviewed annually” rule unless your risk profile truly supports it.
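
As a sketch, the frequency rules can be expressed as a simple lookup plus the escalation rule above; the specific cadences here are illustrative assumptions, not prescribed cycles:

```python
# Illustrative cadences keyed to risk tier; adjust to your profile and document why.
BASE_FREQUENCY = {
    "High": "annual, full-scope testing",
    "Medium": "every two years, or on a material change trigger",
    "Low": "every three years, or trigger-only",
}

def planned_frequency(tier: str, repeated_deficiencies: bool) -> str:
    """Apply the escalation rule: repeat findings raise the cadence until stabilized."""
    if repeated_deficiencies:
        return "semiannual until two consecutive clean evaluations"
    return BASE_FREQUENCY[tier]

print(planned_frequency("Low", repeated_deficiencies=True))
```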

Artifact: “Separate Evaluation Plan” showing domains, risk tier, planned period, and why that timing is appropriate.

Step 4: Lock independence and objectivity (make it real, not implied)

Document independence at two levels:

  1. Organizational independence: reporting lines, audit committee/board visibility, or equivalent governance.
  2. Engagement independence: the evaluator for a domain cannot be the operator or designer of the control under review.

Also define minimum objectivity standards for workpapers: evidence sources, sampling approach (if used), and criteria for conclusions.

Artifacts:

  • Independence statement for the evaluation function
  • Conflict checks for evaluators (lightweight is fine, but written)
  • Evaluation/testing methodology standard
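
The engagement-level conflict check can stay lightweight, as the artifact list suggests; a minimal sketch (names and structure are hypothetical) is just a membership test recorded per engagement:

```python
def has_engagement_conflict(evaluator: str, operators: set[str], designers: set[str]) -> bool:
    """True if the proposed evaluator operates or designed the control under review."""
    return evaluator in operators or evaluator in designers

# Hypothetical check: "a.lee" operates the control, so she cannot test it.
if has_engagement_conflict("a.lee", operators={"a.lee", "b.kim"}, designers=set()):
    print("Conflict: reassign the engagement and record the check in the workpapers.")
```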

Step 5: Perform evaluations with consistent documentation

For each evaluation:

  • Define objective, scope boundaries, and period under review.
  • Map testing to control objectives (what “effective” means).
  • Collect evidence, perform testing, and record results.
  • Rate issues consistently, assign owners, and set remediation actions.

Artifact: Evaluation report package (planning memo, test steps, evidence index, results, issue log).

Step 6: Close the loop with remediation validation

Separate evaluations must feed control improvement. Validate remediation with follow-up testing that matches the original issue. If you accept risk, document who accepted it and why.

Artifacts:

  • Remediation plans with dates and owners
  • Retest workpapers and closure memo
  • Risk acceptance record (where applicable)

Step 7: Build change triggers that update scope/frequency mid-cycle

Risk-based frequency is not “set and forget.” Define triggers that force re-scoping:

  • Material control failures or incidents
  • Major system changes
  • Significant third-party events affecting control reliance (e.g., a critical third party changes hosting model)

Artifact: Trigger log and updated plan version history.
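
If the trigger log lives as structured data, one illustrative entry shape (fields and values are assumptions, not a prescribed format) could be:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TriggerEvent:
    """One row in the trigger log; field names are illustrative."""
    occurred: date
    domain: str
    trigger: str        # e.g., "critical third party changed hosting model"
    action: str         # e.g., "re-score domain; pull evaluation forward"
    plan_version: str   # plan version in which the change was recorded

event = TriggerEvent(
    occurred=date(2025, 6, 1),
    domain="Third-party risk management lifecycle controls",
    trigger="Critical third party changed hosting model",
    action="Re-score domain; escalate tier Medium -> High; move review up one quarter",
    plan_version="FY25-v3",
)
```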

Required evidence and artifacts to retain (exam-ready checklist)

Keep these in a single “Separate Evaluations” folder/library so you can produce them fast:

  • Approved evaluation universe and inventory
  • Documented risk ranking and the scoring rationale
  • Current and prior evaluation plans, with change history
  • Independence documentation (org chart excerpt, charters, conflict checks)
  • Engagement-level workpapers: scope, test steps, evidence, conclusions
  • Issue logs, remediation plans, retest evidence, and closure approvals
  • Governance reporting (committee decks, summaries, escalation records)

Common exam/audit questions and hangups

  • “Show me how you decided what to review this year.” They want traceability from risk assessment to plan (COSO IC-IF (2013)).
  • “Why is this high-risk area not in scope?” If you excluded it, you need a written rationale and compensating assurance.
  • “Who performed the testing and how are they independent?” Titles don’t prove independence; responsibilities do.
  • “How do you ensure objectivity and consistency?” Expect questions about methodology, documentation standards, and issue rating criteria.
  • “What changed your plan mid-year?” A static plan in a changing environment reads as non-risk-based.

Frequent implementation mistakes (and how to avoid them)

  1. Mistake: Confusing management monitoring with separate evaluations.
    Fix: Label management self-assessments as ongoing monitoring, and maintain a distinct independent evaluation plan and workpaper standard (COSO IC-IF (2013)).

  2. Mistake: A calendar-driven rotation with weak risk rationale.
    Fix: Keep the schedule, but anchor it to a ranked risk universe and documented triggers for change (COSO IC-IF (2013)).

  3. Mistake: Independence by org chart only.
    Fix: Add engagement-level conflict checks and ensure control operators do not test themselves.

  4. Mistake: Reporting findings without verifying remediation.
    Fix: Treat retesting as required closure evidence for anything above a minor observation.

  5. Mistake: Third-party control dependencies are missing from scope.
    Fix: Explicitly state how you evaluate controls that rely on third parties: review your own oversight controls, and where needed, incorporate third-party assurance artifacts into your evaluation approach.

Enforcement context and risk implications

No public enforcement cases were provided for this requirement in the supplied sources. Practically, the risk is audit and supervisory: if you cannot show risk-based scope and independent testing, auditors may conclude your monitoring component is ineffective, which can cascade into broader control effectiveness concerns (COSO IC-IF (2013)). Internally, the business impact shows up as late discovery of control failures, repeat issues, and limited confidence in compliance attestations.

Practical 30/60/90-day execution plan

First 30 days: Stand up the structure

  • Inventory the evaluation universe and identify control owners.
  • Define independence criteria and document the evaluator conflict-check process.
  • Draft the risk ranking method and run a first-pass scoring workshop with audit/compliance, security, and key business stakeholders.
  • Produce a draft annual evaluation plan tied to the risk tiers.

Days 31–60: Execute and prove the method works

  • Run separate evaluations for the highest-risk areas first.
  • Standardize workpapers and issue write-ups so results are comparable across domains.
  • Establish governance reporting (monthly or quarterly) that includes plan status, exceptions, and emerging risk triggers.

Days 61–90: Close loops and harden for audit

  • Perform remediation validation on early findings and document closure.
  • Update risk rankings and the plan based on what you learned (control failures should move areas up in priority).
  • Package artifacts into an exam-ready library and run a mock audit request: “Produce your plan, rationale, independence proof, and two completed evaluations.”

Where Daydream fits naturally

If you’re coordinating separate evaluations across multiple functions (internal audit, compliance testing, security assurance, third-party risk), Daydream can help centralize the evaluation universe, link risk rankings to scoped reviews, and keep workpapers, evidence, and remediation validation tied to the same requirement and control objective. The value is operational: fewer gaps between “plan” and “proof.”

Frequently Asked Questions

Does internal audit have to own separate evaluations?

No. The requirement is independent, objective evaluation with risk-based scope and frequency (COSO IC-IF (2013)). Internal audit is common, but an independent compliance testing function or qualified external reviewer can meet the intent if independence is credible.

Can we satisfy this with control self-assessments (CSAs)?

CSAs support monitoring, but they are not “separate” if control owners assess their own work. If you use CSAs, pair them with independent testing in higher-risk areas and document where CSAs fit versus where separate evaluations apply (COSO IC-IF (2013)).

What proves “scope and frequency based on risk” to an auditor?

A risk-ranked evaluation universe, written scoring rationale, and an approved plan that clearly ties higher risk to more frequent or deeper reviews (COSO IC-IF (2013)). Also keep evidence of plan updates when risk changes.

How do we handle third parties in separate evaluations?

Evaluate your internal controls over third-party selection, contracting, and monitoring, and document how you get assurance over third-party-dependent controls (for example, through oversight controls and relevant assurance artifacts). Make the dependency explicit in scope statements so omissions are intentional, not accidental.

What if we don’t have resources to cover every high-risk area this cycle?

Document the constraint, escalate through governance, and record a risk acceptance or deferral rationale with compensating actions. Auditors react poorly to silent gaps; they can accept transparent risk decisions with evidence of oversight.

How do we show evaluator independence in practice?

Keep a simple conflict check per engagement and confirm the evaluator does not design, operate, or directly report into the function being tested. Independence should be provable from roles and approvals, not implied by job titles.

Authoritative Sources

  • COSO, Internal Control – Integrated Framework (2013), cited throughout as COSO IC-IF (2013).

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream