Principle 16: Conducts ongoing and/or separate evaluations
Principle 16 requires you to regularly evaluate whether internal controls are present and working, using a mix of ongoing monitoring and periodic independent reviews, then act on what you find. To operationalize it fast, define a monitoring plan by risk, assign owners, run a repeatable test cadence, track issues to closure, and retain evidence that proves evaluation and remediation.
Key takeaways:
- Build a documented evaluations plan that covers ongoing monitoring plus separate (independent) evaluations, scaled by risk.
- Treat evaluation results as an issues-management workflow: severity, root cause, remediation, validation, closure.
- Keep audit-ready evidence: what you tested, how you tested, what you found, and what changed as a result.
A COSO program fails in practice when controls exist on paper but nobody can show they still work after systems change, teams reorganize, or third parties are added. Principle 16 ("conducts ongoing and/or separate evaluations") is COSO's mechanism to prevent "set-and-forget" control environments. It expects you to continuously watch control performance through day-to-day activities and to periodically step back with more independent, structured evaluations to confirm the system of internal control remains effective. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path is to turn Principle 16 into an operating rhythm: define what “ongoing monitoring” means for your key risks, define what qualifies as a “separate evaluation,” and connect both to a single issues log with accountable remediation. The goal is exam-ready traceability from risk to control to evaluation to corrective action. COSO frames this as part of monitoring activities within the Internal Control – Integrated Framework. 2
Requirement overview (plain-English)
Plain-English interpretation: You must regularly check that your internal controls exist and work as intended, using (1) ongoing monitoring embedded in operations and (2) periodic separate evaluations that are more structured and independent. When weaknesses are identified, you must communicate them and drive corrective actions to completion. 2
What “ongoing” vs. “separate” means in practice
- Ongoing evaluations: Continuous or routine supervisory activities, operational metrics, exception reports, control self-checks, and workflow validations that reveal whether controls are operating. Think: daily reconciliations, access review exceptions, automated alerts, ticket aging dashboards.
- Separate evaluations: Periodic reviews performed by an independent function or reviewer (often internal audit, compliance testing, or a control assurance team) with defined scope, testing steps, and documented conclusions.
Your program should use both, because ongoing monitoring catches drift early, and separate evaluations detect blind spots and validate that monitoring itself is reliable. 1
Who it applies to
Entity types: COSO is a framework adopted by organizations across industries and sizes; Principle 16 applies to any entity that uses the framework. 1
Operational contexts where Principle 16 becomes “must-have” (not “nice-to-have”):
- Financial reporting and close processes: reconciliations, journal entry controls, SOX-like environments.
- Regulated operations: compliance obligations where control failures become reportable events.
- IT and security control environments: identity access management, change management, incident management.
- Third-party risk and outsourcing: controls that rely on third parties (including SOC reports and complementary user entity controls).
- High-change environments: cloud migrations, ERP implementations, reorganizations, M&A integration.
Regulatory text
Framework excerpt (provided): “COSO internal control principle 16 implementation expectation.” 1
Operator translation: Treat this as a requirement to (1) design a monitoring approach that fits your risks and control types, (2) execute it on a defined rhythm, (3) document the results and exceptions, and (4) remediate and validate fixes. COSO describes this as part of the monitoring component of internal control. 2
What you actually need to do (step-by-step)
Use the sequence below as your build order. It is designed so you can show progress quickly, even before the whole program is mature.
1) Set evaluation scope by risk and “key control” status
- Start with your control inventory (or create one if missing) and tag key controls tied to material risks and critical processes.
- Identify where controls are most likely to drift: manual controls, spreadsheet controls, controls performed by new teams, controls dependent on third parties, and controls in systems undergoing change.

Output: a scoped list of controls that must be evaluated under Principle 16, with rationale.
2) Define what qualifies as ongoing monitoring for each control
For each key control, specify:
- Monitoring mechanism: metric, report, workflow check, supervisory review, or automated alert.
- Owner: named role accountable for performing/overseeing monitoring.
- Trigger conditions: what constitutes an exception (late completion, threshold breach, override, missing approval).
- Action path: where exceptions go (ticketing, issue log, corrective action plan).

Example (operational): For privileged access, ongoing monitoring can include alerts for new privileged group membership and a weekly exception review by IAM operations, with exceptions logged and investigated.
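The per-control monitoring definition above can be sketched in code. This is a minimal illustration, not a prescribed schema: the field names, the `MonitoringRule` class, and the late-completion trigger are all hypothetical examples of how a team might encode owner, mechanism, and trigger condition for one control.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MonitoringRule:
    control_id: str
    mechanism: str      # metric, report, workflow check, or automated alert
    owner: str          # named role accountable for the monitoring
    max_days_late: int  # trigger condition: late-completion threshold

def find_exceptions(rule: MonitoringRule, completions):
    """Flag control executions completed later than the rule's trigger threshold."""
    exceptions = []
    for done_on, expected_on in completions:
        days_late = (done_on - expected_on).days
        if days_late > rule.max_days_late:
            exceptions.append({"control": rule.control_id,
                               "owner": rule.owner,
                               "days_late": days_late})
    return exceptions

rule = MonitoringRule("IAM-01", "weekly privileged-access review", "IAM Ops Lead", 2)
completions = [(date(2024, 3, 8), date(2024, 3, 4)),    # 4 days late -> exception
               (date(2024, 3, 12), date(2024, 3, 11))]  # 1 day late -> within tolerance
print(find_exceptions(rule, completions))
```

In practice the exception record would feed your ticketing system or issues log rather than a print statement; the point is that the trigger condition and the routing are defined in advance, not improvised.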
3) Build a separate evaluations plan (independence + repeatability)
Define:
- Evaluator: internal audit, compliance testing, second-line assurance, or a trained independent reviewer not performing the control day-to-day.
- Testing steps: inspection, re-performance, sampling logic (if you sample, document method), and evidence requirements.
- Frequency logic: do not default to calendar cadence. Base it on risk, change volume, prior issues, and reliance (for example, controls supporting external reporting get higher scrutiny).

Tip: If independence is hard in a small organization, document compensating independence (peer review, cross-functional review, or executive oversight) and why it is reasonable for your size.
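Risk-based frequency logic can be made explicit and auditable with a simple scoring sketch. The weights and thresholds below are illustrative assumptions, not COSO-mandated values; the takeaway is that cadence is derived from documented risk drivers instead of defaulting to the calendar.

```python
def evaluation_frequency(risk: str, change_volume: str,
                         prior_issues: bool, external_reporting: bool) -> str:
    """Suggest a separate-evaluation cadence from coarse risk drivers.

    Ratings are "high"/"medium"/"low"; weights and cutoffs are illustrative.
    """
    score = {"high": 3, "medium": 2, "low": 1}[risk]
    score += {"high": 2, "medium": 1, "low": 0}[change_volume]
    score += 2 if prior_issues else 0
    score += 2 if external_reporting else 0  # reliance drives higher scrutiny
    if score >= 7:
        return "quarterly"
    if score >= 4:
        return "semiannual"
    return "annual"

# A high-risk, high-change control supporting external reporting with prior issues:
print(evaluation_frequency("high", "high", True, True))    # quarterly
# A stable, low-risk control with a clean history:
print(evaluation_frequency("low", "low", False, False))    # annual
```

Whatever scoring model you use, record the inputs alongside the resulting cadence so an examiner can see why a control is tested quarterly versus annually.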
4) Standardize evaluation workpapers (make evidence easy)
Create templates so every evaluation produces consistent artifacts:
- Test plan (scope, control description, period, population/source)
- Test steps performed (what was checked and how)
- Evidence captured (links, screenshots, exports, tickets)
- Results (pass/fail, exceptions, impact)
- Conclusion and sign-off
This “paper trail” is often the difference between “we do this” and “we can prove this.” 1
5) Centralize findings into an issues-management workflow
Treat evaluation results as governed issues:
- Severity rating: tie to risk impact (financial, regulatory, operational, security).
- Root cause: process gap, training, system configuration, unclear ownership, third-party failure.
- Corrective action plan (CAP): tasks, owners, due dates, dependencies.
- Validation: retest or evidence-based verification after remediation.
- Closure criteria: documented proof the control operates and the weakness is resolved.
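The closure criteria above amount to a gate that an issues-tracking workflow can enforce. The sketch below uses illustrative field names (no specific GRC tool is assumed): an issue cannot close until the corrective action plan is complete, remediation is validated, and evidence is attached.

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    issue_id: str
    severity: str                  # tied to risk impact (financial, regulatory, ...)
    root_cause: str                # process gap, training, configuration, ownership
    cap_tasks_done: bool = False   # corrective action plan fully executed
    validated: bool = False        # retest or evidence-based verification passed
    evidence_refs: list = field(default_factory=list)  # links to closure proof

def can_close(issue: Issue) -> bool:
    """Closure gate: CAP complete, remediation validated, and proof retained."""
    return issue.cap_tasks_done and issue.validated and bool(issue.evidence_refs)

iss = Issue("FIN-042", "high", "unclear ownership")
print(can_close(iss))              # False: nothing remediated yet
iss.cap_tasks_done = True
iss.validated = True
iss.evidence_refs.append("retest-workpaper-ref")
print(can_close(iss))              # True: all closure criteria met
```

Enforcing the gate in software (or in your ticketing tool's workflow rules) prevents the most common audit finding in this area: issues marked closed without validation evidence.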
6) Report monitoring outcomes to management with decision-ready summaries
Provide a periodic monitoring report that includes:
- Controls evaluated (ongoing and separate)
- Exceptions and open issues
- Repeat findings and aging issues
- Thematic root causes (training, tooling, third parties)
- Management decisions needed (policy change, staffing, system fixes)
COSO expects monitoring to inform management judgments about internal control effectiveness. 2
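The report contents listed above can be produced as a simple roll-up over the issues log. This is a sketch with hypothetical field names; the aging threshold of 90 days is an assumption, not a standard.

```python
from collections import Counter

def monitoring_summary(issues: list) -> dict:
    """Roll an issues log up into decision-ready counts for management reporting."""
    open_issues = [i for i in issues if i["status"] != "closed"]
    return {
        "open": len(open_issues),
        "repeat_findings": sum(1 for i in open_issues if i.get("repeat")),
        "aging_90_plus": sum(1 for i in open_issues if i["age_days"] > 90),
        "by_root_cause": dict(Counter(i["root_cause"] for i in open_issues)),
    }

issues = [
    {"status": "open", "repeat": True, "age_days": 120, "root_cause": "training"},
    {"status": "open", "repeat": False, "age_days": 30, "root_cause": "tooling"},
    {"status": "closed", "repeat": False, "age_days": 10, "root_cause": "training"},
]
print(monitoring_summary(issues))
```

Grouping by root cause is what turns the log into the "thematic" view management needs: repeated training-related findings argue for a different fix than repeated tooling-related ones.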
Required evidence and artifacts to retain
Auditors and examiners look for proof that evaluations happened and that you acted on results. Keep:
Core artifacts (minimum set)
- Monitoring & separate evaluations policy/procedure mapped to Principle 16. 1
- Monitoring plan / assurance calendar showing scope and responsibilities.
- Control inventory with key-control designation and evaluation approach.
- Completed evaluation workpapers (test plans, evidence, conclusions, sign-offs).
- Exception and issues log with status, owners, and remediation evidence.
- Remediation validation records (retest results, screenshots, change tickets, updated procedures).
- Management reporting (dashboards, steering committee minutes, risk committee updates).
Helpful add-ons (reduce debate in audits)
- Evidence retention standard (where evidence lives, naming conventions, retention period aligned to your policy)
- Change-management linkage (show that major changes trigger updated evaluations)
- Third-party dependency mapping (controls reliant on SOC reports and complementary controls)
Common exam/audit questions and hangups
Expect these lines of questioning:
- "Show me your monitoring strategy." They want a documented plan, not a verbal description.
- "How do you know monitoring is working?" Be ready to show separate evaluations that test not only controls, but the monitoring activities themselves.
- "How do you ensure independence for separate evaluations?" If internal audit does it, easy. If not, document how reviewer independence is achieved.
- "Walk me from exception to closure." Have a clean chain: finding → risk impact → CAP → validation → closure approval.
- "What changed this year, and how did evaluations adapt?" Show that system/process changes trigger updates to the evaluation plan.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | Fix |
|---|---|---|
| Treating ongoing monitoring as “business as usual” with no documentation | You can’t prove it happened or that it is effective | Define monitoring activities and retain outputs (reports, logs, sign-offs) |
| Separate evaluations that are ad hoc | Creates gaps and inconsistent coverage | Publish an evaluations plan with clear scope and methods |
| No linkage between findings and remediation | Findings repeat; control environment degrades | Run a governed issues-management workflow with validation before closure |
| Over-testing low-risk controls | Burns time and creates noise | Focus separate evaluations on key controls and high-change areas |
| Ignoring third-party dependent controls | Breaks end-to-end control chains | Map third-party reliance and test complementary controls you own |
Enforcement context and risk implications
COSO itself is a framework, not a regulator, so “enforcement” usually shows up indirectly: external auditors, regulators, or stakeholders may cite ineffective monitoring as a governance failure when control breakdowns occur. The operational risk is straightforward: without ongoing and separate evaluations, control failures persist unnoticed, weaknesses repeat, and leadership cannot credibly assert control effectiveness. 1
Practical 30/60/90-day execution plan
Because your environment and risk profile drive timing, use phases rather than fixed durations to avoid false precision.
First 30 days (stabilize and define)
- Identify and confirm the population of key controls in scope for Principle 16.
- Assign owners for monitoring and separate evaluations (RACI).
- Draft monitoring and separate evaluations procedures mapped to Principle 16. 1
- Stand up a single issues log with required fields (severity, owner, CAP, validation).
Days 31–60 (execute and generate evidence)
- Launch ongoing monitoring for the highest-risk controls with documented outputs.
- Run initial separate evaluations on a targeted subset (pick controls with known change or prior issues).
- Start management reporting: open issues, repeat findings, remediation status.
Days 61–90 (expand coverage and harden governance)
- Expand separate evaluations to the remaining key controls per your risk-based plan.
- Add remediation validation as a formal gate for closure.
- Tune monitoring triggers and thresholds based on exceptions observed.
- Prepare an audit-ready package: policy/procedure, evaluation plan, workpapers, issues log, and management reporting.
Where Daydream fits (practical, non-disruptive)
If your bottleneck is coordination and evidence sprawl, Daydream can centralize control ownership, testing workpapers, and issue tracking so Principle 16 evidence is consistently captured and easy to export for auditors. Keep it simple: map key controls to monitoring activities and store proofs in one place.
Frequently Asked Questions
Do we need both ongoing monitoring and separate evaluations to meet Principle 16?
Principle 16 explicitly contemplates “ongoing and/or separate evaluations,” but in practice most mature programs use both because they detect different failure modes. Document your rationale if you emphasize one approach more heavily. 2
What counts as a “separate evaluation” if we don’t have internal audit?
Use a reviewer who is not responsible for performing the control day-to-day, such as a second-line compliance tester or a cross-functional peer review. Document independence, scope, and testing steps so the evaluation is defensible. 1
How detailed should evaluation workpapers be?
Detailed enough that another qualified person can reperform your work from the documentation: what you tested, where evidence came from, and how you concluded pass/fail. If your workpapers cannot stand alone, auditors will treat the test as not performed.
How do we handle automated controls versus manual controls?
Automated controls still need evaluation, but the focus shifts to configuration, change management, access, and evidence that the automation ran as designed. Manual controls need clearer proof of performance (sign-offs, reconciliations, approvals) and are more prone to drift.
Can operational KPIs serve as ongoing monitoring evidence?
Yes, if the KPI is clearly tied to a control objective, has an owner, and triggers defined action when out of tolerance. Retain the KPI output and the follow-up records for exceptions.
What evidence is most commonly missing in audits for Principle 16?
Teams often have a list of controls but no consistent proof of evaluations, no documented exceptions workflow, and no remediation validation. Fix this by standardizing templates and forcing every finding into the issues log with closure evidence. 1
Footnotes
1. COSO Internal Control guidance page.
2. Weaver summary of the COSO 17 principles.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream