Evaluation of BC documentation and capabilities
ISO 22301 Clause 8.6 requires you to routinely test, review, and update your business continuity (BC) documentation and your actual continuity capabilities, based on exercises, real incidents, and scheduled reviews. Operationally, you need a repeatable review cadence, defined evaluation criteria, documented outcomes, and tracked corrective actions with evidence.
Key takeaways:
- Treat BC plans as controlled documents tied to evidence from exercises and incidents, not static binders.
- Convert every exercise and disruption into documented findings, owners, deadlines, and retesting.
- Examiners look for traceability: scenario → test/incident → issues → fixes → updated plans → re-validation.
“Evaluation of BC documentation and capabilities” is the part of ISO 22301 that separates a written plan from a working continuity program. Clause 8.6 expects you to prove that your BC documentation stays current and that your organization can actually execute it under stress, across people, process, and technology.
For a Compliance Officer, CCO, or GRC lead, the fastest path is to operationalize this as a closed-loop system: run exercises, conduct post-incident reviews, perform periodic plan reviews, log findings, drive corrective actions, update documents, and then validate that the fixes worked. That loop must be visible in your evidence.
This requirement often fails in practice for predictable reasons: exercises are performed but not translated into plan updates; incident reviews focus on IT root cause and ignore continuity performance; owners “accept” gaps without risk sign-off; and document control is weak, so no one can show which plan version was tested. The guidance below gives you a requirement-level implementation approach you can deploy quickly, including artifacts to retain and audit questions to prepare for.
Regulatory text
ISO 22301:2019 Clause 8.6 states: “The organization shall evaluate and review BC documentation and capabilities through post-incident reviews, exercises, and regular reviews.” 1
Operator meaning: you must have defined mechanisms to (1) review what happened after disruptions, (2) test plans through exercises, and (3) conduct routine reviews even when nothing goes wrong. Each mechanism must produce documented outcomes that lead to updates in BC documentation and measurable improvements in capability. 1
Plain-English interpretation (what the requirement is really asking)
You need to be able to answer, with evidence:
- Are our BC documents accurate right now? (contacts, roles, dependencies, procedures, recovery assumptions)
- Can we execute them in realistic conditions? (decision-making, communications, workarounds, recovery steps)
- Do we learn and improve? (findings become actions; actions become updates; updates are re-tested)
Clause 8.6 is not satisfied by “we did a tabletop” or “we reviewed the plan.” It expects structured evaluation with documented results and tracked remediation that keeps plans aligned to your operating context. 1
Who it applies to (entity and operational context)
Applies to any organization implementing ISO 22301, including centralized and federated operating models. 1
In practice, it applies across:
- Business units that own critical processes and manual workarounds.
- IT and cyber teams responsible for recoverability and technical restoration steps.
- Facilities, HR, Legal/Compliance, Communications for crisis management procedures.
- Third parties that support critical services (outsourcers, SaaS, call centers, logistics). You must evaluate continuity capabilities that depend on them because your capability is only as strong as your dependencies.
If your BCMS scope includes specific sites, products, or services, the evaluation mechanisms must cover that scoped environment and the dependencies that enable it.
What you actually need to do (step-by-step)
Use this as a practical operating model. You can run it in spreadsheets, GRC tooling, or a purpose-built platform, but the steps and evidence must exist.
Step 1: Define your evaluation “triggers” and minimum review set
Create a simple matrix that defines what events require evaluation:
- Post-incident reviews: any disruption that triggers BC procedures, crisis communications, alternate site/remote work activation, major application outage, or significant third-party outage affecting delivery.
- Exercises: scheduled table-top and operational tests (communications drills, restore tests, manual workaround tests).
- Regular reviews: periodic document and capability review, plus reviews driven by change (org restructure, new systems, supplier changes, new locations, major process redesign).
Deliverable: a one-page BC Evaluation and Review Procedure with triggers, roles, and outputs. 1
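The trigger matrix can be kept as structured data so the required mechanism and output are looked up consistently. A minimal sketch, assuming you track events by category; the category names, mechanism labels, and output descriptions below are illustrative, not prescribed by ISO 22301:

```python
# Illustrative trigger matrix: maps an event category to the evaluation
# mechanism it requires and the output that must be retained as evidence.
# All names here are examples, not terms from the standard.
TRIGGER_MATRIX = {
    "bc_procedure_activated":   ("post_incident_review", "PIR report with BC section"),
    "major_application_outage": ("post_incident_review", "PIR report with BC section"),
    "scheduled_tabletop":       ("exercise", "Exercise results report"),
    "restore_test":             ("exercise", "Exercise results report"),
    "periodic_review_due":      ("regular_review", "Review sign-off record"),
    "supplier_change":          ("regular_review", "Change-driven review record"),
}

def required_evaluation(event: str) -> tuple[str, str]:
    """Return (mechanism, required output) for an event; fail loudly if the
    event has no defined trigger, so gaps in the matrix surface early."""
    try:
        return TRIGGER_MATRIX[event]
    except KeyError:
        raise ValueError(f"No evaluation trigger defined for event: {event}")
```

Failing loudly on unknown events is deliberate: an event that maps to nothing is itself a gap in the procedure.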
Step 2: Establish evaluation criteria (how you judge “works vs. doesn’t”)
Define criteria you will apply consistently, such as:
- Accuracy of contacts, roles, escalation paths, and decision rights
- Completeness of step-by-step procedures and prerequisites
- Dependency mapping correctness (systems, data, facilities, third parties)
- Achieved recovery performance compared to your stated objectives (use your internal targets)
- Effectiveness of internal/external communications and coordination
- Evidence of training/role readiness for key personnel
- Documentation control: versioning, approvals, accessibility during outage
Deliverable: an Exercise & Incident Evaluation Checklist used every time.
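One way to apply the checklist consistently is to score every criterion pass/fail and refuse partial scoring, so each exercise or incident produces a complete, comparable record. A sketch with hypothetical criterion names mirroring the list above:

```python
# Criteria mirror the evaluation checklist above; names are illustrative.
CRITERIA = [
    "contacts_accurate",
    "procedures_complete",
    "dependencies_correct",
    "recovery_targets_met",
    "communications_effective",
    "personnel_trained",
    "document_control_ok",
]

def evaluate(results: dict[str, bool]) -> list[str]:
    """Return the list of failed criteria. Every criterion must be scored;
    an unscored criterion is treated as an error, not an implicit pass."""
    missing = [c for c in CRITERIA if c not in results]
    if missing:
        raise ValueError(f"Unscored criteria: {missing}")
    return [c for c in CRITERIA if not results[c]]
```

Each failed criterion then becomes a candidate finding for the register in Step 6.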
Step 3: Run exercises with test objectives that map to capabilities
For each exercise:
- Define scenario, scope, objectives, and participants.
- Identify the BC documents being validated (plan name/version).
- Capture observed issues in real time (scribe role).
- End with a structured debrief held within a defined timeframe, while observations are still fresh.
Make at least some exercises capability-focused, not just discussion-based. Example: test whether teams can access plan documentation during a simulated identity outage, or whether contact trees work if email is unavailable.
Deliverables:
- Exercise plan (scenario, objectives, attendee list)
- Attendance record
- Results report with findings and assigned actions
Step 4: Conduct post-incident reviews that focus on continuity performance
After a real disruption, run a post-incident review that explicitly covers:
- What continuity actions were taken and when (activation timeline)
- What worked, what failed, what was unclear
- Where documentation was wrong or missing
- Where capabilities were insufficient (skills, tooling, access, third-party support)
- Communication effectiveness (internal, customer, regulator if applicable)
Avoid limiting PIRs to technical root cause. Clause 8.6 cares about BC documentation and capabilities, so ensure BC owners sign off on the review outcomes.
Deliverables:
- PIR report with a BC section
- Updated issue/action log entries
- Evidence of plan updates triggered by the PIR
Step 5: Perform regular reviews with document control and change management
Run structured reviews even without incidents:
- Confirm plan owners and approvers
- Validate contact lists and escalation paths
- Re-confirm dependencies, including third parties
- Validate that referenced tooling and access methods still work
- Check that procedures align with current architecture and operating model
- Confirm copies are accessible during outages (offline access, alternate authentication)
Tie reviews to a document control process: version changes, approvals, distribution, and attestation by plan owners.
Deliverables:
- Review meeting minutes or sign-off record
- Updated plan versions with change logs
- Owner attestation that plans reflect current operations
Step 6: Track corrective actions to closure (and re-test meaningful fixes)
Create a single BC findings and actions register that aggregates issues from:
- exercises
- incidents
- regular reviews
- major change initiatives
Each record should include: finding, risk statement, affected plan(s), owner, target date, status, evidence of completion, and whether re-testing is required.
Retest is where many programs break. Require re-validation for fixes that affect recovery procedures, communications paths, or access methods.
Deliverable: closed-loop action tracking with completion evidence and retest notes.
Step 7: Report to management in operational terms
Management reporting should answer:
- What was tested/reviewed
- Key gaps and trends
- Actions overdue and risk accepted
- Plans updated and pending approvals
- Capability improvements verified
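The report above is a straightforward aggregation over the findings register. A sketch assuming the register is a list of records with `status`, `target_date`, and `retest_evidence` fields (key names are illustrative):

```python
from datetime import date

def management_report(register: list[dict], today: date) -> dict:
    """Summarize a findings register in the operational terms management
    needs: what is open, what is overdue, and what is verifiably fixed.
    Record keys are illustrative, not a prescribed schema."""
    open_items = [r for r in register if r["status"] != "closed"]
    overdue = [r for r in open_items if r["target_date"] < today]
    verified = [r for r in register
                if r["status"] == "closed" and r.get("retest_evidence")]
    return {
        "total": len(register),
        "open": len(open_items),
        "overdue": len(overdue),
        "verified_improvements": len(verified),
    }
```

Counting only retested closures as "verified improvements" keeps the reporting honest about which fixes are actually proven.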
If you use a platform like Daydream, configure it so exercises, PIRs, document versions, and corrective actions are linked. Audits move faster when the traceability is native rather than manually assembled.
Required evidence and artifacts to retain
Keep evidence in a way that supports traceability and version control:
Core documents
- BC Evaluation and Review Procedure (covers exercises, PIRs, regular reviews)
- BC plan inventory with owners, versions, and approval dates
- Document control records (version history, approvals, distribution)
Exercise evidence
- Annual/rolling exercise schedule
- Exercise scripts, objectives, participant lists
- Results reports, debrief notes, attendance logs
- Screenshots/logs showing communications tests (if applicable)
Incident review evidence
- PIR reports including BC performance evaluation
- Timeline of continuity decisions and actions
- Action items created from PIRs and closure evidence
Action management evidence
- Findings/actions register
- Evidence attachments per action (updated plan excerpt, training record, system configuration, third-party confirmation)
- Retest records for closed actions where required
Change-driven review evidence
- Records showing plans reviewed after material changes (system migrations, supplier changes, reorganizations)
Common exam/audit questions and hangups
Expect these questions, and prepare direct evidence paths:
- Show me the last exercise and the plan version tested. Auditors want document/version traceability.
- What changed in your plans as a result of exercises/incidents? They expect visible updates, not generic “lessons learned.”
- How do you ensure regular reviews occur, even without disruptions? Provide the cadence mechanism and completion records.
- How do you track corrective actions to closure? They will sample actions and ask for proof.
- How do you evaluate third-party continuity dependencies? They will probe outsourced processes that are “critical on paper.”
- Who can declare BC activation and who was trained? Capability includes decision rights and readiness.
Hangup to avoid: producing many artifacts that do not connect. A smaller set of linked evidence often audits better than a large pile of disconnected files.
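Linked evidence can also be checked mechanically: given an exercise record, walk the chain and list every broken link before an auditor finds it. A sketch with hypothetical record shapes (the keys below are assumptions, not a standard schema):

```python
def trace_gaps(exercise: dict) -> list[str]:
    """Return broken links in the chain
    scenario -> test -> findings -> fixes -> re-validation.
    Record shapes are illustrative."""
    gaps = []
    if not exercise.get("plan_version"):
        gaps.append("plan version not recorded for the test")
    for f in exercise.get("findings", []):
        action = f.get("action")
        if action is None:
            gaps.append(f"finding {f['id']}: no corrective action")
        elif action.get("status") == "closed" and not action.get("evidence"):
            gaps.append(f"finding {f['id']}: closed without evidence")
    return gaps
```

An empty result is the "small set of linked evidence" the hangup above describes; each gap is a disconnected artifact to fix before the audit.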
Frequent implementation mistakes (and how to avoid them)
- Mistake: exercises are “checkbox” events. Fix: require written objectives tied to specific capabilities, then score outcomes against the evaluation checklist.
- Mistake: PIRs focus only on root cause. Fix: add a mandatory continuity performance section, owned by BC, with documented plan impacts.
- Mistake: no action ownership or due dates. Fix: assign a single accountable owner, a target completion date, and required evidence for closure.
- Mistake: plan updates happen but approvals lag. Fix: define approval SLAs internally, and treat unapproved plans as an operational risk requiring escalation.
- Mistake: third-party dependencies are assumed. Fix: for each critical service, document which third party enables recovery steps and validate contacts/escalation paths during exercises.
Enforcement context and risk implications
No public enforcement cases were provided in the approved source catalog, so you should treat this requirement as a certification and audit risk rather than a cited enforcement trend.
Operationally, weak evaluation and review creates predictable failure modes:
- Plans drift from reality after org and technology change.
- Teams cannot execute under degraded conditions.
- Repeat incidents recur because fixes are not tracked or verified.
For regulated organizations, those failure modes often cascade into customer impact, contractual breaches, and regulator scrutiny, even if the ISO clause itself is not the enforcement hook.
Practical 30/60/90-day execution plan
Use phased execution without assuming fixed durations for every organization.
Next 30 days (stabilize the loop)
- Publish the BC Evaluation and Review Procedure aligned to Clause 8.6. 1
- Create the evaluation checklist and standardized templates (exercise report, PIR with BC section, action log).
- Inventory BC plans, owners, current versions, and last review dates.
- Stand up a single findings/actions register (even a controlled spreadsheet works).
Next 60 days (prove it works on real artifacts)
- Run at least one exercise in a high-impact area and produce a full results report with actions.
- Perform at least one regular review cycle for a set of priority plans (choose the ones most likely to be audited).
- Close a first batch of actions with evidence, and update plan versions with change logs.
Next 90 days (make it repeatable and auditable)
- Expand the exercise and review coverage to remaining scoped areas based on risk.
- Add change-driven review triggers into your project/change intake (new systems, major supplier changes).
- Implement management reporting with trend visibility: recurring findings, overdue actions, and plan currency.
- If tool support is needed, configure Daydream (or your GRC system) to link: plan version → exercise/PIR → findings → corrective actions → updated plan → retest record.
Frequently Asked Questions
What counts as “regular reviews” under ISO 22301 Clause 8.6?
Regular reviews are scheduled evaluations of BC documentation and capabilities that occur even without incidents or exercises. They should confirm plan accuracy, ownership, dependencies, and accessibility, and they must produce documented outcomes. 1
Do tabletop exercises satisfy the “capabilities” part of the requirement?
They can, if the tabletop evaluates real execution readiness with clear objectives, roles, decision-making, and documented findings. Add operational elements where possible, such as communications drills or access checks, to validate capability beyond discussion.
What is the minimum evidence an auditor will expect to see?
Expect to show an exercise or incident review report, proof of regular plan reviews, and a corrective action log with closure evidence. Auditors also commonly request plan version history to confirm that updates were controlled and approved.
How do we handle third-party dependencies in BC evaluation?
Treat third-party services that support critical processes as part of your capability. Validate contacts, escalation paths, and recovery assumptions during exercises and record gaps as corrective actions.
Who should own post-incident reviews for continuity?
Incident management may run the overall PIR, but BC should own the continuity-performance section and the actions that require plan/procedure changes. Make BC sign-off explicit in the PIR workflow.
If we discover a gap but accept the risk, does that satisfy Clause 8.6?
You can document risk acceptance, but you still need evidence that the gap was identified through an evaluation mechanism and dispositioned through governance. Auditors will look for a clear rationale, approver authority, and impact awareness tied to the affected plans. 1
Footnotes
1. ISO 22301:2019, Security and resilience — Business continuity management systems — Requirements.