Annex A 5.29: Information Security During Disruption

Annex A 5.29 (Information Security During Disruption) expects you to preserve information security controls during business disruption, not suspend them. Operationalize it by defining “disruption” scenarios, pre-authorizing secure degraded modes, assigning decision rights, and capturing evidence from tests and real events that security protections remained effective.

Key takeaways:

  • Define secure “degraded operations” so teams can keep working without bypassing controls.
  • Link BCP/DR to ISMS controls, roles, and approvals so security decisions during incidents are deliberate and logged.
  • Collect repeatable evidence from exercises and incidents to prove 5.29 is operating, not just documented.

Annex A 5.29 sits at the uncomfortable intersection of business continuity and security operations: the moment when systems are down, staff are stressed, and normal processes feel “optional.” Auditors and customers care because disruption is when sensitive data often gets mishandled: emergency accounts get created, logging gets turned off, laptops become the new “data center,” and third parties are brought in quickly without guardrails.

For a Compliance Officer, CCO, or GRC lead, the practical goal is simple: make sure disruption procedures do not create a security free-for-all. You need a small set of pre-made decisions (what can change, who can approve it, how it’s time-bounded, and what evidence gets captured) so that incident response, disaster recovery, and continuity actions stay inside your security risk tolerance.

This page translates the Annex A 5.29 (Information Security During Disruption) requirement into an implementation blueprint: applicability, step-by-step controls, audit-ready artifacts, common failure modes, and an execution plan you can assign this week. Sources for framework context are ISO’s overview and a public Annex A index summary. 1

Regulatory text

Provided excerpt: “ISO/IEC 27001:2022 Annex A control 5.29 implementation expectation (Information Security During Disruption).” 1

What the operator must do

You must ensure information security is maintained when your organization is operating under disruption conditions (for example, outage, loss of facility, ransomware response, loss of key staff, telecom failure). Practically, that means:

  • You define how security controls behave in “degraded mode.”
  • You pre-approve emergency changes with guardrails (who, what, how long, how logged).
  • You test and exercise those arrangements.
  • You keep evidence that security controls stayed in effect during disruption and recovery activities. 1

Plain-English interpretation (what 5.29 really demands)

Annex A 5.29 expects continuity plans to include security, not just uptime. Your business can make operational tradeoffs during disruption, but you still have to protect confidentiality, integrity, and availability at a level consistent with your risk decisions.

In practice, auditors look for two things:

  1. Design: Documented, scenario-based security behavior during disruption (including fallbacks and temporary exceptions).
  2. Operation: Proof that you actually run it: exercises, incident records, change approvals, and post-incident reviews that show security was considered and controlled. 1

Who it applies to (entity + operational context)

Entity scope

Applies to any organization implementing ISO/IEC 27001 that needs to demonstrate Annex A control coverage within its ISMS. This is especially high-stakes for service organizations with customer-facing availability commitments and outsourced processing. 2

Operational contexts that trigger 5.29 controls

Build your control so it explicitly covers:

  • Major incidents (security incidents and non-security outages)
  • Disaster recovery and failover events
  • “Manual workarounds” periods (paper processing, spreadsheet queues, email-based approvals)
  • Emergency third-party access or rapid onboarding of third parties
  • Facility disruptions (building closure, power loss)
  • Workforce disruptions (loss of key operators, widespread endpoint failures)

If you can’t define triggers, you can’t prove you executed the control.

What you actually need to do (step-by-step)

Use this as a build sheet. Assign each step to an owner and track it like a project until it becomes routine.

Step 1: Define “disruption” and the security objectives for disrupted states

Create a short standard that answers:

  • What events declare “disruption mode” (who declares it, where it’s recorded)?
  • What minimum security outcomes must still hold (authentication, authorization, logging, encryption, segregation of duties, data handling)?
  • What can be temporarily relaxed, under what approvals, and with what compensating controls?

Deliverable: Information Security During Disruption Standard mapped to your ISMS control set. 1
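The declaration logic in Step 1 can be sketched in code. This is a minimal illustration, not a prescribed implementation: the trigger names, control names, and record fields are assumptions chosen for the example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative trigger catalog and minimum-outcome control set; these names
# are assumptions for the sketch, not text from any standard.
TRIGGERS = {"dr_failover", "major_outage", "ransomware_response", "facility_loss"}
MINIMUM_CONTROLS = {"authentication", "authorization", "logging", "encryption"}

@dataclass
class DisruptionDeclaration:
    trigger: str          # which pre-defined scenario fired
    declared_by: str      # role with declaration authority
    declared_at: datetime
    relaxed_controls: set = field(default_factory=set)  # temporary exceptions

    def validate(self) -> None:
        # A declaration must name a recognized trigger, and no minimum-outcome
        # control may appear in the relaxed set.
        if self.trigger not in TRIGGERS:
            raise ValueError(f"unrecognized trigger: {self.trigger}")
        forbidden = self.relaxed_controls & MINIMUM_CONTROLS
        if forbidden:
            raise ValueError(f"cannot relax minimum controls: {sorted(forbidden)}")

decl = DisruptionDeclaration(
    trigger="dr_failover",
    declared_by="incident-commander",
    declared_at=datetime.now(timezone.utc),
    relaxed_controls={"change_freeze"},
)
decl.validate()  # passes: known trigger, no minimum control relaxed
```

The point of the structure is that relaxations are enumerated and checked at declaration time, so an auditor can see exactly what was allowed to change and what was not.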

Step 2: Design secure degraded modes (pre-approved patterns)

Document pre-approved operational patterns so teams don’t improvise. Examples:

  • Emergency access pattern: break-glass accounts with MFA, time-bound access, and mandatory ticket linkage.
  • Logging preservation pattern: if central SIEM is down, local logs are retained and forwarded later with chain-of-custody notes.
  • Manual processing pattern: if automation is down, manual approvals require dual authorization and a controlled intake channel.

Keep this practical: teams need “do this, not that” instructions.

Deliverable: Degraded Mode Runbooks for key systems and processes.

Step 3: Tie BCP/DR runbooks to security decision rights

During disruption, confusion about who can approve exceptions causes silent control failure. Define:

  • Who can approve emergency changes (IT, Security, Business owner)
  • Who can approve security exceptions (Security with risk acceptance authority)
  • Who can engage third parties and grant access
  • Who performs after-action security review and closes temporary access

Deliverable: RACI for disruption security decisions plus an emergency change workflow.

Step 4: Control emergency changes so they are traceable and reversible

Minimum operational controls:

  • Emergency change tickets are required (even if created after service restoration, with a clear timestamp note).
  • Every temporary change has an owner, an expiry condition, and a rollback plan.
  • Post-restoration review confirms reversals happened (accounts removed, firewall rules cleaned up, debug logging turned off only when appropriate, temporary data stores purged per policy).

Deliverable: Emergency Change Procedure aligned to your change management process.
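The post-restoration review in Step 4 is essentially a sweep for temporary changes that were never reversed. A minimal sketch, assuming a simple record shape (the field names and integer clock are illustrative):

```python
def open_reversals(changes, now):
    """Return emergency changes past expiry whose rollback is unconfirmed."""
    return [
        c for c in changes
        if c["expires_at"] <= now and not c["rolled_back"]
    ]

# Two example emergency change records; each carries an owner, an expiry
# condition, and a rollback plan, per the minimum controls above.
changes = [
    {"id": "EMC-1", "owner": "netops", "expires_at": 5,
     "rollback_plan": "remove temp firewall rule", "rolled_back": True},
    {"id": "EMC-2", "owner": "dba", "expires_at": 3,
     "rollback_plan": "drop temp export table", "rolled_back": False},
]

# Integer clock for the sketch; a real system would compare timestamps.
overdue = open_reversals(changes, now=10)
assert [c["id"] for c in overdue] == ["EMC-2"]
```

Running this sweep as part of service restoration turns “confirm reversals happened” from a memory exercise into a checklist with a concrete output.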

Step 5: Validate third-party access and dependencies during disruption

Disruption often expands third-party involvement (IR firms, hosting support, replacement suppliers). Add controls:

  • Pre-approved third-party list for emergencies with contract and access prerequisites.
  • Fast-path due diligence checklist for “new in crisis” third parties (data access scope, secure channels, identity proofing, logging requirements).
  • Rules for remote access (MFA, device posture expectations, session recording where feasible).

Deliverable: Emergency third-party onboarding checklist and pre-approved provider roster.

Step 6: Exercise, test, and capture evidence

You need repeatable proof. Build exercises that force security tradeoffs:

  • DR test where primary identity provider is unavailable.
  • Outage where the ticketing system is down.
  • Tabletop scenario: ransomware containment while keeping logs and evidence protected.

Deliverable: Exercise plan, exercise results, and remediation tracking that includes security control outcomes. 1

Step 7: Post-incident security review is mandatory

After any declared disruption:

  • Confirm whether any security controls were bypassed.
  • Verify temporary access was revoked.
  • Confirm data handling in manual workarounds.
  • Record lessons learned and update runbooks.

Deliverable: Post-incident security review template and completed reviews.
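The four review items above can be enforced as a “definition of done” gate. A small sketch, where the checklist keys are assumptions mirroring the bullets:

```python
# Checklist keys are illustrative names for the four confirmations above.
REQUIRED_CHECKS = (
    "controls_bypassed_assessed",
    "temporary_access_revoked",
    "manual_data_handling_confirmed",
    "lessons_recorded",
)

def review_complete(review: dict) -> bool:
    """A post-incident review closes only when every check is affirmed."""
    return all(review.get(check) is True for check in REQUIRED_CHECKS)

review = {check: True for check in REQUIRED_CHECKS}
assert review_complete(review)

review["temporary_access_revoked"] = False
assert not review_complete(review)  # cannot close with open revocations
```

Wiring a gate like this into your incident tooling is one way to make the review mandatory in practice, not just in policy.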

Required evidence and artifacts to retain

Auditors commonly accept a tight evidence set if it shows control operation across time.

For each artifact, here is what it proves and what “good” looks like:

  • Information Security During Disruption standard. Proves: defined expectations. Good: clear triggers, minimum controls, exception rules.
  • Degraded mode runbooks. Proves: teams had safe defaults. Good: system-specific steps, not generic statements.
  • Emergency change records. Proves: traceability under pressure. Good: tickets link to approvals, scope, expiry, rollback.
  • Break-glass access logs. Proves: access control continuity. Good: time-bounded access, MFA evidence, review sign-off.
  • DR/BCP test results. Proves: you tested under disruption. Good: findings and fixes tracked to closure.
  • Incident postmortems with security section. Proves: you reviewed what happened. Good: explicit control impacts and remediation.

A recurring gap is “we have BCP” but no proof security remained in force during BCP execution. 1

Common exam/audit questions and hangups

Expect these lines of questioning:

  • “Show me your definition of disruption mode and who can declare it.”
  • “What security controls remain mandatory during DR?”
  • “How do you prevent emergency accounts from becoming permanent?”
  • “Show evidence from a test or real incident where you operated in degraded mode.”
  • “How are third parties handled during emergencies?”
  • “How do you ensure logs and evidence are preserved when monitoring tools are impaired?”

Hangup: teams present a DR plan focused on RTO/RPO language but cannot map it to access control, logging, cryptographic controls, or data handling requirements.

Frequent implementation mistakes (and how to avoid them)

  1. Relying on “common sense” during crisis.
    Fix: write short runbooks with explicit do/don’t steps and decision rights.

  2. Emergency access without lifecycle control.
    Fix: require expiry conditions, review, and formal closure tasks tied to each emergency access grant.

  3. Logging and evidence handling are afterthoughts.
    Fix: include “security telemetry fallback” steps and evidence preservation in every disruption runbook.

  4. Manual workarounds create shadow data stores.
    Fix: pre-define approved tools and storage locations, then require reconciliation and cleanup after restoration.

  5. Third-party expansion without guardrails.
    Fix: maintain an emergency provider roster and a fast-path due diligence checklist that still enforces access boundaries.

Enforcement context and risk implications

No public enforcement cases are provided in the source catalog for this requirement. Treat the risk as practical and contractual: disruption-driven control failures commonly lead to customer notification obligations, audit findings, and breach exposure because disruptions drive improvisation. Your objective under ISO 27001 is auditability: documented expectations plus evidence of operating effectiveness. 1

Practical 30/60/90-day execution plan

Use phased execution so you can show momentum quickly, then harden.

First 30 days (Immediate)

  • Inventory disruption scenarios relevant to your services (outage, DR, cyber incident, manual processing).
  • Draft and approve a one-page “security in disruption” standard: triggers, minimum controls, exception rules, decision rights.
  • Identify systems that need degraded mode runbooks (identity, core apps, logging/SIEM, customer data stores).
  • Start evidence design: where artifacts live, naming convention, and who owns capture.

By 60 days (Near-term build-out)

  • Publish degraded mode runbooks for highest-impact systems.
  • Implement break-glass process with time bounds and review workflow.
  • Add emergency third-party onboarding checklist and pre-approved provider roster.
  • Update emergency change procedure so tickets and approvals are consistently captured.

By 90 days (Operationalize and prove)

  • Run at least one tabletop and one technical exercise that forces degraded mode decisions.
  • Produce completed evidence packets: exercise results, tickets, access logs, and remediation tasks.
  • Add post-incident security review to your incident management definition of done.
  • Review outcomes with leadership and adjust risk acceptance thresholds.

Where Daydream fits naturally

Most teams don’t fail on intent; they fail on evidence consistency. Daydream helps you map Annex A 5.29 to documented control operation and set up recurring evidence capture so your next audit doesn’t depend on hunting down incident tickets and screenshots across tools. 1

Frequently Asked Questions

What counts as a “disruption” for Annex A 5.29?

Treat disruption as any declared event where you operate outside normal procedures (outage, DR failover, manual workaround period, or emergency staffing constraints). Define explicit triggers and who can declare the mode so you can prove consistent execution.

Do we need separate policies for 5.29, or can we embed it in BCP/IR documents?

Embedding is fine if the documents clearly state minimum security controls during disruption, exception approvals, and evidence requirements. Auditors mainly care that the requirement is unambiguous and demonstrably operating. 1

How do we handle break-glass access without slowing incident response?

Pre-approve the mechanism and keep the steps short: strong authentication, time-bound access, ticket linkage, and mandatory review after restoration. Speed comes from preparation, not skipping control.

What evidence is most persuasive in an ISO 27001 audit for 5.29?

Exercise outputs and real incident records that show secure degraded operations: emergency change tickets, access logs, postmortems with security impacts, and proof you revoked temporary access. Pair that with a short standard defining disruption expectations. 1

How does 5.29 affect third-party access during an outage?

It pushes you to pre-define guardrails for emergency third-party access: who can onboard, what due diligence is required, how access is scoped, and how you log and later revoke it. Keep a pre-approved list for common emergency needs.

We’re mid-incident and our normal ticketing system is down. Are we automatically noncompliant?

No, but you need a documented fallback method for traceability (for example, a designated emergency log channel) and a requirement to reconstruct formal records after restoration with clear timestamps and approvers.

Footnotes

  1. ISO/IEC 27001 overview; ISMS.online Annex A control index

  2. ISO/IEC 27001 overview

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream