Access to data on pre-used data storage space

To meet the “access to data on pre-used data storage space” requirement, you must ensure that any storage capacity you assign to a new cloud customer contains no readable remnants of a prior customer’s data. Operationally, this means enforcing technical sanitization and isolation controls for every storage lifecycle event (provision, resize, deprovision, repair, redeploy) and retaining evidence that the controls work.

Key takeaways:

  • Treat “storage reassignment” as a controlled event with mandatory sanitization or cryptographic separation before reuse.
  • Cover all storage types: block, file, object, ephemeral, snapshots, backups, and managed disks; don’t leave “edge” services out.
  • Keep audit-ready proof: architecture, configurations, logs, test results, and exception handling that show residual data cannot be exposed.

“Access to data on pre-used data storage space” is a narrow control with a high blast radius. If a customer can read residual bits from storage you previously assigned to another customer, you have a cross-tenant data exposure problem. In a public cloud context, that exposure can include personally identifiable information (PII) even if you never intended to store PII on that specific volume or disk.

This ISO/IEC 27018 requirement focuses on what happens at the exact moment storage is assigned (or reassigned) to a customer: the customer must not be able to see any data that was previously on that space. You operationalize this by controlling the storage lifecycle end-to-end: how storage is created, attached, detached, deleted, reintroduced to a pool, repaired, and reissued. You also validate the control with testing that tries to recover prior contents under realistic attacker conditions (including low-level reads where feasible).

If you’re a Compliance Officer, CCO, or GRC lead, your job is to translate this into: (1) explicit engineering requirements, (2) a repeatable evidence set, and (3) a vendor/third-party due diligence story when storage is provided by an underlying cloud or hosting provider.

Regulatory text

ISO/IEC 27018:2019 Annex A.10.13 states: “The public cloud PII processor shall ensure that whenever data storage space is assigned to a cloud service customer, any data previously residing on that storage space is not visible to that customer.” 1

Operator meaning: every time you allocate storage capacity to a customer (including auto-scaling, failover, and replacement events), you must prevent residual data exposure from prior tenants. Prevention can be achieved via validated media sanitization, cryptographic erasure patterns, or architectural designs where “old bits” are not readable to the new customer under any supported access path.

Plain-English interpretation (what the requirement is really asking)

You must be able to say, and prove, all of the following:

  • No cross-customer residual data exposure: a new customer cannot read old customer data from storage space that is newly assigned to them.
  • Applies to reassignment, not just deletion: the risk moment is allocation/assignment, not only end-of-life disposal.
  • Must hold under normal and “clever” access: it’s not enough that the UI looks empty. The control must address feasible recovery paths (e.g., reading uninitialized blocks, inspecting snapshots, or accessing deallocated object fragments depending on your design).

Who it applies to (entity + operational context)

This applies to public cloud PII processors operating services where customers receive storage resources (directly or indirectly) and the provider controls the underlying infrastructure. 1

In scope operationally:

  • Multi-tenant platforms that provision per-customer storage (SaaS with per-tenant databases, managed file stores, managed disks).
  • IaaS/PaaS-like services where customers attach volumes, create buckets, mount file shares, or generate snapshots.
  • Internal platform teams providing “cloud” to business units (BU-to-BU exposure is the same risk, even where ISO/IEC 27018 is not formally required).

Commonly missed in scope:

  • Ephemeral instance storage and scratch disks
  • Snapshots, backup media, replica sets, and disaster recovery copies
  • “Returned” hardware workflows (RMA) and redeploy of repaired devices
  • Caches and accelerators that persist data (content caches, block caches) if they can be reassigned

What you actually need to do (step-by-step)

1) Map every storage assignment path

Create a simple inventory that answers:

  • What storage types exist (block/file/object/DB storage/ephemeral/cache)?
  • Who/what provisions them (control plane, automation, manual ops)?
  • What events cause reassignment (scale, migration, failure replacement, customer delete + pool reuse)?
  • What “hidden” copies exist (snapshots, replicas, logs, backups)?

Deliverable: Storage Lifecycle & Reassignment Data Flow (diagram + table).
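One way to keep this inventory machine-readable rather than a static diagram is a simple record per storage class. This is a sketch only; the field names and the `StorageClassRecord` type are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class StorageClassRecord:
    """One row of a hypothetical storage lifecycle inventory."""
    name: str                       # e.g. "block-volumes"
    kind: str                       # block | file | object | db | ephemeral | cache
    provisioned_by: str             # control plane, automation, or manual ops
    reassignment_events: list[str] = field(default_factory=list)
    hidden_copies: list[str] = field(default_factory=list)  # snapshots, replicas, backups

inventory = [
    StorageClassRecord(
        name="block-volumes",
        kind="block",
        provisioned_by="control-plane",
        reassignment_events=["scale", "failure-replacement", "delete+pool-reuse"],
        hidden_copies=["snapshots", "replicas"],
    ),
]

# Quick completeness check: every storage class must name at least one
# reassignment event, or the lifecycle map has a blind spot.
gaps = [r.name for r in inventory if not r.reassignment_events]
```

A check like `gaps` makes the inventory auditable: an empty list means every class has at least one documented reassignment path.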

2) Define your “no visibility” technical standard

Write an engineering requirement that is testable. Examples of acceptable patterns (choose what fits your architecture):

  • Sanitize-on-release: zeroize or overwrite storage when returned to a free pool before it can be assigned again.
  • Sanitize-on-assign: allocate from a pool that may contain prior data, but perform verified clearing before attach/mount is completed.
  • Cryptographic separation: per-customer encryption with unique keys; on reassignment, destroy/rotate keys so old ciphertext is not recoverable by the new customer (cryptographic erasure). This is only credible if key separation is strict and the threat model matches what the customer can access.

Deliverable: “Storage Reuse Sanitization Standard” (one pager): scope, method, enforcement points, exceptions.
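The cryptographic-separation pattern can be sketched as a per-tenant key store where reassignment destroys the old key. This toy illustration uses a plain dict in place of a real KMS and omits actual encryption; it only shows the state transition that makes cryptographic erasure credible:

```python
import os

class TenantKeyStore:
    """Toy sketch of cryptographic erasure: each tenant gets a unique key,
    and destroying that key on reassignment makes any ciphertext written
    under it unrecoverable by the next tenant of the same blocks.
    (Not real encryption or a real KMS -- illustration only.)"""

    def __init__(self):
        self._keys: dict[str, bytes] = {}

    def issue_key(self, tenant: str) -> bytes:
        key = os.urandom(32)          # unique per-tenant key, never shared
        self._keys[tenant] = key
        return key

    def destroy_key(self, tenant: str) -> None:
        # The erasure event: once the key is gone, prior ciphertext on the
        # reassigned space is indistinguishable from noise.
        self._keys.pop(tenant, None)

    def can_decrypt(self, tenant: str) -> bool:
        return tenant in self._keys

store = TenantKeyStore()
store.issue_key("customer-a")
store.destroy_key("customer-a")       # storage released / reassignment event
```

The pattern only works if `destroy_key` is enforced at the reassignment gate and keys are never shared across tenants; that discipline is what step 3 below makes mandatory.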

3) Implement hard control gates in the provisioning workflow

Make reassignment impossible unless the gate passes. Practical patterns:

  • Control plane must check a sanitization state flag before attach/assign.
  • Infrastructure automation tags assets with last tenant, wipe status, and timestamp, then blocks reuse until wiped.
  • Quarantine path for failed wipes; items cannot re-enter the free pool.

Deliverable: Provisioning guardrail design (policy-as-code, CI/CD checks, or control plane logic).
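The sanitization-state gate described above can be expressed as a small piece of control-plane logic. This is a minimal sketch; the `WipeState` enum, the asset dict shape, and `gate_assignment` are hypothetical names, not a specific platform's API:

```python
from enum import Enum

class WipeState(Enum):
    DIRTY = "dirty"               # returned from a tenant, not yet sanitized
    WIPED = "wiped"               # sanitization completed and verified
    QUARANTINED = "quarantined"   # wipe failed; blocked from reuse

def gate_assignment(asset: dict, new_tenant: str) -> dict:
    """Hypothetical control-plane gate: assignment succeeds only when the
    asset's sanitization state flag is WIPED."""
    if asset["wipe_state"] is not WipeState.WIPED:
        raise PermissionError(
            f"asset {asset['id']} not sanitized "
            f"(state={asset['wipe_state'].value})"
        )
    asset["tenant"] = new_tenant
    return asset

asset = {"id": "vol-123", "wipe_state": WipeState.DIRTY, "tenant": None}
try:
    gate_assignment(asset, "customer-b")   # must fail: asset is dirty
    blocked = False
except PermissionError:
    blocked = True

asset["wipe_state"] = WipeState.WIPED      # wipe job verified out-of-band
gate_assignment(asset, "customer-b")       # now the assignment may proceed
```

The key property is fail-closed behavior: a missing or unexpected state blocks reuse, and only an explicit `WIPED` flag opens the gate.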

4) Cover “secondary storage”: snapshots, replicas, and backups

Residual exposure often happens through indirect paths:

  • A snapshot created under Customer A must not become accessible under Customer B due to mis-tagging, broken ACL inheritance, or account mapping bugs.
  • Backup restore workflows must validate target tenant identity before restore completes.
  • Replication and failover must preserve tenant boundaries.

Deliverable: Tenant-boundary controls for copies (authorization rules + restore/runbook checks).
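The restore-time identity check in the second bullet reduces to a single comparison that must sit on the mandatory path of the restore workflow. A minimal sketch, with hypothetical field names:

```python
def authorize_restore(snapshot: dict, target_tenant: str) -> bool:
    """Hypothetical pre-restore gate: a snapshot may only be restored into
    the tenant that owns it; there must be no silent cross-tenant path."""
    return snapshot["owner_tenant"] == target_tenant

snap = {"id": "snap-42", "owner_tenant": "customer-a"}
ok_same = authorize_restore(snap, "customer-a")    # legitimate restore
ok_cross = authorize_restore(snap, "customer-b")   # must be refused
```

In practice the owner tag would come from an authoritative metadata store, not the snapshot object itself, so that a mis-tagged copy cannot authorize its own restore.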

5) Validate with adversarial-style testing

You need proof beyond “we believe we wipe.” Build tests that attempt to recover prior contents:

  • Allocate a volume, write a known pattern, release it, reallocate, then attempt low-level reads or file carving where feasible.
  • Test each storage class and each region/cluster type if behavior differs.
  • Test failure modes: interrupted wipe, node crash mid-sanitization, rollback events.

Deliverable: Residual Data Exposure Test Plan + test results and pass/fail records.
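The first test above (write a known pattern, release, reallocate, scan) can be prototyped before you have low-level device access. The sketch below simulates a free pool with an in-memory buffer; real tests would read raw blocks or carve files on actual storage:

```python
MARKER = b"TENANT-A-CANARY-0xDEADBEEF"

def simulate_pool(wipe_on_release: bool) -> bytes:
    """Toy residual-exposure test: tenant A writes a canary pattern, the
    volume is released to the pool, and tenant B receives the same backing
    bytes. The in-memory bytearray stands in for a real block device."""
    volume = bytearray(4096)
    volume[100:100 + len(MARKER)] = MARKER      # tenant A writes the canary
    if wipe_on_release:
        volume[:] = b"\x00" * len(volume)       # sanitize-on-release control
    return bytes(volume)                        # the view tenant B gets

def residual_found(view: bytes) -> bool:
    return MARKER in view

# Control case: with no wipe, the canary must be recoverable (proves the
# test itself works). With the wipe enabled, it must not be.
exposed = residual_found(simulate_pool(wipe_on_release=False))
clean = not residual_found(simulate_pool(wipe_on_release=True))
```

Note the control case: a residual-data test that never detects the canary when wiping is disabled proves nothing about the wipe.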

6) Put exceptions on a short leash

If sanitization cannot run (hardware fault, timeouts, operational incident), define:

  • Containment (quarantine asset, disable reuse)
  • Approval authority for any override (rare)
  • Customer impact analysis and notification decision path (if exposure is suspected)

Deliverable: Exception procedure + incident response linkage.
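Quarantine-by-default can be sketched as a single state transition that fires on wipe failure and leaves an auditable record. The function and record fields below are illustrative:

```python
def handle_wipe_failure(asset: dict, event_log: list) -> None:
    """Sketch of quarantine-by-default: a failed wipe removes the asset
    from the free pool and records the event for incident-response
    linkage. Re-entry would require an explicit, logged approval
    (the override path is intentionally not shown here)."""
    asset["state"] = "quarantined"
    asset["reusable"] = False
    event_log.append({
        "asset_id": asset["id"],
        "event": "wipe_failed",
        "action": "quarantined",
    })

event_log: list = []
asset = {"id": "disk-7", "state": "wipe_failed", "reusable": True}
handle_wipe_failure(asset, event_log)
```

The important property is that containment is automatic and unconditional, while the override path (if one exists at all) is manual, approved, and logged.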

7) Extend the control to third parties (your upstream providers)

If you run on an underlying cloud provider, colocation, or storage vendor, you still need assurance. Your due diligence should request:

  • Their media sanitization / cryptographic erasure approach for multi-tenant storage reuse
  • Independent audit evidence (e.g., relevant ISO report excerpts) and description of control scope
  • Shared responsibility boundaries: what you control vs what they control

This is where a platform like Daydream can help a GRC team operationalize the evidence chase: standardize the request set, track responses by storage service, and map third-party claims to your internal control language so audits don’t become a scramble.

Required evidence and artifacts to retain

Keep artifacts that prove both design and operating effectiveness:

Design evidence

  • Storage lifecycle diagrams and reassignment paths
  • Sanitization standard (policy/engineering spec)
  • Architecture showing tenant isolation and key management boundaries (if using encryption-based separation)
  • Provisioning workflow controls (policy-as-code, guardrail logic, config baselines)

Operational evidence

  • Sanitization logs (wipe job IDs, asset IDs, timestamps, outcomes)
  • Control plane decision logs showing “wipe complete” required for assignment
  • Test results showing inability to recover prior data across storage types
  • Change records for storage control changes (approvals, rollout notes)
  • Exception tickets (quarantine records, approvals, postmortems)

Audit note: evidence should be linkable from “storage assigned” back to “sanitized” for a sample of events.
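That linkage check can itself be automated over exported logs. A sketch, assuming hypothetical log record shapes and using simple integers in place of timestamps:

```python
def unproven_assignments(assignments: list, wipe_log: list) -> list:
    """Sample audit check: every assignment must trace back to a completed
    wipe of the same asset that finished before the assignment time.
    Returns the asset IDs that fail the chain."""
    completed = {w["asset_id"]: w["finished_at"] for w in wipe_log if w["ok"]}
    return [
        a["asset_id"] for a in assignments
        if completed.get(a["asset_id"], float("inf")) > a["assigned_at"]
    ]

wipe_log = [
    {"asset_id": "vol-1", "ok": True, "finished_at": 100},
    {"asset_id": "vol-2", "ok": False, "finished_at": 105},  # failed wipe
]
assignments = [
    {"asset_id": "vol-1", "assigned_at": 110},   # wipe proven before assign
    {"asset_id": "vol-2", "assigned_at": 120},   # no successful wipe on record
]
chain_gaps = unproven_assignments(assignments, wipe_log)
```

Running this over a periodic sample turns the audit note into an operating control: any non-empty result is a finding to investigate before an auditor does.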

Common exam/audit questions and hangups

Expect auditors to probe these:

  • “Show me how a disk/volume moves from Customer A to Customer B. Where is the control gate?”
  • “Is sanitization guaranteed before assignment, or best-effort after release?”
  • “How do you handle failed wipes and hardware faults?”
  • “Do snapshots and backups follow the same tenant boundary rules?”
  • “What testing proves a new customer can’t read previous data?”
  • “What do you rely on from third parties, and what proof do you have?”

Hangup: teams present a policy statement with no linkage to actual provisioning logic or logs. Auditors then treat the control as unverified.

Frequent implementation mistakes (and how to avoid them)

  1. Only wiping on customer delete. Reassignment also happens during maintenance, rebalancing, and failure replacement.
    Avoid it: control must live in the provisioning/assignment pipeline, not only in offboarding runbooks.

  2. Ignoring snapshots and replicas. Residual exposure often comes from broken authorization, not block-level reads.
    Avoid it: treat “copies” as storage assignment events with the same “no visibility” bar.

  3. Relying on encryption without key discipline. If keys are shared, reused, or not tenant-scoped, cryptographic erasure is not credible.
    Avoid it: enforce per-tenant keys, strict separation, and demonstrable key destruction/rotation on reassignment.

  4. No negative testing. “We wipe” is not evidence.
    Avoid it: run repeatable residual recovery tests and keep the artifacts.

  5. Exception pathways that silently bypass wiping. Ops shortcuts during incidents can create the exact exposure this control targets.
    Avoid it: quarantine-by-default, and require explicit approvals with logging for any override.

Enforcement context and risk implications

No public enforcement cases were provided in the source material for this requirement, so don’t build your program around a specific regulator narrative. Treat the risk as straightforward: residual storage exposure is a cross-tenant data breach scenario. If PII is involved, the incident can trigger contractual breaches, customer notification obligations, and regulatory scrutiny under applicable privacy and security regimes. ISO/IEC 27018 frames it as a baseline expectation for public cloud PII processing. 1

A practical 30/60/90-day execution plan

First 30 days (Immediate)

  • Assign an owner: usually Cloud Infrastructure or Storage Engineering, with Security Architecture oversight.
  • Build the storage inventory and lifecycle map for all production environments.
  • Document current-state controls: where wiping occurs, where encryption boundaries exist, and where logs live.
  • Identify known gaps: snapshots, ephemeral storage, failed wipe handling, manual ops paths.

By 60 days (Near-term)

  • Publish the sanitization standard with clear “must” statements tied to assignment events.
  • Implement or tighten provisioning gates so storage cannot be assigned without passing the sanitization state.
  • Create quarantine workflows for failed wipes and repair/redeploy assets.
  • Stand up a minimal evidence pack: sample logs + configuration + workflow screenshots/exports.

By 90 days (Operationalize + prove)

  • Run residual data exposure tests across all storage types and document results.
  • Add monitoring: alerts on wipe failures, reuse attempts without wipe, and exception approvals.
  • Formalize third-party due diligence for any upstream storage provider dependencies; track evidence in a system of record (Daydream if you need structured intake and audit-ready mapping).
  • Schedule recurring control review tied to infrastructure change management (new storage classes, new regions, new replication modes).

Frequently Asked Questions

Does this requirement apply if we “logically delete” customer data but don’t wipe immediately?

The requirement triggers when storage space is assigned to a customer. If logical delete leads to space being reassigned, you need a control that prevents visibility of prior data at assignment time. 1

Is encryption alone enough to meet “not visible”?

It can be, if encryption design prevents the new customer from ever decrypting old ciphertext and keys are strictly segregated per customer. You still need evidence of key separation and a reliable key destruction/rotation process tied to reassignment events. 1

What storage types should we include in scope?

Include any storage space that can be assigned to a cloud service customer: block, file, object, managed databases’ underlying storage where customer assignment changes, ephemeral disks, snapshots, replicas, and backups if they can be restored or mapped across tenants. The control goal is “no prior data visible upon assignment.” 1

What evidence do auditors usually accept?

Auditors look for a chain: documented standard, enforced provisioning gates, logs proving sanitization completed before assignment, and test results attempting recovery of prior data. Keep exceptions and failure-handling tickets because those show whether the control breaks under stress. 1

How do we handle storage returned from hardware repair (RMA)?

Treat repaired or returned components as untrusted until sanitization state is re-established. Put them in quarantine, re-run sanitization or cryptographic reset as appropriate, and block any assignment path that bypasses that workflow. 1

We run on a hyperscaler. Can we inherit this control?

You can rely on upstream controls for underlying media reuse, but you still need due diligence evidence and you must ensure your own tenant-mapping, snapshot access controls, and restore workflows cannot expose residual data. Track third-party assurances and map them to your internal control language for audits. 1

Footnotes

  1. ISO/IEC 27018:2019 Information technology — Security techniques — Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors
