TSC-A1.3 Guidance

To meet the TSC-A1.3 guidance requirement, you must run reliable backups, prove you can restore them, and store backup copies offsite so service availability can be recovered after accidental loss, ransomware, or a site outage. Operationalize this by defining backup scope and RPO/RTO targets, automating backups, testing restores, and retaining evidence that your process works.

Key takeaways:

  • Backups without verified restores usually fail the audit; plan and record restore testing.
  • “Offsite” must be resilient to primary-site failure and access compromise, not just “in another folder.”
  • Evidence matters as much as engineering: policies, logs, test results, and exception handling drive SOC 2 outcomes.

TSC-A1.3 sits in the Availability criteria of the AICPA Trust Services Criteria (TSC) used in SOC 2 examinations. The requirement is short, but auditors test it deeply because backup and recovery are the last line of defense when everything else fails. For most service organizations, this criterion becomes a practical program that spans infrastructure, application data, identity access, incident response, and third-party oversight (for cloud backup providers, managed databases, or colocation).

Your goal is straightforward: (1) ensure critical data is backed up at a frequency that matches business needs, (2) ensure you can recover data and systems within agreed targets, and (3) ensure backups are protected and stored offsite so a single event does not take out both production and recovery copies. The operational challenge is scoping what “data” means in your environment (databases, object storage, configs, secrets, code, customer content, logs) and proving that your approach works consistently, not just on paper.

This page translates TSC-A1.3 into an execution plan a CCO, GRC lead, or control owner can implement quickly and defend during a SOC 2 audit.

Regulatory text

Excerpt (TSC-A1.3): “The entity provides for data backup, recovery, and offsite storage.” 1

Operator meaning: You need a defined, operating capability to (a) create backups of in-scope data and system components, (b) restore them within required timeframes, and (c) keep backup copies offsite so recovery remains possible if the primary environment is unavailable or compromised. Auditors typically expect documented procedures, consistent execution, monitoring, and evidence of periodic restore testing aligned to system criticality.

Plain-English interpretation (what the auditor is really looking for)

Auditors assess whether your backup and recovery program is designed and operating effectively for the systems in SOC 2 scope. In practice, they will try to answer:

  • Do you know what must be backed up to run the service again?
  • Are backups happening as scheduled, for all in-scope environments?
  • Are backups protected against deletion, tampering, and unauthorized access?
  • Can you restore within the business’s recovery expectations (RPO/RTO)?
  • Is “offsite” real, meaning a primary-site outage or account compromise does not destroy your recovery path?

A clean pass usually requires both strong engineering and clean control evidence.

Who it applies to (entity + operational context)

Applies to: Any organization undergoing a SOC 2 examination where Availability is in scope, including SaaS, fintech, healthcare software providers, managed services, and internal shared services reporting to customers. 1

Operational contexts that trigger more scrutiny:

  • Multi-tenant production environments
  • Managed databases, object storage, and Kubernetes platforms
  • High-change systems (frequent deployments) where configs matter as much as data
  • Reliance on third parties for backups, disaster recovery, or hosting
  • Regulated customers that require explicit RPO/RTO commitments in contracts

What you actually need to do (step-by-step)

Step 1: Define scope and recovery objectives (control design)

  1. Inventory in-scope systems and data repositories (production databases, file stores, customer content, configs, IaC state, encryption keys/secrets where appropriate, and critical logs needed for recovery).
  2. Classify tiers (critical, important, non-critical) based on customer impact and operational dependency.
  3. Set RPO/RTO targets per tier and record the business owner approval (Product, Engineering, Ops). These can be contractual, internal, or risk-based, but they must exist and be testable.
  4. Decide backup types per asset: snapshots, logical dumps, continuous replication, or immutable backups. Document the rationale.
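
The tiering and RPO/RTO decisions above are easier to defend when they are encoded as data rather than scattered across documents. A minimal Python sketch, assuming illustrative tier names and targets (the values here are examples, not prescribed; yours should come from your own service commitments):

```python
from dataclasses import dataclass

# Illustrative tier definitions; the names and hour targets are examples,
# not prescribed values. Set them to match your own commitments.
TIER_TARGETS = {
    "critical":     {"rpo_hours": 1,   "rto_hours": 4},
    "important":    {"rpo_hours": 24,  "rto_hours": 24},
    "non-critical": {"rpo_hours": 168, "rto_hours": 72},
}

@dataclass
class Asset:
    name: str
    tier: str
    backup_method: str  # e.g. "snapshot", "logical dump", "replication"

def targets_for(asset: Asset) -> dict:
    """Return the RPO/RTO targets an asset inherits from its tier."""
    return TIER_TARGETS[asset.tier]

inventory = [
    Asset("orders-db", "critical", "snapshot"),
    Asset("reporting-warehouse", "important", "logical dump"),
]

for asset in inventory:
    t = targets_for(asset)
    print(f"{asset.name}: RPO {t['rpo_hours']}h / RTO {t['rto_hours']}h")
```

A table like this, exported with an approval record, doubles as the scoping evidence auditors ask for in the exam tip below.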

Exam tip: Auditors frequently ask for the list of in-scope assets and how you determined backup frequency and retention.

Step 2: Implement backup execution (control operation)

  1. Automate backups where possible (managed DB scheduled snapshots, backup agents, storage lifecycle rules).
  2. Standardize retention by tier (short-term operational restores vs. longer retention for investigations).
  3. Encrypt backups at rest and in transit using your standard cryptographic approach; align access to least privilege.
  4. Separate duties for destructive actions (deleting backups, changing retention, disabling backup jobs).
  5. Instrument monitoring and alerting for backup job failures, missed schedules, and unusual deletion activity.
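
The monitoring in step 5 can be as simple as comparing each asset's last successful job against its allowed interval. A hedged sketch, assuming job history is available as (asset, finished_at, status) tuples; a real implementation would pull these from your backup tool's API or logs:

```python
from datetime import datetime, timedelta

def find_stale_backups(job_log, now, max_age):
    """Flag assets whose most recent *successful* backup is older than
    the allowed interval. job_log: iterable of (asset, finished_at, status).
    Failed runs are ignored; only successes reset the clock."""
    last_success = {}
    for asset, finished_at, status in job_log:
        if status == "success":
            prev = last_success.get(asset)
            if prev is None or finished_at > prev:
                last_success[asset] = finished_at
    stale = []
    for asset in sorted({a for a, _, _ in job_log}):
        ts = last_success.get(asset)
        if ts is None or now - ts > max_age:
            stale.append(asset)
    return stale

log = [
    ("orders-db", datetime(2024, 5, 1, 2, 0), "success"),
    ("orders-db", datetime(2024, 5, 2, 2, 0), "failed"),
    ("files",     datetime(2024, 5, 2, 2, 0), "success"),
]
# orders-db last succeeded 34h ago against a 26h allowance -> flagged
print(find_stale_backups(log, datetime(2024, 5, 2, 12, 0), timedelta(hours=26)))
```

The point is the shape of the check, not the tooling: a missed schedule should surface as an alert and a ticket, not sit silently in a job log.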

Step 3: Make “offsite storage” defensible

“Offsite” should be resilient to the failure mode you care about: site outage, region outage, or compromised credentials. Actions to take:

  1. Store copies in a different failure domain than production (separate region/account/tenant where feasible).
  2. Harden access paths to backup storage (restricted IAM roles, MFA, conditional access, limited break-glass).
  3. Use immutability controls where supported (write-once / object lock / retention lock) to reduce ransomware blast radius.
  4. Document the architecture: where backups live, how they replicate, and what a privileged attacker would need to destroy them.
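
The separation checks in steps 1–3 can be sketched as a short review script that compares the production and backup storage descriptors. The field names (account, region, admin_roles, immutable) are illustrative, not tied to any particular cloud provider:

```python
def offsite_findings(prod: dict, backup: dict) -> list:
    """Compare production and backup storage descriptors and return a
    list of separation gaps. An empty list means the offsite posture
    passes these (illustrative) checks."""
    findings = []
    if backup["account"] == prod["account"]:
        findings.append("backup copy lives in the production account")
    if backup["region"] == prod["region"]:
        findings.append("backup copy shares the production region")
    if not backup.get("immutable", False):
        findings.append("no immutability/object-lock on backup storage")
    if set(backup["admin_roles"]) & set(prod["admin_roles"]):
        findings.append("production admins can administer backups")
    return findings
```

Running a check like this periodically, and retaining its output, turns the "document the architecture" step into recurring evidence rather than a one-time diagram.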

Common hangup: Teams claim “offsite” because it’s “in the cloud.” Auditors often probe whether it’s also in the same account and controlled by the same admin identities.

Step 4: Prove recovery works (restore testing)

  1. Create a restore test procedure for each backup mechanism (DB snapshot restore, object store recovery, VM image restore, Kubernetes etcd restore, config rebuild via IaC).
  2. Run restore tests on a defined cadence appropriate to system criticality. Document the schedule and owners.
  3. Record outcomes: what was restored, how long it took, issues found, and remediation tickets.
  4. Validate integrity: confirm the restored system is usable, not just that the restore job “completed.”
  5. Test offsite recovery path: periodically restore from the offsite copy, not only the local snapshot.
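
Restore test records become comparable across teams when they all capture the same fields. A sketch of one possible record shape plus an RTO check; the field names are assumptions for illustration, not a prescribed schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class RestoreTest:
    asset: str
    started: datetime
    finished: datetime
    integrity_checked: bool        # restored system verified usable, not just "job completed"
    from_offsite_copy: bool        # True when restored from the offsite copy
    issues: list = field(default_factory=list)

def meets_rto(test: RestoreTest, rto_hours: float) -> bool:
    """A restore test passes only if it finished within the RTO target
    AND the restored system's integrity was validated."""
    elapsed = (test.finished - test.started).total_seconds() / 3600
    return elapsed <= rto_hours and test.integrity_checked
```

Storing each test as a structured record (with tickets linked in `issues`) gives you exactly the outcome evidence listed in step 3.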

Step 5: Add governance (policy, review, and exceptions)

  1. Write a backup and recovery policy/procedure that maps to scope, roles, tooling, frequency, retention, offsite approach, and testing expectations.
  2. Establish periodic review of backup reports and failures, with escalation paths.
  3. Track exceptions (systems not backed up, temporary waivers, legacy apps) with explicit risk acceptance and timelines.
  4. Assess third parties supporting backups (cloud provider, managed DR vendor) and collect assurance evidence as needed for your SOC 2 scope.
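
The exception register in step 3 needs a lapse check so temporary waivers do not quietly become permanent. A minimal sketch, assuming each entry carries an explicit risk-acceptance expiry date:

```python
from datetime import date

def overdue_exceptions(register: list, today: date) -> list:
    """Return exceptions whose risk-acceptance window has lapsed.
    Each entry is a dict with (illustrative) keys:
    system, reason, accepted_by, expires."""
    return [e for e in register if e["expires"] < today]

register = [
    {"system": "legacy-crm", "reason": "no backup agent available",
     "accepted_by": "CTO", "expires": date(2024, 3, 31)},
]
```

Reviewing this output in the periodic backup-health meeting closes the loop between the register and the escalation path in step 2.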

Where Daydream fits naturally: Most SOC 2 backup programs fail on evidence assembly and exception tracking, not raw capability. Daydream can centralize control narratives, map in-scope assets to control requirements, and collect recurring evidence (backup reports, restore test records, tickets) so audit requests do not become a scavenger hunt.

Required evidence and artifacts to retain (audit-ready list)

Keep evidence tied to the audit period and the exact systems in scope. Typical artifacts include:

  • Backup and Recovery Policy/Procedure with approvals and effective date
  • In-scope asset inventory with backup method, frequency, retention, and offsite location
  • Backup job logs/reports showing successful runs and failures handled
  • Monitoring/alerting evidence (alerts, incident tickets, on-call acknowledgments)
  • Restore testing records (test plan, execution screenshots/logs, results, remediation tickets)
  • Architecture diagram for backup/offsite storage and access boundaries
  • Access control evidence for backup repositories (IAM role listings, group memberships, change approvals)
  • Change management records for backup configuration changes and retention changes
  • Exception register and risk acceptance approvals
  • Third-party assurance relevant to backup/hosting where relied upon (for example, SOC reports provided by the third party, if available)
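
One way to keep this list actionable is a completeness check over the evidence bundle for each system. The category names below are illustrative placeholders; map them to your own artifact taxonomy:

```python
# Illustrative artifact categories; substitute your own taxonomy.
REQUIRED_ARTIFACTS = {
    "policy", "asset_inventory", "job_logs",
    "restore_tests", "access_review", "exception_register",
}

def missing_artifacts(bundle: dict) -> set:
    """Given {artifact_type: path_or_record}, report required artifact
    categories with nothing collected (empty/None values count as missing)."""
    return REQUIRED_ARTIFACTS - {k for k, v in bundle.items() if v}
```

Running this per system at the end of each collection cycle catches gaps before the auditor does.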

Common exam/audit questions and hangups (what to prepare for)

  • “Show me the systems in scope and how each is backed up.”
  • “How do you know backups are completing? Who reviews failures?”
  • “Demonstrate a restore from the last quarter and show the results.”
  • “Where are offsite backups stored, and what prevents deletion or encryption by an attacker?”
  • “Do you test restoring from the offsite copy or only local snapshots?”
  • “What happens if the primary region/account is unavailable?”
  • “Do developers or administrators have the ability to delete backups without oversight?”

Hangups usually arise when evidence is inconsistent across teams, backups exist but restores are untested, or “offsite” is not meaningfully separated.

Frequent implementation mistakes and how to avoid them

  1. Mistake: Backups are configured, but nobody owns them.
    Avoid: Name a control owner, define review tasks, and create a failure escalation path tied to on-call or operations.

  2. Mistake: Restore testing is ad hoc or only done during incidents.
    Avoid: Schedule routine tests and store the results in a durable location with tickets for fixes.

  3. Mistake: Offsite copy exists but is deletable by the same admin role that manages production.
    Avoid: Separate permissions, require approvals for destructive changes, and consider immutability options supported by your platform.

  4. Mistake: Scope misses “rebuild dependencies.”
    Avoid: Include configs, infrastructure-as-code, and critical secrets management recovery steps in the recovery plan, even if you do not back up secrets directly.

  5. Mistake: Evidence is incomplete or overwritten.
    Avoid: Define an evidence retention process and store periodic exports or reports that persist through the audit window.

Enforcement context and risk implications

SOC 2 is an audit framework, not a regulatory enforcement regime. The practical risk is commercial and operational: failed backup/recovery controls can drive SOC 2 exceptions, customer escalations, delayed deals, and extended audit timelines. TSC-A1.3 issues also correlate with real incident pain: ransomware, accidental deletion, and cloud misconfiguration become existential when restores fail or backups are reachable by the attacker. 1

Practical 30/60/90-day execution plan

Days 0–30: Stabilize scope and baseline controls

  • Confirm Availability is in audit scope and list in-scope systems/data stores.
  • Draft or refresh Backup & Recovery Policy/Procedure and assign owners.
  • Document current backup methods, frequencies, retention, and offsite approach per system.
  • Turn on monitoring for backup failures where missing; define who reviews what.
  • Create an exception register for gaps discovered.

Days 31–60: Prove recoverability and tighten offsite posture

  • Run restore tests for each critical system and record results.
  • Fix high-risk gaps found during restores (missing dependencies, unclear runbooks, access issues).
  • Validate offsite separation and access controls; restrict deletion paths.
  • Implement a recurring management review (e.g., monthly backup health review) with retained minutes or tickets.

Days 61–90: Operationalize evidence and audit-readiness

  • Execute a second restore cycle for critical systems to show consistency.
  • Normalize evidence capture: recurring reports, log exports, ticket templates, approval workflows.
  • Conduct a tabletop for a major outage scenario and confirm backup/recovery steps align.
  • Package audit-ready artifacts by system (policy + logs + restore tests + access controls + exceptions). Daydream can be the system of record for this package and the request/response workflow.

Frequently Asked Questions

Does TSC-A1.3 require a specific backup frequency or retention period?

No. The criterion states you must provide for backup, recovery, and offsite storage; your frequency and retention should be documented, risk-based, and testable against your service commitments. 1

What qualifies as “offsite storage” in a cloud environment?

Offsite storage should be outside the primary failure domain you are trying to withstand (for example, separate region or separate account/tenant) and protected from the same identities that administer production. Document the separation and access constraints so it is auditable.

Are database replicas the same as backups for TSC-A1.3?

Replication helps availability, but it does not always satisfy “backup” if it can replicate corruption, deletion, or ransomware-encrypted data. Keep backups that allow point-in-time recovery and test restores.

How much restore testing is enough for a SOC 2 auditor?

Enough to demonstrate the control operates consistently across the audit period and covers in-scope critical systems. Define a cadence, follow it, and retain evidence of results and remediation actions.

We use a third party for backups. Do we still need evidence?

Yes. You still own the control in your SOC 2 description. Collect third-party artifacts (reports, configurations, job summaries) and show your internal monitoring, review, and restore validation.

What’s the fastest way to get audit-ready evidence for backups and restores?

Standardize a repeatable evidence bundle per system (configuration, schedules, logs, restore tests, access lists, exceptions) and store it centrally. Tools like Daydream help by tracking requests, evidence freshness, and exceptions across teams.

Footnotes

  1. AICPA, Trust Services Criteria (2017)

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream