Evidence Documentation

To meet the VDA ISA 5.1.2 evidence documentation requirement, you must maintain assessment-ready proof for each applicable VDA ISA control: the policy/procedure, the implemented configuration or process, and records showing it actually operates. Your goal is simple: an assessor should be able to sample any control and receive consistent, current, attributable evidence on demand (VDA ISA Catalog v6.0).

Key takeaways:

  • Build a control-to-evidence map so every VDA ISA requirement points to specific artifacts and owners (VDA ISA Catalog v6.0).
  • Keep evidence “assessment-ready”: versioned, dated, approved, and traceable to systems, tickets, logs, and reviews (VDA ISA Catalog v6.0).
  • Treat evidence as an operational workflow (collection, QA, retention, and refresh), not a once-a-year document scramble (VDA ISA Catalog v6.0).

Evidence documentation is the difference between “we do this” and “we can prove this.” Under TISAX, assessors do not accept intent, plans, or slide decks as a substitute for operational proof. VDA ISA 5.1.2 expects you to prepare and maintain documentation and evidence demonstrating compliance with VDA ISA requirements, specifically for assessment readiness (VDA ISA Catalog v6.0).

For a CCO, GRC lead, or security compliance owner, the operational problem is predictable: evidence exists, but it’s scattered across tools, owned by different teams, inconsistent in naming, and hard to reproduce under time pressure. The fastest path is to standardize what “good evidence” looks like, assign owners, and implement an evidence lifecycle with lightweight QA. That lifecycle needs to cover governance documents (policies and standards), technical implementation proof (configs and screenshots where appropriate), and operational records (tickets, logs, reviews, training records, exception handling).

This page gives you requirement-level guidance you can execute quickly: who is in scope, what to build, the minimum evidence set most assessors expect to see, common audit friction points, and a practical execution plan you can run in phases.

Regulatory text

Requirement (VDA ISA 5.1.2): “Prepare and maintain documentation and evidence demonstrating compliance with VDA ISA requirements for assessment readiness.” (VDA ISA Catalog v6.0)

Operator interpretation:
You need a repeatable system to (1) identify which VDA ISA controls apply, (2) document how your organization meets each control, and (3) retain proof that the control is implemented and operating. “Maintain” means evidence stays current as systems, org structure, and processes change, and you can provide it promptly during a TISAX assessment (VDA ISA Catalog v6.0).

Plain-English interpretation (what the assessor is really testing)

Assessors typically test three layers for any control:

  1. Design: Do you have a documented policy/standard/procedure that meets the intent?
  2. Implementation: Is it actually configured or implemented in the environment?
  3. Operating effectiveness: Do records show the control runs in practice (reviews, tickets, logs, monitoring alerts, access recertifications, training completion, incident records)?

If any one layer is missing, you risk a finding even if the control “exists” informally.

Who it applies to

Entity types: Automotive suppliers and OEMs seeking TISAX assessment readiness under VDA ISA (VDA ISA Catalog v6.0).

Operational context in scope:

  • Information security management evidence across corporate IT and relevant business units.
  • Product, engineering, and manufacturing environments where protected data or systems exist.
  • Third parties (service providers, contractors, hosting providers) where their controls support your VDA ISA obligations. You will often need both your due diligence artifacts and their assurance outputs (e.g., reports, attestations, contractual commitments) as part of your evidence package.

What you actually need to do (step-by-step)

Step 1: Build a control-to-evidence map (your “evidence index”)

Create a single index that lists:

  • VDA ISA control reference (include 5.1.2 as the requirement driving the program) (VDA ISA Catalog v6.0)
  • Control title/intent in your own words
  • Control owner (role + team)
  • Evidence types required (policy, procedure, technical proof, operating records)
  • Evidence location (system of record link)
  • Review cadence and “last validated” date
  • Exceptions/compensating controls (if any) and where those approvals live

This index becomes your assessment readiness backbone. Without it, evidence collection becomes guesswork.
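
If you build the index in a spreadsheet or GRC tool, you can still enforce its structure programmatically. The sketch below shows one way to model a single index row in Python; the field names, example values, and internal URL are illustrative assumptions, not anything prescribed by VDA ISA.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EvidenceIndexEntry:
    """One row of the control-to-evidence map (illustrative field names)."""
    control_ref: str                  # VDA ISA control reference, e.g. "5.1.2"
    control_intent: str               # the control's intent in your own words
    owner: str                        # control owner (role + team)
    evidence_types: list[str]         # e.g. ["policy", "config export", "access review"]
    evidence_location: str            # link to the system of record
    review_cadence_days: int          # how often the evidence must be revalidated
    last_validated: date              # date of the last validation
    exceptions: list[str] = field(default_factory=list)  # approved exceptions, if any

# Illustrative entry for the requirement discussed on this page
entry = EvidenceIndexEntry(
    control_ref="5.1.2",
    control_intent="Maintain documentation and evidence for assessment readiness",
    owner="GRC Lead, Compliance",
    evidence_types=["evidence index", "QA checklist", "owner attestations"],
    evidence_location="https://grc.example.internal/evidence/5.1.2",
    review_cadence_days=90,
    last_validated=date(2025, 1, 15),
)
```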

Step 2: Standardize evidence quality (define “acceptable evidence”)

Write short, enforceable criteria. Example criteria that reduce assessor pushback:

  • Version control: artifact shows version, owner, approval, and effective date.
  • Scope clarity: artifact states systems, data types, and organizational coverage.
  • Traceability: evidence links to the system, ticket, or log source that produced it.
  • Integrity: screenshots are dated and tied to a user/context; exports show source and timestamp.
  • Consistency: naming conventions align across documents and folders (control ID or control name included).

Practical tip: require each evidence item to include a “what this proves” one-liner in metadata (in the index or the file header). It prevents mismatches during sampling.
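
If you want to enforce these criteria automatically instead of by inspection, a lightweight check over evidence metadata catches most problems before an assessor does. A minimal sketch, assuming each evidence item carries a small metadata dictionary with illustrative field names:

```python
from datetime import date, timedelta

def qa_check(item: dict, max_age_days: int = 365) -> list[str]:
    """Return QA failures for one evidence item (empty list = acceptable)."""
    failures = []
    # Version control: artifact must carry version, owner, approval, and effective date
    for required in ("version", "owner", "approved_by", "effective_date"):
        if not item.get(required):
            failures.append(f"missing {required}")
    # Traceability: evidence must link back to the system, ticket, or log that produced it
    if not item.get("source_link"):
        failures.append("no traceable source link")
    # "What this proves" one-liner required by the quality criteria
    if not item.get("what_this_proves"):
        failures.append("missing 'what this proves' statement")
    # Recency: stale evidence needs a refresh before it is marked assessment-ready
    captured = item.get("captured_on")
    if captured and (date.today() - captured) > timedelta(days=max_age_days):
        failures.append("evidence older than the allowed window")
    return failures
```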

Step 3: Centralize storage with access control and auditability

Pick a system of record that can:

  • Restrict access (least privilege)
  • Preserve history (who changed what, and when)
  • Support structured folders and tagging

Common choices include a GRC tool, a controlled SharePoint/Confluence space, or a document management repository. The critical piece is not the platform; it is consistent indexing, ownership, and change control.

If you use Daydream, set up a dedicated TISAX evidence workspace with a control-to-evidence map, structured requests to control owners, and an evidence QA queue before items are marked “assessment-ready.” Daydream works best when you treat evidence as a workflow with assignments, not a shared folder.
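
Whatever platform you pick, the naming convention holds up better when filenames are generated rather than typed. A minimal sketch, assuming a control-ID-first pattern; the pattern itself is an assumption, not something VDA ISA mandates:

```python
import re
from datetime import date

def evidence_filename(control_id: str, artifact_type: str, title: str,
                      captured: date | None = None) -> str:
    """Build a sortable filename: <control-id>_<artifact-type>_<title-slug>_<YYYY-MM-DD>."""
    captured = captured or date.today()
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")
    return f"{control_id}_{artifact_type}_{slug}_{captured.isoformat()}"

# evidence_filename("5.1.2", "policy", "Evidence Management Standard")
#   -> "5.1.2_policy_evidence-management-standard_<capture-date>"
```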

Step 4: Collect the minimum viable evidence set per control

For each applicable control, collect:

  • Governance artifact: policy/standard/procedure
  • Implementation artifact: configuration evidence (screenshots, exports, IaC snippets, system settings), architecture diagrams, or system inventories
  • Operational record: tickets, review sign-offs, alerts, logs, meeting minutes, training records, incident records, exception approvals

Aim for evidence that stands alone. If an artifact requires oral explanation to be understood, add context inside the artifact or attach a short “control narrative” that explains how the pieces connect.
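
One quick completeness check is to tag every evidence item with the layer it proves and diff against the required set; this surfaces controls that look covered but lack, say, operating records. A minimal sketch with illustrative tag names:

```python
REQUIRED_LAYERS = {"governance", "implementation", "operational"}

def missing_layers(evidence_items: list[dict]) -> set[str]:
    """Return the evidence layers still missing for one control (illustrative tags)."""
    present = {item.get("layer") for item in evidence_items}
    return REQUIRED_LAYERS - present

# missing_layers([{"layer": "governance"}, {"layer": "operational"}])
#   -> {"implementation"}  # the control still lacks configuration proof
```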

Step 5: Add evidence narratives (short, consistent, assessor-friendly)

For each control, create a brief narrative (often one page) that states:

  • Control intent
  • How your organization satisfies it
  • Systems/processes in scope
  • Evidence list with direct links
  • Known gaps and approved remediation plan (if applicable)

This reduces assessment time because the assessor can see the story without hunting through folders.
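
To keep narratives consistent, you can render them from the evidence index instead of writing each one freehand. A minimal sketch, assuming illustrative key names that mirror the index:

```python
NARRATIVE_TEMPLATE = """\
Control narrative: {control_id} - {control_title}

Control intent:
{intent}

How we satisfy it:
{approach}

Systems/processes in scope:
{scope}

Evidence:
{evidence_links}

Known gaps and remediation:
{gaps}
"""

def render_narrative(entry: dict) -> str:
    """Fill the one-page narrative template from an evidence index entry."""
    return NARRATIVE_TEMPLATE.format(
        control_id=entry["control_id"],
        control_title=entry["control_title"],
        intent=entry["intent"],
        approach=entry["approach"],
        scope=entry["scope"],
        evidence_links="\n".join(f"- {link}" for link in entry["evidence_links"]),
        gaps=entry.get("gaps", "None identified"),
    )
```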

Step 6: Implement an evidence refresh and QA process

Operationalize “maintain” with three routines:

  • Event-based refresh: update evidence after major changes (tool migration, org restructure, policy update, new third party).
  • Periodic validation: control owners attest evidence is current; compliance performs spot checks.
  • Pre-assessment freeze: confirm evidence completeness and lock a snapshot for the assessment window.

QA checks should test recency, traceability, and whether an external reader can understand the evidence without additional explanation.
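
Periodic validation is straightforward to automate against the index: compare each entry's last-validated date to its review cadence, flag whatever is overdue, and route the overdue items to their owners. A minimal sketch reusing the illustrative index fields from Step 1:

```python
from datetime import date, timedelta

def needs_refresh(entries: list[dict], today: date | None = None) -> list[dict]:
    """Flag index entries whose last validation is older than their review cadence."""
    today = today or date.today()
    return [
        entry for entry in entries
        if today > entry["last_validated"] + timedelta(days=entry["review_cadence_days"])
    ]
```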

Required evidence and artifacts to retain (practical checklist)

Use this as a starter checklist; tailor by control:

Governance and program artifacts

  • Information security policy set and approval records
  • Standards/procedures (access control, change management, incident response, backup, vulnerability management)
  • Scope statement (sites, business units, environments)
  • Roles and responsibilities (RACI, job descriptions for key security roles)
  • Risk assessment methodology and outputs (as applicable)
  • Exception process and exception register

Technical and operational artifacts

  • Asset inventory exports (systems, endpoints, servers, cloud accounts)
  • Access control evidence: joiner/mover/leaver records, privileged access assignments, access reviews
  • Logging/monitoring configurations and sample log outputs
  • Vulnerability scans and remediation tickets
  • Patch/change records with approvals and testing evidence
  • Backup configuration and restore test records
  • Incident tickets, post-incident reviews, and lessons learned
  • Security awareness training assignments and completion records

Third-party related artifacts (where they support controls)

  • Third-party inventory and classification
  • Due diligence outputs and risk acceptance records
  • Contract clauses and security schedules
  • Third-party assurance artifacts provided to you (as permitted)

Common assessment/audit questions and hangups

Expect these lines of questioning:

  • “Show me the evidence for this control for the last period. Who approved it? Where is it stored?”
  • “How do you know this control is operating, not just documented?”
  • “What happens when a system changes owners or platforms?”
  • “How do you ensure evidence isn’t edited after the fact?”
  • “How do you handle exceptions and compensating controls?”

Hangups that trigger findings:

  • Evidence that is undated, unattributed, or not traceable to a source system.
  • Policies that exist but no proof teams follow them (no tickets, logs, or review artifacts).
  • Evidence scattered across personal drives and chat threads.

Frequent implementation mistakes (and how to avoid them)

  1. Collecting screenshots without context. Fix: require a short caption: system name, environment, date/time, what the setting proves, and who captured it.
  2. Relying on tribal knowledge. Fix: write a control narrative and store it with the evidence.
  3. No ownership. Fix: assign a control owner and an evidence steward; enforce updates after changes.
  4. Mixing drafts with approved artifacts. Fix: separate “working” and “approved” repositories, or use status tags with approval workflow.
  5. Over-collecting irrelevant artifacts. Fix: map evidence to control intent; keep a tight set that proves design, implementation, and operation.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for this requirement. Practically, the risk is assessment friction: weak evidence increases the chance of findings, delays, and expensive rework because teams scramble to recreate historical proof. The operational consequence is also internal: without an evidence discipline, you lose change traceability and auditability across critical security controls (VDA ISA Catalog v6.0).

Practical execution plan (30/60/90)

Use this phased plan as a checklist, not a calendar promise; adjust the pace to match your assessment timeline.

First 30 days (Immediate stabilization)

  • Assign an evidence program owner and control owners.
  • Stand up the evidence index (control-to-evidence map) with owners and locations.
  • Define evidence quality criteria (versioning, dating, traceability).
  • Centralize a first-pass repository and naming convention.
  • Pilot evidence narratives for a small set of high-sampling controls (access, logging, change, incidents).

Next 60 days (Build out and QA)

  • Expand the evidence index to cover all applicable controls.
  • Collect minimum viable evidence per control (design, implementation, operating records).
  • Implement QA checks and fix recurring issues (missing dates, unclear scope, broken links).
  • Establish exception handling with documented approvals and compensating control evidence.
  • Run an internal “mock sample”: pick controls at random and test retrieval speed and clarity (a selection sketch follows this list).
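
Random selection keeps the mock sample honest. The sketch below assumes you feed it the control references from your evidence index, then time how long each owner needs to produce design, implementation, and operating evidence.

```python
import random

def mock_sample(control_refs: list[str], sample_size: int = 5, seed: int | None = None) -> list[str]:
    """Pick a random set of controls to test evidence retrieval speed and clarity."""
    rng = random.Random(seed)  # fix the seed only if you need a repeatable drill
    return rng.sample(control_refs, k=min(sample_size, len(control_refs)))
```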

Next 90 days (Operationalize maintenance)

  • Implement event-based refresh triggers tied to change management and major projects.
  • Add periodic owner attestations and compliance spot checks.
  • Create an assessment-ready evidence snapshot and access model for assessors.
  • Integrate third-party evidence intake into your third-party due diligence workflow (contracts, assurance artifacts, risk acceptances).

Frequently Asked Questions

Do we need evidence for every single VDA ISA control, even if it’s “obvious”?

For assessment readiness, you should be able to produce evidence for each applicable control, because the assessor can sample any area (VDA ISA Catalog v6.0). “Obvious” controls still need proof of operation, such as access reviews or ticket records.

What’s the difference between documentation and evidence?

Documentation explains what you intend to do (policies, procedures, standards). Evidence proves you did it and it works (configs, logs, review outputs, tickets) (VDA ISA Catalog v6.0).

Are screenshots acceptable evidence?

Often yes, but only if they are attributable and readable: include date/time, system context, and what the setting proves. If your tooling supports exports or audit logs, those usually stand up better than standalone images.

How do we handle evidence that contains sensitive data?

Store it in a restricted repository, redact where possible, and keep an unredacted version under tighter access if needed for assessor review. Track who approved redactions so you can explain what was removed and why.

What if a control is met by a third party (cloud provider, MSSP, payroll provider)?

Keep your own due diligence and oversight evidence plus the third party’s assurance artifacts you are permitted to retain (contracts, reports, attestations). Map those artifacts directly to the control so the dependency is explicit.

How do we keep evidence current without creating busywork?

Tie evidence refresh to existing workflows: change management, quarterly access reviews, incident postmortems, vulnerability remediation cycles, and vendor reviews. Then run lightweight spot checks to catch drift before assessments.

Authoritative Sources

  • VDA Information Security Assessment (ISA) Catalog, Version 6.0, cited throughout as “VDA ISA Catalog v6.0”

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream