Self-Assessment Execution
To meet the VDA ISA 5.2.1 self-assessment execution requirement, you must run an internal self-assessment using the VDA ISA catalog before any external TISAX assessment, document control-by-control status, identify gaps, and drive remediation to assessment readiness. Treat it as a managed program: defined scope, owners, evidence, scoring rationale, and tracked corrective actions. (VDA ISA Catalog v6.0)
Key takeaways:
- Use the VDA ISA catalog as your checklist, scoring method, and evidence map, not a generic ISO-style gap assessment. (VDA ISA Catalog v6.0)
- Audit-readiness comes from defensible evidence, consistent scoring logic, and a closed-loop remediation plan tied to each gap. (VDA ISA Catalog v6.0)
- Operationalize with clear scope boundaries, accountable control owners, and a cadence that stays current as systems and third parties change.
“Self-assessment execution” under VDA ISA 5.2.1 is a readiness gate: you are expected to assess yourself against the same control catalog an external TISAX assessor will use, find what is missing, fix it, and be able to prove it. The practical goal is predictability. You want your internal picture of maturity and gaps to match what the assessor will conclude, within reasonable interpretation boundaries. (VDA ISA Catalog v6.0)
For a Compliance Officer, CCO, or GRC lead, the main operational challenge is not filling out a questionnaire. It is running a repeatable assessment process that produces defensible outputs: scope, system inventory alignment, evidence that maps to each control, and a remediation backlog with owners and due dates. Another common challenge is fragmentation across plants, engineering environments, and third-party data flows. If you do not align scope and evidence early, you end up with “paper compliance” that falls apart in interviews and sampling.
This page translates the requirement into an execution playbook: who must participate, how to run the assessment step-by-step, which artifacts to retain, what auditors typically challenge, and how to stand up a sustainable cadence.
Regulatory text
Requirement (excerpt): “Conduct internal self-assessments using the VDA ISA catalog to identify gaps and prepare for external TISAX assessment.” (VDA ISA Catalog v6.0)
What the operator must do:
You must complete an internal review using the VDA ISA catalog as the authoritative control set, record the implementation status for each applicable control, collect and retain supporting evidence, identify gaps, and drive remediation before your external TISAX assessment. Your output must be organized enough that you can explain scope, show evidence on request, and demonstrate a managed corrective action process. (VDA ISA Catalog v6.0)
Plain-English interpretation (requirement-level)
A compliant self-assessment is a control-by-control internal evaluation that:
- uses the VDA ISA catalog as the baseline,
- produces documented results (not just meetings or verbal statements),
- identifies gaps and turns them into owned remediation actions, and
- prepares your teams for external assessor testing, including interviews and evidence sampling. (VDA ISA Catalog v6.0)
If you cannot show how you reached each conclusion (what you reviewed, who confirmed it, and what evidence supports it), your self-assessment will not function as readiness preparation even if the spreadsheet is complete.
Who it applies to
Entity types: Automotive suppliers and OEMs pursuing TISAX assessment. (VDA ISA Catalog v6.0)
Operational context: Any organization that will undergo an external TISAX assessment and therefore needs to validate information security controls across in-scope sites, systems, processes, and data flows. This typically spans:
- corporate IT and security functions,
- engineering / product development environments,
- plant / OT interfaces where applicable,
- key third parties that handle in-scope information (for example, IT service providers, engineering tool vendors, managed SOC providers), depending on your TISAX scope definition.
What you actually need to do (step-by-step)
Below is a practical execution sequence you can assign and track.
1) Define assessment scope in operational terms
Create a scope statement that is testable:
- in-scope legal entities and sites,
- in-scope business processes (engineering change management, prototype handling, customer data exchange),
- in-scope systems and environments (identity provider, endpoint management, engineering PLM, source code repos, file transfer tools),
- in-scope information types and classifications.
Operator tip: Make scope match how evidence is stored. If evidence is per site or per tenant, your scope should reflect that to avoid “global policy, local reality” gaps.
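The scope elements above can be captured as a small machine-readable record so coverage checks are automatable later in the process. This is a sketch only; the class and field names are illustrative, not VDA ISA terminology, and the example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AssessmentScope:
    """Testable self-assessment scope statement (illustrative fields)."""
    legal_entities: list[str]
    sites: list[str]
    business_processes: list[str]
    systems: list[str]
    information_types: list[str]

    def covers_site(self, site: str) -> bool:
        # A control should only be rated fully implemented if evidence
        # exists for every site in scope, not just headquarters.
        return site in self.sites

# Hypothetical example values for illustration only
scope = AssessmentScope(
    legal_entities=["ExampleCo GmbH"],
    sites=["Munich HQ", "Plant Leipzig"],
    business_processes=["prototype handling", "customer data exchange"],
    systems=["PLM", "source code repos", "file transfer tools"],
    information_types=["prototype data", "customer-confidential"],
)
```

A record like this lets you test later ratings against the same scope definition instead of re-deciding scope control by control.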
2) Set up governance and roles (RACI)
Assign roles that match how assessors interview:
- Assessment owner (GRC lead): runs the plan, controls the narrative, ensures consistency.
- Control owners: accountable for implementation and evidence for specific controls (IT ops, IAM, HR, physical security, engineering tools, procurement).
- Evidence steward: maintains the evidence library, naming conventions, and version control.
- Approver: typically the CISO or delegated executive who signs off the results and remediation priorities.
3) Build your VDA ISA control workbook (or platform) with evidence mapping
For each VDA ISA control, add fields that you will defend later:
- applicability (in scope / not applicable with rationale),
- implementation status (your internal rating),
- narrative justification (how the control is met),
- evidence references (links, screenshots, exports, tickets),
- owner, last validated date, and remediation action link if there is a gap. (VDA ISA Catalog v6.0)
If you run this in Daydream, use it as a single workspace that ties control statements to evidence objects and remediation tasks so you can avoid “spreadsheet drift” across versions and sites.
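Whatever tool you use, each workbook row should carry the fields above. A minimal sketch of one row follows, assuming a simple internal status vocabulary; field names and status labels are illustrative, not official VDA ISA fields or maturity levels.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ControlRecord:
    """One row of a VDA ISA self-assessment workbook (illustrative schema)."""
    control_id: str
    applicable: bool
    na_rationale: str = ""         # required whenever applicable is False
    status: str = "not_assessed"   # internal rating, e.g. implemented / partial / gap
    narrative: str = ""            # how the control is met
    evidence_refs: list[str] = field(default_factory=list)
    owner: str = ""
    last_validated: Optional[date] = None
    remediation_link: str = ""     # corrective-action reference when status is a gap

    def is_defensible(self) -> bool:
        """N/A needs a written rationale; anything in scope needs an owner,
        a narrative, and at least one evidence reference."""
        if not self.applicable:
            return bool(self.na_rationale)
        return bool(self.owner and self.narrative and self.evidence_refs)
```

A `is_defensible()` gate like this can run before every internal review to catch rows that would fail assessor sampling.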
4) Collect evidence with sampling discipline
Evidence needs to prove operation, not intent. Prioritize:
- System configurations: IAM settings, MFA enforcement, conditional access, password policies, encryption settings.
- Operational records: security monitoring alerts, vulnerability scans, patch reports, access reviews, training completion records.
- Process proof: change approvals, incident tickets, exception handling, risk acceptance records.
- Third-party controls: contracts, security requirements, onboarding/offboarding records, access boundaries for provider staff.
Evidence rule: For each control, retain at least one artifact that shows the control exists and at least one artifact that shows it operates.
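The evidence rule can be enforced mechanically. This sketch assumes each artifact in your evidence index is tagged with an illustrative kind label: "design" for proof the control exists, "operating" for proof it runs.

```python
def evidence_gaps(evidence_index: dict[str, list[tuple[str, str]]]) -> list[str]:
    """Return control IDs that violate the evidence rule: each control
    needs at least one 'design' artifact and one 'operating' artifact.
    The kind labels are illustrative internal tags, not VDA ISA terms."""
    gaps = []
    for control_id, artifacts in evidence_index.items():
        kinds = {kind for _, kind in artifacts}
        if not {"design", "operating"} <= kinds:
            gaps.append(control_id)
    return sorted(gaps)
```

Running this against the evidence index before interviews surfaces controls that look documented but cannot prove operation.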
5) Run structured interviews and walkthroughs
Schedule interviews by domain and keep them bounded:
- confirm how the process runs in practice,
- reconcile conflicting statements with artifacts,
- capture “known breaks” as gaps rather than debating them.
Document interview notes as supporting rationale, but do not treat notes as primary evidence.
6) Score consistently and document your scoring logic
Define internal scoring guidance so two reviewers reach similar conclusions. Common consistency checks:
- If a control is “implemented,” confirm it is deployed across the full scope, not just headquarters.
- If a control depends on a third party (for example, hosting provider), confirm shared responsibility is documented and evidence exists for your part.
7) Convert gaps into a remediation backlog with ownership
Each gap becomes a corrective action with:
- control reference,
- root cause (process, tool, staffing, design),
- fix description,
- owner and target completion date,
- interim compensating control if the gap cannot be closed quickly,
- validation method (what evidence will prove closure).
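The fields above map naturally onto a backlog record whose closure logic enforces the validation step, mirroring the "remediation without validation" anti-pattern called out later. Field names are illustrative.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CorrectiveAction:
    """One remediation backlog item (illustrative fields)."""
    control_ref: str
    root_cause: str                # process / tool / staffing / design
    fix_description: str
    owner: str
    due: date
    compensating_control: str = ""  # interim measure if closure is slow
    validation_method: str = ""     # what evidence will prove closure
    closure_evidence: str = ""      # filled only after the fix is re-tested

    def can_close(self) -> bool:
        # Closing requires both a defined validation method and the
        # evidence produced by actually running that validation.
        return bool(self.validation_method and self.closure_evidence)
```

Tickets that cannot satisfy `can_close()` stay open, which keeps "closed in the tracker, failing in reality" items out of your readiness picture.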
8) Validate closure and lock readiness outputs
Before the external assessment:
- re-test closed items,
- confirm evidence is accessible and current,
- run a mock evidence pull (can you retrieve artifacts quickly, with clear context?),
- prepare a “scope and systems” briefing pack for assessors.
Required evidence and artifacts to retain
Keep artifacts in a controlled repository with access controls and versioning. Minimum set:
- Scope statement with in-scope sites, systems, processes, and data types.
- Completed VDA ISA self-assessment workbook/export with applicability, ratings, rationales, and evidence references. (VDA ISA Catalog v6.0)
- Evidence index (control-to-artifact map) so you can respond quickly during assessor sampling.
- Key policies and standards referenced by controls (information security policy set, access control standard, encryption standard, logging/monitoring standard).
- Operating records demonstrating control performance (access review outputs, patch/vuln reports, incident records, backup test results).
- Risk register entries and exceptions tied to any “accepted” gaps with approvals.
- Remediation plan and tickets showing action tracking, status, and closure proof.
- Third-party documentation for in-scope providers (security addenda, SLAs relevant to security responsibilities, onboarding/offboarding evidence for provider access).
Common assessment and audit questions and hangups
Expect these lines of challenge during external assessment readiness reviews:
- “Show me how you decided this control is applicable or not applicable.”
- “Prove this is deployed across the full scope, not just one site.”
- “Where is the evidence that this runs on a recurring basis?”
- “How do you know privileged access is reviewed and removed when no longer needed?”
- “Which third parties have access to in-scope data, and how did you validate boundaries?”
- “Show me a closed remediation item and the evidence that it is truly fixed.”
Hangups usually come from missing linkage: the control statement exists, but evidence is not clearly tied to it, or the evidence is outdated.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | What to do instead |
|---|---|---|
| Treating the self-assessment as a one-time questionnaire | Results go stale and do not predict assessor outcomes | Run it as a managed cycle with owners, evidence refresh, and remediation tracking. |
| “Policy-only” evidence | Policies show intent, not operation | Pair each policy with operational records and configuration proof. |
| Overusing “not applicable” | Assessors will test the rationale | Require a written scope-based rationale and an approver for each N/A. |
| Inconsistent scoring across sites | External assessment sees fragmentation | Publish scoring guidance and run calibration sessions with control owners. |
| Ignoring third-party dependencies | Shared responsibility gaps surface late | Map in-scope third parties and define evidence for what you own vs what they own. |
| Remediation without validation | Tickets close, but control still fails | Define closure criteria and require post-fix evidence before marking resolved. |
Enforcement context and risk implications
No public enforcement cases were provided in the supplied source catalog for this requirement. Practically, the risk of weak self-assessment execution is assessment failure, unplanned remediation work, delays in customer onboarding, and inconsistent security posture across sites and third parties. Your self-assessment is also an internal governance tool: it creates an auditable trail of decisions, exceptions, and corrective actions aligned to the VDA ISA catalog. (VDA ISA Catalog v6.0)
Practical execution plan (30/60/90-day)
Use these phases as an execution template; adjust the pacing based on your scope size and existing control maturity.
First 30 days (Immediate: set the foundation)
- Finalize scope statement and system inventory alignment.
- Assign RACI and control ownership.
- Stand up the VDA ISA assessment workbook or a GRC workflow in Daydream.
- Define evidence standards: naming, freshness expectations, storage location, access controls.
- Start evidence collection for high-friction areas (IAM, logging/monitoring, vulnerability management, third-party access).
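The evidence freshness expectation defined in this phase can be checked with a short script once artifacts carry a last-validated date. The 90-day window below is an assumed policy value; set it in your own evidence standard.

```python
from datetime import date, timedelta

def stale_evidence(evidence_dates: dict[str, date], today: date,
                   max_age_days: int = 90) -> list[str]:
    """Flag artifact references whose last-validated date falls outside
    the freshness window (90 days is an assumed policy default)."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(ref for ref, validated in evidence_dates.items()
                  if validated < cutoff)
```

Run it on a schedule so stale artifacts surface between assessment cycles instead of during assessor sampling.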
By 60 days (Near-term: complete assessment and create remediation motion)
- Complete control-by-control assessment for in-scope areas.
- Run calibration review to normalize scoring across assessors/control owners.
- Produce a prioritized gap register and remediation plan.
- Kick off remediation workstreams with clear closure criteria and validation steps.
- Run mock interviews with control owners to ensure they can explain “how it works” and “where the proof is.”
By 90 days (Ongoing readiness: validate closure and prepare assessor experience)
- Re-test remediated controls and archive closure evidence.
- Confirm third-party evidence and shared responsibility documentation for in-scope providers.
- Prepare an assessor-ready evidence pack: scope brief, evidence index, and known exceptions with approvals.
- Establish a cadence for refreshing the self-assessment as environments and third parties change, so you do not rebuild from scratch before the next assessment cycle.
Frequently Asked Questions
Do we have to use the VDA ISA catalog, or can we map from ISO 27001?
The requirement explicitly calls for internal self-assessments using the VDA ISA catalog. (VDA ISA Catalog v6.0) You can map from ISO 27001 internally, but your self-assessment output should still be expressed control-by-control in VDA ISA terms to prepare for TISAX.
What counts as “evidence” for a self-assessment?
Evidence is anything that proves a control is designed and operating, such as system configuration screenshots, exports, logs, tickets, and recurring review outputs. Policies and standards help, but they rarely satisfy “operating effectively” on their own.
Can we mark controls “implemented” if they are only deployed at headquarters?
Only if headquarters is the full assessment scope. If your scope includes additional sites or environments, your rating and narrative need to reflect gaps or phased rollout, with remediation actions tied to the missing coverage.
How do we handle controls owned by a third party (for example, hosted infrastructure)?
Document shared responsibility and keep evidence for your responsibilities (access governance, configuration baselines, monitoring, vendor management), plus any attestations or reports the provider supplies. Also confirm the provider’s access paths into your environment are governed and reviewable.
Who should sign off on the final self-assessment?
Assign an executive approver who can accept residual risk and commit resources for remediation, commonly the CISO or delegated security executive. The sign-off should cover scope, key gaps, and the remediation plan.
What is the fastest way to keep the self-assessment from becoming a stale spreadsheet?
Tie each control to an evidence object and a control owner, and track evidence refresh and remediation tasks in a system of record. Teams often use Daydream to centralize control narratives, artifacts, and corrective actions so readiness stays current between assessment cycles.
Authoritative Sources
- VDA ISA Catalog v6.0
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream