Secure development and engineering practices
The secure development and engineering practices requirement means you must build security into your product and engineering lifecycle, with defined checkpoints and release approvals that prove security risks were identified, treated, and accepted before shipment. For TISAX assessments, the fastest path is to standardize a secure SDLC, mandate security gates, and retain evidence across planning, build, test, and release.
Key takeaways:
- Define a secure SDLC that fits how you actually ship software, firmware, or engineering deliverables, then make it mandatory.
- Add security checkpoints tied to real decisions (merge, build, release), and require named approvals for risk acceptance and go-live.
- Keep audit-ready evidence: policies, process maps, gate results, exceptions, and release approvals mapped to in-scope products.
This page operationalizes the secure development and engineering practices requirement for teams pursuing or maintaining TISAX alignment in automotive supply chains. TISAX assessors look for repeatable security integration in engineering workflows, not isolated “security testing” performed at the end. Your goal is simple: show that every in-scope product or engineering deliverable follows a defined lifecycle where security requirements are set, design risks are reviewed, code changes are controlled, testing is performed, and releases are approved with accountable sign-off.
Most failures are not technical. They are documentary and procedural: teams run scans but cannot prove when scans ran, what passed, what failed, who accepted residual risk, and why the release was still approved. The requirement is “medium” severity in many internal catalogs because it is usually fixable, but it can create high downstream risk if your organization processes customer data, vehicle program IP, or sensitive engineering artifacts.
The guidance below is written for a Compliance Officer, CCO, or GRC lead who needs to drive implementation across engineering, security, and product. It prioritizes step-by-step execution, evidence to retain, and how to handle exceptions without breaking the control.
Secure development and engineering practices requirement (TISAX): plain-English meaning
You must integrate security practices into engineering and product lifecycle processes so security is planned, built, verified, and approved as part of normal delivery work 1. Practically, that means you have a secure SDLC (or secure engineering lifecycle) that applies to in-scope products and is supported by:
- Security requirements and secure design expectations
- Risk identification and treatment before release
- Security verification activities (tests, reviews, scans) appropriate to the product
- Formal release approvals and controlled exception handling
Assessors typically expect evidence that this is systematic. One project doing “the right thing” is not enough if other releases are ad hoc.
Regulatory text
Regulatory excerpt (as provided): “Baseline implementation-intent summary derived from publicly available framework overviews; licensed standard text is not reproduced in this record.” 1
Implementation intent summary: “Integrate security practices into engineering and product lifecycle processes.” 1
What you, the operator, must do: translate this into a documented lifecycle with mandatory security checkpoints, then produce objective evidence that teams follow it for in-scope systems. If you cannot enforce every step (common in legacy teams), you must at least enforce risk-based gates and track exceptions to closure.
Who it applies to (entity + operational context)
Typical entities in scope: automotive suppliers and automotive service providers participating in the automotive supply chain where TISAX expectations apply 1.
Operational contexts where assessors will expect this control:
- Software products, embedded software/firmware, engineering tooling, and internal applications used to process sensitive customer or program data
- CI/CD pipelines and build systems
- Engineering workstations and code repositories
- Third-party development (contractors, dev shops) if they build or modify in-scope deliverables
Rule of thumb for scoping: if a team can introduce code or configuration into an in-scope product, the secure development process must apply to that team’s changes.
What you actually need to do (step-by-step)
Below is an implementation sequence that maps cleanly to how assessors evaluate “integration into lifecycle” and produces strong artifacts.
1) Define your secure SDLC and make it enforceable
Deliverables:
- Secure SDLC policy/standard (one page is fine if it is specific)
- Lifecycle workflow (diagram or checklist) aligned to how you ship (Agile, V-model, hybrid)
- RACI for approvals and exceptions (Product, Engineering, Security)
Minimum required SDLC stages to document:
- Plan: security requirements and data classification expectations
- Design: threat/risk review for material changes
- Build: secure coding expectations and dependency controls
- Verify: security testing activities and pass/fail criteria
- Release: approval gate, risk acceptance, and rollback readiness
- Operate: vulnerability intake, patching expectations, and defect learning
2) Implement security checkpoints (gates) tied to real engineering control points
You need checkpoints that create evidence. Recommended baseline control: secure development checkpoints and release approvals 1.
Common, practical gates:
- Pre-merge gate: code review required; security review triggered for high-risk changes (auth, crypto, network exposure, safety-relevant paths)
- Build gate: dependency and basic static checks run automatically
- Pre-release gate: security test results reviewed; known issues triaged; residual risk decision recorded
- Release approval: named approver confirms required artifacts exist and exceptions are documented
Operational tip: if engineering leaders resist “security gates,” position them as “release readiness criteria.” Same mechanism, better adoption.
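As a sketch, the pre-release gate above can be reduced to a completeness check over release metadata. The record fields and function names below are illustrative assumptions, not tied to any specific pipeline or ticketing tool:

```python
from dataclasses import dataclass, field

# Hypothetical release record; field names are illustrative, not from any tool.
@dataclass
class ReleaseRecord:
    release_id: str
    code_review_done: bool = False
    security_scan_passed: bool = False
    open_high_findings: int = 0
    exceptions_approved: list = field(default_factory=list)
    approver: str = ""

def pre_release_gate(rec: ReleaseRecord) -> tuple[bool, list]:
    """Return (passes, blockers) for the pre-release checkpoint.

    A release passes when required activities are complete and any
    remaining high findings are covered by an approved exception.
    """
    blockers = []
    if not rec.code_review_done:
        blockers.append("code review missing")
    if not rec.security_scan_passed:
        blockers.append("security scan not passed")
    if rec.open_high_findings > len(rec.exceptions_approved):
        blockers.append("high findings without approved exception")
    if not rec.approver:
        blockers.append("no named approver recorded")
    return (not blockers, blockers)
```

The value is not the check itself but the retained output: the `blockers` list for a failed gate is exactly the evidence trail assessors ask for.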
3) Establish a risk-based design review process
What to implement:
- A lightweight threat/risk review template (system boundaries, data flows, trust boundaries, abuse cases)
- Triggers for when it is required (new product, new external interface, major architecture change, sensitive data processing)
- A place to store outputs (ticketing system + design repo)
Evidence assessors like:
- Design review tickets with participants, decisions, and follow-ups
- Risk register entries created from design findings
- Documented acceptance when risks are deferred
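The trigger criteria above can be encoded so the review decision is consistent and self-evidencing. This is a minimal sketch; the tag vocabulary is an assumption and should mirror whatever labels your change-management system already uses:

```python
# Illustrative trigger tags mirroring the criteria above; adapt the
# vocabulary to your change-management system.
TRIGGERS = {
    "new_product",
    "new_external_interface",
    "major_architecture_change",
    "sensitive_data_processing",
}

def review_decision(change_id: str, change_tags: set[str]) -> dict:
    """Decide whether a change needs a threat/risk review.

    The matched triggers are kept in the result so the decision
    itself is evidenced, not just its outcome.
    """
    matched = sorted(change_tags & TRIGGERS)
    return {
        "change": change_id,
        "review_required": bool(matched),
        "matched_triggers": matched,
    }
```

Storing the returned dict on the change ticket answers the assessor question "how do you decide what must be reviewed?" with data instead of policy prose.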
4) Standardize secure coding and engineering rules teams can follow
Minimum set to document and enforce:
- Secure coding guidelines appropriate to your languages and platform
- Secrets handling rules (no secrets in code; approved secret storage)
- Dependency management rules (approved sources; ownership for updates)
- Logging guidance (avoid sensitive data in logs; log access control expectations)
You do not need perfect uniformity across stacks. You need clear “must” statements plus a mechanism for exceptions.
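To make the "no secrets in code" rule checkable at a pre-merge gate, a minimal scan might look like the sketch below. The patterns are illustrative only; a real deployment would rely on a dedicated secret scanner with a maintained ruleset:

```python
import re

# Illustrative patterns only; a production gate would use a dedicated
# secret scanner with a maintained, regularly updated ruleset.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS access key id shape
    re.compile(r"-----BEGIN (RSA|EC) PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)(password|secret|token)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def find_secret_hits(source: str) -> list[str]:
    """Return matched snippets so the gate can block the merge
    and the finding can be triaged with context."""
    hits = []
    for pat in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pat.finditer(source))
    return hits
```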
5) Run security verification that matches the product
Define a verification “menu” by product type:
- Web apps/APIs: authz testing, input validation checks, dependency scanning, basic dynamic tests
- Embedded/firmware: secure boot assumptions, update mechanism review, memory safety focus where relevant
- Engineering tools: access control testing, audit logging checks, hardening validation
Record:
- What ran (tool/job name), when it ran, and the result
- What failed and what you did about it
- What you shipped with and why (risk acceptance)
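The three record-keeping points above can be captured in one structured record per verification activity. This is a sketch; the field names are assumptions to be mapped onto your pipeline's actual metadata:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class VerificationRecord:
    """One retained test/scan result, linked to a release.

    Field names are illustrative; map them to your pipeline metadata.
    """
    release_id: str
    job_name: str          # what ran (tool/job name)
    ran_at: str            # when it ran (UTC timestamp)
    result: str            # "pass" | "fail"
    findings_open: int     # what failed / remains open
    risk_acceptance: str   # exception ticket id if shipped anyway, else ""

def record(release_id, job_name, result, findings_open, risk_acceptance=""):
    return asdict(VerificationRecord(
        release_id=release_id,
        job_name=job_name,
        ran_at=datetime.now(timezone.utc).isoformat(),
        result=result,
        findings_open=findings_open,
        risk_acceptance=risk_acceptance,
    ))
```

A flat record like this answers "what ran, what failed, what shipped and why" for any sampled release without reconstructing history from chat threads.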
6) Create an exception process that does not break the control
You will encounter false positives, deadlines, and legacy gaps. Exceptions are acceptable in practice, provided they are controlled.
Minimum elements:
- Exception ticket with scope, rationale, compensating controls, and expiry criteria
- Named risk owner approval (Product or Engineering) plus Security concurrence for higher-risk items
- Tracking to closure or renewal
Avoid “verbal approvals.” They fail in assessments because they cannot be evidenced.
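The minimum elements above can be enforced mechanically before an exception counts as approved. A sketch, assuming a dict-shaped ticket export; the required field names mirror the list above and are otherwise hypothetical:

```python
from datetime import date

# Required fields mirror the minimum elements listed above.
REQUIRED_FIELDS = ("scope", "rationale", "compensating_controls",
                   "expiry", "risk_owner")

def exception_is_valid(ticket: dict, today: date) -> tuple[bool, str]:
    """Check an exception ticket is complete and not expired.

    Incomplete or expired exceptions should not satisfy a release gate;
    expiry forces closure or an explicit renewal decision.
    """
    missing = [f for f in REQUIRED_FIELDS if not ticket.get(f)]
    if missing:
        return False, "missing fields: " + ", ".join(missing)
    if ticket["expiry"] < today:
        return False, "expired: renew or close"
    return True, "ok"
```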
7) Require release approvals and retain them
This is the fastest way to produce auditable proof of lifecycle integration.
Implement:
- Release checklist that references the required artifacts (tests, reviews, exceptions)
- Approval captured in your deployment tool, change system, or ticketing platform
- Release notes including security-relevant changes and open risks (when applicable)
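Capturing the approval can be made impossible without the underlying artifacts. The sketch below assumes a checklist keyed by evidence links (ticket or pipeline URLs); the required item names are illustrative:

```python
def release_approval(release_id, artifacts, approver, open_risks=()):
    """Assemble the retained approval record.

    `artifacts` maps checklist item -> evidence link (ticket/pipeline URL).
    Raises if a required item lacks evidence, so an approval cannot be
    recorded without the artifacts that justify it.
    """
    required = ("security_tests", "design_review_or_na", "exception_status")
    missing = [k for k in required if not artifacts.get(k)]
    if missing:
        raise ValueError(f"cannot approve {release_id}: missing {missing}")
    return {
        "release": release_id,
        "artifacts": dict(artifacts),
        "approved_by": approver,       # named approver, with linked evidence
        "open_risks": list(open_risks),
    }
```

The design choice here is refusal over warning: an approval record that can exist without evidence links is exactly the "verbal approval" failure mode described above.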
8) Extend controls to third-party development
If third parties develop for you, you need contract and operational hooks:
- Contract clauses requiring adherence to your secure SDLC (or an equivalent you approve)
- Evidence delivery requirements (test results, SBOM if you require it, vulnerability handling commitments)
- Access control for repos and build systems; enforce your gates where you can
Required evidence and artifacts to retain (audit-ready list)
Keep evidence by product/release, and keep “system evidence” showing the process exists.
Program-level (shows governance exists):
- Secure SDLC policy/standard and lifecycle diagram
- Role definitions (RACI) for approvals and exceptions
- Secure coding standards and engineering rules
- Exception process definition and templates
Execution-level (shows it operates):
- Samples of design/risk review tickets and outcomes
- CI/CD logs or pipeline records showing security checks executed
- Vulnerability findings with triage notes and remediation tickets
- Release approval records with named approvers and dates
- Exception tickets with approvals and closure/expiry handling
Tip for assessment efficiency: in Daydream, structure evidence by “control → product → release” so you can answer sampling requests quickly without rebuilding the story each time.
Common exam/audit questions and hangups
Assessors and internal auditors tend to probe these areas:
- “Show me that security is part of the lifecycle, not optional.” They will ask for your defined process and proof it was followed on real releases.
- “How do you decide what must be reviewed?” If your triggers are vague (“as needed”), expect pushback. Define triggers and show examples.
- “Who can approve risk?” They will look for named accountability. If Security is the only approver, you may create bottlenecks; if Engineering self-approves everything, you may lack independence.
- “How do you handle exceptions and deadlines?” A working exception process is often the difference between “implemented with gaps” and “not implemented.”
- “How do third parties fit into this?” They may request evidence that contractors or dev partners follow your process for in-scope deliverables.
Frequent implementation mistakes (and how to avoid them)
| Mistake | Why it fails | Fix |
|---|---|---|
| Security testing only at the end | Does not show “integrated into lifecycle” | Add pre-merge and pre-release gates with recorded results |
| No written triggers for design reviews | Decisions become inconsistent and hard to defend | Publish trigger criteria; track reviews in tickets |
| “Risk accepted” without an owner | No accountability | Require a named risk owner and recorded rationale |
| Tools run, but no retained output | You cannot prove operation | Store logs/results linked to releases |
| Third parties excluded | Supply chain introduces ungoverned change | Contract + onboarding requirements; require evidence delivery |
Enforcement context and risk implications
No public enforcement cases were provided in the supplied source catalog for this requirement. Practically, weak secure development practices increase the likelihood of shipping vulnerabilities, mishandling sensitive automotive program information, and failing customer assurance reviews tied to TISAX participation 1. The direct business risk is lost deals, delayed onboarding, or remediation work under customer pressure when you cannot evidence a controlled engineering process.
Practical 30/60/90-day execution plan
Days 1–30: Stabilize scope, rules, and ownership
- Confirm in-scope products/teams and map how releases happen today.
- Publish a secure SDLC standard with mandatory gates and a simple RACI.
- Implement a release checklist and require release approval capture for in-scope releases.
- Stand up an exception ticket type with required fields and approval routing.
Days 31–60: Make gates real in tooling and workflows
- Add security checkpoints into CI/CD (or your existing change workflow) so results are recorded.
- Roll out design/risk review triggers and a lightweight template.
- Train engineering leads and release managers on “what must exist before approval.”
- Start collecting evidence samples in a central repository (Daydream or your GRC system) organized by product and release.
Days 61–90: Prove operation and close the loop
- Run an internal mini-assessment: pick recent releases and verify artifacts exist end-to-end.
- Fix the top recurring exception reasons (missing scans, missing approvals, unclear triggers).
- Extend requirements to third-party development: contract addendum, onboarding checklist, evidence delivery expectations.
- Prepare an assessor-facing evidence pack: policy + process + multiple release samples showing consistent execution.
Frequently Asked Questions
Do we need a single secure SDLC for every engineering team?
You need a consistent minimum baseline with shared checkpoints and approval rules. Teams can have different implementation details as long as you can prove comparable control outcomes across in-scope products.
What is the minimum “release approval” evidence that passes an assessment?
A record that identifies the release, shows required security activities were completed (or exceptions approved), and captures a named approver. Keep it linked to the underlying tickets or pipeline results.
How do we handle legacy products that cannot meet all gates yet?
Use time-bound exceptions with compensating controls and an improvement plan owned by engineering leadership. Assessors typically respond better to controlled exceptions than to informal “we couldn’t do it.”
Our security team is small. How do we avoid becoming the bottleneck for approvals?
Define risk-based triggers so Security reviews only material changes and high-risk exceptions. For lower-risk releases, require engineering-led approval with automated gate evidence.
Do third-party developers have to use our tools and CI/CD pipeline?
Not always, but you must require equivalent secure development outcomes and collect evidence. Where possible, enforce your gates by controlling the integration branch and release approval on your side.
What should we put into Daydream for this requirement?
Store your secure SDLC standard, gate definitions, exception workflow, and a set of recent releases with linked evidence (design review, scan results, remediation tickets, and approval records). That structure reduces scramble during sampling.
Related compliance topics
- Access and identity controls
- Access Control (AC)
- Access control and identity discipline
- Access control lifecycle management
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream