Annex A 8.29: Security Testing in Development and Acceptance
The Annex A 8.29 requirement, security testing in development and acceptance, means you must define and execute security testing as a formal gate for promoting code into higher environments (staging/production), and you must retain evidence that testing occurred, issues were tracked, and risk decisions were approved. Build this into your SDLC, CI/CD, and release governance. 1
Key takeaways:
- Make security testing a release criterion, not an optional activity, and tie it to change/release approvals. 1
- Standardize what “passed” means (scope, tools, severity thresholds, approvals, exceptions) per application risk. 1
- Evidence wins audits: keep test results, defect tickets, exception approvals, and deployment records mapped to each release. 1
You can’t “policy” your way through Annex A 8.29. Auditors look for operational proof that security testing is embedded in development and acceptance, and that releases do not bypass agreed security checks without an explicit, documented risk decision. The fastest path is to treat security testing as part of your definition of done and a required input to release approval.
For most service organizations, this control touches more systems than expected: CI/CD pipelines, pull request workflows, build systems, artifact repositories, vulnerability management, issue tracking, and change management. It also forces a decision on governance: who can approve a release when security testing finds issues, and what evidence shows that decision was intentional.
This page gives requirement-level implementation guidance you can put in place quickly: an enforceable testing standard, pipeline gates, exception handling, and an evidence package that maps cleanly to releases. It is written for a Compliance Officer, CCO, or GRC lead coordinating engineering, security, and release management against ISO/IEC 27001:2022 Annex A 8.29. 1
Regulatory text
Framework reference (public excerpt): “ISO/IEC 27001:2022 Annex A control 8.29 implementation expectation (Security Testing in Development and Acceptance).” 1
Operator interpretation (what you must do): You must define security testing activities that apply during development and acceptance, run those tests consistently, and use the results as an input to release/acceptance decisions. The operational bar is repeatability and traceability: for a given release, you can show what tests ran, what they found, what was fixed, what was deferred, and who approved any residual risk. 1
Plain-English interpretation of the requirement
Annex A 8.29 expects you to:
- Test for security flaws before you accept software into production-like environments, and
- Prove you did it, with artifacts tied to each release. 1
This is not limited to penetration tests. In practice, “security testing” is a portfolio: automated checks in CI (SAST, dependency scanning), environment-level validation (DAST), and targeted human testing for high-risk changes. The control becomes real when testing is part of acceptance criteria and cannot be silently skipped.
Who it applies to (entity and operational context)
Applies to:
- Service organizations building or materially configuring software that supports the services in scope for your ISMS. 1
Operational contexts that fall under 8.29:
- Internally developed applications, APIs, microservices, mobile apps, infrastructure-as-code, and scripted automation.
- “Configured” platforms where you ship code-like artifacts (workflows, custom functions, templates).
- Releases that can introduce security exposure: authentication flows, authorization logic, cryptography usage, secrets handling, logging, and internet-facing changes.
Common boundary decision: If engineering can change it and it can affect confidentiality, integrity, or availability, include it in the testing/acceptance standard.
What you actually need to do (step-by-step)
1) Define a security testing standard that can be audited
Create a short, enforceable standard (1–3 pages) that answers:
- Scope: which repos/apps are in scope; how you classify “high risk” systems.
- Minimum tests per change/release: what must run in CI; what must run before production.
- Pass/fail criteria: what severity levels block release; what can ship with approval.
- Required roles: who reviews results; who can approve exceptions.
- Evidence requirements: where results are stored and how they map to a release. 1
Deliverable: “Security Testing in Development & Acceptance Standard” approved by Security + Engineering leadership.
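To keep the standard enforceable rather than aspirational, some teams also express it as machine-readable policy that CI can consume. A minimal sketch in Python follows; the repo names, tiers, test sets, and thresholds are illustrative assumptions, not anything prescribed by 8.29 itself:

```python
# Illustrative sketch: the testing standard as data that a pipeline can read.
# All names and thresholds below are hypothetical examples.
STANDARD = {
    "risk_tiers": {"payments-api": "high", "customer-portal": "medium"},
    "required_tests": {
        "high": ["sast", "sca", "dast"],
        "medium": ["sast", "sca"],
        "low": ["sca"],
    },
    "blocking_severities": {
        "high": {"critical", "high"},
        "medium": {"critical"},
        "low": {"critical"},
    },
}

def required_tests(repo: str) -> list[str]:
    """Return the minimum test set for a repo based on its risk tier.

    Unknown repos default to the strictest tier, so nothing silently
    falls outside the standard."""
    tier = STANDARD["risk_tiers"].get(repo, "high")
    return STANDARD["required_tests"][tier]
```

Storing this file in version control alongside the written standard also gives you change history for the standard itself, which auditors often ask about.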
2) Map required testing to your SDLC control points
Pick the control points you already have, then attach security tests to them:
- Pull request (PR): code review checklist includes security items for relevant changes.
- CI build: automated scanning runs on every PR or merge.
- Release candidate build: higher-signal scans run on the release artifact.
- Change/release approval: security test results are attached to the ticket.
This reduces friction because you don’t need a new process; you need security conditions on existing gates.
3) Implement CI/CD gates (with a documented exception path)
Auditors will ask whether testing is optional. Make the answer “no,” with one narrow escape hatch.
Minimum gating pattern:
- CI runs required scanners.
- Pipeline fails on defined conditions (for example: tool execution failed, results not produced, or blocked-severity findings present).
- If a release must proceed, require a time-bound exception approved by an authorized risk owner, with compensating controls and a remediation ticket.
Keep the exception path strict. The goal is controlled release risk, not perfect security.
4) Make acceptance decisions traceable to a release artifact
For every production release, you should be able to reconstruct:
- What changed (commit/tag).
- What tests ran (run IDs, timestamps).
- What issues were found (findings list).
- What was fixed vs. deferred (tickets and status).
- Who approved shipping with residual risk (approval record). 1
A practical pattern is to require a single “Release” ticket that contains links to the pipeline run, scan reports, and the exception approval (if any).
5) Integrate findings into vulnerability management, not a side channel
Security testing outputs must feed a workflow that:
- Creates tickets with owners and due dates.
- Tracks remediation to closure.
- Supports retesting or verification.
- Produces metrics for management review (qualitative is fine if you cannot reliably quantify yet).
This is where many programs fail. Testing happens, but findings are not governed.
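One way to govern findings is to mint tickets directly from scan output, with due dates derived from a severity SLA. The SLA values below are assumptions for illustration, not a requirement of the control:

```python
from datetime import date, timedelta

# Assumed remediation SLAs in days, by severity. Tune to your own policy.
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def finding_to_ticket(finding: dict, owner: str, opened: date) -> dict:
    """Create a remediation ticket with an owner and an SLA-derived due date."""
    days = SLA_DAYS.get(finding["severity"], 90)  # default to medium SLA
    return {
        "title": f"[{finding['severity'].upper()}] {finding['id']}",
        "owner": owner,
        "due": (opened + timedelta(days=days)).isoformat(),
        "status": "open",
        "source_finding": finding["id"],
    }
```

Whether ticket creation is automated or manually triaged matters less than the outcome: every finding ends up with an owner, a due date, and a closure trail.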
6) Define when you require deeper testing (risk-based triggers)
Annex A controls are commonly implemented with a risk-based lens. Set explicit triggers for additional testing, such as:
- Internet-facing endpoint changes.
- AuthN/AuthZ modifications.
- New cryptographic storage or key handling.
- Major dependency or framework changes.
- High-impact configuration changes.
Document triggers and keep a record of when they were invoked.
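Triggers like these can be encoded as path patterns evaluated against each change set, so the "deeper testing required" decision is recorded rather than ad hoc. The trigger names and patterns below are purely illustrative:

```python
import fnmatch

# Hypothetical trigger rules: glob patterns over changed file paths.
TRIGGERS = {
    "authn/authz change": ["*/auth/*", "*login*"],
    "crypto change":      ["*/crypto/*", "*kms*"],
    "edge change":        ["*/public_api/*", "ingress/*"],
}

def deeper_testing_required(changed_paths: list[str]) -> list[str]:
    """Return the names of risk triggers matched by this change set."""
    hits = []
    for name, patterns in TRIGGERS.items():
        if any(fnmatch.fnmatch(path, pat)
               for path in changed_paths for pat in patterns):
            hits.append(name)
    return hits
```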
7) Operationalize ownership and RACI
Make ownership unambiguous:
- Engineering owns remediation.
- AppSec/Security defines testing requirements and approves exceptions (or advises an accountable risk owner).
- Release management enforces that approvals and evidence exist before deployment.
- GRC samples evidence and tests control operation.
If you use Daydream to manage control operations, treat 8.29 as a recurring evidence control: each release (or a sampled set of releases) produces a consistent evidence bundle, and exceptions are tracked to closure.
Required evidence and artifacts to retain
Keep evidence tied to releases, not just “we ran scanners.”
Minimum evidence set 2:
- Approved security testing standard and change/release procedure references. 2
- CI/CD pipeline configuration showing required test stages (screenshots or configuration exports).
- Tool outputs or reports (SAST/SCA/DAST as applicable) with run identifiers.
- PR evidence: review history, security checklist completion, sign-offs.
- Defect/vulnerability tickets created from findings and their status.
- Release ticket/change record linking to test evidence.
- Exception approvals (if any), including rationale, compensating controls, and retest plan. 3
Retention tip: Store evidence in systems you already run (ticketing + CI logs + artifact repo) and index it in an audit folder by release tag.
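The "audit folder indexed by release tag" idea can be sketched as follows: write one evidence index per release and report gaps against the minimum set. The evidence keys are illustrative assumptions:

```python
import json
import pathlib

# Illustrative minimum evidence keys, mirroring the list above.
MINIMUM_EVIDENCE = ["standard_ref", "pipeline_config", "scan_reports",
                    "pr_evidence", "tickets", "release_record"]

def index_release_evidence(audit_root: str, tag: str,
                           evidence: dict) -> list[str]:
    """Write an evidence index for one release tag; return missing items."""
    folder = pathlib.Path(audit_root) / tag
    folder.mkdir(parents=True, exist_ok=True)
    (folder / "index.json").write_text(json.dumps(evidence, indent=2))
    return [k for k in MINIMUM_EVIDENCE if not evidence.get(k)]
```

Running this at release time (rather than at audit time) means gaps surface while the people who can fix them still remember the release.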
Common audit questions and hang-ups
Auditors often probe the same weak spots:
- “Show me security testing for this production release.” Have a ready release package with links, not screenshots scattered across tools.
- “Can developers bypass the scans?” Demonstrate branch protections, required status checks, and who can alter pipelines.
- “What happens when you find a high-severity issue?” Show your defined stop-ship criteria and one example where it was enforced or escalated.
- “How do you handle exceptions?” Auditors expect an approval record and evidence of follow-up remediation.
- “How do you ensure the tests are appropriate?” Point to risk triggers and periodic review of the testing standard. 1
Frequent implementation mistakes and how to avoid them
| Mistake | Why it fails audits | Fix |
|---|---|---|
| Running scans “sometimes” | Control operation is inconsistent; no acceptance gate | Make scans mandatory in CI and require results in release tickets |
| No documented pass/fail criteria | Acceptance becomes subjective | Define blocking conditions and who can override |
| Findings live only in scan tools | No remediation governance | Pipe findings into tickets with ownership and tracking |
| Exceptions are informal (Slack/email) | No accountable risk acceptance | Use a standard exception template with approver, scope, expiry, and follow-up |
| Evidence not tied to releases | You can’t prove acceptance controls | Require a release record that links commit/tag to test results |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so you should plan for audit-driven consequences rather than regulator-specific penalties.
Risk-wise, weak 8.29 operation shows up as:
- Production vulnerabilities from preventable coding errors.
- Inconsistent release decisions and undocumented risk acceptance.
- Higher breach impact because security defects reach production systems.
From an ISO certification perspective, gaps usually manifest as nonconformities where the auditor cannot verify consistent control operation or cannot trace evidence to in-scope releases. 1
A practical 30/60/90-day execution plan
First 30 days (stabilize expectations and evidence)
- Publish the security testing standard with minimum required tests and pass/fail criteria. 1
- Inventory in-scope apps and assign risk tier (simple high/medium/low is enough to start).
- Pick a single evidence pattern: one release ticket template that links to pipeline runs and findings.
- Enable baseline CI checks on the highest-risk repos first; document any temporary exceptions.
By 60 days (turn standards into enforced gates)
- Implement branch protections and required status checks so security scans must pass (or be explicitly excepted).
- Integrate scan outputs into the defect workflow (tickets auto-created or manually triaged).
- Stand up the security exception process with defined approvers and expiry expectations.
- Run an internal “audit rehearsal” on a sample of releases: prove traceability end-to-end.
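The audit rehearsal works best with a reproducible sample, so the same releases can be re-checked later or handed to an assessor. A minimal sketch:

```python
import random

def sample_releases(tags: list[str], k: int, seed: int = 2024) -> list[str]:
    """Reproducibly sample k releases for an end-to-end traceability check.

    A fixed seed means the sample can be regenerated and re-verified."""
    rng = random.Random(seed)
    return sorted(rng.sample(tags, min(k, len(tags))))
```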
By 90 days (make it repeatable and reviewable)
- Expand gating to remaining in-scope repos based on risk and delivery criticality.
- Create a recurring control check: sample releases monthly/quarterly and verify the evidence bundle.
- Review and tune pass/fail criteria to reduce false positives without weakening the gate.
- Use Daydream (or your GRC system) to map 8.29 to control operation, assign owners, and schedule recurring evidence capture for assessor-ready reporting. 1
Frequently Asked Questions
What counts as “security testing” for Annex A 8.29?
Any repeatable test activity intended to identify security weaknesses before acceptance, including automated scans and targeted manual testing when risk triggers apply. The key is that you define required tests and show consistent execution and acceptance decisions. 1
Do we need penetration tests for every release?
Annex A 8.29 does not require a pen test for every release in the provided excerpt; it requires security testing as part of development acceptance. Most teams use a mix of automated CI checks and risk-triggered deeper testing for higher-risk changes. 1
How do we handle urgent fixes when scans fail?
Use a formal exception: document the failing control, the reason for urgency, compensating controls, and an owner with a tracked remediation ticket. Auditors accept urgency when risk acceptance is explicit and time-bounded. 1
What evidence is the fastest win for audit readiness?
A release record that links the production deployment to CI run results and the associated remediation tickets. If an exception exists, attach the approval to the same release record. 1
We use third-party code and SaaS platforms. Does 8.29 still apply?
Yes, if you develop or configure components that affect your service, you still need security testing for what you ship (code, templates, configuration-as-code). For purely third-party-managed code, focus on acceptance controls you can execute, such as configuration validation and release governance for your changes. 1
Who should approve security testing exceptions?
Use an accountable risk owner with authority over the system or service, with Security providing review and conditions. Document the approver role in your standard and keep the approval with the release evidence. 1
Footnotes
1. ISO/IEC 27001 overview; ISMS.online Annex A control index.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream