Annex A 8.28: Secure Coding

The Annex A 8.28 secure coding requirement means you must define, implement, and evidence secure coding rules across your software development lifecycle so code is produced, reviewed, tested, and released with security controls baked in. Operationalize it by setting enforceable standards, integrating security checks into CI/CD, and retaining repeatable evidence for audits. 1

Key takeaways:

  • Treat secure coding as an SDLC control with defined rules, tooling gates, and accountable roles, not a one-time training item. 1
  • Auditors will look for proof: standards, secure design and review practices, automated testing outputs, remediation tracking, and release approvals tied to risk. 2
  • Your fastest path is to map 8.28 to a documented control operation with recurring evidence capture that matches how engineering actually ships code. 2

Annex A 8.28 sits in the ISO/IEC 27001:2022 technology controls and focuses on how you build software securely, not just how you operate infrastructure. For a CCO, GRC lead, or Compliance Officer, the practical challenge is consistent execution across teams: different repos, different languages, different deployment patterns, and mixed ownership across employees and third parties.

Assessors typically expect two things: (1) clear, enforceable secure coding expectations and (2) evidence that those expectations run continuously in the development workflow. If your secure coding “program” lives in a PDF but engineers can merge vulnerable code without controls firing, you will struggle to demonstrate operation.

This page translates the Annex A 8.28 secure coding requirement into a set of implementable steps, concrete artifacts, and audit-ready evidence. It also calls out common failure modes (policy-only controls, tool sprawl without governance, and no exception handling) and gives a practical execution plan you can run with engineering leadership. 1

Regulatory text

Provided excerpt: “ISO/IEC 27001:2022 Annex A control 8.28 implementation expectation (Secure Coding).” 1

Operator interpretation: You need an organizationally defined secure coding approach and must be able to show it is implemented in the way you develop and change software. That normally means:

  • Secure coding rules exist (language/framework appropriate).
  • Engineers are expected to follow them.
  • Security checks exist to detect violations early (review and testing).
  • Findings are tracked to remediation or accepted as a risk with approval.
  • You can produce evidence that this happens as part of normal delivery. 2

Plain-English interpretation (what the control is really asking)

The Annex A 8.28 secure coding requirement expects you to prevent common coding flaws from entering production by setting secure coding standards and embedding verification activities into the SDLC. You are not being asked to “promise” secure code; you are being asked to run a repeatable system that reduces coding risk and produces audit evidence.

A workable mental model: secure coding = standards + gates + remediation + exceptions + proof.

Who it applies to

Entity scope: Any organization implementing ISO/IEC 27001 where software is developed or materially modified as part of delivering products/services, including service organizations. 1

Operational contexts that count:

  • Product engineering teams building customer-facing applications.
  • Internal development teams shipping business-critical tools or integrations.
  • DevOps teams creating infrastructure-as-code and deployment scripts (treat this as code).
  • Third parties developing code on your behalf (contractors, development shops, MSPs) where you own or operate the resulting application or where their code changes affect your environment. 2

What you actually need to do (step-by-step)

Use this as your implementation runbook for the Annex A 8.28 secure coding requirement.

1) Define the control boundary and “what counts as code”

  • Identify in-scope repositories: application code, IaC, build pipelines, container definitions, scripts, and shared libraries.
  • Classify systems by impact (customer data, auth paths, payment flows, privileged operations). Use your existing ISMS risk assessment to drive prioritization. 1

Deliverable: SDLC/secure coding scope statement mapped to system inventory.

2) Publish secure coding standards that are enforceable

Write a secure coding standard that engineers can follow and auditors can test. Minimum topics to include:

  • Input validation and output encoding expectations.
  • Authentication/session handling basics.
  • Authorization rules (server-side checks, least privilege).
  • Secrets handling (no secrets in code; approved vault patterns).
  • Cryptography constraints (approved libraries; no custom crypto).
  • Error handling/logging rules (no sensitive data leakage).
  • Dependency management expectations (pinning, review, patching).
  • Language-specific guidance where needed (separate appendices). 2

Keep it short and testable. If it cannot be verified in code review or tooling, rewrite it.

Deliverables: Secure Coding Standard; language/framework addenda; secure coding checklist.
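To show what “testable in review” looks like, here is a minimal sketch (hypothetical function and table names, Python assumed) of two rules from the list above, secrets handling and parameterized queries, written so a reviewer or a linter can check them mechanically:

```python
import os
import sqlite3

def get_db_password() -> str:
    # Rule: no secrets in code. Read from the environment (or an approved
    # vault client); never fall back to a hardcoded value.
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD not set; refusing to use a hardcoded default")
    return password

def find_user(conn: sqlite3.Connection, username: str):
    # Rule: parameterized queries only; never build SQL via string formatting.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (username,))
    return cur.fetchone()
```

Rules written at this level of concreteness can be enforced by code review comments and by off-the-shelf static analysis, which is the bar the standard should aim for.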

3) Assign ownership and define required SDLC touchpoints

You need clear accountability:

  • Engineering owns implementation.
  • Security sets baseline requirements and monitors results.
  • GRC defines evidence needs and audit mappings.
  • Product owners approve risk-based exceptions where impact is business-facing.

Define where secure coding is checked:

  • Design (for high-risk changes).
  • Pull request review.
  • Build/CI checks.
  • Pre-release sign-off for sensitive systems. 1

Deliverables: RACI; SDLC policy/procedure updates; “definition of done” updates.

4) Implement preventive and detective controls in the pipeline

Pick controls that match your delivery model; focus on coverage and repeatability.

  • Branch protection and required reviews: require PR review for main branches.
  • Static analysis (SAST): run in CI; fail or block for defined severity threshold (document thresholds and exceptions).
  • Dependency scanning (SCA): detect vulnerable libraries; enforce upgrade workflow.
  • Secrets scanning: prevent credential commits; block merges when triggered.
  • Security unit/integration tests for critical paths: authz checks, input validation tests where relevant.
  • Manual secure review for high-risk code: for example, changes touching authentication/authorization, encryption routines, or sensitive data flows. 2

Practical decision: Don’t boil the ocean. Start by gating the highest-risk repos/systems, then expand.
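As one illustration of a documented severity threshold, the sketch below (hypothetical finding format; your scanner’s actual JSON output will differ) fails a CI job when findings at or above the threshold are present:

```python
import json
import sys

# Hypothetical scanner output: a JSON list of findings, each with a "severity" field.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list[dict], threshold: str = "high") -> list[dict]:
    """Return the findings that should block the build (severity >= threshold)."""
    floor = SEVERITY_RANK[threshold]
    return [f for f in findings if SEVERITY_RANK.get(f.get("severity", "low"), 1) >= floor]

if __name__ == "__main__":
    findings = json.load(sys.stdin)
    blocking = gate(findings)
    for f in blocking:
        print(f"BLOCKING: {f.get('rule', '?')} in {f.get('file', '?')} ({f['severity']})")
    sys.exit(1 if blocking else 0)  # a non-zero exit fails the CI job
```

The value of a script like this is that the threshold lives in version control next to the pipeline, so the gating rule itself is auditable.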

5) Create a triage and remediation workflow that produces auditable closure

Tool findings that aren’t triaged become audit debt. Define:

  • Who reviews findings and how often (security champion, AppSec, or owning team).
  • How issues are tracked (ticketing system with repo link, severity, due date guidance, status).
  • What “fixed” means (code change merged, verification rerun, evidence attached).
  • When risk acceptance is allowed and who approves it (documented exception). 1

Deliverables: Vulnerability/secure coding findings workflow; ticket templates; risk acceptance template.

6) Manage exceptions explicitly

Secure coding programs fail when teams bypass gates informally. Define an exception process:

  • Allowed reasons (legacy constraints, compensating controls, false positives).
  • Required documentation (system, risk description, business justification, compensating controls, expiry/review date, approver).
  • Central register of exceptions and periodic review. 2
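The periodic review step reduces to a simple expiry check; this sketch assumes a hypothetical register shape with an expiry date per entry:

```python
from datetime import date

def due_for_review(register: list[dict], today: date) -> list[dict]:
    """Return exception entries whose expiry/review date has arrived or passed."""
    return [entry for entry in register if entry["expiry"] <= today]
```

Whatever form the register takes (spreadsheet, GRC tool, ticket label), the key is that expiry dates are mandatory fields so the review can be automated.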

7) Train engineers in a way you can evidence

Training should match your standards and toolchain:

  • Onboarding secure coding training for engineers.
  • Targeted refreshers when standards change or recurring issues show up.
  • Secure code review guidance for reviewers. 1

Keep records: attendance, completion, and training content version.

8) Map to a documented control operation with recurring evidence capture

This is the fastest way to become assessment-ready:

  • Document the control objective, scope, activities, frequency, owners, and evidence outputs.
  • Define an evidence calendar (what you collect monthly/quarterly and per-release). 2
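An evidence calendar can be as simple as a structured list that your GRC tooling, or a small script, walks on each capture cycle; the entries below are illustrative, not prescribed:

```python
# Hypothetical evidence calendar: each entry names the artifact, where it
# comes from, and how often it must be captured.
EVIDENCE_CALENDAR = [
    {"artifact": "CI scan run logs", "source": "CI", "cadence": "monthly"},
    {"artifact": "Branch protection export", "source": "SCM", "cadence": "quarterly"},
    {"artifact": "Sample reviewed PRs", "source": "SCM", "cadence": "per-release"},
    {"artifact": "Exception register extract", "source": "GRC", "cadence": "quarterly"},
]

def collect_now(cadence: str) -> list[str]:
    """List the artifacts due for the given capture cycle."""
    return [e["artifact"] for e in EVIDENCE_CALENDAR if e["cadence"] == cadence]
```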

Where Daydream fits naturally: Daydream can act as your control “system of record” for 8.28 by mapping the requirement to a repeatable control operation and prompting recurring evidence capture from engineering systems (CI logs, scan outputs, PR settings, and exception registers) without chasing screenshots across teams. 2

Required evidence and artifacts to retain (audit-ready)

Keep evidence that shows both design and operation:

Control design artifacts

  • Secure Coding Standard (+ version history and approval).
  • SDLC procedure referencing secure coding checks.
  • RACI for secure coding roles.
  • Tooling standards (which scanners run where; gating rules). 1

Operating evidence (sample set)

  • Repo settings exports or screenshots: branch protections, required reviews, CI required checks.
  • CI/CD logs showing SAST/SCA/secrets scans executed.
  • Sample pull requests with review comments tied to secure coding checklist.
  • Ticket samples showing triage, remediation, and closure.
  • Exception register entries with approvals and review/expiry.
  • Training completion records for engineering staff. 2

Common exam/audit questions and hangups

Expect these lines of inquiry:

  • “Show me your secure coding standard and where it is mandatory.”
  • “Which repos are in scope, and how did you decide?”
  • “Demonstrate that security scanning runs on every build for an in-scope system.”
  • “How do you prevent developers from bypassing checks?”
  • “Show a finding from detection through fix and verification.”
  • “How do you handle false positives and exceptions, and who approves them?” 1

Hangup to plan for: assessors often reject “we have a tool” as evidence. They want proof the tool runs, findings are handled, and exceptions are controlled.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Policy-only secure coding standard. Why it fails: no operational proof. Fix: bind standards to PR checks, CI gates, and ticket workflows. 2
  • Mistake: Tool sprawl with inconsistent coverage. Why it fails: gaps across repos/teams. Fix: standardize a baseline pipeline and require adoption for in-scope repos. 1
  • Mistake: No exception process. Why it fails: bypasses become informal. Fix: create an exception register with approvals and periodic review. 2
  • Mistake: Findings aren’t tracked to closure. Why it fails: issues stay open forever. Fix: enforce ownership, SLA guidance, and verification reruns at closure. 1
  • Mistake: Third-party developers ignored. Why it fails: material code risk remains. Fix: contractually require secure coding rules, scanning, and evidence delivery for third-party code. 2

Enforcement context and risk implications

No public enforcement cases were provided in the available source catalog for this requirement, so this page does not list enforcement actions. The practical risk remains clear: insecure code can introduce vulnerabilities that lead to data exposure, service disruption, and audit findings due to missing evidence of control operation. 1

Practical execution plan (30/60/90-day)

Timeboxes below are execution guidance to help you sequence work; adjust based on your release cadence and engineering bandwidth.

First 30 days (stabilize and define)

  • Confirm in-scope repos and system tiers (high/medium/low impact).
  • Draft and approve Secure Coding Standard and exception process.
  • Set RACI with Engineering, Security, and GRC.
  • Pick baseline tooling coverage for SAST/SCA/secrets scanning and define gating rules for high-impact repos.
  • Define the evidence list and where it will be collected from (CI, SCM, ticketing). 2

Days 31–60 (implement gates + workflow)

  • Roll out branch protections and mandatory PR review for in-scope repos.
  • Turn on CI scans for priority repos; start collecting run logs as evidence.
  • Stand up triage workflow with ticket templates and ownership rules.
  • Run a pilot: take one repo from detection through remediation and exception handling.
  • Train engineers on the standard and the new workflow; retain completion records. 1

Days 61–90 (prove operation + expand coverage)

  • Expand scanning and gating coverage to remaining in-scope repos by tier.
  • Formalize recurring evidence capture (monthly extracts, per-release samples).
  • Review exceptions, confirm expirations, and validate compensating controls.
  • Prepare an “audit packet” for 8.28: standard, scope, RACI, tool configs, sample PRs, sample findings, exception register, training records. 2

Frequently Asked Questions

Do we need secure coding standards for every programming language we use?

You need standards that are applicable to your environment and testable in practice. A single baseline standard with short language/framework appendices often works better than a separate policy per language. 2

Can we meet the Annex A 8.28 secure coding requirement with training alone?

Training helps, but auditors typically expect operational controls like reviews, automated checks, and remediation tracking. Treat training as supporting evidence, not the control by itself. 1

How do we handle false positives from SAST or dependency scanning?

Use a documented triage process with labeled dispositions (true positive, false positive, accepted risk) and require approval for risk acceptance. Retain the rationale and link it to the specific finding and code change. 2

What evidence is strongest for auditors?

Evidence that is hard to fake and easy to reproduce: CI logs showing scans ran, repo settings showing enforced reviews, and tickets showing findings closed with verification. Pair it with the approved secure coding standard. 1

Does this apply to infrastructure-as-code and scripts?

If the code changes your production environment or security posture, treat it as in-scope code and apply the same secure coding expectations and review gates. Keep the scope statement explicit so auditors see the intent and coverage. 2

We outsource development to a third party. How do we cover 8.28?

Flow down secure coding requirements contractually, require evidence (scan results, review records, exception approvals), and validate their delivery during onboarding and periodic reviews. Keep those artifacts with your ISMS evidence. 1

Footnotes

  1. ISO/IEC 27001 overview

  2. ISMS.online Annex A control index


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream