Secure Software Development

To meet the C2M2 secure software development requirement, you must run your in-house software through a defined secure development lifecycle: secure coding standards, peer review, automated vulnerability testing, tracked remediation, and retained evidence that the controls operate in real delivery pipelines 1. Build it into SDLC gates and prove it with tickets, scan results, and approvals.

Key takeaways:

  • Scope first: “in-house developed” software in the C2M2 assessment boundary must follow secure SDLC practices 1.
  • Audits fail on missing operating evidence; keep review records, scan outputs, and remediation trails tied to releases 1.
  • Operationalize with release gates, documented tooling settings, and escalation paths, not a policy PDF 1.

“Secure Software Development” under C2M2 is a requirement you operationalize in engineering workflows, then defend with evidence. The text is short, but the expectation is specific: if you develop software internally, you need repeatable practices that reduce preventable weaknesses and produce a traceable record of review and testing activity 1. For a Compliance Officer, CCO, or GRC lead, the work is less about choosing a particular framework brand and more about making sure the organization can prove three things on demand: (1) what “secure development” means in your environment, (2) that teams actually do it for each release or change set in scope, and (3) that exceptions are managed with approvals, compensating controls, and follow-up.

C2M2 is often used by energy sector and critical infrastructure operators to assess cybersecurity capability maturity across scoped business units or operational technology environments 1. That scoping detail matters. The fastest way to fail an assessment is to have secure SDLC practices in one engineering group while another “in scope” group ships code without peer review, scanning, or documented remediation. Your job is to define the minimum secure SDLC bar, embed it into delivery tooling, and retain the artifacts that demonstrate operation.

Regulatory text

Requirement (C2M2 v2.1 ARCHITECTURE-1.D): “Software that is developed in-house follows secure software development practices.” 1

Operator interpretation (what you must do):

  • Identify the software you develop internally within the C2M2 assessment scope, including scripts, services, internal tools, and code that deploys into OT or supports critical business operations 1.
  • Define and enforce “secure software development practices” as concrete SDLC controls, typically including secure coding standards, code review, vulnerability testing, and remediation tracking 1.
  • Produce operating evidence that these practices happen consistently, and that findings are monitored, triaged, escalated, and closed 1.

Plain-English requirement meaning (what examiners expect to see)

You need an SDLC where security is a normal release condition. Teams should not be able to merge or deploy high-risk code changes without:

  • a documented standard for how code should be written safely,
  • a second set of eyes (peer review) before merge,
  • automated checks for known weakness patterns and vulnerable dependencies,
  • a workflow to fix or formally accept risk when issues appear,
  • records that show these steps occurred for real releases 1.

This is a “show me” requirement. The common failure mode is a policy that says “we do secure coding,” while the Git repo shows direct-to-main commits, scans disabled, and vulnerability tickets with no owner.

Who it applies to

Entity types (typical C2M2 adopters):

  • Energy sector organizations
  • Critical infrastructure operators 1

Operational context (when it applies):

  • Applies when your organization has adopted C2M2 for a scoped environment (business unit, function, or OT environment) and you develop software in-house within that scope 1.

In-scope software (practical examples):

  • Applications deployed into production (customer-facing or internal)
  • OT-supporting services, collectors, and telemetry agents
  • Infrastructure-as-code, configuration code, deployment scripts
  • Internal libraries shared across teams

If engineering says “it’s just scripts,” treat that as a red flag until you validate where those scripts run and what they can change.

What you actually need to do (step-by-step)

1) Define scope and ownership

  1. Create an inventory of in-house software in scope. Tie each repo/app to an owner, environment, and criticality.
  2. Assign SDLC control ownership. One accountable engineering leader for enforcement (tooling and gates) and one GRC/control owner for evidence and testing.

Deliverable: scoped inventory + RACI that says who approves exceptions, who maintains tooling, and who produces evidence.
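The scoped inventory works best as structured data that tooling can read, so gates and evidence collection are driven from one source of truth rather than a spreadsheet. A minimal sketch, with hypothetical repo names, owners, and criticality values:

```python
# Minimal sketch of a scoped software inventory (all entries hypothetical).
# Each record ties a repo to an owner, environment, and criticality so
# enforcement and evidence collection can be driven from one place.

INVENTORY = [
    {"repo": "ot-telemetry-agent", "owner": "eng-ot", "env": "OT",
     "criticality": "high", "in_scope": True},
    {"repo": "billing-portal", "owner": "eng-apps", "env": "IT",
     "criticality": "medium", "in_scope": True},
    {"repo": "wiki-theme", "owner": "eng-apps", "env": "IT",
     "criticality": "low", "in_scope": False},
]

def in_scope_repos(inventory):
    """Return the repos that must follow the secure SDLC baseline."""
    return [r["repo"] for r in inventory if r["in_scope"]]

print(in_scope_repos(INVENTORY))  # ['ot-telemetry-agent', 'billing-portal']
```

Keeping scope machine-readable also makes the "how do you know you found them all" audit question easier: you can diff the inventory against your Git organization on a schedule.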

2) Set the minimum secure SDLC baseline (write it like an enforcement-ready standard)

  1. Secure coding standards. Select a baseline and translate it into “must/shall” rules teams can follow. Keep it short and enforceable.
  2. Code review rules. Define what requires review (most production-bound changes), who can approve, and separation-of-duties expectations for critical components.
  3. Vulnerability testing expectations. Require automated scanning appropriate to your stack (code, dependencies, containers, build artifacts) and define severity-based handling.
  4. Remediation and exception handling. Document how findings are ticketed, prioritized, fixed, verified, or risk-accepted with sign-off.

Your baseline should also specify evidence sources (CI logs, pull requests, scan dashboards, ticketing system).
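Severity-based handling from step 3 is easiest to enforce when the baseline states concrete remediation windows. The following sketch uses hypothetical internal deadlines; tune the numbers to your own risk appetite:

```python
from datetime import date, timedelta

# Hypothetical severity-based remediation windows (days to fix).
# These values are illustrative, not a C2M2 mandate.
REMEDIATION_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def remediation_due(severity: str, found: date) -> date:
    """Compute the internal remediation deadline for a finding."""
    return found + timedelta(days=REMEDIATION_DAYS[severity])

print(remediation_due("critical", date(2024, 1, 1)))  # 2024-01-08
```

Writing the windows down as data, not prose, lets the same table drive ticket due dates, aging reports, and escalation triggers.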

3) Implement SDLC gates in the tools teams already use

  1. Branch protections / merge controls. Enforce review requirements technically, not by training alone.
  2. CI/CD security checks. Turn on scanning in pipelines and fail builds on defined conditions.
  3. Ticket integration. Auto-create tickets for findings or require linking a ticket to a merge request when security exceptions are used.
  4. Logging and retention. Document the systems and retention settings that store SDLC evidence 1.

This aligns to the practical control expectation to document “systems, events, thresholds, and retention settings” supporting secure development 1.
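The gate logic itself is simple; what matters is that it runs in the pipeline, not in a policy document. A minimal sketch of a merge-gate decision, assuming the pipeline exposes fields like the hypothetical `approvals`, `scan_passed`, and `exception_ticket` below:

```python
# Sketch of a merge-gate decision for a production-bound change.
# Field names (approvals, scan_passed, exception_ticket) are illustrative;
# map them to whatever your Git platform and CI actually expose.

def gate_decision(change: dict) -> tuple[bool, list[str]]:
    """Return (allowed, reasons_blocked) for a proposed change."""
    reasons = []
    if change.get("approvals", 0) < 1:
        reasons.append("peer review missing")
    if not change.get("scan_passed", False) and not change.get("exception_ticket"):
        reasons.append("security scan failed with no approved exception")
    return (not reasons, reasons)

ok, why = gate_decision({"approvals": 0, "scan_passed": True})
print(ok, why)  # False ['peer review missing']
```

Note the design choice: a failed scan can still merge, but only with a linked exception ticket, which is exactly the evidence trail an assessor will ask for.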

4) Make monitoring and escalation real (not aspirational)

  1. Define who watches findings queues. Assign on-call or operational responsibility for security findings triage.
  2. Set escalation paths. For stuck items, define when engineering leadership and risk owners get involved.
  3. Prove follow-through. Keep review evidence, follow-up tickets, and escalation records that show findings are actively monitored and resolved 1.
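The escalation steps above can be sketched as an aging check over the findings queue. Thresholds here are hypothetical internal expectations, not framework-mandated values:

```python
from datetime import date

# Hypothetical aging thresholds (days a finding may stay open by severity)
# before engineering leadership and risk owners are pulled in.
ESCALATE_AFTER = {"critical": 3, "high": 14, "medium": 45}

def needs_escalation(finding: dict, today: date) -> bool:
    """True when an open finding has aged past its severity threshold."""
    if finding["status"] == "closed":
        return False
    age = (today - finding["opened"]).days
    return age > ESCALATE_AFTER.get(finding["severity"], 90)

f = {"severity": "critical", "status": "open", "opened": date(2024, 1, 1)}
print(needs_escalation(f, date(2024, 1, 10)))  # True
```

Running a check like this on a schedule, and ticketing the results, is what turns "we monitor findings" from an aspiration into retained evidence.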

5) Establish control testing and management reporting

  1. Control test sampling. On a recurring cadence, sample releases/PRs and verify required steps occurred.
  2. Metrics (qualitative is fine). Track backlog health and repeated patterns, and report where gates are bypassed or exceptions cluster.
  3. Fix control design gaps. If teams can ship around controls, treat that as a control failure and remediate in tooling.
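Control test sampling can be automated end-to-end. The sketch below samples releases and reports which ones are missing required evidence; the evidence keys are illustrative assumptions, not standard field names:

```python
import random

# Sketch of recurring control testing: sample releases and verify the
# required evidence fields are present. Keys are illustrative.
REQUIRED = {"review_url", "scan_report", "remediation_ticket"}

def sample_and_check(releases: list[dict], n: int, seed: int = 0) -> dict:
    """Sample n releases; report which ones lack required evidence."""
    sample = random.Random(seed).sample(releases, min(n, len(releases)))
    failures = {r["id"]: sorted(REQUIRED - r.keys()) for r in sample
                if not REQUIRED <= r.keys()}
    return {"tested": len(sample), "failed": failures}

releases = [
    {"id": "r1", "review_url": "...", "scan_report": "...",
     "remediation_ticket": "..."},
    {"id": "r2", "review_url": "..."},
]
print(sample_and_check(releases, 2))
```

A fixed seed makes a given test run reproducible, which helps when you need to re-derive last quarter's sample for an assessor.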

Required evidence and artifacts to retain (audit-ready list)

Keep artifacts tied to specific repos/releases so you can answer “show me for this deployment.”

Program documentation

  • Secure SDLC standard (secure coding, review, testing, remediation, exceptions)
  • Scope statement for in-house developed software under C2M2 assessment boundary 1
  • Roles and responsibilities (RACI) and exception approval authority

Tooling configuration evidence

  • Branch protection settings and code owner rules (screenshots or exported configs)
  • CI/CD configuration showing security checks enabled
  • Documented systems, events, thresholds, and retention settings for SDLC-related logs and records 1

Operating evidence (the “proof”)

  • Pull request history showing reviews and approvals
  • Scan results (SAST/SCA/container) and build logs for selected releases
  • Findings tickets with timestamps, ownership, remediation actions, and closure evidence
  • Exception/risk acceptance records with approver and rationale
  • Escalation records when issues are not resolved in expected timeframes 1

Common exam/audit questions and hangups

Expect variations of:

  • “Which applications are in scope, and how do you know you found them all?”
  • “Show me the last production release. Where is the code review evidence?”
  • “Do scans run on every merge/build? Show the pipeline configuration.”
  • “How do you track vulnerabilities from discovery to closure?”
  • “What happens when engineering can’t fix an issue quickly? Who can approve risk, and where is it documented?”
  • “How long do you retain logs and records needed to prove operation?” 1

Hangups that slow reviews:

  • Evidence exists but is not linkable to a specific deployment.
  • Different teams follow different practices with no documented rationale.
  • “We scan” means ad hoc developer scanning, not enforced pipeline checks.

Frequent implementation mistakes (and how to avoid them)

  1. Policy-only compliance. Fix: implement technical gates and retain operating evidence per release.
  2. Unscoped “secure SDLC.” Fix: write down what repos/apps are covered and why 1.
  3. No exception mechanism. Fix: create a standard risk acceptance workflow with expirations and compensating controls.
  4. Findings without owners. Fix: require assignment and SLA-like internal expectations; track aging and escalate.
  5. Evidence retention gaps. Fix: explicitly document retention settings for CI logs, scan outputs, and ticket records 1.

Risk implications (why this requirement gets attention)

C2M2-focused assessors care about secure software development because software becomes part of your operational attack surface. If secure development is incomplete or not reviewed, suspicious activity and control failures can go undetected, and you may lack operating evidence during internal control testing, audits, customer diligence, or regulator review 1. The business risk is practical: delayed incident response, hard-to-defend risk acceptances, and reduced credibility with external stakeholders.

Practical execution plan (30/60/90)

30 days (stabilize and scope)

  • Confirm C2M2 assessment boundary and identify all in-house software in that boundary 1.
  • Publish a minimum secure SDLC standard: code review required, scanning required, remediation workflow, exception approvals.
  • Pick evidence sources (Git platform, CI/CD, scanner tooling, ticketing) and confirm retention settings are documented 1.

60 days (enforce in tooling and start producing evidence)

  • Turn on branch protections and require approvals for in-scope repos.
  • Integrate automated vulnerability testing into CI/CD for those repos.
  • Stand up triage and escalation: named owners, queue monitoring, and documented follow-up tickets and escalations 1.
  • Run a tabletop audit: pick a recent release and assemble the evidence pack end-to-end.
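The tabletop audit in the last bullet amounts to building a per-release evidence index and flagging gaps. A minimal sketch, with hypothetical artifact keys and identifiers:

```python
# Sketch of a tabletop-audit evidence pack for one release: collect the
# artifact references in one index so "show me for this deployment" is
# answerable. Keys and identifiers below are illustrative placeholders.

def evidence_pack(release: dict) -> dict:
    """Build a per-release evidence index and flag missing artifacts."""
    wanted = ["pull_request", "scan_results", "build_log", "tickets", "approvals"]
    pack = {k: release.get(k, "MISSING") for k in wanted}
    pack["complete"] = all(v != "MISSING" for v in pack.values())
    return pack

r = {"pull_request": "PR-123", "scan_results": "scan-9", "build_log": "ci-77",
     "tickets": ["SEC-42"], "approvals": ["alice"]}
print(evidence_pack(r)["complete"])  # True
```

If the pack comes back incomplete for a recent release, you have found a control design gap before an assessor does.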

90 days (control test and scale)

  • Expand enforcement to remaining repos in scope and resolve edge-case pipelines.
  • Start periodic control testing with sampling and documented results.
  • Review exceptions; close out stale items and tighten approval criteria.
  • If you manage GRC evidence in Daydream, map each SDLC control to required artifacts so engineering uploads once and you can reuse it across audits and customer diligence.

Frequently Asked Questions

Does this apply if we only build internal tools and scripts?

If the tools/scripts are developed in-house and fall within your C2M2 assessment scope, treat them as in scope. Focus on where they run, what they can change, and whether failures could affect operations 1.

What is the minimum evidence an auditor will accept?

For sampled releases, you need traceable proof of code review, vulnerability testing results, and remediation or formally approved exceptions. Keep the records in systems with documented retention so you can reproduce the trail later 1.

Can we meet the requirement with developer training alone?

Training helps, but it does not demonstrate that secure practices happened for real changes. Auditors typically expect enforced workflow steps (reviews and scans) plus tickets and approvals that show operation 1.

How should we handle security exceptions when a fix can’t ship in time?

Use a documented risk acceptance workflow with an approver, rationale, compensating controls, and a follow-up ticket to fix. Keep the approval record tied to the release and make sure escalations are visible when remediation stalls 1.

What’s the biggest “gotcha” during assessments?

Evidence fragmentation. If reviews live in Git, scans live in a separate console, and remediation lives in chat, you will struggle to prove end-to-end operation. Make tickets the system of record and link PRs, builds, and scan results to those tickets 1.

How do we operationalize this across multiple engineering teams without constant firefighting?

Standardize the minimum gates, then implement them as reusable templates in CI/CD and repo settings. Track exceptions centrally so GRC can see patterns and push fixes upstream instead of arguing repo by repo 1.

Implementation guidance

Use the cited implementation guidance when translating the requirement into day-to-day operating steps 2.

Footnotes

  1. Cybersecurity Capability Maturity Model v2.1

  2. DOE C2M2 program

