Secure Software Development
Organizations that develop software in-house must implement secure coding practices, code reviews, vulnerability testing, and other security controls throughout the development lifecycle to meet the C2M2 v2.1 ARCHITECTURE-1.D requirement. This applies to all internally developed applications, including business tools, operational technology software, and customer-facing systems.
Key takeaways:
- Establish a formal secure software development lifecycle (SSDLC) with security checkpoints at each phase
- Implement mandatory code reviews and static/dynamic application security testing
- Train developers on secure coding practices and common vulnerability patterns
- Document all security controls and testing results for audit readiness
- Apply requirements to ALL in-house software, not just production applications
The C2M2 Secure Software Development requirement mandates that any software developed internally follows established security practices throughout its lifecycle. This requirement recognizes that custom-developed applications represent significant risk vectors for critical infrastructure operators, particularly in the energy sector where operational technology and business systems increasingly rely on proprietary software.
The scope extends beyond traditional IT applications to include scripts, automation tools, data analysis programs, and modifications to commercial software. Energy sector organizations often develop specialized software for grid management, SCADA system interfaces, customer billing integrations, and compliance reporting tools. Each represents a potential attack surface if not developed with security as a primary consideration.
Many organizations mistakenly limit secure development practices to customer-facing or "critical" applications. The C2M2 framework requires comprehensive coverage of all in-house development efforts, reflecting the reality that attackers frequently exploit internal tools and utilities as initial compromise vectors.
Regulatory text
The C2M2 v2.1 ARCHITECTURE-1.D requirement states: "Software that is developed in-house follows secure software development practices." This establishes a baseline expectation that organizations maintain formal security controls throughout the software development lifecycle, from initial design through deployment and maintenance.
The requirement applies to all software created by internal teams or contractors working under the organization's direction. This includes traditional applications, scripts, database queries, API integrations, mobile applications, embedded system code, and modifications to vendor-supplied software. The C2M2 framework does not distinguish between "major" and "minor" development efforts—security practices scale to match project complexity but remain mandatory across all initiatives.
Plain-English Interpretation
This requirement means your organization needs documented, repeatable processes that inject security at every stage of software creation. You can't bolt on security after development completes. Instead, threat modeling happens during design, secure coding standards guide implementation, security testing validates controls, and ongoing monitoring catches runtime issues.
Think beyond your IT department's web applications. That Excel macro your finance team built? The Python script automating meter readings? The SQL reports for regulatory filings? All fall under this requirement if developed internally. The standard recognizes that modern critical infrastructure depends on countless custom software components, each representing potential compromise paths if developed without security considerations.
Who This Applies To
Entity Types:
- Energy sector organizations (utilities, generation facilities, transmission operators)
- Critical infrastructure operators using C2M2 for maturity assessment
- Organizations with any internal software development capabilities
- Entities modifying commercial software or creating system integrations
Operational Context:
- IT departments with development teams
- Operational technology groups creating control system interfaces
- Business units developing automation tools or reporting solutions
- Third-party developers working under organizational contracts
- DevOps teams managing infrastructure as code
The requirement activates whenever your organization creates executable code, regardless of the development team's formal structure or the software's intended use.
Step-by-Step Implementation
Phase 1: Establish Foundation (Days 1-30)
1. Inventory all development activities
- Survey all departments for custom software, scripts, and tools
- Document programming languages, frameworks, and deployment environments
- Identify development team members across the organization
- Create a software asset register with ownership and criticality ratings
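The steps above culminate in a software asset register. A minimal sketch of such a register in Python follows; the field names, entries, and criticality scale are illustrative assumptions, not anything C2M2 mandates:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class SoftwareAsset:
    """One entry in the software asset register (illustrative fields)."""
    name: str
    owner: str            # accountable business or IT owner
    language: str
    deployment_env: str   # e.g. "corporate IT", "OT network"
    criticality: str      # e.g. "high", "medium", "low"
    repo_url: str = ""
    notes: str = ""

# Example entries gathered from a departmental survey
register = [
    SoftwareAsset("meter-reading-automation", "Operations", "Python",
                  "OT network", "high"),
    SoftwareAsset("regulatory-sql-reports", "Compliance", "SQL",
                  "corporate IT", "medium"),
]

# Persist as JSON so auditors and tooling can consume the same inventory
print(json.dumps([asdict(a) for a in register], indent=2))
```

Keeping the register in a machine-readable form makes the audit evidence and the "shadow IT" monitoring described later much easier to automate.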
2. Define secure development standards
- Select industry-standard secure coding guidelines (OWASP, SANS Top 25)
- Establish minimum security requirements for different software categories
- Create security checklist templates for common development scenarios
- Document approved cryptographic libraries and security frameworks
3. Implement code repository controls
- Centralize code storage in version control systems
- Enable access logging and change tracking
- Establish branching and merge approval workflows
- Configure automated backups and disaster recovery
Phase 2: Build Security Capabilities (Days 31-60)
1. Deploy security testing tools
- Implement static application security testing (SAST) in development environments
- Configure dynamic application security testing (DAST) for runtime analysis
- Establish software composition analysis (SCA) for dependency checking
- Create security testing playbooks for different technology stacks
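To make the SCA step above concrete, the fragment below flags pinned dependencies that appear on an internal known-vulnerable list. The list, package versions, and file format are simplified assumptions for illustration; real SCA tools query full vulnerability databases:

```python
# Minimal sketch of software composition analysis. The "known vulnerable"
# pairs here are made up for the example, not real advisories.
KNOWN_VULNERABLE = {
    ("requests", "2.5.0"),
    ("pyyaml", "3.12"),
}

def parse_requirements(text: str):
    """Parse simple 'name==version' lines from a requirements file."""
    deps = []
    for line in text.splitlines():
        line = line.split("#")[0].strip()   # drop trailing comments
        if "==" in line:
            name, version = line.split("==", 1)
            deps.append((name.strip().lower(), version.strip()))
    return deps

def vulnerable_deps(text: str):
    """Return pinned dependencies that match the known-vulnerable list."""
    return [d for d in parse_requirements(text) if d in KNOWN_VULNERABLE]

reqs = "requests==2.5.0\npyyaml==6.0  # patched\nflask==2.3.2\n"
print(vulnerable_deps(reqs))
```

A playbook entry would then specify when this check runs (on every commit, on dependency updates) and who triages the findings.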
2. Develop security training program
- Conduct secure coding workshops for all developers
- Provide language-specific security guidance
- Share common vulnerability examples from your environment
- Establish ongoing education requirements
3. Create security review processes
- Define mandatory code review procedures with security focus
- Establish threat modeling requirements for new projects
- Create security architecture review checkpoints
- Document approval workflows for production deployments
Phase 3: Operationalize and Mature (Days 61-90)
1. Integrate security into CI/CD pipelines
- Automate security scans in build processes
- Configure quality gates that block insecure code
- Implement secrets scanning to prevent credential exposure
- Create security dashboards for development teams
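A minimal version of the quality gate above can be sketched in Python. This assumes a simplified JSON findings format; real SAST tools emit richer formats such as SARIF, so treat the field names here as placeholders:

```python
import json

# Severity levels that should block the build (a policy choice, not C2M2 text)
BLOCKING = {"critical", "high"}

def gate(findings):
    """Return an exit code: nonzero if any unsuppressed blocking finding remains."""
    blocking = [f for f in findings
                if f.get("severity", "").lower() in BLOCKING
                and not f.get("suppressed", False)]
    for f in blocking:
        print(f"BLOCK {f['severity'].upper()}: {f.get('rule')} "
              f"in {f.get('file')}:{f.get('line')}")
    return 1 if blocking else 0

# Simplified stand-in for a SAST tool's JSON export (field names assumed)
report = '''[
  {"severity": "high", "rule": "sql-injection", "file": "billing.py", "line": 42},
  {"severity": "low",  "rule": "debug-enabled", "file": "app.py", "line": 7}
]'''
exit_code = gate(json.loads(report))
print("build blocked" if exit_code else "build passed")
```

In a pipeline, the nonzero exit code is what actually fails the build stage; the printed lines feed the team-facing dashboard.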
2. Establish vulnerability management
- Define severity ratings and remediation timelines
- Create procedures for emergency security patches
- Implement tracking for security debt and technical remediation
- Establish metrics for security defect trends
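The severity ratings and remediation timelines can be encoded so that overdue items are flagged automatically. A sketch, with illustrative SLA values you would replace with your own policy:

```python
from datetime import date, timedelta

# Example remediation SLAs in days (illustrative, not prescribed by C2M2)
SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

def due_date(found: date, severity: str) -> date:
    """Remediation deadline for a finding of the given severity."""
    return found + timedelta(days=SLA_DAYS[severity])

def overdue(vulns, today: date):
    """Return open vulnerabilities whose remediation deadline has passed."""
    return [v for v in vulns
            if not v["closed"] and due_date(v["found"], v["severity"]) < today]

vulns = [
    {"id": "VULN-1", "severity": "critical", "found": date(2024, 1, 1), "closed": False},
    {"id": "VULN-2", "severity": "low", "found": date(2024, 1, 1), "closed": False},
]
# VULN-1 is past its 7-day SLA by February; VULN-2's 180-day SLA has not lapsed
late = overdue(vulns, today=date(2024, 2, 1))
print([v["id"] for v in late])
```

The same data structure doubles as audit evidence for the remediation tracking artifact listed below.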
3. Document compliance evidence
- Create templates for security review documentation
- Establish retention policies for testing results
- Build audit packages demonstrating process adherence
- Configure automated compliance reporting
Required Evidence and Artifacts
Policy Documentation:
- Secure Software Development Lifecycle (SSDLC) policy
- Secure coding standards for each programming language used
- Security testing methodology and tool configuration
- Code review procedures with security checkpoints
- Vulnerability management and remediation procedures
Process Evidence:
- Security training completion records for all developers
- Threat models for significant applications
- Code review logs showing security considerations
- Security testing results (SAST, DAST, penetration testing)
- Vulnerability remediation tracking and closure evidence
Technical Controls:
- Source code repository configurations and access logs
- Security tool deployment evidence and scan results
- CI/CD pipeline configurations showing security integration
- Secrets management tool implementation
- Security monitoring and alerting configurations
Ongoing Compliance:
- Monthly security metrics dashboards
- Quarterly vulnerability trend analysis
- Annual security training updates
- Periodic third-party security assessments
- Continuous improvement documentation
Common Exam Questions and Hangups
Typical Auditor Questions:
1. "Show me your secure coding standards and how developers are trained on them."
- Have training records readily available
- Demonstrate how standards are integrated into development tools
- Show examples of code reviews catching security issues
2. "How do you ensure all in-house developed software follows these practices?"
- Present your software inventory process
- Show how new projects are identified and brought into compliance
- Demonstrate monitoring for "shadow IT" development
3. "What happens when security vulnerabilities are discovered?"
- Walk through your vulnerability management process
- Show examples of remediation with timelines
- Demonstrate how lessons learned improve standards
4. "How do you handle legacy applications developed before these requirements?"
- Document your risk-based approach to retroactive compliance
- Show remediation plans for high-risk legacy systems
- Demonstrate compensating controls for systems awaiting updates
Common Audit Failures:
- Incomplete scope: Organizations often miss scripts, queries, and "minor" tools
- Missing evidence: Security activities happen but aren't documented
- Inconsistent application: Some teams follow practices while others don't
- Tool deployment without process: Security scanners run but findings aren't addressed
- Training gaps: Developers lack current security knowledge for their technology stack
Implementation Mistakes to Avoid
Mistake 1: Limiting Scope to "Production" Applications
Many organizations only apply secure development practices to customer-facing or production systems. C2M2 requires coverage of ALL in-house development. That includes test tools, administrative scripts, proofs of concept, and internal utilities. Create a clear definition of "software" that captures all executable code your organization produces.
Mistake 2: Over-Relying on Automated Tools
Security scanning tools are essential but insufficient alone. A clean SAST scan doesn't prove secure development—it might indicate the scanner can't understand your code patterns. Balance automated testing with manual code reviews, threat modeling, and architecture assessments. Tools find technical vulnerabilities; humans find logic flaws and business risks.
Mistake 3: Treating Security as a Gate, Not a Process
Adding a security review only before production deployment creates bottlenecks and encourages workarounds. Instead, embed security throughout development. Early threat modeling costs less than fixing architectural flaws. Continuous security testing catches issues when they're cheapest to fix. Make security a development accelerator, not a brake.
Mistake 4: Generic Security Standards
Copying OWASP or SANS guidelines without customization leads to developer frustration and non-compliance. Tailor standards to your technology stack, risk profile, and development culture. A utility's SCADA interface team needs different guidance than the customer portal team. Create practical, specific standards that developers can actually follow.
Mistake 5: Inadequate Legacy System Planning
Auditors will ask about software developed before your secure development program. "We'll address it eventually" isn't an acceptable answer. Create a formal legacy assessment process, risk-rate existing applications, and document remediation timelines. Show continuous progress even if complete remediation takes years.
Risk Implications
Technical Risks:
- Vulnerabilities in custom software provide direct attack paths into critical systems
- Insecure development practices can cascade through interconnected applications
- Logic flaws in operational technology interfaces may enable physical consequences
- Weak authentication or authorization can expose sensitive control systems
Compliance Risks:
- Failure to demonstrate secure development practices impacts C2M2 maturity ratings
- Incidents stemming from insecure code trigger regulatory scrutiny
- Inability to produce development security evidence delays audit completion
- Inconsistent practices across teams create systemic compliance gaps
Business Risks:
- Security incidents from custom software damage operational reliability
- Remediation costs compound when vulnerabilities are found post-deployment
- Insecure software delays project timelines when security issues block releases
- Reputation damage follows breaches traced to poor development practices
30/60/90-Day Execution Plan
Immediate Actions (Days 1-30):
- Complete software development inventory across all departments
- Document existing security practices and identify gaps
- Select and procure essential security testing tools
- Draft secure software development policy
- Begin developer security awareness communications
Near-Term Improvements (Days 31-60):
- Roll out secure coding standards with team-specific guidance
- Implement code review requirements with security checklists
- Deploy SAST tools in development environments
- Conduct initial developer security training
- Establish vulnerability tracking and remediation processes
Sustained Operations (Days 61-90):
- Integrate security testing into CI/CD pipelines
- Complete threat modeling for critical applications
- Implement security metrics and dashboards
- Conduct tabletop exercise for security incident response
- Schedule third-party assessment to validate program effectiveness
Success Metrics:
- Percentage of developers trained on secure coding practices
- Number of applications with completed threat models
- Security defect detection and remediation rates
- Time from vulnerability discovery to patch deployment
- Audit finding trends showing continuous improvement
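One of these metrics, time from vulnerability discovery to patch deployment, can be computed directly from closed findings. A sketch with illustrative data:

```python
from datetime import date
from statistics import mean

# Closed security findings with discovery and fix dates (illustrative data)
closed = [
    {"id": "F-1", "found": date(2024, 3, 1), "fixed": date(2024, 3, 8)},
    {"id": "F-2", "found": date(2024, 3, 5), "fixed": date(2024, 3, 20)},
    {"id": "F-3", "found": date(2024, 4, 2), "fixed": date(2024, 4, 5)},
]

def mean_time_to_remediate(findings) -> float:
    """Average days from vulnerability discovery to patch deployment."""
    return mean((f["fixed"] - f["found"]).days for f in findings)

print(f"MTTR: {mean_time_to_remediate(closed):.1f} days")
```

Tracking this number per quarter, ideally split by severity, gives auditors the trend evidence the metric list above calls for.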
Frequently Asked Questions
Does this requirement apply to small scripts and automation tools our operations team creates?
Yes, the C2M2 requirement covers all in-house developed software regardless of size or complexity. However, security controls should scale appropriately—a 50-line Python script needs basic code review and testing, not the full threat modeling exercise required for a customer-facing application.
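As an example of a control scaled to a small script, a reviewer might run a lightweight hard-coded-credential check like the sketch below. The patterns are illustrative and far from exhaustive; a dedicated secrets-scanning tool covers much more:

```python
import re

# Illustrative patterns for obvious hard-coded credentials only; real secret
# scanners use far richer rule sets and entropy checks.
PATTERNS = [
    re.compile(r"""(?i)(password|passwd|secret|api[_-]?key)\s*=\s*['"][^'"]+['"]"""),
    re.compile(r"AKIA[0-9A-Z]{16}"),  # shape of an AWS access key ID
]

def find_secrets(source: str):
    """Return (line_number, line) pairs that look like embedded secrets."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

script = 'db_password = "hunter2"\nprint("reading meters")\n'
for lineno, line in find_secrets(script):
    print(f"line {lineno}: possible secret -> {line}")
```

A check this small is proportionate to a 50-line operations script, while a customer-facing application still warrants the full review and threat-modeling process.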
We modify vendor software configurations and add custom modules. Does this count as "in-house development"?
Yes, any custom code, scripts, or modules you develop—even within vendor platforms—falls under this requirement. This includes customizations to SCADA systems, ERP modifications, and integration scripts. Document these modifications in your software inventory and apply appropriate security controls.
How do we handle contractors or consultants who develop software for us?
Software developed by third parties under your direction must meet the same secure development standards. Include security requirements in contracts, provide contractors with your secure coding standards, and implement additional review processes for externally developed code before accepting delivery.
What if we discover vulnerabilities in software developed years ago before we had security standards?
Create a risk-based remediation plan. Assess and prioritize legacy applications based on exposure, criticality, and exploitability. Document this assessment and your remediation timeline. Auditors understand you can't fix everything immediately but expect to see a formal process for addressing legacy risk.
Our developers claim security testing slows down delivery. How do we balance speed with security?
Implement security tools that provide immediate feedback within developer workflows. Use incremental security testing rather than large, blocking reviews. Show developers how early security testing prevents costly rework. Track metrics proving that secure development actually accelerates delivery by reducing post-release defects.
Do we need different secure coding standards for each programming language we use?
Yes, effective secure coding guidance must be language-specific. Generic standards frustrate developers and miss language-specific vulnerabilities. Start with high-level principles applicable across languages, then provide detailed guidance for each technology stack. Focus first on your most-used languages and highest-risk applications.