PR.PS-06: Secure software development practices are integrated, and their performance is monitored throughout the software development life cycle
To meet the PR.PS-06 requirement, you must embed secure SDLC controls into everyday engineering work and prove they run effectively. That means defined secure development standards, automated and manual security checks at key pipeline stages, tracked exceptions, and metrics that show the controls perform over time (NIST CSWP 29).
Key takeaways:
- Integrate security controls into planning, coding, build, test, release, and maintenance, not as a separate “security review” at the end (NIST CSWP 29).
- Monitoring is part of the requirement: capture performance signals (coverage, findings, remediation, exceptions) and review them on a defined cadence (NIST CSWP 29).
- Audit readiness depends on traceable evidence: policy → procedure → tool outputs → metrics → management review records (NIST CSF 1.1 to 2.0 Core Transition Changes).
PR.PS-06 is a requirement about operational discipline, not security aspiration. You are expected to show that secure software development practices are built into your SDLC and that you continuously monitor whether those practices are working (NIST CSWP 29). For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat this as a control system with clear owners, defined gates, measurable outcomes, and repeatable evidence collection.
The exam risk is rarely “no security tools exist.” The common failure is missing integration (controls exist but are bypassable) or missing monitoring (controls run, but nobody tracks performance, exceptions, or trends). PR.PS-06 also tends to sprawl across teams: engineering owns code, security owns standards and detection logic, platform teams own CI/CD, and product owns release timelines. Your job is to bind these into one requirement-aligned operating model with a single control narrative and defensible artifacts.
This page gives requirement-level implementation guidance you can execute quickly: what it means in plain English, where it applies, step-by-step controls to implement, what evidence to retain, and how to avoid audit hangups. It also includes a practical execution plan and FAQs aligned to real SDLC realities. References are limited to NIST CSF 2.0 materials provided (NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes).
Regulatory text
Text (excerpt): “Secure software development practices are integrated, and their performance is monitored throughout the software development life cycle” (NIST CSWP 29; NIST CSF 1.1 to 2.0 Core Transition Changes).
Operator meaning: You must (1) define secure development practices, (2) integrate them into each SDLC phase so they are routinely executed, and (3) monitor performance to confirm the practices operate as intended over time (NIST CSWP 29). “Integrated” means developers encounter the controls in normal workflows (backlog templates, coding standards, CI/CD checks, release gates). “Monitored” means you can produce metrics and review records that show coverage, findings, remediation, exceptions, and trends, with action taken when performance degrades.
Plain-English interpretation (what an examiner expects)
A defensible PR.PS-06 implementation answers four questions with evidence:
- What secure SDLC practices are required? Documented standards and procedures that are specific enough to implement.
- Where do they run in the SDLC? Clear mapping of practices to phases and pipelines (plan/build/test/release/operate).
- Who is accountable? Named control owners and engineering owners, plus escalation and exception authority.
- How do you know they work? Continuous monitoring and management review of control performance (NIST CSWP 29).
If you can’t show performance monitoring, you have a design gap even if engineering is “doing security.”
Who it applies to (entity and operational context)
Applies to: Any organization operating a cybersecurity program that builds, customizes, configures, or deploys software, including internal apps, customer-facing products, infrastructure-as-code, scripts, and CI/CD pipelines (NIST CSWP 29).
Operational contexts to scope explicitly:
- Custom software: in-house applications, APIs, microservices, mobile apps.
- Configured software: major customizations, plugins, workflow logic, low-code apps.
- Delivery mechanisms: CI/CD pipelines, release processes, change management tooling.
- Third-party components: open-source libraries, container images, build actions, SDKs, and outsourced development. PR.PS-06 still applies because secure practices must be integrated into your SDLC and monitored (NIST CSWP 29).
Practical scoping rule: Start with systems that process sensitive data or support critical business processes, then expand coverage as your monitoring matures.
What you actually need to do (step-by-step)
Use this sequence to operationalize PR.PS-06 with minimal rework.
1) Assign ownership and define the control statement
- Name a control owner (often AppSec or GRC) accountable for PR.PS-06 evidence and reporting.
- Name engineering owners by product/platform for execution.
- Write a one-paragraph control statement: “Secure SDLC practices are embedded in each phase and monitored through defined metrics and management review” (NIST CSWP 29).
2) Define secure SDLC practices as enforceable requirements
Create (or refresh) a secure SDLC standard that includes:
- Secure coding expectations (language/framework specific where possible)
- Authentication/authorization patterns your org approves
- Secrets handling requirements
- Dependency/component management expectations
- Logging and security event considerations
- Vulnerability remediation expectations and exception handling
Keep it implementable: each requirement should map to a check (automated, manual, or both).
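One way to keep the standard implementable is to record each requirement alongside its check and enforcement point, so unmapped requirements surface immediately. A minimal sketch, assuming a simple in-memory register; all requirement names, check names, and enforcement points below are illustrative, not from the source:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One secure SDLC requirement paired with its check and enforcement point."""
    text: str
    check: str         # automated tool, manual gate, or both
    enforcement: str   # where in the workflow the check runs

# Illustrative entries; a real standard names specific tools and gates.
STANDARD = [
    Requirement("No hard-coded secrets", "secret scanner", "CI build job"),
    Requirement("Approved authn/authz patterns only", "peer review checklist", "PR template"),
    Requirement("Dependencies pinned and scanned", "dependency scanner", "CI build job"),
    Requirement("Vulnerabilities triaged within SLA", "aging report", "monthly review"),
]

def unmapped(standard):
    """Requirements with no concrete check are design gaps, not controls."""
    return [r.text for r in standard if not r.check]
```

Running `unmapped` over the standard during its periodic refresh gives you a quick self-audit: any requirement it returns is policy without integration.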
3) Integrate practices into SDLC touchpoints (“baked into the workflow”)
Build a phase-to-control mapping table like the one below and use it as your audit backbone.
| SDLC phase | Integrated practices (examples) | Evidence source |
|---|---|---|
| Plan / design | Security requirements in user stories; threat modeling for high-risk changes | Ticket templates, completed threat model records |
| Code | Secure coding checklist; peer review expectations | PR templates, code review records |
| Build | Dependency scanning; build integrity checks | CI logs, scanner reports |
| Test | SAST/DAST where applicable; security unit tests | Tool outputs, test results |
| Release | Security sign-off criteria; exception approvals | Release checklist, approvals |
| Operate / maintain | Vulnerability intake and triage; patching workflow | Vuln tracker, change tickets |
The key is that the control execution is repeatable and produces evidence without heroics.
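A release or build gate built from a mapping table like the one above can be sketched as a comparison of required versus completed checks. The phase and check names here are examples, not a prescribed toolset:

```python
# Required security checks per SDLC phase; names are illustrative.
REQUIRED_CHECKS = {
    "build": {"dependency_scan", "build_integrity"},
    "test": {"sast", "security_unit_tests"},
}

def gate(phase, completed):
    """Return (passed, missing) so a skipped check is visible, not silent."""
    missing = REQUIRED_CHECKS.get(phase, set()) - set(completed)
    return (not missing, sorted(missing))
```

The design point is that the gate names what is missing rather than just failing, which feeds directly into the bypass-prevention and pipeline-health evidence auditors ask for.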
4) Set monitoring metrics that show performance (not just activity)
PR.PS-06 requires performance monitoring (NIST CSWP 29). Define a small metrics set you can sustain. Examples (choose what you can measure reliably):
- Coverage: repos/pipelines onboarded to required checks
- Finding trends: volume and severity distribution over time (qualitative trending is acceptable if consistent)
- Remediation tracking: aging of open issues by category
- Exceptions: count and reasons; time-bound approvals; expiry enforcement
- Pipeline health: pass/fail rates for security gates and top causes of failures
Tie each metric to an owner, a data source, and an internal review cadence.
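Two of the example metrics, coverage and remediation aging, can be computed from data most teams already have. A sketch under the assumption that you can export onboarding counts and finding open dates; bucket thresholds are illustrative:

```python
from datetime import date

def coverage_pct(onboarded, total):
    """Coverage: share of repos/pipelines with required checks enabled."""
    return round(100 * onboarded / total, 1) if total else 0.0

def remediation_aging(opened_dates, today):
    """Bucket open findings by age in days to expose remediation drift."""
    buckets = {"<30": 0, "30-90": 0, ">90": 0}
    for opened in opened_dates:
        age = (today - opened).days
        if age < 30:
            buckets["<30"] += 1
        elif age <= 90:
            buckets["30-90"] += 1
        else:
            buckets[">90"] += 1
    return buckets
```

Both outputs trend cleanly quarter over quarter, which is what makes them performance signals rather than activity counts.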
5) Establish exception handling with expiry and traceability
You need a controlled way to ship when security checks fail:
- Document who can approve exceptions (role-based)
- Require business justification and compensating controls
- Require a target remediation plan and expiration
- Track exceptions in a register and include them in performance reporting
Auditors often accept exceptions; they do not accept invisible exceptions.
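Expiry enforcement is the part most registers miss, and it is easy to sketch. Assuming each register entry carries an id, status, and expiry date (field names are illustrative):

```python
from datetime import date

# Illustrative register entries; real ones also carry justification,
# approver, and compensating controls.
REGISTER = [
    {"id": "EX-101", "status": "open", "expires": date(2025, 2, 1)},
    {"id": "EX-102", "status": "open", "expires": date(2025, 9, 1)},
    {"id": "EX-103", "status": "closed", "expires": date(2025, 1, 1)},
]

def overdue(register, today):
    """Exception IDs past expiry: surface these in performance reporting
    before an auditor finds them."""
    return [e["id"] for e in register
            if e["status"] == "open" and e["expires"] < today]
```

Feeding `overdue` into the recurring management review closes the loop between the exception process and the monitoring requirement.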
6) Run a management review and record decisions
Monitoring without review is weak. Set a recurring forum (security steering, engineering risk review, or SDLC governance) where you:
- Review SDLC security metrics
- Approve remediation initiatives
- Address underperforming teams/pipelines
- Review exception trends and overdue items
Retain minutes, decks, and action items. This becomes your “performance monitored” proof (NIST CSWP 29).
7) Map PR.PS-06 to your governance system and automate evidence collection
Implement the recommended control: map PR.PS-06 to policy, procedure, control owner, and recurring evidence collection (NIST CSF 1.1 to 2.0 Core Transition Changes; NIST CSWP 29). In practice:
- Control in your GRC system with defined evidence requests
- Links to source-of-truth dashboards and repositories
- A quarterly (or other defined cadence) evidence package export
Daydream fits naturally here if you need a clean control-to-evidence workflow, reminders, and an audit-ready evidence binder without chasing engineers across tools.
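The recurring evidence package can be little more than a manifest tying each artifact to its source system. A sketch; the field names are illustrative, not a Daydream or GRC-tool schema:

```python
def build_manifest(control_id, period, artifacts):
    """Assemble an evidence manifest; each artifact records its source so
    the audit trail points back to the system of record."""
    return {
        "control": control_id,
        "period": period,
        "artifacts": [{"name": n, "source": s} for n, s in artifacts],
        "count": len(artifacts),
    }
```

Even a manifest this simple beats a shared-drive folder: it states the collection period and names a source of truth for every artifact.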
Required evidence and artifacts to retain
Retain evidence that proves definition, integration, and monitoring:
Governance artifacts
- Secure SDLC policy/standard and supporting procedures (NIST CSWP 29)
- Control ownership RACI and escalation path
- PR.PS-06 control narrative and SDLC phase mapping (NIST CSF 1.1 to 2.0 Core Transition Changes)
Operational artifacts (sample-based is fine if consistent)
- CI/CD configurations showing required security checks enabled
- Tool outputs (SAST/dependency scans/build logs) for selected releases
- Threat model records for scoped changes
- Release checklists and approvals showing security gates
Monitoring artifacts
- Metrics definitions (data source, owner, review cadence)
- Dashboards or reports showing trends
- Meeting minutes/action items demonstrating review and follow-up (NIST CSWP 29)
Exception artifacts
- Exception requests/approvals with justification and expiry
- Exception register and status reporting
Common exam/audit questions and hangups
What auditors ask
- “Show me where secure development practices are defined and mandatory.”
- “Which pipelines enforce these practices, and how do you prevent bypass?”
- “How do you know the controls are operating effectively over time?” (PR.PS-06 performance monitoring)
- “Show exceptions, who approved them, and whether they expired.”
Frequent hangups
- Evidence is scattered across tools with no single narrative.
- Controls exist but are “optional” (no enforcement, no gating).
- Monitoring is ad hoc (no defined metrics, no documented reviews).
- Teams claim compliance, but only one or two repos are actually onboarded.
Frequent implementation mistakes and how to avoid them
- Mistake: Treating a secure coding policy as “integration.”
  Fix: Map each requirement to an SDLC touchpoint and a check (ticket template, PR template, pipeline job, release gate).
- Mistake: Tooling without governance.
  Fix: Define owners, metrics, exception authority, and review minutes. Tools produce signals; governance turns them into monitored performance.
- Mistake: No exception expiry.
  Fix: Require expiration and track it. Expired exceptions become audit findings fast.
- Mistake: Metrics that measure noise, not performance.
  Fix: Prefer a small set tied to outcomes you can act on (coverage, recurring failure causes, exception trends).
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so this page does not cite specific actions. Practically, PR.PS-06 failures create predictable risk: insecure code paths reach production, vulnerabilities persist without ownership, and you cannot demonstrate control effectiveness during regulatory exams, customer due diligence, or incident postmortems (NIST CSWP 29). The fastest way to reduce that exposure is to make secure SDLC controls measurable and reviewable, with a clear evidence trail.
Practical 30/60/90-day execution plan
This plan is structured for execution speed. Treat the dates as targets you set internally based on your release cadence and tooling maturity.
First 30 days (establish the control)
- Confirm scope: which teams/apps/pipelines are in-scope first.
- Assign control owner and engineering owners; document RACI.
- Publish (or refresh) the secure SDLC standard and exception process.
- Build the SDLC phase-to-control mapping table and identify evidence sources.
- Choose the initial monitoring metrics and define data sources.
Days 31–60 (integrate into workflows)
- Update ticketing and PR templates to embed security requirements.
- Turn on required CI/CD checks for the initial scoped repos.
- Stand up an exception register with approval workflow and expiry fields.
- Produce the first metrics report/dashboards and validate data quality.
Days 61–90 (prove monitoring and governance)
- Hold recurring management review; document decisions and actions.
- Expand onboarding to additional repos/teams based on risk.
- Run an internal “audit pack” exercise: export evidence for a sample release.
- Remediate gaps found: bypass paths, missing logs, missing review records.
- Configure recurring evidence collection in your GRC workflow (Daydream or equivalent) to avoid manual scrambles (NIST CSF 1.1 to 2.0 Core Transition Changes).
Frequently Asked Questions
Do we need to build all security checks into CI/CD to meet PR.PS-06?
You need secure development practices integrated throughout the SDLC and monitored for performance (NIST CSWP 29). CI/CD enforcement is the most defensible pattern for many practices, but you can also use controlled manual gates if you can prove execution and monitoring with consistent evidence.
How do we handle third-party developers or outsourced engineering?
Treat them as part of your SDLC scope: require adherence to your secure SDLC standard, enforce controls in your repos/pipelines, and include their work in your monitoring metrics and exception process (NIST CSWP 29).
What is “performance monitored” in practical audit terms?
Auditors look for defined metrics, a repeatable reporting cadence, and records of review with follow-up actions (NIST CSWP 29). A dashboard alone is weaker than a dashboard plus meeting minutes and tracked remediation work.
We have policies, but engineers don’t follow them consistently. What’s the quickest fix?
Add enforceable workflow hooks: PR templates, required reviewers for sensitive changes, and pipeline checks that block merges or releases when requirements are not met. Pair that with an exception path so teams don’t create shadow processes.
Can we sample evidence instead of collecting artifacts for every release?
Yes, sampling is common in audits, but your sampling method must be consistent and your monitoring must still cover the broader population. Keep a defined sampling approach and retain complete evidence for sampled releases.
How should a GRC team work with engineering without slowing delivery?
Focus on standardization and automation: define the minimum secure SDLC baseline, instrument the pipelines, and collect evidence from systems engineers already use. Use a GRC workflow (including Daydream) to pull recurring evidence and track exceptions without interrupting sprint work (NIST CSF 1.1 to 2.0 Core Transition Changes).
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream