PM-6: Measures of Performance
PM-6 requires you to define a small, decision-ready set of security and privacy performance measures, track them on a recurring cadence, and report results to the right governance bodies so they can act. Operationalize it by establishing metric owners, data sources, calculation rules, thresholds, and a reporting workflow with retained evidence. 1
Key takeaways:
- Define measures that drive decisions, not vanity reporting, and document how each measure is calculated and sourced.
- Monitor on a set cadence and keep evidence that the metrics are produced, reviewed, and acted on.
- Report to governance with clear thresholds, exceptions, and remediation tracking tied to risks.
The PM-6 (Measures of Performance) requirement is a management control: it asks whether your security and privacy program can prove progress, detect drift, and trigger action before incidents or audit findings force the issue. The requirement text is short, but execution is not. Auditors will look for two things: (1) a defined set of measures (what you track and why), and (2) operational proof that you monitor and report those measures consistently, with accountable owners and outcomes. 1
PM-6 also prevents a common GRC failure mode: a stack of policies and control statements without a feedback loop. If you can’t show performance over time, you will struggle to defend control effectiveness, prioritize remediation, or justify risk acceptance decisions. Practically, PM-6 becomes a lightweight metrics operating model: a register of measures, a repeatable collection process, a governance forum that reviews results, and a way to link metric exceptions to corrective actions. 2
This page gives requirement-level guidance you can implement quickly: who should own PM-6, what to build, what evidence to retain, and what exam teams typically probe.
Regulatory text
Requirement (verbatim): “Develop, monitor, and report on the results of information security and privacy measures of performance.” 2
What the operator must do:
You must (1) define performance measures for both security and privacy, (2) run them repeatedly to produce results, and (3) deliver those results to governance and stakeholders who can make resourcing and risk decisions. “Develop” is documentation plus design. “Monitor” is recurring execution with traceable inputs. “Report” is distribution plus review and follow-up, not just dashboards. 1
Plain-English interpretation
PM-6 expects a closed loop:
- pick the right metrics,
- calculate them consistently,
- review them with decision-makers, and
- act when results cross thresholds or show negative trends.
If a metric does not drive a decision (prioritize a fix, approve an exception, change a control, fund a project), it is usually noise. Your goal is an auditable performance narrative: “Here’s what we measure, here’s what happened, here’s what we did about it.” 2
Who it applies to (entity and operational context)
PM-6 is relevant for:
- Federal information systems implementing NIST SP 800-53 controls. 1
- Contractor systems handling federal data where NIST SP 800-53 is flowed down contractually or used as the control baseline. 1
Operationally, this applies wherever you run an information security and privacy program with governance expectations: centralized GRC, distributed control owners, outsourced/third-party control operation, or shared services. PM-6 also intersects with third party oversight because many “performance” outcomes depend on third parties (cloud logging coverage, patch SLAs, SOC report exceptions, privacy DSAR tooling performance), even if PM-6 does not explicitly mention third parties. 1
What you actually need to do (step-by-step)
Step 1: Name an accountable owner and governance path
- Assign a PM-6 control owner (often the CISO’s GRC lead or Security Assurance) and a privacy counterpart for privacy measures.
- Define the reporting destinations: security steering committee, risk committee, privacy governance, system owner forums, or ATO governance for federal contexts.
Deliverable: a one-page PM-6 operating procedure that lists roles, inputs, cadence, and distribution.
Step 2: Build a “Measures Register” (your source of truth)
Create a register (spreadsheet or GRC tool) with one row per measure. Minimum fields:
- Measure name and purpose (decision it supports)
- Security or privacy classification
- Metric type (coverage, timeliness, effectiveness, outcome, compliance)
- Data source(s) and system of record
- Calculation definition (formula, numerator/denominator if used)
- Scope and inclusions/exclusions (which assets, BUs, environments)
- Owner and backup
- Collection cadence and reporting cadence
- Thresholds/targets and exception rules
- Consumers (who receives it)
- Related risks/controls and corrective action link
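The register fields above can be captured as structured data so the schema stays consistent across measures. A minimal sketch in Python; every field value shown is illustrative, not prescribed by PM-6:

```python
# Minimal Measures Register row as structured data. Field names follow the
# list above; all values are illustrative examples, not requirements.
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    purpose: str                  # the decision this measure supports
    domain: str                   # "security" or "privacy"
    metric_type: str              # coverage, timeliness, effectiveness, ...
    data_sources: list
    calculation: str              # locked formula definition
    scope: str                    # inclusions/exclusions
    owner: str
    backup_owner: str
    collection_cadence: str
    reporting_cadence: str
    threshold: str
    consumers: list
    related_risks: list = field(default_factory=list)

patch_sla = Measure(
    name="critical_patch_timeliness",
    purpose="Prioritize remediation resourcing",
    domain="security",
    metric_type="timeliness",
    data_sources=["ticketing system", "vulnerability scanner"],
    calculation="% of critical vulns remediated within 15 days",
    scope="Production assets in CMDB; excludes lab environments",
    owner="GRC lead",
    backup_owner="Security assurance analyst",
    collection_cadence="monthly",
    reporting_cadence="quarterly",
    threshold=">= 95% (amber at 90%)",
    consumers=["Security steering committee"],
)
print(patch_sla.domain, patch_sla.metric_type)
```

Whether this lives in a spreadsheet or a GRC tool matters less than keeping one row per measure with every field populated.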
This register is the artifact auditors will ask for because it proves “develop” happened and it anchors consistent monitoring. 2
Step 3: Select measures that map to real control outcomes
Avoid a “measure everything” program. Pick measures that align to your highest-risk controls and operational chokepoints. Examples of decision-grade measures:
- Patch and vulnerability remediation timeliness by severity tier (drives remediation prioritization)
- MFA coverage for privileged access (drives access hardening)
- Logging/telemetry coverage for crown-jewel systems (drives detection gaps)
- Backup success and restore testing results (drives resilience work)
- Privacy request (DSAR) cycle time and backlog (drives staffing and process fixes)
- Data classification coverage for sensitive datasets (drives privacy and security controls)
Document why each measure exists and what action is expected when thresholds are missed. 1
Step 4: Standardize data collection and calculation rules
For each measure:
- Identify the system-generated evidence (ticketing system, SIEM, CMDB, EDR console, DLP, IAM, privacy tooling).
- Define data quality checks (handling missing assets, duplicate tickets, stale CMDB entries).
- Create a repeatable collection script or workflow (export, API pull, scheduled report).
- Lock the definition. If you change it, version it and record the reason.
This is where most PM-6 programs fail: metrics that change meaning quarter to quarter cannot support trend reporting or audit defensibility. 2
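A locked calculation rule can be sketched as a single repeatable function. The ticket fields and the 15-day SLA below are assumptions for illustration; a real pull would come from your ticketing system's export or API:

```python
# Sketch of a locked calculation: % of critical findings remediated within SLA.
# Ticket fields (severity, opened, closed) and SLA_DAYS are illustrative.
from datetime import date

SLA_DAYS = 15  # assumed SLA for critical findings

def remediation_timeliness(tickets: list[dict]) -> float:
    """Numerator: closed critical tickets remediated within SLA_DAYS.
    Denominator: all closed critical tickets.
    Data quality rule: drop rows missing dates rather than counting them."""
    critical = [
        t for t in tickets
        if t.get("severity") == "critical" and t.get("opened") and t.get("closed")
    ]
    if not critical:
        return 0.0
    within = sum(1 for t in critical if (t["closed"] - t["opened"]).days <= SLA_DAYS)
    return round(100.0 * within / len(critical), 1)

tickets = [
    {"id": 1, "severity": "critical", "opened": date(2024, 3, 1), "closed": date(2024, 3, 10)},
    {"id": 2, "severity": "critical", "opened": date(2024, 3, 1), "closed": date(2024, 3, 25)},
    {"id": 3, "severity": "high", "opened": date(2024, 3, 2), "closed": date(2024, 3, 4)},
    {"id": 4, "severity": "critical", "opened": None, "closed": date(2024, 3, 9)},  # dropped
]
print(remediation_timeliness(tickets))  # 50.0
```

Because the formula, scope filter, and data quality rule all live in one place, a definition change is a visible code (or SOP) change you can version.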
Step 5: Establish a monitoring cadence and exception workflow
Monitoring means you can show repeated production, not a one-off report made for an audit. Operationalize with:
- A calendar invite for metric production and review
- A standard “exceptions log” (missed threshold, root cause, corrective action owner, due date, status)
- A tie-in to your risk register when exceptions represent accepted risk or repeated nonperformance
If you track third party–operated controls (managed SOC, cloud provider configurations, outsourced app support), include how you obtain the inputs (attestations, reports, API exports) and what you do when a third party misses expectations. 1
Step 6: Report results in a way governance can act on
Reporting should be short and decision-oriented:
- RAG status by measure and trend
- Top exceptions and systemic root causes
- Material changes in scope or definitions
- Actions required (funding, prioritization, risk acceptance)
A dashboard without meeting minutes and action tracking rarely satisfies “report” because it doesn’t prove review or response. 2
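The RAG-and-trend view can be sketched as a small rollup; the target and amber band below are assumptions for illustration:

```python
# Sketch of a RAG rollup with trend, using an assumed amber band below target.
def rag_status(value: float, target: float, amber_band: float) -> str:
    """Green at/above target, amber within the band below it, red otherwise."""
    if value >= target:
        return "green"
    if value >= target - amber_band:
        return "amber"
    return "red"

def trend(history: list[float]) -> str:
    """Compare the latest result to the prior period's result."""
    if len(history) < 2 or history[-1] == history[-2]:
        return "flat"
    return "improving" if history[-1] > history[-2] else "worsening"

history = [88.0, 91.0, 96.0]  # illustrative quarterly results
print(rag_status(history[-1], target=99.0, amber_band=5.0), trend(history))
# amber improving
```

A row per measure with these two outputs, plus the exception log, is usually enough for a one-page governance pack.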
Step 7: Retain evidence and make it assessment-ready
Package PM-6 evidence by reporting period:
- Measures Register (current + version history)
- Raw data extracts or system reports
- Final metric outputs (PDF/slide deck/dashboard export)
- Review evidence (agenda, minutes, attendees, approvals)
- Exception log and remediation tickets
- Any risk acceptances tied to metric exceptions
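Packaging can be as simple as a dated folder tree with a manifest per reporting period; the folder and file names below are illustrative, not required by PM-6:

```python
# Sketch: assemble a per-period evidence package as a dated folder tree plus
# a manifest. Folder and file names are illustrative conventions.
import json
from pathlib import Path
from tempfile import mkdtemp

EVIDENCE_ITEMS = [
    "measures_register",   # current register + version history
    "raw_extracts",        # system reports / API exports
    "metric_outputs",      # final decks or dashboard exports
    "review_records",      # agenda, minutes, attendees, approvals
    "exceptions_log",      # exceptions + remediation tickets
    "risk_acceptances",    # acceptances tied to metric exceptions
]

def build_evidence_package(root: Path, period: str) -> Path:
    """Create the folder tree for one reporting period and write a manifest."""
    pkg = root / f"pm6-evidence-{period}"
    for item in EVIDENCE_ITEMS:
        (pkg / item).mkdir(parents=True, exist_ok=True)
    manifest = {"period": period, "items": EVIDENCE_ITEMS}
    (pkg / "manifest.json").write_text(json.dumps(manifest, indent=2))
    return pkg

pkg = build_evidence_package(Path(mkdtemp()), "2024-Q3")
print(sorted(p.name for p in pkg.iterdir()))
```

The point is repeatability: the same structure every period, so an assessor request becomes a folder handoff rather than a scavenger hunt.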
Daydream (as a GRC workflow layer) is a natural fit when you need to map PM-6 to an owner, a written procedure, and recurring evidence artifacts with due dates and reminders, so metric runs do not depend on one person’s calendar discipline. 2
Required evidence and artifacts to retain (audit-ready checklist)
| Artifact | What it proves | Common format |
|---|---|---|
| Measures Register | “Develop” happened; definitions are controlled | Spreadsheet or GRC record |
| Metric definitions | Calculations are consistent and repeatable | SOP or wiki page |
| Data source mapping | Inputs are traceable to systems of record | Diagram or table |
| Periodic metric outputs | “Monitor” occurred | Dashboard export, PDF, slides |
| Governance review records | “Report” and review occurred | Minutes, tickets, approvals |
| Exceptions log + remediation | Action taken on results | Jira/ServiceNow + log |
| Version/change log | Metric changes are controlled | Changelog entries |
Common exam/audit questions and hangups
- “Show me your measures of performance.” They expect a defined set, not ad hoc dashboards.
- “How do you ensure the metric is accurate?” Be ready with data lineage and quality checks.
- “Who reviews these and what actions resulted?” Have minutes, decisions, and tickets.
- “Do you cover privacy as well as security?” PM-6 explicitly includes both. 2
- “How do you handle metric definition changes?” Provide versioning and comparability notes.
Frequent implementation mistakes and how to avoid them
- Mistake: Vanity metrics. Fix: require each measure to list the decision it triggers and the owner who must act.
- Mistake: No stable definitions. Fix: lock formulas, scope, and data sources; version changes.
- Mistake: Reporting without review. Fix: pair each report with a governance forum and captured outcomes.
- Mistake: No evidence retention. Fix: predefine an evidence package per reporting period and store it centrally.
- Mistake: Security-only metrics. Fix: include privacy operational measures (requests, data inventory coverage, DPIA throughput) that fit your program scope. 2
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement, so you should treat PM-6 primarily as an assessment and governance expectation rather than an enforcement-citation driver. The practical risk is indirect but real: weak measures of performance make it harder to demonstrate control effectiveness, prioritize remediation, and defend risk decisions during audits or ATO activities. 1
Practical 30/60/90-day execution plan
First 30 days: Stand up the minimum viable PM-6 program
- Assign owners (security + privacy) and define reporting recipients.
- Draft the Measures Register template and populate an initial set of measures.
- Document definitions for each initial measure (scope, data sources, formula).
- Run one pilot reporting cycle end-to-end and capture evidence. 2
By 60 days: Stabilize and operationalize
- Automate or standardize data pulls for the highest-effort measures.
- Add data quality checks and handle scope gaps (missing assets, CMDB drift).
- Formalize the exceptions log and link exceptions to remediation tickets.
- Hold a recurring governance review and document decisions. 1
By 90 days: Make it durable and audit-friendly
- Expand coverage to remaining high-risk areas and confirm privacy measures are represented.
- Implement version control for metric definitions and a change approval path.
- Create a repeatable evidence package per reporting period for audit readiness.
- Map each measure to related controls/risks so reporting supports risk governance discussions. 2
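The version-control item in the 90-day plan can be sketched as an append-only version list per measure: definitions are never edited in place, so prior outputs remain interpretable against the definition that produced them. Field names and values are illustrative:

```python
# Sketch of metric-definition change control: never edit a definition in
# place; append a new version with a recorded reason. Values are illustrative.
definitions: dict[str, list[dict]] = {}

def publish_definition(measure: str, formula: str, scope: str, reason: str) -> dict:
    """Append a new immutable version of a measure's definition."""
    versions = definitions.setdefault(measure, [])
    entry = {
        "version": len(versions) + 1,
        "formula": formula,
        "scope": scope,
        "change_reason": reason,
    }
    versions.append(entry)
    return entry

publish_definition("critical_patch_timeliness",
                   formula="% critical vulns closed within 15 days",
                   scope="Production assets", reason="Initial definition")
v2 = publish_definition("critical_patch_timeliness",
                        formula="% critical vulns closed within 10 days",
                        scope="Production assets",
                        reason="Tightened SLA per risk committee decision")
print(v2["version"], len(definitions["critical_patch_timeliness"]))  # 2 2
```

Storing the version number alongside each period's output is what keeps trend reporting defensible across definition changes.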
Frequently Asked Questions
What’s the minimum set of PM-6 measures I can start with?
Start with measures tied to your top control failure modes and governance decisions, plus at least one privacy operational measure. Document each measure’s purpose, data source, and owner so you can show “develop, monitor, report” with evidence. 2
Do I need a dedicated metrics tool to satisfy PM-6?
No. Auditors care more about consistency, review, and evidence than tooling. A spreadsheet register plus exported reports and meeting minutes can work if the process is repeatable and retained. 1
How do I prove “monitor” vs. “report”?
“Monitor” is the repeated production of results from defined measures, with traceable inputs. “Report” is distribution and governance review, supported by artifacts like decks, dashboards, and minutes. 2
How should PM-6 handle third parties that operate key controls?
Treat third party–dependent metrics like any other measure: define the input source (attestation, report, portal export), the cadence, and what happens when expectations are missed. Record follow-up actions and remediation ownership. 1
What evidence is most commonly missing during audits?
Teams often have dashboards but cannot show stable metric definitions, data lineage, or proof of governance review with decisions. Build an evidence packet per reporting period so you can answer these requests quickly. 2
How do I keep metric definitions from changing every quarter?
Put definitions under change control: version the measure, record what changed, and explain how trend reporting remains meaningful across versions. Store prior definitions alongside prior outputs. 1
Footnotes
1. NIST SP 800-53 Rev. 5
2. NIST SP 800-53 Rev. 5 OSCAL JSON
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream