SI-19(6): Differential Privacy

SI-19(6) requires you to prevent disclosure of personally identifiable information (PII) by adding non-deterministic noise to the results of mathematical operations before you report or release those results 1. Operationalize it by scoping which analytics outputs could expose PII, implementing a differential privacy mechanism at the reporting layer, and retaining evidence that noise is applied consistently and governed.

Key takeaways:

  • Apply differential privacy to reported analytics results where PII could be inferred, not only to raw datasets 1.
  • Treat this as a governed release control: approved queries, privacy parameters, monitoring, and documented exceptions.
  • Keep assessor-ready proof: design decisions, configurations, query allowlists, test results, and change history tied to the systems handling federal data.

The SI-19(6) Differential Privacy requirement is a specific enhancement in NIST SP 800-53 Rev. 5 that targets a common failure mode in privacy programs: releasing “aggregate” statistics that still allow inference about an individual. The control’s intent is narrow and practical. If your system produces reported results from mathematical operations (dashboards, extracts, model metrics, research outputs, de-identified datasets, or ad hoc query results), you need a mechanism that adds non-deterministic noise before results are disclosed externally or broadly internally, so the reported output does not reveal PII 1.

For a CCO, GRC lead, or security compliance owner, the operational challenge is not the math. It’s governance and repeatability: defining where “reporting” happens, ensuring noise is applied consistently, preventing bypass paths (like direct database access or “power user” exports), and proving to assessors that the mechanism is engineered, configured, tested, and monitored. This page gives requirement-level guidance you can hand to engineering and data teams, plus the evidence package you should retain for audits and ATO work aligned to NIST SP 800-53 Rev. 5 2.

Regulatory text

Text (excerpt): “Prevent disclosure of personally identifiable information by adding non-deterministic noise to the results of mathematical operations before the results are reported.” 1

Operator interpretation (what you must do)

You must implement a technical control that modifies reported outputs of computations so that PII is not disclosed through the output. The mechanism must add non-deterministic noise (noise that is not predictable or repeatable in a way that would allow an attacker to reverse it) before results leave the controlled environment or are presented to users who are not authorized to see PII 1.

What this means in practice:

  • This is not “remove names and emails.” SI-19(6) is about inference risk from results (counts, averages, histograms, model accuracy by subgroup, query responses).
  • “Before results are reported” makes the reporting layer a primary control point: BI tools, APIs serving analytics, scheduled reports, research outputs, and data products.
  • Non-deterministic noise must be systematic and governed. A one-off analyst script does not satisfy an assessor’s expectation for a control.

Plain-English requirement statement

If your system handles PII and produces computed results that could be used to learn something about a person, you must add randomized noise to those results before sharing them, so the output does not reveal PII 1.
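As a concrete illustration, here is a minimal Python sketch of the Laplace mechanism, the most common way to add non-deterministic noise to a counting query. The epsilon default and function names are illustrative assumptions, not prescribed by SI-19(6):

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale); each call is randomized,
    # so repeated identical queries return different results.
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    # A count changes by at most 1 when one person is added or removed
    # (sensitivity 1), so the Laplace noise scale is 1 / epsilon.
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; picking the value is a governance decision, not an engineering default.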

Who it applies to (entity + operational context)

In-scope entities

  • Federal information systems and contractor systems handling federal data mapped to NIST SP 800-53 control baselines 2.
  • Programs pursuing or maintaining an Authorization to Operate (ATO) or assessed against NIST SP 800-53 Rev. 5.

In-scope operational contexts (where assessors will look)

SI-19(6) becomes relevant when your environment has any of the following:

  • Analytics/reporting on PII-bearing datasets (dashboards, metrics, KPI reporting).
  • Data sharing or research outputs claimed to be “de-identified.”
  • Machine learning workflows that publish metrics, cohort results, or model outputs derived from PII.
  • Self-serve querying (SQL workbenches, notebook environments) where users can run queries that return aggregates.

Typical out-of-scope (or lower priority) contexts

  • Systems with no PII at all.
  • Purely operational transactional outputs that are not “reported results of mathematical operations” (still may be governed by other privacy controls, but SI-19(6) is specifically about reporting computed results).

What you actually need to do (step-by-step)

Step 1: Define the “reporting boundary”

Write down where computed results are produced and released:

  • BI dashboards and exports
  • Analytics APIs
  • Scheduled reports (CSV/email)
  • Research extracts
  • Model evaluation reports

Deliverable: Reporting Boundary Diagram showing data sources → computation layer → reporting endpoints → recipients.

Step 2: Identify outputs that create inference risk

Create an inventory of “PII-adjacent outputs,” such as:

  • Small-cell counts (e.g., counts by ZIP+age+condition)
  • Time-series that can isolate a single person’s event
  • “Difference of aggregates” patterns (querying two overlapping populations)
  • Metrics by rare subgroup

Deliverable: Analytics Output Risk Register with: output name, endpoint/tool, data sources, intended audience, release frequency, and qualitative inference risk.
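To make the “difference of aggregates” risk concrete, here is a toy Python example (the names and salaries are invented) showing how two innocuous-looking totals can be differenced to recover one person’s exact value, and how randomized noise breaks the exact recovery:

```python
import math
import random

salaries = {"alice": 98000, "bob": 91000, "carol": 105000}  # invented data

# Two aggregates whose populations differ by exactly one person.
total_all = sum(salaries.values())
total_minus_carol = sum(v for k, v in salaries.items() if k != "carol")

# Differencing the exact aggregates recovers carol's salary precisely.
exact_leak = total_all - total_minus_carol

def noised(value: float, scale: float = 5000.0) -> float:
    # Laplace noise via inverse-CDF sampling; the scale is illustrative.
    u = random.random() - 0.5
    return value - scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

# With noise on each reported aggregate, the difference is no longer exact.
noisy_leak = noised(total_all) - noised(total_minus_carol)
```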

Step 3: Choose your implementation pattern for differential privacy

You need a consistent mechanism to add non-deterministic noise before release 1. Common patterns:

  • DP query gateway: All approved analytics queries go through a service that adds noise and enforces query rules.
  • DP reporting library: A shared library used by reporting jobs that applies noise by default.
  • DP-enabled platform controls: If your analytics platform supports DP features, configure them centrally and restrict direct access paths.

Decision criteria (document your rationale):

  • Central enforcement (harder to bypass)
  • Ease of proving control operation (logs, configs)
  • Compatibility with tools your teams already use

Deliverable: Design Decision Record for the DP mechanism and enforcement point.
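A minimal sketch of the DP query gateway pattern, assuming a Python service layer; the class and method names are illustrative, and a production gateway would also track a cumulative privacy budget across queries:

```python
import math
import random

class DPGateway:
    """Enforcement point: only allowlisted aggregates, noise applied on the way out."""

    ALLOWED = {"count", "sum"}

    def __init__(self, epsilon: float, sensitivity: float = 1.0):
        self.scale = sensitivity / epsilon

    def _laplace(self) -> float:
        u = random.random() - 0.5
        return -self.scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def query(self, kind: str, values: list) -> float:
        if kind not in self.ALLOWED:
            raise PermissionError(f"query type {kind!r} is not on the allowlist")
        exact = float(len(values)) if kind == "count" else float(sum(values))
        return exact + self._laplace()  # noise is added before the result is released
```

Centralizing noise in one `query` path is what makes the control auditable: there is a single code location to point to when an assessor asks where noise is applied.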

Step 4: Establish governance: what is allowed to be reported

Differential privacy is easy to weaken through ad hoc exceptions. Put guardrails in writing:

  • Approved query types (counts, sums, means) and banned constructs (free-form joins on identifiers, returning raw rows).
  • Minimum cohort thresholds for reporting (set internally as policy, even though SI-19(6) does not prescribe a number).
  • Parameter governance (who can change privacy parameters and how changes are reviewed).
  • Exception process for cases requiring non-noised outputs (who approves, how access is restricted, how it is logged).

Deliverable: Differential Privacy Reporting Standard owned by Security/GRC with data engineering sign-off.
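One way to keep the standard enforceable is to express it as policy-as-code that the reporting pipeline checks before any release. The field names and threshold below are illustrative assumptions, not values prescribed by SI-19(6):

```python
# Illustrative reporting standard expressed as data; changes to these values
# should go through the documented parameter-governance process.
DP_REPORTING_STANDARD = {
    "approved_query_types": {"count", "sum", "mean"},
    "banned_constructs": {"raw_rows", "identifier_join"},
    "min_cohort_size": 11,   # internal policy; SI-19(6) does not prescribe a number
    "epsilon": 1.0,          # changed only via an approved change ticket
}

def release_allowed(query_type: str, cohort_size: int,
                    standard: dict = DP_REPORTING_STANDARD) -> bool:
    """Gate a release request against the written standard."""
    return (query_type in standard["approved_query_types"]
            and query_type not in standard["banned_constructs"]
            and cohort_size >= standard["min_cohort_size"])
```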

Step 5: Implement access controls to prevent bypass

Assessors will ask: “Can users get the un-noised results another way?” Close the obvious gaps:

  • Restrict direct database access for reporting audiences.
  • Force dashboards/APIs to use the DP mechanism, not direct queries.
  • Separate roles: those who can view raw PII vs those who view DP outputs.
  • Monitor and alert on exports that bypass approved pipelines.

Deliverable: Access Control Matrix for data sources, analytics tools, and reporting outputs.

Step 6: Validate with tests that match real misuse

Do not limit testing to “noise exists.” Test for:

  • Repeat query behavior and variability (non-deterministic behavior).
  • Attempts to narrow cohorts to isolate individuals.
  • Regression tests after code changes to ensure noise application remains in place.

Deliverable: DP Test Plan + Test Results mapped to reporting endpoints.

Step 7: Operationalize change management and recurring evidence

Treat the DP mechanism as a security/privacy control:

  • Version control for configurations and code.
  • Change tickets for parameter updates.
  • Recurring reviews of the output inventory and risk register.
  • Logging that proves noise was applied to each release event.

Deliverable: Control Operating Log (or dashboard) showing operation over time.
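Per-release logging can be as simple as emitting one structured audit record each time noise is applied; a Python sketch with illustrative field names:

```python
import json
import time
import uuid

def log_dp_release(endpoint: str, mechanism: str, epsilon: float,
                   job_id: str = "") -> str:
    """Build one audit record per release event, capturing which mechanism
    and parameters were in force when the noised result was published."""
    record = {
        "event": "dp_noise_applied",
        "endpoint": endpoint,
        "mechanism": mechanism,
        "epsilon": epsilon,
        "job_id": job_id or str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    return json.dumps(record, sort_keys=True)
```

Shipping these records to your normal log pipeline gives you the timestamped, per-job proof of operation that assessors ask for.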

Step 8: Map ownership and build an audit-ready control record

Your audit risk is often “we did it, but cannot prove it.” Assign:

  • Control owner (accountable)
  • Engineering owner (implements)
  • Data owner (approves outputs)
  • Assessor-facing artifacts list and location

Practical note: Daydream is useful here as a system of record to map SI-19(6) to a named owner, a repeatable procedure, and a standing evidence list so you can produce artifacts quickly during an assessment.

Required evidence and artifacts to retain

Maintain these as a minimum “SI-19(6) evidence pack”:

  • Control narrative for SI-19(6): scope, mechanism, enforcement points, roles 1.
  • Reporting boundary diagram and inventory of reporting endpoints.
  • Risk register for analytics outputs and inference risk rationale.
  • Design decision record describing the DP mechanism and why it meets “non-deterministic noise before reporting” 1.
  • Configuration evidence: DP parameters/config files, policy-as-code rules, platform settings screenshots/exports.
  • Sample logs showing noise application for representative outputs (include timestamps, job IDs, endpoints).
  • Test plan and results for misuse cases.
  • Change management records for DP parameter changes and code changes.
  • Access control evidence: role definitions, group memberships, and proof that audiences cannot access raw outputs.

Common exam/audit questions and hangups

Assessors commonly get stuck on “show me” questions:

  • Where exactly is noise added? Point to the code path/service and show configuration 1.
  • How do you prevent bypass? Demonstrate access controls and blocked direct paths.
  • Is the noise non-deterministic? Show test evidence that repeated runs do not return identical values under the same conditions 1.
  • What’s in scope? Be ready with the reporting boundary and inventory.
  • Who approves releases and parameter changes? Provide your governance standard and change tickets.

Frequent implementation mistakes (and how to avoid them)

  1. Treating masking as differential privacy. Masking identifiers does not meet the stated requirement to add non-deterministic noise to mathematical results 1.
    Fix: Put DP at the reporting layer for aggregates and metrics.

  2. DP exists in one pipeline, but exports bypass it.
    Fix: Lock down raw access, funnel reporting through controlled endpoints, and monitor exports.

  3. No evidence trail. Teams can’t reproduce how a report was generated or what parameters were applied.
    Fix: Log DP application events and store versioned configuration.

  4. Unbounded ad hoc queries. Analysts can compose queries that defeat protections through differencing.
    Fix: Use an allowlist or query gateway with guardrails; require review for new query patterns.

  5. Parameter sprawl. Different teams tune noise inconsistently with no approvals.
    Fix: Centralize parameter governance; tie changes to tickets and approvals.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for SI-19(6). Your practical risk is assessment failure or control deficiency write-ups when you cannot demonstrate that PII disclosure via reported results is prevented through non-deterministic noise at release points 1. Secondary risk: privacy incidents where “anonymous” reports are re-identified through inference, which can trigger contractual, regulatory, and reputational consequences depending on your environment.

Practical 30/60/90-day execution plan

First 30 days (scope + design)

  • Assign control owner and engineering lead; document RACI.
  • Produce the reporting boundary diagram and inventory of reporting endpoints.
  • Build the analytics output risk register and tag “high inference risk” outputs.
  • Select the DP implementation pattern and write the design decision record.

Days 31–60 (implement + govern)

  • Implement the DP mechanism at the agreed enforcement point (gateway/library/platform configuration).
  • Draft and approve the Differential Privacy Reporting Standard (allowed outputs, exception process, parameter governance).
  • Close bypass paths: tighten roles, restrict direct access, route dashboards/APIs through DP.

Days 61–90 (validate + evidence)

  • Execute the DP test plan; capture results and remediation tickets.
  • Stand up logging and recurring evidence capture (config snapshots, change history, sample output traces).
  • Run an internal “mock assessment” walkthrough: pick two reports and prove end-to-end that noise is applied before reporting 1.
  • Put the evidence pack under document control and schedule recurring reviews.

Frequently Asked Questions

Does SI-19(6) require differential privacy for every dataset with PII?

The text targets “results of mathematical operations” that are reported, not every internal computation 1. Scope it to reporting endpoints and outputs where inference could disclose PII.

If we already de-identify data, do we still need SI-19(6)?

Possibly. De-identification controls the dataset, but SI-19(6) focuses on reported results and inference risk from those results 1. If you publish aggregates or metrics, DP may still be needed.

What counts as “non-deterministic noise” for audit purposes?

You need a mechanism where the added noise is randomized and not predictable, and you can show it is applied before results are reported 1. Keep test results demonstrating variability and configuration evidence.

Can we meet the requirement by rounding or suppressing small counts?

Rounding and suppression are privacy techniques, but SI-19(6) explicitly calls for adding non-deterministic noise to results 1. If you use rounding/suppression, treat them as complementary, not the primary control for this requirement.

How do we handle executives who demand exact numbers in dashboards?

Create a tiered access model: DP-protected metrics for broad distribution and tightly controlled access for authorized users who have a justified need for exact figures. Document the exception path and access controls as part of the governance package.

What evidence is usually hardest to produce during an assessment?

Teams often struggle to show consistent operation over time: logs proving noise was applied to actual report runs, and change records for DP parameter updates. Build automated evidence capture early so you are not reconstructing history during the audit.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream