Data Protection Controls
To meet the data protection controls requirement, you must implement and operate technical and procedural controls that protect scoped data both at rest and in transit, and you must be able to prove those controls are consistently applied. In practice, that means encryption where feasible, strong access restrictions, and operational guardrails (like DLP and exception handling) aligned to your C2M2 scope. 1
Key takeaways:
- Scope first: you cannot “protect data” without defining the systems, data types, and trust boundaries in scope for your C2M2 assessment. 1
- Audits reward evidence: keep approvals, configuration proof, reviews, and exception records for encryption and related controls. 1
- “At rest and in transit” must be end-to-end: include backups, logs, vendor pathways, admin access, and machine-to-machine traffic, not just user web sessions. 1
“Data protection controls” sounds broad until you translate it into operational decisions: which data stores must be encrypted, where keys live, which network paths require encryption, and how you prevent sensitive data from leaving approved channels. C2M2 frames this as an architecture requirement: you implement controls that protect data at rest and in transit within the scope you defined for the assessment. 1
For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat this requirement as a control family with three deliverables: (1) clear standards (what is required and where), (2) validated technical implementation (proof that it’s actually enabled), and (3) durable governance (approvals, periodic review, and documented exceptions). If any of those are missing, audits stall because you can’t show that protections are systematic rather than ad hoc. 1
This page gives you requirement-level guidance you can hand to security engineering and infrastructure owners, while keeping accountability and evidence tight enough for internal control testing, customer diligence, and regulator-facing narratives.
Regulatory text
Requirement (C2M2 v2.1 ARCHITECTURE-1.C, MIL2): “Controls are implemented to protect data at rest and in transit.” 1
Operator interpretation (what you must do):
- Identify where your scoped data resides (at rest) and how it moves (in transit).
- Implement controls appropriate to the risk and architecture, commonly including encryption, access restrictions, and data loss prevention measures.
- Operate those controls with defined approval criteria, provisioning steps, review cadence, and revocation triggers.
- Retain evidence that decisions were authorized and periodically revalidated. 1
Plain-English interpretation
You need to prevent unauthorized disclosure or tampering of sensitive data whether it is stored (databases, file shares, endpoints, backups, logs) or transmitted (user traffic, service-to-service calls, remote admin, replication, integrations with third parties). “Implemented” means enabled by default where required, not “available if someone turns it on.”
For most organizations, auditors will read this requirement as two questions:
- Coverage: Did you protect all relevant repositories and transmission paths in the assessment scope?
- Control operation: Can you show the protections are configured, monitored (where applicable), and governed through approvals, reviews, and exceptions? 1
Who it applies to
Entity types: Energy sector organizations and other critical infrastructure operators using C2M2 as the assessment framework. 1
Operational context: Applies when you have adopted C2M2 for a defined scope (business unit, function, or OT environment) and are assessing maturity for that scope. 1
Typical in-scope environments (practical reading):
- OT networks and supporting IT (historians, engineering workstations, jump hosts)
- Corporate systems that store operationally sensitive data
- Data exchanges with third parties (managed service providers, SaaS, remote support)
- Backup and disaster recovery environments
What you actually need to do (step-by-step)
Step 1: Define scope and data protection objectives
- Confirm assessment scope (sites, networks, applications, OT zones, cloud accounts).
- Define data classes relevant to the scope (e.g., operational telemetry, engineering configs, credentials, security logs, customer/employee data if present in scope).
- Map “data at rest” repositories: databases, object storage, file servers, endpoints, removable media, backups, log platforms.
- Map “data in transit” paths: user-to-app, app-to-app, admin protocols, replication, API integrations, third-party connectivity.
Output: a scoped data flow and storage inventory that engineering can implement against, and GRC can audit against.
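The inventory works best as structured data rather than a spreadsheet tab, so engineering and GRC work from one source of truth. A minimal sketch in Python (the record shapes and field names are illustrative, not a C2M2 artifact format):

```python
from dataclasses import dataclass, field

@dataclass
class DataStore:
    """A "data at rest" repository in the assessment scope."""
    name: str
    platform: str            # e.g. "postgres", "object-storage", "file-share"
    data_classes: list[str]  # e.g. ["operational-telemetry", "credentials"]
    owner: str

@dataclass
class TransitPath:
    """A "data in transit" pathway crossing a trust boundary."""
    name: str
    source: str
    destination: str
    protocol: str            # e.g. "https", "sftp", "modbus"
    data_classes: list[str]

@dataclass
class ScopeInventory:
    stores: list[DataStore] = field(default_factory=list)
    paths: list[TransitPath] = field(default_factory=list)

    def stores_holding(self, data_class: str) -> list[DataStore]:
        """All repositories that hold a given data class."""
        return [s for s in self.stores if data_class in s.data_classes]

inventory = ScopeInventory(
    stores=[
        DataStore("historian-db", "postgres",
                  ["operational-telemetry"], "ot-platform-team"),
        DataStore("backup-vault", "object-storage",
                  ["operational-telemetry", "security-logs"], "infra-team"),
    ],
    paths=[
        TransitPath("vendor-remote-support", "vendor-vpn", "jump-host",
                    "https", ["engineering-configs"]),
    ],
)

# Which in-scope repositories hold operational telemetry?
print([s.name for s in inventory.stores_holding("operational-telemetry")])
```

Keeping the inventory queryable like this makes later steps (coverage checks, exception tracking, audit sampling) mechanical instead of manual.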
Step 2: Set minimum control standards (policy-to-configuration mapping)
Create a “Data Protection Standard” that is testable. Keep it short, but specific:
- Encryption at rest requirements by data class and storage type (cloud storage, databases, endpoint disks, backups).
- Encryption in transit requirements by trust boundary (internal zone-to-zone, remote access, third-party links).
- Key management expectations (ownership, separation of duties, rotation triggers, access approvals).
- Access restrictions tied to least privilege and privileged access pathways.
- DLP expectations where the risk of exfiltration is meaningful (email, endpoints, web uploads, cloud sharing).
- Exception process with compensating controls and time-bound approvals. 1
Practical note: write the standard so it can be validated via configuration evidence (screenshots, exports, policy-as-code outputs, scanner reports).
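One way to make the standard testable is to encode it as policy-as-code: rules keyed by data class and storage type, checked against exported configuration facts. A hedged sketch (the rule keys and setting names are hypothetical, not a real scanner format):

```python
# Hypothetical encoding of a Data Protection Standard as testable rules:
# (data_class, storage_type) -> required configuration settings.
STANDARD = {
    ("credentials", "database"): {"encryption_at_rest": True, "tls_min": "1.2"},
    ("operational-telemetry", "object-storage"): {"encryption_at_rest": True},
}

def check_system(data_class: str, storage_type: str, config: dict) -> list[str]:
    """Compare exported configuration evidence against the standard.

    Returns a list of findings; an empty list means the system meets
    every requirement the standard defines for that data/storage pair.
    """
    required = STANDARD.get((data_class, storage_type), {})
    findings = []
    for setting, expected in required.items():
        actual = config.get(setting)
        if actual != expected:
            findings.append(f"{setting}: expected {expected!r}, found {actual!r}")
    return findings

# Configuration "evidence" as exported from the platform or IaC pipeline:
print(check_system("credentials", "database",
                   {"encryption_at_rest": True, "tls_min": "1.0"}))
# → ["tls_min: expected '1.2', found '1.0'"]
```

The same rule table then doubles as the audit artifact: the standard, the check, and the finding all trace back to one definition.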
Step 3: Implement controls for data at rest
Work with system owners to enable and verify:
- Storage encryption: full-disk encryption for endpoints; volume/object encryption for servers and cloud; database encryption where feasible.
- Backup encryption: backups are frequently missed. Treat backups as primary data stores for this requirement.
- Access controls on repositories: tighten ACLs, group membership, service accounts; require approvals for privileged access.
- Secrets protection: move credentials/keys out of code and shared folders into controlled stores; restrict who can read/export secrets.
- Logging and monitoring of access to sensitive stores where you already have telemetry, especially for admin access.
Verification focus: “enabled” is not enough. Capture proof that encryption is on for each in-scope system class, and that access is restricted to approved roles.
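The verification pass can be automated once configuration evidence and the exception register are both machine-readable. An illustrative sketch (system records and the exception set are invented examples):

```python
# Per-system configuration evidence, e.g. parsed from platform exports:
systems = [
    {"name": "historian-db", "encrypted": True},
    {"name": "legacy-hmi-share", "encrypted": False},
    {"name": "backup-vault", "encrypted": False},
]

# Names covered by a documented, time-bound exception with compensating controls:
approved_exceptions = {"legacy-hmi-share"}

def coverage_gaps(systems: list[dict], exceptions: set[str]) -> list[str]:
    """Systems that are neither encrypted nor covered by an approved exception."""
    return [s["name"] for s in systems
            if not s["encrypted"] and s["name"] not in exceptions]

print(coverage_gaps(systems, approved_exceptions))  # backup-vault is a gap
```

Run against the full at-rest inventory, this produces exactly the system-by-system proof auditors ask for: encrypted yes/no, exception yes/no, with unexplained gaps surfaced.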
Step 4: Implement controls for data in transit
Confirm the organization protects data whenever it crosses a boundary:
- Transport encryption for user access (web, VPN/remote access).
- Service-to-service encryption for internal APIs and messaging where sensitive data is present.
- Administrative traffic protection (remote admin paths, jump hosts, vendor remote support).
- Third-party connections: require encrypted channels for data exchanges and remote support sessions, and record the approved architecture.
Verification focus: identify “plaintext exceptions” (legacy protocols, OT constraints) and manage them through a documented exception with compensating controls (segmentation, monitoring, strict access). 1
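The plaintext-exception hunt described above can be expressed as a simple filter over the transit-path inventory. A sketch, assuming paths carry a protocol field and an optional exception reference (the protocol list is illustrative, not exhaustive):

```python
# Protocols that transmit in the clear; extend for your environment.
PLAINTEXT_PROTOCOLS = {"telnet", "ftp", "http", "modbus", "dnp3"}

paths = [
    {"name": "vendor-remote-support", "protocol": "https"},
    {"name": "plc-polling", "protocol": "modbus", "exception_id": "EXC-012"},
    {"name": "legacy-file-drop", "protocol": "ftp"},
]

def plaintext_findings(paths: list[dict]) -> list[str]:
    """Plaintext paths with no documented exception attached.

    Each finding must end up either remediated or in the exception
    register with compensating controls (segmentation, monitoring).
    """
    return [p["name"] for p in paths
            if p["protocol"] in PLAINTEXT_PROTOCOLS
            and "exception_id" not in p]

print(plaintext_findings(paths))  # → ['legacy-file-drop']
```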
Step 5: Operationalize governance (approvals, reviews, revocation)
C2M2 maturity expectations drive governance discipline, not just technical features:
- Define approval criteria for enabling/altering encryption, key access, DLP rule changes, and repository access changes.
- Document provisioning steps (tickets, change records, IaC workflows).
- Set a review cadence for:
  - Access to sensitive repositories and key management roles
  - Exceptions to encryption or DLP
  - Third-party connections carrying sensitive data
- Define revocation triggers (role change, termination, project end, third-party offboarding, certificate compromise). 1
This is where tools like Daydream help: you can standardize control owners, require evidence uploads per system/control, and track exceptions and revalidations so audit response does not depend on one engineer’s memory.
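The review-cadence and expiry checks above reduce to date arithmetic once review dates and exception expiries are recorded per control. A minimal sketch, assuming a 90-day cadence and illustrative record shapes:

```python
from datetime import date, timedelta

# Assumed cadence; set per your Data Protection Standard.
REVIEW_CADENCE = timedelta(days=90)

def overdue_items(records: list[dict], today: date) -> list[tuple[str, str]]:
    """Flag records whose last review exceeds the cadence, or whose
    exception has passed its expiry date."""
    findings = []
    for r in records:
        if today - r["last_review"] > REVIEW_CADENCE:
            findings.append((r["name"], "review overdue"))
        if r.get("exception_expiry") and r["exception_expiry"] < today:
            findings.append((r["name"], "exception expired"))
    return findings

records = [
    {"name": "key-admin-role", "last_review": date(2024, 1, 5)},
    {"name": "plc-polling-exception", "last_review": date(2024, 5, 1),
     "exception_expiry": date(2024, 4, 30)},
]

print(overdue_items(records, today=date(2024, 5, 10)))
# → [('key-admin-role', 'review overdue'), ('plc-polling-exception', 'exception expired')]
```

Scheduled as a recurring job, this turns "review cadence" and "revocation triggers" from policy language into a queue of dated, assignable tasks.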
Required evidence and artifacts to retain
Keep evidence that proves design, implementation, and operation:
Governance artifacts
- Data Protection Policy / Standard (approved, versioned)
- Data classification and scope statement for the C2M2 assessment 1
- Key management procedures and ownership/RACI
Implementation evidence (at rest)
- Encryption configuration exports/screenshots per platform (disk, database, object storage)
- Backup configuration showing encryption enabled
- Inventory of in-scope data stores mapped to control status (encrypted: yes/no; exception: yes/no)
Implementation evidence (in transit)
- Network/remote access standards and proof of encrypted protocols in use
- Architecture diagrams showing trust boundaries and encrypted links for sensitive flows
- Third-party connection documentation for data exchange and remote support pathways
Operational evidence (what auditors ask for)
- Access requests and approvals for privileged access to data repositories and keys 1
- Periodic access review results and sign-offs 1
- Exception records with compensating controls, approver, expiry, and revalidation history 1
- Change tickets for material modifications to encryption/DLP/access configurations
Common exam/audit questions and hangups
Expect these questions, and prepare “show me” answers:
- “What data is in scope, and where is it stored?” Hangup: no authoritative inventory or unclear scope boundaries.
- “Prove encryption at rest is enabled across the scope.” Hangup: evidence is anecdotal, not system-by-system.
- “Show how you protect data in transit for admin access and third-party links.” Hangup: teams focus on user web traffic and forget remote support, replication, or OT pathways.
- “How do you approve and review exceptions?” Hangup: exceptions exist in email/Slack without expiry, compensating controls, or periodic revalidation. 1
- “Who can access keys or decrypt data, and how is that reviewed?” Hangup: unclear key ownership, overbroad admin groups, missing access review records.
Frequent implementation mistakes (and how to avoid them)
- Mistake: treating encryption as a checkbox rather than coverage. Fix: maintain a repository-to-control mapping and test it during reviews.
- Mistake: ignoring backups, logs, and exports. Fix: include secondary stores in your “at rest” inventory and require the same proof.
- Mistake: undocumented plaintext constraints in OT. Fix: formal exceptions with segmentation and monitoring as compensating controls, plus an owner and expiry. 1
- Mistake: “set and forget” DLP or access restrictions. Fix: define review cadence and revocation triggers; keep the artifacts that show it happened. 1
- Mistake: weak evidence discipline. Fix: require change tickets, approvals, and periodic review outputs as part of the control, not as an afterthought. Daydream can centralize this evidence per control and system owner, reducing scramble during audits.
Enforcement context and risk implications
No public enforcement cases are cited for this C2M2 requirement. Practically, the risk is still concrete: if controls are poorly designed or not evidenced, inappropriate access can persist, and access decisions become hard to justify during internal control testing, audits, customer diligence, or regulator review. 1
Practical 30/60/90-day execution plan
First 30 days (stabilize scope and standards)
- Confirm C2M2 assessment scope and owners for each environment. 1
- Build an inventory of in-scope data stores and transit pathways (start with highest-risk systems).
- Publish a testable Data Protection Standard (at rest, in transit, keys, DLP, exceptions).
- Stand up the evidence model: where approvals, configs, reviews, and exceptions will be stored and who must produce them.
By 60 days (implement and prove baseline coverage)
- Enable and validate encryption at rest for priority repositories; document any exceptions with compensating controls and expiry.
- Validate encryption in transit for remote access, admin pathways, and key third-party connections.
- Implement approval workflow for access to sensitive repositories and key management roles. 1
- Run the first access review for sensitive data repositories; capture results and remediation actions. 1
By 90 days (operationalize cadence and close audit gaps)
- Expand coverage across remaining in-scope systems; refresh the repository/path mapping.
- Establish recurring reviews (access, exceptions, third-party pathways carrying sensitive data) with assigned owners and due dates. 1
- Conduct an internal “audit drill”: sample systems and demand evidence within a short window; fix evidence gaps.
- If you use Daydream, configure control objectives, evidence requests, exception workflows, and revalidation tasks so the next cycle is repeatable.
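The audit drill can be as simple as a seeded random sample over the inventory, checking each sampled system for the evidence your standard requires. A sketch with invented system names and evidence keys:

```python
import random

# Evidence every in-scope system must have on file (illustrative keys).
REQUIRED_EVIDENCE = {"encryption_config", "access_review"}

# What is actually on file per system, e.g. pulled from the evidence store.
evidence_on_file = {
    "historian-db": {"encryption_config", "access_review"},
    "backup-vault": {"encryption_config"},
    "jump-host": set(),
    "eng-workstation-image": {"encryption_config", "access_review"},
}

def drill(systems: dict, sample_size: int, seed: int = 2024) -> dict:
    """Reproducibly sample systems and report missing evidence per system.

    A fixed seed means the same drill can be re-run and re-verified.
    """
    rng = random.Random(seed)
    sample = rng.sample(sorted(systems), k=sample_size)
    return {name: sorted(REQUIRED_EVIDENCE - evidence_on_file[name])
            for name in sample}

gaps = drill(evidence_on_file, sample_size=3)
# Systems in the sample that are missing evidence:
print({name: missing for name, missing in gaps.items() if missing})
```

Each drill's output doubles as a remediation list: every missing artifact gets an owner and a due date before the next cycle.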
Frequently Asked Questions
Do we need encryption everywhere to meet “data at rest” and “data in transit”?
The requirement is outcomes-based: you must implement controls that protect data at rest and in transit, typically including encryption, access restrictions, and DLP. Where encryption is not feasible (common in legacy/OT), document an exception with compensating controls, approval, and revalidation. 1
What’s the minimum evidence set an auditor will accept?
Keep (1) the standard/policy, (2) proof of configuration for representative in-scope systems, and (3) operational records like access approvals, review results, and exceptions with periodic revalidation. Evidence should be traceable from requirement to system to control to record. 1
Does this requirement include third-party connections?
Yes if the third-party connectivity or data exchange is in your C2M2 assessment scope. Treat third-party pathways as “data in transit” and retain documentation showing encryption expectations, approvals, and any exceptions. 1
How should we handle key management within this requirement?
Treat key access as privileged access to sensitive data. Define who can administer keys, require approvals, and include key-management roles in periodic access reviews with documented outcomes. 1
We have multiple teams (IT, OT, cloud). Who should own the control?
Assign a single control owner in GRC/compliance for governance and evidence, and named technical owners per platform for implementation. The control owner’s job is to ensure approvals, review cadence, and revocation triggers are defined and followed. 1
How do we keep this from turning into a one-time documentation exercise?
Tie reviews and exception revalidations to a recurring schedule and make evidence capture part of normal workflows (tickets, change management, access reviews). A system like Daydream helps by tracking evidence requests, due dates, and exceptions across owners without relying on inbox archaeology.
Footnotes
1. Cybersecurity Capability Maturity Model (C2M2), Version 2.1.
Authoritative Sources
- Cybersecurity Capability Maturity Model (C2M2), Version 2.1, U.S. Department of Energy.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream