Message Integrity

The HITRUST message integrity requirement (CSF v11 10.d) means you must define which “sensitive messages” your applications send, then implement cryptographic integrity and authenticity controls for those messages, typically with message authentication codes (MACs) or digital signatures. Auditors will expect clear scoping plus proof the controls are consistently enforced across systems and third parties. 1

Key takeaways:

  • Scope first: explicitly identify which application messages are “sensitive” and where they flow.
  • Implement crypto integrity: use MACs or digital signatures, with solid key management and verification.
  • Evidence wins exams: retain architecture, configs, logs, and test results that show integrity checks happen in production.

Footnotes

  1. HITRUST CSF v11 Control Reference

“Message integrity” shows up in audits because it’s a practical control with a clear failure mode: messages can be altered in transit or forged by an attacker or misbehaving integration, and the receiving system might accept them as real. HITRUST CSF v11 10.d is explicit that you must (1) identify requirements for authenticity and integrity in applications and (2) implement appropriate controls, including MACs or digital signatures for sensitive messages. 1

For a CCO or GRC lead, the fast path is: define “sensitive message” in your environment, map where those messages move (APIs, webhooks, queues, files, internal service calls, third-party connections), then standardize a small set of approved integrity patterns your engineering teams can apply repeatedly. This page is written to help you operationalize that requirement without turning it into a science project: you’ll get scoping rules, implementation steps, evidence to retain, audit questions to expect, and a practical execution plan.

Regulatory text

HITRUST CSF v11 10.d states: “Requirements for ensuring authenticity and protecting message integrity in applications shall be identified, and appropriate controls identified and implemented. Message authentication codes or digital signatures shall be used to ensure the integrity and authenticity of sensitive messages.” 1

What the operator must do:

  1. Identify requirements: decide which application messages require integrity and authenticity guarantees, based on sensitivity and risk.
  2. Select controls: pick control patterns that provide message integrity and authenticity.
  3. Implement controls: deploy MACs or digital signatures for sensitive messages and ensure the receiver verifies them before processing.
  4. Prove it works: keep evidence that the control is designed correctly, implemented across in-scope paths, and monitored.

Plain-English interpretation (what “message integrity” means in practice)

Message integrity means the receiver can detect if a message changed after it was sent. Message authenticity means the receiver can verify the message came from the expected sender (or at least from someone holding an authorized key).

For HITRUST purposes, you are usually protecting:

  • API requests/responses between services (internal microservices, mobile apps, partner APIs)
  • Webhooks sent to or received from third parties
  • Queue and event bus messages (pub/sub, streaming)
  • Batch file exchanges (SFTP uploads, EDI-like payloads, exports to third parties)
  • Commands to sensitive systems (payments, eligibility, patient identity, provisioning)

Encryption in transit (like TLS) helps, but the requirement calls out MACs or digital signatures for sensitive messages. That implies you need application-level integrity for at least some flows, not only network-layer protection. 1
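To make the distinction concrete, here is a minimal sketch of application-level integrity with an HMAC. The key name and payload are illustrative, not a HITRUST-prescribed format; the point is that a one-byte change to the message is detectable at the application layer regardless of what happened on the wire.

```python
import hashlib
import hmac

# Illustrative shared secret; in practice this comes from a secrets manager.
SHARED_KEY = b"example-shared-secret"

def mac_for(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the exact message bytes."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_mac: str) -> bool:
    """Constant-time comparison avoids leaking match position via timing."""
    return hmac.compare_digest(mac_for(message), received_mac)

original = b'{"member_id":"123","action":"update"}'
tag = mac_for(original)
```

An untampered message verifies; flipping any byte of the payload makes `verify` return False, which is exactly the detection property 10.d is after.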

Who this applies to

Entity scope: All organizations implementing HITRUST CSF that build, operate, or rely on applications that send/receive sensitive messages. 1

Operational scope (where it bites in audits):

  • Systems handling regulated or high-impact data and actions (health data, identity, access changes, clinical workflows, billing)
  • Integration-heavy environments (many APIs, many third parties, event-driven architectures)
  • Third-party exchanges where you don’t control the full transport path end-to-end (partner webhooks, B2B integrations, SaaS-to-SaaS message flows)

If you have third parties sending you data or instructions, message integrity is a shared risk. Your controls must cover your side (verification, rejection, logging), and you should impose requirements on third parties where feasible (contract language, integration specs, testing).

What you actually need to do (step-by-step)

Step 1: Define “sensitive message” for your environment

Create a short, auditable definition and apply it consistently. A workable definition includes both data sensitivity and action sensitivity:

Sensitive messages include any message that:

  • Contains confidential or regulated data, or
  • Triggers a privileged or safety-critical action (create user, change permissions, update patient record, release funds), or
  • Feeds systems of record or downstream decisions where tampering creates material risk.

Deliverable: a one-page “Sensitive Message Criteria” standard your engineering teams can follow. 1

Step 2: Inventory message flows (scope the control)

Build a message-flow inventory that answers:

  • Sender application/service
  • Receiver application/service/third party
  • Protocol/transport (REST, gRPC, webhook, queue, file)
  • Payload format and location of integrity fields (header, envelope, body)
  • Whether the message is “sensitive” under your criteria
  • Current integrity/auth approach (if any)
  • Owner (team) and environment coverage (prod/non-prod)

Keep it simple: this inventory is both your scope control and your audit roadmap.

Step 3: Choose standard integrity patterns (minimize design variance)

Pick a small set of approved patterns and publish them as engineering standards.

Common patterns aligned to “MACs or digital signatures” include:

  • HMAC (MAC) for symmetric integrity between known parties sharing a secret key
  • Digital signatures for asymmetric integrity when you need non-repudiation-like properties or broader distribution of verification (one signer, many verifiers)

Define decision criteria:

  • Use HMAC for service-to-service within your boundary or tightly managed partner connections.
  • Use digital signatures for third-party ecosystems, webhook verification at scale, or cases where key sharing is hard to govern.

Document the required fields:

  • Canonicalization rules (exact bytes that are signed/MAC’d)
  • Timestamp/nonce to prevent replay (if relevant to your risk model)
  • Key ID (“kid”) to support rotation
  • Algorithm allow-list

This “profile” is what prevents teams from inventing incompatible signing schemes. 1
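A minimal sketch of such a profile, assuming HMAC-SHA256 over JSON payloads (field names like `kid`, `ts`, and `nonce` are illustrative choices, not mandated by the CSF):

```python
import hashlib
import hmac
import json
import time
import uuid

ALLOWED_ALGS = {"HMAC-SHA256"}              # algorithm allow-list
KEYS = {"2024-10": b"illustrative-secret"}  # key ID -> secret

def canonical_bytes(obj) -> bytes:
    # Canonicalization rule: compact JSON, sorted keys, UTF-8.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def sign(payload: dict, kid: str) -> dict:
    envelope = {
        "alg": "HMAC-SHA256",
        "kid": kid,                 # supports rotation
        "ts": int(time.time()),     # replay-defense input
        "nonce": uuid.uuid4().hex,  # replay-defense input
        "payload": payload,
    }
    mac = hmac.new(KEYS[kid], canonical_bytes(envelope), hashlib.sha256).hexdigest()
    return {**envelope, "mac": mac}

msg = sign({"member_id": "123", "action": "update"}, "2024-10")
```

Because the MAC covers the canonical bytes of the whole envelope (algorithm, key ID, timestamp, nonce, and payload together), a receiver applying the same profile recomputes exactly the same bytes.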

Step 4: Implement verification as a gate (don’t “verify and continue anyway”)

Auditors and incident responders care about one thing: what happens when validation fails.

Implementation requirements to set:

  • Fail closed for sensitive messages: if signature/MAC verification fails, reject the message and do not process side effects.
  • Log with enough context: record sender identity (as asserted), key ID, algorithm, failure reason, and correlation ID. Avoid storing full sensitive payloads in logs.
  • Alert on anomalies: repeated failures, sudden spikes, unexpected key IDs, or invalid timestamps.

For queues and event buses, require consumers to verify integrity before acting. For file transfers, verify a detached signature or signed manifest before ingestion.
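One way to sketch that gate in code. Header names, the key store, and the exception type are all assumptions to adapt to your stack; the invariant is that parsing and side effects happen only after verification succeeds.

```python
import hashlib
import hmac
import json
import logging

log = logging.getLogger("msg-integrity")
KEYS = {"2024-10": b"illustrative-secret"}  # key ID -> secret

class RejectedMessage(Exception):
    """Raised before any side effects occur: fail closed."""

def admit(raw_body: bytes, headers: dict) -> dict:
    kid = headers.get("X-Key-Id")
    corr = headers.get("X-Correlation-Id", "-")
    key = KEYS.get(kid)
    if key is None:
        log.warning("reject: unknown kid=%s corr=%s", kid, corr)  # context, never payload
        raise RejectedMessage("unknown key id")
    expected = hmac.new(key, raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, headers.get("X-Signature", "")):
        log.warning("reject: MAC mismatch kid=%s corr=%s", kid, corr)
        raise RejectedMessage("verification failed")
    return json.loads(raw_body)  # parsing/processing happens only after the gate
```

Note what gets logged on failure: key ID, failure reason, correlation ID — enough context for an investigation without dumping the sensitive payload.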

Step 5: Key management and rotation (where programs often fail)

Message integrity collapses if keys are unmanaged. Set minimum operational rules:

  • Store signing keys in an approved secrets system or HSM-backed service (depending on your architecture).
  • Enforce least privilege for key access (only the signer can produce MACs or signatures; grant verifiers verify-only access where the mechanism supports it).
  • Define rotation triggers and a rotation process that won’t break integrations (support multiple active keys via key ID).
  • Document how compromised keys are revoked and how receivers respond.

You don’t need exotic crypto policy text. You need “who owns keys, where they live, how they rotate, what breaks glass.”
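A minimal sketch of “multiple active keys via key ID,” assuming HMAC (key names and dates are illustrative):

```python
import hashlib
import hmac

# Verifiers accept any active key; senders sign with the current one.
ACTIVE_KEYS = {
    "2024-07": b"old-secret",  # still accepted during the rotation window
    "2024-10": b"new-secret",  # current signing key
}
SIGNING_KID = "2024-10"

def sign(message: bytes) -> tuple[str, str]:
    mac = hmac.new(ACTIVE_KEYS[SIGNING_KID], message, hashlib.sha256).hexdigest()
    return SIGNING_KID, mac

def verify(message: bytes, kid: str, mac: str) -> bool:
    key = ACTIVE_KEYS.get(kid)
    if key is None:
        return False  # revoked or unknown key: fail closed
    return hmac.compare_digest(hmac.new(key, message, hashlib.sha256).hexdigest(), mac)
```

With this shape, revoking a compromised key is just removing its entry, and messages signed under the old key ID keep verifying until the rotation window closes — so rotation doesn't break integrations.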

Step 6: Third-party integrations (contract + technical controls)

For third parties that send you sensitive messages:

  • Put integrity requirements into integration specs (required headers/fields, algorithms, timestamp rules).
  • Include contract/security addendum language requiring integrity controls for sensitive messages, plus breach notification and key compromise notice.
  • Test signing/verification in pre-production and keep results.

If you can’t force a third party to sign messages, treat that flow as a risk exception. Compensating controls might include strict allow-lists, TLS mutual auth, and strong replay defenses, but document why MAC/signature is not feasible and what you did instead. 1

Step 7: Testing and monitoring

Evidence-grade testing includes:

  • Unit/integration tests proving verification rejects tampered messages
  • Negative test cases (wrong key, altered body, replayed timestamp)
  • Operational monitoring dashboards or alerts for verification failures

You want to be able to show an auditor: “Here is the control design; here it is working in production; here is how we detect failures.”
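As a sketch of what evidence-grade negative tests can look like (plain assertions here; the same cases translate directly to pytest or whatever framework your teams use):

```python
import hashlib
import hmac

KEY = b"test-secret"

def sign(msg: bytes) -> str:
    return hmac.new(KEY, msg, hashlib.sha256).hexdigest()

def verify(msg: bytes, mac: str) -> bool:
    return hmac.compare_digest(sign(msg), mac)

def test_valid_message_accepted():
    assert verify(b"amount=100", sign(b"amount=100"))

def test_tampered_body_rejected():
    # Altered body under a valid tag must fail verification.
    assert not verify(b"amount=900", sign(b"amount=100"))

def test_wrong_key_rejected():
    forged = hmac.new(b"wrong-key", b"amount=100", hashlib.sha256).hexdigest()
    assert not verify(b"amount=100", forged)

for case in (test_valid_message_accepted, test_tampered_body_rejected, test_wrong_key_rejected):
    case()  # retained test output is your audit evidence
```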

Required evidence and artifacts to retain (audit-ready)

Maintain a “message integrity” evidence packet with:

  • Policy/standard defining sensitive messages and required integrity controls 1
  • Message-flow inventory with sensitivity classification and control mapping
  • Architecture diagrams for key systems showing where signing and verification occur
  • Technical specifications (signing profile: algorithms, canonicalization, headers, key IDs, replay controls)
  • Key management records: ownership, storage mechanism, access controls, rotation procedure
  • Configuration snapshots: API gateway or service configs that enforce verification; webhook verification settings; queue consumer verification middleware settings
  • Logs and samples: redacted production logs showing verification success/failure events and response actions
  • Test evidence: test cases and results for tamper detection and rejection
  • Third-party artifacts: integration agreements/specs and onboarding test results

Keep evidence tied to specific in-scope message flows. Broad policy without system-level proof is a common audit gap.

Common exam/audit questions and hangups

Expect questions like:

  • “Define ‘sensitive message’ and show which flows are in scope.”
  • “Where do you apply MACs or digital signatures today? Show production configuration.”
  • “What happens if verification fails? Show logs.”
  • “How are keys generated, stored, rotated, and revoked?”
  • “How do you ensure third parties sign messages, and how do you validate them?”
  • “How do you prevent replay attacks for signed messages?”

Hangups that slow audits:

  • Teams say “TLS covers integrity,” but cannot show MAC/signature controls for sensitive messages as specified. 1
  • Controls exist only in one integration pattern (for example, public APIs) but not in queues, internal service calls, or batch transfers.
  • Verification is “best effort” with warnings instead of hard failure for sensitive paths.

Frequent implementation mistakes (and how to avoid them)

  1. Signing the wrong bytes: teams sign a JSON object after parsing, but the receiver signs a different canonical form and verification fails.
    Fix: define canonicalization and enforce it with shared libraries.

  2. No key ID, no rotation: a single shared secret gets embedded across services and partners.
    Fix: require kid, support multiple active keys, document rotation.

  3. Logging sensitive payloads during failures: engineers dump full bodies to debug signature mismatches.
    Fix: log hashes, correlation IDs, and failure codes; keep payload logging behind break-glass controls.

  4. Verification after processing: messages are accepted, processed, then verified “for audit.”
    Fix: enforce verification as an admission control before side effects.

  5. Third-party exceptions not governed: “partner can’t do it” becomes permanent.
    Fix: document exception rationale, compensating controls, and a target state for remediation.
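For mistake 1 above, the practical fix is a single canonicalization function that both signer and verifier call before computing or checking the MAC/signature. A minimal sketch (compact sorted-key JSON is one workable convention; RFC 8785 JCS is a more rigorous option if you need full interoperability):

```python
import json

def canonical_json(obj) -> bytes:
    # Both sender and receiver MUST serialize through this exact function,
    # so key order and whitespace can never cause a verification mismatch.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

# The same logical object yields identical bytes regardless of key order:
a = canonical_json({"b": 1, "a": 2})
b = canonical_json({"a": 2, "b": 1})
```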

Enforcement context and risk implications

No public enforcement cases were provided in the source material for this requirement, so treat this section as risk-based operational context rather than a regulator narrative.

Risk outcomes message integrity controls reduce:

  • Tampering with clinical, billing, identity, or authorization messages
  • Forged webhook events triggering account takeover or data corruption
  • Replay of previously valid messages (if timestamps/nonces are part of your design)
  • Disputes and incident response ambiguity about what was sent and accepted

For HITRUST assessments, message integrity issues often cascade into broader findings: weak key management, insufficient secure SDLC controls, and inadequate third-party integration governance. 1

Practical 30/60/90-day execution plan

First 30 days (stabilize scope and standards)

  • Publish “Sensitive Message Criteria” and get sign-off from security architecture and application owners. 1
  • Build the first version of the message-flow inventory for your highest-risk systems and top third-party integrations.
  • Choose approved integrity patterns (HMAC and/or digital signature) and publish a signing profile (algorithms, required fields, failure handling).
  • Identify quick wins: one or two critical flows where you can implement signing and verification end-to-end.

Days 31–60 (implement and prove)

  • Implement signing/verification libraries or gateway middleware for the chosen patterns.
  • Enforce fail-closed behavior on in-scope flows; add structured logging and alerts for verification failures.
  • Stand up key management: key storage, access control, rotation playbook, and key ID support.
  • Run negative tests and retain results as audit evidence.

Days 61–90 (scale coverage and lock in governance)

  • Expand coverage to remaining in-scope flows, including queues/events and batch transfers.
  • Add third-party requirements to onboarding checklists and contract templates; update integration specs.
  • Build an evidence packet per critical system: diagrams, configs, test results, and operational logs.
  • Add continuous checks: CI tests for integrity enforcement, config drift detection, and periodic control self-attestation.

Where Daydream fits naturally: once you have the scope and standards, Daydream can help you track message flows, assign control owners, collect evidence artifacts (configs, test outputs, logs), and keep third-party integration requirements tied to each in-scope connection so audits don’t become a scramble.

Frequently Asked Questions

Does TLS satisfy the message integrity requirement by itself?

TLS protects data in transit, but HITRUST CSF v11 10.d explicitly calls for MACs or digital signatures for sensitive messages. Treat TLS as necessary transport security, then add application-level integrity where messages are sensitive. 1

What’s the difference between a MAC and a digital signature for compliance purposes?

A MAC (like HMAC) uses a shared secret and works well for tightly controlled parties. Digital signatures use asymmetric keys and make verification easier to distribute without sharing signing secrets; both meet the control intent when implemented correctly. 1

How do we decide which messages are “sensitive”?

Classify by data and by action. If a message contains regulated/confidential data or triggers privileged actions or system-of-record updates, treat it as sensitive and require integrity plus authenticity controls. 1

What evidence do auditors usually ask for first?

They start with scope (message-flow inventory and sensitive-message criteria), then ask for proof in production: configs showing verification enforcement, logs of accept/reject outcomes, and key management procedures. 1

How do we handle third parties that cannot sign messages?

Document a risk exception with compensating controls and an agreed remediation path. Also ensure your receiver applies strict validation, allow-lists, and monitoring, and that the exception is time-bound in your governance process. 1

What’s the most common technical failure mode?

Mismatched canonicalization (the sender signs one representation, the receiver verifies another) and lack of a clean rotation path. Solve both with a standard signing profile and shared libraries. 1

