Cybersecurity Risk Management and Incident Disclosure

To meet the cybersecurity risk management and incident disclosure requirement, you must (1) run a documented cybersecurity risk program that matches what you tell clients and the market, and (2) keep defensible incident triage and disclosure-decision records so you can prove why, when, and how you communicated about an incident. This is a recurring exam focus tied to anti-fraud and “misleading statements” risk.

Key takeaways:

  • Your biggest exposure is the gap between cybersecurity statements (privacy notices, marketing, client comms) and real control operation.
  • Build a disclosure-ready incident workflow: triage, escalation, materiality assessment, approvals, communications, and evidence retention.
  • Test and validate controls and the incident process; track remediation to closure with owners and dates.

Compliance teams get pulled into cybersecurity late: after an event, after a client asks hard questions, or during an SEC exam request list. The operational problem is simple. Cybersecurity obligations are not only technical; they also show up as disclosure and anti-fraud risk when public statements, marketing materials, or client communications misstate your security posture or your incident handling.

For SEC-registered investment advisers, the Marketing Rule prohibits advertisements that are untrue or misleading (17 CFR 275.206(4)-1(a)(1) 1). Cybersecurity language routinely appears in “advertisements” as defined by the rule (websites, pitch decks, RFP responses, market commentary, newsletters, and other promotional communications). If those materials say you have controls you do not actually run, or imply monitoring/response capabilities you cannot evidence, your cybersecurity program becomes a marketing compliance issue.

This page is written for a CCO, compliance officer, or GRC lead who needs an execution plan: what to implement, who owns what, what proof to keep, and what exam teams will challenge, so you can operationalize the cybersecurity risk management and incident disclosure requirement quickly and defend it under scrutiny. SEC examination priorities continue to emphasize compliance with recently adopted rules, including the Marketing Rule 2.

Regulatory text

Primary requirement (operator view): Do not disseminate cybersecurity-related statements in advertisements that are materially untrue or misleading, and be able to substantiate what you say with current, supportable evidence.

Text excerpt (Marketing Rule): “It shall constitute a fraudulent, deceptive, or manipulative act… for any investment adviser to disseminate any advertisement that includes any untrue statement of a material fact, or that is otherwise false or misleading.” (17 CFR 275.206(4)-1(a)(1) 1)

What this means for cybersecurity risk management and incident disclosure requirement:

  • Any cybersecurity claim in marketing/client acquisition materials must be accurate and not overstated, and must not omit qualifiers whose absence makes the overall impression misleading 1.
  • If you describe incident response capabilities (“24/7 monitoring,” “we notify clients promptly,” “we have a tested IR plan”), you need operational proof those processes exist and operate as described.
  • SEC exam focus on Marketing Rule compliance increases the likelihood that cybersecurity claims embedded in marketing collateral get tested 2.

Plain-English interpretation (what you’re on the hook for)

You must be able to answer two questions at any point in time:

  1. Are our cybersecurity statements true today?
    Meaning: the controls, governance, and monitoring you describe in public or client-facing communications match what IT/security and third parties actually do.

  2. If an incident happens, can we prove our decisions and communications were reasonable and consistent?
    Meaning: you have an incident workflow that produces timestamped, reviewable records for triage, escalation, disclosure deliberations, and approvals.

If either answer is “no,” your program is exam-fragile. You may be technically secure but still non-defensible because your communications create anti-fraud exposure.

Who it applies to (entity + operational context)

Primary applicability: SEC-registered investment advisers that produce or disseminate advertisements covered by the Marketing Rule 1.

Operationally, this requirement hits teams that:

  • Publish cybersecurity language on websites, pitch decks, DDQs/RFPs, client letters, newsletters, and factsheets.
  • Provide clients with incident-related communications (initial notice, updates, remediation statements, “no impact” statements).
  • Rely on third parties for key cybersecurity functions (managed detection and response, cloud hosting, fund administrators, CRM platforms) where your marketing may imply stronger control than you can actually verify.

What you actually need to do (step-by-step)

1) Build an “external cybersecurity claims inventory”

Goal: Identify every place you make a cybersecurity statement that could be deemed misleading if unsupported.

  • Collect: website pages, pitchbooks, RFP templates, DDQ responses, investor letters, privacy notices, security addenda, and any “security overview” one-pagers.
  • Break statements into testable claims, such as:
    • “We encrypt data at rest and in transit.”
    • “We conduct annual penetration tests.”
    • “We maintain an incident response plan and run tabletop exercises.”
    • “We notify clients promptly after a breach.”

Operator tip: Treat DDQ/RFP boilerplate as high risk. People copy old language long after controls or providers change.

2) Map each claim to a control owner and proof

Create a table with four columns:

  1. Claim (verbatim text)
  2. Control/process that makes it true
  3. Control owner (named role, not a team)
  4. Evidence location (link to ticketing system, SOC reports, policies, logs)

If you cannot produce evidence quickly, either:

  • fix the control (security work), or
  • fix the statement (compliance/marketing work), or
  • add limiting language so it’s not misleading.
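As an illustration, the four-column mapping can be kept as structured data so that unsupported claims surface automatically before publication. This is a minimal sketch; the field names, sample claims, and evidence URLs are hypothetical, not drawn from any specific GRC tool.

```python
from dataclasses import dataclass, field

@dataclass
class ClaimRecord:
    claim: str                                         # verbatim statement text
    control: str                                       # control/process that makes it true
    owner: str                                         # named role, not a team
    evidence: list[str] = field(default_factory=list)  # pointers to proof

    def is_supported(self) -> bool:
        # A claim with no evidence pointer needs a control fix,
        # a wording fix, or limiting language before it ships.
        return bool(self.evidence)

# Hypothetical inventory entries for illustration only.
inventory = [
    ClaimRecord("We encrypt data at rest and in transit",
                "Disk/TLS encryption standard", "Head of IT Security",
                ["https://grc.example/evidence/encryption-review"]),
    ClaimRecord("We conduct annual penetration tests",
                "Annual pentest engagement", "CISO"),  # no evidence on file yet
]

unsupported = [r.claim for r in inventory if not r.is_supported()]
print(unsupported)  # claims to fix, narrow, or qualify
```

Even a spreadsheet works for this; the point is that "no evidence link" is a blocking state, not a footnote.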

3) Implement written cybersecurity governance that matches disclosures

Stand up a written governance standard that is “disclosure-aware”:

  • Define minimum control expectations (access control, encryption, logging, vulnerability management, backups, third-party oversight).
  • Assign owners and review cadence for:
    • cybersecurity program documents,
    • client-facing cybersecurity language,
    • incident communications templates.

This directly addresses the common failure mode: disclosures drift while the program changes.

4) Operationalize incident triage + disclosure-decision logs

Non-negotiable artifact: a formal log that records what happened and who decided what, when. Minimum fields to capture:

  • Detection source and time
  • Incident category and scope (systems, data types)
  • Initial severity and rationale
  • Containment actions taken
  • Legal/compliance involvement (who, when)
  • Disclosure decision points (what was considered, what was decided)
  • Communications issued (to whom, when, channel)
  • Post-incident corrective actions and owners

Why this matters: the key risk factor is incomplete incident-decision evidence, which weakens post-incident defensibility 3.
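The minimum fields above can be sketched as a structured record that timestamps each disclosure decision as it is made. This is an illustrative schema, not a prescribed format; field names are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionPoint:
    timestamp: datetime
    decision: str      # what was decided
    rationale: str     # what was considered
    approver: str      # who signed off

@dataclass
class IncidentLog:
    incident_id: str
    detected_at: datetime
    detection_source: str
    category: str               # e.g. "credential compromise"
    scope: list[str]            # systems and data types affected
    severity: str
    severity_rationale: str
    containment_actions: list[str] = field(default_factory=list)
    decisions: list[DecisionPoint] = field(default_factory=list)

    def record_decision(self, decision: str, rationale: str, approver: str) -> None:
        # Timestamp at the moment of entry so the timeline is reconstructable.
        self.decisions.append(DecisionPoint(
            datetime.now(timezone.utc), decision, rationale, approver))

log = IncidentLog(
    incident_id="INC-0001",
    detected_at=datetime.now(timezone.utc),
    detection_source="EDR alert",
    category="credential compromise",
    scope=["CRM", "email"],
    severity="High",
    severity_rationale="Client PII potentially in scope",
)
log.record_decision(
    decision="Issue initial client notice within 48 hours",
    rationale="Potential PII exposure; contractual notice obligations",
    approver="CCO",
)
```

Whether this lives in a ticketing system or a GRC tool matters less than the discipline: every decision point gets a timestamp, a rationale, and a named approver.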

5) Put approval controls around cybersecurity statements

Treat cybersecurity content like performance advertising: controlled drafting, review, and retention.

  • Create a required review path: Security owner + Compliance + Legal (as appropriate).
  • Lock templates for RFP/DDQ cybersecurity sections.
  • Require change control for published website/security pages.

6) Validate controls and test the incident process

Run periodic validation so you can back up statements with current evidence:

  • Tabletop exercises focused on realistic scenarios: ransomware at a third party, business email compromise leading to a misdirected wire, cloud credential leak.
  • Targeted testing: confirm encryption settings, MFA enforcement, privileged access reviews, log retention, backup restoration.

Track remediation to closure with dates, owners, and validation notes. Control testing is how you avoid stale claims and stale incident plans.
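Closure discipline can be enforced with a simple rule: an item is open until closure evidence is on file, and open items past their due date get flagged. An illustrative sketch; the tracker fields and sample issues are hypothetical.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class RemediationItem:
    issue: str
    owner: str
    due: date
    closure_evidence: Optional[str] = None  # link to validation notes

def overdue_open_items(items: list[RemediationItem], today: date) -> list[str]:
    # An item without closure evidence is still open, regardless of status flags.
    return [i.issue for i in items if i.closure_evidence is None and i.due < today]

tracker = [
    RemediationItem("Enforce MFA on admin accounts", "IT Security Lead",
                    date(2025, 1, 15)),                      # open, past due
    RemediationItem("Rotate leaked API credential", "CISO",
                    date(2025, 1, 10), "https://grc.example/evidence/rotation"),
    RemediationItem("Update IR plan contact list", "Compliance",
                    date(2025, 6, 30)),                      # open, not yet due
]

print(overdue_open_items(tracker, date(2025, 2, 1)))
```

Running this kind of check on a cadence is what keeps "tested and remediated" claims current rather than aspirational.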

Required evidence and artifacts to retain (audit-ready list)

Keep these in a centralized compliance evidence repository (GRC tool or a controlled folder with retention rules):

  • Cybersecurity governance standard with owners and review history.
  • External cybersecurity claims inventory with mapped controls and evidence pointers.
  • Marketing/advertisement approval records for materials containing cybersecurity statements 1.
  • Incident response plan and tabletop/test reports, including attendance and outcomes.
  • Incident triage and disclosure-decision logs with timestamps and sign-offs.
  • Remediation tracker (issues, risk rating, owner, due date, closure evidence).
  • Third-party oversight artifacts for security-critical providers (security summaries, contract security clauses, escalation contacts), to avoid making promises you cannot enforce.

Common exam/audit questions and hangups

Expect requests and challenges in these areas, especially given exam focus on Marketing Rule compliance 2:

  • “Show me support for the cybersecurity statements in your pitch deck/website.”
  • “Who approves cybersecurity language in advertisements? Show your workflow.”
  • “Provide evidence of incident response testing and post-test remediation.”
  • “Walk through your last incident: timeline, decisions, communications, approvals.”
  • “How do you ensure DDQ answers remain current after control or provider changes?”
  • “How do you supervise third parties that provide key security controls you reference in marketing?”

Frequent implementation mistakes (and how to avoid them)

  1. Copy-paste cybersecurity claims without proof.
    Fix: require each claim to have an evidence link before publication.

  2. Treat incident response as IT-only.
    Fix: build a joint IR escalation path that hardwires Compliance/Legal into disclosure decisions and log approvals.

  3. No “overall impression” review.
    A statement can be technically true but still misleading if it implies broader coverage.
    Fix: add qualifiers and scope language (“for corporate devices,” “for systems in scope,” “where supported by the service”).

  4. Stale DDQ/RFP libraries.
    Fix: put a review gate on templates and require re-approval when controls, providers, or scope change.

  5. Testing without closure discipline.
    Fix: tie tabletop outcomes to tickets, assign owners, and require closure evidence.

Enforcement context and risk implications

You do not need a breach for this to become a regulatory problem. Misleading cybersecurity statements can independently create anti-fraud exposure under the Marketing Rule if made in advertisements 1. The practical risk is exam friction, remediation orders, and reputational fallout when clients discover mismatches between promises and reality.

Daydream (when you’re ready for it) fits as the system of record for: (1) an always-current inventory of external cybersecurity claims and where they appear, (2) mapped controls and evidence, and (3) incident decision logs with approval trails. The win is speed during exams and consistency during incidents.

Practical 30/60/90-day execution plan

First 30 days (stabilize and inventory)

  • Appoint owners: Security lead, Compliance reviewer, Marketing/content gatekeeper.
  • Build the external cybersecurity claims inventory across top materials (website + pitch deck + DDQ template).
  • Create a basic incident triage and disclosure-decision log template and require it for all security events, even “near misses.”
  • Stand up an approvals workflow for any cybersecurity statements in advertisements 1.

Days 31–60 (prove and align)

  • Map each claim to evidence; remove, narrow, or correct unsupported statements.
  • Write or refresh your cybersecurity governance standard with review cadence and named owners.
  • Run a tabletop exercise that forces disclosure deliberations (include Compliance and whoever signs client communications).
  • Start a remediation tracker and close the highest-risk gaps revealed by claim-to-evidence mapping.

Days 61–90 (operationalize and test)

  • Implement change control: any change to controls or key security third parties triggers review of related external statements.
  • Conduct targeted validation of the controls you most often claim (MFA, encryption, monitoring, backups), and file evidence.
  • Perform a mock exam: pick one published cybersecurity statement and one incident scenario; practice producing substantiation and decision records quickly.
  • Formalize training for client-facing teams who answer cybersecurity questions (sales, investor relations).

Frequently Asked Questions

Does the Marketing Rule really apply to cybersecurity statements?

If the statement appears in an “advertisement” as defined by the rule, it must not be untrue or misleading 1. In practice, that includes websites, pitch decks, and many RFP/DDQ responses used to solicit investors or clients.

What counts as “evidence” that a cybersecurity claim is true?

Evidence is something an examiner can inspect: policies with review history, system configuration screenshots or exports, logs, test reports, and third-party reports where appropriate. The key is that the evidence is current and clearly ties back to the specific claim language.

Our IR plan exists, but we don’t document decisions during incidents. Is that a problem?

Yes, because after an incident you must be able to reconstruct the timeline and decision basis for communications and disclosures. A disclosure-decision log with timestamps and sign-offs is a practical control to reduce defensibility gaps.

Can we keep broad cybersecurity language if we add disclaimers?

Disclaimers help only if they change the overall impression and accurately describe scope and limits. The safer approach is to narrow statements to what you can prove and avoid absolute terms (“always,” “fully,” “guaranteed”).

How do third parties affect this requirement?

If your marketing implies controls provided by a third party (monitoring, encryption, testing), you still need oversight and evidence that the third party actually provides what you claim. Otherwise, your statements can become misleading even if the gap sits with the provider.

What should Compliance own versus Security?

Security should own control operation and technical evidence; Compliance should own the inventory of external statements, the approval workflow for advertisements, and the recordkeeping discipline for incident communications decisions. Both must agree on claim language and substantiation.

Footnotes

  1. 17 CFR 275.206(4)-1

  2. 2025 Exam Priorities

  3. 17 CFR 275.206(4)-1; 2025 Exam Priorities


Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream