SEC Artificial Intelligence Marketing Compliance - AI Washing Prevention

To meet the SEC's artificial intelligence marketing compliance and AI washing prevention requirement, you must ensure every AI/ML statement in any “advertisement” is accurate, substantiated, and not misleading, with written controls that prevent unverified AI claims from reaching the public. Operationally, that means inventorying AI-related claims, mapping each claim to proof of capability, and tightening marketing review, documentation, and periodic testing.

Key takeaways:

  • The SEC treats false or unsubstantiated AI claims as Marketing Rule fraud risk under 17 CFR 275.206(4)-1(a)(1) 1.
  • Enforcement has already focused on “AI washing” by advisers, with penalties of $225,000 and $175,000 in two March 2024 cases 2.
  • Your fastest path is a claim-by-claim substantiation file, plus a cross-functional marketing approval workflow that requires technology sign-off on AI claims.

“AI washing” in SEC terms is simpler than many teams assume: if your marketing says you use AI or ML in a way that matters to an investor’s decision, you need to be able to prove it, or you need to change the claim. The SEC’s Investment Adviser Marketing Rule already prohibits untrue statements of material fact or statements that are otherwise false or misleading in advertisements, and the SEC has applied that standard directly to AI claims 3.

For a CCO or GRC lead, the operational challenge is that AI claims can spread across channels (websites, pitch decks, social posts, RFPs, and even Form ADV narrative disclosures), while proof lives somewhere else (product docs, engineering tickets, vendor contracts, model evaluations). Your job is to connect those dots and make it hard for the business to publish aspirational language.

This page gives requirement-level guidance you can put into controls quickly: define what counts as an AI claim, establish “substantiation before publication,” build an evidence pack that survives an exam, and run a recurring sweep to catch drift.

Requirement: SEC artificial intelligence marketing compliance and AI washing prevention

Plain-English interpretation

If you advertise AI/ML capabilities in connection with investment advisory services, your statements must be truthful, not overstated, and supported by documentation you can produce promptly. A claim can be misleading even if parts are true; for example, saying “AI-driven” when AI is only used for a minor internal workflow can mislead if investors would reasonably think AI drives the advice or portfolio decisions 4.

Who this applies to

Entity scope

  • Registered Investment Advisers (RIAs) and advisers subject to the SEC Marketing Rule’s advertisement standards 1.

Operational scope (where this shows up)

  • Any “advertisement” as defined and covered by the Marketing Rule, including public-facing and client-facing marketing materials such as websites, social media, pitchbooks, presentations, product pages, and other promotional communications 1.
  • Any narrative statements about AI/ML in SEC filings presented as part of marketing or promotional communications. The SEC’s March 2024 AI washing actions referenced misstatements across outward communications, reinforcing that you should treat all public claims as in-scope 2.

Regulatory text

Marketing Rule prohibition (core standard):
“It shall constitute a fraudulent, deceptive, or manipulative act, practice, or course of business within the meaning of section 206(4) of the Act for any investment adviser, directly or indirectly, to disseminate any advertisement that includes any untrue statement of a material fact, or that is otherwise false or misleading.” 1

What the operator must do

  • Treat AI/ML claims as “material fact” statements that require substantiation before release.
  • Prevent false, overstated, or unverifiable AI statements from being disseminated in any covered advertisement.
  • Maintain written policies and procedures reasonably designed to prevent inaccurate or unsubstantiated AI-related disclosures; the failure to adopt and implement such controls was itself cited in enforcement 5.

Public enforcement cases

In the Matter of Delphia (USA) Inc. (IA-6573, March 18, 2024)

  • What happened: The adviser made marketing claims that it used client data and external partner data with AI/ML in its investment process; the SEC found the statements were false and misleading because the adviser did not use that data with AI/ML in its investment process at the time 6.
  • Penalty: $225,000 6.
  • Operator lesson: If you claim AI/ML uses specific data types (spending, social, personal information), you need evidence of actual data integration and use within the investment process, not a roadmap slide.

In the Matter of Global Predictions Inc. (IA-6574, March 18, 2024)

  • What happened: The adviser promoted itself with broad AI claims (including being the “first regulated AI financial advisor”) and could not substantiate many claims; the SEC also cited insufficient documentation and failure to adopt and implement written policies and procedures to prevent inaccurate or unsubstantiated AI-related disclosures 5.
  • Penalty: $175,000 5.
  • Operator lesson: Even if some AI exists, you still need substantiation discipline: documentation of what the system does, how it’s used, and controls that stop marketing drift.

SEC statement of focus (Press Release 2024-36)

SEC Chair Gary Gensler explicitly warned advisers not to mislead the public by saying they use AI when they do not, and labeled this “AI washing” that harms investors 7. Treat this as an exam and enforcement signal, not commentary.

What you actually need to do (step-by-step)

Step 1: Define what counts as an AI/ML marketing claim (build a “claim taxonomy”)

Create a short internal standard that flags statements requiring substantiation, including:

  • “AI-driven,” “machine learning,” “expert AI forecasts,” “optimization technology,” “autonomous,” “predictive,” “first regulated AI advisor” style superlatives 5.
  • Claims about what data is used to make models “smarter” (client spending patterns, social media, personal data) 6.
  • Claims about what the AI does in the advisory process (idea generation, portfolio construction, risk modeling, trade execution, monitoring).

Deliverable: a one-page “AI Claims Standard” referenced by your marketing review policy.
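The taxonomy above can also be encoded as a simple pre-publication screen. The sketch below is a minimal, illustrative keyword flagger, assuming your own trigger-phrase list; the phrases shown are examples from this page, not an exhaustive or authoritative set, and a keyword match is only a prompt for human review, never a substitute for it.

```python
import re

# Illustrative trigger phrases drawn from the claim taxonomy above;
# extend this list with your firm's own terminology.
AI_CLAIM_TRIGGERS = [
    r"\bAI[- ]driven\b",
    r"\bmachine learning\b",
    r"\bautonomous\b",
    r"\bpredictive\b",
    r"\bexpert AI\b",
    r"\bfirst (regulated )?AI\b",
]

TRIGGER_RE = re.compile("|".join(AI_CLAIM_TRIGGERS), re.IGNORECASE)

def flag_ai_claims(text: str) -> list[str]:
    """Return the trigger phrases found in a marketing statement."""
    return [m.group(0) for m in TRIGGER_RE.finditer(text)]

print(flag_ai_claims("Our AI-driven platform uses machine learning."))
# -> ['AI-driven', 'machine learning']
```

A flagged statement goes into the substantiation workflow; an empty result does not prove a statement is safe, only that it matched none of the listed triggers.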

Step 2: Inventory every place AI claims appear (centralize the surface area)

Build an inventory across:

  • Website pages, blogs, FAQs, product pages
  • Social media accounts
  • Pitch decks and fact sheets
  • RFP templates and one-pagers
  • Form ADV narrative language where applicable to promotional statements

Make it operational: owners, last updated date, approval status, and whether the asset contains AI claims.
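One way to make the inventory operational is to give each asset a structured record with exactly those fields. The sketch below is a hypothetical schema; the field names and example assets are illustrative assumptions, not anything prescribed by the Marketing Rule.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical schema for one row of the marketing-asset inventory.
@dataclass
class MarketingAsset:
    asset_id: str
    channel: str            # e.g. "website", "pitch deck", "social"
    owner: str
    last_updated: date
    approval_status: str    # e.g. "approved", "pending", "frozen"
    contains_ai_claims: bool
    claim_ids: list[str] = field(default_factory=list)

inventory = [
    MarketingAsset("WEB-001", "website", "marketing", date(2024, 3, 1),
                   "pending", True, ["CLM-7"]),
    MarketingAsset("DECK-014", "pitch deck", "sales", date(2023, 11, 9),
                   "approved", False),
]

# Exam-style query: which assets carry AI claims without a current approval?
needs_review = [a.asset_id for a in inventory
                if a.contains_ai_claims and a.approval_status != "approved"]
print(needs_review)  # -> ['WEB-001']
```

The same shape works as a spreadsheet or database table; what matters is that every asset has an owner, a date, an approval status, and a link to its claim records.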

Step 3: Create a claim-by-claim substantiation file (the exam survival artifact)

For each AI claim, record:

  1. Exact wording (copy/paste) and channel
  2. What the claim means in operational terms (one sentence)
  3. System(s) involved (internal tool, third-party product)
  4. Evidence that the capability exists and is used as described
  5. Approval and reviewer names (Compliance + Technology)

Minimum standard: if you cannot attach evidence that would convince a skeptical examiner, rewrite the claim.
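The "rewrite the claim if you cannot attach evidence" standard can be enforced mechanically: a record is complete only when every required field is filled. The sketch below assumes a hypothetical record layout mirroring the five items above; field names and the example claim are illustrative.

```python
# Hypothetical substantiation record; a claim is publishable only
# when every required field is present and non-empty.
REQUIRED_FIELDS = [
    "exact_wording", "channel", "operational_meaning",
    "systems_involved", "evidence_refs",
    "compliance_approver", "technology_approver",
]

def is_substantiated(record: dict) -> bool:
    """True only if every required field is present and non-empty."""
    return all(record.get(f) for f in REQUIRED_FIELDS)

claim = {
    "exact_wording": "Our platform uses machine learning to rank ideas.",
    "channel": "website",
    "operational_meaning": "A ranking model scores candidate research ideas.",
    "systems_involved": ["internal ranking service"],
    "evidence_refs": ["design doc", "latest model evaluation"],
    "compliance_approver": "J. Doe",
    "technology_approver": "",   # missing tech sign-off
}
print(is_substantiated(claim))  # -> False
```

A completeness check like this catches missing sign-offs, but a reviewer still has to judge whether the attached evidence actually supports the wording.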

Step 4: Implement cross-functional review with mandatory technology sign-off

Update your marketing review workflow so that any AI/ML claim requires:

  • Compliance approval (Marketing Rule standard) 1
  • Technology/engineering approval (capability substantiation)
  • If third-party AI is referenced, procurement/vendor owner confirmation that contract scope matches the claim

Practical control wording: “No AI/ML claim may be published without a completed substantiation record and technology sign-off.”
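That control wording translates into a simple decision rule at the publication step. The sketch below is a minimal gate function under the assumptions of this page: the role names and the vendor check are this workflow's conventions, not regulatory text.

```python
# Minimal sketch of the pre-publication gate described above.
def may_publish(has_ai_claim: bool,
                compliance_ok: bool,
                technology_ok: bool,
                references_vendor_ai: bool = False,
                vendor_ok: bool = False) -> bool:
    """Block any AI/ML claim lacking the required sign-offs."""
    if not has_ai_claim:
        return compliance_ok          # normal marketing review still applies
    if references_vendor_ai and not vendor_ok:
        return False                  # vendor contract scope must match claim
    return compliance_ok and technology_ok

print(may_publish(True, compliance_ok=True, technology_ok=False))  # -> False
```

Wherever your approvals actually live (ticketing, CMS workflow, a compliance platform), the logic should reduce to this: no AI claim reaches the public with any required sign-off missing.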

Step 5: Fix existing claims fast (triage and remediate)

Triage criteria:

  • Highest risk: claims that AI directly drives advice or performance, claims about unique “first” status, and claims about client data being used 2.
  • Medium risk: generalized “we use AI” without describing scope.

Remediation actions:

  • Rewrite as precise scope statements (what, where, and what not).
  • Remove unsupported data-source statements immediately.
  • Add qualifying language only if it is still accurate and not misleading in context.

Step 6: Run a recurring quarterly sweep for drift

The enforcement cases involved multi-channel outward statements over time. Put a recurring control on the calendar:

  • Quarterly scan of public properties and the top client-facing deck library.
  • Confirm claims still match current systems and data feeds.
  • Retire old decks and archived web pages.
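The sweep itself can be partially automated: scan published text for trigger phrases and flag any page whose claims lack a current approval record. The sketch below is illustrative; the page contents, trigger list, and approval register are stand-ins for your real sources, and a human still reviews every flag.

```python
# Hedged sketch of the quarterly drift sweep.
TRIGGERS = ["ai-driven", "machine learning", "autonomous", "predictive"]
APPROVED_PAGES = {"/about"}   # pages with current substantiation records

def quarterly_sweep(pages: dict[str, str]) -> list[str]:
    """Return pages containing AI claims without a current approval."""
    flagged = []
    for path, text in pages.items():
        lower = text.lower()
        if any(t in lower for t in TRIGGERS) and path not in APPROVED_PAGES:
            flagged.append(path)
    return flagged

pages = {
    "/about": "Our machine learning engine supports research.",
    "/blog/launch": "We are now fully AI-driven.",   # stale, unapproved
}
print(quarterly_sweep(pages))  # -> ['/blog/launch']
```

Run the sweep on a calendar, log the results, and open a remediation ticket for every flagged page; the log itself becomes exam evidence.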

Step 7: Operationalize with a system of record (where Daydream fits)

Most failures happen because approvals and evidence live in email and chats. Daydream can serve as the system of record for:

  • Marketing material inventory and approval workflows
  • Claim substantiation attachments (engineering docs, test results, architecture notes)
  • Audit-ready evidence packages and control attestations

Keep it simple: start with the AI claims inventory + substantiation checklist, then expand to full marketing compliance workflow.

Required evidence and artifacts to retain

Retain artifacts in a way that you can produce by claim and by channel:

  • Marketing materials inventory (with versions and approval dates)
  • Completed AI claim substantiation forms for each AI/ML statement
  • Technology sign-off records (ticket approvals, review memos)
  • Documentation of AI system capabilities (architecture summaries, feature descriptions)
  • Data source documentation showing what feeds the models (and what does not) 6
  • Testing/performance documentation sufficient to support the claim being made (avoid performance claims you cannot evidence)
  • Written marketing policies and procedures addressing AI-related disclosures 5
  • Quarterly audit/sweep results, remediation tickets, and re-approval evidence

Common exam/audit questions and hangups

Expect questions framed as “show me” requests:

  • “List all places you mention AI/ML and who approved each statement.” 1
  • “Produce documentation substantiating this claim that AI drives X.” 5
  • “What client data do you say you use, and where is the proof it is actually used in the investment process?” 6
  • “Show your written policies and procedures designed to prevent inaccurate AI disclosures.” 5
  • “What changed since last year, and how do you prevent stale marketing?” (Inferred from multi-channel drift risk described in enforcement patterns; keep your answer grounded in your controls and inventory rather than citing a new source.)

Hangups that slow teams down:

  • Engineering says “it’s AI-ish” but can’t define the model, features, data sources, or decision role.
  • Marketing relies on third-party vendor brochures that overstate capabilities; you still own the claim.
  • Form ADV language is treated as separate from “marketing” and escapes review.

Frequent implementation mistakes (and how to avoid them)

  1. Overbroad AI labels (“AI-powered” everywhere).
    Fix: require scope statements tied to a defined system and defined step in the advisory workflow 5.

  2. Claiming specific data sources without technical proof.
    Fix: prohibit naming data categories (spending, social, “personal data”) unless you can show ingestion and use in the investment process 6.

  3. No documentation trail.
    Fix: treat substantiation as a pre-publication gate; keep the evidence with the claim record 5.

  4. Approvals without the right reviewers.
    Fix: require technology sign-off for AI claims; compliance alone cannot validate model reality.

  5. One-time cleanup with no monitoring.
    Fix: quarterly sweep and decommission old decks/pages.

Enforcement context and risk implications

The SEC’s first explicit AI washing cases against investment advisers resulted in civil penalties totaling $400,000 across two firms 2. The SEC also signaled continued scrutiny through its public statement warning advisers not to claim they use AI when they do not 7.

Risk to plan for:

  • Marketing Rule violations (false or misleading statements) 1
  • Compliance program deficiencies where policies/procedures are missing or not implemented for AI claims 5
  • Fast-moving reputational risk if “AI” is central to your brand promise

30/60/90-day execution plan

Day 0–30: Stop the bleeding and map the surface area

  • Appoint owners: Compliance (policy), Marketing (inventory), Technology (substantiation).
  • Publish an interim rule: “No new AI/ML marketing claims without technology + compliance approval.”
  • Inventory all AI-related statements across channels; freeze high-risk pages/decks until reviewed.
  • Build the substantiation template and start with the top 10 most visible claims.

Day 31–60: Implement the control and remediate claims

  • Update written marketing review procedures to include AI claim gates 5.
  • Remediate or remove unsupported claims; document changes and rationale.
  • Create a centralized repository for evidence (ticket system or Daydream) linked to each claim record.
  • Train Marketing, Sales, and Product on “what triggers substantiation.”

Day 61–90: Prove sustainability (monitoring + testing)

  • Run your first quarterly AI-claims sweep and document results.
  • Test the workflow: sample a set of published materials and confirm each AI claim has evidence and approvals.
  • Add third-party confirmation steps where AI capabilities come from external providers (contract scope + marketing rights).
  • Prepare an exam-ready packet: inventory, approvals, substantiation files, policy, and sweep logs.

Frequently Asked Questions

Does this apply if we only use AI internally (for drafting, research, or operations)?

The requirement is triggered by what you advertise. If you publicly claim AI/ML is part of the advisory service, you need substantiation and controls under the Marketing Rule’s prohibition on misleading statements 1.

Can we say “AI-powered” if we use a third-party tool that has AI features?

Only if you can substantiate what the tool actually does and how it is used in your advisory process. You still own the claim, and you need documentation and written procedures to prevent unsubstantiated disclosures 5.

Are claims about using client spending or social media data especially risky?

Yes. The Delphia matter focused on claims about using client data with AI/ML that were found false and misleading because the adviser did not use that data with AI/ML in its investment process 6.

What evidence is “enough” to substantiate an AI claim?

Enough means a reviewer can connect the exact marketing statement to documented system behavior: what model or algorithm is used, what data feeds it, and where it affects advice or portfolio decisions. Global Predictions shows the SEC will cite inability to substantiate and lack of documentation 5.

Do we need a special AI policy, or can we extend our Marketing Rule procedures?

Extending your marketing review procedures is usually the fastest path, as long as you add explicit AI claim gates, technology sign-off, and a substantiation file requirement. The SEC cited failures to adopt and implement written policies and procedures to prevent inaccurate AI-related disclosures 5.

How do we handle “aspirational” roadmap statements about AI?

Avoid publishing roadmap language that reads like a present-tense capability. If you communicate future plans, clearly label them as future-looking and confirm the overall message cannot be read as “we do this today,” because the Marketing Rule prohibits false or misleading statements 1.

Footnotes

  1. 17 CFR 275.206(4)-1, 2021

  2. IA-6573, 2024; IA-6574, 2024

  3. 17 CFR 275.206(4)-1, 2021; IA-6573, 2024; IA-6574, 2024

  4. 17 CFR 275.206(4)-1, 2021; IA-6574, 2024

  5. IA-6574, 2024

  6. IA-6573, 2024

  7. SEC Press Release 2024-36, 2024

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream