Hypothetical performance governance

The hypothetical performance governance requirement means you may only present hypothetical performance when you can prove it’s appropriate for the intended audience and supported by a documented, repeatable methodology with clear assumptions and limitations. Operationalize it by gating who can receive it, standardizing how it’s built, and retaining review-and-approval evidence for every use.

Key takeaways:

  • Treat hypothetical performance as a controlled output with audience eligibility checks and method sign-offs. (17 CFR 275.206(4)-1)
  • Standardize calculation inputs, assumptions, and disclosure language, then lock version control and approvals. (17 CFR 275.206(4)-1)
  • Keep books-and-records evidence that links each hypothetical figure to source data, methodology, and pre-use review. (17 CFR 275.204-2)

Compliance teams usually get burned on hypothetical performance for one reason: it looks like a number, but regulators treat it like a high-risk claim. Under the SEC’s marketing rule framework, hypothetical performance is not “banned,” but it must be governed. Your job is to show that the firm (1) limited hypothetical performance to an audience that can understand and use it, and (2) built it using a methodology that is consistently applied, reviewable, and not misleading. (17 CFR 275.206(4)-1)

This requirement page is written for a CCO, compliance officer, or GRC lead who needs to stand up a workable control quickly across marketing, IR/BD, product, and portfolio teams. The emphasis is operational: intake, eligibility, calculation standards, disclosures, approvals, and retention. You will also find audit-ready artifacts, common examiner questions, and implementation mistakes that show up in real life.

If you already have a marketing review process, this work typically becomes a “hypothetical performance lane” within it: a stricter checklist, extra documentation, and tighter distribution controls. The goal is simple: any hypothetical performance output should be defensible on demand, with a clear paper trail. (17 CFR 275.204-2)

Requirement: hypothetical performance governance requirement (plain-English)

You must control the use of hypothetical performance with safeguards that address (a) the intended audience and (b) the methodology behind the figures. (17 CFR 275.206(4)-1)

In practice, that means:

  • Audience safeguards: hypothetical performance only goes to recipients who have the sophistication, context, and ability to evaluate it (and the limitations are clearly explained).
  • Methodology safeguards: the firm can explain how it generated the numbers, what assumptions were used, what data inputs were used, what was excluded, and why the presentation is not misleading.

Separately, you must retain the records that allow you to recreate what was shown, when, to whom, and on what basis. (17 CFR 275.204-2)

Regulatory text

Provided excerpt: “Control use of hypothetical performance with audience and methodology safeguards.” (17 CFR 275.206(4)-1)

Operator interpretation (what you must do):

  1. Decide when hypothetical performance is permitted (and when it is prohibited) based on recipient type and distribution channel. (17 CFR 275.206(4)-1)
  2. Define a standard methodology for each hypothetical performance “type” you produce (model, backtest, targeted, or other hypothetical constructs), including assumptions, data sources, calculation steps, and limitations. (17 CFR 275.206(4)-1)
  3. Run pre-use review and approval that verifies eligibility + methodology + disclosures before the content is distributed. (17 CFR 275.206(4)-1)
  4. Maintain books and records sufficient to substantiate the presentation and evidence the review. (17 CFR 275.204-2)

Who it applies to

Entities: Registered Investment Advisers and their supervised persons involved in marketing, performance reporting, product marketing, and investor communications. (17 CFR 275.206(4)-1)

Operational contexts where it shows up:

  • Pitch decks, DDQs, RFP responses, fact sheets, websites, and email campaigns that include simulated/model/backtested results.
  • One-off “custom” scenarios built by product or portfolio teams for a prospect (for example, a tailored model allocation or sleeve simulation).
  • Third parties creating or distributing your marketing content (placement agents, marketers, consultants, model marketplace platforms). You still own the governance and evidence trail.

What you actually need to do (step-by-step)

Use this as a build sheet for a control you can roll out across teams.

Step 1: Define what counts as “hypothetical performance” in your firm

Create a one-page internal definition and examples list that the business can apply consistently. Include common categories you see internally (model, simulated, backtested, pro forma, targeted). Tie it directly to the requirement to control hypothetical performance with safeguards. (17 CFR 275.206(4)-1)

Output: Hypothetical Performance Standard (definition + examples + owner).

Step 2: Put an eligibility gate in front of distribution

Build an audience eligibility matrix that tells marketing and sales who can receive hypothetical performance and under what conditions.

A practical matrix usually includes:

  • Recipient type: retail, institutional, consultant, fund-of-funds, sophisticated individual, etc.
  • Channel: public website, mass email, one-to-one email, data room, RFP portal.
  • Allow/deny: yes/no plus required disclosures and approvals.
  • Required context: what must accompany the numbers (methodology summary, assumptions, limitations, and any needed supporting information). (17 CFR 275.206(4)-1)

Control operation: content cannot be sent until the sender attests (or Compliance validates) that the recipient meets eligibility criteria and the channel is permitted.
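Teams that automate this gate in workflow tooling often reduce the matrix to a lookup plus a pre-send check. The sketch below is illustrative only: the recipient types, channel names, and required-context labels are assumptions for this example, not regulatory categories.

```python
# Hypothetical audience eligibility gate. Recipient types, channels,
# and rules below are illustrative assumptions, not prescribed values.
ELIGIBILITY_MATRIX = {
    # (recipient_type, channel): (allowed, required context items)
    ("institutional", "one_to_one_email"): (True, ["methodology_summary", "assumptions", "limitations"]),
    ("institutional", "data_room"): (True, ["methodology_summary", "assumptions", "limitations"]),
    ("retail", "public_website"): (False, []),
    ("retail", "mass_email"): (False, []),
}

def gate_distribution(recipient_type, channel, attached_context):
    """Return (ok, reason). Content may be sent only if the recipient/channel
    pair is allowed and every required contextual item is attached."""
    rule = ELIGIBILITY_MATRIX.get((recipient_type, channel))
    if rule is None:
        return False, "no rule for this recipient/channel: route to Compliance"
    allowed, required = rule
    if not allowed:
        return False, "distribution not permitted for this audience/channel"
    missing = [item for item in required if item not in attached_context]
    if missing:
        return False, f"missing required context: {missing}"
    return True, "eligible"
```

Note the default: an unlisted recipient/channel combination fails closed and routes to Compliance rather than sending.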

Step 3: Standardize the methodology and lock inputs

For each hypothetical performance “type,” maintain a methodology pack that answers:

  • What is the objective of the hypothetical output?
  • What data is used (source systems, benchmarks, time period selection logic)?
  • What assumptions are used (fees, rebalancing, transaction costs, constraints)?
  • What calculations are performed (return formula, compounding, benchmark comparison)?
  • What is excluded (securities, accounts, periods) and why?
  • What limitations apply and how are they disclosed? (17 CFR 275.206(4)-1)

Implementation detail that matters: assign an accountable owner (often performance, product, or quant) and require version control. Examiners will ask whether the methodology is consistently applied or rebuilt ad hoc per prospect.
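One lightweight way to make versioning enforceable is to fingerprint the methodology pack itself, so any edit to data sources, assumptions, or exclusions produces a new version identifier that the review workflow can detect. The field names below mirror the questions above but are assumptions for illustration.

```python
import hashlib
import json

def methodology_version(pack: dict) -> str:
    """Derive a deterministic version fingerprint from the pack contents,
    so any change to inputs, assumptions, or exclusions yields a new
    version that must be cited (and re-approved) before use."""
    canonical = json.dumps(pack, sort_keys=True)  # stable serialization
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# Illustrative methodology pack; field names are assumptions.
pack_v1 = {
    "objective": "model allocation backtest",
    "data_sources": ["pricing_db", "benchmark_feed"],
    "assumptions": {"fees_bps": 75, "rebalancing": "quarterly"},
    "exclusions": ["accounts open < 1 year"],
    "limitations": ["no transaction costs modeled"],
}
```

Changing a single assumption (for example, the fee rate) changes the fingerprint, which is what flags the deck for re-review instead of silent reuse.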

Step 4: Build a repeatable review workflow (pre-use)

Hypothetical performance needs a stricter review lane than ordinary marketing claims. Require approvals from:

  • Content owner (marketing/product) for accuracy of narrative and positioning
  • Methodology owner (performance/quant) for calculation integrity and assumptions
  • Compliance for audience eligibility, required disclosures, and overall non-misleading presentation controls (17 CFR 275.206(4)-1)

Use a checklist that forces reviewers to confirm:

  • Recipient eligibility is documented.
  • Methodology version is cited and attached.
  • Assumptions and limitations are prominent and consistent.
  • The firm can reproduce the figure from retained workpapers. (17 CFR 275.204-2)
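If the checklist lives in workflow tooling, the four confirmations above can be enforced as a hard gate: approval is blocked while any item is unmet. The item names below are illustrative labels for the checklist items listed above.

```python
# Hypothetical pre-use checklist; item names mirror the list above.
REQUIRED_ITEMS = [
    "recipient_eligibility_documented",
    "methodology_version_attached",
    "assumptions_and_limitations_consistent",
    "reproducible_from_workpapers",
]

def unmet_items(checklist: dict) -> list:
    """Return the checklist items not yet confirmed.
    Approval is blocked unless this list is empty."""
    return [item for item in REQUIRED_ITEMS if not checklist.get(item, False)]
```

A missing key counts as unmet, so a reviewer cannot pass the gate by leaving an item blank.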

Step 5: Control changes, reuse, and repackaging

Most breakdowns happen after initial approval, when someone:

  • Copies numbers into a new deck,
  • Updates a chart without updating assumptions,
  • Sends a “slightly modified” version to a new audience.

Set rules:

  • Any material change triggers re-approval.
  • Reuse is permitted only if the audience eligibility and distribution channel match the approved use case.
  • Every outbound instance is logged (or the distribution is constrained to controlled systems). (17 CFR 275.204-2)
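The reuse rules above can be sketched as a single send-or-escalate check: reuse proceeds only when the recipient type, channel, and content all match the approved use case, and every permitted send is appended to a distribution log. The record fields are illustrative assumptions.

```python
from datetime import datetime, timezone

# Illustrative distribution log; in practice this lives in a controlled system.
DISTRIBUTION_LOG = []

def send_or_escalate(approved_use, proposed_use, content_hash):
    """Log permitted reuse; escalate any audience, channel, or content
    change back to the approval workflow."""
    if (approved_use["recipient_type"] != proposed_use["recipient_type"]
            or approved_use["channel"] != proposed_use["channel"]
            or approved_use["content_hash"] != content_hash):
        return "re-approval required"
    DISTRIBUTION_LOG.append({
        "sent_at": datetime.now(timezone.utc).isoformat(),
        **proposed_use,
        "content_hash": content_hash,
    })
    return "sent and logged"
```

Because the content hash is part of the match, a "slightly modified" deck fails the check even when the audience and channel are unchanged.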

Step 6: Retain records that let you defend the claim fast

Your retention package should make it easy to answer: “Show me exactly what was sent, to whom, and how you built it.” (17 CFR 275.204-2)

If you are scaling this, Daydream-style workflow tooling becomes useful for two reasons: (1) structured intake fields that force eligibility/methodology documentation up front, and (2) immutable approval logs with attachments for workpapers and final materials.

Required evidence and artifacts to retain (audit-ready list)

Retain these artifacts in a searchable repository, linked to each specific hypothetical performance use case. (17 CFR 275.204-2)

  • Audience eligibility criteria: written matrix plus rationale for permitted audiences/channels (owner: Compliance)
  • Distribution evidence: recipient classification, channel used, and a copy of what was delivered (owner: Sales/Marketing Ops)
  • Methodology document: versioned methodology, data sources, assumptions, limitations (owner: Performance/Quant)
  • Calculation workpapers: inputs, code/spreadsheets, output files, QA checks (owner: Performance/Quant)
  • Disclosures library: standard language for assumptions/limitations, tailored as needed (owner: Compliance)
  • Review checklist: completed checklist showing all required sign-offs (owner: Compliance)
  • Final approved content: PDF/deck/web snapshot with approval timestamp (owner: Marketing)
  • Change log: what changed and why, with re-approval where required (owner: Marketing/Compliance)

Common exam/audit questions and hangups

Expect questions like:

  • “How do you define hypothetical performance, and who decides?” (17 CFR 275.206(4)-1)
  • “Show me your criteria for determining whether hypothetical performance is appropriate for a particular audience.” (17 CFR 275.206(4)-1)
  • “Reproduce this hypothetical track record. What inputs and assumptions were used?” (17 CFR 275.204-2)
  • “How do you prevent a salesperson from forwarding an institutional deck to a non-eligible contact?” (17 CFR 275.206(4)-1)
  • “Where is the evidence of pre-use compliance review for this specific piece?” (17 CFR 275.204-2)

Hangups that slow teams down:

  • No consistent recipient classification method.
  • Methodology exists “in someone’s head” or in unversioned spreadsheets.
  • Approvals happen in email with no durable audit trail.

Frequent implementation mistakes and how to avoid them

  1. Mistake: treating hypothetical performance like ordinary performance.
    Fix: create a separate review lane with extra required attachments (methodology + eligibility proof). (17 CFR 275.206(4)-1)

  2. Mistake: allowing public distribution “because it’s on the website already.”
    Fix: require explicit channel approval in the eligibility matrix; do not assume website posting is acceptable for hypothetical outputs. (17 CFR 275.206(4)-1)

  3. Mistake: inconsistent assumptions across decks (fees, reinvestment, constraints).
    Fix: lock a methodology version and require a variance memo when deviating. Retain both. (17 CFR 275.204-2)

  4. Mistake: approvals without reproducibility.
    Fix: make reproducibility a checklist item: reviewers must confirm workpapers exist and are attached. (17 CFR 275.204-2)

  5. Mistake: third parties generate or edit hypothetical results without your controls.
    Fix: contractually require them to follow your methodology/disclosure standards and route materials through your approval workflow. Keep the same evidence set. (17 CFR 275.204-2)

Enforcement context and risk implications (practical)

No public enforcement cases were provided in the source catalog for this page, so don’t build your program around a single headline. Build it around examiner expectations: hypothetical performance is easy to misunderstand, easy to cherry-pick, and hard to reproduce without strong governance. Your risk is not only a misleading-marketing allegation; it’s also a records deficiency if you cannot show how you built the numbers and who approved them. (17 CFR 275.204-2)

Practical execution plan (30/60/90)

Use this as a field plan. Adjust sequencing based on your marketing volume and where hypothetical performance appears.

First 30 days (stabilize and stop uncontrolled use)

  • Inventory where hypothetical performance appears (decks, DDQs, RFPs, web, model marketplaces).
  • Stand up a temporary rule: no new hypothetical performance goes out without Compliance review and a named methodology owner sign-off. (17 CFR 275.206(4)-1)
  • Draft the audience eligibility matrix and publish “allowed channels” guidance.

By 60 days (standardize and document)

  • Publish the Hypothetical Performance Standard (definition + examples + routing).
  • Create methodology templates and require version control for each hypothetical type.
  • Implement a checklist-driven approval workflow with required attachments and a distribution log. (17 CFR 275.204-2)

By 90 days (operate, test, and evidence)

  • Run testing: pick recent hypothetical outputs and confirm you can reproduce them from retained workpapers and show approvals.
  • Train marketing, product, and sales on eligibility gating and “no forward” rules.
  • Fold third parties into the workflow through contract addenda and submission processes, then audit one third-party-created item for compliance. (17 CFR 275.204-2)
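The 90-day reproducibility testing step can be automated as a simple check: recompute a figure from the retained workpaper inputs and compare it to the published value within a tolerance. This is a minimal sketch assuming geometrically linked period returns; substitute your firm's actual return formula.

```python
# Hypothetical reproducibility test for periodic control testing.
def linked_return(period_returns):
    """Geometrically link period returns into a cumulative return."""
    cumulative = 1.0
    for r in period_returns:
        cumulative *= (1.0 + r)
    return cumulative - 1.0

def reproduces(workpaper_inputs, published_figure, tolerance=1e-6):
    """Pass only if retained inputs reproduce the published figure."""
    return abs(linked_return(workpaper_inputs) - published_figure) <= tolerance
```

A failed check is itself useful evidence: it tells you before an examiner does that the workpapers on file no longer support the figure in circulation.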

Frequently Asked Questions

What counts as “hypothetical performance” for governance purposes?

Treat any performance figure that is simulated, model-based, backtested, targeted, or otherwise not purely actual client/account performance as hypothetical for routing and controls. Document your internal definition and examples so teams classify consistently. (17 CFR 275.206(4)-1)

Can we show hypothetical performance to any institutional prospect?

Don’t assume “institutional” automatically qualifies. Put written eligibility criteria in place and document why the specific recipient and channel are appropriate for hypothetical performance. (17 CFR 275.206(4)-1)

What documentation is non-negotiable for each hypothetical performance output?

Keep the final approved content, the methodology version, the inputs/workpapers needed to reproduce the output, and evidence of eligibility and approvals. If you can’t recreate it quickly, your records position is weak. (17 CFR 275.204-2)

How do we handle a one-off custom scenario requested in an RFP?

Route it through the hypothetical performance lane: eligibility check, methodology alignment (or documented variance), required limitations, and recorded approvals before submission. Retain the full calculation package with the RFP record. (17 CFR 275.206(4)-1)

Do we need a different process if a third party helped build the backtest?

Yes, you need the same governance outcome: documented assumptions, reproducibility, and approvals. Require the third party to provide inputs/workpapers and submit to your review workflow; retain those records. (17 CFR 275.204-2)

What’s the fastest way to make this exam-ready without rebuilding everything?

Start by gating distribution and forcing attachments (methodology + workpapers) in your existing marketing review queue. Then standardize templates and move to structured intake so you get consistent evidence every time. (17 CFR 275.204-2)

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream