Time-period and benchmark consistency controls
Meet this requirement by enforcing a single, documented standard for (1) which benchmarks are used and (2) which performance periods are shown, so every comparison in marketing is like-for-like and not misleading. Operationalize it with a benchmark/period selection matrix, pre-approved templates, and pre-publication checks tied to recordkeeping. (17 CFR 275.206(4)-1)
Key takeaways:
- Standardize benchmark selection and lock it to strategy, mandate, and share class so teams cannot “shop” for favorable comparators. (17 CFR 275.206(4)-1)
- Require period alignment rules (same start/end dates, same frequency, same methodology) across portfolio and benchmark results in every artifact. (17 CFR 275.206(4)-1)
- Make the control exam-ready with repeatable review steps and preserved evidence under your books-and-records program. (17 CFR 275.204-2)
Time-period and benchmark inconsistency is one of the fastest ways to turn “accurate numbers” into a misleading message. The operational risk is rarely fraud; it’s marketing drift: different teams, different templates, different date ranges, and benchmarks chosen ad hoc. The SEC’s marketing rule framework expects your advertisements to avoid misleading statements and omissions, and inconsistent comparisons can mislead even where each component number is true in isolation. (17 CFR 275.206(4)-1)
For a CCO or GRC lead, the goal is straightforward: force like-for-like comparisons through design controls (standard benchmark mapping and standard periods) and detection controls (pre-publication review and exception handling). If you only rely on human judgment during review, you will miss edge cases such as partial-period composites, strategy transitions, benchmark ticker changes, or “since inception” figures paired with a benchmark that started later.
This page gives requirement-level implementation guidance you can apply quickly: who must comply, the exact operational steps, what evidence to retain, common exam questions, and the failure modes that trigger remediation. It also maps the control to books-and-records retention so you can prove what you ran, what you approved, and why. (17 CFR 275.204-2)
Requirement: time-period and benchmark consistency controls (SEC marketing context)
Plain-English interpretation: If you compare performance (or performance-related characteristics) to a benchmark in marketing, you need consistent, like-for-like periods and a consistent benchmark selection approach so the comparison does not mislead. “Consistent” means consistent across the portfolio and the benchmark within the same presentation, and consistent across materials for the same strategy unless you document and approve an exception. (17 CFR 275.206(4)-1)
Who it applies to
Entities
- Registered Investment Advisers creating, approving, or distributing advertisements and marketing materials that include performance and benchmark comparisons. (17 CFR 275.206(4)-1)
Operational context
- Factsheets, pitchbooks, RFP/RFI responses, client letters with performance, website performance pages, social posts with performance snippets, model marketplace listings, due diligence questionnaires, and consultant databases.
- Any workflow where a third party (marketing agency, placement agent, IR consultant, distributor platform) can publish your performance content. Treat these parties as part of your advertising supply chain and subject them to the same benchmark/period rules through contract language and review gates. (17 CFR 275.206(4)-1)
Regulatory text
Regulatory excerpt (provided): “Apply consistent periods and benchmarks to prevent misleading comparisons.” (17 CFR 275.206(4)-1)
What the operator must do: Build and run controls that prevent teams from (a) selecting different time windows for the portfolio versus the benchmark, or (b) swapping benchmarks across materials in a way that changes the implication of outperformance/underperformance. Your control should be preventative (templates and locked selections) and detective (review checklists and evidence). (17 CFR 275.206(4)-1)
What you actually need to do (step-by-step)
Step 1: Establish a benchmark and period standard (write it so reviewers can test it)
Create a short “Benchmark & Period Standard” that answers four questions:
- Which benchmark is permitted for each strategy?
- Maintain a strategy-to-benchmark mapping table (primary benchmark, any allowed secondary benchmarks, and the rationale).
- Include rules for share class, currency, hedged/unhedged, net/gross variants, and fee treatment alignment. (17 CFR 275.206(4)-1)
- Which periods must be shown when performance is shown?
- Define your required period set (for example: standardized trailing periods and since inception) and specify that portfolio and benchmark must use the same start/end dates for each displayed period. (17 CFR 275.206(4)-1)
- How do you handle partial periods and inception nuances?
- Document rules for “since inception” when the benchmark inception differs, when the strategy changed, or when the account/composite inception differs from the product launch. Require an explicit label and an approved exception when periods cannot align perfectly. (17 CFR 275.206(4)-1)
- What triggers an exception? Who approves it?
- Define exception categories (benchmark change, strategy change, data limitation, client-mandated benchmark) and require compliance approval plus documented rationale. (17 CFR 275.206(4)-1)
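If you keep the standard in a workflow tool or repository, it can double as machine-testable data. A minimal sketch in Python; the strategy key, benchmark names, and field names here are placeholders, not recommendations:

```python
# Hypothetical encoding of the Benchmark & Period Standard as data.
# Strategy and benchmark names are illustrative placeholders.
STANDARD = {
    "us-large-cap-core": {
        "primary_benchmark": "EXAMPLE_LARGE_CAP_INDEX",
        "secondary_benchmarks": ["EXAMPLE_BLEND_60_40"],
        # "SI" = since inception; required display periods per the standard
        "required_periods": ["1Y", "3Y", "5Y", "SI"],
    },
}

# Exception categories from the standard; anything else needs a new category
# approved by compliance before it can be requested.
EXCEPTION_CATEGORIES = {
    "benchmark_change",
    "strategy_change",
    "data_limitation",
    "client_mandated_benchmark",
}

def permitted_benchmarks(strategy: str) -> set:
    """Benchmarks permitted for a strategy; raises KeyError when the
    strategy is unmapped -- i.e., 'no matrix, no publish'."""
    entry = STANDARD[strategy]
    return {entry["primary_benchmark"], *entry["secondary_benchmarks"]}
```

Reviewers and automated checks can then test a draft against the same mapping instead of relying on memory.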
Step 2: Operationalize with a selection matrix and locked templates
Turn the standard into tools that reduce reviewer guesswork:
- Benchmark/Period Selection Matrix (control artifact): a table that lists each product/strategy, permitted benchmark(s), required periods, and data source. This becomes the single source of truth for marketing, performance reporting, and RFP teams. (17 CFR 275.206(4)-1)
- Pre-approved templates: factsheet and pitchbook templates with:
  - Fixed period columns/rows
  - Pre-populated benchmark name/ticker (where relevant)
  - Locked disclosure blocks for benchmark descriptions and limitations
  This prevents silent template drift. (17 CFR 275.206(4)-1)
- Data sourcing rules: define the system(s) of record for portfolio performance and for benchmark returns and require that marketing pulls from those sources, not manual spreadsheets, except through an approved workflow. Retain what you used as records. (17 CFR 275.204-2)
Step 3: Add pre-publication checks that a reviewer can execute in minutes
Embed a Time-Period and Benchmark Consistency Checklist into your ad review process. Minimum checks:
- Benchmark consistency
- Is the benchmark permitted for this strategy per the matrix?
- Does the benchmark variant match the product presentation (currency, hedged status, net/gross context as applicable)?
- If a custom blend is used, is the methodology documented and approved? (17 CFR 275.206(4)-1)
- Period alignment
- For every displayed period, verify the portfolio and benchmark cover identical start/end dates.
- Confirm that any “since inception” calculation start date is consistent and clearly labeled for both portfolio and benchmark, or approved as an exception. (17 CFR 275.206(4)-1)
- Presentation consistency
- Confirm that the time period labeling is unambiguous (e.g., “as of” date and period definition).
- Confirm that the same periods are used across related materials for the same strategy (factsheet vs pitchbook vs website), or that differences are documented and approved. (17 CFR 275.206(4)-1)
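The period-alignment check above is mechanical enough to automate. A minimal sketch, assuming each displayed period can be resolved to explicit start/end dates for both the portfolio and the benchmark:

```python
from datetime import date

def check_period_alignment(portfolio_periods, benchmark_periods):
    """Return a list of alignment issues; an empty list means every
    displayed period covers identical start/end dates for portfolio
    and benchmark. Each argument maps a period label (e.g. '3Y', 'SI')
    to a (start_date, end_date) tuple."""
    issues = []
    for label, (p_start, p_end) in portfolio_periods.items():
        if label not in benchmark_periods:
            issues.append(f"{label}: no matching benchmark period")
            continue
        b_start, b_end = benchmark_periods[label]
        if (p_start, p_end) != (b_start, b_end):
            issues.append(
                f"{label}: portfolio {p_start}..{p_end} "
                f"vs benchmark {b_start}..{b_end}"
            )
    return issues
```

Any non-empty result routes to the exception workflow rather than silently shipping.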
Step 4: Build an exception workflow that is simple and auditable
Create an exception form (ticket, workflow tool, or PDF) that captures:
- Requestor, material name/version, distribution channel
- Benchmark requested and why it differs from the standard
- Period mismatch details and why alignment is not possible
- Proposed disclosure language
- Approver(s) and approval date
Then store it with the final approved advertisement package. (17 CFR 275.204-2)
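If your workflow tool supports structured records, the form fields above can be captured as data so approval status is testable rather than inferred from a PDF. A sketch with hypothetical field names mirroring the form:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BenchmarkException:
    """One record per approved deviation, stored with the ad package.
    Field names are illustrative, mirroring the exception form above."""
    requestor: str
    material: str            # material name/version
    channel: str             # distribution channel
    benchmark_requested: str
    rationale: str           # why it differs from the standard
    period_mismatch: str     # details and why alignment is not possible
    disclosure: str          # proposed disclosure language
    approvers: list = field(default_factory=list)
    approval_date: date = None

    def is_approved(self) -> bool:
        """Approved only when at least one approver and a date exist."""
        return bool(self.approvers) and self.approval_date is not None
```

A pre-publish gate can then refuse any material whose exception record fails `is_approved()`.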
Step 5: Ongoing monitoring (sample, test, correct)
Run periodic sampling of published materials (website, consultant databases, third-party distributor portals) to confirm that what got published matches what was approved. Keep evidence of the test and outcomes. (17 CFR 275.204-2)
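One lightweight way to test "what got published matches what was approved" during sampling is to fingerprint the approved artifact at approval time and compare it against what is live. A sketch; this is a byte-identical comparison only, so rendered formats (HTML pages, consultant database exports) may need normalization before hashing:

```python
import hashlib

def content_fingerprint(content: bytes) -> str:
    """SHA-256 fingerprint of an approved artifact, stored with the
    ad package at approval time."""
    return hashlib.sha256(content).hexdigest()

def published_matches_approved(published: bytes,
                               approved_fingerprint: str) -> bool:
    """True when the sampled published material is byte-identical to
    what compliance approved."""
    return content_fingerprint(published) == approved_fingerprint
```

A mismatch is not automatically a violation, but it is an exception to investigate and document.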
Where Daydream fits naturally: If you are running this control across multiple channels and third parties, Daydream can act as the system of record for benchmark/period standards, route approvals, and preserve the “ad package” evidence (inputs, reviewer checklist, approvals, final output) for exam-ready retrieval. (17 CFR 275.204-2)
Required evidence and artifacts to retain
Retain artifacts in a way your team can retrieve by strategy, date, and distribution channel:
- Benchmark/Period Selection Matrix with version history and approvals. (17 CFR 275.206(4)-1)
- Benchmark change log (what changed, why, approvals, effective date). (17 CFR 275.206(4)-1)
- Advertisement review package for each item:
- Drafts and final versions
- Data extracts used (portfolio and benchmark) or system reports
- Completed reviewer checklist
- Approvals and dates
- Distribution list/channel confirmation where available (17 CFR 275.204-2)
- Exception tickets and supporting documentation. (17 CFR 275.204-2)
- Periodic monitoring results (sampling plan, findings, remediation actions). (17 CFR 275.204-2)
Common exam/audit questions and hangups
Expect questions that test whether you have a standard and whether it actually governs day-to-day content:
- “Show me how you decide which benchmark applies to this strategy.” Bring the matrix and the approval history. (17 CFR 275.206(4)-1)
- “Prove the benchmark and portfolio periods match in this factsheet.” Show the checklist and the underlying data report. (17 CFR 275.204-2)
- “Why does the website show different periods than the pitchbook?” You need either harmonization or an approved exception with rationale. (17 CFR 275.206(4)-1)
- “How do you control third parties who post your performance?” Show contractual review rights, your approval workflow, and your monitoring evidence. (17 CFR 275.204-2)
Frequent implementation mistakes (and how to avoid them)
- Benchmark shopping by channel
- Fix: lock benchmark selection to the strategy in the matrix; require compliance approval for any deviation. (17 CFR 275.206(4)-1)
- “Since inception” confusion
- Fix: define inception date types (product, composite, account) and require explicit labeling plus alignment rules with the benchmark. (17 CFR 275.206(4)-1)
- Template drift and hidden edits
- Fix: centralize templates; require controlled versioning; block manual period edits where possible. (17 CFR 275.206(4)-1)
- No proof of what data was used
- Fix: store the data extract or system-generated report with the ad package and tie it to the approval. (17 CFR 275.204-2)
- Custom benchmarks without methodology
- Fix: require documented construction methodology, rebalancing rules, and a named data source before approval. (17 CFR 275.206(4)-1)
Enforcement context and risk implications
No specific public enforcement cases were provided in the source catalog for this requirement, so this guidance focuses on exam defensibility and the underlying regulatory expectation that ads must not be misleading. In practice, inconsistent periods and benchmarks create a clean narrative for an examiner: the firm presented a comparison that implied a conclusion (relative performance) without like-for-like inputs. Your risk is higher when the inconsistency appears repeatedly across channels or when third parties publish outdated versions that differ from approved materials. (17 CFR 275.206(4)-1)
Practical execution plan (30/60/90-day)
Use phases instead of fixed-day promises. Treat the timing as adjustable to your publication volume and system maturity.
First 30 days (Immediate stabilization)
- Inventory all in-market materials that include benchmark comparisons (factsheets, pitchbooks, web pages, RFP templates). (17 CFR 275.206(4)-1)
- Draft the Benchmark & Period Standard and publish an interim matrix for top strategies. (17 CFR 275.206(4)-1)
- Add the checklist to the marketing review workflow and require it for any new or updated material. (17 CFR 275.206(4)-1)
- Start retaining complete ad packages in a single repository aligned to your recordkeeping approach. (17 CFR 275.204-2)
Next 60 days (Control hardening)
- Expand the matrix to all strategies/products and finalize approval governance. (17 CFR 275.206(4)-1)
- Convert high-volume collateral into locked templates and retire legacy versions. (17 CFR 275.206(4)-1)
- Implement the exception workflow and train marketing, IR, and RFP teams on “no matrix, no publish.” (17 CFR 275.206(4)-1)
- Begin sampling published channels, including third-party portals, and document findings and remediation. (17 CFR 275.204-2)
Next 90 days (Operational maturity)
- Add QA automation where feasible (e.g., required fields, forced period sets, validation rules in your workflow tool). (17 CFR 275.206(4)-1)
- Run a thematic review of “since inception” and custom benchmark usage; clean up disclosures and align dates. (17 CFR 275.206(4)-1)
- Conduct an internal mock exam: pick a published piece and trace it from data source to approval to publication, then confirm records are complete and retrievable. (17 CFR 275.204-2)
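The QA automation called out above can start as a single pre-publish validation function that enforces "no matrix, no publish." A sketch with hypothetical field names; the standard passed in would be your Benchmark/Period Selection Matrix in data form:

```python
def validate_ad_package(pkg: dict, standard: dict) -> list:
    """Return blocking errors for a draft advertisement package;
    an empty list means the package passes. Field names are
    illustrative, not a required schema."""
    entry = standard.get(pkg.get("strategy"))
    if entry is None:
        return ["strategy not in matrix: no matrix, no publish"]

    errors = []
    allowed = {entry["primary_benchmark"], *entry["secondary_benchmarks"]}
    if pkg.get("benchmark") not in allowed and not pkg.get("approved_exception"):
        errors.append("benchmark not permitted and no approved exception on file")

    missing = set(entry["required_periods"]) - set(pkg.get("periods", []))
    if missing:
        errors.append(f"missing required periods: {sorted(missing)}")

    if not pkg.get("as_of_date"):
        errors.append("missing 'as of' date")
    return errors
```

Wired into the review workflow, this turns the reviewer checklist into a gate that fails closed instead of relying on memory under deadline pressure.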
Frequently Asked Questions
Do I need the exact same benchmark everywhere for a strategy?
You need a consistent, documented approach. If you allow multiple benchmarks (primary and secondary), define when each is permitted and require approval for deviations. (17 CFR 275.206(4)-1)
What if a client requires a different benchmark in an RFP?
Treat it as an exception. Document the client requirement, align periods, and retain the approved RFP response version with the rationale. (17 CFR 275.204-2)
Can we show “since inception” for the strategy but only show shorter history for the benchmark?
That mismatch can mislead unless you have a documented reason and clear labeling. Your default should be aligned start/end dates for both, with exceptions reviewed and approved. (17 CFR 275.206(4)-1)
How do we control benchmark changes over time without rewriting every old factsheet?
Maintain a benchmark change log with effective dates, and ensure materials clearly show “as of” dates and the benchmark applicable for that period. Keep the version history and approvals for what was actually distributed. (17 CFR 275.204-2)
Does this apply to third parties that post our performance (platforms, consultants, distributors)?
Yes operationally, because their postings can function as your advertisements. Contract for review rights, provide approved content, and test what is live against what you approved. Keep monitoring evidence. (17 CFR 275.204-2)
What evidence matters most in an exam?
Examiners typically want to see a standard (matrix and policy), proof you followed it (checklist and approvals), and records that support the numbers (data reports and retained versions). (17 CFR 275.204-2)
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream