PT-7(2): First Amendment Information
PT-7(2) requires you to block processing of data that describes how an individual exercises First Amendment rights (speech, religion, press, assembly, petition) unless you have a valid exception: explicit statutory authorization, the individual’s authorization, or a scoped, authorized law enforcement activity. Operationalize it by classifying the data, preventing collection/derivation, and enforcing documented approvals with technical controls and audit-ready evidence. 1
Key takeaways:
- Treat “First Amendment information” as a restricted data class with default deny processing. 1
- Allow processing only under documented, verifiable exceptions tied to statute, consent, or authorized law enforcement scope. 1
- Audits focus on proof: data maps, control gates, access controls, and exception records that show the prohibition works in practice. 2
The PT-7(2) First Amendment information requirement is a privacy guardrail designed to prevent systems from becoming dossiers about people’s protected civic and religious activity. In practice, this control shows up in surprising places: analytics tags that infer political affiliation, HR tools that track union-related activity, customer support notes about activism, social media monitoring feeds, or free-text fields that capture “why” someone attended an event.
Your job as a Compliance Officer, CCO, or GRC lead is to translate a high-level prohibition into operational reality: stop collection where possible, block derived attributes when not allowed, and require documented authorization when an exception applies. Assessors will not accept “we don’t do that” without artifacts that show your data flows, processing purposes, and technical enforcement points.
This page gives you requirement-level implementation guidance you can hand to engineering, privacy, product, and internal audit. It focuses on practical control design, evidence, and common audit traps, so you can implement quickly and defend the decision-making later. 2
Regulatory text
Text (verbatim): “Prohibit the processing of information describing how any individual exercises rights guaranteed by the First Amendment unless expressly authorized by statute or by the individual or unless pertinent to and within the scope of an authorized law enforcement activity.” 1
What an operator must do:
- Implement a default prohibition on processing “First Amendment information.” Processing includes collecting, generating/inferencing, storing, searching, analyzing, sharing, or using it for decisions. 1
- Permit processing only when one of three exceptions is met and documented:
  - Expressly authorized by statute, or
  - Authorized by the individual, or
  - Pertinent to and within scope of an authorized law enforcement activity. 1
Plain-English interpretation
If your system handles federal data (or you operate a federal information system), you must not process data that reveals or describes a person’s protected expression or affiliation, unless you can point to a valid legal basis or explicit permission, or you are supporting a properly authorized law enforcement purpose within its defined boundaries. 1
What counts as “First Amendment information” (practical examples)
Treat the following as “in scope” until counsel says otherwise:
- Political speech indicators: party affiliation, campaign volunteering, protest participation, donated-to-candidate notes, “activist” tags in CRM.
- Religious exercise: stated religion, worship attendance, prayer groups, religious accommodations in a way that describes practice.
- Assembly/association: membership in advocacy groups, unions, clubs when tied to expressive activity.
- Petition/press activity: signing petitions, publishing/reading certain media when tracked at an individual level.
Also include derived/inferred attributes, not just explicit fields. A “political_interest_score” produced by ML is still “information describing how an individual exercises rights” if it reflects protected activity.
Who it applies to
Entity scope
- Federal information systems implementing NIST SP 800-53 Rev. 5 controls. 2
- Contractor systems handling federal data where NIST SP 800-53 is flowed down contractually or used as the security/privacy baseline. 2
Operational contexts where it shows up
- Data platforms (logs, data lakes, customer analytics, CDPs).
- Identity and access systems (attributes, group membership, profile enrichment).
- Case management and investigations (tickets, notes, evidence attachments).
- Third-party data ingestion (open-source intelligence feeds, social monitoring vendors, enrichment APIs).
- AI/ML pipelines (feature stores, embeddings, topic classification, sentiment analysis).
What you actually need to do (step-by-step)
1) Assign ownership and define the prohibition in policy
- Name a control owner (privacy lead, GRC lead, or system privacy officer) and a technical co-owner (data platform or security engineering).
- Publish a short “PT-7(2) processing prohibition” standard: what is banned, what is allowed, and who approves exceptions. 1
Practical tip: Write the standard in “system terms” (fields, pipelines, tools), not only legal concepts.
2) Identify where First Amendment information could enter or be derived
Create a targeted data discovery exercise:
- Review data intake sources: forms, integrations, imports, scraped data, call transcripts, chat logs.
- Review free-text capture points: notes fields, ticket comments, HR case notes.
- Review derived attributes: segmentation labels, ML classifications, risk scores, interest profiles.
Output: a list of systems + data elements + processing purposes that could describe First Amendment activity.
3) Classify and tag the data as restricted
Add a data classification rule: “First Amendment information = restricted; default deny processing.” 1
Implement in practice:
- Data catalog tags (where you have them).
- Schema annotations and field-level labels for structured stores.
- Labeling guidance for unstructured repositories (eDiscovery, ticketing exports).
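The classification rule above can be sketched as a simple field-level registry. This is a minimal illustration assuming an in-code catalog; real deployments would use their data catalog's tagging API, and every field and system name below is a hypothetical example.

```python
# Minimal sketch of field-level classification, assuming an in-code registry.
# All field paths are hypothetical examples, not a real schema.

RESTRICTED_CLASS = "first_amendment_information"

# field path -> classification label; anything tagged RESTRICTED_CLASS is default-deny
FIELD_CLASSIFICATIONS = {
    "crm.contact.activist_tag": RESTRICTED_CLASS,
    "hr.case.union_activity_notes": RESTRICTED_CLASS,
    "ml.features.political_interest_score": RESTRICTED_CLASS,  # derived data counts too
    "crm.contact.email": "pii",
}

def is_restricted(field: str) -> bool:
    """Return True if a field carries the default-deny classification."""
    return FIELD_CLASSIFICATIONS.get(field) == RESTRICTED_CLASS
```

The key design point is that derived attributes (like the ML score above) get the same tag as explicit fields, so downstream controls treat them identically.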
4) Implement “default deny” controls at the collection and processing layers
Aim to prevent the data from existing, and then prevent it from spreading.
Collection controls
- Remove fields from forms and templates.
- Add UI microcopy: “Do not enter political/religious/protest activity details unless required and approved.”
- Configure DLP/text scanning rules for common terms (political party names, “petition,” “union organizing,” religious institutions) to route for review rather than store automatically.
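A routing rule like the DLP guidance above can be sketched as a term scanner that holds flagged text for human review instead of storing it. The term list here is a hypothetical starting point; tune it with counsel and privacy review before relying on it.

```python
import re

# Hypothetical review-trigger terms; refine with legal/privacy input.
REVIEW_TERMS = [
    r"\bunion organizing\b",
    r"\bpetition\b",
    r"\bprotest(s|ed|ing)?\b",
    r"\bworship\b",
]
_PATTERN = re.compile("|".join(REVIEW_TERMS), re.IGNORECASE)

def route_for_review(text: str) -> bool:
    """Return True if free text should be held for human review
    rather than stored automatically."""
    return bool(_PATTERN.search(text))
```

Note the design choice: matches are routed for review, not silently deleted, so reviewers can distinguish legitimate business content from improperly captured protected-activity detail.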
Processing controls
- Block joins, enrichment, and segmentation rules that create protected-activity profiles.
- Restrict search and analytics on any tagged data.
- Prevent use in automated decisioning (eligibility, risk scoring, employment decisions) unless an exception applies.
5) Build an exception workflow that matches the three allowed bases
You need a documented gate that answers: “Which exception applies, and where is the proof?”
Minimum required fields in the exception record:
- Data elements involved and where they reside.
- Purpose and processing activities (collect/store/analyze/share).
- Exception type: statute, individual authorization, or authorized law enforcement scope. 1
- Approvers (privacy + legal + system owner, as appropriate).
- Time bounds, access bounds, and downstream sharing limits.
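The minimum exception record above can be sketched as a typed structure that forces every record onto one of the three allowed bases. The field names are a hypothetical shape, not a mandated schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ExceptionBasis(Enum):
    """The only three bases PT-7(2) permits."""
    STATUTE = "expressly authorized by statute"
    INDIVIDUAL = "authorized by the individual"
    LAW_ENFORCEMENT = "pertinent to and within scope of an authorized law enforcement activity"

@dataclass
class ExceptionRecord:
    data_elements: list[str]     # what data, e.g. ["religion"]
    systems: list[str]           # where it resides
    purpose: str                 # processing activities and why
    basis: ExceptionBasis        # which of the three exceptions applies
    approvers: list[str]         # privacy + legal + system owner, as appropriate
    expires: date                # time bound
    sharing_limits: str = "no downstream sharing"

    def is_expired(self, today: date) -> bool:
        """Expired exceptions must trigger re-approval or remediation."""
        return today > self.expires
```

Using an enum for the basis means an exception cannot be filed without naming a valid legal ground, which is exactly what an assessor will ask to see.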
Decision matrix (use in intake triage)
| Scenario | Allowed? | What you must have |
|---|---|---|
| Product team wants to segment users by political interest | No, by default | Exception approved under one of the three bases, otherwise stop |
| Customer support note includes “attends protests” | Usually no | Edit/redact guidance + controls to prevent capture; exception only with valid basis |
| Law enforcement support case with documented authorization | Potentially | Written authorization details + scope limitation + access restrictions |
6) Limit access and sharing (need-to-know)
- Enforce least privilege for any repository that might contain restricted data.
- Add explicit sharing rules for third parties: prohibit collection/processing of First Amendment information unless your contract and their processing instructions meet the exception basis and scope. 2
Third-party due diligence angle: update security/privacy addenda to require the third party to (a) not process this data unless instructed under a documented exception, and (b) provide deletion/export support for remediation.
7) Monitor and prove the control works
- Run periodic searches or DLP reports for prohibited terms/fields.
- Sample tickets/case notes for improper capture.
- Review new integrations and new ML features for derived protected-activity attributes.
- Track exceptions: counts, age, owners, expirations, and closure evidence.
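The exception-tracking bullet above (counts, age, owners, expirations) can be sketched as a small register report. The register shape (`owner`, `expires` keys) is a hypothetical minimum, not a required format.

```python
from datetime import date

def aging_report(register: list[dict], today: date) -> dict:
    """Summarize an exception register: total count, expired count,
    and counts by owner. Register entries are assumed to carry at
    least 'owner' and 'expires' (a date) fields."""
    expired = [r for r in register if r["expires"] < today]
    by_owner: dict[str, int] = {}
    for r in register:
        by_owner[r["owner"]] = by_owner.get(r["owner"], 0) + 1
    return {"total": len(register), "expired": len(expired), "by_owner": by_owner}
```

Running this on a schedule and filing the output is exactly the kind of recurring artifact that proves the control operates, not just that it exists on paper.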
Where Daydream fits: use Daydream to map PT-7(2) to an owner, a procedure, and recurring evidence artifacts so audits don’t stall on “show me how this is enforced.” 1
Required evidence and artifacts to retain
Keep artifacts that prove both design and operation:
- Policy/standard: PT-7(2) prohibition and exception criteria. 1
- Data map entries: systems, data elements, and processing purposes where risk exists.
- Data classification schema: “First Amendment information” tag/label definition.
- Technical evidence:
- Form/template changes
- DLP rules or content scanning configurations
- Access control group listings for restricted repositories
- Logs showing blocked/routed events (where available)
- Exception records with approvals and scope limits.
- Third-party contract clauses / DPAs / processing instructions reflecting the prohibition and exceptions.
- Monitoring outputs: review logs, sample results, remediation tickets, and closure notes.
Common exam/audit questions and hangups
Expect these:
- “Show your definition of First Amendment information and how staff are trained to recognize it.” 2
- “Where in your systems could this data exist today? Prove it.”
- “How do you prevent derived attributes (ML tags, segmentation) from creating First Amendment information?”
- “Walk me through an exception from request to approval to expiration.” 1
- “How do you ensure third parties don’t ingest or generate this data?”
Hangup: teams often present a policy but cannot show enforcement points in the SDLC, data platform, and user interfaces.
Frequent implementation mistakes (and how to avoid them)
- Treating it as “just PII.” Fix: define it as a distinct restricted class tied to protected activity, including inferred data. 1
- Relying on consent banners for broad authorization. Fix: require explicit, documented authorization that is specific enough to justify the processing, and link it to systems and purposes.
- Ignoring unstructured text. Fix: focus on ticket notes, call transcripts, chat logs, and attachments. Add controls that prevent entry and route for review.
- No scope control for law enforcement work. Fix: document the authorization, define boundaries (data elements, users, time), and enforce access controls accordingly. 1
- No evidence cadence. Fix: define recurring artifacts (monitoring report, exception register export, access review output) and store them centrally for audits.
Enforcement context and risk implications
No public enforcement cases were provided in the source material for this requirement, so you should treat enforcement risk as assessment and contractual risk rather than cite specific penalties or case outcomes. 2
Practical risk you can explain to leadership:
- Collecting or inferring protected-activity data increases privacy impact and can create mission and reputational harm if misused.
- For federal programs and contractors, failure to implement PT-7(2) typically shows up as an audit finding: missing prohibition, weak exception handling, and weak evidence of operational enforcement. 2
Practical 30/60/90-day execution plan
First 30 days (Immediate stabilization)
- Assign owner(s) and publish the PT-7(2) standard with exception paths. 1
- Identify highest-risk intake points: forms, ticketing, analytics, OSINT/social feeds.
- Freeze new development that would introduce protected-activity profiling until reviewed.
Days 31–60 (Control build-out)
- Implement restricted data tagging in your catalog/schema where feasible.
- Add collection controls (remove fields, update templates, add warnings).
- Stand up the exception workflow and register (even if manual at first).
- Update third-party contract language and processing instructions for affected providers.
Days 61–90 (Operationalize and make it auditable)
- Add monitoring: scheduled scans, sampling, and exception aging reviews.
- Run an access review on repositories likely to contain restricted data.
- Execute a tabletop exercise: a support ticket includes protest participation; what happens? Document outcomes and improvements.
- Use Daydream (or your GRC system) to track owners, procedures, and evidence uploads so the control stays testable over time. 1
Frequently Asked Questions
Does PT-7(2) ban storing someone’s religion for benefits administration?
PT-7(2) targets information describing how a person exercises First Amendment rights, and it allows processing with the individual’s authorization or when authorized by statute. Document the legal basis and restrict access and downstream use to the approved purpose. 1
What if we don’t have a “political affiliation” field, but our ML model infers it?
Inferred attributes can still be “information describing” First Amendment exercise. Add feature governance to block protected-activity features and require an exception before creating or using those outputs. 1
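The feature-governance idea above can be sketched as a pre-deployment check that rejects proposed model features matching protected-activity patterns unless an approved exception covers them. The pattern list and feature names are hypothetical examples.

```python
# Hypothetical protected-activity name patterns; maintain with privacy review.
BLOCKED_FEATURE_PATTERNS = ("political", "religio", "union_", "protest")

def vet_features(proposed: list[str], approved_exceptions: set[str] = frozenset()) -> list[str]:
    """Return the proposed feature names that must be rejected: anything
    matching a protected-activity pattern with no approved exception."""
    return [
        f for f in proposed
        if any(p in f.lower() for p in BLOCKED_FEATURE_PATTERNS)
        and f not in approved_exceptions
    ]
```

Name-based screening is only a first pass; it will not catch a neutrally named feature that encodes protected activity, so pair it with human review of feature definitions and training data.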
Do we need to delete historical data that could include First Amendment information?
If you discover it exists without a valid exception basis, treat it as a remediation issue: identify locations, restrict access immediately, then decide on deletion or minimization with legal and privacy input. Keep evidence of the decision and the actions taken. 2
How do we handle customer support notes where agents may mention protests, unions, or religion?
Train agents with concrete do/don’t examples, remove prompts that invite capture, and add scanning or review workflows for flagged terms in free-text fields. Pair that with a redaction/edit process for improperly captured content. 2
What documentation satisfies “authorized by the individual”?
Keep a record that ties the individual’s authorization to the specific processing purpose and system context, and ensure the authorization is retrievable during an audit. Avoid relying on vague, blanket permission that cannot be mapped to the processing activity. 1
How do we flow this requirement down to third parties?
Put explicit contract terms and processing instructions in place: the third party must not collect, infer, or process First Amendment information unless you provide documented direction under a valid exception, and they must support monitoring and deletion requests. 2
Footnotes
1. NIST SP 800-53 Rev. 5 OSCAL JSON.
2. NIST SP 800-53 Rev. 5.
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream