Threat Intelligence Sharing
To meet the threat intelligence sharing requirement (C2M2 v2.1 ISC-1.B), you must run a defined, repeatable process to share cybersecurity information (for example, indicators of compromise and lessons learned) with relevant external communities, and keep proof that sharing occurred. Assign an owner, define what gets shared and when, and retain records that stand up in audits.
Key takeaways:
- Treat threat intelligence sharing as an operating control with scope, ownership, cadence, and evidence 1
- Share actionable cybersecurity information with relevant organizations and communities, not ad hoc anecdotes 1
- Auditors will focus on “show me” artifacts: what you shared, with whom, approvals, and exceptions tracked to closure 1
Threat intelligence sharing is easy to describe and surprisingly easy to fail in practice. Teams join an ISAC, set up a mailbox, and consider the requirement “done.” C2M2’s expectation is operational: cybersecurity information must actually be shared with relevant organizations and communities within your defined scope 1. That means you need a working mechanism to (1) decide what to share, (2) sanitize it so you do not spill sensitive or regulated data, (3) publish it through approved channels, and (4) retain evidence that the process runs as designed.
This page translates the requirement into a control you can execute quickly. It gives you a practical workflow, decision points that tend to stall approvals, and an evidence pack you can hand to internal audit or customer assessors. If you run a critical infrastructure or energy-sector environment, this is also a resilience control: sharing credible indicators and attack patterns improves sector-wide detection and response, and regulators and counterparties increasingly expect it as part of mature cyber governance 2.
Regulatory text
Excerpt: “Cybersecurity information is shared with relevant organizations and communities.” 1
Operator interpretation: You need a defined process that results in outbound sharing of cybersecurity-relevant information to external parties that make sense for your environment (sector peers, coordinating bodies, information sharing communities). “Shared” must be demonstrable through artifacts, not implied by memberships or subscriptions 1.
What you must do to satisfy the requirement:
- Establish ownership and governance for outbound sharing.
- Define what categories of cybersecurity information you will share (and what you will not share).
- Set approved sharing channels and recipients (“relevant organizations and communities”).
- Execute sharing on a repeatable cadence and during defined triggers (for example, incidents).
- Retain evidence that shows sharing occurred and exceptions were handled 1.
Plain-English requirement statement
You must be able to show that your organization actively shares useful cybersecurity information (such as indicators, tactics observed, defensive measures, and lessons learned) with appropriate external communities, using an approved process that protects sensitive data and produces audit-ready records 1.
Who it applies to
Entity scope: Organizations using C2M2 to assess cybersecurity maturity, commonly in energy sector and other critical infrastructure contexts 1.
Operational scope: Applies to the business unit, function, or OT/IT environment you have defined as “in scope” for the C2M2 assessment 1.
Teams involved (typical):
- Security operations / threat intel / incident response (content generation)
- Legal / privacy / regulatory compliance (sanitization rules, approvals)
- Communications (external-facing statements when needed)
- Third-party risk management (sharing through and with third parties, contract alignment)
- OT engineering (if OT indicators or plant impacts are shared)
What you actually need to do (step-by-step)
Step 1: Assign accountability and write the minimum viable procedure
Create a short “Threat Intelligence Sharing Procedure” with:
- Control owner (role, not a name)
- In-scope systems and environments (IT, OT, cloud)
- What counts as cybersecurity information you will share (examples below)
- Approved communities/channels and who can post
- Approval path (including when expedited sharing is allowed)
- Evidence to retain and where it lives 1
Keep it short enough that responders can follow it during an incident.
Step 2: Define what you share (and standardize formats)
Create a simple taxonomy for outbound items. Start with these “shareable” categories:
- Indicators of compromise (IPs, domains, file hashes, email subjects, YARA/Sigma-style patterns where appropriate)
- TTP summaries (what was observed, how it worked, what it targeted)
- Defensive actions that helped (detections, hardening actions, monitoring tips)
- Post-incident lessons learned that peers can act on
Define a “do not share” list:
- Sensitive operational details that increase attackability of your environment
- Data that could identify individuals or customers unless explicitly permitted
- Attorney-client privileged content
- Proprietary details your contracts prohibit
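The taxonomy above can be captured as a lightweight record so every outbound item carries the same minimum fields. This is an illustrative sketch, not a C2M2-mandated schema; the class, field, and enum names are assumptions you would adapt to your own procedure.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ShareCategory(Enum):
    INDICATOR = "indicator"                # IoCs: IPs, domains, hashes, patterns
    TTP_SUMMARY = "ttp_summary"            # what was observed and how it worked
    DEFENSIVE_ACTION = "defensive_action"  # detections, hardening, monitoring tips
    LESSON_LEARNED = "lesson_learned"      # post-incident takeaways peers can act on

@dataclass
class OutboundShareItem:
    """One sanitized item queued for outbound sharing (illustrative fields)."""
    category: ShareCategory
    summary: str                  # sanitized description peers can act on
    observed_date: date
    confidence: str               # e.g. "low" / "medium" / "high"
    recommended_action: str
    marking: str = "TLP:AMBER"    # default marking; adjust per community rules

item = OutboundShareItem(
    category=ShareCategory.INDICATOR,
    summary="Phishing domain impersonating corporate SSO portal",
    observed_date=date(2024, 5, 2),
    confidence="high",
    recommended_action="Block domain at DNS and mail gateways",
)
```

Standardizing fields this way also makes the later sharing-log and audit-sampling steps mechanical rather than judgment calls.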
Step 3: Define “relevant organizations and communities” for your scope
Document the specific communities that make sense for your environment. Common options include:
- Sector information sharing communities (for example, ISAC-like groups)
- Government or coordinating bodies relevant to your sector
- Trusted peer groups with NDAs
- Product security communities when a vulnerability is involved
Your evidence should show you made a reasoned selection aligned to your risk profile, not a random list.
Step 4: Build a sanitization and approval workflow that can move fast
Most programs fail here because approvals are unclear. Implement:
- Default sanitization rules (strip customer names, internal hostnames, employee identifiers)
- Traffic Light Protocol (TLP) labeling or an equivalent internal marking scheme, if your communities use it
- Approval matrix:
  - Routine indicators: security duty officer approval
  - Incident-derived narrative: security + legal/compliance approval
  - Any content referencing a third party: third-party risk + legal review
Make “expedited sharing” explicit: define what can be shared quickly during active campaigns, and how after-the-fact review is performed.
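Default sanitization rules are easiest to enforce when they are executable rather than prose. A minimal sketch, assuming a regex-based redaction pass; the patterns (an internal `.corp.internal` hostname suffix and the customer name `AcmeCo`) are placeholders you must replace with your own environment's rules:

```python
import re

# Illustrative default redaction rules; every pattern here is an assumption
# and must be tuned to your environment before any production use.
REDACTION_RULES = [
    (re.compile(r"\b[\w.-]+@[\w.-]+\.\w+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\b[\w-]+\.corp\.internal\b"), "[REDACTED-HOST]"),  # assumed internal suffix
    (re.compile(r"\bAcmeCo\b"), "[REDACTED-CUSTOMER]"),              # example customer name
]

def sanitize(text: str) -> str:
    """Apply default redaction rules before any outbound share."""
    for pattern, replacement in REDACTION_RULES:
        text = pattern.sub(replacement, text)
    return text

raw = "Beacon from db01.corp.internal reported by jane.doe@example.com (AcmeCo)"
print(sanitize(raw))
```

Running the default pass automatically (for example, in your ticket workflow) lets human review focus on narrative content instead of hunting for hostnames.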
Step 5: Operationalize triggers and cadence
Define at least two operating modes:
- Cadenced sharing: a recurring package (for example, a monthly digest of notable indicators and trends)
- Event-driven sharing: triggered by incidents, credible threats, material vulnerabilities, or active phishing campaigns targeting your workforce
Tie event-driven sharing to your incident response process so it happens reliably.
Step 6: Execute and record proof
For every outbound share, capture:
- Date/time and channel
- Recipient community or organization
- What was shared (artifact or summary)
- Classification/marking and sanitization notes
- Approver(s) and any redactions applied
If you shared verbally (calls, briefings), retain a short memo and attendee list.
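The capture list above can be enforced with a small append-only log writer that rejects incomplete entries. A sketch under stated assumptions: the field names and the JSON Lines file format are illustrative choices, not requirements from C2M2.

```python
import json
from datetime import datetime, timezone

# Required fields for every sharing-log entry (names are illustrative).
REQUIRED_FIELDS = {
    "timestamp", "channel", "recipient",
    "content_summary", "marking", "approvers",
}

def record_share(entry: dict, log_path: str = "sharing_log.jsonl") -> dict:
    """Validate and append one outbound-share record to an append-only log."""
    missing = REQUIRED_FIELDS - entry.keys()
    if missing:
        raise ValueError(f"sharing-log entry missing fields: {sorted(missing)}")
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

record_share({
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "channel": "sector sharing portal",
    "recipient": "energy-sector sharing community",
    "content_summary": "3 sanitized phishing indicators",
    "marking": "TLP:AMBER",
    "approvers": ["security duty officer"],
    "sanitization_notes": "internal hostnames redacted",
})
```

Rejecting entries with missing fields at write time is what makes later audit sampling a read-only exercise.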
Step 7: Review control performance and track exceptions to closure
Run periodic control checks:
- Are shares happening as defined?
- Are approvals timely?
- Are communities responding with actionable intel you can ingest?
Track exceptions as tickets with ownership and closure evidence 1. This is where many teams fail audits: they “know there was a delay,” but cannot show they identified it as a control issue and fixed it.
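Part of this periodic check can be automated from the sharing log, for example by flagging shares whose approval latency exceeded a service-level threshold. The 24-hour SLA and the record fields below are assumptions for illustration; set both per your own procedure.

```python
from datetime import datetime

APPROVAL_SLA_HOURS = 24  # assumed threshold; define this in your procedure

def approval_exceptions(shares):
    """Flag shares whose approval exceeded the SLA, for the exception register."""
    exceptions = []
    for share in shares:
        requested = datetime.fromisoformat(share["requested"])
        approved = datetime.fromisoformat(share["approved"])
        hours = (approved - requested).total_seconds() / 3600
        if hours > APPROVAL_SLA_HOURS:
            exceptions.append({"id": share["id"], "latency_hours": round(hours, 1)})
    return exceptions

shares = [
    {"id": "SH-101", "requested": "2024-06-03T09:00", "approved": "2024-06-03T11:30"},
    {"id": "SH-102", "requested": "2024-06-10T09:00", "approved": "2024-06-12T16:00"},
]
print(approval_exceptions(shares))  # SH-102 exceeded the 24-hour SLA
```

Each flagged item should become a ticket with an owner, so the control review shows detection and closure rather than anecdote.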
Required evidence and artifacts to retain (audit-ready)
Maintain a single folder or GRC record with:
- Threat Intelligence Sharing Procedure (current version + change history) 1
- List of approved communities/channels and authorized posters
- Approval matrix and sanitization guidelines (including any legal/privacy rules)
- Sharing log (tickets, emails, portal posts, screenshots, SIEM/SOAR notes)
- Samples of shared indicators and write-ups (sanitized copies)
- Exception register with remediation tickets and closure notes 1
- Periodic review record showing the control was reviewed on a defined cadence 1
If you use Daydream to manage controls and evidence collection, map this requirement to a single control record and attach the sharing log outputs and review tickets as time-stamped evidence. That makes “prove it operated” a pull, not a scramble.
Common exam/audit questions and hangups
Auditors and assessors tend to ask:
- “Who owns outbound sharing, and what authority do they have to publish?”
- “What communities do you share with, and why are they relevant?”
- “Show me the last few shares and the approval trail.”
- “How do you prevent sensitive data from being shared?”
- “What happens when you decide not to share something? Where is that decision recorded?”
- “How do you know the process runs consistently rather than only during major incidents?” 1
Hangups usually appear around legal approval latency and unclear data handling rules.
Frequent implementation mistakes (and how to avoid them)
- Counting “membership” as sharing. Fix: require a sharing log entry that ties to an outbound post or briefing.
- No clear definition of “cybersecurity information.” Fix: publish examples and minimum fields (indicator, context, observed date, confidence, recommended action).
- Approvals that block responders. Fix: pre-authorize routine indicator sharing; reserve legal review for narratives and sensitive cases.
- Over-sharing sensitive operational detail. Fix: publish redaction rules and require classification/marking on every share.
- No evidence retention. Fix: automate capture (ticket templates, mailbox journaling, portal export) and store in a single evidence location.
- Ignoring third-party constraints. Fix: coordinate with third-party risk management and check contractual confidentiality before sharing data that could identify a third party.
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for this requirement. Treat the risk as an assurance and resilience gap: if you cannot show threat intelligence sharing is assigned, executed, and evidenced, you may not be able to demonstrate effective operation during internal control testing, audits, customer diligence, or regulator review 1. The practical impact is delay: delayed approvals, missing logs, and inconsistent sharing all surface during assessments.
Practical 30/60/90-day execution plan
First 30 days (stand up the control)
- Name an owner and backup; document responsibilities.
- Draft the Threat Intelligence Sharing Procedure (one to two pages).
- Identify approved communities/channels and set up access for authorized posters.
- Publish sanitization rules and an approval matrix.
- Create a sharing log template and a ticket workflow for exceptions.
Days 31–60 (run it in production)
- Start cadenced sharing (even if small) and record artifacts.
- Run one tabletop: “incident-derived intel, sanitized, approved, posted.”
- Validate retention: confirm you can retrieve approvals and posted content quickly.
- Tune decision points that slow down responders (most fixes are in the approval matrix).
Days 61–90 (make it audit-proof)
- Perform a formal control review and document outcomes 1.
- Sample recent shares and confirm required fields, markings, and approvals are present.
- Close any exceptions with tickets and evidence 1.
- Align third-party language: ensure NDAs and contracts support participation and sharing where required.
Frequently Asked Questions
Do we have to join an ISAC to meet the threat intelligence sharing requirement?
The requirement is to share cybersecurity information with relevant organizations and communities, not to join any specific group 1. If an ISAC-like community is the most relevant channel for your sector, document that rationale and keep evidence of actual outbound sharing.
What counts as “cybersecurity information” for sharing?
Share information that peers can act on: indicators, observed tactics, defensive measures, and lessons learned with enough context to apply safely 1. Avoid sensitive operational details, personal data, and privileged content.
Can we share anonymously?
Often yes, depending on the community channel and your internal rules. Document when anonymous or unattributed sharing is permitted and keep the same evidence trail (what was shared, approvals, and where it was posted).
How do we handle threat intel that references a third party (vendor, partner, customer)?
Add a review step for third-party-identifying details and check contractual confidentiality restrictions before sharing. If you redact the identity, record what you changed and why in the sharing log.
What evidence is strongest for auditors?
Time-stamped outbound posts or messages, linked approvals, and a maintained sharing log that shows repeatability. Also retain periodic control reviews and exception tickets tracked to closure 1.
We ingest intel but rarely produce it. How do we comply?
Start with small outbound shares: sanitized indicators seen in your environment and defensive detections you created. The control is about outbound sharing occurring through an assigned process, even if volume is modest 1.
Footnotes
1. Cybersecurity Capability Maturity Model (C2M2) v2.1.
Authoritative Sources
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream