Compliance Requirements Library
2,583 requirement records across 41 compliance frameworks, including SEC/FINRA obligations and framework controls.
C2M2
10 requirements · C2M2
- Architecture and resilience engineering
- Asset, change, and configuration management
- Cybersecurity governance maturity
- Cybersecurity workforce capability development
- Identity and access management maturity
- Incident response and continuity
- Information sharing and stakeholder coordination
- Operational situational awareness
- Supply chain and external dependency risk
- Threat and vulnerability management
CIS AWS Foundations
43 requirements · CIS AWS Foundations
- CIS AWS Foundations v1.2 1.1: Avoid the use of the root user
- CIS AWS Foundations v1.2 1.10: Ensure IAM password policy prevents password reuse
- CIS AWS Foundations v1.2 1.11: Ensure IAM password policy expires passwords within 90 days or less
- CIS AWS Foundations v1.2 1.12: IAM root user access key should not exist
- CIS AWS Foundations v1.2 1.13: MFA should be enabled for the root user
- CIS AWS Foundations v1.2 1.14: Hardware MFA should be enabled for the root user
- CIS AWS Foundations v1.2 1.16: IAM users should not have IAM policies attached
- CIS AWS Foundations v1.2 1.18: Security contact information should be provided for an AWS account
- CIS AWS Foundations v1.2 1.2: MFA should be enabled for all IAM users that have a console password
- CIS AWS Foundations v1.2 1.22: IAM policies should not allow full "*" administrative privileges
- CIS AWS Foundations v1.2 1.3: Unused IAM user credentials should be removed
- CIS AWS Foundations v1.2 1.4: IAM users' access keys should be rotated every 90 days or less
- CIS AWS Foundations v1.2 1.5: Ensure IAM password policy requires at least one uppercase letter
- CIS AWS Foundations v1.2 1.6: Ensure IAM password policy requires at least one lowercase letter
- CIS AWS Foundations v1.2 1.7: Ensure IAM password policy requires at least one symbol
- CIS AWS Foundations v1.2 1.8: Ensure IAM password policy requires at least one number
- CIS AWS Foundations v1.2 1.9: Ensure IAM password policy requires minimum password length of 14 or greater
- CIS AWS Foundations v1.2 2.1: CloudTrail should be enabled and configured with at least one multi-Region trail that includes read and write management events
- CIS AWS Foundations v1.2 2.2: CloudTrail log file validation should be enabled
- CIS AWS Foundations v1.2 2.3: Ensure the S3 bucket used to store CloudTrail logs is not publicly accessible
- CIS AWS Foundations v1.2 2.4: CloudTrail trails should be integrated with Amazon CloudWatch Logs
- CIS AWS Foundations v1.2 2.5: AWS Config should be enabled and use the service-linked role for resource recording
- CIS AWS Foundations v1.2 2.6: Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket
- CIS AWS Foundations v1.2 2.7: CloudTrail should have encryption at-rest enabled
- CIS AWS Foundations v1.2 2.8: AWS KMS key rotation should be enabled
- CIS AWS Foundations v1.2 2.9: VPC flow logging should be enabled in all VPCs
- CIS AWS Foundations v1.2 3.1: Ensure a log metric filter and alarm exist for unauthorized API calls
- CIS AWS Foundations v1.2 3.10: Ensure a log metric filter and alarm exist for security group changes
- CIS AWS Foundations v1.2 3.11: Ensure a log metric filter and alarm exist for changes to Network Access Control Lists (NACL)
- CIS AWS Foundations v1.2 3.12: Ensure a log metric filter and alarm exist for changes to network gateways
- CIS AWS Foundations v1.2 3.13: Ensure a log metric filter and alarm exist for route table changes
- CIS AWS Foundations v1.2 3.14: Ensure a log metric filter and alarm exist for VPC changes
- CIS AWS Foundations v1.2 3.2: Ensure a log metric filter and alarm exist for Management Console sign-in without MFA
- CIS AWS Foundations v1.2 3.3: A log metric filter and alarm should exist for usage of the "root" user
- CIS AWS Foundations v1.2 3.4: Ensure a log metric filter and alarm exist for IAM policy changes
- CIS AWS Foundations v1.2 3.5: Ensure a log metric filter and alarm exist for CloudTrail configuration changes
- CIS AWS Foundations v1.2 3.6: Ensure a log metric filter and alarm exist for AWS Management Console authentication failures
- CIS AWS Foundations v1.2 3.7: Ensure a log metric filter and alarm exist for disabling or scheduled deletion of customer managed keys
- CIS AWS Foundations v1.2 3.8: Ensure a log metric filter and alarm exist for S3 bucket policy changes
- CIS AWS Foundations v1.2 3.9: Ensure a log metric filter and alarm exist for AWS Config configuration changes
- CIS AWS Foundations v1.2 4.1: Security groups should not allow ingress from 0.0.0.0/0 or ::/0 to port 22
- CIS AWS Foundations v1.2 4.2: Security groups should not allow ingress from 0.0.0.0/0 or ::/0 to port 3389
- CIS AWS Foundations v1.2 4.3: VPC default security groups should not allow inbound or outbound traffic
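Controls like 4.1 and 4.2 above are mechanically checkable. As a minimal sketch (assuming the `IpPermissions` dict shape returned by the EC2 `DescribeSecurityGroups` API; group names here are illustrative, and fetching via boto3 is left out so the logic stays self-contained):

```python
# Flag security groups that allow ingress from anywhere to a sensitive
# port (CIS AWS Foundations v1.2 4.1 for port 22, 4.2 for port 3389).

ANY_V4, ANY_V6 = "0.0.0.0/0", "::/0"

def open_to_world(permission, port):
    """True if a single IpPermissions entry exposes `port` to any address."""
    from_p = permission.get("FromPort")
    to_p = permission.get("ToPort")
    # Absent FromPort/ToPort means the rule covers all ports.
    in_range = (from_p is None or from_p <= port) and (to_p is None or port <= to_p)
    if not in_range:
        return False
    v4 = any(r.get("CidrIp") == ANY_V4 for r in permission.get("IpRanges", []))
    v6 = any(r.get("CidrIpv6") == ANY_V6 for r in permission.get("Ipv6Ranges", []))
    return v4 or v6

def violating_groups(security_groups, port):
    return [sg["GroupId"] for sg in security_groups
            if any(open_to_world(p, port) for p in sg.get("IpPermissions", []))]

sample = [
    {"GroupId": "sg-ssh-open",
     "IpPermissions": [{"FromPort": 22, "ToPort": 22,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]},
    {"GroupId": "sg-restricted",
     "IpPermissions": [{"FromPort": 22, "ToPort": 22,
                        "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]},
]
print(violating_groups(sample, 22))  # → ['sg-ssh-open']
```

The same predicate reused with port 3389 covers 4.2; real scanners (AWS Config, Security Hub) evaluate these rules continuously rather than on demand.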
CIS Controls v8
153 requirements · CIS Controls v8
- Safeguard 1.1: Establish and Maintain Detailed Enterprise Asset Inventory
- Safeguard 1.2: Address Unauthorized Assets
- Safeguard 1.3: Utilize an Active Discovery Tool
- Safeguard 1.4: Use Dynamic Host Configuration Protocol (DHCP) Logging to Update Enterprise Asset Inventory
- Safeguard 1.5: Use a Passive Asset Discovery Tool
- Safeguard 10.1: Deploy and Maintain Anti-Malware Software
- Safeguard 10.2: Configure Automatic Anti-Malware Signature Updates
- Safeguard 10.3: Disable Autorun and Autoplay for Removable Media
- Safeguard 10.4: Configure Automatic Anti-Malware Scanning of Removable Media
- Safeguard 10.5: Enable Anti-Exploitation Features
- Safeguard 10.6: Centrally Manage Anti-Malware Software
- Safeguard 10.7: Use Behavior-Based Anti-Malware Software
- Safeguard 11.1: Establish and Maintain a Data Recovery Process
- Safeguard 11.2: Perform Automated Backups
- Safeguard 11.3: Protect Recovery Data
- Safeguard 11.4: Establish and Maintain an Isolated Instance of Recovery Data
- Safeguard 11.5: Test Data Recovery
- Safeguard 12.1: Ensure Network Infrastructure is Up-to-Date
- Safeguard 12.2: Establish and Maintain a Secure Network Architecture
- Safeguard 12.3: Securely Manage Network Infrastructure
- Safeguard 12.4: Establish and Maintain Architecture Diagram(s)
- Safeguard 12.5: Centralize Network Authentication, Authorization, and Auditing (AAA)
- Safeguard 12.6: Use of Secure Network Management and Communication Protocols
- Safeguard 12.7: Ensure Remote Devices Utilize a VPN and are Connecting to an Enterprise's AAA Infrastructure
- Safeguard 12.8: Establish and Maintain Dedicated Computing Resources For all Administrative Work
- Safeguard 13.1: Centralize Security Event Alerting
- Safeguard 13.10: Perform Application Layer Filtering
- Safeguard 13.11: Tune Security Event Alerting Thresholds
- Safeguard 13.2: Deploy a Host-Based Intrusion Detection Solution
- Safeguard 13.3: Deploy a Network Intrusion Detection Solution
- Safeguard 13.4: Perform Traffic Filtering Between Network Segments
- Safeguard 13.5: Manage Access Control for Remote Assets
- Safeguard 13.6: Collect Network Traffic Flow Logs
- Safeguard 13.7: Deploy a Host-Based Intrusion Prevention Solution
- Safeguard 13.8: Deploy a Network Intrusion Prevention Solution
- Safeguard 13.9: Deploy Port-Level Access Control
- Safeguard 14.1: Establish and Maintain a Security Awareness Program
- Safeguard 14.2: Train Workforce Members to Recognize Social Engineering Attacks
- Safeguard 14.3: Train Workforce Members on Authentication Best Practices
- Safeguard 14.4: Train Workforce on Data Handling Best Practices
- Safeguard 14.5: Train Workforce Members on Causes of Unintentional Data Exposure
- Safeguard 14.6: Train Workforce Members on Recognizing and Reporting Security Incidents
- Safeguard 14.7: Train Workforce on How to Identify and Report if Their Enterprise Assets are Missing Security Updates
- Safeguard 14.8: Train Workforce on the Dangers of Connecting to and Transmitting Enterprise Data Over Insecure Networks
- Safeguard 14.9: Conduct Role-Specific Security Awareness and Skills Training
- Safeguard 15.1: Establish and Maintain an Inventory of Service Providers
- Safeguard 15.2: Establish and Maintain a Service Provider Management Policy
- Safeguard 15.3: Classify Service Providers
- Safeguard 15.4: Ensure Service Provider Contracts Include Security Requirements
- Safeguard 15.5: Assess Service Providers
- Safeguard 15.6: Monitor Service Providers
- Safeguard 15.7: Securely Decommission Service Providers
- Safeguard 16.1: Establish and Maintain a Secure Application Development Process
- Safeguard 16.10: Apply Secure Design Principles in Application Architectures
- Safeguard 16.11: Leverage Vetted Modules or Services for Application Security Components
- Safeguard 16.12: Implement Code-Level Security Checks
- Safeguard 16.13: Conduct Application Penetration Testing
- Safeguard 16.14: Conduct Threat Modeling
- Safeguard 16.2: Establish and Maintain a Process to Accept and Address Software Vulnerabilities
- Safeguard 16.3: Perform Root Cause Analysis on Security Vulnerabilities
- Safeguard 16.4: Establish and Manage an Inventory of Third-Party Software Components
- Safeguard 16.5: Use Up-to-Date and Trusted Third-Party Software Components
- Safeguard 16.6: Establish and Maintain a Severity Rating System and Process for Application Vulnerabilities
- Safeguard 16.7: Use Standard Hardening Configuration Templates for Application Infrastructure
- Safeguard 16.8: Separate Production and Non-Production Systems
- Safeguard 16.9: Train Developers in Application Security Concepts and Secure Coding
- Safeguard 17.1: Designate Personnel to Manage Incident Handling
- Safeguard 17.2: Establish and Maintain Contact Information for Reporting Security Incidents
- Safeguard 17.3: Establish and Maintain an Enterprise Process for Reporting Incidents
- Safeguard 17.4: Establish and Maintain an Incident Response Process
- Safeguard 17.5: Assign Key Roles and Responsibilities
- Safeguard 17.6: Define Mechanisms for Communicating During Incident Response
- Safeguard 17.7: Conduct Routine Incident Response Exercises
- Safeguard 17.8: Conduct Post-Incident Reviews
- Safeguard 17.9: Establish and Maintain Security Incident Thresholds
- Safeguard 18.1: Establish and Maintain a Penetration Testing Program
- Safeguard 18.2: Perform Periodic External Penetration Tests
- Safeguard 18.3: Remediate Penetration Test Findings
- Safeguard 18.4: Validate Security Measures
- Safeguard 18.5: Perform Periodic Internal Penetration Tests
- Safeguard 2.1: Establish and Maintain a Software Inventory
- Safeguard 2.2: Ensure Authorized Software is Currently Supported
- Safeguard 2.3: Address Unauthorized Software
- Safeguard 2.4: Utilize Automated Software Inventory Tools
- Safeguard 2.5: Allowlist Authorized Software
- Safeguard 2.6: Allowlist Authorized Libraries
- Safeguard 2.7: Allowlist Authorized Scripts
- Safeguard 3.1: Establish and Maintain a Data Management Process
- Safeguard 3.10: Encrypt Sensitive Data in Transit
- Safeguard 3.11: Encrypt Sensitive Data at Rest
- Safeguard 3.12: Segment Data Processing and Storage Based on Sensitivity
- Safeguard 3.13: Deploy a Data Loss Prevention Solution
- Safeguard 3.14: Log Sensitive Data Access
- Safeguard 3.2: Establish and Maintain a Data Inventory
- Safeguard 3.3: Configure Data Access Control Lists
- Safeguard 3.4: Enforce Data Retention
- Safeguard 3.5: Securely Dispose of Data
- Safeguard 3.6: Encrypt Data on End-User Devices
- Safeguard 3.7: Establish and Maintain a Data Classification Scheme
- Safeguard 3.8: Document Data Flows
- Safeguard 3.9: Encrypt Data on Removable Media
- Safeguard 4.1: Establish and Maintain a Secure Configuration Process
- Safeguard 4.10: Enforce Automatic Device Lockout on Portable End-User Devices
- Safeguard 4.11: Enforce Remote Wipe Capability on Portable End-User Devices
- Safeguard 4.12: Separate Enterprise Workspaces on Mobile End-User Devices
- Safeguard 4.2: Establish and Maintain a Secure Configuration Process for Network Infrastructure
- Safeguard 4.3: Configure Automatic Session Locking on Enterprise Assets
- Safeguard 4.4: Implement and Manage a Firewall on Servers
- Safeguard 4.5: Implement and Manage a Firewall on End-User Devices
- Safeguard 4.6: Securely Manage Enterprise Assets and Software
- Safeguard 4.7: Manage Default Accounts on Enterprise Assets and Software
- Safeguard 4.8: Uninstall or Disable Unnecessary Services on Enterprise Assets and Software
- Safeguard 4.9: Configure Trusted DNS Servers on Enterprise Assets
- Safeguard 5.1: Establish and Maintain an Inventory of Accounts
- Safeguard 5.2: Use Unique Passwords
- Safeguard 5.3: Disable Dormant Accounts
- Safeguard 5.4: Restrict Administrator Privileges to Dedicated Administrator Accounts
- Safeguard 5.5: Establish and Maintain an Inventory of Service Accounts
- Safeguard 5.6: Centralize Account Management
- Safeguard 6.1: Establish an Access Granting Process
- Safeguard 6.2: Establish an Access Revoking Process
- Safeguard 6.3: Require MFA for Externally-Exposed Applications
- Safeguard 6.4: Require MFA for Remote Network Access
- Safeguard 6.5: Require MFA for Administrative Access
- Safeguard 6.6: Establish and Maintain an Inventory of Authentication and Authorization Systems
- Safeguard 6.7: Centralize Access Control
- Safeguard 6.8: Define and Maintain Role-Based Access Control
- Safeguard 7.1: Establish and Maintain a Vulnerability Management Process
- Safeguard 7.2: Establish and Maintain a Remediation Process
- Safeguard 7.3: Perform Automated Operating System Patch Management
- Safeguard 7.4: Perform Automated Application Patch Management
- Safeguard 7.5: Perform Automated Vulnerability Scans of Internal Enterprise Assets
- Safeguard 7.6: Perform Automated Vulnerability Scans of Externally-Exposed Enterprise Assets
- Safeguard 7.7: Remediate Detected Vulnerabilities
- Safeguard 8.1: Establish and Maintain an Audit Log Management Process
- Safeguard 8.10: Retain Audit Logs
- Safeguard 8.11: Conduct Audit Log Reviews
- Safeguard 8.12: Collect Service Provider Logs
- Safeguard 8.2: Collect Audit Logs
- Safeguard 8.3: Ensure Adequate Audit Log Storage
- Safeguard 8.4: Standardize Time Synchronization
- Safeguard 8.5: Collect Detailed Audit Logs
- Safeguard 8.6: Collect DNS Query Audit Logs
- Safeguard 8.7: Collect URL Request Audit Logs
- Safeguard 8.8: Collect Command-Line Audit Logs
- Safeguard 8.9: Centralize Audit Logs
- Safeguard 9.1: Ensure Use of Only Fully Supported Browsers and Email Clients
- Safeguard 9.2: Use DNS Filtering Services
- Safeguard 9.3: Maintain and Enforce Network-Based URL Filters
- Safeguard 9.4: Restrict Unnecessary or Unauthorized Browser and Email Client Extensions
- Safeguard 9.5: Implement DMARC
- Safeguard 9.6: Block Unnecessary File Types
- Safeguard 9.7: Deploy and Maintain Email Server Anti-Malware Protections
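Safeguard 9.5 (Implement DMARC) is verifiable from a domain's `_dmarc` TXT record. A minimal sketch of that check (the record string is passed in directly; DNS lookup is out of scope, and the tag handling is simplified relative to the full RFC 7489 grammar):

```python
# Validate that a DMARC TXT record is well-formed and actually enforcing
# (policy "quarantine" or "reject"), per Safeguard 9.5.

def parse_dmarc(record):
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

def is_enforcing(record):
    """True only for a valid record whose policy rejects or quarantines."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") in ("quarantine", "reject")

print(is_enforcing("v=DMARC1; p=reject; rua=mailto:dmarc@example.com"))  # → True
print(is_enforcing("v=DMARC1; p=none"))  # valid record, monitoring-only → False
```

The `p=none` case is the common gap in practice: the record exists, so naive checks pass, but no enforcement occurs.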
Client Communications & Marketing Compliance
54 requirements · FINRA, SEC, SEC Investment Advisers Act Section 206, SEC Rule 206(4)-1(a)(5), SEC-Enforcement, SEC-Enforcement-Trends, SEC-Knowledge-Graph, State
- 2025 SEC Marketing Rule Examination Focus Areas
- Arizona Investment Adviser Advertising Rules and Disclosure Requirements
- California Form ADV Brochure Delivery and Disclosure Requirements
- California Investment Adviser Advertising and Disclosure Rules
- Client Notification and Disclosure Requirements for Custody
- Client Relationship Summary - Form CRS Requirements
- Cybersecurity Policies and Incident Disclosure Requirements
- Cybersecurity Risk Management and Incident Disclosure
- ESG Investment Disclosure and Anti-Greenwashing Standards
- FINRA Communication Supervision and Approval Standards
- FINRA Communications Filing and Pre-Approval Requirements
- FINRA Customer Records and Communication Recordkeeping Violations
- FINRA Misleading Communications and Content Standards
- Form ADV Cybersecurity Risk Disclosure Requirements
- Investment Advisory Client Communication Recordkeeping
- Investment Performance Claims and Substantiation
- Marketing Communication Compliance and Substantiation Standards
- Misleading Marketing Communications and Fair Presentation
- Off-Channel Communications Compliance and Supervision
- Off-Channel Communications Enforcement Focus - WhatsApp and Personal Messaging Violations
- Related-Party Fee Disclosure Requirements
- SEC Client Communication Standards and Approval Requirements
- SEC Cybersecurity Risk Management and Incident Disclosure
- SEC Form CRS (Client Relationship Summary)
- SEC Knowledge Graph Cybersecurity Risk Management and Incident Disclosure
- SEC Marketing Communication Standards and Substantiation
- SEC Marketing Rule - Benchmark Selection and Disclosure Standards
- SEC Marketing Rule - Cherry-Picked Data and Selective Presentation
- SEC Marketing Rule - Comparison Claims and Substantiation
- SEC Marketing Rule - Exaggerated Claims and Superlative Language
- SEC Marketing Rule - False Statements
- SEC Marketing Rule - Fee Disclosure and Transparency Requirements
- SEC Marketing Rule - Hypothetical Performance Presentation
- SEC Marketing Rule - Performance Advertising
- SEC Marketing Rule - Promissory Language and Guaranteed Returns
- SEC Marketing Rule - Risk Minimization and Misleading Language
- SEC Marketing Rule - Testimonial and Endorsement Standards
- SEC Marketing Rule Comprehensive Compliance Framework
- SEC Marketing Rule Performance Representation Standards
- SEC - Use of client testimonials without proper disclosures
- SEC-Enforcement Client Communication Standards and Approval Requirements
- SEC-Enforcement Client Communication Standards and Approval Requirements
- SEC-Enforcement Client Communication Standards and Approval Requirements
- SEC-Enforcement Cybersecurity Risk Management and Incident Disclosure
- SEC-Enforcement Cybersecurity Risk Management and Incident Disclosure
- SEC-Enforcement ESG Investment Disclosure and Anti-Greenwashing Standards
- SEC-Enforcement ESG Investment Disclosure and Anti-Greenwashing Standards
- SEC-Enforcement ESG Investment Disclosure and Anti-Greenwashing Standards
- SEC-Enforcement Marketing Communication Standards and Substantiation
- SEC-Enforcement Marketing Communication Standards and Substantiation
- SEC-Enforcement Marketing Communication Standards and Substantiation
- Social Media Communications Supervision and Recordkeeping
- State Investment Adviser Advertising and Promotional Standards
- Unsubstantiated Marketing Claims and Misleading Statements
Client Onboarding & Suitability Compliance
7 requirements · FINRA, SEC, SEC-Enforcement, SEC-Enforcement-Trends, SEC-Knowledge-Graph
- FINRA Know Your Customer Rule
- FINRA Quantitative Suitability (Anti-Churning)
- FINRA Suitability Rule - Customer-Specific Requirements
- Regulation Best Interest Implementation Standards
- SEC Regulation Best Interest (Reg BI)
- Senior Investor Protection and Suitability
- Senior Investor Protection and Suitability
CMMC
110 requirements · CMMC
- CMMC Level 2 Practice 3.1.1: Limit system access to authorized users, processes acting on behalf of authorized users, and devices (including other systems)
- CMMC Level 2 Practice 3.1.10: Use session lock with pattern-hiding displays to prevent access and viewing of data after a period of inactivity
- CMMC Level 2 Practice 3.1.11: Terminate (automatically) a user session after a defined condition
- CMMC Level 2 Practice 3.1.12: Monitor and control remote access sessions
- CMMC Level 2 Practice 3.1.13: Employ cryptographic mechanisms to protect the confidentiality of remote access sessions
- CMMC Level 2 Practice 3.1.14: Route remote access via managed access control points
- CMMC Level 2 Practice 3.1.15: Authorize remote execution of privileged commands and remote access to security-relevant information
- CMMC Level 2 Practice 3.1.16: Authorize wireless access prior to allowing such connections
- CMMC Level 2 Practice 3.1.17: Protect wireless access using authentication and encryption
- CMMC Level 2 Practice 3.1.18: Control connection of mobile devices
- CMMC Level 2 Practice 3.1.19: Encrypt CUI on mobile devices and mobile computing platforms
- CMMC Level 2 Practice 3.1.2: Limit system access to the types of transactions and functions that authorized users are permitted to execute
- CMMC Level 2 Practice 3.1.20: Verify and control/limit connections to and use of external systems
- CMMC Level 2 Practice 3.1.21: Limit use of portable storage devices on external systems
- CMMC Level 2 Practice 3.1.22: Control CUI posted or processed on publicly accessible systems
- CMMC Level 2 Practice 3.1.3: Control the flow of CUI in accordance with approved authorizations
- CMMC Level 2 Practice 3.1.4: Separate the duties of individuals to reduce the risk of malevolent activity without collusion
- CMMC Level 2 Practice 3.1.5: Employ the principle of least privilege, including for specific security functions and privileged accounts
- CMMC Level 2 Practice 3.1.6: Use non-privileged accounts or roles when accessing nonsecurity functions
- CMMC Level 2 Practice 3.1.7: Prevent non-privileged users from executing privileged functions and capture the execution of such functions in audit logs
- CMMC Level 2 Practice 3.1.8: Limit unsuccessful logon attempts
- CMMC Level 2 Practice 3.1.9: Provide privacy and security notices consistent with applicable CUI rules
- CMMC Level 2 Practice 3.10.1: Limit physical access to organizational systems, equipment, and the respective operating environments to authorized individuals
- CMMC Level 2 Practice 3.10.2: Protect and monitor the physical facility and support infrastructure for organizational systems
- CMMC Level 2 Practice 3.10.3: Escort visitors and monitor visitor activity
- CMMC Level 2 Practice 3.10.4: Maintain audit logs of physical access
- CMMC Level 2 Practice 3.10.5: Control and manage physical access devices
- CMMC Level 2 Practice 3.10.6: Enforce safeguarding measures for CUI at alternate work sites
- CMMC Level 2 Practice 3.11.1: Periodically assess the risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals, resulting from the operation of organizational systems and the associated processing, storage, or transmission of CUI
- CMMC Level 2 Practice 3.11.2: Scan for vulnerabilities in organizational systems and applications periodically and when new vulnerabilities affecting those systems and applications are identified
- CMMC Level 2 Practice 3.11.3: Remediate vulnerabilities in accordance with risk assessments
- CMMC Level 2 Practice 3.12.1: Periodically assess the security controls in organizational systems to determine if the controls are effective in their application
- CMMC Level 2 Practice 3.12.2: Develop and implement plans of action designed to correct deficiencies and reduce or eliminate vulnerabilities in organizational systems
- CMMC Level 2 Practice 3.12.3: Monitor security controls on an ongoing basis to ensure the continued effectiveness of the controls
- CMMC Level 2 Practice 3.12.4: Develop, document, and periodically update system security plans that describe system boundaries, system environments of operation, how security requirements are implemented, and the relationships with or connections to other systems
- CMMC Level 2 Practice 3.13.1: Monitor, control, and protect communications (i.e., information transmitted or received by organizational systems) at the external boundaries and key internal boundaries of organizational systems
- CMMC Level 2 Practice 3.13.10: Establish and manage cryptographic keys for cryptography employed in organizational systems
- CMMC Level 2 Practice 3.13.11: Employ FIPS-validated cryptography when used to protect the confidentiality of CUI
- CMMC Level 2 Practice 3.13.12: Prohibit remote activation of collaborative computing devices and provide indication of devices in use to users present at the device
- CMMC Level 2 Practice 3.13.13: Control and monitor the use of mobile code
- CMMC Level 2 Practice 3.13.14: Control and monitor the use of Voice over Internet Protocol (VoIP) technologies
- CMMC Level 2 Practice 3.13.15: Protect the authenticity of communications sessions
- CMMC Level 2 Practice 3.13.16: Protect the confidentiality of CUI at rest
- CMMC Level 2 Practice 3.13.2: Employ architectural designs, software development techniques, and systems engineering principles that promote effective information security within organizational systems
- CMMC Level 2 Practice 3.13.3: Separate user functionality from system management functionality
- CMMC Level 2 Practice 3.13.4: Prevent unauthorized and unintended information transfer via shared system resources
- CMMC Level 2 Practice 3.13.5: Implement subnetworks for publicly accessible system components that are physically or logically separated from internal networks
- CMMC Level 2 Practice 3.13.6: Deny network communications traffic by default and allow network communications traffic by exception (i.e., deny all, permit by exception)
- CMMC Level 2 Practice 3.13.7: Prevent remote devices from simultaneously establishing non-remote connections with organizational systems and communicating via some other connection to resources in external networks (i.e., split tunneling)
- CMMC Level 2 Practice 3.13.8: Implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission unless otherwise protected by alternative physical safeguards
- CMMC Level 2 Practice 3.13.9: Terminate network connections associated with communications sessions at the end of the sessions or after a defined period of inactivity
- CMMC Level 2 Practice 3.14.1: Identify, report, and correct system flaws in a timely manner
- CMMC Level 2 Practice 3.14.2: Provide protection from malicious code at designated locations within organizational systems
- CMMC Level 2 Practice 3.14.3: Monitor system security alerts and advisories and take action in response
- CMMC Level 2 Practice 3.14.4: Update malicious code protection mechanisms when new releases are available
- CMMC Level 2 Practice 3.14.5: Perform periodic scans of organizational systems and real-time scans of files from external sources as files are downloaded, opened, or executed
- CMMC Level 2 Practice 3.14.6: Monitor organizational systems, including inbound and outbound communications traffic, to detect attacks and indicators of potential attacks
- CMMC Level 2 Practice 3.14.7: Identify unauthorized use of organizational systems
- CMMC Level 2 Practice 3.2.1: Ensure that managers, systems administrators, and users of organizational systems are made aware of the security risks associated with their activities and of the applicable policies, standards, and procedures related to the security of those systems
- CMMC Level 2 Practice 3.2.2: Ensure that personnel are trained to carry out their assigned information security-related duties and responsibilities
- CMMC Level 2 Practice 3.2.3: Provide security awareness training on recognizing and reporting potential indicators of insider threat
- CMMC Level 2 Practice 3.3.1: Create and retain system audit logs and records to the extent needed to enable the monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity
- CMMC Level 2 Practice 3.3.2: Ensure that the actions of individual system users can be uniquely traced to those users, so they can be held accountable for their actions
- CMMC Level 2 Practice 3.3.3: Review and update logged events
- CMMC Level 2 Practice 3.3.4: Alert in the event of an audit logging process failure
- CMMC Level 2 Practice 3.3.5: Correlate audit record review, analysis, and reporting processes for investigation and response
- CMMC Level 2 Practice 3.3.6: Provide audit record reduction and report generation to support on-demand analysis and reporting
- CMMC Level 2 Practice 3.3.7: Provide a system capability that compares and synchronizes internal system clocks with an authoritative source to generate time stamps for audit records
- CMMC Level 2 Practice 3.3.8: Protect audit information and audit logging tools from unauthorized access, modification, and deletion
- CMMC Level 2 Practice 3.3.9: Limit management of audit logging functionality to a subset of privileged users
- CMMC Level 2 Practice 3.4.1: Establish and maintain baseline configurations and inventories of organizational systems
- CMMC Level 2 Practice 3.4.2: Establish and enforce security configuration settings for information technology products employed in organizational systems
- CMMC Level 2 Practice 3.4.3: Track, review, approve or disapprove, and log changes to organizational systems
- CMMC Level 2 Practice 3.4.4: Analyze the security impact of changes prior to implementation
- CMMC Level 2 Practice 3.4.5: Define, document, approve, and enforce physical and logical access restrictions associated with changes to organizational systems
- CMMC Level 2 Practice 3.4.6: Employ the principle of least functionality by configuring organizational systems to provide only essential capabilities
- CMMC Level 2 Practice 3.4.7: Restrict, disable, or prevent the use of nonessential programs, functions, ports, protocols, and services
- CMMC Level 2 Practice 3.4.8: Apply deny-by-exception (blacklisting) policy to prevent the use of unauthorized software or deny-all, permit-by-exception (whitelisting) policy to allow the execution of authorized software
- CMMC Level 2 Practice 3.4.9: Control and monitor user-installed software
- CMMC Level 2 Practice 3.5.1: Identify system users, processes acting on behalf of users, and devices
- CMMC Level 2 Practice 3.5.10: Store and transmit only cryptographically-protected passwords
- CMMC Level 2 Practice 3.5.11: Obscure feedback of authentication information
- CMMC Level 2 Practice 3.5.2: Authenticate (or verify) the identities of users, processes, or devices, as a prerequisite to allowing access to organizational systems
- CMMC Level 2 Practice 3.5.3: Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts
- CMMC Level 2 Practice 3.5.4: Employ replay-resistant authentication mechanisms for network access to privileged and non-privileged accounts
- CMMC Level 2 Practice 3.5.5: Prevent reuse of identifiers for a defined period
- CMMC Level 2 Practice 3.5.6: Disable identifiers after a defined period of inactivity
- CMMC Level 2 Practice 3.5.7: Enforce a minimum password complexity and change of characters when new passwords are created
- CMMC Level 2 Practice 3.5.8: Prohibit password reuse for a specified number of generations
- CMMC Level 2 Practice 3.5.9: Allow temporary password use for system logons with an immediate change to a permanent password
- CMMC Level 2 Practice 3.6.1: Establish an operational incident-handling capability for organizational systems that includes preparation, detection, analysis, containment, recovery, and user response activities
- CMMC Level 2 Practice 3.6.2: Track, document, and report incidents to designated officials and/or authorities both internal and external to the organization
- CMMC Level 2 Practice 3.6.3: Test the organizational incident response capability
- CMMC Level 2 Practice 3.7.1: Perform maintenance on organizational systems
- CMMC Level 2 Practice 3.7.2: Provide controls on the tools, techniques, mechanisms, and personnel used to conduct system maintenance
- CMMC Level 2 Practice 3.7.3: Ensure equipment removed for off-site maintenance is sanitized of any CUI
- CMMC Level 2 Practice 3.7.4: Check media containing diagnostic and test programs for malicious code before the media are used in organizational systems
- CMMC Level 2 Practice 3.7.5: Require multifactor authentication to establish nonlocal maintenance sessions via external network connections and terminate such connections when nonlocal maintenance is complete
- CMMC Level 2 Practice 3.7.6: Supervise the maintenance activities of maintenance personnel without required access authorization
- CMMC Level 2 Practice 3.8.1: Protect (i.e., physically control and securely store) system media containing CUI, both paper and digital
- CMMC Level 2 Practice 3.8.2: Limit access to CUI on system media to authorized users
- CMMC Level 2 Practice 3.8.3: Sanitize or destroy system media containing CUI before disposal or release for reuse
- CMMC Level 2 Practice 3.8.4: Mark media with necessary CUI markings and distribution limitations
- CMMC Level 2 Practice 3.8.5: Control access to media containing CUI and maintain accountability for media during transport
- CMMC Level 2 Practice 3.8.6: Implement cryptographic mechanisms to protect the confidentiality of CUI stored on digital media during transport unless otherwise protected by alternative physical safeguards
- CMMC Level 2 Practice 3.8.7: Control the use of removable media on system components
- CMMC Level 2 Practice 3.8.8: Prohibit the use of portable storage devices when such devices have no identifiable owner
- CMMC Level 2 Practice 3.8.9: Protect the confidentiality of backup CUI at storage locations
- CMMC Level 2 Practice 3.9.1: Screen individuals prior to authorizing access to organizational systems containing CUI
- CMMC Level 2 Practice 3.9.2: Ensure that organizational systems containing CUI are protected during and after personnel actions such as terminations and transfers
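Several of these practices translate directly into code. For Practice 3.5.10 (store and transmit only cryptographically-protected passwords), a minimal standard-library sketch of salted, memory-hard hashing (the scrypt parameters here are illustrative, not tuned for production):

```python
# Never store plaintext passwords; store a salted scrypt digest instead,
# and compare candidates in constant time (CMMC Level 2 Practice 3.5.10).
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for a new password; a fresh salt per password."""
    salt = salt or os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare without leaking timing information."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("wrong guess", salt, digest))  # → False
```

`hmac.compare_digest` avoids early-exit comparison, and the per-password salt defeats precomputed rainbow tables; both are baseline expectations for this control.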
COBIT
40 requirements · COBIT
- APO01: Managed I&T Management Framework
- APO02: Managed Strategy
- APO03: Managed Enterprise Architecture
- APO04: Managed Innovation
- APO05: Managed Portfolio
- APO06: Managed Budget and Costs
- APO07: Managed Human Resources
- APO08: Managed Relationships
- APO09: Managed Service Level Agreements
- APO10: Managed Vendors
- APO11: Managed Quality
- APO12: Managed Risk
- APO13: Managed Security
- APO14: Managed Data
- BAI01: Managed Programs and Projects
- BAI02: Managed Requirements Definition
- BAI03: Managed Solutions Identification and Build
- BAI04: Managed Availability and Capacity
- BAI05: Managed Organizational Change
- BAI06: Managed IT Changes
- BAI07: Managed IT Change Acceptance and Transitioning
- BAI08: Managed Knowledge
- BAI09: Managed Assets
- BAI10: Managed Configuration
- BAI11: Managed IT Projects
- DSS01: Managed Operations
- DSS02: Managed Service Requests and Incidents
- DSS03: Managed Problems
- DSS04: Managed Continuity
- DSS05: Managed Security Services
- DSS06: Managed Business Process Controls
- EDM01: Ensured Governance Framework Setting and Maintenance
- EDM02: Ensured Benefits Delivery
- EDM03: Ensured Risk Optimization
- EDM04: Ensured Resource Optimization
- EDM05: Ensured Stakeholder Engagement
- MEA01: Managed Performance and Conformance Monitoring
- MEA02: Managed System of Internal Control
- MEA03: Managed Compliance with External Requirements
- MEA04: Managed Assurance
COSO
17 requirements · COSO
- Principle 1: Demonstrates commitment to integrity and values
- Principle 10: Selects and develops control activities that help mitigate risks
- Principle 11: Selects and develops general controls over technology
- Principle 12: Bases controls on thorough policies and procedures
- Principle 13: Uses relevant, high-quality information
- Principle 14: Communicates internally to support controls
- Principle 15: Communicates externally
- Principle 16: Conducts ongoing and/or separate evaluations
- Principle 17: Evaluates and communicates deficiencies
- Principle 2: Demonstrates independence and exercises oversight responsibility
- Principle 3: Establishes structure, authority and responsibility
- Principle 4: Demonstrates commitment to attracting, developing and retaining competent staff
- Principle 5: Enforces accountability
- Principle 6: Specifies suitable, specific objectives
- Principle 7: Identifies and analyzes risks
- Principle 8: Assesses fraud risk
- Principle 9: Identifies and analyzes significant changes
Data Security & Technology Compliance
5 requirements · NYDFS (State), SEC
- NYDFS Cybersecurity Regulation (23 NYCRR 500)
- SEC Artificial Intelligence Marketing Compliance - AI Washing Prevention
- SEC Cybersecurity Incident Disclosure - Item 1.05 Form 8-K
- SEC Electronic Recordkeeping and Books & Records Requirements - Off-Channel Communications
- SEC Regulation SCI - Systems Compliance and Integrity
DCC
10 requirements · DCC
- Continuous compliance monitoring
- Control-to-evidence lifecycle management
- Cross-framework mapping governance
- Entity and stage-specific applicability overlays
- Evidence-centric operations
- Operational feedback loop and catalog improvement
- Quality gates for publishable compliance guidance
- Regulatory traceability and citation discipline
- Risk-prioritized remediation
- Unified control taxonomy
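The DCC requirements above — cross-framework mapping governance and a unified control taxonomy in particular — can be pictured as one internal control keyed to requirement IDs in several frameworks. A minimal sketch; the control ID `AC-ENC-01` and the mapping structure are assumptions for illustration, not part of any framework:

```python
# One internal "encrypt data at rest" control mapped to requirements that
# appear in this library. The control ID and schema are illustrative.
CONTROL_MAP = {
    "AC-ENC-01": {
        "CMMC Level 2": ["3.8.6"],          # cryptographic protection of CUI
        "ISO 27001": ["Annex A 8.24"],      # use of cryptography
        "GDPR": ["Article 32"],             # security of processing
    }
}

def coverage_gaps(control_map, frameworks):
    """For each control, report target frameworks with no mapped requirement."""
    gaps = {}
    for control, mapped in control_map.items():
        missing = [fw for fw in frameworks if fw not in mapped]
        if missing:
            gaps[control] = missing
    return gaps

print(coverage_gaps(CONTROL_MAP, ["CMMC Level 2", "ISO 27001", "GDPR", "DORA"]))
# → {'AC-ENC-01': ['DORA']}
```

A gap report like this is one concrete form the "risk-prioritized remediation" and "evidence-centric operations" requirements can take.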
DORA
64 requirements · DORA
- Article 1: Subject matter
- Article 10: Detection
- Article 11: Response and recovery
- Article 12: Backup policies and procedures, restoration and recovery procedures and methods
- Article 13: Learning and evolving
- Article 14: Communication
- Article 15: Further harmonisation of ICT risk management tools, methods, processes and policies
- Article 16: Simplified ICT risk management framework
- Article 17: ICT-related incident management process
- Article 18: Classification of ICT-related incidents and cyber threats
- Article 19: Reporting of major ICT-related incidents and voluntary notification of significant cyber threats
- Article 2: Scope
- Article 20: Harmonisation of reporting content and templates
- Article 21: Centralisation of reporting of major ICT-related incidents
- Article 22: Supervisory feedback
- Article 23: Operational or security payment-related incidents concerning credit institutions, payment institutions, account information service providers, and electronic money institutions
- Article 24: General requirements for the performance of digital operational resilience testing
- Article 25: Testing of ICT tools and systems
- Article 26: Advanced testing of ICT tools, systems and processes based on TLPT
- Article 27: Requirements for testers for the carrying out of TLPT
- Article 28: General principles
- Article 29: Preliminary assessment of ICT concentration risk at entity level
- Article 3: Definitions
- Article 30: Key contractual provisions
- Article 31: Designation of critical ICT third-party service providers
- Article 32: Structure of the Oversight Framework
- Article 33: Tasks of the Lead Overseer
- Article 34: Operational coordination between Lead Overseers
- Article 35: Powers of the Lead Overseer
- Article 36: Exercise of the powers of the Lead Overseer outside the Union
- Article 37: Request for information
- Article 38: General investigations
- Article 39: Inspections
- Article 4: Proportionality principle
- Article 40: Ongoing oversight
- Article 41: Harmonisation of conditions enabling the conduct of the oversight activities
- Article 42: Follow-up by competent authorities
- Article 43: Oversight fees
- Article 44: International cooperation
- Article 45: Information-sharing arrangements on cyber threat information and intelligence
- Article 46: Competent authorities
- Article 47: Cooperation with structures and authorities established by Directive (EU) 2022/2555
- Article 48: Cooperation between authorities
- Article 49: Financial cross-sector exercises, communication and cooperation
- Article 5: Governance and organisation
- Article 50: Administrative penalties and remedial measures
- Article 51: Exercise of the power to impose administrative penalties and remedial measures
- Article 52: Criminal penalties
- Article 53: Notification duties
- Article 54: Publication of administrative penalties
- Article 55: Professional secrecy
- Article 56: Data protection
- Article 57: Exercise of the delegation
- Article 58: Review clause
- Article 59: Amendments to Regulation (EC) No 1060/2009
- Article 6: ICT risk management framework
- Article 60: Amendments to Regulation (EU) No 648/2012
- Article 61: Amendments to Regulation (EU) No 909/2014
- Article 62: Amendments to Regulation (EU) No 600/2014
- Article 63: Amendment to Regulation (EU) 2016/1011
- Article 64: Entry into force and application
- Article 7: ICT systems, protocols and tools
- Article 8: Identification
- Article 9: Protection and prevention
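Article 18 requires classifying ICT-related incidents against criteria such as clients affected, duration, and data losses, with materiality thresholds set in regulatory technical standards. A sketch of such a classifier — the numeric thresholds below are purely illustrative placeholders, not the RTS values:

```python
# Illustrative incident classifier in the spirit of DORA Article 18.
# The thresholds (10% of clients, 24 hours downtime) are invented for the
# example; real thresholds come from the applicable regulatory standards.
def classify_incident(clients_affected_pct: float,
                      downtime_hours: float,
                      data_loss: bool) -> str:
    major = clients_affected_pct >= 10 or downtime_hours >= 24 or data_loss
    return "major" if major else "non-major"

print(classify_incident(clients_affected_pct=12.0, downtime_hours=2.0, data_loss=False))
# → major
```

Classification matters operationally because only major incidents trigger the Article 19 reporting obligations.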
FedRAMP
10 requirements · FedRAMP
- Annual assessment and authorization maintenance
- Authorization package readiness
- Baseline control implementation
- Configuration baseline and hardening governance
- Continuous monitoring
- Control inheritance and shared responsibility management
- Incident reporting and federal stakeholder coordination
- POA&M governance
- Third-party assessment readiness
- Vulnerability scanning and remediation SLAs
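The "vulnerability scanning and remediation SLAs" and "POA&M governance" requirements above tie deadlines to finding severity — commonly summarized as 30/90/180 days for high/moderate/low findings from identification under FedRAMP continuous monitoring. A minimal deadline calculator under that assumption:

```python
from datetime import date, timedelta

# Assumed severity-to-SLA mapping (days from identification); confirm against
# current FedRAMP continuous monitoring guidance before relying on it.
SLA_DAYS = {"high": 30, "moderate": 90, "low": 180}

def poam_due_date(identified: date, severity: str) -> date:
    """Remediation deadline for a POA&M item."""
    return identified + timedelta(days=SLA_DAYS[severity])

def is_overdue(identified: date, severity: str, today: date) -> bool:
    return today > poam_due_date(identified, severity)

print(poam_due_date(date(2024, 1, 1), "high"))                      # → 2024-01-31
print(is_overdue(date(2024, 1, 1), "moderate", date(2024, 4, 5)))   # → True
```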
FINRA Communications Supervision
12 requirements · FINRA
- Books and records retrieval readiness
- Communications content standards
- Correspondence and internal communication review
- Exception escalation and remediation governance
- Off-channel communication supervision
- Principal approval workflow controls
- Recordkeeping and retention
- Research and correspondence distinction controls
- Retail communication filing obligations
- Supervisory system for communications
- Supervisory training and annual certification
- Training and attestation
GDPR
99 requirements · GDPR
- Article 1: Subject-matter and objectives
- Article 10: Processing of personal data relating to criminal convictions and offences
- Article 11: Processing which does not require identification
- Article 12: Transparent information, communication and modalities for the exercise of the rights of the data subject
- Article 13: Information to be provided where personal data are collected from the data subject
- Article 14: Information to be provided where personal data have not been obtained from the data subject
- Article 15: Right of access by the data subject
- Article 16: Right to rectification
- Article 17: Right to erasure (‘right to be forgotten’)
- Article 18: Right to restriction of processing
- Article 19: Notification obligation regarding rectification or erasure of personal data or restriction of processing
- Article 2: Material scope
- Article 20: Right to data portability
- Article 21: Right to object
- Article 22: Automated individual decision-making, including profiling
- Article 23: Restrictions
- Article 24: Responsibility of the controller
- Article 25: Data protection by design and by default
- Article 26: Joint controllers
- Article 27: Representatives of controllers or processors not established in the Union
- Article 28: Processor
- Article 29: Processing under the authority of the controller or processor
- Article 3: Territorial scope
- Article 30: Records of processing activities
- Article 31: Cooperation with the supervisory authority
- Article 32: Security of processing
- Article 33: Notification of a personal data breach to the supervisory authority
- Article 34: Communication of a personal data breach to the data subject
- Article 35: Data protection impact assessment
- Article 36: Prior consultation
- Article 37: Designation of the data protection officer
- Article 38: Position of the data protection officer
- Article 39: Tasks of the data protection officer
- Article 4: Definitions
- Article 40: Codes of conduct
- Article 41: Monitoring of approved codes of conduct
- Article 42: Certification
- Article 43: Certification bodies
- Article 44: General principle for transfers
- Article 45: Transfers on the basis of an adequacy decision
- Article 46: Transfers subject to appropriate safeguards
- Article 47: Binding corporate rules
- Article 48: Transfers or disclosures not authorised by Union law
- Article 49: Derogations for specific situations
- Article 5: Principles relating to processing of personal data
- Article 50: International cooperation for the protection of personal data
- Article 51: Supervisory authority
- Article 52: Independence
- Article 53: General conditions for the members of the supervisory authority
- Article 54: Rules on the establishment of the supervisory authority
- Article 55: Competence
- Article 56: Competence of the lead supervisory authority
- Article 57: Tasks
- Article 58: Powers
- Article 59: Activity reports
- Article 6: Lawfulness of processing
- Article 60: Cooperation between the lead supervisory authority and the other supervisory authorities concerned
- Article 61: Mutual assistance
- Article 62: Joint operations of supervisory authorities
- Article 63: Consistency mechanism
- Article 64: Opinion of the Board
- Article 65: Dispute resolution by the Board
- Article 66: Urgency procedure
- Article 67: Exchange of information
- Article 68: European Data Protection Board
- Article 69: Independence
- Article 7: Conditions for consent
- Article 70: Tasks of the Board
- Article 71: Reports
- Article 72: Procedure
- Article 73: Chair
- Article 74: Tasks of the Chair
- Article 75: Secretariat
- Article 76: Confidentiality
- Article 77: Right to lodge a complaint with a supervisory authority
- Article 78: Right to an effective judicial remedy against a supervisory authority
- Article 79: Right to an effective judicial remedy against a controller or processor
- Article 8: Conditions applicable to child's consent in relation to information society services
- Article 80: Representation of data subjects
- Article 81: Suspension of proceedings
- Article 82: Right to compensation and liability
- Article 83: General conditions for imposing administrative fines
- Article 84: Penalties
- Article 85: Processing and freedom of expression and information
- Article 86: Processing and public access to official documents
- Article 87: Processing of the national identification number
- Article 88: Processing in the context of employment
- Article 89: Safeguards and derogations relating to processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes
- Article 9: Processing of special categories of personal data
- Article 90: Obligations of secrecy
- Article 91: Existing data protection rules of churches and religious associations
- Article 92: Exercise of the delegation
- Article 93: Committee procedure
- Article 94: Repeal of Directive 95/46/EC
- Article 95: Relationship with Directive 2002/58/EC
- Article 96: Relationship with previously concluded Agreements
- Article 97: Commission reports
- Article 98: Review of other Union legal acts on data protection
- Article 99: Entry into force and application
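Several of the articles above carry hard clocks; Article 33, for instance, requires notifying the supervisory authority without undue delay and, where feasible, not later than 72 hours after becoming aware of a personal data breach. A simple deadline helper (naive datetimes for brevity):

```python
from datetime import datetime, timedelta

def art33_deadline(aware_at: datetime) -> datetime:
    """Latest notification time under the Article 33 72-hour clock."""
    return aware_at + timedelta(hours=72)

aware = datetime(2024, 6, 3, 14, 30)
print(art33_deadline(aware))  # → 2024-06-06 14:30:00
```

A production version would use timezone-aware datetimes and also track the Article 34 data-subject communication, which has no fixed hour count.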
HICP
10 requirements · HICP
- Access management
- Data protection and loss prevention
- Data protection and loss prevention operations
- Email protection systems
- Endpoint protection
- Identity and access management controls
- Incident response and cyber resilience
- Medical device and legacy system risk management
- Network management and resilience
- Third-party and business partner cybersecurity
HIPAA
10 requirements · HIPAA
HITRUST
12 requirements · HITRUST
- Access and identity controls
- Asset and configuration controls
- Assurance and continuous improvement
- Business continuity and disaster recovery readiness
- Compliance assurance and control testing
- Data classification and handling safeguards
- Incident response and resilience
- Policy and standards governance
- Program governance and risk management
- Security operations and monitoring
- Third-party and supply chain risk assurance
- Vulnerability and threat management
Investment Management Operations & Asset Protection
12 requirements · FINRA Rules, Investment Advisers Act, Investment Advisers Act Section 206, Securities Exchange Act, Unknown
- Best Execution: 2025 Standards (SEC Trend)
- Best Execution: Fiduciary Duty (SEC 206)
- Best Execution: Trade Allocation (SEC 206)
- Custody: Comprehensive Compliance (SEC 206(4)-2)
- Custody: Digital Assets (SEC Enforcement)
- Custody: Digital Assets Trend (SEC Trend 2025)
- Custody: Qualified Custodian (SEC 206(4)-2)
- Custody: Surprise Examinations (SEC 206(4)-2)
- Customer Mail Retention (FINRA 2268)
- Cybersecurity: Reg S-P Safeguards (SEC Reg S-P)
- Proxy Voting: RIA Requirements (SEC 206(4)-6)
- Soft Dollars: Section 28(e) (Exchange Act)
ISO 20000
10 requirements · ISO 20000
- Change and release management governance
- Incident and service request management
- Problem and root-cause management
- Service catalog and service level governance
- Service continuity and availability controls
- Service delivery and support operations
- Service design and transition controls
- Service management governance
- Service performance monitoring and improvement
- Supplier and relationship management
ISO 22301
10 requirements · ISO 22301
- Business continuity governance
- Business continuity objectives and service priorities
- Business impact analysis and risk assessment
- Continual improvement
- Continuity strategy and plans
- Crisis communications management
- Exercise and validation
- Exercise program governance
- Post-disruption review and corrective action
- Resource and dependency readiness
ISO 27001
93 requirements · ISO 27001
- Annex A 5.1: Policies for Information Security
- Annex A 5.10: Acceptable Use of Information and Other Associated Assets
- Annex A 5.11: Return of Assets
- Annex A 5.12: Classification of Information
- Annex A 5.13: Labelling of Information
- Annex A 5.14: Information Transfer
- Annex A 5.15: Access Control
- Annex A 5.16: Identity Management
- Annex A 5.17: Authentication Information
- Annex A 5.18: Access Rights
- Annex A 5.19: Information Security in Supplier Relationships
- Annex A 5.2: Information Security Roles and Responsibilities
- Annex A 5.20: Addressing Information Security Within Supplier Agreements
- Annex A 5.21: Managing Information Security in the ICT Supply Chain
- Annex A 5.22: Monitoring, Review and Change Management of Supplier Services
- Annex A 5.23: Information Security for Use of Cloud Services
- Annex A 5.24: Information Security Incident Management Planning and Preparation
- Annex A 5.25: Assessment and Decision on Information Security Events
- Annex A 5.26: Response to Information Security Incidents
- Annex A 5.27: Learning From Information Security Incidents
- Annex A 5.28: Collection of Evidence
- Annex A 5.29: Information Security During Disruption
- Annex A 5.3: Segregation of Duties
- Annex A 5.30: ICT Readiness for Business Continuity
- Annex A 5.31: Legal, Statutory, Regulatory and Contractual Requirements
- Annex A 5.32: Intellectual Property Rights
- Annex A 5.33: Protection of Records
- Annex A 5.34: Privacy and Protection of PII
- Annex A 5.35: Independent Review of Information Security
- Annex A 5.36: Compliance With Policies, Rules and Standards for Information Security
- Annex A 5.37: Documented Operating Procedures
- Annex A 5.4: Management Responsibilities
- Annex A 5.5: Contact With Authorities
- Annex A 5.6: Contact With Special Interest Groups
- Annex A 5.7: Threat Intelligence
- Annex A 5.8: Information Security in Project Management
- Annex A 5.9: Inventory of Information and Other Associated Assets
- Annex A 6.1: Screening
- Annex A 6.2: Terms and Conditions of Employment
- Annex A 6.3: Information Security Awareness, Education and Training
- Annex A 6.4: Disciplinary Process
- Annex A 6.5: Responsibilities After Termination or Change of Employment
- Annex A 6.6: Confidentiality or Non-Disclosure Agreements
- Annex A 6.7: Remote Working
- Annex A 6.8: Information Security Event Reporting
- Annex A 7.1: Physical Security Perimeters
- Annex A 7.10: Storage Media
- Annex A 7.11: Supporting Utilities
- Annex A 7.12: Cabling Security
- Annex A 7.13: Equipment Maintenance
- Annex A 7.14: Secure Disposal or Re-use of Equipment
- Annex A 7.2: Physical Entry
- Annex A 7.3: Securing Offices, Rooms and Facilities
- Annex A 7.4: Physical Security Monitoring
- Annex A 7.5: Protecting Against Physical and Environmental Threats
- Annex A 7.6: Working in Secure Areas
- Annex A 7.7: Clear Desk and Clear Screen
- Annex A 7.8: Equipment Siting and Protection
- Annex A 7.9: Security of Assets Off-Premises
- Annex A 8.1: User Endpoint Devices
- Annex A 8.10: Information Deletion
- Annex A 8.11: Data Masking
- Annex A 8.12: Data Leakage Prevention
- Annex A 8.13: Information Backup
- Annex A 8.14: Redundancy of Information Processing Facilities
- Annex A 8.15: Logging
- Annex A 8.16: Monitoring Activities
- Annex A 8.17: Clock Synchronisation
- Annex A 8.18: Use of Privileged Utility Programs
- Annex A 8.19: Installation of Software on Operational Systems
- Annex A 8.2: Use of Privileged Access Rights
- Annex A 8.20: Network Security
- Annex A 8.21: Security of Network Services
- Annex A 8.22: Segregation of Networks
- Annex A 8.23: Web Filtering
- Annex A 8.24: Use of Cryptography
- Annex A 8.25: Secure Development Life Cycle
- Annex A 8.26: Application Security Requirements
- Annex A 8.27: Secure System Architecture and Engineering Principles
- Annex A 8.28: Secure Coding
- Annex A 8.29: Security Testing in Development and Acceptance
- Annex A 8.3: Information Access Restriction
- Annex A 8.30: Outsourced Development
- Annex A 8.31: Separation of Development, Test and Production Environments
- Annex A 8.32: Change Management
- Annex A 8.33: Test Information
- Annex A 8.34: Protection of Information Systems During Audit Testing
- Annex A 8.4: Access to Source Code
- Annex A 8.5: Secure Authentication
- Annex A 8.6: Capacity Management
- Annex A 8.7: Protection Against Malware
- Annex A 8.8: Management of Technical Vulnerabilities
- Annex A 8.9: Configuration Management
ISO 27017
10 requirements · ISO 27017
- Cloud administrative operations security
- Cloud API and interface security
- Cloud asset and tenant segregation controls
- Cloud customer onboarding and offboarding controls
- Cloud lifecycle and change security
- Cloud monitoring and incident handling
- Cloud service continuity and recovery
- Cloud shared responsibility governance
- Shared responsibility communications
- Virtual environment isolation assurance
ISO 27018
10 requirements · ISO 27018
- Consent and lawful processing support for cloud PII
- Customer audit and assurance support
- Data subject control support
- PII disclosure and transfer controls
- PII incident response and notification
- PII processing governance in cloud
- PII retention and deletion
- PII return, transfer, and disposal controls
- Restriction on secondary use of customer PII
- Subprocessor transparency and governance
ISO 27701
10 requirements · ISO 27701
- Breach handling and privacy incident response
- Consent and legal basis support
- Controller obligations
- Data subject request handling
- Monitoring and continual privacy improvement
- PII inventory and processing records
- Privacy information management governance
- Privacy risk and control integration
- Processor obligations
- Third-party privacy assurance
ISO 42001
10 requirements · ISO 42001
- AI impact assessment and risk acceptance
- AI lifecycle risk management
- AI management system governance
- AI use-case inventory and classification
- Auditability and accountability records
- Data and model oversight
- Data governance for model development and operation
- Human oversight and transparency
- Monitoring and improvement
- Operational monitoring and anomaly response
ISO 9001
10 requirements · ISO 9001
- Competence and operational discipline
- Continual improvement program
- Corrective action and continual improvement
- Customer feedback and complaint handling
- Nonconformity and corrective action process
- Performance evaluation
- Process and customer requirements management
- Quality management governance
- Quality planning and objective setting
- Supplier quality governance
NIS 2
46 requirements · NIS 2
- Article 1: Subject matter
- Article 10: Computer security incident response teams (CSIRTs)
- Article 11: Requirements, technical capabilities and tasks of CSIRTs
- Article 12: Coordinated vulnerability disclosure and a European vulnerability database
- Article 13: Cooperation at national level
- Article 14: Cooperation Group
- Article 15: CSIRTs network
- Article 16: European cyber crisis liaison organisation network (EU-CyCLONe)
- Article 17: International cooperation
- Article 18: Report on the state of cybersecurity in the Union
- Article 19: Peer reviews
- Article 2: Scope
- Article 20: Governance
- Article 21: Cybersecurity risk-management measures
- Article 22: Union level coordinated security risk assessments of critical supply chains
- Article 23: Reporting obligations
- Article 24: Use of European cybersecurity certification schemes
- Article 25: Standardisation
- Article 26: Jurisdiction and territoriality
- Article 27: Registry of entities
- Article 28: Database of domain name registration data
- Article 29: Cybersecurity information-sharing arrangements
- Article 3: Essential and important entities
- Article 30: Voluntary notification of relevant information
- Article 31: General aspects concerning supervision and enforcement
- Article 32: Supervisory and enforcement measures in relation to essential entities
- Article 33: Supervisory and enforcement measures in relation to important entities
- Article 34: General conditions for imposing administrative fines on essential and important entities
- Article 35: Infringements entailing a personal data breach
- Article 36: Penalties
- Article 37: Mutual assistance
- Article 38: Exercise of the delegation
- Article 39: Committee procedure
- Article 4: Sector-specific Union legal acts
- Article 40: Review
- Article 41: Transposition
- Article 42: Amendment of Regulation (EU) No 910/2014
- Article 43: Amendment of Directive (EU) 2018/1972
- Article 44: Repeal
- Article 45: Entry into force
- Article 46: Addressees
- Article 5: Minimum harmonisation
- Article 6: Definitions
- Article 7: National cybersecurity strategy
- Article 8: Competent authorities and single points of contact
- Article 9: National cyber crisis management frameworks
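Article 23 staggers significant-incident reporting: an early warning within 24 hours of awareness, an incident notification within 72 hours, and a final report within a month of that notification (as commonly summarized; one month is approximated as 30 days below). A sketch of the resulting timeline:

```python
from datetime import datetime, timedelta

def nis2_reporting_timeline(aware_at: datetime) -> dict:
    """Reporting deadlines per the NIS 2 Article 23 staged scheme."""
    notification = aware_at + timedelta(hours=72)
    return {
        "early_warning": aware_at + timedelta(hours=24),
        "incident_notification": notification,
        "final_report": notification + timedelta(days=30),  # "one month", approximated
    }

t = nis2_reporting_timeline(datetime(2024, 9, 1, 9, 0))
print(t["early_warning"])  # → 2024-09-02 09:00:00
```

The staged structure is the point: the early warning is deliberately lightweight so the 24-hour deadline is achievable before full facts are known.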
NIST AI RMF
72 requirements · NIST AI RMF
- GOVERN-1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.
- GOVERN-1.2: The characteristics of trustworthy AI are integrated into organizational policies, processes, procedures, and practices.
- GOVERN-1.3: Processes, procedures, and practices are in place to determine the needed level of risk management activities based on the organization's risk tolerance.
- GOVERN-1.4: The risk management process and its outcomes are established through transparent policies, procedures, and other controls based on organizational risk priorities.
- GOVERN-1.5: Ongoing monitoring and periodic review of the risk management process and its outcomes are planned and organizational roles and responsibilities clearly defined, including determining the frequency of periodic review.
- GOVERN-1.6: Mechanisms are in place to inventory AI systems and are resourced according to organizational risk priorities.
- GOVERN-1.7: Processes and procedures are in place for decommissioning and phasing out AI systems safely and in a manner that does not increase risks or decrease the organization’s trustworthiness.
- GOVERN-2.1: Roles and responsibilities and lines of communication related to mapping, measuring, and managing AI risks are documented and are clear to individuals and teams throughout the organization.
- GOVERN-2.2: The organization’s personnel and partners receive AI risk management training to enable them to perform their duties and responsibilities consistent with related policies, procedures, and agreements.
- GOVERN-2.3: Executive leadership of the organization takes responsibility for decisions about risks associated with AI system development and deployment.
- GOVERN-3.1: Decision-making related to mapping, measuring, and managing AI risks throughout the lifecycle is informed by a diverse team (e.g., diversity of demographics, disciplines, experience, expertise, and backgrounds).
- GOVERN-3.2: Policies and procedures are in place to define and differentiate roles and responsibilities for human-AI configurations and oversight of AI systems.
- GOVERN-4.1: Organizational policies and practices are in place to foster a critical thinking and safety-first mindset in the design, development, deployment, and uses of AI systems to minimize potential negative impacts.
- GOVERN-4.2: Organizational teams document the risks and potential impacts of the AI technology they design, develop, deploy, evaluate, and use, and they communicate about the impacts more broadly.
- GOVERN-4.3: Organizational practices are in place to enable AI testing, identification of incidents, and information sharing.
- GOVERN-5.1: Organizational policies and practices are in place to collect, consider, prioritize, and integrate feedback from those external to the team that developed or deployed the AI system regarding the potential individual and societal impacts related to AI risks.
- GOVERN-5.2: Mechanisms are established to enable the team that developed or deployed AI systems to regularly incorporate adjudicated feedback from relevant AI actors into system design and implementation.
- GOVERN-6.1: Policies and procedures are in place that address AI risks associated with third-party entities, including risks of infringement of a third-party’s intellectual property or other rights.
- GOVERN-6.2: Contingency processes are in place to handle failures or incidents in third-party data or AI systems deemed to be high-risk.
- MANAGE-1.1: A determination is made as to whether the AI system achieves its intended purposes and stated objectives and whether its development or deployment should proceed.
- MANAGE-1.2: Treatment of documented AI risks is prioritized based on impact, likelihood, and available resources or methods.
- MANAGE-1.3: Responses to the AI risks deemed high priority, as identified by the map function, are developed, planned, and documented. Risk response options can include mitigating, transferring, avoiding, or accepting.
- MANAGE-1.4: Negative residual risks (defined as the sum of all unmitigated risks) to both downstream acquirers of AI systems and end users are documented.
- MANAGE-2.1: Resources required to manage AI risks are taken into account – along with viable non-AI alternative systems, approaches, or methods – to reduce the magnitude or likelihood of potential impacts.
- MANAGE-2.2: Mechanisms are in place and applied to sustain the value of deployed AI systems.
- MANAGE-2.3: Procedures are followed to respond to and recover from a previously unknown risk when it is identified.
- MANAGE-2.4: Mechanisms are in place and applied, and responsibilities are assigned and understood, to supersede, disengage, or deactivate AI systems that demonstrate performance or outcomes inconsistent with intended use.
- MANAGE-3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.
- MANAGE-3.2: Pre-trained models which are used for development are monitored as part of AI system regular monitoring and maintenance.
- MANAGE-4.1: Post-deployment AI system monitoring plans are implemented, including mechanisms for capturing and evaluating input from users and other relevant AI actors, appeal and override, decommissioning, incident response, recovery, and change management.
- MANAGE-4.2: Measurable activities for continual improvements are integrated into AI system updates and include regular engagement with interested parties, including relevant AI actors.
- MANAGE-4.3: Incidents and errors are communicated to relevant AI actors, including affected communities. Processes for tracking, responding to, and recovering from incidents and errors are followed and documented.
- MAP-1.1: Intended purposes, potentially beneficial uses, context-specific laws, norms and expectations, and prospective settings in which the AI system will be deployed are understood and documented. Considerations include: the specific set or types of users along with their expectations; potential positive and negative impacts of system uses to individuals, communities, organizations, society, and the planet; assumptions and related limitations about AI system purposes, uses, and risks across the development or product AI lifecycle; and related TEVV and system metrics.
- MAP-1.2: Interdisciplinary AI actors, competencies, skills, and capacities for establishing context reflect demographic diversity and broad domain and user experience expertise, and their participation is documented. Opportunities for interdisciplinary collaboration are prioritized.
- MAP-1.3: The organization’s mission and relevant goals for AI technology are understood and documented.
- MAP-1.4: The business value or context of business use has been clearly defined or – in the case of assessing existing AI systems – re-evaluated.
- MAP-1.5: Organizational risk tolerances are determined and documented.
- MAP-1.6: System requirements (e.g., “the system shall respect the privacy of its users”) are elicited from and understood by relevant AI actors. Design decisions take socio-technical implications into account to address AI risks.
- MAP-2.1: The specific tasks and methods used to implement the tasks that the AI system will support are defined (e.g., classifiers, generative models, recommenders).
- MAP-2.2: Information about the AI system’s knowledge limits and how system output may be utilized and overseen by humans is documented. Documentation provides sufficient information to assist relevant AI actors when making decisions and taking subsequent actions.
- MAP-2.3: Scientific integrity and TEVV considerations are identified and documented, including those related to experimental design, data collection and selection (e.g., availability, representativeness, suitability), system trustworthiness
- MAP-3.1: Potential benefits of intended AI system functionality and performance are examined and documented.
- MAP-3.2: Potential costs, including non-monetary costs, which result from expected or realized AI errors or system functionality and trustworthiness – as connected to organizational risk tolerance – are examined and documented.
- MAP-3.3: Targeted application scope is specified and documented based on the system’s capability, established context, and AI system categorization.
- MAP-3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant technical standards and certifications – are defined, assessed, and documented.
- MAP-3.5: Processes for human oversight are defined, assessed, and documented in accordance with organizational policies from the govern function.
- MAP-4.1: Approaches for mapping AI technology and legal risks of its components – including the use of third-party data or software – are in place, followed, and documented, as are risks of infringement of a third party’s intellectual property or other rights.
- MAP-4.2: Internal risk controls for components of the AI system, including third-party AI technologies, are identified and documented.
- MAP-5.1: Likelihood and magnitude of each identified impact (both potentially beneficial and harmful) based on expected use, past uses of AI systems in similar contexts, public incident reports, feedback from those external to the team that developed or deployed the AI system, or other data are identified and documented.
- MAP-5.2: Practices and personnel for supporting regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are in place and documented.
- MEASURE-1.1: Approaches and metrics for measurement of AI risks enumerated during the map function are selected for implementation starting with the most significant AI risks. The risks or trustworthiness characteristics that will not – or cannot – be measured are properly documented.
- MEASURE-1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and potential impacts on affected communities.
- MEASURE-1.3: Internal experts who did not serve as front-line developers for the system and/or independent assessors are involved in regular assessments and updates. Domain experts, users, AI actors external to the team that developed or deployed the AI system, and affected communities are consulted in support of assessments as necessary per organizational risk tolerance.
- MEASURE-2.1: Test sets, metrics, and details about the tools used during TEVV are documented.
- MEASURE-2.10: Privacy risk of the AI system – as identified in the map function – is examined and documented.
- MEASURE-2.11: Fairness and bias – as identified in the map function – are evaluated and results are documented.
- MEASURE-2.12: Environmental impact and sustainability of AI model training and management activities – as identified in the map function – are assessed and documented.
- MEASURE-2.13: Effectiveness of the employed TEVV metrics and processes in the measure function are evaluated and documented.
- MEASURE-2.2: Evaluations involving human subjects meet applicable requirements (including human subject protection) and are representative of the relevant population.
- MEASURE-2.3: AI system performance or assurance criteria are measured qualitatively or quantitatively and demonstrated for conditions similar to deployment setting(s). Measures are documented.
- MEASURE-2.4: The functionality and behavior of the AI system and its components – as identified in the map function – are monitored when in production.
- MEASURE-2.5: The AI system to be deployed is demonstrated to be valid and reliable. Limitations of the generalizability beyond the conditions under which the technology was developed are documented.
- MEASURE-2.6: The AI system is evaluated regularly for safety risks – as identified in the map function. The AI system to be deployed is demonstrated to be safe, its residual negative risk does not exceed the risk tolerance, and it can fail safely, particularly if made to operate beyond its knowledge limits. Safety metrics reflect system reliability and robustness, real-time monitoring, and response times for AI system failures.
- MEASURE-2.7: AI system security and resilience – as identified in the map function – are evaluated and documented.
- MEASURE-2.8: Risks associated with transparency and accountability – as identified in the map function – are examined and documented.
- MEASURE-2.9: The AI model is explained, validated, and documented, and AI system output is interpreted within its context – as identified in the map function – to inform responsible use and governance.
- MEASURE-3.1: Approaches, personnel, and documentation are in place to regularly identify and track existing, unanticipated, and emergent AI risks based on factors such as intended and actual performance in deployed contexts.
- MEASURE-3.2: Risk tracking approaches are considered for settings where AI risks are difficult to assess using currently available measurement techniques or where metrics are not yet available.
- MEASURE-3.3: Feedback processes for end users and impacted communities to report problems and appeal system outcomes are established and integrated into AI system evaluation metrics.
- MEASURE-4.1: Measurement approaches for identifying AI risks are connected to deployment context(s) and informed through consultation with domain experts and other end users. Approaches are documented.
- MEASURE-4.2: Measurement results regarding AI system trustworthiness in deployment context(s) and across the AI lifecycle are informed by input from domain experts and relevant AI actors to validate whether the system is performing consistently as intended. Results are documented.
- MEASURE-4.3: Measurable performance improvements or declines based on consultations with relevant AI actors, including affected communities, and field data about context-relevant risks and trustworthiness characteristics are identified and documented.
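The AI RMF entries above all follow a regular "ID: text" bullet shape, so a requirements library like this one can ingest each line as a structured record. A minimal sketch, assuming a hypothetical `Requirement` record and line format (not this site's actual schema):

```python
import re
from dataclasses import dataclass


@dataclass
class Requirement:
    framework: str
    control_id: str
    text: str


# Hypothetical bullet format, matching entries such as
# "- MAP-1.5: Organizational risk tolerances are determined and documented."
LINE = re.compile(r"^- (?P<id>[A-Z0-9.\-]+): (?P<text>.+)$")


def parse_requirement(line: str, framework: str) -> Requirement:
    """Split one catalog bullet into a structured record (id + text)."""
    m = LINE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized requirement line: {line!r}")
    return Requirement(framework, m.group("id"), m.group("text"))


rec = parse_requirement(
    "- MAP-1.5: Organizational risk tolerances are determined and documented.",
    framework="NIST AI RMF",
)
# rec.control_id is "MAP-1.5"; rec.text holds the requirement statement.
```

The same parser would handle the other framework lists on this page, since CSF subcategories ("GV.RM-01"), 800-171 controls ("03.01.01"), and 800-53 controls ("AC-2") all fit the same `ID: title` pattern.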
NIST CSF 2.0
View framework →106 requirements · NIST_CSF_2_0
- DE.AE-02: Potentially adverse events are analyzed to better understand associated activities
- DE.AE-03: Information is correlated from multiple sources
- DE.AE-04: The estimated impact and scope of adverse events are understood
- DE.AE-06: Information on adverse events is provided to authorized staff and tools
- DE.AE-07: Cyber threat intelligence and other contextual information are integrated into the analysis
- DE.AE-08: Incidents are declared when adverse events meet the defined incident criteria
- DE.CM-01: Networks and network services are monitored to find potentially adverse events
- DE.CM-02: The physical environment is monitored to find potentially adverse events
- DE.CM-03: Personnel activity and technology usage are monitored to find potentially adverse events
- DE.CM-06: External service provider activities and services are monitored to find potentially adverse events
- DE.CM-09: Computing hardware and software, runtime environments, and their data are monitored to find potentially adverse events
- GV.OC-01: The organizational mission is understood and informs cybersecurity risk management
- GV.OC-02: Internal and external stakeholders are understood, and their needs and expectations regarding cybersecurity risk management are understood and considered
- GV.OC-03: Legal, regulatory, and contractual requirements regarding cybersecurity — including privacy and civil liberties obligations — are understood and managed
- GV.OC-04: Critical objectives, capabilities, and services that stakeholders depend on or expect from the organization are understood and communicated
- GV.OC-05: Outcomes, capabilities, and services that the organization depends on are understood and communicated
- GV.OV-01: Cybersecurity risk management strategy outcomes are reviewed to inform and adjust strategy and direction
- GV.OV-02: The cybersecurity risk management strategy is reviewed and adjusted to ensure coverage of organizational requirements and risks
- GV.OV-03: Organizational cybersecurity risk management performance is evaluated and reviewed for adjustments needed
- GV.PO-01: Policy for managing cybersecurity risks is established based on organizational context, cybersecurity strategy, and priorities and is communicated and enforced
- GV.PO-02: Policy for managing cybersecurity risks is reviewed, updated, communicated, and enforced to reflect changes in requirements, threats, technology, and organizational mission
- GV.RM-01: Risk management objectives are established and agreed to by organizational stakeholders
- GV.RM-02: Risk appetite and risk tolerance statements are established, communicated, and maintained
- GV.RM-03: Cybersecurity risk management activities and outcomes are included in enterprise risk management processes
- GV.RM-04: Strategic direction that describes appropriate risk response options is established and communicated
- GV.RM-05: Lines of communication across the organization are established for cybersecurity risks, including risks from suppliers and other third parties
- GV.RM-06: A standardized method for calculating, documenting, categorizing, and prioritizing cybersecurity risks is established and communicated
- GV.RM-07: Strategic opportunities (i.e., positive risks) are characterized and are included in organizational cybersecurity risk discussions
- GV.RR-01: Organizational leadership is responsible and accountable for cybersecurity risk and fosters a culture that is risk-aware, ethical, and continually improving
- GV.RR-02: Roles, responsibilities, and authorities related to cybersecurity risk management are established, communicated, understood, and enforced
- GV.RR-03: Adequate resources are allocated commensurate with the cybersecurity risk strategy, roles, responsibilities, and policies
- GV.RR-04: Cybersecurity is included in human resources practices
- GV.SC-01: A cybersecurity supply chain risk management program, strategy, objectives, policies, and processes are established and agreed to by organizational stakeholders
- GV.SC-02: Cybersecurity roles and responsibilities for suppliers, customers, and partners are established, communicated, and coordinated internally and externally
- GV.SC-03: Cybersecurity supply chain risk management is integrated into cybersecurity and enterprise risk management, risk assessment, and improvement processes
- GV.SC-04: Suppliers are known and prioritized by criticality
- GV.SC-05: Requirements to address cybersecurity risks in supply chains are established, prioritized, and integrated into contracts and other types of agreements with suppliers and other relevant third parties
- GV.SC-06: Planning and due diligence are performed to reduce risks before entering into formal supplier or other third-party relationships
- GV.SC-07: The risks posed by a supplier, their products and services, and other third parties are understood, recorded, prioritized, assessed, responded to, and monitored over the course of the relationship
- GV.SC-08: Relevant suppliers and other third parties are included in incident planning, response, and recovery activities
- GV.SC-09: Supply chain security practices are integrated into cybersecurity and enterprise risk management programs, and their performance is monitored throughout the technology product and service life cycle
- GV.SC-10: Cybersecurity supply chain risk management plans include provisions for activities that occur after the conclusion of a partnership or service agreement
- ID.AM-01: Inventories of hardware managed by the organization are maintained
- ID.AM-02: Inventories of software, services, and systems managed by the organization are maintained
- ID.AM-03: Representations of the organization’s authorized network communication and internal and external network data flows are maintained
- ID.AM-04: Inventories of services provided by suppliers are maintained
- ID.AM-05: Assets are prioritized based on classification, criticality, resources, and impact on the mission
- ID.AM-07: Inventories of data and corresponding metadata for designated data types are maintained
- ID.AM-08: Systems, hardware, software, services, and data are managed throughout their life cycles
- ID.IM-01: Improvements are identified from evaluations
- ID.IM-02: Improvements are identified from security tests and exercises, including those done in coordination with suppliers and relevant third parties
- ID.IM-03: Improvements are identified from execution of operational processes, procedures, and activities
- ID.IM-04: Incident response plans and other cybersecurity plans that affect operations are established, communicated, maintained, and improved
- ID.RA-01: Vulnerabilities in assets are identified, validated, and recorded
- ID.RA-02: Cyber threat intelligence is received from information sharing forums and sources
- ID.RA-03: Internal and external threats to the organization are identified and recorded
- ID.RA-04: Potential impacts and likelihoods of threats exploiting vulnerabilities are identified and recorded
- ID.RA-05: Threats, vulnerabilities, likelihoods, and impacts are used to understand inherent risk and inform risk response prioritization
- ID.RA-06: Risk responses are chosen, prioritized, planned, tracked, and communicated
- ID.RA-07: Changes and exceptions are managed, assessed for risk impact, recorded, and tracked
- ID.RA-08: Processes for receiving, analyzing, and responding to vulnerability disclosures are established
- ID.RA-09: The authenticity and integrity of hardware and software are assessed prior to acquisition and use
- ID.RA-10: Critical suppliers are assessed prior to acquisition
- PR.AA-01: Identities and credentials for authorized users, services, and hardware are managed by the organization
- PR.AA-02: Identities are proofed and bound to credentials based on the context of interactions
- PR.AA-03: Users, services, and hardware are authenticated
- PR.AA-04: Identity assertions are protected, conveyed, and verified
- PR.AA-05: Access permissions, entitlements, and authorizations are defined in a policy, managed, enforced, and reviewed, and incorporate the principles of least privilege and separation of duties
- PR.AA-06: Physical access to assets is managed, monitored, and enforced commensurate with risk
- PR.AT-01: Personnel are provided with awareness and training so that they possess the knowledge and skills to perform general tasks with cybersecurity risks in mind
- PR.AT-02: Individuals in specialized roles are provided with awareness and training so that they possess the knowledge and skills to perform relevant tasks with cybersecurity risks in mind
- PR.DS-01: The confidentiality, integrity, and availability of data-at-rest are protected
- PR.DS-02: The confidentiality, integrity, and availability of data-in-transit are protected
- PR.DS-10: The confidentiality, integrity, and availability of data-in-use are protected
- PR.DS-11: Backups of data are created, protected, maintained, and tested
- PR.IR-01: Networks and environments are protected from unauthorized logical access and usage
- PR.IR-02: The organization’s technology assets are protected from environmental threats
- PR.IR-03: Mechanisms are implemented to achieve resilience requirements in normal and adverse situations
- PR.IR-04: Adequate resource capacity to ensure availability is maintained
- PR.PS-01: Configuration management practices are established and applied
- PR.PS-02: Software is maintained, replaced, and removed commensurate with risk
- PR.PS-03: Hardware is maintained, replaced, and removed commensurate with risk
- PR.PS-04: Log records are generated and made available for continuous monitoring
- PR.PS-05: Installation and execution of unauthorized software are prevented
- PR.PS-06: Secure software development practices are integrated, and their performance is monitored throughout the software development life cycle
- RC.CO-03: Recovery activities and progress in restoring operational capabilities are communicated to designated internal and external stakeholders
- RC.CO-04: Public updates on incident recovery are shared using approved methods and messaging
- RC.RP-01: The recovery portion of the incident response plan is executed once initiated from the incident response process
- RC.RP-02: Recovery actions are selected, scoped, prioritized, and performed
- RC.RP-03: The integrity of backups and other restoration assets is verified before using them for restoration
- RC.RP-04: Critical mission functions and cybersecurity risk management are considered to establish post-incident operational norms
- RC.RP-05: The integrity of restored assets is verified, systems and services are restored, and normal operating status is confirmed
- RC.RP-06: The end of incident recovery is declared based on criteria, and incident-related documentation is completed
- RS.AN-03: Analysis is performed to establish what has taken place during an incident and the root cause of the incident
- RS.AN-06: Actions performed during an investigation are recorded, and the records’ integrity and provenance are preserved
- RS.AN-07: Incident data and metadata are collected, and their integrity and provenance are preserved
- RS.AN-08: An incident’s magnitude is estimated and validated
- RS.CO-02: Internal and external stakeholders are notified of incidents
- RS.CO-03: Information is shared with designated internal and external stakeholders
- RS.MA-01: The incident response plan is executed in coordination with relevant third parties once an incident is declared
- RS.MA-02: Incident reports are triaged and validated
- RS.MA-03: Incidents are categorized and prioritized
- RS.MA-04: Incidents are escalated or elevated as needed
- RS.MA-05: The criteria for initiating incident recovery are applied
- RS.MI-01: Incidents are contained
- RS.MI-02: Incidents are eradicated
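NIST CSF 2.0 subcategory IDs encode their place in the framework: the prefix before the first dot is the Function (GV, ID, PR, DE, RS, RC) and the token before the dash is the Category, so the flat list above can be regrouped by simple string splitting. An illustrative sketch using a few sample IDs drawn from the list:

```python
from collections import defaultdict

# Sample CSF 2.0 subcategory IDs from the list above; the format is
# Function.Category-Subcategory, e.g. "GV.RM-01" -> function "GV", category "GV.RM".
ids = ["GV.RM-01", "GV.RM-02", "ID.AM-01", "PR.AA-05", "DE.CM-01", "RS.MI-01", "RC.RP-01"]

by_function = defaultdict(list)
for sub_id in ids:
    function = sub_id.split(".", 1)[0]  # "GV.RM-01" -> "GV"
    by_function[function].append(sub_id)

# Distinct categories, e.g. "GV.RM" (Risk Management Strategy).
categories = sorted({sub_id.split("-", 1)[0] for sub_id in ids})
```

Grouping this way reproduces the Function-level view of the framework (Govern, Identify, Protect, Detect, Respond, Recover) without any lookup table.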
NIST SP 800-171
View framework →130 requirements · NIST_SP_800_171
- 03.01.01: Account Management
- 03.01.02: Access Enforcement
- 03.01.03: Information Flow Enforcement
- 03.01.04: Separation of Duties
- 03.01.05: Least Privilege
- 03.01.06: Least Privilege – Privileged Accounts
- 03.01.07: Least Privilege – Privileged Functions
- 03.01.08: Unsuccessful Logon Attempts
- 03.01.09: System Use Notification
- 03.01.10: Device Lock
- 03.01.11: Session Termination
- 03.01.12: Remote Access
- 03.01.13: Withdrawn
- 03.01.14: Withdrawn
- 03.01.15: Withdrawn
- 03.01.16: Wireless Access
- 03.01.17: Withdrawn
- 03.01.18: Access Control for Mobile Devices
- 03.01.19: Withdrawn
- 03.01.20: Use of External Systems
- 03.01.21: Withdrawn
- 03.01.22: Publicly Accessible Content
- 03.02.01: Literacy Training and Awareness
- 03.02.02: Role-Based Training
- 03.02.03: Withdrawn
- 03.03.01: Event Logging
- 03.03.02: Audit Record Content
- 03.03.03: Audit Record Generation
- 03.03.04: Response to Audit Logging Process Failures
- 03.03.05: Audit Record Review, Analysis, and Reporting
- 03.03.06: Audit Record Reduction and Report Generation
- 03.03.07: Time Stamps
- 03.03.08: Protection of Audit Information
- 03.03.09: Withdrawn
- 03.04.01: Baseline Configuration
- 03.04.02: Configuration Settings
- 03.04.03: Configuration Change Control
- 03.04.04: Impact Analyses
- 03.04.05: Access Restrictions for Change
- 03.04.06: Least Functionality
- 03.04.07: Withdrawn
- 03.04.08: Authorized Software – Allow by Exception
- 03.04.09: Withdrawn
- 03.04.10: System Component Inventory
- 03.04.11: Information Location
- 03.04.12: System and Component Configuration for High-Risk Areas
- 03.05.01: User Identification and Authentication
- 03.05.02: Device Identification and Authentication
- 03.05.03: Multi-Factor Authentication
- 03.05.04: Replay-Resistant Authentication
- 03.05.05: Identifier Management
- 03.05.06: Withdrawn
- 03.05.07: Password Management
- 03.05.08: Withdrawn
- 03.05.09: Withdrawn
- 03.05.10: Withdrawn
- 03.05.11: Authentication Feedback
- 03.05.12: Authenticator Management
- 03.06.01: Incident Handling
- 03.06.02: Incident Monitoring, Reporting, and Response Assistance
- 03.06.03: Incident Response Testing
- 03.06.04: Incident Response Training
- 03.06.05: Incident Response Plan
- 03.07.01: Withdrawn
- 03.07.02: Withdrawn
- 03.07.03: Withdrawn
- 03.07.04: Maintenance Tools
- 03.07.05: Nonlocal Maintenance
- 03.07.06: Maintenance Personnel
- 03.08.01: Media Storage
- 03.08.02: Media Access
- 03.08.03: Media Sanitization
- 03.08.04: Media Marking
- 03.08.05: Media Transport
- 03.08.06: Withdrawn
- 03.08.07: Media Use
- 03.08.08: Withdrawn
- 03.08.09: System Backup – Cryptographic Protection
- 03.09.01: Personnel Screening
- 03.09.02: Personnel Termination and Transfer
- 03.10.01: Physical Access Authorizations
- 03.10.02: Monitoring Physical Access
- 03.10.03: Withdrawn
- 03.10.04: Withdrawn
- 03.10.05: Withdrawn
- 03.10.06: Alternate Work Site
- 03.10.07: Physical Access Control
- 03.10.08: Access Control for Transmission
- 03.11.01: Risk Assessment
- 03.11.02: Vulnerability Monitoring and Scanning
- 03.11.03: Withdrawn
- 03.11.04: Risk Response
- 03.12.01: Security Assessment
- 03.12.02: Plan of Action and Milestones
- 03.12.03: Continuous Monitoring
- 03.12.04: Withdrawn
- 03.12.05: Information Exchange
- 03.13.01: Boundary Protection
- 03.13.02: Withdrawn
- 03.13.03: Withdrawn
- 03.13.04: Information in Shared System Resources
- 03.13.05: Withdrawn
- 03.13.06: Network Communications – Deny by Default – Allow by Exception
- 03.13.07: Withdrawn
- 03.13.08: Transmission and Storage Confidentiality
- 03.13.09: Network Disconnect
- 03.13.10: Cryptographic Key Establishment and Management
- 03.13.11: Cryptographic Protection
- 03.13.12: Collaborative Computing Devices and Applications
- 03.13.13: Mobile Code
- 03.13.14: Withdrawn
- 03.13.15: Session Authenticity
- 03.13.16: Withdrawn
- 03.14.01: Flaw Remediation
- 03.14.02: Malicious Code Protection
- 03.14.03: Security Alerts, Advisories, and Directives
- 03.14.04: Withdrawn
- 03.14.05: Withdrawn
- 03.14.06: System Monitoring
- 03.14.07: Withdrawn
- 03.14.08: Information Management and Retention
- 03.15.01: Policy and Procedures
- 03.15.02: System Security Plan
- 03.15.03: Rules of Behavior
- 03.16.01: Security Engineering Principles
- 03.16.02: Unsupported System Components
- 03.16.03: External System Services
- 03.17.01: Supply Chain Risk Management Plan
- 03.17.02: Acquisition Strategies, Tools, and Methods
- 03.17.03: Supply Chain Requirements and Processes
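Many SP 800-171 entries above are titled "Withdrawn" (controls removed in revision 3, typically merged into other requirements). When loading this catalog, a library would normally keep them out of the active requirement set; a minimal sketch over a few pairs copied from the list:

```python
# Each pair is (control_id, title) as listed above; "Withdrawn" marks
# controls removed in SP 800-171 rev. 3 that should be skipped when
# building the active requirement set.
controls = [
    ("03.01.12", "Remote Access"),
    ("03.01.13", "Withdrawn"),
    ("03.01.14", "Withdrawn"),
    ("03.01.16", "Wireless Access"),
]

active = [(cid, title) for cid, title in controls if title != "Withdrawn"]
# Only "Remote Access" and "Wireless Access" remain.
```

Keeping the withdrawn IDs in a separate list (rather than discarding them) can still be useful for resolving cross-references from older assessments.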
NIST SP 800-53
View framework →1196 requirements · NIST_SP_800_53
- AC-1: Policy and Procedures
- AC-10: Concurrent Session Control
- AC-11: Device Lock
- AC-11(1): Pattern-hiding Displays
- AC-12: Session Termination
- AC-12(1): User-initiated Logouts
- AC-12(2): Termination Message
- AC-12(3): Timeout Warning Message
- AC-13: Supervision and Review — Access Control
- AC-14: Permitted Actions Without Identification or Authentication
- AC-14(1): Necessary Uses
- AC-15: Automated Marking
- AC-16: Security and Privacy Attributes
- AC-16(1): Dynamic Attribute Association
- AC-16(10): Attribute Configuration by Authorized Individuals
- AC-16(2): Attribute Value Changes by Authorized Individuals
- AC-16(3): Maintenance of Attribute Associations by System
- AC-16(4): Association of Attributes by Authorized Individuals
- AC-16(5): Attribute Displays on Objects to Be Output
- AC-16(6): Maintenance of Attribute Association
- AC-16(7): Consistent Attribute Interpretation
- AC-16(8): Association Techniques and Technologies
- AC-16(9): Attribute Reassignment — Regrading Mechanisms
- AC-17: Remote Access
- AC-17(1): Monitoring and Control
- AC-17(10): Authenticate Remote Commands
- AC-17(2): Protection of Confidentiality and Integrity Using Encryption
- AC-17(3): Managed Access Control Points
- AC-17(4): Privileged Commands and Access
- AC-17(5): Monitoring for Unauthorized Connections
- AC-17(6): Protection of Mechanism Information
- AC-17(7): Additional Protection for Security Function Access
- AC-17(8): Disable Nonsecure Network Protocols
- AC-17(9): Disconnect or Disable Access
- AC-18: Wireless Access
- AC-18(1): Authentication and Encryption
- AC-18(2): Monitoring Unauthorized Connections
- AC-18(3): Disable Wireless Networking
- AC-18(4): Restrict Configurations by Users
- AC-18(5): Antennas and Transmission Power Levels
- AC-19: Access Control for Mobile Devices
- AC-19(1): Use of Writable and Portable Storage Devices
- AC-19(2): Use of Personally Owned Portable Storage Devices
- AC-19(3): Use of Portable Storage Devices with No Identifiable Owner
- AC-19(4): Restrictions for Classified Information
- AC-19(5): Full Device or Container-based Encryption
- AC-2: Account Management
- AC-2(1): Automated System Account Management
- AC-2(10): Shared and Group Account Credential Change
- AC-2(11): Usage Conditions
- AC-2(12): Account Monitoring for Atypical Usage
- AC-2(13): Disable Accounts for High-risk Individuals
- AC-2(2): Automated Temporary and Emergency Account Management
- AC-2(3): Disable Accounts
- AC-2(4): Automated Audit Actions
- AC-2(5): Inactivity Logout
- AC-2(6): Dynamic Privilege Management
- AC-2(7): Privileged User Accounts
- AC-2(8): Dynamic Account Management
- AC-2(9): Restrictions on Use of Shared and Group Accounts
- AC-20: Use of External Systems
- AC-20(1): Limits on Authorized Use
- AC-20(2): Portable Storage Devices — Restricted Use
- AC-20(3): Non-organizationally Owned Systems — Restricted Use
- AC-20(4): Network Accessible Storage Devices — Prohibited Use
- AC-20(5): Portable Storage Devices — Prohibited Use
- AC-21: Information Sharing
- AC-21(1): Automated Decision Support
- AC-21(2): Information Search and Retrieval
- AC-22: Publicly Accessible Content
- AC-23: Data Mining Protection
- AC-24: Access Control Decisions
- AC-24(1): Transmit Access Authorization Information
- AC-24(2): No User or Process Identity
- AC-25: Reference Monitor
- AC-3: Access Enforcement
- AC-3(1): Restricted Access to Privileged Functions
- AC-3(10): Audited Override of Access Control Mechanisms
- AC-3(11): Restrict Access to Specific Information Types
- AC-3(12): Assert and Enforce Application Access
- AC-3(13): Attribute-based Access Control
- AC-3(14): Individual Access
- AC-3(15): Discretionary and Mandatory Access Control
- AC-3(2): Dual Authorization
- AC-3(3): Mandatory Access Control
- AC-3(4): Discretionary Access Control
- AC-3(5): Security-relevant Information
- AC-3(6): Protection of User and System Information
- AC-3(7): Role-based Access Control
- AC-3(8): Revocation of Access Authorizations
- AC-3(9): Controlled Release
- AC-4: Information Flow Enforcement
- AC-4(1): Object Security and Privacy Attributes
- AC-4(10): Enable and Disable Security or Privacy Policy Filters
- AC-4(11): Configuration of Security or Privacy Policy Filters
- AC-4(12): Data Type Identifiers
- AC-4(13): Decomposition into Policy-relevant Subcomponents
- AC-4(14): Security or Privacy Policy Filter Constraints
- AC-4(15): Detection of Unsanctioned Information
- AC-4(16): Information Transfers on Interconnected Systems
- AC-4(17): Domain Authentication
- AC-4(18): Security Attribute Binding
- AC-4(19): Validation of Metadata
- AC-4(2): Processing Domains
- AC-4(20): Approved Solutions
- AC-4(21): Physical or Logical Separation of Information Flows
- AC-4(22): Access Only
- AC-4(23): Modify Non-releasable Information
- AC-4(24): Internal Normalized Format
- AC-4(25): Data Sanitization
- AC-4(26): Audit Filtering Actions
- AC-4(27): Redundant/Independent Filtering Mechanisms
- AC-4(28): Linear Filter Pipelines
- AC-4(29): Filter Orchestration Engines
- AC-4(3): Dynamic Information Flow Control
- AC-4(30): Filter Mechanisms Using Multiple Processes
- AC-4(31): Failed Content Transfer Prevention
- AC-4(32): Process Requirements for Information Transfer
- AC-4(4): Flow Control of Encrypted Information
- AC-4(5): Embedded Data Types
- AC-4(6): Metadata
- AC-4(7): One-way Flow Mechanisms
- AC-4(8): Security and Privacy Policy Filters
- AC-4(9): Human Reviews
- AC-5: Separation of Duties
- AC-6: Least Privilege
- AC-6(1): Authorize Access to Security Functions
- AC-6(10): Prohibit Non-privileged Users from Executing Privileged Functions
- AC-6(2): Non-privileged Access for Nonsecurity Functions
- AC-6(3): Network Access to Privileged Commands
- AC-6(4): Separate Processing Domains
- AC-6(5): Privileged Accounts
- AC-6(6): Privileged Access by Non-organizational Users
- AC-6(7): Review of User Privileges
- AC-6(8): Privilege Levels for Code Execution
- AC-6(9): Log Use of Privileged Functions
- AC-7: Unsuccessful Logon Attempts
- AC-7(1): Automatic Account Lock
- AC-7(2): Purge or Wipe Mobile Device
- AC-7(3): Biometric Attempt Limiting
- AC-7(4): Use of Alternate Authentication Factor
- AC-8: System Use Notification
- AC-9: Previous Logon Notification
- AC-9(1): Unsuccessful Logons
- AC-9(2): Successful and Unsuccessful Logons
- AC-9(3): Notification of Account Changes
- AC-9(4): Additional Logon Information
- AT-1: Policy and Procedures
- AT-2: Literacy Training and Awareness
- AT-2(1): Practical Exercises
- AT-2(2): Insider Threat
- AT-2(3): Social Engineering and Mining
- AT-2(4): Suspicious Communications and Anomalous System Behavior
- AT-2(5): Advanced Persistent Threat
- AT-2(6): Cyber Threat Environment
- AT-3: Role-based Training
- AT-3(1): Environmental Controls
- AT-3(2): Physical Security Controls
- AT-3(3): Practical Exercises
- AT-3(4): Suspicious Communications and Anomalous System Behavior
- AT-3(5): Processing Personally Identifiable Information
- AT-4: Training Records
- AT-5: Contacts with Security Groups and Associations
- AT-6: Training Feedback
- AU-1: Policy and Procedures
- AU-10: Non-repudiation
- AU-10(1): Association of Identities
- AU-10(2): Validate Binding of Information Producer Identity
- AU-10(3): Chain of Custody
- AU-10(4): Validate Binding of Information Reviewer Identity
- AU-10(5): Digital Signatures
- AU-11: Audit Record Retention
- AU-11(1): Long-term Retrieval Capability
- AU-12: Audit Record Generation
- AU-12(1): System-wide and Time-correlated Audit Trail
- AU-12(2): Standardized Formats
- AU-12(3): Changes by Authorized Individuals
- AU-12(4): Query Parameter Audits of Personally Identifiable Information
- AU-13: Monitoring for Information Disclosure
- AU-13(1): Use of Automated Tools
- AU-13(2): Review of Monitored Sites
- AU-13(3): Unauthorized Replication of Information
- AU-14: Session Audit
- AU-14(1): System Start-up
- AU-14(2): Capture and Record Content
- AU-14(3): Remote Viewing and Listening
- AU-15: Alternate Audit Logging Capability
- AU-16: Cross-organizational Audit Logging
- AU-16(1): Identity Preservation
- AU-16(2): Sharing of Audit Information
- AU-16(3): Disassociability
- AU-2: Event Logging
- AU-2(1): Compilation of Audit Records from Multiple Sources
- AU-2(2): Selection of Audit Events by Component
- AU-2(3): Reviews and Updates
- AU-2(4): Privileged Functions
- AU-3: Content of Audit Records
- AU-3(1): Additional Audit Information
- AU-3(2): Centralized Management of Planned Audit Record Content
- AU-3(3): Limit Personally Identifiable Information Elements
- AU-4: Audit Log Storage Capacity
- AU-4(1): Transfer to Alternate Storage
- AU-5: Response to Audit Logging Process Failures
- AU-5(1): Storage Capacity Warning
- AU-5(2): Real-time Alerts
- AU-5(3): Configurable Traffic Volume Thresholds
- AU-5(4): Shutdown on Failure
- AU-5(5): Alternate Audit Logging Capability
- AU-6: Audit Record Review, Analysis, and Reporting
- AU-6(1): Automated Process Integration
- AU-6(10): Audit Level Adjustment
- AU-6(2): Automated Security Alerts
- AU-6(3): Correlate Audit Record Repositories
- AU-6(4): Central Review and Analysis
- AU-6(5): Integrated Analysis of Audit Records
- AU-6(6): Correlation with Physical Monitoring
- AU-6(7): Permitted Actions
- AU-6(8): Full Text Analysis of Privileged Commands
- AU-6(9): Correlation with Information from Nontechnical Sources
- AU-7: Audit Record Reduction and Report Generation
- AU-7(1): Automatic Processing
- AU-7(2): Automatic Sort and Search
- AU-8: Time Stamps
- AU-8(1): Synchronization with Authoritative Time Source
- AU-8(2): Secondary Authoritative Time Source
- AU-9: Protection of Audit Information
- AU-9(1): Hardware Write-once Media
- AU-9(2): Store on Separate Physical Systems or Components
- AU-9(3): Cryptographic Protection
- AU-9(4): Access by Subset of Privileged Users
- AU-9(5): Dual Authorization
- AU-9(6): Read-only Access
- AU-9(7): Store on Component with Different Operating System
- CA-1: Policy and Procedures
- CA-2: Control Assessments
- CA-2(1): Independent Assessors
- CA-2(2): Specialized Assessments
- CA-2(3): Leveraging Results from External Organizations
- CA-3: Information Exchange
- CA-3(1): Unclassified National Security System Connections
- CA-3(2): Classified National Security System Connections
- CA-3(3): Unclassified Non-national Security System Connections
- CA-3(4): Connections to Public Networks
- CA-3(5): Restrictions on External System Connections
- CA-3(6): Transfer Authorizations
- CA-3(7): Transitive Information Exchanges
- CA-4: Security Certification
- CA-5: Plan of Action and Milestones
- CA-5(1): Automation Support for Accuracy and Currency
- CA-6: Authorization
- CA-6(1): Joint Authorization — Intra-organization
- CA-6(2): Joint Authorization — Inter-organization
- CA-7: Continuous Monitoring
- CA-7(1): Independent Assessment
- CA-7(2): Types of Assessments
- CA-7(3): Trend Analyses
- CA-7(4): Risk Monitoring
- CA-7(5): Consistency Analysis
- CA-7(6): Automation Support for Monitoring
- CA-8: Penetration Testing
- CA-8(1): Independent Penetration Testing Agent or Team
- CA-8(2): Red Team Exercises
- CA-8(3): Facility Penetration Testing
- CA-9: Internal System Connections
- CA-9(1): Compliance Checks
- CM-1: Policy and Procedures
- CM-10: Software Usage Restrictions
- CM-10(1): Open-source Software
- CM-11: User-installed Software
- CM-11(1): Alerts for Unauthorized Installations
- CM-11(2): Software Installation with Privileged Status
- CM-11(3): Automated Enforcement and Monitoring
- CM-12: Information Location
- CM-12(1): Automated Tools to Support Information Location
- CM-13: Data Action Mapping
- CM-14: Signed Components
- CM-2: Baseline Configuration
- CM-2(1): Reviews and Updates
- CM-2(2): Automation Support for Accuracy and Currency
- CM-2(3): Retention of Previous Configurations
- CM-2(4): Unauthorized Software
- CM-2(5): Authorized Software
- CM-2(6): Development and Test Environments
- CM-2(7): Configure Systems and Components for High-risk Areas
- CM-3: Configuration Change Control
- CM-3(1): Automated Documentation, Notification, and Prohibition of Changes
- CM-3(2): Testing, Validation, and Documentation of Changes
- CM-3(3): Automated Change Implementation
- CM-3(4): Security and Privacy Representatives
- CM-3(5): Automated Security Response
- CM-3(6): Cryptography Management
- CM-3(7): Review System Changes
- CM-3(8): Prevent or Restrict Configuration Changes
- CM-4: Impact Analyses
- CM-4(1): Separate Test Environments
- CM-4(2): Verification of Controls
- CM-5: Access Restrictions for Change
- CM-5(1): Automated Access Enforcement and Audit Records
- CM-5(2): Review System Changes
- CM-5(3): Signed Components
- CM-5(4): Dual Authorization
- CM-5(5): Privilege Limitation for Production and Operation
- CM-5(6): Limit Library Privileges
- CM-5(7): Automatic Implementation of Security Safeguards
- CM-6: Configuration Settings
- CM-6(1): Automated Management, Application, and Verification
- CM-6(2): Respond to Unauthorized Changes
- CM-6(3): Unauthorized Change Detection
- CM-6(4): Conformance Demonstration
- CM-7: Least Functionality
- CM-7(1): Periodic Review
- CM-7(2): Prevent Program Execution
- CM-7(3): Registration Compliance
- CM-7(4): Unauthorized Software — Deny-by-exception
- CM-7(5): Authorized Software — Allow-by-exception
- CM-7(6): Confined Environments with Limited Privileges
- CM-7(7): Code Execution in Protected Environments
- CM-7(8): Binary or Machine Executable Code
- CM-7(9): Prohibiting the Use of Unauthorized Hardware
- CM-8: System Component Inventory
- CM-8(1): Updates During Installation and Removal
- CM-8(2): Automated Maintenance
- CM-8(3): Automated Unauthorized Component Detection
- CM-8(4): Accountability Information
- CM-8(5): No Duplicate Accounting of Components
- CM-8(6): Assessed Configurations and Approved Deviations
- CM-8(7): Centralized Repository
- CM-8(8): Automated Location Tracking
- CM-8(9): Assignment of Components to Systems
- CM-9: Configuration Management Plan
- CM-9(1): Assignment of Responsibility
- CP-1: Policy and Procedures
- CP-10: System Recovery and Reconstitution
- CP-10(1): Contingency Plan Testing
- CP-10(2): Transaction Recovery
- CP-10(3): Compensating Security Controls
- CP-10(4): Restore Within Time Period
- CP-10(5): Failover Capability
- CP-10(6): Component Protection
- CP-11: Alternate Communications Protocols
- CP-12: Safe Mode
- CP-13: Alternative Security Mechanisms
- CP-2: Contingency Plan
- CP-2(1): Coordinate with Related Plans
- CP-2(2): Capacity Planning
- CP-2(3): Resume Mission and Business Functions
- CP-2(4): Resume All Mission and Business Functions
- CP-2(5): Continue Mission and Business Functions
- CP-2(6): Alternate Processing and Storage Sites
- CP-2(7): Coordinate with External Service Providers
- CP-2(8): Identify Critical Assets
- CP-3: Contingency Training
- CP-3(1): Simulated Events
- CP-3(2): Mechanisms Used in Training Environments
- CP-4: Contingency Plan Testing
- CP-4(1): Coordinate with Related Plans
- CP-4(2): Alternate Processing Site
- CP-4(3): Automated Testing
- CP-4(4): Full Recovery and Reconstitution
- CP-4(5): Self-challenge
- CP-5: Contingency Plan Update
- CP-6: Alternate Storage Site
- CP-6(1): Separation from Primary Site
- CP-6(2): Recovery Time and Recovery Point Objectives
- CP-6(3): Accessibility
- CP-7: Alternate Processing Site
- CP-7(1): Separation from Primary Site
- CP-7(2): Accessibility
- CP-7(3): Priority of Service
- CP-7(4): Preparation for Use
- CP-7(5): Equivalent Information Security Safeguards
- CP-7(6): Inability to Return to Primary Site
- CP-8: Telecommunications Services
- CP-8(1): Priority of Service Provisions
- CP-8(2): Single Points of Failure
- CP-8(3): Separation of Primary and Alternate Providers
- CP-8(4): Provider Contingency Plan
- CP-8(5): Alternate Telecommunication Service Testing
- CP-9: System Backup
- CP-9(1): Testing for Reliability and Integrity
- CP-9(2): Test Restoration Using Sampling
- CP-9(3): Separate Storage for Critical Information
- CP-9(4): Protection from Unauthorized Modification
- CP-9(5): Transfer to Alternate Storage Site
- CP-9(6): Redundant Secondary System
- CP-9(7): Dual Authorization for Deletion or Destruction
- CP-9(8): Cryptographic Protection
- IA-1: Policy and Procedures
- IA-10: Adaptive Authentication
- IA-11: Re-authentication
- IA-12: Identity Proofing
- IA-12(1): Supervisor Authorization
- IA-12(2): Identity Evidence
- IA-12(3): Identity Evidence Validation and Verification
- IA-12(4): In-person Validation and Verification
- IA-12(5): Address Confirmation
- IA-12(6): Accept Externally-proofed Identities
- IA-13: Identity Providers and Authorization Servers
- IA-13(1): Protection of Cryptographic Keys
- IA-13(2): Verification of Identity Assertions and Access Tokens
- IA-13(3): Token Management
- IA-2: Identification and Authentication (Organizational Users)
- IA-2(1): Multi-factor Authentication to Privileged Accounts
- IA-2(10): Single Sign-on
- IA-2(11): Remote Access — Separate Device
- IA-2(12): Acceptance of PIV Credentials
- IA-2(13): Out-of-band Authentication
- IA-2(2): Multi-factor Authentication to Non-privileged Accounts
- IA-2(3): Local Access to Privileged Accounts
- IA-2(4): Local Access to Non-privileged Accounts
- IA-2(5): Individual Authentication with Group Authentication
- IA-2(6): Access to Accounts — Separate Device
- IA-2(7): Network Access to Non-privileged Accounts — Separate Device
- IA-2(8): Access to Accounts — Replay Resistant
- IA-2(9): Network Access to Non-privileged Accounts — Replay Resistant
- IA-3: Device Identification and Authentication
- IA-3(1): Cryptographic Bidirectional Authentication
- IA-3(2): Cryptographic Bidirectional Network Authentication
- IA-3(3): Dynamic Address Allocation
- IA-3(4): Device Attestation
- IA-4: Identifier Management
- IA-4(1): Prohibit Account Identifiers as Public Identifiers
- IA-4(2): Supervisor Authorization
- IA-4(3): Multiple Forms of Certification
- IA-4(4): Identify User Status
- IA-4(5): Dynamic Management
- IA-4(6): Cross-organization Management
- IA-4(7): In-person Registration
- IA-4(8): Pairwise Pseudonymous Identifiers
- IA-4(9): Attribute Maintenance and Protection
- IA-5: Authenticator Management
- IA-5(1): Password-based Authentication
- IA-5(10): Dynamic Credential Binding
- IA-5(11): Hardware Token-based Authentication
- IA-5(12): Biometric Authentication Performance
- IA-5(13): Expiration of Cached Authenticators
- IA-5(14): Managing Content of PKI Trust Stores
- IA-5(15): GSA-approved Products and Services
- IA-5(16): In-person or Trusted External Party Authenticator Issuance
- IA-5(17): Presentation Attack Detection for Biometric Authenticators
- IA-5(18): Password Managers
- IA-5(2): Public Key-based Authentication
- IA-5(3): In-person or Trusted External Party Registration
- IA-5(4): Automated Support for Password Strength Determination
- IA-5(5): Change Authenticators Prior to Delivery
- IA-5(6): Protection of Authenticators
- IA-5(7): No Embedded Unencrypted Static Authenticators
- IA-5(8): Multiple System Accounts
- IA-5(9): Federated Credential Management
- IA-6: Authentication Feedback
- IA-7: Cryptographic Module Authentication
- IA-8: Identification and Authentication (Non-organizational Users)
- IA-8(1): Acceptance of PIV Credentials from Other Agencies
- IA-8(2): Acceptance of External Authenticators
- IA-8(3): Use of FICAM-approved Products
- IA-8(4): Use of Defined Profiles
- IA-8(5): Acceptance of PIV-I Credentials
- IA-8(6): Disassociability
- IA-9: Service Identification and Authentication
- IA-9(1): Information Exchange
- IA-9(2): Transmission of Decisions
- IR-1: Policy and Procedures
- IR-10: Integrated Information Security Analysis Team
- IR-2: Incident Response Training
- IR-2(1): Simulated Events
- IR-2(2): Automated Training Environments
- IR-2(3): Breach
- IR-3: Incident Response Testing
- IR-3(1): Automated Testing
- IR-3(2): Coordination with Related Plans
- IR-3(3): Continuous Improvement
- IR-4: Incident Handling
- IR-4(1): Automated Incident Handling Processes
- IR-4(10): Supply Chain Coordination
- IR-4(11): Integrated Incident Response Team
- IR-4(12): Malicious Code and Forensic Analysis
- IR-4(13): Behavior Analysis
- IR-4(14): Security Operations Center
- IR-4(15): Public Relations and Reputation Repair
- IR-4(2): Dynamic Reconfiguration
- IR-4(3): Continuity of Operations
- IR-4(4): Information Correlation
- IR-4(5): Automatic Disabling of System
- IR-4(6): Insider Threats
- IR-4(7): Insider Threats — Intra-organization Coordination
- IR-4(8): Correlation with External Organizations
- IR-4(9): Dynamic Response Capability
- IR-5: Incident Monitoring
- IR-5(1): Automated Tracking, Data Collection, and Analysis
- IR-6: Incident Reporting
- IR-6(1): Automated Reporting
- IR-6(2): Vulnerabilities Related to Incidents
- IR-6(3): Supply Chain Coordination
- IR-7: Incident Response Assistance
- IR-7(1): Automation Support for Availability of Information and Support
- IR-7(2): Coordination with External Providers
- IR-8: Incident Response Plan
- IR-8(1): Breaches
- IR-9: Information Spillage Response
- IR-9(1): Responsible Personnel
- IR-9(2): Training
- IR-9(3): Post-spill Operations
- IR-9(4): Exposure to Unauthorized Personnel
- MA-1: Policy and Procedures
- MA-2: Controlled Maintenance
- MA-2(1): Record Content
- MA-2(2): Automated Maintenance Activities
- MA-3: Maintenance Tools
- MA-3(1): Inspect Tools
- MA-3(2): Inspect Media
- MA-3(3): Prevent Unauthorized Removal
- MA-3(4): Restricted Tool Use
- MA-3(5): Execution with Privilege
- MA-3(6): Software Updates and Patches
- MA-4: Nonlocal Maintenance
- MA-4(1): Logging and Review
- MA-4(2): Document Nonlocal Maintenance
- MA-4(3): Comparable Security and Sanitization
- MA-4(4): Authentication and Separation of Maintenance Sessions
- MA-4(5): Approvals and Notifications
- MA-4(6): Cryptographic Protection
- MA-4(7): Disconnect Verification
- MA-5: Maintenance Personnel
- MA-5(1): Individuals Without Appropriate Access
- MA-5(2): Security Clearances for Classified Systems
- MA-5(3): Citizenship Requirements for Classified Systems
- MA-5(4): Foreign Nationals
- MA-5(5): Non-system Maintenance
- MA-6: Timely Maintenance
- MA-6(1): Preventive Maintenance
- MA-6(2): Predictive Maintenance
- MA-6(3): Automated Support for Predictive Maintenance
- MA-7: Field Maintenance
- MP-1: Policy and Procedures
- MP-2: Media Access
- MP-2(1): Automated Restricted Access
- MP-2(2): Cryptographic Protection
- MP-3: Media Marking
- MP-4: Media Storage
- MP-4(1): Cryptographic Protection
- MP-4(2): Automated Restricted Access
- MP-5: Media Transport
- MP-5(1): Protection Outside of Controlled Areas
- MP-5(2): Documentation of Activities
- MP-5(3): Custodians
- MP-5(4): Cryptographic Protection
- MP-6: Media Sanitization
- MP-6(1): Review, Approve, Track, Document, and Verify
- MP-6(2): Equipment Testing
- MP-6(3): Nondestructive Techniques
- MP-6(4): Controlled Unclassified Information
- MP-6(5): Classified Information
- MP-6(6): Media Destruction
- MP-6(7): Dual Authorization
- MP-6(8): Remote Purging or Wiping of Information
- MP-7: Media Use
- MP-7(1): Prohibit Use Without Owner
- MP-7(2): Prohibit Use of Sanitization-resistant Media
- MP-8: Media Downgrading
- MP-8(1): Documentation of Process
- MP-8(2): Equipment Testing
- MP-8(3): Controlled Unclassified Information
- MP-8(4): Classified Information
- PE-1: Policy and Procedures
- PE-10: Emergency Shutoff
- PE-10(1): Accidental and Unauthorized Activation
- PE-11: Emergency Power
- PE-11(1): Alternate Power Supply — Minimal Operational Capability
- PE-11(2): Alternate Power Supply — Self-contained
- PE-12: Emergency Lighting
- PE-12(1): Essential Mission and Business Functions
- PE-13: Fire Protection
- PE-13(1): Detection Systems — Automatic Activation and Notification
- PE-13(2): Suppression Systems — Automatic Activation and Notification
- PE-13(3): Automatic Fire Suppression
- PE-13(4): Inspections
- PE-14: Environmental Controls
- PE-14(1): Automatic Controls
- PE-14(2): Monitoring with Alarms and Notifications
- PE-15: Water Damage Protection
- PE-15(1): Automation Support
- PE-16: Delivery and Removal
- PE-17: Alternate Work Site
- PE-18: Location of System Components
- PE-18(1): Facility Site
- PE-19: Information Leakage
- PE-19(1): National Emissions Policies and Procedures
- PE-2: Physical Access Authorizations
- PE-2(1): Access by Position or Role
- PE-2(2): Two Forms of Identification
- PE-2(3): Restrict Unescorted Access
- PE-20: Asset Monitoring and Tracking
- PE-21: Electromagnetic Pulse Protection
- PE-22: Component Marking
- PE-23: Facility Location
- PE-3: Physical Access Control
- PE-3(1): System Access
- PE-3(2): Facility and Systems
- PE-3(3): Continuous Guards
- PE-3(4): Lockable Casings
- PE-3(5): Tamper Protection
- PE-3(6): Facility Penetration Testing
- PE-3(7): Physical Barriers
- PE-3(8): Access Control Vestibules
- PE-4: Access Control for Transmission
- PE-5: Access Control for Output Devices
- PE-5(1): Access to Output by Authorized Individuals
- PE-5(2): Link to Individual Identity
- PE-5(3): Marking Output Devices
- PE-6: Monitoring Physical Access
- PE-6(1): Intrusion Alarms and Surveillance Equipment
- PE-6(2): Automated Intrusion Recognition and Responses
- PE-6(3): Video Surveillance
- PE-6(4): Monitoring Physical Access to Systems
- PE-7: Visitor Control
- PE-8: Visitor Access Records
- PE-8(1): Automated Records Maintenance and Review
- PE-8(2): Physical Access Records
- PE-8(3): Limit Personally Identifiable Information Elements
- PE-9: Power Equipment and Cabling
- PE-9(1): Redundant Cabling
- PE-9(2): Automatic Voltage Controls
- PL-1: Policy and Procedures
- PL-10: Baseline Selection
- PL-11: Baseline Tailoring
- PL-2: System Security and Privacy Plans
- PL-2(1): Concept of Operations
- PL-2(2): Functional Architecture
- PL-2(3): Plan and Coordinate with Other Organizational Entities
- PL-3: System Security Plan Update
- PL-4: Rules of Behavior
- PL-4(1): Social Media and External Site/Application Usage Restrictions
- PL-5: Privacy Impact Assessment
- PL-6: Security-related Activity Planning
- PL-7: Concept of Operations
- PL-8: Security and Privacy Architectures
- PL-8(1): Defense in Depth
- PL-8(2): Supplier Diversity
- PL-9: Central Management
- PM-1: Information Security Program Plan
- PM-10: Authorization Process
- PM-11: Mission and Business Process Definition
- PM-12: Insider Threat Program
- PM-13: Security and Privacy Workforce
- PM-14: Testing, Training, and Monitoring
- PM-15: Security and Privacy Groups and Associations
- PM-16: Threat Awareness Program
- PM-16(1): Automated Means for Sharing Threat Intelligence
- PM-17: Protecting Controlled Unclassified Information on External Systems
- PM-18: Privacy Program Plan
- PM-19: Privacy Program Leadership Role
- PM-2: Information Security Program Leadership Role
- PM-20: Dissemination of Privacy Program Information
- PM-20(1): Privacy Policies on Websites, Applications, and Digital Services
- PM-21: Accounting of Disclosures
- PM-22: Personally Identifiable Information Quality Management
- PM-23: Data Governance Body
- PM-24: Data Integrity Board
- PM-25: Minimization of Personally Identifiable Information Used in Testing, Training, and Research
- PM-26: Complaint Management
- PM-27: Privacy Reporting
- PM-28: Risk Framing
- PM-29: Risk Management Program Leadership Roles
- PM-3: Information Security and Privacy Resources
- PM-30: Supply Chain Risk Management Strategy
- PM-30(1): Suppliers of Critical or Mission-essential Items
- PM-31: Continuous Monitoring Strategy
- PM-32: Purposing
- PM-4: Plan of Action and Milestones Process
- PM-5: System Inventory
- PM-5(1): Inventory of Personally Identifiable Information
- PM-6: Measures of Performance
- PM-7: Enterprise Architecture
- PM-7(1): Offloading
- PM-8: Critical Infrastructure Plan
- PM-9: Risk Management Strategy
- PS-1: Policy and Procedures
- PS-2: Position Risk Designation
- PS-3: Personnel Screening
- PS-3(1): Classified Information
- PS-3(2): Formal Indoctrination
- PS-3(3): Information Requiring Special Protective Measures
- PS-3(4): Citizenship Requirements
- PS-4: Personnel Termination
- PS-4(1): Post-employment Requirements
- PS-4(2): Automated Actions
- PS-5: Personnel Transfer
- PS-6: Access Agreements
- PS-6(1): Information Requiring Special Protection
- PS-6(2): Classified Information Requiring Special Protection
- PS-6(3): Post-employment Requirements
- PS-7: External Personnel Security
- PS-8: Personnel Sanctions
- PS-9: Position Descriptions
- PT-1: Policy and Procedures
- PT-2: Authority to Process Personally Identifiable Information
- PT-2(1): Data Tagging
- PT-2(2): Automation
- PT-3: Personally Identifiable Information Processing Purposes
- PT-3(1): Data Tagging
- PT-3(2): Automation
- PT-4: Consent
- PT-4(1): Tailored Consent
- PT-4(2): Just-in-time Consent
- PT-4(3): Revocation
- PT-5: Privacy Notice
- PT-5(1): Just-in-time Notice
- PT-5(2): Privacy Act Statements
- PT-6: System of Records Notice
- PT-6(1): Routine Uses
- PT-6(2): Exemption Rules
- PT-7: Specific Categories of Personally Identifiable Information
- PT-7(1): Social Security Numbers
- PT-7(2): First Amendment Information
- PT-8: Computer Matching Requirements
- RA-1: Policy and Procedures
- RA-10: Threat Hunting
- RA-2: Security Categorization
- RA-2(1): Impact-level Prioritization
- RA-3: Risk Assessment
- RA-3(1): Supply Chain Risk Assessment
- RA-3(2): Use of All-source Intelligence
- RA-3(3): Dynamic Threat Awareness
- RA-3(4): Predictive Cyber Analytics
- RA-4: Risk Assessment Update
- RA-5: Vulnerability Monitoring and Scanning
- RA-5(1): Update Tool Capability
- RA-5(10): Correlate Scanning Information
- RA-5(11): Public Disclosure Program
- RA-5(2): Update Vulnerabilities to Be Scanned
- RA-5(3): Breadth and Depth of Coverage
- RA-5(4): Discoverable Information
- RA-5(5): Privileged Access
- RA-5(6): Automated Trend Analyses
- RA-5(7): Automated Detection and Notification of Unauthorized Components
- RA-5(8): Review Historic Audit Logs
- RA-5(9): Penetration Testing and Analyses
- RA-6: Technical Surveillance Countermeasures Survey
- RA-7: Risk Response
- RA-8: Privacy Impact Assessments
- RA-9: Criticality Analysis
- SA-1: Policy and Procedures
- SA-10: Developer Configuration Management
- SA-10(1): Software and Firmware Integrity Verification
- SA-10(2): Alternative Configuration Management Processes
- SA-10(3): Hardware Integrity Verification
- SA-10(4): Trusted Generation
- SA-10(5): Mapping Integrity for Version Control
- SA-10(6): Trusted Distribution
- SA-10(7): Security and Privacy Representatives
- SA-11: Developer Testing and Evaluation
- SA-11(1): Static Code Analysis
- SA-11(2): Threat Modeling and Vulnerability Analyses
- SA-11(3): Independent Verification of Assessment Plans and Evidence
- SA-11(4): Manual Code Reviews
- SA-11(5): Penetration Testing
- SA-11(6): Attack Surface Reviews
- SA-11(7): Verify Scope of Testing and Evaluation
- SA-11(8): Dynamic Code Analysis
- SA-11(9): Interactive Application Security Testing
- SA-12: Supply Chain Protection
- SA-12(1): Acquisition Strategies / Tools / Methods
- SA-12(10): Validate as Genuine and Not Altered
- SA-12(11): Penetration Testing / Analysis of Elements, Processes, and Actors
- SA-12(12): Inter-organizational Agreements
- SA-12(13): Critical Information System Components
- SA-12(14): Identity and Traceability
- SA-12(15): Processes to Address Weaknesses or Deficiencies
- SA-12(2): Supplier Reviews
- SA-12(3): Trusted Shipping and Warehousing
- SA-12(4): Diversity of Suppliers
- SA-12(5): Limitation of Harm
- SA-12(6): Minimizing Procurement Time
- SA-12(7): Assessments Prior to Selection / Acceptance / Update
- SA-12(8): Use of All-source Intelligence
- SA-12(9): Operations Security
- SA-13: Trustworthiness
- SA-14: Criticality Analysis
- SA-14(1): Critical Components with No Viable Alternative Sourcing
- SA-15: Development Process, Standards, and Tools
- SA-15(1): Quality Metrics
- SA-15(10): Incident Response Plan
- SA-15(11): Archive System or Component
- SA-15(12): Minimize Personally Identifiable Information
- SA-15(13): Logging Syntax
- SA-15(2): Security and Privacy Tracking Tools
- SA-15(3): Criticality Analysis
- SA-15(4): Threat Modeling and Vulnerability Analysis
- SA-15(5): Attack Surface Reduction
- SA-15(6): Continuous Improvement
- SA-15(7): Automated Vulnerability Analysis
- SA-15(8): Reuse of Threat and Vulnerability Information
- SA-15(9): Use of Live Data
- SA-16: Developer-provided Training
- SA-17: Developer Security and Privacy Architecture and Design
- SA-17(1): Formal Policy Model
- SA-17(2): Security-relevant Components
- SA-17(3): Formal Correspondence
- SA-17(4): Informal Correspondence
- SA-17(5): Conceptually Simple Design
- SA-17(6): Structure for Testing
- SA-17(7): Structure for Least Privilege
- SA-17(8): Orchestration
- SA-17(9): Design Diversity
- SA-18: Tamper Resistance and Detection
- SA-18(1): Multiple Phases of System Development Life Cycle
- SA-18(2): Inspection of Systems or Components
- SA-19: Component Authenticity
- SA-19(1): Anti-counterfeit Training
- SA-19(2): Configuration Control for Component Service and Repair
- SA-19(3): Component Disposal
- SA-19(4): Anti-counterfeit Scanning
- SA-2: Allocation of Resources
- SA-20: Customized Development of Critical Components
- SA-21: Developer Screening
- SA-21(1): Validation of Screening
- SA-22: Unsupported System Components
- SA-22(1): Alternative Sources for Continued Support
- SA-23: Specialization
- SA-24: Design For Cyber Resiliency
- SA-3: System Development Life Cycle
- SA-3(1): Manage Preproduction Environment
- SA-3(2): Use of Live or Operational Data
- SA-3(3): Technology Refresh
- SA-4: Acquisition Process
- SA-4(1): Functional Properties of Controls
- SA-4(10): Use of Approved PIV Products
- SA-4(11): System of Records
- SA-4(12): Data Ownership
- SA-4(2): Design and Implementation Information for Controls
- SA-4(3): Development Methods, Techniques, and Practices
- SA-4(4): Assignment of Components to Systems
- SA-4(5): System, Component, and Service Configurations
- SA-4(6): Use of Information Assurance Products
- SA-4(7): NIAP-approved Protection Profiles
- SA-4(8): Continuous Monitoring Plan for Controls
- SA-4(9): Functions, Ports, Protocols, and Services in Use
- SA-5: System Documentation
- SA-5(1): Functional Properties of Security Controls
- SA-5(2): Security-relevant External System Interfaces
- SA-5(3): High-level Design
- SA-5(4): Low-level Design
- SA-5(5): Source Code
- SA-6: Software Usage Restrictions
- SA-7: User-installed Software
- SA-8: Security and Privacy Engineering Principles
- SA-8(1): Clear Abstractions
- SA-8(10): Hierarchical Trust
- SA-8(11): Inverse Modification Threshold
- SA-8(12): Hierarchical Protection
- SA-8(13): Minimized Security Elements
- SA-8(14): Least Privilege
- SA-8(15): Predicate Permission
- SA-8(16): Self-reliant Trustworthiness
- SA-8(17): Secure Distributed Composition
- SA-8(18): Trusted Communications Channels
- SA-8(19): Continuous Protection
- SA-8(2): Least Common Mechanism
- SA-8(20): Secure Metadata Management
- SA-8(21): Self-analysis
- SA-8(22): Accountability and Traceability
- SA-8(23): Secure Defaults
- SA-8(24): Secure Failure and Recovery
- SA-8(25): Economic Security
- SA-8(26): Performance Security
- SA-8(27): Human Factored Security
- SA-8(28): Acceptable Security
- SA-8(29): Repeatable and Documented Procedures
- SA-8(3): Modularity and Layering
- SA-8(30): Procedural Rigor
- SA-8(31): Secure System Modification
- SA-8(32): Sufficient Documentation
- SA-8(33): Minimization
- SA-8(4): Partially Ordered Dependencies
- SA-8(5): Efficiently Mediated Access
- SA-8(6): Minimized Sharing
- SA-8(7): Reduced Complexity
- SA-8(8): Secure Evolvability
- SA-8(9): Trusted Components
- SA-9: External System Services
- SA-9(1): Risk Assessments and Organizational Approvals
- SA-9(2): Identification of Functions, Ports, Protocols, and Services
- SA-9(3): Establish and Maintain Trust Relationship with Providers
- SA-9(4): Consistent Interests of Consumers and Providers
- SA-9(5): Processing, Storage, and Service Location
- SA-9(6): Organization-controlled Cryptographic Keys
- SA-9(7): Organization-controlled Integrity Checking
- SA-9(8): Processing and Storage Location — U.S. Jurisdiction
- SC-1: Policy and Procedures
- SC-10: Network Disconnect
- SC-11: Trusted Path
- SC-11(1): Irrefutable Communications Path
- SC-12: Cryptographic Key Establishment and Management
- SC-12(1): Availability
- SC-12(2): Symmetric Keys
- SC-12(3): Asymmetric Keys
- SC-12(4): PKI Certificates
- SC-12(5): PKI Certificates / Hardware Tokens
- SC-12(6): Physical Control of Keys
- SC-13: Cryptographic Protection
- SC-13(1): FIPS-validated Cryptography
- SC-13(2): NSA-approved Cryptography
- SC-13(3): Individuals Without Formal Access Approvals
- SC-13(4): Digital Signatures
- SC-14: Public Access Protections
- SC-15: Collaborative Computing Devices and Applications
- SC-15(1): Physical or Logical Disconnect
- SC-15(2): Blocking Inbound and Outbound Communications Traffic
- SC-15(3): Disabling and Removal in Secure Work Areas
- SC-15(4): Explicitly Indicate Current Participants
- SC-16: Transmission of Security and Privacy Attributes
- SC-16(1): Integrity Verification
- SC-16(2): Anti-spoofing Mechanisms
- SC-16(3): Cryptographic Binding
- SC-17: Public Key Infrastructure Certificates
- SC-18: Mobile Code
- SC-18(1): Identify Unacceptable Code and Take Corrective Actions
- SC-18(2): Acquisition, Development, and Use
- SC-18(3): Prevent Downloading and Execution
- SC-18(4): Prevent Automatic Execution
- SC-18(5): Allow Execution Only in Confined Environments
- SC-19: Voice Over Internet Protocol
- SC-2: Separation of System and User Functionality
- SC-2(1): Interfaces for Non-privileged Users
- SC-2(2): Disassociability
- SC-20: Secure Name/Address Resolution Service (Authoritative Source)
- SC-20(1): Child Subspaces
- SC-20(2): Data Origin and Integrity
- SC-21: Secure Name/Address Resolution Service (Recursive or Caching Resolver)
- SC-21(1): Data Origin and Integrity
- SC-22: Architecture and Provisioning for Name/Address Resolution Service
- SC-23: Session Authenticity
- SC-23(1): Invalidate Session Identifiers at Logout
- SC-23(2): User-initiated Logouts and Message Displays
- SC-23(3): Unique System-generated Session Identifiers
- SC-23(4): Unique Session Identifiers with Randomization
- SC-23(5): Allowed Certificate Authorities
- SC-24: Fail in Known State
- SC-25: Thin Nodes
- SC-26: Decoys
- SC-26(1): Detection of Malicious Code
- SC-27: Platform-independent Applications
- SC-28: Protection of Information at Rest
- SC-28(1): Cryptographic Protection
- SC-28(2): Offline Storage
- SC-28(3): Cryptographic Keys
- SC-29: Heterogeneity
- SC-29(1): Virtualization Techniques
- SC-3: Security Function Isolation
- SC-3(1): Hardware Separation
- SC-3(2): Access and Flow Control Functions
- SC-3(3): Minimize Nonsecurity Functionality
- SC-3(4): Module Coupling and Cohesiveness
- SC-3(5): Layered Structures
- SC-30: Concealment and Misdirection
- SC-30(1): Virtualization Techniques
- SC-30(2): Randomness
- SC-30(3): Change Processing and Storage Locations
- SC-30(4): Misleading Information
- SC-30(5): Concealment of System Components
- SC-31: Covert Channel Analysis
- SC-31(1): Test Covert Channels for Exploitability
- SC-31(2): Maximum Bandwidth
- SC-31(3): Measure Bandwidth in Operational Environments
- SC-32: System Partitioning
- SC-32(1): Separate Physical Domains for Privileged Functions
- SC-33: Transmission Preparation Integrity
- SC-34: Non-modifiable Executable Programs
- SC-34(1): No Writable Storage
- SC-34(2): Integrity Protection on Read-only Media
- SC-34(3): Hardware-based Protection
- SC-35: External Malicious Code Identification
- SC-36: Distributed Processing and Storage
- SC-36(1): Polling Techniques
- SC-36(2): Synchronization
- SC-37: Out-of-band Channels
- SC-37(1): Ensure Delivery and Transmission
- SC-38: Operations Security
- SC-39: Process Isolation
- SC-39(1): Hardware Separation
- SC-39(2): Separate Execution Domain Per Thread
- SC-4: Information in Shared System Resources
- SC-4(1): Security Levels
- SC-4(2): Multilevel or Periods Processing
- SC-40: Wireless Link Protection
- SC-40(1): Electromagnetic Interference
- SC-40(2): Reduce Detection Potential
- SC-40(3): Imitative or Manipulative Communications Deception
- SC-40(4): Signal Parameter Identification
- SC-41: Port and I/O Device Access
- SC-42: Sensor Capability and Data
- SC-42(1): Reporting to Authorized Individuals or Roles
- SC-42(2): Authorized Use
- SC-42(3): Prohibit Use of Devices
- SC-42(4): Notice of Collection
- SC-42(5): Collection Minimization
- SC-43: Usage Restrictions
- SC-44: Detonation Chambers
- SC-45: System Time Synchronization
- SC-45(1): Synchronization with Authoritative Time Source
- SC-45(2): Secondary Authoritative Time Source
- SC-46: Cross Domain Policy Enforcement
- SC-47: Alternate Communications Paths
- SC-48: Sensor Relocation
- SC-48(1): Dynamic Relocation of Sensors or Monitoring Capabilities
- SC-49: Hardware-enforced Separation and Policy Enforcement
- SC-5: Denial-of-service Protection
- SC-5(1): Restrict Ability to Attack Other Systems
- SC-5(2): Capacity, Bandwidth, and Redundancy
- SC-5(3): Detection and Monitoring
- SC-50: Software-enforced Separation and Policy Enforcement
- SC-51: Hardware-based Protection
- SC-6: Resource Availability
- SC-7: Boundary Protection
- SC-7(1): Physically Separated Subnetworks
- SC-7(10): Prevent Exfiltration
- SC-7(11): Restrict Incoming Communications Traffic
- SC-7(12): Host-based Protection
- SC-7(13): Isolation of Security Tools, Mechanisms, and Support Components
- SC-7(14): Protect Against Unauthorized Physical Connections
- SC-7(15): Networked Privileged Accesses
- SC-7(16): Prevent Discovery of System Components
- SC-7(17): Automated Enforcement of Protocol Formats
- SC-7(18): Fail Secure
- SC-7(19): Block Communication from Non-organizationally Configured Hosts
- SC-7(2): Public Access
- SC-7(20): Dynamic Isolation and Segregation
- SC-7(21): Isolation of System Components
- SC-7(22): Separate Subnets for Connecting to Different Security Domains
- SC-7(23): Disable Sender Feedback on Protocol Validation Failure
- SC-7(24): Personally Identifiable Information
- SC-7(25): Unclassified National Security System Connections
- SC-7(26): Classified National Security System Connections
- SC-7(27): Unclassified Non-national Security System Connections
- SC-7(28): Connections to Public Networks
- SC-7(29): Separate Subnets to Isolate Functions
- SC-7(3): Access Points
- SC-7(4): External Telecommunications Services
- SC-7(5): Deny by Default — Allow by Exception
- SC-7(6): Response to Recognized Failures
- SC-7(7): Split Tunneling for Remote Devices
- SC-7(8): Route Traffic to Authenticated Proxy Servers
- SC-7(9): Restrict Threatening Outgoing Communications Traffic
- SC-8: Transmission Confidentiality and Integrity
- SC-8(1): Cryptographic Protection
- SC-8(2): Pre- and Post-transmission Handling
- SC-8(3): Cryptographic Protection for Message Externals
- SC-8(4): Conceal or Randomize Communications
- SC-8(5): Protected Distribution System
- SC-9: Transmission Confidentiality
- SI-1: Policy and Procedures
- SI-10: Information Input Validation
- SI-10(1): Manual Override Capability
- SI-10(2): Review and Resolve Errors
- SI-10(3): Predictable Behavior
- SI-10(4): Timing Interactions
- SI-10(5): Restrict Inputs to Trusted Sources and Approved Formats
- SI-10(6): Injection Prevention
- SI-11: Error Handling
- SI-12: Information Management and Retention
- SI-12(1): Limit Personally Identifiable Information Elements
- SI-12(2): Minimize Personally Identifiable Information in Testing, Training, and Research
- SI-12(3): Information Disposal
- SI-13: Predictable Failure Prevention
- SI-13(1): Transferring Component Responsibilities
- SI-13(2): Time Limit on Process Execution Without Supervision
- SI-13(3): Manual Transfer Between Components
- SI-13(4): Standby Component Installation and Notification
- SI-13(5): Failover Capability
- SI-14: Non-persistence
- SI-14(1): Refresh from Trusted Sources
- SI-14(2): Non-persistent Information
- SI-14(3): Non-persistent Connectivity
- SI-15: Information Output Filtering
- SI-16: Memory Protection
- SI-17: Fail-safe Procedures
- SI-18: Personally Identifiable Information Quality Operations
- SI-18(1): Automation Support
- SI-18(2): Data Tags
- SI-18(3): Collection
- SI-18(4): Individual Requests
- SI-18(5): Notice of Correction or Deletion
- SI-19: De-identification
- SI-19(1): Collection
- SI-19(2): Archiving
- SI-19(3): Release
- SI-19(4): Removal, Masking, Encryption, Hashing, or Replacement of Direct Identifiers
- SI-19(5): Statistical Disclosure Control
- SI-19(6): Differential Privacy
- SI-19(7): Validated Algorithms and Software
- SI-19(8): Motivated Intruder
- SI-2: Flaw Remediation
- SI-2(1): Central Management
- SI-2(2): Automated Flaw Remediation Status
- SI-2(3): Time to Remediate Flaws and Benchmarks for Corrective Actions
- SI-2(4): Automated Patch Management Tools
- SI-2(5): Automatic Software and Firmware Updates
- SI-2(6): Removal of Previous Versions of Software and Firmware
- SI-2(7): Root Cause Analysis
- SI-20: Tainting
- SI-21: Information Refresh
- SI-22: Information Diversity
- SI-23: Information Fragmentation
- SI-3: Malicious Code Protection
- SI-3(1): Central Management
- SI-3(10): Malicious Code Analysis
- SI-3(2): Automatic Updates
- SI-3(3): Non-privileged Users
- SI-3(4): Updates Only by Privileged Users
- SI-3(5): Portable Storage Devices
- SI-3(6): Testing and Verification
- SI-3(7): Nonsignature-based Detection
- SI-3(8): Detect Unauthorized Commands
- SI-3(9): Authenticate Remote Commands
- SI-4: System Monitoring
- SI-4(1): System-wide Intrusion Detection System
- SI-4(10): Visibility of Encrypted Communications
- SI-4(11): Analyze Communications Traffic Anomalies
- SI-4(12): Automated Organization-generated Alerts
- SI-4(13): Analyze Traffic and Event Patterns
- SI-4(14): Wireless Intrusion Detection
- SI-4(15): Wireless to Wireline Communications
- SI-4(16): Correlate Monitoring Information
- SI-4(17): Integrated Situational Awareness
- SI-4(18): Analyze Traffic and Covert Exfiltration
- SI-4(19): Risk for Individuals
- SI-4(2): Automated Tools and Mechanisms for Real-time Analysis
- SI-4(20): Privileged Users
- SI-4(21): Probationary Periods
- SI-4(22): Unauthorized Network Services
- SI-4(23): Host-based Devices
- SI-4(24): Indicators of Compromise
- SI-4(25): Optimize Network Traffic Analysis
- SI-4(3): Automated Tool and Mechanism Integration
- SI-4(4): Inbound and Outbound Communications Traffic
- SI-4(5): System-generated Alerts
- SI-4(6): Restrict Non-privileged Users
- SI-4(7): Automated Response to Suspicious Events
- SI-4(8): Protection of Monitoring Information
- SI-4(9): Testing of Monitoring Tools and Mechanisms
- SI-5: Security Alerts, Advisories, and Directives
- SI-5(1): Automated Alerts and Advisories
- SI-6: Security and Privacy Function Verification
- SI-6(1): Notification of Failed Security Tests
- SI-6(2): Automation Support for Distributed Testing
- SI-6(3): Report Verification Results
- SI-7: Software, Firmware, and Information Integrity
- SI-7(1): Integrity Checks
- SI-7(10): Protection of Boot Firmware
- SI-7(11): Confined Environments with Limited Privileges
- SI-7(12): Integrity Verification
- SI-7(13): Code Execution in Protected Environments
- SI-7(14): Binary or Machine Executable Code
- SI-7(15): Code Authentication
- SI-7(16): Time Limit on Process Execution Without Supervision
- SI-7(17): Runtime Application Self-protection
- SI-7(2): Automated Notifications of Integrity Violations
- SI-7(3): Centrally Managed Integrity Tools
- SI-7(4): Tamper-evident Packaging
- SI-7(5): Automated Response to Integrity Violations
- SI-7(6): Cryptographic Protection
- SI-7(7): Integration of Detection and Response
- SI-7(8): Auditing Capability for Significant Events
- SI-7(9): Verify Boot Process
- SI-8: Spam Protection
- SI-8(1): Central Management
- SI-8(2): Automatic Updates
- SI-8(3): Continuous Learning Capability
- SI-9: Information Input Restrictions
- SR-1: Policy and Procedures
- SR-10: Inspection of Systems or Components
- SR-11: Component Authenticity
- SR-11(1): Anti-counterfeit Training
- SR-11(2): Configuration Control for Component Service and Repair
- SR-11(3): Anti-counterfeit Scanning
- SR-12: Component Disposal
- SR-2: Supply Chain Risk Management Plan
- SR-2(1): Establish SCRM Team
- SR-3: Supply Chain Controls and Processes
- SR-3(1): Diverse Supply Base
- SR-3(2): Limitation of Harm
- SR-3(3): Sub-tier Flow Down
- SR-4: Provenance
- SR-4(1): Identity
- SR-4(2): Track and Trace
- SR-4(3): Validate as Genuine and Not Altered
- SR-4(4): Supply Chain Integrity — Pedigree
- SR-5: Acquisition Strategies, Tools, and Methods
- SR-5(1): Adequate Supply
- SR-5(2): Assessments Prior to Selection, Acceptance, Modification, or Update
- SR-6: Supplier Assessments and Reviews
- SR-6(1): Testing and Analysis
- SR-7: Supply Chain Operations Security
- SR-8: Notification Agreements
- SR-9: Tamper Resistance and Detection
- SR-9(1): Multiple Stages of System Development Life Cycle
NIST SP 800-61
View framework →8 requirements · NIST SP 800-61
PCI DSS 4.0
View framework →12 requirements · PCI DSS 4.0
- Account and authentication security
- Apply secure configurations to all system components
- Develop and maintain secure systems and software
- Logging and monitoring
- Network security controls
- Protect account data with strong cryptography during transmission
- Protect stored account data
- Protect stored and transmitted account data
- Protect systems and networks from malicious software
- Restrict access to system components and cardholder data by business need
- Security governance and third-party oversight
- Vulnerability and malware management
SEC Marketing Content Analysis
View framework →12 requirements · SEC
- Approval, dissemination, and archive linkage
- Books and records retention
- Compensated testimonial disclosure controls
- Gross and net performance presentation controls
- Hypothetical performance governance
- Marketing rule anti-fraud standards
- Performance advertising controls
- Substantiation file requirements for material claims
- Testimonials and endorsements compliance
- Third-party rating due diligence
- Third-party ratings governance
- Time-period and benchmark consistency controls
SOC 1
View framework →8 requirements · SOC 1
- Complementary subservice organization controls
- Control design documentation
- Control objectives for financial reporting
- Exception and remediation governance
- Operating effectiveness evidence
- Scope boundary and system description accuracy
- Testing exception evaluation and reporting
- User entity control considerations
SOC 2 - Trust Services Criteria (2017)
View framework →55 requirements · SOC2
- COSO Principle 1: The entity demonstrates a commitment to integrity and ethical values
- COSO Principle 10: The entity selects and develops control activities that contribute to the mitigation of risks
- COSO Principle 11: The entity selects and develops general control activities over technology
- COSO Principle 12: The entity deploys control activities through policies and procedures
- COSO Principle 13: The entity obtains or generates and uses relevant, quality information to support the functioning of internal control
- COSO Principle 14: The entity internally communicates information necessary to support the functioning of internal control
- COSO Principle 15: The entity communicates with external parties regarding matters affecting the functioning of internal control
- COSO Principle 16: The entity selects, develops, and performs ongoing and/or separate evaluations
- COSO Principle 17: The entity evaluates and communicates internal control deficiencies in a timely manner
- COSO Principle 2: The board of directors demonstrates independence from management and exercises oversight of the development and performance of internal control
- COSO Principle 3: Management establishes, with board oversight, structures, reporting lines, and appropriate authorities and responsibilities
- COSO Principle 4: The entity demonstrates a commitment to attract, develop, and retain competent individuals
- COSO Principle 5: The entity holds individuals accountable for their internal control responsibilities
- COSO Principle 6: The entity specifies objectives with sufficient clarity to enable identification and assessment of risks
- COSO Principle 7: The entity identifies risks to the achievement of its objectives and analyzes risks
- COSO Principle 8: The entity considers the potential for fraud in assessing risks
- COSO Principle 9: The entity identifies and assesses changes that could significantly impact the system of internal control
- Prior to issuing credentials and granting access, the entity registers and authorizes new users
- The entity assesses and manages risks associated with vendors and business partners
- The entity authorizes, designs, develops or acquires, configures, documents, tests, approves changes
- The entity authorizes, designs, develops, implements, operates, approves, maintains, and monitors environmental protections
- The entity authorizes, modifies, or removes access to data, software, functions, and services
- The entity collects personal information only for the purposes identified in the notice
- The entity communicates choices available regarding the collection, use, retention, disclosure, and disposal of personal information
- The entity corrects, amends, or appends personal information based on information provided by data subjects
- The entity discloses personal information to third parties with the explicit consent of data subjects
- The entity discontinues logical and physical protections over physical assets
- The entity disposes of confidential information to meet the entity's objectives
- The entity evaluates security events to determine whether they could or have resulted in failures
- The entity grants identified and authenticated data subjects the ability to access their stored personal information
- The entity identifies and maintains confidential information
- The entity identifies, develops, and implements activities to recover from security incidents
- The entity identifies, selects, and develops risk mitigation activities for risks arising from business disruptions
- The entity implements controls to prevent or detect and act upon the introduction of unauthorized or malicious software
- The entity implements logical access security measures to protect against threats from sources outside its system boundaries
- The entity implements logical access security software, infrastructure, and architectures
- The entity implements policies and procedures over system inputs to provide reasonable assurance
- The entity implements policies and procedures over system outputs
- The entity implements policies and procedures over system processing
- The entity implements policies and procedures to store inputs, items in processing, and outputs
- The entity implements procedures to receive, address, resolve, and communicate the resolution of inquiries and complaints
- The entity limits the use of personal information to purposes identified in the notice
- The entity maintains, monitors, and evaluates current processing capacity
- The entity monitors system components and the operation of those components for anomalies
- The entity obtains or generates, uses, and communicates relevant, quality information regarding the objectives
- The entity provides data subjects with an accounting of personal information disclosed to third parties
- The entity provides for data backup, recovery, and offsite storage
- The entity provides notice to data subjects about privacy practices
- The entity responds to identified security incidents by executing a defined incident response program
- The entity restricts physical access to facilities and protected information assets
- The entity restricts the transmission, movement, and removal of information to authorized users and processes
- The entity retains personal information consistent with its objectives
- The entity securely disposes of personal information
- To meet its objectives, the entity uses detection and monitoring procedures to identify anomalies
SOC 2 TSC 2017
View framework →55 requirements · SOC2_TSC_2017
- TSC-A1.1 Guidance
- TSC-A1.2 Guidance
- TSC-A1.3 Guidance
- TSC-C1.1 Guidance
- TSC-C1.2 Guidance
- TSC-CC1.1 Guidance
- TSC-CC1.2 Guidance
- TSC-CC1.3 Guidance
- TSC-CC1.4 Guidance
- TSC-CC1.5 Guidance
- TSC-CC2.1 Guidance
- TSC-CC2.2 Guidance
- TSC-CC2.3 Guidance
- TSC-CC3.1 Guidance
- TSC-CC3.2 Guidance
- TSC-CC3.3 Guidance
- TSC-CC3.4 Guidance
- TSC-CC4.1 Guidance
- TSC-CC4.2 Guidance
- TSC-CC5.1 Guidance
- TSC-CC5.2 Guidance
- TSC-CC5.3 Guidance
- TSC-CC6.1 Guidance
- TSC-CC6.2 Guidance
- TSC-CC6.3 Guidance
- TSC-CC6.4 Guidance
- TSC-CC6.5 Guidance
- TSC-CC6.6 Guidance
- TSC-CC6.7 Guidance
- TSC-CC6.8 Guidance
- TSC-CC7.1 Guidance
- TSC-CC7.2 Guidance
- TSC-CC7.3 Guidance
- TSC-CC7.4 Guidance
- TSC-CC7.5 Guidance
- TSC-CC8.1 Guidance
- TSC-CC9.1 Guidance
- TSC-CC9.2 Guidance
- TSC-P1.1 Guidance
- TSC-P2.1 Guidance
- TSC-P3.1 Guidance
- TSC-P3.2 Guidance
- TSC-P4.1 Guidance
- TSC-P4.2 Guidance
- TSC-P4.3 Guidance
- TSC-P5.1 Guidance
- TSC-P5.2 Guidance
- TSC-P6.1 Guidance
- TSC-P7.1 Guidance
- TSC-P8.1 Guidance
- TSC-PI1.1 Guidance
- TSC-PI1.2 Guidance
- TSC-PI1.3 Guidance
- TSC-PI1.4 Guidance
- TSC-PI1.5 Guidance
SOX
View framework →11 requirements · SOX
- Title I: Public Company Accounting Oversight Board
- Title II: Auditor Independence
- Title III: Corporate Responsibility
- Title IV: Enhanced Financial Disclosures
- Title IX: White-Collar Crime Penalty Enhancements
- Title V: Analyst Conflicts Of Interest
- Title VI: Commission Resources And Authority
- Title VII: Studies And Reports
- Title VIII: Corporate And Criminal Fraud Accountability
- Title X: Corporate Tax Returns
- Title XI: Corporate Fraud And Accountability
State & Specialized Regulations
View framework →4 requirements · Investment Advisers Act, Investment Advisers Act Section 206, Securities Exchange Act
Supervision & Governance Compliance
View framework →17 requirements · FINRA Rules, Investment Advisers Act, SEC, FINRA, BSA
- Borrowing/Lending with Customers (FINRA 3240)
- Code of Ethics Requirements for Registered Investment Advisers
- FINRA Rule 3310 - Anti-Money Laundering Compliance Program
- Outside Business Activities (FINRA 3270)
- Pay-to-Play: Political Contributions (SEC 206(4)-5)
- Private Fund Fee and Expense Oversight
- Private Securities Transactions (FINRA 3280)
- Recordkeeping: Books & Records (SEC 204-2)
- Reporting Requirements (FINRA 4530)
- RIA Compliance Program Rule Implementation
- SEC AML/BSA Requirements for Investment Advisers
- SEC Code of Ethics (RIA)
- SEC Compliance Program Rule (RIA Compliance Policies)
- SEC Supervision Rule - Section 203(e)(6)
- Solicitation: Referral Arrangements (SEC 206(4)-3)
- Supervisory Procedures and Compliance Program Oversight
TISAX
View framework →10 requirements · TISAX
- Assessment evidence readiness
- Continual improvement and reassessment readiness
- Incident response and notification coordination
- Information classification and handling procedures
- Information security governance
- Physical site and facility protections
- Protection of confidential information
- Prototype and physical security
- Secure development and engineering practices
- Third-party and partner assurance