AU-7(2): Automatic Sort and Search
To meet the AU-7(2) (Automatic Sort and Search) requirement, you must implement audit reporting that lets authorized users automatically sort and search audit records so investigations, monitoring, and exams can be performed quickly and consistently. Operationalize it by standardizing queryable fields, enforcing access controls, and retaining proof that authorized users can run repeatable searches on current logs.
Key takeaways:
- Build audit reports around queryable, indexed fields (time, user, event type, system, outcome) and validate search performance in practice.
- Control who can search what; broad search capability can amplify sensitive-data exposure if roles and permissions are loose.
- Keep assessor-ready evidence: saved queries, screenshots, sample reports, and a procedure mapping owners, cadence, and artifacts.
AU-7(2) sits in the NIST SP 800-53 Audit and Accountability family and focuses on a practical outcome: when you have an incident, an insider concern, or an auditor asking for proof, your team can rapidly find and organize audit evidence without manual log scraping. “Automatic sort and search” is less about buying a specific tool and more about ensuring your audit records are structured, accessible, and usable for real investigations.
For most organizations, this becomes a logging architecture and operations requirement. You need a place where audit records land (SIEM, log analytics platform, centralized logging service, managed detection platform), a defined set of fields that are consistently captured, and permissions that allow security and compliance staff to search while preventing unnecessary exposure of sensitive log content.
If you handle federal data, operate a federal information system, or support customers who flow down NIST SP 800-53 requirements, this control enhancement is often examined as “show me you can answer audit questions quickly.” The failure mode is common: logs exist, but nobody can reliably query them across systems, time ranges, identities, and event types. This page gives requirement-level implementation guidance you can execute immediately.
Regulatory text
Requirement: NIST SP 800-53 control AU-7(2), “Automatic Sort and Search.” 1
Framework context: AU-7 is “Audit Record Reduction and Report Generation”; AU-7(2) is the enhancement “Automatic Sort and Search.” 2
What the operator must do: implement audit report generation capabilities that support automated sorting and searching of audit records. In practice, an assessor expects you to demonstrate that authorized personnel can query audit logs by meaningful fields (example: user, host, time window, event type), sort results, and export or preserve the report output for investigations and audit requests.
Plain-English interpretation (what “automatic” means)
“Automatic” means your team does not depend on manual, error-prone steps like:
- Copying raw logs into spreadsheets to filter.
- Grepping individual servers one by one.
- Ad hoc parsing that changes per analyst.
Instead, you have a repeatable search interface (or API), consistent fields, and saved queries or dashboards that produce the same results when run again under the same conditions.
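To make "repeatable" concrete, here is a minimal sketch of what a deterministic, field-based search looks like over normalized audit records. The record shape and field names (`ts`, `actor`, `event`, `outcome`) are illustrative assumptions, not a required schema; a real deployment would run the equivalent query in a SIEM or log platform.

```python
from datetime import datetime, timezone

# Hypothetical normalized audit records; field names are illustrative.
RECORDS = [
    {"ts": "2024-05-01T10:05:00Z", "actor": "alice", "event": "login", "outcome": "failure"},
    {"ts": "2024-05-01T09:00:00Z", "actor": "bob",   "event": "login", "outcome": "success"},
    {"ts": "2024-05-01T10:10:00Z", "actor": "alice", "event": "login", "outcome": "failure"},
]

def search(records, start, end, **filters):
    """Return records in [start, end) matching all field filters, sorted by time."""
    hits = [
        r for r in records
        if start <= datetime.fromisoformat(r["ts"].replace("Z", "+00:00")) < end
        and all(r.get(k) == v for k, v in filters.items())
    ]
    return sorted(hits, key=lambda r: r["ts"])

window = (datetime(2024, 5, 1, tzinfo=timezone.utc),
          datetime(2024, 5, 2, tzinfo=timezone.utc))
failed_logins = search(RECORDS, *window, event="login", outcome="failure")
```

The point is not the implementation: it is that the same inputs (time window, filters, sort key) always yield the same ordered output, which is what makes a search defensible as evidence.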
Who it applies to (entity and operational context)
Applies to:
- Federal information systems implementing NIST SP 800-53 controls. 2
- Contractor systems handling federal data where NIST SP 800-53 requirements are flowed down via contract, authority to operate (ATO) boundary, or customer security requirements. 2
Operational contexts where AU-7(2) is typically assessed:
- Centralized logging/SIEM operations for production systems.
- Identity and access monitoring (admin actions, authentication events).
- Cloud activity logging (control plane events).
- High-sensitivity enclaves where incident response depends on fast log triage.
What you actually need to do (step-by-step)
1) Assign ownership and define the “audit search service”
Deliverable: a named control owner (often SecOps/SOC) and a supporting owner (platform/IT), plus a defined system of record for searchable audit logs.
- Decide which platform is authoritative for audit searches (example: SIEM or centralized log platform).
- Define which sources must forward logs into it (identity, endpoints, servers, cloud control plane, key applications).
Why auditors care: if audit records are scattered, you cannot credibly claim “automatic sort and search” across the system boundary.
2) Standardize minimum searchable fields (normalize)
Deliverable: a field dictionary and mapping.
Set a minimum set of fields that must be present (or reliably derivable) for each audited event type. Common ones:
- Timestamp (with timezone consistency)
- Actor (user/service account)
- Source (IP/host/device)
- Target resource (system, application, object)
- Event type/action (login, privilege change, config change, data access)
- Outcome/status (success/failure)
- Correlation ID / request ID (where applicable)
Implementation notes:
- Normalize into a common schema in your SIEM/log platform.
- Ensure indexing is enabled for the fields you plan to search and sort on; otherwise search nominally exists but is operationally unusable.
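A field mapping like the one above can be captured as a small, reviewable normalizer. This sketch assumes hypothetical vendor field names (`eventTime`, `userIdentity`, and so on); the useful property is that it fails loudly when a required field cannot be derived, so parser breakage surfaces at ingestion rather than at search time.

```python
# Illustrative mapping from raw vendor fields to the minimum field
# dictionary. Source field names are assumptions for the example.
FIELD_MAP = {
    "eventTime": "timestamp",
    "userIdentity": "actor",
    "sourceIPAddress": "source",
    "resource": "target",
    "eventName": "event_type",
    "status": "outcome",
}

def normalize(raw: dict) -> dict:
    """Map a raw event into the common schema, rejecting incomplete events."""
    out = {dst: raw.get(src) for src, dst in FIELD_MAP.items()}
    missing = [k for k, v in out.items() if v is None]
    if missing:
        raise ValueError(f"unmapped required fields: {missing}")
    return out

event = normalize({
    "eventTime": "2024-05-01T10:05:00Z",
    "userIdentity": "alice",
    "sourceIPAddress": "10.0.0.5",
    "resource": "prod-db",
    "eventName": "login",
    "status": "failure",
})
```

In practice this logic lives in your SIEM's parsing/normalization layer; keeping the mapping in version control gives you the field dictionary artifact assessors ask for.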
3) Implement automatic sort and search features (queries, dashboards, reports)
Deliverable: saved searches and/or dashboards that cover the audit questions you get repeatedly.
Build at least:
- A time-bounded search template (last X hours/days) with sort by time.
- A user-centric template (all events for user/service account).
- A privileged activity template (admin group changes, role grants, policy edits).
- An authentication failure template (failed logins by source or account).
- A change activity template (configuration changes, deployment actions).
Make these saved objects with access controls, not tribal knowledge in an analyst’s head. Your assessor will accept screenshots and exported query definitions as evidence, but they will prefer that you run the searches live.
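One way to keep a query pack governed is to treat saved searches as configuration that can be reviewed, versioned, and exported as evidence. This is a sketch under assumptions: the query names, filter fields, and in-memory execution are illustrative stand-ins for your platform's saved-search objects.

```python
# Sketch of a governed "query pack": saved searches as reviewable data
# rather than tribal knowledge. Names and fields are illustrative.
SAVED_QUERIES = {
    "auth_failures": {
        "filters": {"event_type": "login", "outcome": "failure"},
        "sort": ["timestamp"],
        "owner": "secops",
    },
    "privileged_changes": {
        "filters": {"event_type": "role_grant"},
        "sort": ["timestamp"],
        "owner": "secops",
    },
}

def run_saved_query(name, records):
    """Apply a saved query's filters and sort keys to normalized records."""
    q = SAVED_QUERIES[name]
    hits = [r for r in records
            if all(r.get(k) == v for k, v in q["filters"].items())]
    for key in reversed(q["sort"]):
        hits.sort(key=lambda r: r[key])
    return hits

sample = [
    {"timestamp": "2024-05-01T10:05:00Z", "event_type": "login", "outcome": "failure"},
    {"timestamp": "2024-05-01T09:00:00Z", "event_type": "login", "outcome": "failure"},
    {"timestamp": "2024-05-01T09:30:00Z", "event_type": "login", "outcome": "success"},
]
failures = run_saved_query("auth_failures", sample)
```

Because each entry carries an owner, the same structure doubles as the governance record: who maintains the query, what it filters on, and how results are ordered.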
4) Lock down access (search is powerful)
Deliverable: role-based access control (RBAC) for who can search, what indices they can access, and what fields they can view.
Control design expectations:
- Limit “search everything” permissions to incident response/SOC and a small set of administrators.
- Segment sensitive logs (example: authentication logs may expose user identifiers; application logs may contain sensitive fields if logging is sloppy).
- Keep an access review mechanism tied to joiner/mover/leaver.
This is where AU-7(2) intersects with privacy and insider risk: the better your search, the more damage an over-permissioned user can do.
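The RBAC expectations above reduce to a small authorization decision: which role can query which log segment, and whether it can export. This sketch uses hypothetical role names and index scopes; real platforms express the same idea through index patterns, namespaces, or data-access roles.

```python
# Minimal RBAC sketch for search access. Roles, index names, and the
# export flag are illustrative assumptions.
ROLES = {
    "soc_analyst": {"indices": {"auth", "endpoint", "cloud"}, "can_export": True},
    "app_dev":     {"indices": {"app"},                       "can_export": False},
}

def authorize(role: str, index: str, export: bool = False) -> bool:
    """Deny by default; allow only in-scope indices and permitted exports."""
    grants = ROLES.get(role)
    if grants is None:
        return False
    if index not in grants["indices"]:
        return False
    if export and not grants["can_export"]:
        return False
    return True
```

The deny-by-default shape matters more than the mechanics: an unknown role, an out-of-scope index, or an unpermitted export all fail closed, which is what an access reviewer will want to see.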
5) Prove it works with a repeatable test procedure
Deliverable: a short “AU-7(2) operational test” that a control performer can run on a schedule and during audits.
A practical test script:
- Select a defined time window.
- Run a saved query for authentication failures; sort by count and time; export results.
- Run a saved query for privileged changes; filter to a known admin group; export results.
- Run a user activity search for a test account; sort chronologically; confirm the timeline reconstructs actions.
Keep output examples and note the log sources searched.
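The test script above can itself be automated so each run produces dated evidence files. This is a sketch under assumptions: the `run_query` callable, query names, and output paths are placeholders for your platform's API and evidence store.

```python
import datetime
import json
import tempfile

def run_operational_test(run_query, queries, outdir="."):
    """Run each saved query via run_query(name) and write one dated
    evidence file per query; return the paths written."""
    stamp = datetime.date.today().isoformat()
    written = []
    for name in queries:
        results = run_query(name)
        path = f"{outdir}/au7-2-test-{stamp}-{name}.json"
        with open(path, "w") as f:
            json.dump({"query": name, "count": len(results),
                       "results": results}, f, indent=2)
        written.append(path)
    return written

# Dry run with a stub query runner and a throwaway directory.
with tempfile.TemporaryDirectory() as d:
    files = run_operational_test(lambda name: [], ["auth_failures"], outdir=d)
```

Running this on a schedule gives you a standing trail of "we exercised sort and search on this date, against these queries," which is exactly the operational evidence AU-7(2) assessments probe for.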
6) Operationalize retention and coverage (so search has something to search)
Deliverable: documented log source onboarding checklist and monitoring for ingestion gaps.
Even though AU-7(2) is about sort/search, assessors often test “do you actually have the logs across the boundary?” Put guardrails in place:
- New systems cannot go live without forwarding audit logs to the search platform.
- Alerts for ingestion stoppage, parser failures, or sudden drops in volume.
- A ticket trail showing remediation when gaps occur.
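The "sudden drop in volume" alert can be as simple as comparing today's per-source event counts against a trailing baseline. The sources, counts, and 50% threshold below are illustrative assumptions; tune the threshold per source, since normal volume varies.

```python
# Illustrative ingestion-health check: flag sources that stopped
# sending or dropped sharply below their trailing baseline.
def ingestion_gaps(history, today, drop_ratio=0.5):
    """history: {source: [recent daily counts]}, today: {source: count}.
    Returns sources with zero events or volume below drop_ratio * baseline."""
    flagged = []
    for source, counts in history.items():
        baseline = sum(counts) / len(counts)
        current = today.get(source, 0)
        if current == 0 or (baseline > 0 and current < baseline * drop_ratio):
            flagged.append(source)
    return sorted(flagged)

gaps = ingestion_gaps(
    {"idp": [1000, 1100, 950], "endpoint": [500, 480, 520]},
    {"idp": 1020, "endpoint": 40},   # endpoint volume collapsed
)
```

Feeding the flagged list into your ticketing system produces the remediation trail assessors look for: the gap was detected, assigned, and fixed.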
7) Map the requirement to an audit-ready control record (make it assessable)
Deliverable: a control statement that maps AU-7(2) to owners, procedures, tools, and recurring evidence artifacts.
This is the fastest way to reduce assessment friction. Many teams manage this in Daydream so the control narrative, test steps, and evidence checklist stay consistent across audits and do not depend on one person.
Required evidence and artifacts to retain
Keep artifacts that prove both design (you built it) and operation (you use it):
Design evidence
- Logging architecture diagram showing centralized searchable audit repository.
- Field dictionary / normalization mapping for key log sources.
- RBAC configuration summary for log search access (roles, groups, scopes).
Operational evidence
- Exported saved queries/dashboards (or screenshots showing query definitions).
- Sample search outputs (exported reports) demonstrating sort and filter.
- Access review records for log search roles.
- Ingestion health reports or alerts (showing you detect gaps).
- A brief procedure/runbook: “How to perform audit log searches for investigations and audits.”
Common exam/audit questions and hangups
Assessors tend to ask:
- “Show me you can search by user, time, and event type across your boundary.”
- “Who can run these searches? How do you approve access?”
- “Are searches repeatable? Or does each analyst do it differently?”
- “How do you know logs are complete for the period you searched?”
- “Can you export results and preserve them for an investigation record?”
Hangups that slow audits:
- Logs exist but are not centralized.
- You can search one source, but not across core sources (IDP, endpoints, cloud).
- Queries are ad hoc, not saved, and not governed.
- Search access is overbroad, creating a separate control problem.
Frequent implementation mistakes and how to avoid them
| Mistake | Why it fails AU-7(2) in practice | How to avoid it |
|---|---|---|
| Relying on raw log storage without indexed fields | Searches become slow, incomplete, or manual | Define a minimum field set and index it in the search platform |
| “One SIEM dashboard” with no drill-down | Auditors want evidence you can answer specific questions | Maintain saved searches for common audit scenarios and run them live |
| No governance for search access | Search capability increases sensitive-data exposure | Implement RBAC, approvals, and periodic access reviews |
| Inconsistent timestamps/time zones | Sorting produces misleading timelines | Normalize timestamps at ingestion and document time standard |
| Gaps in ingestion go unnoticed | A search that returns nothing may be a logging failure | Monitor ingestion health and retain tickets showing remediation |
Enforcement context and risk implications
No public enforcement cases were provided in the source catalog for AU-7(2), so you should treat this as an assessment-readiness and incident-readiness requirement rather than a control with direct, cited penalty history in this dataset.
Risk implications are still concrete:
- Investigation risk: delayed scoping and containment when you cannot quickly find related events.
- Audit risk: control fails as “implemented but not operating effectively” if you cannot demonstrate repeatable search/sort on demand.
- Data exposure risk: improving search without tightening access increases insider risk and may conflict with internal privacy expectations.
Practical execution plan (30/60/90-day)
First 30 days (stabilize and define)
- Name the AU-7(2) control owner and approve the “audit search system of record.”
- Inventory log sources in scope for the system boundary (identity, endpoints, servers, cloud, key apps).
- Draft the minimum searchable field dictionary.
- Confirm who currently has access to search logs; freeze unnecessarily broad access until RBAC is cleaned up.
- Create an evidence checklist aligned to the control record (owners, artifacts, cadence). Track it in Daydream so evidence requests do not become email archaeology.
By 60 days (implement and standardize)
- Onboard missing log sources into the centralized platform, prioritizing identity and privileged activity.
- Implement normalization and indexing for the minimum searchable fields.
- Build and save the core query pack (auth failures, privileged changes, user timeline, configuration changes).
- Write the short operational test procedure and run it once; retain outputs as initial evidence.
By 90 days (prove operations and harden)
- Add ingestion health monitoring and an escalation path for broken parsers/forwarders.
- Run an access review on log search roles; document approvals and removals.
- Conduct an internal “tabletop audit request”: have someone outside SecOps ask for a defined audit question, then time-box the response and retain the resulting report package.
- Finalize the AU-7(2) control narrative and evidence map in Daydream, including where artifacts live and who produces them.
Frequently Asked Questions
Do we need a SIEM to satisfy the AU-7(2) Automatic Sort and Search requirement?
You need a system that can automatically search and sort audit records with controlled access. Many organizations meet this with a SIEM, but a centralized log analytics platform can work if it supports saved queries, indexing, and RBAC.
What’s the minimum “show me” proof an auditor will accept?
A live demonstration is strongest: run a saved search over a defined time window, filter by user or event type, sort results, and export the report. Back it up with screenshots/exported query definitions and an access control listing for who can perform searches.
Can we meet AU-7(2) if different teams keep logs in different tools?
It’s possible but painful to defend. If you cannot perform consistent searches across the boundary from a defined audit search service, the requirement will look partially implemented and will slow incident response.
How do we prevent the log search tool from becoming a sensitive-data leakage point?
Treat search access as privileged. Use RBAC, restrict index/namespace access, limit export permissions, and perform periodic access reviews tied to role changes.
What fields should we prioritize for normalization if we can’t do everything at once?
Start with timestamp, actor identity, source, event type, target resource, and outcome. Those fields support most incident and audit questions and make sorting meaningful.
How does Daydream help with AU-7(2) specifically?
Daydream helps you keep AU-7(2) mapped to a clear owner, a repeatable procedure, and a recurring evidence set. That reduces scramble during audits and makes it easier to show consistent operation over time.
Footnotes
Operationalize this requirement
Map requirement text to controls, owners, evidence, and review workflows inside Daydream.
See Daydream