AU-7(1): Automatic Processing

AU-7(1): Automatic Processing requires you to have tooling that can automatically process, sort, and search audit records for defined “events of interest” using specific fields you choose (for example: user, system, event type, outcome). To operationalize it, standardize audit log fields, centralize logs, implement searchable queries/detections, and retain repeatable evidence that the capability works. 1

Key takeaways:

  • Define “events of interest” and the exact audit record content you will use to find them.
  • Implement a working log pipeline plus query/detection capability (SIEM/log analytics) that supports processing, sorting, and searching.
  • Keep assessor-ready evidence: schemas, queries, screenshots/exports, and recurring test results mapped to an owner and procedure.

The AU-7(1) Automatic Processing requirement is easy to misunderstand because most teams already “have logs.” AU-7(1) asks for something more specific: an implemented capability to process, sort, and search audit records for events you care about, based on defined content in those records. In practice, that means you can’t rely on ad hoc grep, one-off database queries, or “we could pull it if we had to.” You need an operational, repeatable way to turn raw audit records into usable security and compliance outcomes.

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat AU-7(1) as a capability control with three anchors: (1) a documented list of events of interest and the fields required to identify them, (2) centralized audit records with consistent parsing/normalization, and (3) verified searches/detections that analysts can run on demand. Your objective is assessment readiness: demonstrate that the capability exists, is used, and is governed by an owner, procedure, and recurring evidence package.

Regulatory text

Requirement (excerpt): “Provide and implement the capability to process, sort, and search audit records for events of interest based on the following content: {{ insert: param, au-07.01_odp }}.” 1

What the operator must do

  • Provide and implement capability: you must have tooling and configuration in place (not a plan) that supports automated handling of audit records. 1
  • Process, sort, and search: you must be able to ingest/transform logs (process), organize/filter them (sort), and retrieve events using queries (search). 1
  • For events of interest: you must define which events matter for your environment (security, privacy, operations) and show you can find them. 1
  • Based on content: you must specify which fields in the audit records you rely on (for example: actor, target, action, result, timestamp, source IP) and implement searches that use those fields. 1

Practical read: AU-7(1) is about “searchable, structured audit logs with defined detections,” not “we store log files somewhere.”

Plain-English interpretation (what AU-7(1) really means)

You need a working, repeatable way to answer: “Did this event happen, to this system/data, by this identity, during this time window, and what was the outcome?” using automated processing and search over audit records.

“Automatic processing” does not require “AI.” It means the system does the heavy lifting: ingestion, parsing, indexing, and queryable storage, so searches are reliable and fast enough to support investigations, incident response, and audits.

Who it applies to

AU-7(1) commonly applies where NIST SP 800-53 is in scope, including:

  • Federal information systems implementing NIST SP 800-53 controls. 2
  • Contractor systems handling federal data where contracts, ATO requirements, or program security requirements call for 800-53 alignment. 2

Operationally, expect AU-7(1) to be assessed for:

  • Centralized logging/SIEM and log analytics platforms
  • Cloud audit logging (IaaS/PaaS/SaaS control plane logs)
  • Identity systems (SSO/MFA/IdP), privileged access tooling, and endpoint/server logs
  • High-value applications (customer portals, payment flows, regulated data apps)
  • Network/security telemetry (firewall, WAF, IDS/IPS), where those are part of your audit record strategy

What you actually need to do (step-by-step)

Step 1: Name an owner and draw the boundary

  • Assign a control owner (often Security Operations, Platform Security, or IT Operations) and a GRC point-of-contact who maintains the evidence package.
  • Define the system boundary: which environments, cloud accounts, applications, and platforms produce audit records in scope for AU-7(1).

Deliverable: AU-7(1) control record with owner, in-scope systems, and review cadence.

Step 2: Define “events of interest” and the content fields you will use

Create a short, explicit catalog of events of interest. Keep it operator-friendly. Examples:

  • Authentication events: successful and failed logins, MFA challenge failures
  • Privilege events: role assignments, admin actions, policy changes
  • Data access events: access to sensitive datasets, export/download activity
  • Security control events: logging disabled, audit policy changes
  • Integrity events: changes to critical configuration, deployment actions

For each event, define minimum audit record content you will use to find it, such as:

  • Timestamp (with timezone/UTC convention)
  • Actor (user/service account), session, and authentication context
  • Target resource (system, table, object, API endpoint)
  • Action/event type and outcome (success/failure)
  • Source attributes (IP, device, region) where available

Deliverable: “Events of Interest Matrix” (table) mapping event → required fields → log source(s) → query/detection name.
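
As a sketch, the matrix above can be kept as structured data rather than a static table, so event definitions, required fields, and query names stay in sync. The event names, fields, sources, and search IDs below are illustrative assumptions, not prescribed values:

```python
# Hypothetical sketch of an Events of Interest Matrix as structured data.
# All names below (events, fields, sources, search IDs) are illustrative.
from dataclasses import dataclass

@dataclass
class EventOfInterest:
    name: str                  # event family, e.g. "failed_login"
    required_fields: list      # minimum audit record content used to find it
    log_sources: list          # where the records originate
    saved_search: str          # name of the saved query/detection

MATRIX = [
    EventOfInterest(
        name="failed_login",
        required_fields=["timestamp", "actor", "outcome", "source_ip"],
        log_sources=["idp", "vpn"],
        saved_search="EOI-AUTH-001",
    ),
    EventOfInterest(
        name="admin_role_assignment",
        required_fields=["timestamp", "actor", "target", "action"],
        log_sources=["cloud_audit"],
        saved_search="EOI-PRIV-001",
    ),
]

def fields_for(event_name: str) -> list:
    """Look up the minimum audit record content for an event of interest."""
    for e in MATRIX:
        if e.name == event_name:
            return e.required_fields
    raise KeyError(event_name)
```

Keeping the matrix machine-readable also makes Step 5 testing easier, because tests can assert against the same field lists the searches use.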

Step 3: Centralize audit records and normalize formats

Implement a log pipeline that supports automated processing and search:

  • Confirm each log source is enabled and forwarding (cloud audit logs, OS logs, app logs, IdP logs).
  • Parse/normalize key fields into consistent names (even if only within the SIEM).
  • Ensure indexing supports the fields you defined in Step 2.

Deliverable: Log source inventory with status (enabled, flowing, parsed), plus parsing rules/config export where your platform supports it.
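
The normalization step can be sketched as a per-source field mapping into one common schema, so searches rely on consistent keys. The source names and field mappings below are illustrative assumptions, not any vendor’s actual field names:

```python
# Hypothetical sketch: rename source-specific fields to a common schema.
# FIELD_MAPS entries are illustrative, not real vendor field names.
FIELD_MAPS = {
    "idp": {"ts": "timestamp", "user": "actor",
            "result": "outcome", "ip": "source_ip"},
    "cloud_audit": {"eventTime": "timestamp", "userIdentity": "actor",
                    "eventName": "action", "sourceIPAddress": "source_ip"},
}

def normalize(source: str, record: dict) -> dict:
    """Rename known fields to the common schema; keep unmapped fields as-is."""
    mapping = FIELD_MAPS.get(source, {})
    return {mapping.get(k, k): v for k, v in record.items()}

raw = {"ts": "2024-05-01T12:00:00Z", "user": "alice",
       "result": "failure", "ip": "203.0.113.5"}
event = normalize("idp", raw)
# event now uses the common keys: "timestamp", "actor", "outcome", "source_ip"
```

Whether this mapping lives in your SIEM’s parsing rules or a pipeline stage, the point is that it is documented and under change control.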

Step 4: Implement processing, sorting, and searching as real operator workflows

This is where assessments are won or lost. You need concrete, demonstrated capability:

  • Processing: ingestion + parsing + enrichment where needed (asset tags, identity attributes).
  • Sorting: saved views/dashboards that filter by system, severity, identity type, or environment.
  • Searching: saved searches or detection rules tied to events of interest.

Minimum viable output:

  • Saved search for each major event family (auth, privilege, data access, config change).
  • Documented query logic with the fields used (the “content” requirement).

Deliverable: A library of saved searches/detections mapped to your Events of Interest Matrix.
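
A “saved search” reduces to a named, reusable predicate over normalized records plus a sort order. As a minimal sketch (field names follow the common schema assumed above; the search ID and logic are illustrative):

```python
# Hypothetical sketch of a saved search: a named filter + sort over
# normalized audit records. Field names and the "EOI-AUTH-001" ID are
# illustrative assumptions.
def search_failed_logins(records, since=None):
    """EOI-AUTH-001: failed authentication events, newest first."""
    hits = [
        r for r in records
        if r.get("event_type") == "auth"
        and r.get("outcome") == "failure"
        and (since is None or r.get("timestamp", "") >= since)
    ]
    return sorted(hits, key=lambda r: r.get("timestamp", ""), reverse=True)

SAMPLE = [
    {"event_type": "auth", "outcome": "failure",
     "timestamp": "2024-05-01T10:00:00Z", "actor": "bob"},
    {"event_type": "auth", "outcome": "success",
     "timestamp": "2024-05-01T11:00:00Z", "actor": "alice"},
    {"event_type": "config", "outcome": "success",
     "timestamp": "2024-05-01T12:00:00Z", "actor": "alice"},
]
hits = search_failed_logins(SAMPLE)
```

In practice this logic lives in your SIEM’s query language; documenting which fields each query uses is what satisfies the “based on content” clause.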

Step 5: Test the capability and make it repeatable

Run a tabletop-style log validation:

  • Generate a known event (for example: a test admin role assignment in a non-production account).
  • Verify the audit record appears in the centralized platform.
  • Execute the saved search and confirm it returns the event with the expected fields.

Deliverable: Test record (change ticket or test plan), screenshot/export of matching event, and a short results note.
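
The validation loop above can be made repeatable as a small harness: inject a known event, run the saved search, and assert the event comes back with the required fields. The `ingest` and `run_saved_search` callables below stand in for your real platform’s API; everything here is an illustrative assumption:

```python
# Hypothetical sketch of a repeatable log-pipeline validation.
# REQUIRED_FIELDS and the fake in-memory pipeline are illustrative.
REQUIRED_FIELDS = {"timestamp", "actor", "action", "outcome"}

def validate_pipeline(ingest, run_saved_search, test_event):
    """Inject a known event, then confirm the saved search returns it intact."""
    ingest(test_event)
    results = run_saved_search()
    match = next((r for r in results if r["actor"] == test_event["actor"]), None)
    assert match is not None, "test event not found by saved search"
    missing = REQUIRED_FIELDS - match.keys()
    assert not missing, f"missing required fields: {missing}"
    return True

# Fake in-memory pipeline standing in for a real SIEM:
store = []
def ingest(event):
    store.append(event)
def run_search():
    return [r for r in store if r.get("action") == "role_assignment"]

test_event = {"timestamp": "2024-05-01T12:00:00Z", "actor": "test-admin",
              "action": "role_assignment", "outcome": "success"}
```

Rerunning this after parser or log-source changes gives you the recurring test evidence assessors look for.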

Step 6: Package evidence for audits and continuous monitoring

Assessors want to see design + operation:

  • Design: documented events of interest, required fields, and implemented tooling.
  • Operation: proof you can search and retrieve records reliably.

This is where Daydream often becomes the practical resolution: map AU-7(1) to a named owner, a step-by-step procedure, and a recurring evidence checklist so evidence collection stops being a scramble.

Required evidence and artifacts to retain

Keep artifacts that show both configuration and real outcomes:

Governance

  • AU-7(1) control write-up: scope, owner, procedure, tools in use
  • Events of Interest Matrix (event → fields → sources → saved searches)

Technical configuration

  • Log source inventory and enablement evidence (screenshots/config exports)
  • SIEM/log analytics configuration showing ingestion/parsing/indexing for key fields
  • Saved searches/detection rules list (export where possible)

Operational proof

  • Query results exports or screenshots for representative events
  • Test plan/results for log pipeline validation
  • Incident or investigation tickets referencing the searches (redacted as needed)

Change management

  • Change records for parsing rules, log source onboarding, retention changes, and detection updates

Common exam/audit questions and hangups

Assessors tend to probe these areas:

  1. “What are your events of interest?”
    Hangup: teams answer with “everything.” Provide a curated list tied to risk and systems in scope.

  2. “Which fields do you rely on to find them?”
    Hangup: fields are inconsistent across sources. Show normalization and document where fields differ.

  3. “Show me the search.”
    Hangup: searches exist but are not saved, not named, or owned by one analyst. Maintain a shared library.

  4. “Prove it works.”
    Hangup: no test evidence. Keep a small set of repeatable tests and rerun after major changes.

  5. “Is it implemented for the whole boundary?”
    Hangup: gaps in cloud accounts, subsidiaries, or specific apps. Keep an explicit in-scope/out-of-scope list.

Frequent implementation mistakes (and how to avoid them)

  • Mistake: Treating log storage as compliance.
    Fix: demonstrate search capability with saved queries tied to events of interest. 1

  • Mistake: No defined “content” fields.
    Fix: create a field standard (minimum audit record content) and map each event to those fields. 1

  • Mistake: Searches are manual, tribal knowledge.
    Fix: require saved searches/detections with names, owners, and change control.

  • Mistake: Can’t correlate identity across sources.
    Fix: normalize identity (UPN/email/service account IDs) and enrich logs with directory attributes where feasible.

  • Mistake: Evidence is recreated during audits.
    Fix: set a recurring evidence routine. Daydream can track owners, procedures, and evidence artifacts so you can produce them without a fire drill.
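
For the identity-correlation fix in particular, a minimal sketch is a lookup that collapses identity variants (UPN, email, legacy domain format, service account ID) to one canonical key before searching. The directory table below is an illustrative assumption:

```python
# Hypothetical sketch: collapse identity variants to a canonical key so
# searches correlate the same principal across sources. DIRECTORY entries
# are illustrative.
DIRECTORY = {
    "alice@example.com": "alice",
    "EXAMPLE\\alice": "alice",
    "svc-deploy-01": "svc-deploy",
}

def canonical_identity(raw: str) -> str:
    """Map a raw identity string to its canonical form; fall back to lowercase."""
    return DIRECTORY.get(raw, raw.lower())
```

In production this enrichment typically happens at ingest time using directory attributes, so the canonical identity is an indexed, searchable field.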

Risk implications (why this control fails in real incidents)

If you cannot process, sort, and search audit records reliably, you will struggle to:

  • confirm whether suspicious activity occurred,
  • determine scope during incident response,
  • show traceability for privileged actions and configuration changes,
  • support forensics and after-action reporting.

That translates into operational risk (longer investigations, missed containment windows) and compliance risk (failed control testing, audit findings).

Practical 30/60/90-day execution plan

Days 0–30 (baseline and definitions)

  • Assign AU-7(1) owner and finalize system boundary.
  • Draft Events of Interest Matrix and minimum required fields.
  • Inventory log sources and identify gaps (not enabled, not centralized, missing fields).
  • Pick the “system of record” for searching (SIEM/log analytics platform).

Days 31–60 (implementation and search library)

  • Enable/route missing audit sources into the central platform.
  • Implement parsing/normalization for required fields.
  • Build and name saved searches/detections for each event family.
  • Create dashboards/views that support sorting (by system, identity, outcome, time).

Days 61–90 (prove, package, and operationalize)

  • Execute repeatable tests for representative events and retain outputs.
  • Formalize runbooks: “How to search for X event” mapped to saved queries.
  • Put change control around detections/parsers/log onboarding.
  • Stand up recurring evidence collection and review workflow (owner attestation + exports/screenshots).

Frequently Asked Questions

What counts as “automatic processing” for AU-7(1)?

A centralized log capability that ingests and parses audit records, indexes key fields, and supports repeatable searches and sorting without manual file-by-file handling. The capability must be implemented and demonstrable. 1

Do we need a SIEM to meet the AU-7(1) Automatic Processing requirement?

NIST doesn’t mandate a specific product in the control text. You do need a working platform that can process, sort, and search audit records at scale, and you must retain evidence that it works. 1

How do we define “events of interest” without boiling the ocean?

Start with identity/authentication, privileged actions, critical configuration changes, and sensitive data access for in-scope systems. Document the fields required to find each event, then expand based on incidents and risk assessments. 1

What evidence is strongest for auditors?

A traceable package: events-of-interest table, log source inventory, saved search/detection exports, and a small set of test executions that show the searches returning expected records. Pair that with an owner and a written procedure.

Our logs are centralized, but fields are inconsistent across sources. Is that a fail?

It becomes a finding when you can’t reliably search for events of interest based on defined content. Normalize the minimum set of fields in the SIEM (or document source-specific mappings) and keep those mappings under change control. 1

How should a GRC team operationalize AU-7(1) without living in the SIEM?

Require a maintained search library mapped to events of interest, plus recurring evidence exports and test results. Tools like Daydream help by mapping AU-7(1) to an owner, an implementation procedure, and a recurring evidence checklist so audits don’t depend on individual analysts.

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream