AU-13(1): Use of Automated Tools

AU-13(1) requires you to monitor open-source information and information sites using automated tools, so your team can detect publicly reported threats, exposures, and security signals relevant to your systems and third parties. Operationalize it by defining what sources and keywords matter, deploying tooling with triage workflows, and retaining evidence that monitoring runs on a schedule and drives action. 1

Key takeaways:

  • AU-13(1) is an OSINT monitoring control: automated collection plus a documented, repeatable workflow from alert to disposition. 1
  • Your biggest audit risk is “we do this informally” without artifacts: tool configuration, alert logs, triage records, and management review. 1
  • Scope matters: monitoring must map to your system boundary, mission/business functions, and key third parties, not a generic news feed. 2

The AU-13(1) use of automated tools requirement is a practical control that examiners expect to see operating, not just written down. It sits in the Audit and Accountability (AU) family, but in practice it functions like continuous intelligence gathering: you watch open sources for indicators that your organization, your systems, or your third parties may be exposed, discussed, or actively targeted. 2

For a Compliance Officer, CCO, or GRC lead, the fastest path is to treat AU-13(1) as a lightweight, repeatable “monitor → triage → act → record” loop. The goal is not perfect coverage of the internet. The goal is defendable, risk-based monitoring with automated tooling and evidence that (a) monitoring occurs, (b) alerts are reviewed, (c) outcomes are tracked, and (d) monitoring improvements are governed. 1

This page gives requirement-level implementation guidance you can hand to an operator: what to monitor, which teams must participate, how to set up workflows, and what artifacts to retain so an assessor can re-perform your control testing without guesswork.

Requirement: AU-13(1) use of automated tools (OSINT monitoring)

Plain-English interpretation: You must use automated tools to monitor open-source information and information sites for security-relevant information tied to your environment, then route findings through a documented triage and response process. 1

This is not “read security news.” AU-13(1) expects:

  • Defined monitoring targets (your org, brands, domains, IP ranges where appropriate, key applications, critical suppliers, executives).
  • Automated collection/alerting (not ad hoc manual searches).
  • Documented handling (who reviews, how often, what counts as actionable, how you close the loop). 1

Regulatory text

“Monitor open-source information and information sites using {{ insert: param, au-13.01_odp }}.” 1

Operator meaning: You need a defined set of open-source locations and an automated method (tooling) that monitors them. Because the text references an organization-defined parameter (ODP), your implementation must explicitly define what “open-source information and information sites” means for your organization (sources, frequency, and triggers), then prove the monitoring runs and is reviewed. 1

Who it applies to (entity and operational context)

Entities:

  • Federal information systems.
  • Contractor systems handling federal data. 1

Operational context: AU-13(1) applies when you operate systems where public reporting or public leakage signals matter to security outcomes, including:

  • Publicly reachable apps and APIs.
  • Cloud environments with internet-facing assets.
  • Third parties with network connectivity or sensitive data access.
  • Brand or executive impersonation exposure that can drive phishing or account takeover risk.
  • Public vulnerability disclosures that affect your technology stack. 2

What you actually need to do (step-by-step)

1) Define your organization-defined parameters (ODPs)

Write a short AU-13(1) “monitoring specification” that sets:

  • Sources: which categories you monitor (vulnerability databases, code repositories, paste sites, social platforms, forums, threat intel blogs, breach notification sites, domain registration changes, certificate transparency logs, app store listings, etc.).
  • Search terms/indicators: domains, subdomains, brand names, product names, key email domains, executive names, critical application names, third-party names tied to critical services.
  • Cadence: continuous alerting where possible; otherwise scheduled runs.
  • Triage SLA: internal expectation for initial review and escalation triggers (guidance value; document what you can meet).

This step is where most programs fail: they never write down what they monitor, so the control is untestable. 1
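One way to make the specification testable is to keep it as structured data under version control, so it can be diffed, reviewed, and exported as evidence. The sketch below assumes a hypothetical schema; all field names and values are illustrative, not prescribed by the control text.

```python
# Hypothetical AU-13(1) monitoring specification as structured data.
# Every key and value here is an illustrative placeholder.
MONITORING_SPEC = {
    "control": "AU-13(1)",
    "sources": [
        "vulnerability-databases",
        "code-repositories",
        "paste-sites",
        "certificate-transparency-logs",
    ],
    "indicators": {
        "domains": ["example.com"],            # replace with your real domains
        "brands": ["ExampleCorp"],
        "third_parties": ["Critical Supplier Inc"],
    },
    "cadence": "continuous",                   # or "daily" for scheduled runs
    "triage_sla_hours": 24,                    # internal guidance value
}

def spec_is_testable(spec: dict) -> bool:
    """A spec is auditable only if sources, indicators, and cadence are all defined."""
    has_indicators = any(spec.get("indicators", {}).values())
    return bool(spec.get("sources")) and has_indicators and "cadence" in spec
```

A check like `spec_is_testable` can run in CI so that an empty or partially defined specification is caught before it becomes an audit finding.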

2) Assign roles and decision rights

Minimum operating model:

  • Control owner (GRC/Compliance): maintains the AU-13(1) procedure, evidence checklist, and reporting.
  • Security operations: reviews alerts and executes technical validation.
  • Threat intel / IR lead (if separate): adjudicates whether an item becomes an incident, problem ticket, or risk entry.
  • Third-party risk: coordinates outreach when signals relate to a third party (breach mention, exposed credentials, vulnerable product).

Document escalation paths and who can declare “false positive” vs “requires action.” 2

3) Select and configure automated tooling

Your tool choice can vary; assessors care more about repeatability and evidence than brand names. Configure:

  • Source connectors (where the tool pulls from).
  • Keyword lists and entity profiles.
  • Alert routing (ticketing system, SIEM/SOAR, email to monitored mailbox with retention, Slack channel with exportable history).
  • Deduplication and suppression rules to control noise.
  • Access control on the tool (auditability matters because it’s in the AU family). 2
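Deduplication is the configuration item most often left vague. One common approach, sketched here under assumed field names (no specific tool's schema), is to hash a normalized tuple of source, matched indicator, and URL so repeated alerts for the same finding collapse into one triage item.

```python
import hashlib

def alert_key(alert: dict) -> str:
    """Stable dedup key: same source + indicator + normalized URL -> same key.
    Field names ("source", "indicator", "url") are assumptions for this sketch."""
    basis = "|".join([
        alert["source"],
        alert["indicator"],
        alert["url"].lower().rstrip("/"),   # normalize case and trailing slash
    ])
    return hashlib.sha256(basis.encode()).hexdigest()

def dedupe(alerts: list[dict]) -> list[dict]:
    """Keep only the first alert for each dedup key."""
    seen, unique = set(), []
    for alert in alerts:
        key = alert_key(alert)
        if key not in seen:
            seen.add(key)
            unique.append(alert)
    return unique
```

Suppression rules can follow the same pattern: store suppressed keys with an expiry date so silenced noise is documented rather than silently dropped.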

Daydream fit (earned mention): if your gap is operational proof, Daydream can function as the control system of record by mapping AU-13(1) to an owner, a documented procedure, and recurring evidence artifacts so audits do not depend on tribal knowledge. 1

4) Build the “alert → triage → disposition” workflow

Create a standard triage template that captures:

  • Alert source and timestamp.
  • What matched (keyword/entity).
  • Validation steps taken (screenshots, reproduction notes, links).
  • Impact assessment (system boundary, data types, third parties involved).
  • Disposition category: false positive, informational, watchlist, vulnerability management item, incident, third-party issue, brand protection issue.
  • Ticket/incident/risk register linkage.
  • Closure notes and approver.

Runbooks matter because AU-13(1) is easy to perform inconsistently across analysts. 2
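The triage template above can be enforced in code rather than left to analyst discipline. This is a minimal sketch; the field names mirror the template but are illustrative, not a required schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Disposition categories from the triage template (illustrative labels).
DISPOSITIONS = {
    "false_positive", "informational", "watchlist",
    "vuln_mgmt", "incident", "third_party", "brand_protection",
}

@dataclass
class TriageRecord:
    alert_source: str
    matched_indicator: str
    detected_at: datetime
    validation_notes: str = ""
    disposition: str = ""
    ticket_id: str = ""
    approver: str = ""

    def is_closed(self) -> bool:
        """A record counts as evidence only with a valid disposition, an
        approver, and a downstream ticket for actionable categories."""
        if self.disposition not in DISPOSITIONS:
            return False
        needs_ticket = self.disposition not in {"false_positive", "informational"}
        return bool(self.approver) and (self.ticket_id != "" or not needs_ticket)
```

Rejecting records that fail `is_closed()` at save time gives you the consistent, sampleable trail an assessor will ask for.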

5) Tie monitoring outputs to existing control motions

Avoid creating a parallel universe. Route outcomes into:

  • Vulnerability management for disclosed CVEs affecting your stack.
  • IAM processes for exposed credential findings.
  • Incident response for credible breach indicators.
  • Third-party risk management for supplier exposure signals.
  • Change management for emergency mitigations.

This linkage is what converts “monitoring” into risk reduction a reviewer can follow. 2
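The routing above can be expressed as a simple disposition-to-queue map so nothing actionable dies in an inbox. Queue names and disposition labels here are hypothetical placeholders for whatever your ticketing system uses.

```python
# Illustrative map from triage disposition to the existing control motion
# that should receive it. All names are placeholders.
ROUTES = {
    "vuln_mgmt": "vulnerability-management-queue",
    "credential_exposure": "iam-review-queue",
    "incident": "incident-response-queue",
    "third_party": "tprm-queue",
    "emergency_mitigation": "change-management-queue",
}

def route(disposition: str) -> str:
    # Unmapped dispositions fall back to the control owner for review
    # rather than being dropped silently.
    return ROUTES.get(disposition, "control-owner-review")
```

The explicit fallback is the important design choice: an unknown disposition becomes a review item instead of a silent gap.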

6) Establish governance and recurring review

Put AU-13(1) on a standing agenda for security governance:

  • Trend review (types of alerts, false positive drivers).
  • Coverage review (new brands, acquisitions, new third parties).
  • Tool health review (missed runs, connector failures).
  • Evidence sampling (spot-check a set of alerts end-to-end).

Keep minutes or an attestation record. 2
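Even a qualitative trend review benefits from a simple roll-up of dispositions per period. The sketch below assumes triage records carry a `disposition` field, as in the template earlier; it is an illustration, not a required metric.

```python
from collections import Counter

def disposition_summary(records: list[dict]) -> dict:
    """Count dispositions over a review period and compute the
    false-positive rate as a tuning signal for governance review."""
    counts = Counter(r["disposition"] for r in records)
    total = sum(counts.values())
    fp_rate = counts.get("false_positive", 0) / total if total else 0.0
    return {"counts": dict(counts), "false_positive_rate": round(fp_rate, 2)}
```

A rising false-positive rate points at keyword lists that need tuning; a rate near zero with low volume may mean coverage is too narrow.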

Required evidence and artifacts to retain

Keep artifacts that allow an assessor to re-perform and trace decisions:

  1. AU-13(1) procedure: ODPs, sources, cadence, roles, triage steps. 1
  2. Tool configuration exports: monitored entities/keywords, enabled sources/connectors, routing rules.
  3. Alert logs: time-stamped alerts with immutable history (or exported reports).
  4. Triage records: tickets/cases showing validation steps and disposition.
  5. Escalation evidence: linked incident records, vuln tickets, third-party outreach notes.
  6. Governance artifacts: recurring review notes, control owner attestation, metrics definition (even if qualitative).
  7. Access control list for the monitoring tool and evidence of periodic access review if your program requires it. 2
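A completeness check over the evidence package can run before each assessment. The artifact keys below are shorthand for the seven items above and are illustrative, not a mandated naming scheme.

```python
# Shorthand keys for the seven artifact types listed above (illustrative).
REQUIRED_ARTIFACTS = {
    "procedure",            # 1. AU-13(1) procedure with ODPs
    "tool_config_export",   # 2. monitored entities, sources, routing
    "alert_logs",           # 3. time-stamped alert history
    "triage_records",       # 4. tickets/cases with dispositions
    "escalation_evidence",  # 5. linked incidents, vuln tickets, outreach
    "governance_notes",     # 6. review minutes, attestations, metrics
    "tool_access_list",     # 7. tool ACL and access-review evidence
}

def missing_artifacts(package: set[str]) -> set[str]:
    """Return the artifact types absent from a collected evidence package."""
    return REQUIRED_ARTIFACTS - package
```

Running this against each evidence collection turns "we think we have everything" into a named list of gaps.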

Common exam/audit questions and hangups

Assessors commonly probe:

  • “Show me what you monitor.” If you cannot produce your ODPs and configurations, you will struggle. 1
  • “Prove it’s automated.” Screenshots of manual searches are weak unless paired with scheduled tooling outputs. 1
  • “Walk me through one alert end-to-end.” They will pick a sample and ask for evidence from detection to closure.
  • “How do you prevent alert fatigue?” They expect tuning, deduping, and documented suppression logic.
  • “How do you cover third parties?” They want to see monitoring tied to critical suppliers, not just your brand. 2

Frequent implementation mistakes (and how to avoid them)

  • Mistake: treating AU-13(1) as a “threat intel newsletter.” Why it fails: no defined sources, no automation, no evidence trail. Fix: document ODPs, configure tool alerts, retain logs. 1
  • Mistake: monitoring too broadly on day one. Why it fails: noise overwhelms triage; people stop reviewing. Fix: start with critical entities and expand via governance review.
  • Mistake: no linkage to ticketing/IR/vuln mgmt. Why it fails: alerts die in inboxes; no action proof. Fix: require a disposition and downstream record for actionable items.
  • Mistake: no ownership. Why it fails: everyone assumes someone else is watching. Fix: name a control owner and backup; define escalation paths.
  • Mistake: evidence lives in tools no one can export. Why it fails: you cannot satisfy audit sampling. Fix: pick tools/workflows that support export, retention, and immutable logs.

Enforcement context and risk implications

No public enforcement cases were provided in the source catalog for AU-13(1), so this page does not list case citations.

From a risk standpoint, failure modes are operational: you miss early warning signals (credential leaks, brand impersonation, exposed data mentions, disclosed vulnerabilities) and cannot show reasonable monitoring during an assessment. The control also affects incident response credibility because OSINT monitoring is often how organizations learn they are publicly exposed. 2

Practical 30/60/90-day execution plan

First 30 days (stand up and define)

  • Draft AU-13(1) monitoring specification (sources, keywords/entities, cadence, triage rules). 1
  • Assign control owner and triage team; document escalation paths.
  • Select tool(s) and configure initial monitoring targets.
  • Create triage template in your ticketing system and require dispositions.

Days 31–60 (operate, tune, and prove)

  • Run monitoring operationally; hold a weekly triage review with recorded outcomes.
  • Tune keyword lists, dedupe rules, and suppressions based on false positives.
  • Produce your first evidence package: configuration export, alert samples, and end-to-end tickets.
  • Map AU-13(1) to owner, implementation procedure, and recurring evidence artifacts in Daydream (or your GRC system) to formalize assessment readiness. 1

Days 61–90 (govern and integrate)

  • Add third-party and critical product monitoring targets based on your risk tiering.
  • Integrate alert routing with IR/vulnerability management workflows.
  • Establish recurring governance review and document decisions (coverage expansion, tuning changes).
  • Run an internal mock audit: sample alerts, re-perform triage, verify artifacts are complete.

Frequently Asked Questions

What counts as “open-source information and information sites” for AU-13(1)?

It’s an organization-defined parameter in the control text, so you must define your sources and categories and document them. Auditors expect your definition to be risk-based and tied to your system boundary. 1

Do we need a paid threat intelligence platform to satisfy AU-13(1)?

The requirement is “use automated tools,” not “buy a specific product.” You can meet intent with tools that automate collection/alerting and preserve evidence, as long as you can show configuration, alert history, and triage outcomes. 1

How do we prove the monitoring is actually happening?

Retain tool run histories or alert logs with timestamps plus exported configuration showing what is monitored. Pair that with tickets/cases that show triage and closure for a sample of alerts. 1

How should third-party risk management be involved?

Add critical third parties (and key products they provide) to your monitored entities/keywords, then define an escalation path to third-party owners when alerts implicate them. Keep outreach records and vendor communications as part of the disposition evidence. 2

We get too many false positives. Will that fail an audit?

Noise itself is not the failure; lack of tuning and lack of triage discipline is. Show suppression logic, periodic review decisions, and that actionable findings consistently become tracked work items. 2

What’s the minimum documentation an assessor will accept?

A written AU-13(1) procedure with ODPs, evidence of automated monitoring outputs, and at least a few end-to-end alert dispositions linked to downstream actions. Without those, the control is hard to test. 1

Footnotes

  1. NIST SP 800-53 Rev. 5 OSCAL JSON

  2. NIST SP 800-53 Rev. 5

Operationalize this requirement

Map requirement text to controls, owners, evidence, and review workflows inside Daydream.

See Daydream