Centralised log collection with bulk-export anomaly alerting
The most common dwell-time signal in the catalogue is a bulk-query or bulk-export pattern that nobody alerted on. Collect the logs, retain them, and alert on the patterns they already record.
- Quadrant: Strategic move
- Ease: 3 / 5
- Impact: 4 / 5
- Control family: Logging
- Cost band: Medium
- Catalogued incidents: 8
What the control is
Centralised log collection means every system in scope — endpoints, servers, identity infrastructure, cloud control planes, SaaS application audit logs, network telemetry — emits its security-relevant events to a single aggregation tier where they can be correlated, retained, queried and alerted on. The aggregation tier is typically a SIEM or an equivalent log-analytics platform; the choice of product is an implementation detail. The architectural commitment is that the log is collected centrally, not inspected only at the source system, and that retention is long enough to support incident reconstruction.
Anomaly alerting on top of the collection layer is what turns logs from a forensic record into an operational defence. The alerting rules need to cover the high-impact scenarios specifically: bulk customer-record query against CRM, bulk employee-record query against HR, off-hours administrative session, unusual identity-system configuration change, mass data-egress event, unusual cloud-API call pattern from an unfamiliar source. Generic out-of-the-box SIEM rules tend to alert on commodity-malware noise; the rules that catch the catalogue’s real patterns are usually written by hand against the specific application’s audit log schema.
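As an illustration of what a hand-written bulk-export rule amounts to, here is a minimal Python sketch: it sums exported record counts per user over a one-hour window and flags anyone who exceeds a threshold. The event fields (`user`, `action`, `record_count`, `ts`) and the threshold value are assumptions for illustration, not any specific CRM's audit schema; a real rule would be written against the platform's actual field names and tuned against the local baseline.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical flattened audit events; field names are illustrative only.
EVENTS = [
    {"user": "svc-report", "action": "export", "record_count": 200, "ts": "2024-03-01T02:10:00"},
    {"user": "svc-report", "action": "export", "record_count": 300, "ts": "2024-03-01T02:20:00"},
    {"user": "jsmith",     "action": "export", "record_count": 50,  "ts": "2024-03-01T10:00:00"},
]

WINDOW = timedelta(hours=1)
THRESHOLD = 400  # records per user per window; tune against local baseline

def bulk_export_alerts(events, window=WINDOW, threshold=THRESHOLD):
    """Sum exported records per user inside a sliding window; alert on excess."""
    per_user = defaultdict(list)
    for e in events:
        if e["action"] == "export":
            per_user[e["user"]].append((datetime.fromisoformat(e["ts"]), e["record_count"]))
    alerts = []
    for user, rows in per_user.items():
        rows.sort()
        for t0, _ in rows:
            total = sum(n for t, n in rows if t0 <= t < t0 + window)
            if total >= threshold:
                alerts.append({"user": user, "window_start": t0.isoformat(), "records": total})
                break  # one alert per user is enough for this sketch
    return alerts
```

The same shape — group by actor, aggregate over a window, compare against a tuned threshold — carries over to the off-hours-admin and mass-egress rules.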
Why it matters
The catalogue’s longest dwell-time incidents are also its most informative on this control. Wynn Resorts (2025): ShinyHunters spent five months inside Oracle PeopleSoft exfiltrating 800,000 employee records before the company became aware via the extortion-portal listing — an attacker in an HR system, running queries against the entire employee population, undetected. Salt Typhoon spent months inside the lawful-intercept systems of nine US telecommunications carriers before any individual carrier surfaced the campaign. Volt Typhoon’s pre-positioning across US critical-infrastructure operators spanned years before CISA’s joint advisories made the campaign public. Evicting UNC3886 from Singapore’s four major telcos took Operation CYBER GUARDIAN eleven months. SolarWinds Sunburst sat undetected for nine months between the malicious build and the Mandiant disclosure. OPM (2015): months of dwell time before the government became aware. Target (2013): forty days from the initial point-of-sale malware deployment to public disclosure. 23andMe (2023): credential-stuffing volume that should have been visible in the authentication logs from day one.
In every one of those incidents, the relevant log signals existed. The systems were emitting events. The bulk queries were recorded. The off-hours admin sessions were logged. What was missing was the central collection (in some cases) and the alerting tuning (in most). The catalogue’s signal is that the difference between a six-week containment and a six-month catastrophe is whether the alerts that catch the bulk-export pattern have been written.
Where the regulators sit
NCSC’s Cyber Assessment Framework principle C1 (“Security monitoring”) requires that “the activities of all networks and information systems supporting essential functions are monitored to detect potential security problems and to track the ongoing effectiveness of protective security measures.” The supporting guidance is detailed about what events to collect and how to alert. NIST SP 800-92 (“Guide to Computer Security Log Management”) is the foundational US-government standard. NIST SP 800-137 (“Information Security Continuous Monitoring”) covers the broader programmatic posture. CIS Controls v8 Control 8 (“Audit Log Management”) prescribes the collection, retention and alerting requirements with explicit sub-controls. MITRE ATT&CK’s data-source taxonomy provides the technique-level mapping for what needs to be collected to detect specific adversary behaviours.
The framework view is consistent. The disagreement is about implementation, retention duration and alerting strategy — not about whether to do it.
Where it usually breaks
Three failure modes recur. The first is application-layer log coverage. SIEM teams reflexively cover Windows event logs, network telemetry and endpoint EDR; they often do not cover the audit logs of the applications that hold the data — CRM, HR, ERP, code repositories, cloud control planes. Those are exactly the logs that catch the bulk-export pattern. The fix is to map the data-classification scheme onto the log-collection coverage and treat any tier-1 application as in-scope by default.
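One way to operationalise that mapping is a periodic coverage audit that compares the application inventory's data-classification tiers against the set of sources the SIEM actually ingests. The inventory structure and application names below are hypothetical:

```python
# Hypothetical inventory: application -> data-classification tier (1 = most
# sensitive), plus the set of log sources currently onboarded to the SIEM.
APP_TIERS = {
    "crm": 1, "hr": 1, "erp": 1, "git": 1,
    "chat": 2, "wiki": 3,
}
SIEM_SOURCES = {"crm", "wiki"}  # what is actually being collected today

def coverage_gaps(app_tiers, siem_sources, max_tier=1):
    """Tier-1 applications are in scope by default; report any not collected."""
    return sorted(app for app, tier in app_tiers.items()
                  if tier <= max_tier and app not in siem_sources)
```

Running the check against this inventory would flag `erp`, `git` and `hr` as tier-1 systems whose audit logs are not yet collected — exactly the gap described above.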
The second is alerting tuning. A SIEM with raw collection and no high-fidelity alerts is a forensic record, not a defence. The alerts that matter — bulk-export from CRM, off-hours admin from HR, unusual identity-system configuration change — usually need to be hand-written against the application schema. Templates exist for the major platforms; they need adapting to the local environment, not deploying out of the box.
The third is retention. NCSC, NIST and CIS converge on 90 days as a floor. Most regulated environments need a year. The cost of long-tail retention has fallen dramatically with cloud-native log platforms; the historical cost objection is increasingly weak.
What good looks like
Every tier-1 application’s audit logs collected centrally with at least 90 days of hot retention and a year in cold storage. Hand-written alerts on bulk-export queries against customer- and employee-data systems. Alerts on identity-system configuration change. Alerts on off-hours admin session. A documented playbook tying each alert to an investigation procedure. A monthly review of false-positive rates and alert-coverage gaps. Logs hashed or signed at write to prevent tampering by an attacker who reaches the logging tier.
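The write-time integrity requirement can be sketched as a simple hash chain: each entry's hash covers the previous entry's hash plus its own body, so any in-place edit breaks verification from that point onward. This is a minimal illustration of the idea, not a substitute for a logging platform's native signing or WORM storage:

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed anchor for the first entry

def chain_append(log, event):
    """Append an event, hashing it together with the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    log.append({"event": event, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every hash; any tampered entry invalidates the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps(entry["event"], sort_keys=True)
        if hashlib.sha256((prev + body).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

An attacker who reaches the logging tier can still delete the tail of the chain, which is why the verification anchor (the latest hash) should be shipped off-tier as well.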
The cost is the platform plus the SOC effort to write and tune the rules. The benefit is the difference between a five-month dwell time and a five-day one.
Where this control would have changed the outcome
- Wynn Resorts — ShinyHunters Oracle PeopleSoft breach: ShinyHunters exploited an unpatched Oracle PeopleSoft flaw at Wynn Resorts in 2025, exfiltrating 800,000 employee records and demanding $1.5M — confirmed months later when the listing went public.
- US telecoms — Salt Typhoon espionage campaign: Salt Typhoon, a Chinese state-sponsored group, compromised lawful-intercept systems at nine US telecom carriers, reading wiretap lists and senior officials' communications for months before detection.
- US critical infrastructure — Volt Typhoon pre-positioning: Chinese state-sponsored Volt Typhoon silently pre-positioned inside US water, power and communications infrastructure for years, building persistent access for potential future use.
- Singapore telecommunications — UNC3886 espionage: Singapore's Cyber Security Agency confirmed UNC3886 had persistent rootkit access across all four major Singapore telcos; the eviction operation took eleven months.
- SolarWinds — Sunburst supply-chain compromise: Russian SVR operators compromised SolarWinds' Orion build server and pushed the Sunburst backdoor via a signed software update to 18,000 customers including nine federal agencies.
- US Office of Personnel Management — federal records breach: Chinese state-sponsored actors exfiltrated 21.5 million federal personnel records from the Office of Personnel Management, including security-clearance files with detailed background investigation data.
- Target Corporation — 2013 card breach: Attackers entered Target's network through an HVAC supplier's stolen credentials, deployed memory-scraping malware on point-of-sale terminals, and exfiltrated 40M cards and 70M customer records.
- 23andMe — credential-stuffing breach: Attackers credential-stuffed 14,000 23andMe accounts, then exploited the DNA Relatives feature to harvest profile data on 6.9 million users including ancestry and health predisposition records.
Sources
- NCSC Cyber Assessment Framework — C1: Security monitoring // primary
- NIST SP 800-92 — Guide to Computer Security Log Management // primary
- NIST SP 800-137 — Information Security Continuous Monitoring (ISCM) // primary
- CIS Controls v8 — Control 8: Audit Log Management // primary
- MITRE ATT&CK — DS0028: Logon Session; DS0017: Command Execution // analysis