Domain 7: Security Operations · Module 44 of 84

Logging and Monitoring Activities

CISSP Domain 7 — Security Operations · A: Investigations and Monitoring · 10–12 minutes

Why Most Security Operations Centers Miss What Matters

Consider this: a mid-size enterprise generates roughly 10,000 to 50,000 log events per second across its servers, firewalls, endpoints, and applications. That is between 864 million and 4.3 billion events per day. Somewhere in that volume, an attacker is authenticating with stolen credentials, moving laterally through the network, and staging data for exfiltration.

Logging without monitoring is just storage. Monitoring without correlation is just noise. The value of both depends entirely on how you design the pipeline from source to alert.

This module covers CISSP exam objective 7.2: conduct logging and monitoring activities. ISC2 tests your ability to design a monitoring strategy — not operate a SIEM console. You need to understand what to log, where to send it, how to protect it, and how to turn raw events into actionable intelligence.


Log Types and Sources

Every system in your environment produces logs. The challenge is not generating logs — it is collecting the right logs, from the right sources, at the right level of detail.

System Logs

Operating system events: boot sequences, service start/stop, hardware errors, kernel messages. On Windows, the System event log. On Linux, syslog and journald. System logs tell you about the health and state of the platform itself.

Security Logs

Authentication events, authorization decisions, privilege escalation, account lockouts, policy changes. These are the primary logs for detecting unauthorized access. Windows Security event log and Linux auth.log are the most frequently referenced sources. Failed login attempts, successful logins at unusual times, and privilege changes are the events that matter most.

Application Logs

Generated by applications: web servers, databases, email systems, custom business applications. Application logs capture transaction details, errors, access patterns, and application-specific events. A database query log showing bulk data exports at 3:00 AM may be the only indicator of an insider threat.

Network Logs

Firewall logs, router logs, proxy logs, DNS query logs, VPN connection logs. Network logs show traffic patterns, blocked connections, allowed traffic, and communication with external addresses. NetFlow records provide metadata about network conversations (source, destination, ports, bytes transferred) without capturing packet content.


Log Management Pipeline

Raw logs are useful only if they reach a central location where they can be analyzed. The log management pipeline has four stages.

  1. Collection — Logs are gathered from sources using agents (installed on endpoints), agentless collection (syslog forwarding, API polling), or network taps. Every source in the environment should be sending logs to a central system. Sources that are not sending logs are blind spots.
  2. Aggregation — Logs from multiple sources are brought together in a centralized repository. This is where normalization happens — converting different log formats into a common schema so that a Windows authentication event and a Linux authentication event can be compared directly.
  3. Correlation — Related events across multiple sources are connected. A failed VPN login from an unusual country, followed by a successful login two minutes later, followed by a large file download from a sensitive share — each event alone may be unremarkable, but the sequence tells a story. Correlation rules and analytics turn individual events into security-relevant findings.
  4. Retention — Logs must be stored for a defined period based on regulatory requirements, organizational policy, and investigation needs. PCI DSS requires one year of audit log retention with the last three months immediately available. HIPAA requires six years. Many organizations default to 90 days of online storage with one year of archive, but the right retention period depends on your regulatory obligations and investigation timelines.
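The normalization step in stage 2 can be sketched in a few lines. This is illustrative only: the field names, the sshd message format, and the target schema below are assumptions, and real products vary by vendor and version.

```python
import re
from datetime import datetime

# Hypothetical raw events -- real field names and formats differ by product.
WINDOWS_EVENT = {
    "EventID": 4625,          # 4625 = failed logon, 4624 = successful logon
    "TargetUserName": "jsmith",
    "IpAddress": "203.0.113.7",
    "TimeCreated": "2024-05-01T03:14:07+00:00",
}
LINUX_LINE = ("2024-05-01T03:14:09+00:00 host1 sshd[812]: "
              "Failed password for jsmith from 203.0.113.7 port 52144 ssh2")

def normalize_windows(evt):
    """Map a Windows Security event into a common schema."""
    return {
        "timestamp": datetime.fromisoformat(evt["TimeCreated"]),
        "user": evt["TargetUserName"],
        "src_ip": evt["IpAddress"],
        "outcome": "failure" if evt["EventID"] == 4625 else "success",
    }

def normalize_linux(line):
    """Map an sshd syslog line into the same schema."""
    m = re.search(r"^(\S+) \S+ sshd\[\d+\]: (Failed|Accepted) password "
                  r"for (\S+) from (\S+)", line)
    ts, result, user, ip = m.groups()
    return {
        "timestamp": datetime.fromisoformat(ts),
        "user": user,
        "src_ip": ip,
        "outcome": "failure" if result == "Failed" else "success",
    }
```

Once both events share a schema, the correlation stage can compare them directly: same user, same source IP, two seconds apart, regardless of which platform produced them.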

SIEM Architecture and Operations

A Security Information and Event Management (SIEM) system is the central platform for the log management pipeline. It collects, normalizes, correlates, and alerts on security events.

A SIEM provides:

  • Real-time correlation — Rules that connect events across sources and time to identify attack patterns
  • Dashboards and visualization — Operational views showing alert volumes, trending events, and system health
  • Alerting — Notifications when correlation rules match, triggering investigation or automated response
  • Search and investigation — The ability to query historical logs when investigating an incident or hunting for threats
  • Reporting — Compliance reports demonstrating log collection, review, and retention

From a management perspective, the SIEM is only as effective as its tuning. A SIEM with poorly written correlation rules produces thousands of false positives, which creates alert fatigue. Analysts stop investigating alerts because most turn out to be noise. Meanwhile, the real attack is buried in a queue of 500 unreviewed alerts.

Effective SIEM management requires ongoing tuning: suppressing known false positives, refining correlation rules based on the threat environment, and validating that critical log sources are still feeding data. A log source that silently stops sending events is one of the most dangerous gaps in any monitoring program.
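A classic correlation rule — many failed logins followed by a success — can be sketched as a small stateful function. This is not any vendor's rule syntax; it assumes events already normalized to a (timestamp, user, outcome) schema and sorted by time, with illustrative threshold and window values.

```python
from datetime import datetime, timedelta

def brute_force_alerts(events, threshold=5, window=timedelta(minutes=10)):
    """Flag any successful login preceded by >= threshold failures
    for the same user within the window. Events must be time-sorted."""
    alerts = []
    failures = {}  # user -> timestamps of recent failed logins
    for e in events:
        # Keep only failures still inside the sliding window.
        recent = [t for t in failures.get(e["user"], [])
                  if e["timestamp"] - t <= window]
        if e["outcome"] == "failure":
            recent.append(e["timestamp"])
            failures[e["user"]] = recent
        else:  # a successful login
            if len(recent) >= threshold:
                alerts.append({"user": e["user"], "time": e["timestamp"],
                               "prior_failures": len(recent)})
            failures[e["user"]] = []  # reset after evaluating the success
    return alerts
```

Tuning in practice means adjusting exactly these knobs — the threshold, the window, and suppression lists for known-noisy accounts — based on observed false-positive rates.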


Continuous Monitoring Strategy

NIST SP 800-137 defines continuous monitoring as maintaining ongoing awareness of information security, vulnerabilities, and threats. This is not just log review — it encompasses the entire process of collecting data, analyzing it, and making decisions based on findings.

A continuous monitoring strategy should define:

  • What metrics and events are monitored
  • How frequently each data source is reviewed
  • Who is responsible for monitoring and response
  • What thresholds trigger investigation or escalation
  • How monitoring effectiveness is measured and reported to management

The exam distinguishes between continuous monitoring (ongoing awareness program) and continuous auditing (ongoing compliance verification). Both contribute to security posture, but they serve different purposes and different audiences.


Network Monitoring

Network monitoring provides visibility into traffic patterns, communications, and potential threats moving through the infrastructure.

  • NetFlow / IPFIX — Metadata about network conversations: source/destination IP, ports, protocol, byte count, duration. Useful for identifying communication patterns, data exfiltration (unusually large outbound transfers), and command-and-control traffic. NetFlow does not capture content — only conversation metadata.
  • Packet capture (PCAP) — Full content capture of network traffic. Provides complete visibility but generates massive storage requirements. Typically deployed selectively — on critical segments, during active investigations, or triggered by IDS alerts.
  • IDS/IPS alerts — Intrusion detection and prevention systems generate alerts based on signature matches (known attack patterns) or anomaly detection (deviations from normal behavior). IDS alerts feed into the SIEM for correlation with other data sources.
  • DNS monitoring — DNS query logs reveal what domains systems are resolving. Connections to known malicious domains, domain generation algorithms (DGA), and DNS tunneling all leave traces in DNS logs.
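As one concrete illustration of DNS monitoring, DGA-style domains can be flagged with a crude entropy heuristic over query logs. The length cutoff and entropy threshold below are illustrative assumptions; production DGA detection uses far richer features (n-gram models, registration data, query timing).

```python
import math
from collections import Counter

def label_entropy(domain):
    """Shannon entropy (bits per character) of the leftmost label."""
    label = domain.lower().split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_dga(domain, min_len=12, threshold=3.5):
    """Crude heuristic: long, high-entropy labels resemble DGA output.
    Thresholds here are assumptions, not tested operational values."""
    label = domain.split(".")[0]
    return len(label) >= min_len and label_entropy(domain) > threshold
```

Human-chosen names repeat letters and stay short, so they score low; algorithmically generated labels tend toward long, near-uniform character distributions.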

User Activity and Privileged User Monitoring

Not all threats come from outside the network. Insider threats — whether malicious or accidental — require monitoring of user behavior.

User activity monitoring captures what users do within systems: files accessed, applications launched, websites visited, data downloaded. The scope of monitoring must be balanced against privacy requirements and communicated to users through acceptable use policies.

Privileged user monitoring requires special attention because privileged accounts (administrators, root, service accounts) can bypass security controls and access any data in the environment. Monitoring privileged users includes:

  • Session recording — Capturing privileged session activity (commands executed, screens viewed) for later review
  • Command logging — Recording all commands executed by administrators on critical systems
  • Privileged access management (PAM) — Tools that broker access to privileged accounts, enforce approval workflows, and log all privileged activity
  • Behavioral analytics — Establishing baselines for normal privileged activity and alerting on deviations (an admin accessing systems they have never touched before, or executing commands at unusual hours)
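The last bullet — alerting when an admin touches a system they have never used — can be sketched as a first-access check against a learned baseline. The event shape and baseline structure here are assumptions for illustration, not a PAM product's API.

```python
def new_host_alerts(events, baseline):
    """Alert when a privileged user accesses a host absent from their
    baseline. `baseline` maps user -> set of hosts seen during a prior
    learning period; each event is {"user": ..., "host": ...}."""
    alerts = []
    for e in events:
        seen = baseline.setdefault(e["user"], set())
        if e["host"] not in seen:
            alerts.append(e)
        seen.add(e["host"])  # fold the new host into the baseline
    return alerts
```

In practice, first-access alerts are triaged rather than blocked, since legitimate duties change; the value is the audit trail and the prompt to verify.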

Log Integrity Protection

Logs are only trustworthy if they have not been modified. An attacker who gains privileged access to a system can alter or delete logs to cover their tracks. Log integrity protection ensures that tampering is detectable.

  • Centralized logging — Forwarding logs to a separate, hardened system that the attacker cannot access with the compromised credentials. If the only copy of a log file lives on the compromised system, the attacker controls the evidence.
  • Write-once storage — Storing logs on media or platforms that prevent modification after writing (WORM storage, immutable cloud storage buckets).
  • Cryptographic verification — Hashing log entries or log files and storing the hashes separately. If a log entry is modified, the hash mismatch reveals the tampering.
  • Time synchronization — All systems must use a common time source (NTP). Without synchronized clocks, correlating events across systems is unreliable, and timeline reconstruction during investigations becomes ambiguous.
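Cryptographic verification is often implemented as a hash chain: each digest covers the entry plus the previous digest, so altering any entry invalidates every later link. A minimal sketch, assuming the digests are stored on a separate, protected system:

```python
import hashlib

def chain_logs(entries, seed="genesis"):
    """Return a hash chain over log entries: digest[i] = SHA-256 of
    (digest[i-1] + entry[i]). Tampering with any entry changes every
    digest from that point forward."""
    digests = []
    prev = hashlib.sha256(seed.encode()).hexdigest()
    for entry in entries:
        prev = hashlib.sha256((prev + entry).encode()).hexdigest()
        digests.append(prev)
    return digests

def verify_chain(entries, digests, seed="genesis"):
    """Recompute the chain and compare against the stored digests."""
    return chain_logs(entries, seed) == digests
```

The chain only detects tampering; it does not prevent it. That is why it pairs with centralized forwarding and write-once storage rather than replacing them.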

Monitoring Baselines and Anomaly Detection

You cannot detect abnormal behavior without first defining what normal looks like. A monitoring baseline captures typical patterns: normal login times, typical network traffic volumes, standard data access patterns, usual geographic access locations.

Once baselines are established, anomaly detection identifies deviations:

  • A user who typically accesses 50 files per day suddenly accesses 5,000
  • Network traffic to a country where the organization has no business presence
  • A service account authenticating interactively for the first time
  • Database queries returning ten times the normal result set size

Baselines must be periodically updated to reflect legitimate changes in the environment. A new business relationship with a company in another country will change the normal traffic pattern. If baselines are not maintained, the monitoring system either generates false positives for legitimate activity or fails to detect genuine anomalies against an outdated model of normal.
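The first bullet above (50 files per day versus 5,000) reduces to a simple statistical threshold. Real behavioral-analytics products use much richer models, but the principle — compare an observation against a historical baseline — is the same; the three-sigma cutoff below is an illustrative assumption.

```python
import statistics

def is_anomalous(history, observed, sigmas=3.0):
    """Flag `observed` if it exceeds the baseline mean by more than
    `sigmas` standard deviations. `history` is a list of past daily
    counts (e.g., files accessed per day) for this user."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return observed > mean + sigmas * stdev
```

Refreshing the baseline is simply re-deriving `history` over a recent window — which is why stale baselines drift into either false positives or missed detections, as noted above.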


Pattern Recognition

Logging and monitoring questions on the CISSP follow these structures:

  • Missing log source — If a scenario describes a blind spot in detection, the root cause is a system or source not feeding logs to the central monitoring platform.
  • Alert fatigue — Too many false positives lead analysts to ignore or deprioritize alerts. The answer is tuning, not adding more analysts.
  • Log tampering — When an attacker covers their tracks, the answer involves centralized logging, write-once storage, or cryptographic integrity verification.
  • Time correlation — When events from different systems cannot be matched, the answer is NTP/time synchronization.

Trap Patterns

Watch for these wrong answers:

  • “Log everything at maximum verbosity” — Maximum verbosity creates storage problems and processing bottlenecks, and makes relevant events harder to find. Logging should be calibrated to security needs and regulatory requirements.
  • “The SIEM will detect all attacks automatically” — A SIEM only detects what its correlation rules and data sources allow. It requires ongoing tuning, threat intelligence updates, and human analysis.
  • “Monitoring privileged users violates their trust” — Privileged user monitoring is a security requirement, not a trust issue. It protects both the organization and the administrators (who can demonstrate they acted appropriately during an incident).
  • “Logs on the local system are sufficient” — Local logs can be modified or deleted by an attacker who compromises the system. Centralized logging is a baseline requirement for trustworthy evidence.

Scenario Practice


Question 1

A security operations center receives an average of 3,000 alerts per day. Analysts investigate roughly 200 and close the rest without review due to staffing constraints. A post-incident review reveals that a successful data breach triggered an alert two weeks before detection, but the alert was never investigated.

What is the MOST effective improvement?

A. Hire additional SOC analysts to investigate all 3,000 daily alerts
B. Tune SIEM correlation rules to reduce false positives and prioritize alerts by risk severity
C. Implement automated blocking for all alerts that are not investigated within one hour
D. Reduce the number of log sources feeding the SIEM to decrease alert volume

Answer & reasoning

Correct: B

Alert fatigue is the problem. When analysts cannot investigate the majority of alerts, the solution is reducing noise through better correlation rules and prioritizing the remaining alerts by business risk. Hiring more analysts (A) does not fix poorly tuned rules. Automatic blocking (C) will disrupt legitimate activity. Reducing log sources (D) creates blind spots.


Question 2

During an investigation, the forensic team discovers that critical security logs on a compromised web server show no activity during a 4-hour window that coincides with the estimated time of the attack. The SIEM shows the server stopped sending logs during that same window.

What is the MOST likely explanation?

A. The web server experienced a hardware failure that prevented logging
B. The attacker disabled or tampered with logging on the compromised server
C. The SIEM experienced a collection failure and dropped the logs
D. The server was not configured to log activity during that maintenance window

Answer & reasoning

Correct: B

A gap in logs that precisely coincides with an attack window strongly suggests the attacker disabled or tampered with logging. This is a standard attacker technique to cover tracks. The fact that both local logs and SIEM reception stopped simultaneously points to the source being manipulated rather than a SIEM collection failure. This scenario demonstrates why centralized, write-once logging is essential — if the attacker cannot reach the central log store, the forwarded copies remain intact even if local logs are destroyed.


Question 3

An organization is designing its SIEM deployment. The security architect proposes collecting logs from firewalls, IDS, and Windows domain controllers. The CISO asks what additional log source would provide the greatest improvement in detecting insider threats.

What should the architect recommend?

A. Additional firewall logs from the DMZ segment
B. User activity monitoring including file access, email, and data transfer logs from business applications
C. Physical access control system logs from building entry points
D. Hardware health monitoring logs from server management interfaces

Answer & reasoning

Correct: B

Insider threats are best detected through user behavior analytics: what data users access, what they download, what they email externally, and how those patterns compare to their normal baseline. Firewalls and IDS are primarily perimeter-focused. Physical access logs (C) can correlate location to activity but do not reveal what the insider does on the network. User activity monitoring from business applications fills the gap between network-level visibility and user-level behavior.


Key Takeaway

Think of logging and monitoring as a supply chain. Raw logs are the raw materials. The SIEM is the factory that processes them. Correlation rules are the quality control. Alerts are the finished product delivered to analysts. If any link in that chain fails — a source stops sending logs, normalization drops critical fields, correlation rules are outdated, or analysts are overwhelmed — the entire monitoring capability degrades. Your job as a security manager is to design, maintain, and continuously validate every link in that chain.

Next: Module 45: Configuration Management