Domain 6: Security Assessment and Testing — Module 40 of 84

Security Process Data Collection


You Cannot Assess What You Do Not Collect

Security testing generates findings. Security operations generate logs. Compliance programs generate evidence. But none of it matters if the data is not collected consistently, stored reliably, and organized in a way that supports analysis and audit.

This module covers CISSP exam objective 6.3: collect security process data. The exam tests your understanding of what data to collect, how to collect it, and how to preserve its integrity for both operational and legal purposes. The managerial perspective here is about building the collection infrastructure — not configuring individual log sources.


Log Management and Aggregation

Logs are the raw material of security assessment. Every system generates logs, but scattered logs across hundreds of systems are almost useless for analysis.

Effective log management requires:

  • Centralized collection — Logs from servers, network devices, applications, cloud services, and endpoints flow to a central repository. Without centralization, correlating events across systems requires manual effort that does not scale.
  • Normalization — Different systems produce logs in different formats. Normalization converts them into a consistent format so they can be searched, compared, and correlated. A Windows event log and a Linux syslog entry describing the same type of event should be queryable in the same way.
  • Time synchronization — All log sources must use synchronized time (NTP). Without consistent timestamps, correlating events across systems becomes unreliable. An incident timeline reconstructed from unsynchronized logs can be misleading or inadmissible.
  • Integrity protection — Logs must be protected from tampering. If an attacker can modify or delete logs, they can cover their tracks. Write-once storage, hash chains, and centralized collection to a separate, hardened system all protect log integrity.
  • Retention policy — Logs must be retained long enough to support investigations and compliance requirements. Many regulations specify minimum retention periods. The policy should define retention based on the data type, regulatory requirements, and organizational risk.
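One way to protect log integrity is a hash chain: each record's hash covers both the entry and the previous record's hash, so altering any entry invalidates every later link. The sketch below is a minimal illustration of the idea, not any specific product's implementation; the log entries are invented.

```python
import hashlib

def chain_logs(entries):
    """Build a hash chain: each record's hash covers the entry text
    plus the previous hash, so tampering breaks every later link."""
    chained = []
    prev_hash = "0" * 64  # genesis value for the first link
    for entry in entries:
        digest = hashlib.sha256((prev_hash + entry).encode()).hexdigest()
        chained.append({"entry": entry, "hash": digest})
        prev_hash = digest
    return chained

def verify_chain(chained):
    """Recompute every hash from the start; False if any link was altered."""
    prev_hash = "0" * 64
    for record in chained:
        expected = hashlib.sha256((prev_hash + record["entry"]).encode()).hexdigest()
        if expected != record["hash"]:
            return False
        prev_hash = expected
    return True

chain = chain_logs(["2024-05-01T10:00Z login alice", "2024-05-01T10:05Z sudo alice"])
print(verify_chain(chain))   # -> True
chain[0]["entry"] = "2024-05-01T10:00Z login mallory"   # simulate tampering
print(verify_chain(chain))   # -> False
```

In practice the chain would be anchored by periodically writing the latest hash to write-once storage or a separate hardened system, so an attacker cannot simply rebuild the chain after editing it.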

SIEM Systems

A Security Information and Event Management (SIEM) system is the central platform for log aggregation, correlation, alerting, and investigation.

SIEM provides value through:

  • Correlation — Connecting related events across different systems. A failed login on one server, followed by a successful login on another, followed by a data transfer to an external IP — each event alone may be unremarkable, but the sequence tells a story.
  • Alerting — Generating notifications when predefined conditions are met. Alert rules should be tuned to minimize false positives without missing true incidents. An alert that fires 500 times daily trains analysts to ignore it.
  • Dashboards — Providing real-time visibility into security posture. Dashboards show event volumes, alert trends, response times, and other operational metrics.
  • Investigation support — When an incident occurs, the SIEM provides the searchable data needed to reconstruct the timeline, identify affected systems, and determine the scope of impact.
  • Compliance reporting — Many regulations require evidence of security monitoring. SIEM reports demonstrate that the organization actively monitors its environment.
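The correlation idea above can be sketched in a few lines: flag a user whose failed login, successful login, and outbound transfer all fall within one time window. The event records and field names here are illustrative assumptions, not any vendor's schema.

```python
from datetime import datetime, timedelta

# Hypothetical normalized events; the schema is invented for illustration.
events = [
    {"time": datetime(2024, 5, 1, 2, 10), "type": "login_failed",      "user": "svc_db"},
    {"time": datetime(2024, 5, 1, 2, 12), "type": "login_success",     "user": "svc_db"},
    {"time": datetime(2024, 5, 1, 2, 20), "type": "outbound_transfer", "user": "svc_db"},
]

def correlate(events, window=timedelta(minutes=30)):
    """Alert when one user's failed login, successful login, and
    outbound transfer all occur within a single sliding window."""
    alerts = []
    by_user = {}
    for e in sorted(events, key=lambda ev: ev["time"]):
        seq = by_user.setdefault(e["user"], [])
        seq.append(e)
        recent = {x["type"] for x in seq if e["time"] - x["time"] <= window}
        if {"login_failed", "login_success", "outbound_transfer"} <= recent:
            alerts.append((e["user"], e["time"]))
    return alerts

print(correlate(events))  # one alert fires for svc_db at the transfer event
```

Each event alone is unremarkable; only the correlated sequence produces an alert — which is precisely the value a SIEM adds over isolated log files.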

The exam tests SIEM from a governance and program perspective: why you need one, what it should collect, and how it supports assessment and audit — not how to write correlation rules or configure parsers.


Key Performance Indicators and Key Risk Indicators

Metrics are how you measure whether the security program is working. Two categories serve different purposes.

Key Performance Indicators (KPIs)

KPIs measure how well security processes are operating. They answer “are we executing our program effectively?”

  • Mean time to detect (MTTD) and mean time to respond (MTTR) to security incidents
  • Percentage of systems patched within the defined SLA
  • Vulnerability scan coverage (percentage of assets scanned on schedule)
  • Percentage of access reviews completed on time
  • Security awareness training completion rates
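MTTD and MTTR are simple averages over incident timestamps. A minimal sketch, using invented incident records:

```python
from datetime import datetime

# Illustrative incident records; the timestamps are made up.
incidents = [
    {"occurred": datetime(2024, 4, 1, 8, 0),  "detected": datetime(2024, 4, 1, 9, 30),
     "resolved": datetime(2024, 4, 1, 14, 0)},
    {"occurred": datetime(2024, 4, 3, 10, 0), "detected": datetime(2024, 4, 3, 10, 30),
     "resolved": datetime(2024, 4, 3, 12, 0)},
]

def mean_hours(incidents, start, end):
    """Average elapsed hours between two timestamped milestones."""
    deltas = [(i[end] - i[start]).total_seconds() / 3600 for i in incidents]
    return sum(deltas) / len(deltas)

mttd = mean_hours(incidents, "occurred", "detected")   # mean time to detect
mttr = mean_hours(incidents, "detected", "resolved")   # mean time to respond
print(f"MTTD: {mttd:.2f} h, MTTR: {mttr:.2f} h")       # MTTD: 1.00 h, MTTR: 3.00 h
```

The computation is trivial; the hard part is collecting accurate "occurred" and "detected" timestamps, which depends on the log management foundation described earlier.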

Key Risk Indicators (KRIs)

KRIs measure changes in risk exposure. They answer “is our risk level changing, and in which direction?”

  • Number of critical vulnerabilities remaining open past the remediation deadline
  • Number of privileged accounts without multi-factor authentication
  • Volume of data transferred to external destinations
  • Rate of failed authentication attempts across the enterprise
  • Number of third-party vendors with overdue security assessments
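The first KRI in the list — critical vulnerabilities open past their deadline — is just a filter over vulnerability records. A sketch with invented data:

```python
from datetime import date

# Hypothetical vulnerability records; IDs and dates are illustrative.
vulns = [
    {"id": "V-101", "severity": "critical", "deadline": date(2024, 4, 15), "closed": None},
    {"id": "V-102", "severity": "critical", "deadline": date(2024, 5, 20), "closed": None},
    {"id": "V-103", "severity": "high",     "deadline": date(2024, 4, 1),  "closed": None},
]

def overdue_criticals(vulns, today):
    """KRI: critical vulnerabilities still open past their remediation deadline."""
    return [v["id"] for v in vulns
            if v["severity"] == "critical"
            and v["closed"] is None
            and v["deadline"] < today]

print(overdue_criticals(vulns, today=date(2024, 5, 1)))  # -> ['V-101']
```

Tracked over time, the count of overdue criticals shows the risk trajectory: a rising number signals growing exposure even when patching KPIs look healthy.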

The distinction matters for the exam: KPIs tell you about process effectiveness, KRIs tell you about risk trajectory. A security manager reporting to executive leadership needs both.


Account Management Data

Account management generates data that supports both operational security and audit evidence.

  • Account provisioning and deprovisioning records — When were accounts created and by whom? Were they authorized through proper channels? Were terminated employee accounts disabled promptly?
  • Access review results — Periodic access reviews verify that users still need the access they have. The results — who reviewed, what decisions were made, and when — are audit evidence.
  • Privileged account activity — Privileged accounts (administrators, root, service accounts) require enhanced monitoring: who accessed what, when, and which actions were performed.
  • Authentication anomalies — Failed login patterns, impossible travel (login from two distant locations within minutes), and off-hours access attempts all warrant collection and analysis.
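"Impossible travel" detection reduces to a speed check: the great-circle distance between two login locations divided by the time between them. The sketch below uses the standard haversine formula; the login records and the 900 km/h threshold (roughly airliner speed) are illustrative assumptions.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def km_between(a, b):
    """Great-circle (haversine) distance in km between (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km = mean Earth radius

def impossible_travel(login1, login2, max_kmh=900):
    """Flag two logins whose implied travel speed exceeds a plausible flight."""
    hours = abs((login2["time"] - login1["time"]).total_seconds()) / 3600
    dist = km_between(login1["geo"], login2["geo"])
    return hours > 0 and dist / hours > max_kmh

ny  = {"time": datetime(2024, 5, 1, 9, 0),  "geo": (40.71, -74.01)}   # New York
tok = {"time": datetime(2024, 5, 1, 9, 30), "geo": (35.68, 139.69)}   # Tokyo
print(impossible_travel(ny, tok))  # -> True: ~10,800 km in 30 minutes
```

Note that the check is only as good as the timestamps and geolocation data behind it — another reason time synchronization across log sources matters.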

Backup Verification and Training Metrics

Backup Verification Data

Backups that are not verified are assumptions, not controls. Collection should include:

  • Backup completion logs (success, failure, partial)
  • Restore test results and dates
  • Recovery time measurements against RTO targets
  • Data integrity verification (checksums, hash comparisons)

An auditor asking “prove your backups work” needs more than backup job logs. They need evidence of successful restoration testing.
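Restore-test integrity verification can be as simple as comparing a hash of the source data against a hash of the restored copy. A minimal sketch that simulates a backup-and-restore cycle with temporary files (the file names and contents are invented):

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path):
    """Hash a file in 64 KiB blocks so large backups fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

def verify_restore(source, restored):
    """A restore test passes only if the restored file's hash matches the source's."""
    return sha256_of(source) == sha256_of(restored)

# Simulate backing up and restoring a file, then verify integrity.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "db_export.dat")
with open(src, "wb") as f:
    f.write(b"payroll records " * 1000)
restored = os.path.join(workdir, "db_export_restored.dat")
shutil.copy(src, restored)            # stand-in for the backup + restore path
print(verify_restore(src, restored))  # -> True
shutil.rmtree(workdir)
```

The hash comparison, together with the date and RTO measurement of the restore test, is exactly the kind of artifact an auditor accepts as evidence of recoverability.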

Security Training Metrics

Training data demonstrates that the organization is building human security capabilities:

  • Completion rates by department and role
  • Assessment scores and pass/fail rates
  • Phishing simulation click rates and reporting rates
  • Time between training completion and measurable behavior change

Completion rate alone is a weak metric. Effective training programs collect behavioral data that shows whether training changed what people actually do.


Evidence Collection and Chain of Custody

When security data may be needed for legal proceedings, investigations, or regulatory inquiries, collection procedures must preserve the data’s integrity and admissibility.

Evidence Collection Procedures

  • Identification — Determine what data is relevant to the inquiry and where it exists.
  • Preservation — Prevent the data from being altered, deleted, or overwritten. This may involve imaging drives, locking log files, or issuing legal hold notices.
  • Collection — Gather the evidence using documented, repeatable procedures. The method used to collect the data matters as much as the data itself.
  • Documentation — Record who collected what, when, where, and how. Every step is documented.

Chain of Custody

Chain of custody is the documented trail showing who had possession of evidence at every point from collection to presentation. A break in the chain — any period where evidence was unaccounted for — can render it inadmissible.

Chain of custody records should document:

  • Who collected the evidence and when
  • How it was transported and stored
  • Every transfer of custody between individuals
  • How integrity was verified at each transfer (hashes, seals)
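A custody log with hash verification at each transfer can be sketched as follows. This is a conceptual illustration under assumed field names, not a forensic tool: each transfer re-hashes the evidence and refuses to record the handoff if the hash no longer matches the previous entry.

```python
import hashlib
from datetime import datetime, timezone

def evidence_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_transfer(log, data, giver, receiver):
    """Append a custody entry only if the evidence hash still matches
    the last recorded hash -- a mismatch means the chain is broken."""
    digest = evidence_hash(data)
    if log and log[-1]["hash"] != digest:
        raise ValueError("integrity check failed: evidence altered in custody")
    log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "from": giver,
        "to": receiver,
        "hash": digest,
    })
    return log

evidence = b"disk image of server FIN-DB-01"  # illustrative evidence content
custody = record_transfer([], evidence, giver="collector", receiver="evidence locker")
custody = record_transfer(custody, evidence, giver="evidence locker", receiver="forensic analyst")
print(len(custody), "custody entries, hashes consistent")
```

The structure mirrors the four record requirements above: who, when, each transfer, and an integrity check at every handoff.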

Sampling Methods for Large Datasets

When the volume of data is too large to review in its entirety, auditors and assessors use sampling. The sampling method affects the reliability of conclusions.

  • Statistical sampling — Items are selected using mathematical methods that allow the auditor to project conclusions about the entire population with a defined confidence level. This is the most defensible approach.
  • Judgmental (non-statistical) sampling — The auditor selects items based on professional judgment, targeting areas of higher risk or known concern. This is targeted but cannot be statistically projected to the full population.
  • Discovery sampling — Used when the expected error rate is near zero. If even one exception is found in the sample, it indicates a systemic problem. This method is useful for testing controls that should have zero failures, like segregation of duties.

The exam tests whether you understand when each sampling method is appropriate and the limitations of each. Statistical sampling supports quantified conclusions. Judgmental sampling is efficient but subjective. Discovery sampling is best for zero-tolerance controls.
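For discovery sampling, the required sample size follows from a standard probability argument: if violations occur at rate p, a sample of n items misses them all with probability (1 − p)^n, so n is chosen to push that probability below 1 − confidence. A short sketch (the 0.5% "critical rate" is an assumed audit parameter):

```python
from math import ceil, log

def discovery_sample_size(confidence: float, critical_rate: float) -> int:
    """Smallest n such that a sample of n contains at least one violation
    with the given confidence, assuming violations occur at critical_rate:
    solve (1 - critical_rate)**n <= 1 - confidence for n."""
    return ceil(log(1 - confidence) / log(1 - critical_rate))

# To be 95% confident of catching at least one violation if 0.5% of the
# population (e.g. 250 of 50,000 accounts) violate segregation of duties:
print(discovery_sample_size(0.95, 0.005))  # -> 598
```

Note the result depends only on the assumed violation rate and confidence level, not on the population size, which is why discovery sampling scales to very large account populations.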


Pattern Recognition

Data collection questions on the CISSP follow these patterns:

  • Missing collection — A scenario describes a security failure where the root cause is that relevant data was not being collected. The answer points to the missing data source.
  • KPI vs. KRI selection — The question describes a metric and you determine whether it measures process performance or risk exposure.
  • Evidence integrity — A scenario describes evidence handling where a step was skipped. The answer identifies what was violated (chain of custody, preservation, documentation).
  • Sampling method selection — The question describes an audit scenario and the right sampling method depends on the data volume, expected error rate, and required confidence level.

Trap Patterns

Watch for these wrong answers:

  • “Collect everything and sort it out later” — Unfocused collection overwhelms storage and analysis capacity. Collection should be intentional and aligned with assessment objectives and compliance requirements.
  • “Training completion rates prove the program is effective” — Completion measures participation, not behavior change. An effective metric would show reduced incident rates or improved phishing simulation performance.
  • “Backup logs prove recoverability” — A backup job completing successfully proves data was written. Only a restore test proves data can be recovered. Logs without restoration evidence are incomplete assurance.
  • “Judgmental sampling is as defensible as statistical sampling” — Judgmental sampling is targeted but subjective. Only statistical sampling supports quantified confidence levels projected across the full population.

Scenario Practice


Question 1

A security team investigates a data breach and discovers that the attacker deleted access logs on the compromised server before exfiltrating data. The on-server logs are the only record of authentication activity. No centralized log collection was in place.

What process failure enabled the attacker to cover their tracks?

A. The server operating system did not generate detailed enough logs
B. Logs were not forwarded to a centralized, protected log management system in real time
C. The attacker used an encrypted connection that evaded detection
D. The incident response team did not respond quickly enough

Answer & reasoning

Correct: B

When logs exist only on the local system, an attacker with administrative access can delete them. Centralized log collection copies logs to a separate, protected system in real time or near-real time. Even if the attacker deletes local logs, the centralized copies remain intact. This is a fundamental log management architecture requirement.


Question 2

A CISO presents quarterly metrics to the board showing that 98% of employees completed security awareness training and the organization patched 95% of systems within the defined SLA. Despite these numbers, phishing incidents increased 30% and two critical breaches occurred through unpatched systems.

What is the MOST likely issue with the CISO’s reporting approach?

A. The metrics are fabricated
B. The board does not understand security metrics
C. The CISO is reporting KPIs (process activity) without corresponding KRIs (risk outcomes) that would reveal the actual security posture
D. The training program and patching process need to be outsourced

Answer & reasoning

Correct: C

KPIs showing high completion and patching rates measure process execution. But KRIs — such as phishing click rates, incidents from unpatched systems, or critical vulnerability age — would reveal that the processes are not producing the intended risk reduction. Reporting only KPIs creates a misleadingly positive picture. Both KPIs and KRIs are needed for accurate security posture reporting.


Question 3

An internal auditor needs to verify that the organization’s segregation of duties controls are working correctly across 50,000 user accounts. The expected violation rate should be zero — no user should have both the ability to create and approve financial transactions.

Which sampling method is MOST appropriate?

A. Statistical sampling with a 95% confidence level
B. Judgmental sampling focused on finance department accounts
C. Discovery sampling, where finding even one violation indicates a control failure
D. No sampling is needed — review all 50,000 accounts manually

Answer & reasoning

Correct: C

Discovery sampling is designed for controls where the expected error rate is zero. For segregation of duties, even a single violation represents a control failure. Discovery sampling efficiently tests this by examining a sample sized to detect at least one error if any exist in the population. Manual review of all accounts is ideal but impractical at this scale. Statistical sampling is appropriate when some error rate is expected.


Key Takeaway

Data collection is the foundation beneath every assessment, audit, and investigation. Without centralized logs, you cannot investigate incidents. Without KPIs and KRIs, you cannot measure program effectiveness or risk trajectory. Without proper evidence procedures, your findings may be inadmissible. Without the right sampling method, your audit conclusions may be unreliable. The exam treats data collection as a governance discipline: know what you need, collect it intentionally, protect its integrity, and retain it according to policy.

Next Module: Module 41 — Test Output Analysis and Reporting