Domain 6: Security Assessment and Testing Module 38 of 84

Assessment and Audit Strategy Design

Estimated reading time: 10–12 minutes

The Difference Between Testing and Trusting

Every organization claims its security controls work. Assessment and audit are how you prove it — or discover they do not. The distinction matters: a security team that only builds controls without systematically verifying them is operating on assumption, not evidence.

An assessment program that only runs when auditors arrive is not an assessment program. It is a response to external pressure.

This module covers CISSP exam objective 6.1: design and validate assessment, test, and audit strategies. ISC2 wants you to think like someone who builds the assessment program itself — choosing the right types of assessments, setting appropriate frequency, defining scope, and aligning the entire effort to organizational risk.


Assessment Types and Their Purposes

Not all assessments answer the same question. Each type targets a different layer of assurance.

  • Vulnerability assessment — Identifies known weaknesses in systems, configurations, and software. It answers “what could be exploited?” but does not attempt exploitation. Think of it as a broad scan for potential problems.
  • Penetration testing — Goes beyond identification to attempt actual exploitation. It answers “can an attacker actually get in, and how far?” Pen tests validate whether vulnerabilities are genuinely exploitable under real conditions.
  • Security audit — Measures controls against a defined standard or policy. Audits answer “are we doing what we said we would do?” They are about conformance, not discovery.
  • Compliance assessment — A targeted evaluation against regulatory or contractual requirements. It answers “do we meet specific legal or industry mandates?” PCI DSS assessments and HIPAA readiness reviews fall here.

The exam expects you to select the right assessment type for a given scenario. A vulnerability assessment is appropriate when you need breadth. A penetration test is appropriate when you need depth and proof of exploitability. An audit is appropriate when you need to verify adherence to a standard.


Assessment Planning and Scoping

An assessment without a clear scope is an exercise in wasted effort. Scoping defines what is in bounds, what is excluded, and what constraints apply.

Planning considerations include:

  • Objectives — What question is the assessment trying to answer? Compliance verification, control effectiveness, threat validation, or risk identification each require different approaches.
  • Scope boundaries — Which systems, networks, applications, and business processes are included? Excluding critical systems from scope does not reduce their risk.
  • Timing and scheduling — Will the assessment run during business hours or maintenance windows? Production testing carries different risk than testing against staging environments.
  • Resource requirements — Internal teams, external firms, specialized tools, and management time all need to be allocated before the assessment begins.
  • Communication plan — Who is notified before, during, and after the assessment? Surprising the SOC with an unannounced pen test wastes everyone’s time and creates unnecessary incident response overhead.

Internal vs. External Assessments

Internal and external assessments serve different purposes, and a mature program includes both.

Internal assessments are conducted by the organization’s own security team or internal audit function. They provide continuous assurance and deep organizational context. Internal assessors understand the business, have access to documentation, and can follow up on findings without contractual limitations. The weakness: they may lack independence or face political pressure to soften findings.

External assessments are performed by independent third parties. They bring fresh perspective, specialized expertise, and the independence that stakeholders and regulators expect. External assessors are less likely to accept “that is just how we do it” as a satisfactory answer. The tradeoff: they cost more and require time to understand the organization’s environment.

The exam frequently tests whether you understand when external independence is required versus when internal assessment is sufficient.


First-, Second-, and Third-Party Audits

Audit relationships map to who is doing the auditing and why.

  • First-party audit — The organization audits itself. Internal audit teams evaluate whether controls meet the organization’s own policies and standards. These provide operational value but limited external credibility.
  • Second-party audit — A customer or business partner audits the organization. This happens when a client needs assurance that its vendor meets contractual security obligations. Right-to-audit clauses in contracts enable second-party audits.
  • Third-party audit — An independent, accredited firm conducts the audit against a recognized standard. ISO 27001 certification audits and SOC examinations are third-party audits. They carry the most external credibility because of the auditor’s independence.

Audit Standards and Frameworks

Different audit standards answer different questions for different audiences.

  • ISO 27001 — An international standard for information security management systems (ISMS). Certification requires a third-party audit against the standard’s requirements. It covers governance, risk management, and control implementation. Surveillance audits occur annually, with full recertification every three years.
  • SOC 1 (SSAE 18) — Examines controls relevant to financial reporting. If your service affects a customer’s financial statements, SOC 1 is the appropriate report. This reporting was originally performed under SAS 70 (and later SSAE 16); it is now governed by SSAE 18.
  • SOC 2 — Evaluates controls against the Trust Services Criteria: security, availability, processing integrity, confidentiality, and privacy. Type I examines control design at a point in time. Type II examines control operating effectiveness over a period (usually 6–12 months). SOC 2 Type II provides stronger assurance because it covers operational performance, not just design intent.
  • SOC 3 — A public-facing summary of a SOC 2 examination. It provides general assurance without the detail of a SOC 2 report. Suitable for marketing or public trust purposes, but insufficient for due diligence.

The exam will test whether you can match the right audit standard to a given scenario. Financial reporting controls point to SOC 1. Vendor security assurance points to SOC 2 Type II. Public trust without detail points to SOC 3.
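That matching logic can be captured as a small decision helper. This is a study aid only, assuming a simplified three-question model (which concern, how much detail, design vs. operating effectiveness); the function name and parameters are illustrative, not an official mapping.

```python
# Hypothetical study aid: pick a SOC report type from an assurance need.
# The decision rules mirror the bullets above; names are illustrative.

def choose_soc_report(concern: str, needs_detail: bool, over_period: bool) -> str:
    """Select a SOC report.

    concern      -- "financial_reporting" (SOC 1) or "security" (Trust Services Criteria)
    needs_detail -- does the audience need the full report for due diligence?
    over_period  -- is evidence of operating effectiveness over time required,
                    rather than design at a point in time?
    """
    if concern == "financial_reporting":
        return "SOC 1 Type II" if over_period else "SOC 1 Type I"
    if not needs_detail:
        return "SOC 3"  # public-facing summary, insufficient for due diligence
    return "SOC 2 Type II" if over_period else "SOC 2 Type I"

# A vendor's clients want detailed evidence that security controls operated
# effectively over the past year:
print(choose_soc_report("security", needs_detail=True, over_period=True))
# → SOC 2 Type II
```

Walking exam scenarios through a decision tree like this makes the trap answers (Type I when effectiveness is needed, SOC 3 when detail is needed) easier to spot.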


Assessment Frequency and Triggers

Fixed schedules are a baseline, not a strategy. Assessment frequency should be driven by risk, not by calendar alone.

  • Scheduled assessments — Annual penetration tests, quarterly vulnerability scans, and periodic internal audits form the recurring baseline. Regulatory requirements often mandate minimum frequencies.
  • Trigger-based assessments — Significant changes should prompt additional assessment. These triggers include major infrastructure changes, new application deployments, mergers and acquisitions, significant incidents, regulatory changes, and changes in threat intelligence.
  • Continuous assessment — Automated scanning, continuous monitoring, and real-time control validation supplement scheduled and trigger-based activities. This is the direction the industry is moving, but it does not replace the depth of periodic manual assessments.

A common exam pattern: a question describes a significant change followed by a breach, and the root cause is that no assessment was triggered by the change.
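The schedule-plus-trigger logic above can be sketched as a simple check. The trigger names and the 365-day default are assumptions for illustration; real programs would pull both from policy.

```python
# Minimal sketch of trigger-based assessment logic: an assessment is due when
# either the recurring schedule has lapsed or a risk-significant change occurs.
# Event names are illustrative, taken from the trigger list above.

ASSESSMENT_TRIGGERS = {
    "major_infrastructure_change",
    "new_application_deployment",
    "merger_or_acquisition",
    "significant_incident",
    "regulatory_change",
    "threat_intelligence_change",
}

def assessment_required(event: str, days_since_last_test: int,
                        max_interval_days: int = 365) -> bool:
    """True if an out-of-cycle or scheduled assessment should start now."""
    return event in ASSESSMENT_TRIGGERS or days_since_last_test >= max_interval_days

# Cloud migration four months after the last annual pen test:
print(assessment_required("major_infrastructure_change", days_since_last_test=120))
# → True

# Routine patching one month after the last test does not trigger anything:
print(assessment_required("routine_patching", days_since_last_test=30))
# → False
```

The point of the sketch is the `or`: calendar age alone is never the whole condition.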


Risk-Based Assessment Prioritization

No organization can assess everything with equal intensity. Risk-based prioritization directs the most thorough assessment activities toward the highest-risk areas.

Prioritization factors include:

  • Data sensitivity — Systems that process regulated or high-classification data warrant more frequent and thorough assessment.
  • Exposure — Internet-facing systems, third-party integrations, and remote access points have higher exposure and deserve more attention.
  • Change velocity — Systems that change frequently accumulate configuration drift and new attack surface faster than stable systems.
  • Historical findings — Systems with a track record of vulnerabilities or failed controls need more oversight, not less.
  • Business criticality — Revenue-generating systems and systems supporting essential operations merit priority assessment.
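The five factors above can be combined into a weighted score for ranking systems. The weights and the 0–5 rating scale below are assumptions for demonstration, not values from any standard; real programs calibrate these to their own risk appetite.

```python
# Illustrative weighted-scoring sketch of the prioritization factors above.
# Weights are assumed for demonstration and sum to 1.0.

WEIGHTS = {
    "data_sensitivity":     0.30,
    "exposure":             0.25,
    "business_criticality": 0.20,
    "change_velocity":      0.15,
    "historical_findings":  0.10,
}

def risk_score(ratings: dict) -> float:
    """Weighted sum of 0-5 factor ratings; higher means assess sooner and deeper."""
    return sum(WEIGHTS[factor] * ratings[factor] for factor in WEIGHTS)

systems = {
    "payment_api":   {"data_sensitivity": 5, "exposure": 5, "business_criticality": 5,
                      "change_velocity": 4, "historical_findings": 3},
    "intranet_wiki": {"data_sensitivity": 1, "exposure": 1, "business_criticality": 2,
                      "change_velocity": 1, "historical_findings": 1},
}

# Rank systems so the highest-risk ones receive the most thorough assessment.
for name in sorted(systems, key=lambda n: risk_score(systems[n]), reverse=True):
    print(name, round(risk_score(systems[name]), 2))
# → payment_api 4.65
# → intranet_wiki 1.2
```

The exact numbers matter less than the discipline: assessment intensity follows the score, not an equal-treatment schedule.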

Rules of Engagement

Rules of engagement (ROE) define the boundaries and conditions under which an assessment — particularly a penetration test — will operate. Without documented ROE, testers risk causing business disruption, and the organization risks legal exposure.

ROE should specify:

  • Authorization — Written, signed authorization from an executive with the authority to approve testing. Verbal approval is insufficient.
  • Scope — Specific systems, networks, and applications included and excluded.
  • Permitted techniques — Which attack methods are allowed? Social engineering, denial-of-service attempts, and physical access testing each carry specific risks that must be explicitly authorized or prohibited.
  • Timing constraints — When testing may and may not occur, including blackout periods around critical business events.
  • Communication protocols — How the testing team communicates with the organization during the assessment, including escalation procedures for critical findings discovered mid-test.
  • Data handling — How the testing team will protect any sensitive data encountered during the assessment, and how evidence will be stored, transmitted, and destroyed.
  • Emergency stop procedures — How to halt testing immediately if unintended disruption occurs.

Pattern Recognition

Domain 6 assessment strategy questions tend to follow these patterns:

  • Matching assessment type to scenario — The question describes a need (breadth vs. depth, compliance vs. discovery) and you choose the appropriate assessment type.
  • Frequency and trigger questions — A change occurs and no assessment follows. The answer points to the missing trigger-based assessment.
  • Audit party selection — The question describes who needs assurance and you select the right audit party (first, second, or third).
  • SOC report matching — The scenario describes an audience and a need, and you choose SOC 1, SOC 2 Type I, SOC 2 Type II, or SOC 3.

Trap Patterns

Watch for these wrong answers:

  • “Test everything equally” — Risk-based prioritization is always the expected approach. Treating all systems with the same assessment intensity ignores resource constraints and risk differences.
  • “Annual testing is sufficient” — Fixed schedules alone are inadequate. Significant changes require trigger-based assessment regardless of the schedule.
  • “Internal audit provides the same assurance as external audit” — Internal audits have value, but they lack the independence that regulators and external stakeholders expect.
  • “SOC 2 Type I proves controls are effective” — Type I only evaluates control design at a point in time. Type II evaluates operating effectiveness over a period.

Scenario Practice


Question 1

A financial services company needs to provide assurance to its banking clients that its cloud platform meets security, availability, and confidentiality requirements. The clients want evidence that controls have been operating effectively over the past year, not just that they were designed correctly.

Which report should the company obtain?

A. SOC 1 Type II
B. SOC 2 Type II
C. SOC 2 Type I
D. SOC 3

Answer & reasoning

Correct: B

The clients need assurance about security, availability, and confidentiality (Trust Services Criteria, which means SOC 2, not SOC 1). They want evidence of operating effectiveness over time, which requires Type II. SOC 2 Type I only examines design at a point in time. SOC 3 lacks the detail clients need for due diligence.


Question 2

An organization completed its annual penetration test four months ago. Since then, the company migrated its customer-facing application to a new cloud provider and deployed a new API gateway. No additional testing has been performed.

What assessment gap exists?

A. The annual penetration test should have been rescheduled to occur after the migration
B. The significant infrastructure changes should have triggered a new assessment of the affected systems
C. Vulnerability scanning would have been sufficient to cover the new environment
D. The cloud provider’s security certifications eliminate the need for additional testing

Answer & reasoning

Correct: B

Significant infrastructure changes — cloud migration, new API gateway — are assessment triggers. The annual penetration test validated the previous environment, not the current one. The cloud provider’s certifications cover their controls, not the organization’s configuration and integration. Vulnerability scanning alone does not provide the depth of a penetration test.


Question 3

A healthcare organization is planning its first penetration test. The CISO wants to include social engineering against clinical staff and testing of the electronic health records system during business hours. The legal team has concerns.

What should be established BEFORE the test proceeds?

A. Verbal authorization from the CIO
B. Written rules of engagement that define authorized techniques, scope, timing, and emergency procedures, signed by executive leadership
C. A non-disclosure agreement with the testing team
D. Notification to all employees that a penetration test will occur

Answer & reasoning

Correct: B

Social engineering and production-system testing carry significant risk. Written rules of engagement with executive sign-off define exactly what is authorized, protect both the organization and the testers, and address legal concerns. Verbal authorization is insufficient for this scope. An NDA is necessary but does not define testing boundaries. Notifying all employees would undermine social engineering testing.


Key Takeaway

Assessment strategy design is about asking the right questions at the right time with the right level of intensity. The exam will not ask you to configure a scanner or write a test plan. It will ask you to select the correct assessment type for a scenario, identify when a trigger-based assessment was missed, match audit standards to their intended audience, and recognize when internal assessment is insufficient and external independence is required. Every answer should connect back to risk: higher risk demands more frequent, more independent, and more thorough assessment.

Next Module: Module 39, Security Control Testing Methods