Module 17 of 25 · Practice & Strategy

Secure software development lifecycle

45 min read · 4 outcomes · 1 interactive explorer + terminal + drag challenge · 5 standards cited

By the end of this module you will be able to:

  • Explain why integrating security into the SDLC is cheaper than retrofitting it
  • Map the Microsoft SDL phases (training, requirements, design, implementation, verification, release, response) to activities and security gates
  • Apply OWASP SAMM v2.0 to assess an organisation's security maturity
  • Write a verifiable security requirement using the “system shall” format

Real-world incident · 1 April 2014

A missing bounds check. Two years undetected. Half the internet exposed.

CVE-2014-0160 was a flaw in OpenSSL's implementation of the TLS heartbeat extension. When a client sent a heartbeat request, it included a length field claiming how many bytes it had sent. The server read that field and returned the same number of bytes from its own memory. The bug: the server never checked that the claimed length matched the actual payload. A client could claim it had sent 65,535 bytes while sending only one, and the server would dutifully read 64 kilobytes of its process memory and send it back.
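The missing check is small enough to show in a few lines. The sketch below models the heartbeat handler's logic in Python rather than OpenSSL's C; the function name and shape are illustrative, not the actual OpenSSL code.

```python
def heartbeat_response(payload: bytes, claimed_length: int) -> bytes:
    """Echo a heartbeat payload back to the client.

    Vulnerable servers trusted claimed_length and copied that many bytes
    from memory starting at the payload, leaking whatever happened to
    follow it. The patched logic rejects inconsistent requests.
    """
    if claimed_length > len(payload):  # the bounds check OpenSSL lacked
        raise ValueError("heartbeat length exceeds actual payload")
    return payload[:claimed_length]

# A well-formed request is echoed back unchanged:
heartbeat_response(b"bird", 4)          # returns b'bird'

# A Heartbleed-style request (claim 65,535 bytes, send one) now fails:
# heartbeat_response(b"x", 65535)       # raises ValueError
```

The fix is a single comparison; the module's point is that no process existed to demand it.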

That 64 KB could contain anything held in the OpenSSL process at that moment: private TLS keys, session tokens, usernames, passwords. Discovered independently by Neel Mehta at Google and researchers at Codenomicon, the vulnerability was estimated to affect approximately 500,000 servers on the public web. Yahoo Mail, AWS load balancers, Cisco routers, and Android 4.1 devices were all affected. Remediation cost the industry over $500 million. Certificate authorities had to reissue certificates. Users were advised to change passwords across every affected service.

The change that introduced Heartbleed was committed to OpenSSL in December 2011 by a doctoral student. It passed code review and remained in production for more than two years. A SAST rule checking for missing bounds validation on memory-copy calls, or a mandatory security review for changes to cryptographic code, could have caught it at submission time. OpenSSL had neither. The lesson is not about the individual who wrote the code; it is about the absence of a process that makes such defects detectable before they ship.

The code was open source and reviewed by thousands. How did a buffer over-read hide for two years?

Heartbleed was not exotic. It was a buffer over-read, a class of defect first documented in 1972. A mature secure SDLC would have caught it in the implementation phase. This module covers the practices that prevent such defects from ever reaching production.

With the learning outcomes established, the module begins by examining why security work should shift left.

17.1 Why shift left?

The economic argument for early security is well-established. An IBM Systems Sciences Institute study, widely cited in industry and referenced in NIST guidance, found that a defect caught during the requirements phase costs roughly one unit to fix. The same defect, found in production, costs approximately 100 units to remediate when you account for incident response, patch distribution, customer communication, and reputational damage.

“Shifting left” means moving security activities earlier in the development pipeline rather than concentrating them in a testing phase at the end. The idea is not to remove testing; it is to add security thinking at every earlier stage so that the testing phase finds nothing catastrophic. Heartbleed illustrates this precisely: the flaw existed at the requirements level (no requirement to validate heartbeat payload length) and the implementation level (no SAST rule to catch missing bounds checks). Testing at release found nothing because no one was looking in the right place.
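What might such a SAST rule look like? The toy checker below flags C `memcpy` calls whose length argument is a bare variable (potentially attacker-influenced) rather than a `sizeof()` expression. This is a crude heuristic, nothing like a production analyser, but it shows the shape of a shift-left, implementation-phase check.

```python
import re

# Match memcpy(...) calls whose final argument is a bare identifier.
# Calls whose length is a sizeof(...) expression will not match.
MEMCPY_VAR_LEN = re.compile(r"memcpy\s*\([^;]*?,\s*([A-Za-z_]\w*)\s*\)")

def flag_suspect_memcpy(c_source: str) -> list[str]:
    """Return variable names used as unvalidated memcpy lengths."""
    return [m.group(1) for m in MEMCPY_VAR_LEN.finditer(c_source)]

# The shape of the Heartbleed copy: length taken straight from the request.
flag_suspect_memcpy("memcpy(bp, pl, payload);")        # ['payload']
# A length derived from sizeof() is not flagged:
flag_suspect_memcpy("memcpy(dst, src, sizeof(dst));")  # []
```

Real SAST engines use data-flow analysis rather than regexes, but the principle is the same: the rule runs on every commit, so the defect is caught the day it is written, not two years later.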

Security is not a feature to be added at the end; it is a quality attribute that must be designed in from the start.

NIST SP 800-160 Vol. 1 - Section 3.1: Security as a System Quality Attribute

NIST SP 800-160 Vol. 1 establishes security as a system engineering concern rather than a late-stage add-on. This framing is the foundation for every shift-left practice covered in this module.

Common misconception

We can add security in the testing phase.

Security testing finds defects; it does not fix insecure architecture. A penetration test can confirm that a session management design is vulnerable to session fixation, but it cannot redesign the session management model without rebuilding the feature. That rebuild costs far more than designing the session model correctly in the first place. Testing is necessary but insufficient as a sole security mechanism.

With the economics of shifting left established, the discussion turns to the Microsoft SDL, which turns those economics into a concrete process.

17.2 Microsoft SDL: seven phases

The Microsoft Security Development Lifecycle was first published in 2004, following the company's internal Trustworthy Computing initiative, and was updated in 2022 for modern DevOps and cloud environments. It defines seven phases, each with mandatory security activities that must be completed before work proceeds to the next phase:

  1. Training
  2. Requirements
  3. Design
  4. Implementation
  5. Verification
  6. Release
  7. Response

Click each phase in the explorer below to see its required security activities and the consequence of skipping it.


Loading interactive component...

The SDL is not a prescription but a framework. Some phases apply universally (Training, Requirements); others scale with risk (fuzz testing is mandatory for network-facing parsers but optional for a simple CRUD form). What matters is that each phase has an explicit security gate: a documented check that must pass before work proceeds. The absence of those gates is precisely the gap that allowed the Heartbleed commit to pass through review undetected.
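The gate idea can be sketched as code. The phase list comes from the SDL; the function name and the way a gate result is represented are illustrative, not prescribed by the framework.

```python
# "Each phase has an explicit security gate": work may not advance
# until the current phase's documented check has passed.

SDL_PHASES = ["training", "requirements", "design", "implementation",
              "verification", "release", "response"]

def next_phase(current: str, gate_passed: bool) -> str:
    """Advance to the next SDL phase only if the current gate passed."""
    if not gate_passed:
        raise RuntimeError(f"security gate failed in {current!r}: cannot proceed")
    i = SDL_PHASES.index(current)
    return SDL_PHASES[min(i + 1, len(SDL_PHASES) - 1)]

next_phase("design", gate_passed=True)     # 'implementation'
# next_phase("design", gate_passed=False)  # raises RuntimeError
```

The value is not the code but the invariant it encodes: there is no path from design to implementation that bypasses the design gate.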

17.3 OWASP SAMM v2.0: measuring maturity

Knowing that the SDL phases exist and actually implementing them consistently are two different things. OWASP SAMM (Software Assurance Maturity Model) v2.0 provides a structured way to measure where an organisation currently sits and what improvement looks like. It organises security activities into five business functions, each subdivided into two practices.

Business Function   Practice 1                Practice 2
Governance          Strategy & Metrics        Policy & Compliance
Design              Threat Assessment         Security Requirements
Implementation      Secure Build              Secure Deployment
Verification        Architecture Assessment   Security Testing
Operations          Incident Management       Environment Management

Each practice is rated at one of three maturity levels. Level 1 indicates the activity is performed on an initial, ad-hoc basis. Level 2 means the activity is structured, documented, and consistently applied. Level 3 means the activity is optimised through metrics, continuous improvement, and organisation-wide coverage. A team that has automated SAST on pull requests but has no written policy and no exception-tracking metrics sits at Level 2 for Secure Build.
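The three levels can be read as a simple scoring rule. The boolean criteria below are an illustrative simplification of SAMM v2.0's quality criteria, not the official assessment questionnaire.

```python
def samm_level(performed: bool, consistent: bool, measured: bool) -> int:
    """Rate one practice: 0 absent, 1 ad hoc, 2 structured, 3 optimised."""
    if not performed:
        return 0
    if not consistent:
        return 1   # performed, but on an initial, ad-hoc basis
    if not measured:
        return 2   # structured and consistently applied
    return 3       # optimised via metrics and continuous improvement

# The Secure Build example from the text: SAST runs automatically on
# every pull request (performed, consistent), but there is no written
# policy and no exception-tracking metrics yet.
samm_level(performed=True, consistent=True, measured=False)   # 2
```

An assessment is then just this rating applied across all ten practices, which is what makes SAMM scores comparable between teams and over time.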

SAMM provides a yardstick to measure security practices and a roadmap to improve them.

OWASP SAMM v2.0 - Introduction

SAMM is designed for incremental improvement, not overnight transformation. Reaching Level 2 across all five business functions is a more meaningful milestone than reaching Level 3 in one practice while others remain at Level 0.

With a maturity yardstick in hand, the next question is what to prioritise. The CWE Top 25 provides that target list.

17.4 CWE Top 25 most dangerous weaknesses

The MITRE CWE Top 25:2023 list identifies the most impactful software weakness classes based on NVD CVE frequency and severity scores. It gives development and security teams a prioritised target list for SAST rules, code review focus, and training content. The full list covers 25 weakness categories; the top five account for a disproportionate share of critical vulnerabilities.

#1 · CWE-787

Out-of-Bounds Write

Writing past a buffer's end; dominant in C/C++ code. Heartbleed's own class, CWE-126 (Buffer Over-read), is its read-side counterpart.

#2 · CWE-79

Cross-Site Scripting (XSS)

Unsanitised user input rendered as HTML; the attacker runs scripts in the victim's browser.

#3 · CWE-89

SQL Injection

User input concatenated into SQL queries; the attacker reads or modifies the database.

#4 · CWE-416

Use After Free

Memory accessed after being freed; exploitable for code execution in C/C++ runtimes.

#5 · CWE-78

OS Command Injection

Unsanitised input passed to a shell command; the attacker executes arbitrary OS commands.

Common misconception

CWE and CVE are the same thing.

CWE (Common Weakness Enumeration) describes classes of software weakness: for example, CWE-126 describes the general class of buffer over-reads. CVE (Common Vulnerabilities and Exposures) identifies specific, named instances of those weaknesses in specific products: for example, CVE-2014-0160 is the specific instance of CWE-126 in OpenSSL 1.0.1 through 1.0.1f. CWE is the taxonomy; CVE is the catalogue of individual specimens.
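The taxonomy/catalogue relationship is many-to-one, which a tiny data sketch makes concrete. The IDs below are real; the dictionary shape is the point.

```python
# Many CVEs (specific instances) map to one CWE (weakness class).
CVE_TO_CWE = {
    "CVE-2014-0160": "CWE-126",   # Heartbleed: buffer over-read in OpenSSL
}

CWE_NAMES = {
    "CWE-126": "Buffer Over-read",
    "CWE-787": "Out-of-bounds Write",
}

# Resolving an incident ID to its weakness class:
CWE_NAMES[CVE_TO_CWE["CVE-2014-0160"]]   # 'Buffer Over-read'
```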

With the weakness taxonomy established, the final section turns to writing security requirements as testable non-functional requirements (NFRs).

17.5 Security requirements as NFRs

A security requirement is only useful if it is testable. Vague requirements make it impossible to determine whether a product is compliant, and they are routinely deprioritised in sprint planning because no one can write a failing test for them. The “system shall” format forces specificity by requiring: the subject (the system), the obligation (shall), the specific control, the measurable threshold, and the test reference.

Weak requirement

“The system shall be secure.”

Not measurable, not testable, not assignable. Passes every audit until something goes wrong.

Strong requirement

“The system shall reject any input exceeding 255 characters in the username field and return HTTP 400. Verified by automated test TC-AUTH-012.”

Specific, measurable, and linked to a test case. Fails or passes definitively.
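The strong requirement translates directly into code plus a verifying test. A minimal sketch, assuming a plain function in place of a real web framework; the validator name is invented, and the test mirrors the hypothetical test ID TC-AUTH-012 from the requirement.

```python
MAX_USERNAME_LEN = 255

def validate_username(username: str) -> int:
    """Return an HTTP status: 400 if the username exceeds 255 chars, else 200."""
    if len(username) > MAX_USERNAME_LEN:
        return 400
    return 200

def test_tc_auth_012() -> None:
    # Boundary: exactly 255 characters is accepted.
    assert validate_username("a" * 255) == 200
    # One character over the limit is rejected with HTTP 400.
    assert validate_username("a" * 256) == 400

test_tc_auth_012()
```

Because the requirement names a threshold and a status code, the test writes itself; that is the practical meaning of "verifiable".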

OWASP ASVS (Application Security Verification Standard) v5.0.0 provides over 300 pre-written, testable security requirements across 14 verification categories. Rather than writing requirements from scratch, teams can select the relevant ASVS chapters for their application type and adapt the language to their context. Using an established catalogue also makes requirements traceable to an external standard, which simplifies compliance evidence for ISO 27001 and SOC 2 audits.

17.6 Check your understanding

According to the IBM System Sciences Institute, fixing a defect in production costs approximately how much more than fixing it in the requirements phase?

Heartbleed (CVE-2014-0160) belonged to which CWE category?

An OWASP SAMM Level 1 rating for an activity means:


Key takeaways

  • Fixing a security defect costs up to 100× more in production than in requirements — shift-left is an economic decision, not just a technical one.
  • Microsoft SDL embeds security gates throughout the lifecycle: Training, Requirements, Design, Implementation, Verification, Release, and Response must each be completed before proceeding to the next.
  • OWASP SAMM v2.0 measures maturity across Governance, Design, Implementation, Verification, and Operations at three levels.
  • CWE Top 25:2023 identifies the most impactful software weakness classes; CWE describes weakness types, CVE identifies specific product instances.
  • A good security requirement is specific and testable: ‘The system shall reject inputs exceeding 255 characters’ — not ‘The system shall be secure’.

You can now embed security into every phase of the development lifecycle. But even a well-designed SDLC operates within a network. How do you reduce the attack surface that network exposure creates, and what does zero trust mean in practice? Module 18 covers exposure reduction and zero trust architecture.

Standards and sources cited in this module

  1. Microsoft Security Development Lifecycle

    SDL Practices, 2022 edition

    Defines the seven-phase SDL model described in this module.

  2. OWASP SAMM v2.0

    Business Functions and Maturity Levels

    The maturity model used to assess and improve security practices across five business functions.

  3. NIST SP 800-160 Vol. 1

    Section 3.1: Security as a System Quality Attribute

    Establishes the principle that security must be designed in, not added later.

  4. MITRE CWE Top 25 Most Dangerous Software Weaknesses 2023

    Scoring and ranking methodology

    Provides the weakness taxonomy referenced throughout this module.

  5. OWASP Application Security Verification Standard v5.0.0

    Security Requirements chapters

    Pre-written, testable security requirements across 14 verification categories.
