CPD timing for this level

Practice and Strategy time breakdown

This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.

Reading: 13m (1,956 words · base 10m × 1.3)
Labs: 30m (2 activities × 15m)
Checkpoints: 5m (1 block × 5m)
Reflection: 40m (5 modules × 8m)
Estimated guided time: 1h 28m, based on page content and disclosed assumptions.
Claimed level hours: 16h. The claim includes reattempts, deeper practice, and capstone work.
The claimed hours exceed the current on-page estimate by about 14.5 hours. That gap is where I will add more guided practice and assessment-grade work so the hours are earned, not declared.

What changes at this level

Level expectations

I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.

Anchor standards (course-wide): Core Internet RFCs (DNS, TCP/IP, TLS)
Assessment intent: Practice and Strategy

Operational reasoning, monitoring signals, and layer-aware security controls.

Assessment style
  • Format: scenario
  • Questions: 50
  • Timed: 75 minutes
  • Pass standard: 80%

Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.

Evidence you can save (CPD friendly)
  • A security-by-layer map: control, what it prevents, what it detects, and how you test it.
  • A minimal monitoring plan: latency percentiles, loss, DNS success rate, TLS handshake failures, and application errors, plus the runbook links.
  • A short incident narrative written like an operator: timeline, hypothesis, evidence, action, and what you changed so it does not repeat.

Network models for security and operations


CPD tracking

Fixed hours for this level: 16. Timed assessment time is included once on pass.


CPD assessment

Network models Practice

Certificates support your career and help keep the site free for learners on the browser-only tier. Sign in before you learn if you want progress and CPD evidence recorded.

During timed exams, Professor Ransford is paused and copy actions are restricted to reduce casual cheating.
CPD and certification alignment (guidance, not endorsed):

This level focuses on placing controls and signals correctly, which supports both networking and security practice. It maps well to:

  • Cisco CCNA (operational understanding and diagnosis)
  • CompTIA Network+ (applied networking with practical troubleshooting)
  • Security practice pathways where network evidence matters (detection, incident response, and hardening)
How to use Practice
This is where the model becomes useful at work. You place controls where they can actually function, and you monitor signals that tell you something real.
Good practice
For each control, write what it prevents, what it detects, and how you would test it. If you cannot test it, it is probably theatre.

In practice, the model becomes valuable when you can place controls and signals correctly. This level is about security and operations thinking, grounded in correct network foundations.


🛡️

Security by layer

Concept block
Controls that can actually work
Place controls where they apply. A control that cannot help is theatre.
Assumptions
  • Controls match failure modes
  • You can test controls
Failure modes
  • Edge controls for app bugs
  • Overtrusting segmentation

Here is a simple way to avoid security theatre. Put each control where it can actually work.

Examples:

  1. A strong password policy does not fix a broken access control check in an API.
  2. TLS protects data in transit. It does not validate what the application does with data.
  3. A firewall can block unwanted traffic. It cannot fix an unsafe query inside your code.

Use layers to place your controls, then verify the control with a test.

Worked example. The control you chose cannot physically help

A team sees credential stuffing attempts and responds by adding a Web Application Firewall rule. That might reduce noise at the edge, but it does not fix the real weakness if the application still allows unlimited login attempts, weak session handling, or poor MFA coverage.

My opinion: “security controls” should be treated like engineering, not like decoration. A control is only real if you can describe the failure mode it prevents and how you would test it.

Common mistakes (security theatre starter pack)

  • Treating TLS as “security solved”. TLS is a transport protection, not an application authorisation strategy.
  • Treating NAT as a firewall and assuming it provides policy.
  • Spending time on controls that sound serious while leaving basic identity and logging gaps untouched.

Verification. A control is not real until you can test it

  • State the control intent in one sentence. Example: “Only the app tier can reach the database tier on port 5432.”
  • State the test. Example: “Attempt a connection from a blocked zone and confirm it fails, and that the block is logged.”
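To make that test concrete, here is a minimal sketch in Python. The hostname, port, and timeout are hypothetical stand-ins for your own setup; run something like this from a host in the blocked zone, and remember the second half of the test (confirming the block is logged) lives in your firewall logs, not in this script.

```python
# Minimal control test: from a blocked zone, a connection to the data tier
# should fail. Hostname, port, and timeout are hypothetical stand-ins.
import socket

DB_HOST = "db.internal.example"   # hypothetical database-tier hostname
DB_PORT = 5432

def flow_is_blocked(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if the connection is refused or times out."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False          # connection succeeded: the control failed
    except OSError:               # covers refusal, timeout, and unreachable
        return True

if __name__ == "__main__":
    if flow_is_blocked(DB_HOST, DB_PORT):
        print("PASS: blocked zone cannot reach the database tier")
    else:
        print("FAIL: blocked zone reached the database tier")
```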

📈

Observability and signals

Concept block
Signals to action
Observability matters when signals trigger action, not only dashboards.
Assumptions
  • Signals are measurable
  • Response is defined
Failure modes
  • Dashboards without action
  • Alert fatigue

When a user says "the site is slow", you need signals that separate causes.

  1. Link signals such as packet loss and retransmissions.
  2. Transport signals such as connection time and resets.
  3. Application signals such as response time and error rate.

If you only have application logs, you will miss many failure patterns.


🧪

Formal verification mindset (why it exists, and how to borrow it safely)

Concept block
Verification mindset
Verification is asking: what would prove me wrong, and can I run that test safely.
Assumptions
  • You can define failure
  • Safety boundary holds
Failure modes
  • Confirmation bias
  • Unsafe testing

When networking is “just web browsing”, bugs are annoying. When networking is control systems, safety systems, or critical infrastructure, bugs are expensive or dangerous. Formal methods exist because “we tested it a bit” is not a convincing argument when failure has a high cost.

Finite state machines (FSMs). The protocol is the allowed transitions

If you want a simple mental model: an FSM is a list of states and the events that move between them. Protocol specs often hide this behind prose. Implementations cannot. They must decide what to do for every packet in every state.
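As a sketch of that mental model, here is a toy connection FSM in Python. The states and events are illustrative, not a faithful TCP state machine; the point is that every (state, event) pair needs a decision, including the pairs the prose never mentioned.

```python
# Toy protocol FSM: the allowed transitions are data; anything else is an
# explicit decision. States and events are illustrative, not real TCP.
TRANSITIONS = {
    ("CLOSED", "open"):       "SYN_SENT",
    ("SYN_SENT", "syn_ack"):  "ESTABLISHED",
    ("ESTABLISHED", "close"): "FIN_WAIT",
    ("FIN_WAIT", "ack"):      "CLOSED",
}

def step(state: str, event: str) -> str:
    next_state = TRANSITIONS.get((state, event))
    if next_state is None:
        # The question every implementation must answer for every packet
        # in every state: drop it, reset, or log it?
        raise ValueError(f"no transition for {event!r} in state {state!r}")
    return next_state

state = "CLOSED"
for event in ["open", "syn_ack", "close", "ack"]:
    state = step(state, event)
    print(state)

step("CLOSED", "syn_ack")   # raises: the spec prose never mentioned this case
```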

Petri nets. Concurrency is where protocols get weird

Petri nets are a way to model concurrent systems and prove properties like “no deadlock” or “this message eventually gets delivered”. You do not need to become a formal methods researcher to benefit. The practical takeaway is: concurrency plus retries plus timeouts creates edge cases you will not find by reading happy-path diagrams.
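To make the token-and-transition idea tangible, here is a toy Petri-net-style sketch with made-up places and a single transition. It is a thinking aid, not a verification tool; real tools explore every reachable marking to prove properties like "no deadlock".

```python
# Toy Petri net: places hold token counts; a transition fires only when
# every input place holds a token. Places and transitions are made up.
marking = {"msg_ready": 1, "channel_free": 1, "delivered": 0}

# transition -> (input places, output places)
TRANSITIONS = {
    "send": (["msg_ready", "channel_free"], ["delivered", "channel_free"]),
}

def enabled(name):
    inputs, _ = TRANSITIONS[name]
    return all(marking[p] >= 1 for p in inputs)

def fire(name):
    inputs, outputs = TRANSITIONS[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

while any(enabled(t) for t in TRANSITIONS):
    t = next(t for t in TRANSITIONS if enabled(t))
    fire(t)
    print(f"fired {t!r}: {marking}")

# Nothing enabled: either the work is done or the system is deadlocked.
print("terminal marking:", marking)
```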

Verification. Steal the habit, not the vocabulary

  • Write down the states that matter (even if informal).
  • List the events that can occur out of order (timeouts, retries, duplicate messages).
  • Define one safety property. Example: “We never apply the same command twice.”
  • Define one liveness property. Example: “If the network recovers, the system eventually converges.”
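Here is the safety property from that list as a check you can actually run. The command IDs and the shuffled delivery order are invented; the habit being borrowed is asserting the property against a hostile event order rather than trusting the happy path.

```python
# Safety property under retries and reordering: "we never apply the same
# command twice." Command IDs and the delivery shuffle are invented.
import random

commands = [("cmd-1", "debit 10"), ("cmd-2", "debit 5")]
deliveries = commands * 3        # retries cause duplicate deliveries
random.shuffle(deliveries)       # and the network reorders them

applied = []
seen_ids = set()
for cmd_id, action in deliveries:
    if cmd_id in seen_ids:
        continue                 # duplicate: drop rather than re-apply
    seen_ids.add(cmd_id)
    applied.append((cmd_id, action))

# The safety check itself: each command ID applied at most once.
assert len(applied) == len({cmd_id for cmd_id, _ in applied})
print("safety holds:", applied)
```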

Worked example. “The site is slow” is three different problems

When someone says “slow”, I immediately ask: slow to connect, slow to get the first byte, or slow to load everything. Those map to different signals: DNS + TCP + TLS timings, server response time (TTFB), and then asset loading.

My opinion: if you only watch averages, you are basically blind. Users feel tail behaviour. Averages are comforting. Percentiles are honest.

Maths ladder (optional). Percentiles, tail latency, and queues

Foundations. What does p95 mean?

p95 is the value that 95% of observations are at or below. If p95 latency is 800 ms, then 5% of requests are slower than 800 ms.

Why it matters: that 5% can still be thousands of users.
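A quick sketch of the difference between the comforting number and the honest one, using an invented latency sample: mostly fast requests plus a rare slow path.

```python
# Invented latency sample: 95 fast requests and 5 very slow ones.
import statistics

latencies_ms = [120] * 95 + [3000] * 5

def percentile(data, p):
    """Nearest-rank percentile: the value p% of observations are at or below."""
    data = sorted(data)
    k = max(0, -(-p * len(data) // 100) - 1)   # ceil(p/100 * n) - 1
    return data[k]

print("mean:", statistics.mean(latencies_ms), "ms")  # 264.0: dragged up by the tail
print("p50: ", percentile(latencies_ms, 50), "ms")   # 120: the typical request
print("p95: ", percentile(latencies_ms, 95), "ms")   # 120: still fine
print("p99: ", percentile(latencies_ms, 99), "ms")   # 3000: what 1% of users feel
```

Note how the mean lands above p95 here: the tail drags the average around while the typical request is unchanged. That is why the average alone cannot tell you which of those two experiences a user had.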

Undergraduate. Little’s Law (queueing intuition)

A very useful relationship in steady-state systems is:

L = λ · W

Definitions:

  • L: average number of items in the system (queue + being served)
  • λ: arrival rate (items per second)
  • W: average time in the system (seconds)

Interpretation: if arrivals go up and service capacity does not, W rises. That rise often shows up as tail latency before it shows up in averages.
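A small worked sketch with invented numbers: first Little's Law directly, then the "W rises" effect using the idealised single-server M/M/1 formula W = 1 / (μ − λ). That formula is a textbook idealisation, not a model of your real system, but the shape of the blow-up near capacity is the point.

```python
# Little's Law with made-up numbers, then queueing intuition from an
# idealised single-server M/M/1 model: W = 1 / (mu - lam), valid for lam < mu.
lam = 100.0   # arrival rate, requests per second
W = 0.2       # average time in the system, seconds
L = lam * W
print(f"L = {L:.0f} requests in flight")       # 20

mu = 120.0    # service capacity, requests per second
for lam in (60, 90, 110, 118):
    W = 1.0 / (mu - lam)                       # rises sharply near capacity
    print(f"lam={lam:>3}/s: W = {W * 1000:6.1f} ms")
```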

Rigour direction (taste). Heavy tails and why p99 can dominate user trust

Real systems often have heavy tails: rare slow events that are much slower than the typical case. Those events dominate user memory and trust.

You do not fix heavy tails by chasing one average number. You fix them by finding the rare slow path and removing it.

Verification. A minimal signal set that actually separates causes

  • DNS resolution time
  • TCP connect time
  • TLS handshake time
  • Time to first byte (TTFB)
  • Response status rate (4xx/5xx)
  • Retransmission rate (when available)
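Here is a minimal, stdlib-only sketch that separates the first four of those signals on a single request. example.com stands in for a real target, and one sample proves nothing; real monitoring records these continuously and reports percentiles. The value is in the split, not the numbers.

```python
# Separate DNS, TCP connect, TLS handshake, and TTFB for one request.
# Stdlib only; example.com is a placeholder target.
import socket
import ssl
import time

host, port = "example.com", 443

t0 = time.perf_counter()
ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
t_dns = time.perf_counter()

sock = socket.create_connection((ip, port), timeout=5)
t_tcp = time.perf_counter()

ctx = ssl.create_default_context()
tls = ctx.wrap_socket(sock, server_hostname=host)   # handshake happens here
t_tls = time.perf_counter()

tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
tls.recv(1)                                         # first byte back = TTFB
t_ttfb = time.perf_counter()
tls.close()

print(f"DNS  {(t_dns - t0) * 1000:7.1f} ms")
print(f"TCP  {(t_tcp - t_dns) * 1000:7.1f} ms")
print(f"TLS  {(t_tls - t_tcp) * 1000:7.1f} ms")
print(f"TTFB {(t_ttfb - t_tls) * 1000:7.1f} ms")
```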

🔎

Packet captures without hacking

Concept block
Capture to interpretation
Packet capture is evidence. Use it safely and ethically.
Assumptions
  • Ethical boundary is clear
  • Filters are deliberate
Failure modes
  • Unsafe experimentation
  • Overinterpreting

Packet capture is observation. It is not exploitation. Only capture traffic you own or are authorised to inspect.

The goal in this course is interpretation:

  1. Identify the protocols you see.
  2. Confirm the order of events.
  3. Relate a symptom to one layer boundary.

You do not need to craft attack traffic to learn these skills.

Common mistakes (captures edition)

  • Capturing too much and drowning in noise instead of defining a hypothesis first.
  • Forgetting that encrypted payloads are expected (TLS). You are often looking at metadata and timing, not the content.
  • Capturing traffic you are not authorised to inspect. If you do not have permission, do not do it.

Verification. What you should extract from a safe capture

  • Protocol order (DNS → TCP → TLS → HTTP)
  • Evidence of retransmissions or resets
  • Timing between events (where the wait is)
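As a sketch of extracting exactly those three things from a capture you are authorised to hold: this assumes the third-party scapy library is installed, demo.pcap is a placeholder filename, and the half-second "long wait" threshold is arbitrary.

```python
# Interpretation, not exploitation: walk an authorised capture and report
# protocol order, resets, and where the waits are. demo.pcap is a placeholder.
from scapy.all import rdpcap, TCP, UDP   # assumes scapy is installed

events = []
for pkt in rdpcap("demo.pcap"):
    t = float(pkt.time)
    if UDP in pkt and 53 in (pkt[UDP].sport, pkt[UDP].dport):
        events.append((t, "DNS"))
    elif TCP in pkt:
        flags = int(pkt[TCP].flags)
        if flags & 0x02:                 # SYN: connection setup
            events.append((t, "TCP SYN"))
        if flags & 0x04:                 # RST: someone gave up
            events.append((t, "TCP RST"))

events.sort()
for (t1, a), (t2, b) in zip(events, events[1:]):
    gap = t2 - t1
    marker = "  <-- the wait" if gap > 0.5 else ""   # arbitrary threshold
    print(f"+{t2 - events[0][0]:7.3f}s  {b}  (gap {gap:.3f}s){marker}")
```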

🧱

Enterprise segmentation and trust zones

Concept block
Blast radius
Segmentation reduces blast radius. It does not eliminate compromise.
Assumptions
  • Segmentation is enforced
  • High-risk paths are restricted
Failure modes
  • Flat networks
  • Hidden trust

Segmentation is the practice of limiting which systems can talk to which systems. It reduces blast radius. It also makes incidents easier to contain.

I think in zones:

  1. Public edge and CDN.
  2. Application tier.
  3. Data tier.
  4. Admin and management plane.

The most common failure is letting the management plane become reachable from places it should not be.

Verification. A segmentation rule you can defend

  • Draw the zones and list allowed flows.
  • For one “should never happen” flow, state how you prevent it (policy) and how you detect it (logging/alerts).
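Here is that rule written as data you can test. Zone names, ports, and flows are illustrative; the point is a default-deny allowed list plus an assertion for each flow that must never happen.

```python
# Zones and allowed flows as explicit data. Names and ports are illustrative.
ALLOWED_FLOWS = {
    ("edge", "app", 443),
    ("app", "data", 5432),
    ("admin", "app", 22),
}

def is_allowed(src_zone: str, dst_zone: str, port: int) -> bool:
    """Default deny: a flow is permitted only if it is explicitly listed."""
    return (src_zone, dst_zone, port) in ALLOWED_FLOWS

# The "should never happen" flows, stated as tests rather than intentions.
assert not is_allowed("edge", "data", 5432)   # edge must never reach data tier
assert not is_allowed("edge", "admin", 22)    # edge must never reach management
print("policy holds for the flows we can name")
```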

Capstone

Concept block
Next action
The goal is a justified next action, not a clever guess.
Assumptions
  • You can justify
  • You can verify
Failure modes
  • Random changes
  • No feedback

Write a one-page troubleshooting report for a made-up incident. The incident can be "users cannot load the login page" or "some users get random timeouts".

Your report must include:

  1. Symptom and impact.
  2. Three hypotheses, one per layer boundary.
  3. One test per hypothesis.
  4. What evidence would confirm or disprove each hypothesis.

CPD evidence (operator-grade, still simple)

  • What I studied: security by layer, observability signals, safe packet capture interpretation, segmentation and trust zones.
  • What I produced: a one-page incident report with hypotheses, tests, and evidence.
  • What changed in my practice: one habit. Example: “I will record connect time and TLS time separately from server response time.”
  • Evidence artefact: the report plus a screenshot of the triage matrix choices.

Quick check

Scenario: You add TLS and want to claim 'secure in transit'. What does TLS actually protect?

Scenario: TLS is enabled but customer data still leaks. What is the most defensible explanation?

Scenario: You are designing for incidents. What is segmentation for, in one sentence?

Scenario: Your dashboard is green but users are angry. What signal shows why this can happen?

Scenario: You need to confirm protocol behaviour without doing anything unsafe. What is a safe use of packet capture?

Quick feedback

Optional. This helps improve accuracy and usefulness. No account required.