CPD assessment

Cybersecurity Applied

Certificates support your career and help keep the site free for learners on the browser-only tier. Sign in before you start if you want progress and CPD evidence recorded.

During timed exams, Professor Ransford is paused and copy actions are restricted to reduce casual cheating.

CPD timing for this level

Applied time breakdown

This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.

Reading
28m
4,014 words · base 21m × 1.3
Labs
120m
8 activities × 15m
Checkpoints
35m
7 blocks × 5m
Reflection
56m
7 modules × 8m
Estimated guided time
3h 59m
Based on page content and disclosed assumptions.
Claimed level hours
16h
Claim includes reattempts, deeper practice, and capstone work.
The claimed hours are higher than the current on-page estimate by about 12h. That gap is where I will add more guided practice and assessment-grade work so the hours are earned, not declared.

What changes at this level

Level expectations

I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.

Anchor standards (course wide)
NIST Cybersecurity Framework (CSF 2.0) · ISO/IEC 27001 and 27002
Assessment intent
Applied

Scenario based judgement, common failure modes, and trade-offs between controls.

Assessment style
Format: scenario
Questions: 50
Timed: 75 minutes
Pass standard
80%

Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.

Evidence you can save (CPD friendly)
  • A one page threat model for a small product: abuse cases, controls, and what you would log.
  • An attack surface inventory: what is exposed, what can be removed, what must be protected, and why.
  • A short risk trade-off write-up: two controls, one constraint, and a defensible choice.

Applied Cybersecurity

Level progress: 0%

CPD tracking

Fixed hours for this level: 16. Timed assessment time is included once on pass.

View in My CPD
Progress minutes
0.0 hours
CPD and certification alignment (guidance, not endorsed):

Applied is built around threat modelling, web security flows, and detection signals. That maps well to:

  • CompTIA Security+ and CySA+ (skills overlap): practical defensive reasoning and monitoring.
  • (ISC)² SSCP: hands-on security administration concepts.
  • OWASP Top 10 and ASVS (orientation): web risks and verification habits.
  • NIST CSF 2.0: protect and detect thinking with evidence.
How to use Applied
This is the level where you learn to sound calm in a security incident because you have a method, not because you are fearless.
Good practice
Take one system, draw the boundaries, then test one assumption using evidence. Repeat. That loop is what competent security looks like.

Applied is where we stop describing computers and start thinking like attackers and defenders. We keep the language human, but we anchor every idea to assets, entry points, and the controls that actually change outcomes.


🎯

Module A1. Threat modelling as design

Concept block
Threat modelling is design
Threat modelling is how you choose where to spend effort before the incident chooses for you.
Assumptions
Scope is explicit
We can name abuse paths
Failure modes
Threat list without controls
Generic threats

Security improves fastest when I can answer three questions clearly: what matters, who might attack it, and how could it fail?

A threat model is a simple story about risk. I list each asset. I name the likely attackers. I map entry points and the controls that guard them. This keeps me focused on reality instead of memorising exotic exploits.

In simple terms, threat modelling is how I decide what to worry about first. It is not a perfect prediction. It is a structured way to turn "security feels scary" into "these three controls will reduce harm most for this system."

In the real world, a cyber person does not sit around naming movie villains. We sit with engineers, product, and sometimes ops, and we ask annoying questions: where does data enter, where does it leave, who can change it, and what happens if it is wrong? We then write down the assumptions and test them.

How it fails. Teams either go too broad (a huge diagram nobody reads) or too narrow (only listing SQL injection because it is familiar). Another failure is skipping the human actors. Support staff, vendors, and internal admins are where many realistic attack paths live.

How to do it well. Keep scope small, tie each threat to an entry point, and tie each entry point to a control you can actually implement or measure. Accept trade-offs. If a control adds friction, decide where that friction is worth it and where it will be bypassed.

Small threat model in one glance

People and data cross boundaries. Assumptions must be explicit.

User → Web app → API → Database
Public internet to web app is a trust boundary. API credentials and personal data are assets.
Attacker pushes inputs at the web form and the API. Controls: validation, auth, logging.

Before you use the tool, the goal is not to produce a pretty canvas. The goal is to spot one or two high leverage failures. When you fill it in, focus on what would actually hurt and what someone could realistically do with the access they have.

After you use the tool, sanity check your output. Do your threats connect to real entry points, or did you drift into generic fear? Also check whether your controls are specific. "Be secure" is not a control. "Require MFA for staff logins and alert on new device sign-in" is closer to a control.
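To make "every threat ties to an entry point and a specific control" checkable, here is a minimal sketch of a threat model as plain data. The assets, threats, and controls listed are illustrative assumptions, not a complete model.

```python
# Hypothetical sketch: a threat model as plain data, so that every threat
# must name an entry point and a concrete, measurable control.
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str
    entry_point: str
    threat: str
    control: str  # must be specific, e.g. "MFA + new-device alert"

# Illustrative entries for the User -> Web app -> API -> Database example.
MODEL = [
    Threat("customer records", "web form", "SQL injection",
           "parameterised queries, alert on query errors"),
    Threat("API credentials", "API", "credential stuffing",
           "rate limit logins, require MFA for staff"),
]

def unanchored(model: list[Threat]) -> list[Threat]:
    """Return threats that lack an entry point or a concrete control."""
    return [t for t in model if not t.entry_point or not t.control]
```

If `unanchored` returns anything, the model has drifted into generic fear: a threat nobody can act on.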

Quick check. Threat modelling

Why start with assets before attacks

Scenario: A support agent can reset customer passwords after answering two questions. What is the asset and what is the attack surface

What is a trust boundary

Scenario: You have a mobile app, an API, and an admin console. Which boundary usually deserves the most paranoia first, and why

How do entry points relate to controls

What makes a threat model useful


🔍

Module A3. Web security in practice

Concept block
Input to database flow
Most web vulnerabilities are untrusted input reaching a sensitive operation.
Assumptions
Inputs are untrusted
Sensitive operations are gated
Failure modes
Validation gaps
Missing rate limits

Most breaches start with what is exposed, not with exotic zero days.

Operating systems run services, services open ports, and apps expose inputs. The attack surface grows with every feature and every default left unchanged.

In simple terms: attack surface is everything someone can touch. If they can touch it, they can try to break it, confuse it, or use it in a way you did not plan for.

In the real world, what I actually do is inventory exposure: endpoints, admin panels, file uploads, third party scripts, public buckets, test environments, and forgotten subdomains. Then I ask which of those touches sensitive data or powerful actions. That is where I look first.

Common failure classes repeat: injection, broken access control, unsafe defaults, and sensitive data exposed through a reachable entry point. These are not rare. They are the same few mistakes showing up in different clothes.

How it fails. A feature gets shipped behind a toggle, then the toggle is left on in production. A debug route leaks stack traces. An admin page is "temporary" and ends up indexed by a crawler. A dependency is added and nobody reviews what it loads in the browser.

How to do it well. Reduce what is exposed, secure what must be exposed, and make sure every exposed input has validation, authentication where appropriate, and monitoring. The trade-off is speed. More exposure makes development easier until the day it makes incident response harder.
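The inventory step above can be sketched as a simple prioritisation rule: exposed items that touch sensitive data or powerful actions get reviewed first. The item names are made up for illustration.

```python
# Illustrative attack-surface inventory: list what is exposed, then
# prioritise review by whether it touches sensitive data or powerful
# actions. Real inventories come from scans and asset registers.
INVENTORY = [
    {"name": "login form",    "exposed": True,  "sensitive": True},
    {"name": "debug route",   "exposed": True,  "sensitive": False},
    {"name": "admin console", "exposed": True,  "sensitive": True},
    {"name": "internal cron", "exposed": False, "sensitive": True},
]

def review_first(inventory: list[dict]) -> list[str]:
    """Exposed AND sensitive items are where to look first."""
    return [i["name"] for i in inventory if i["exposed"] and i["sensitive"]]
```

Note the debug route still matters (it is exposed), it just is not first in the queue; the "quiet" exposure discussed below often hides in that second tier.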

Where exposure creeps in

A simple web app and the inputs that expand attack surface.

Browser → API → Database
Inputs: login form, file upload, query parameter, admin console
Each input crosses a trust boundary. Validation and auth must match the risk.

Before you use the tool, treat each toggle as a real decision. When you turn something on, ask two questions. What new input did I create, and what new assumption did I just make.

After you use the tool, look for the "quiet" exposure. A lot of damage comes from boring things like default ports, metadata, admin routes, and third party scripts. The common mistake is focusing only on big obvious inputs and missing the small ones that are easiest to probe.

Quick check. Web security

What expands attack surface the fastest

Scenario: A search box lets users type anything and the results suddenly include other users' records. What class of failure might this be

Scenario: A developer leaves a debug route enabled in production because it is convenient. Why is that dangerous

How does metadata leak

Scenario: Where should validation live if you want it to actually protect you

Why map trust boundaries when reviewing attack surface


🔐

Module A2. Identity and access control

Concept block
Policy and enforcement
Access control works when policy is clear and enforcement is consistent at every entry point.
Assumptions
Least privilege is maintained
Admin is separated
Failure modes
Shadow access
Policy drift

Authentication answers who you are. Authorisation answers what you may do. Sessions remember you between requests. Mixing them up is why many breaches feel trivial.

A session is a bridge in a stateless world. Cookies or tokens are just carriers. Their safety depends on creation, storage, expiry, rotation and server side checks.

In simple terms, a session is the site remembering you between clicks. Without it, every request looks like a stranger. With it, every request is trusted as "you", which is powerful and therefore risky.

In the real world, what I see is not magic crypto. I see teams choosing between cookies and tokens, setting expiries, deciding how logout works, and then finding out that one missing authorisation check exposed data. I also see incidents where a stolen session gets reused from a different device, and nobody notices because there is no monitoring on session anomalies.

Common failures: predictable session IDs, missing expiry, and replaying stolen tokens. Another failure is confusing authentication with authorisation: users log in correctly, but the app forgets to check what they are allowed to do.

How to do it well. Use Secure and HttpOnly cookies where possible, set short lifetimes for high risk sessions, rotate tokens on sensitive events, and invalidate server side on logout. Also treat authorisation as a rule that must be enforced on every request, not just in the UI. The trade-off is user convenience. Short sessions can annoy users, so you need to decide where you accept friction and where you compensate with better UX.

Session flow at a glance

Login creates a session token that rides with each request.

Login → Server issues token → Browser stores cookie
Request with cookie → Server checks signature and role
Rotate token on privilege change, expire and invalidate on logout.

Before you use the session flow tool, focus on what is stored where. The browser will happily send cookies back to the server. The question is whether the server should believe them, and under what conditions.

After you use it, check you can explain where the trust comes from. If your model is "the cookie proves it is me forever," you are missing expiry, rotation, and invalidation.

Before you use the hijack demo: this is not teaching you to attack anyone. It is showing why session identifiers are sensitive. If someone gets one, they often do not need your password.

After you use it, focus on the controls that break the replay. Short lifetimes, rotation, binding sessions to device signals, and server side invalidation all reduce the window. A common mistake is assuming HTTPS alone prevents session theft. HTTPS protects transport, not your entire browser environment.

Real-world impact of getting cookie security wrong

In 2018, British Airways suffered a data breach affecting around 400,000 customers. Attackers injected malicious script into the airline's website that skimmed payment card details as customers entered them. The script ran in customers' browsers with the same access an XSS payload has, which is exactly the situation cookie flags like HttpOnly are designed to limit. The ICO fined British Airways £20 million for the breach. The direct cost was significant, but the reputational damage was worse. Customers lost trust, and the incident raised questions about the airline's ability to protect sensitive payment information.

Why cookie security matters

Cookies carry session identifiers and authentication tokens. If they are not configured correctly, attackers can steal sessions, hijack accounts, and access sensitive data. Missing the Secure flag means cookies can be sent over unencrypted HTTP. Missing HttpOnly means JavaScript can access cookies, making XSS attacks more dangerous. Missing SameSite means cookies can be sent in cross-site requests, enabling CSRF attacks.
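As a rough illustration of what a checker looks for, here is a minimal sketch that parses a `Set-Cookie` header and reports which of the three flags above are missing. The parsing is deliberately simplified; real attribute handling follows RFC 6265, and the header values shown are made up.

```python
# Simplified Set-Cookie flag check: report missing Secure / HttpOnly /
# SameSite attributes. Attribute names are case-insensitive per RFC 6265.
def missing_flags(set_cookie: str) -> list[str]:
    # Skip the first part (name=value), keep attribute names only.
    attrs = {part.strip().split("=")[0].lower()
             for part in set_cookie.split(";")[1:]}
    wanted = {"secure", "httponly", "samesite"}
    return sorted(wanted - attrs)
```

For example, `missing_flags("session=abc123; Path=/; HttpOnly")` reports `samesite` and `secure` as missing, while a header carrying all three flags comes back clean.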

Practice with the Cookie Checker

Use the cookie checker tool to analyse real cookie configurations. See what secure cookies look like versus insecure ones. Understand how missing flags create vulnerabilities. This hands-on practice builds the skills you need to spot cookie security issues in real applications.

Quick check. Identity and sessions

How do authentication and authorisation differ

Why are secure and HttpOnly flags important

What breaks when you skip authorisation

Scenario: A session token is stolen and replayed from another device. What controls reduce the damage window

How does session rotation help

Name one sign of privilege escalation

What should happen on logout


📊

Module A6. Logging and detection basics

Concept block
Detection loop
Detection is a loop: collect signals, decide, act, and learn from outcomes.
Assumptions
Signals map to behaviour
A runbook exists
Failure modes
Noisy alerts
No evidence captured

Prevention is never perfect. Good detection shortens the time between a bad action and your response.

I log what I fear: authentication, authorisation, sensitive access, privilege changes, unusual network patterns.

In simple terms, logs are the footprints. Monitoring is deciding which footprints matter. Response is what you do when you see them.

A log is only useful when it becomes a signal that can turn into an action. A log line without context is trivia. A signal has enough context that a human can make a decision.

In the real world, this is triage. You see a burst of failed logins, a login from a new country, an API key used at 3am, or a privilege change followed by data export. You decide what to investigate first, and you decide what evidence to preserve.

How it fails: alert fatigue, missing context, and no owner. Teams create rules that fire constantly, then everyone ignores the channel. Or they log everything but cannot search it. Or the person who gets paged has no permission to do anything meaningful.

How to do it well. Start with a few high value signals tied to real harm, tune them, and write a short playbook. Decide your trade-offs. More alerts can catch more badness but will also burn people out. A smaller set of reliable signals often beats a thousand noisy ones.

Risk thinking joins likelihood and impact. Controls reduce one or both but add cost and friction. This is why a good risk decision is not "maximum security." It is "enough security for what we are protecting, given constraints."
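The likelihood-and-impact framing above can be made concrete with a toy score. The 1-5 scales and the specific numbers are assumptions for illustration, not a real risk methodology.

```python
# Toy risk scoring on assumed 1-5 scales: risk = likelihood x impact.
# A control usually reduces one factor; residual risk never reaches zero.
def risk(likelihood: int, impact: int) -> int:
    return likelihood * impact

before = risk(4, 5)   # reachable endpoint handling sensitive data
after = risk(2, 5)    # rate limiting lowers likelihood, not impact
```

The useful part is not the arithmetic but the conversation it forces: the control halved likelihood, the impact is untouched, and `after` is still well above zero.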

From events to action

Systems emit logs, you filter for signal, then alert a human or playbook.

App + Infra → Log pipeline → Storage
Filters + rules → Alerts → On-call or dashboard
Measure false positives and detection time to tune quality.

Before you use the log triage tool, do not try to read every line. Try to spot the pattern and the story. Ask what changed, who did it, and what is the plausible next step for an attacker.

After you use it, check your priorities. A common mistake is chasing the weirdest looking event instead of the highest impact path. Another mistake is ignoring baseline behaviour. "Unusual" only makes sense when you know what normal looks like.
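The "burst of failed logins followed by a success" story can be sketched as a tiny detection rule. The event shape, the threshold, and the account names are all assumptions; real rules would also use time windows and baselines.

```python
# Illustrative detection rule: flag an account when a run of failed
# logins is immediately followed by a success, a classic brute-force
# or credential-stuffing signal.
def brute_force_then_success(events: list[tuple[str, str]],
                             threshold: int = 5) -> list[str]:
    """events: ordered (user, outcome) pairs, outcome 'fail' or 'success'."""
    streak: dict[str, int] = {}
    flagged: list[str] = []
    for user, outcome in events:
        if outcome == "fail":
            streak[user] = streak.get(user, 0) + 1
        else:
            if streak.get(user, 0) >= threshold:
                flagged.append(user)   # success after a long fail streak
            streak[user] = 0
    return flagged

events = [("bob", "fail")] * 6 + [("bob", "success"), ("ann", "success")]
```

Note what makes this a signal rather than trivia: it has an owner-facing story ("bob's account may be compromised") and a plausible next step, not just a count.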

Before you use the risk trade-off tool, remember it is about making trade-offs explicit. You are practising how to talk about risk without sounding like a robot or a fortune teller.

After you use it, focus on residual risk. The common mistake is assuming one control takes risk to zero. Another mistake is ignoring cost and usability until users bypass the control. Good controls reduce harm without breaking the business.

Quick check. Logging and risk

Scenario: You see 200 failed logins followed by one success, then a password reset and a data export. What should you investigate first

Why log authentication and authorisation events

What makes a good signal

Why do too many alerts hurt

How does likelihood differ from impact

What is residual risk

Who owns responding to alerts


🧪

Module A5. Verification and release gates

Concept block
Ship or stop
Release gates are decision logic: what must be true before we expose users.
Assumptions
Tests reflect reality
Failures are actionable
Failure modes
Green build, unsafe release
Gates that block delivery

Applied learning is only useful if you can verify it. Verification is how you check that a control really works. It is also how you avoid security theatre.

In practice, you verify three things.

  1. The control exists where it should exist
  2. The control fails safely when something is wrong
  3. The control is monitored so failure does not stay hidden
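The three checks above can be sketched against a toy access-control function. Everything here is hypothetical: `authorise`, the role table, and the failure list standing in for a real monitoring pipeline.

```python
# Hedged sketch of a verifiable control: it exists on the server, it
# fails safely (unknown roles get nothing), and failures are recorded
# so they can feed monitoring instead of staying hidden.
FAILURES: list[str] = []

def authorise(user_role: str, action: str) -> bool:
    allowed = {"admin": {"read", "delete"}, "viewer": {"read"}}
    ok = action in allowed.get(user_role, set())   # default deny
    if not ok:
        FAILURES.append(f"{user_role} denied {action}")  # monitored signal
    return ok
```

Verification then looks like assertions, not optimism: the allowed case passes, the denied case is refused, an unknown role is refused, and each refusal left evidence behind.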

Quick check. Verification

What is verification in security

Scenario: A control exists only in the UI (front-end) and not on the server. Why is that a problem

Why is verification important

Scenario: You ship a new auth rule but you do not log failures. What risk did you create

🔌

Module A4. API and service security

Concept block
Service chain security
Distributed systems fail when trust is assumed between services.
Assumptions
Service identity exists
Authz is consistent
Failure modes
Broken authorisation
Replay and reuse

APIs are how systems talk to each other. This makes them powerful. It also makes them a common path for abuse.

When you review an API, focus on these ideas.

  1. Authentication and authorisation for each action
  2. Rate limits and abuse controls
  3. Key scoping and rotation
  4. Replay and idempotency for sensitive actions
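Point 4 above can be sketched with an idempotency key: the client sends a key with each sensitive request, and a retry with the same key returns the stored result instead of repeating the action. The dict store and function names are illustrative; real systems persist keys with a TTL.

```python
# Sketch of idempotency for a payment-style endpoint: retries with the
# same key replay the stored result rather than charging twice.
PROCESSED: dict[str, str] = {}

def charge(idempotency_key: str, amount: int) -> str:
    if idempotency_key in PROCESSED:
        return PROCESSED[idempotency_key]   # replayed request, no new charge
    receipt = f"charged {amount}"           # stand-in for real payment work
    PROCESSED[idempotency_key] = receipt
    return receipt
```

This is the design property the timeout scenario in the quick check is probing: a retry is safe because the side effect happens at most once per key.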

Quick check. APIs

Why are APIs high impact

Scenario: An API key leaks. What should have limited the blast radius

Why do rate limits matter

Scenario: A payment endpoint is retried because of a timeout. What design property prevents double-charging

🏁

Module A7. Applied capstone

Concept block
Feature security review pack
The goal is a small pack you can defend: risks, controls, tests, and evidence.
Assumptions
Evidence is part of the deliverable
Trade-offs are written down
Failure modes
Docs without verification
Controls without owners

This capstone is a short design review. Choose one feature you understand. Write down assets, entry points, and what could go wrong. Then choose controls that reduce harm and describe how you would verify them.

Quick check. Capstone

What is the point of the capstone

Scenario: You pick a feature that touches personal data. What should you write down first

Scenario: Give one preventive control and one detective control for a risky endpoint

What makes a capstone defensible


Quick feedback

Optional. This helps improve accuracy and usefulness. No account required.