CPD assessment
Cybersecurity Applied
Certificates support your career and help keep the site free for learners on the browser-only tier. Sign in before you start learning if you want your progress and CPD evidence recorded.
CPD timing for this level
Applied time breakdown
This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.
What changes at this level
Level expectations
I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.
Scenario-based judgement, common failure modes, and trade-offs between controls.
Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.
- A one page threat model for a small product: abuse cases, controls, and what you would log.
- An attack surface inventory: what is exposed, what can be removed, what must be protected, and why.
- A short risk trade-off write-up: two controls, one constraint, and a defensible choice.
Applied Cybersecurity
CPD tracking
Fixed hours for this level: 16. Time for the timed assessment is counted once, when you pass.
View in My CPD
Applied is built around threat modelling, web security flows, and detection signals. That maps well to:
- CompTIA Security+ and CySA+ (skills overlap): practical defensive reasoning and monitoring.
- (ISC)² SSCP: hands-on security administration concepts.
- OWASP Top 10 and ASVS (orientation): web risks and verification habits.
- NIST CSF 2.0: protect and detect thinking with evidence.
Applied is where we stop describing computers and start thinking like attackers and defenders. We keep the language human, but we anchor every idea to assets, entry points, and the controls that actually change outcomes.
🎯Module A1. Threat modelling as design
Security improves fastest when I can answer three questions clearly: what matters, who might attack it, and how it could fail.
In simple terms, threat modelling is how I decide what to worry about first. It is not a perfect prediction. It is a structured way to turn "security feels scary" into "these three controls will reduce harm most for this system."
In the real world, a cyber person does not sit around naming movie villains. We sit with engineers, product, and sometimes ops, and we ask annoying questions: where does data enter, where does it leave, who can change it, and what happens if it is wrong? We then write down the assumptions and test them.
How it fails. Teams either go too broad (a huge diagram nobody reads) or too narrow (only listing SQL injection because it is familiar). Another failure is skipping the human actors. Support staff, vendors, and internal admins are where many realistic attack paths live.
How to do it well. Keep scope small, tie each threat to an entry point, and tie each entry point to a control you can actually implement or measure. Accept trade-offs. If a control adds friction, decide where that friction is worth it and where it will be bypassed.
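One way to keep that discipline is to record the model as a small table rather than a sprawling diagram. Below is a minimal, hypothetical sketch in TypeScript; the field names and example rows are invented for illustration, not a standard format.

```typescript
// Hypothetical shape for one row of a small threat model.
// The point is the linkage: every threat points at an entry point,
// and every entry point points at a control you can implement or measure.
interface ThreatEntry {
  asset: string;        // what we are protecting
  entryPoint: string;   // where an attacker could reach it
  threat: string;       // what could realistically go wrong
  control: string;      // something specific we can build or verify
  evidence: string;     // how we would know the control is working
}

const passwordResetModel: ThreatEntry[] = [
  {
    asset: "Customer accounts",
    entryPoint: "Support agent password reset flow",
    threat: "Social engineering of a support agent",
    control: "Require a second approver for resets on high-value accounts",
    evidence: "Log every reset with the approver's identity and review weekly",
  },
  {
    asset: "Customer accounts",
    entryPoint: "Public login form",
    threat: "Credential stuffing with leaked passwords",
    control: "Rate limit by IP and account, alert on bursts of failures",
    evidence: "Dashboard of failed-login rates per account",
  },
];
```

A row with an empty control or evidence field is usually the most useful output: it shows exactly where the conversation needs to go next.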
Small threat model in one glance
People and data cross boundaries. Assumptions must be explicit.
Before you use the tool, the goal is not to produce a pretty canvas. The goal is to spot one or two high leverage failures. When you fill it in, focus on what would actually hurt and what someone could realistically do with the access they have.
After you use the tool, sanity check your output. Do your threats connect to real entry points, or did you drift into generic fear? Also check whether your controls are specific. "Be secure" is not a control. "Require MFA for staff logins and alert on new device sign in" is closer to a control.
Quick check. Threat modelling
Why start with assets before attacks
Scenario: A support agent can reset customer passwords after answering two questions. What is the asset and what is the attack surface
What is a trust boundary
Scenario: You have a mobile app, an API, and an admin console. Which boundary usually deserves the most paranoia first, and why
How do entry points relate to controls
What makes a threat model useful
🔍Module A3. Web security in practice
Most breaches start with what is exposed, not with exotic zero days.
Operating systems run services, services open ports, and apps expose inputs. The attack surface grows with every feature and every default left unchanged.
In simple terms: attack surface is everything someone can touch. If they can touch it, they can try to break it, confuse it, or use it in a way you did not plan for.
In the real world, what I actually do is inventory exposure: endpoints, admin panels, file uploads, third party scripts, public buckets, test environments, and forgotten subdomains. Then I ask which of those touches sensitive data or powerful actions. That is where I look first.
How it fails. A feature gets shipped behind a toggle, then the toggle is left on in production. A debug route leaks stack traces. An admin page is "temporary" and ends up indexed by a crawler. A dependency is added and nobody reviews what it loads in the browser.
How to do it well. Reduce what is exposed, secure what must be exposed, and make sure every exposed input has validation, authentication where appropriate, and monitoring. The trade-off is speed: more exposure makes development easier until the day it makes incident response harder.
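As a rough sketch of what "validation, authentication where appropriate, and monitoring" can look like on a single exposed input, here is an Express-style handler in TypeScript. The route, limits, and helper below are assumptions for the example, not a reference implementation.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical auth check: a real system would verify a session or token
// server-side, not just check that a header is present.
function requireAuth(req: express.Request, res: express.Response, next: express.NextFunction) {
  if (!req.header("authorization")) {
    console.warn("auth.missing", { path: req.path, ip: req.ip }); // monitoring signal
    return res.status(401).json({ error: "authentication required" });
  }
  next();
}

// One exposed input: validate it, require auth, and log what matters.
app.post("/api/search", requireAuth, (req, res) => {
  const query = req.body?.query;
  if (typeof query !== "string" || query.length === 0 || query.length > 200) {
    return res.status(400).json({ error: "invalid query" }); // server-side validation
  }
  // ...run the search scoped to the authenticated user, never on raw input...
  res.json({ results: [] });
});
```

The logging line matters as much as the validation: a rejected request that leaves no trace is invisible to the detection work later in this level.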
Where exposure creeps in
A simple web app and the inputs that expand attack surface.
Before you use the tool, treat each toggle as a real decision. When you turn something on, ask two questions: what new input did I create, and what new assumption did I just make?
After you use the tool, look for the "quiet" exposure. A lot of damage comes from boring things like default ports, metadata, admin routes, and third party scripts. The common mistake is focusing only on big obvious inputs and missing the small ones that are easiest to probe.
Quick check. Web security
What expands attack surface the fastest
Scenario: A search box lets users type anything and the results suddenly include other users' records. What class of failure might this be
Scenario: A developer leaves a debug route enabled in production because it is convenient. Why is that dangerous
How does metadata leak
Scenario: Where should validation live if you want it to actually protect you
Why map trust boundaries when reviewing attack surface
🔐Module A2. Identity and access control
Authentication answers who you are. Authorisation answers what you may do. Sessions remember you between requests. Mixing them up is why many breaches feel trivial.
A session is a bridge in a stateless world. Cookies or tokens are just carriers. Their safety depends on creation, storage, expiry, rotation, and server-side checks.
In simple terms, a session is the site remembering you between clicks. Without it, every request looks like a stranger. With it, every request is trusted as "you", which is powerful and therefore risky.
In the real world, what I see is not magic crypto. I see teams choosing between cookies and tokens, setting expiries, deciding how logout works, and then finding out that one missing authorisation check exposed data. I also see incidents where a stolen session gets reused from a different device, and nobody notices because there is no monitoring on session anomalies.
How to do it well. Use Secure and HttpOnly cookies where possible, set short lifetimes for high-risk sessions, rotate tokens on sensitive events, and invalidate sessions server-side on logout. Also treat authorisation as a rule that must be enforced on every request, not just in the UI. The trade-off is user convenience: short sessions can annoy users, so you need to decide where you accept friction and where you compensate with better UX.
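Here is a minimal sketch of the server side of that advice, assuming an in-memory store purely for illustration. Every name and number below is an example choice, not a recommendation.

```typescript
import { randomBytes } from "node:crypto";

// In-memory store for illustration only; a real deployment would use a shared
// store so sessions can be invalidated from anywhere.
const sessions = new Map<string, { userId: string; expiresAt: number }>();

const SESSION_TTL_MS = 30 * 60 * 1000; // short lifetime for higher-risk sessions

function createSession(userId: string): string {
  const id = randomBytes(32).toString("hex");
  sessions.set(id, { userId, expiresAt: Date.now() + SESSION_TTL_MS });
  return id; // sent to the browser as a cookie with Secure, HttpOnly, SameSite
}

// Rotate on sensitive events (login, privilege change) so a stolen identifier
// stops working soon after.
function rotateSession(oldId: string): string | null {
  const session = sessions.get(oldId);
  if (!session || session.expiresAt < Date.now()) return null;
  sessions.delete(oldId);
  return createSession(session.userId);
}

// Logout must invalidate server-side, not just clear the browser cookie.
function destroySession(id: string): void {
  sessions.delete(id);
}

// Every request re-checks the session, and authorisation for the specific
// action is still a separate check on top of this.
function getUserForRequest(sessionId: string | undefined): string | null {
  if (!sessionId) return null;
  const session = sessions.get(sessionId);
  if (!session || session.expiresAt < Date.now()) return null;
  return session.userId;
}
```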
Session flow at a glance
Login creates a session token that rides with each request.
Before you use the session flow tool, focus on what is stored where. The browser will happily send cookies back to the server. The question is whether the server should believe them, and under what conditions.
After you use it, check you can explain where the trust comes from. If your model is "the cookie proves it is me forever," you are missing expiry, rotation, and invalidation.
Before you use the hijack demo: this is not teaching you to attack anyone. It is showing why session identifiers are sensitive. If someone gets one, they often do not need your password.
After you use it, focus on the controls that break the replay: short lifetimes, rotation, binding sessions to device signals, and server-side invalidation all reduce the window. A common mistake is assuming HTTPS alone prevents session theft. HTTPS protects transport, not your entire browser environment.
In 2018, British Airways suffered a data breach affecting around 500,000 customers. Attackers injected malicious code into the airline's website that skimmed payment card details as customers entered them. The attack worked because hostile script could run in customers' browsers and read data the site handled, which is exactly the kind of client-side exposure that cookie and script protections are meant to limit. The ICO fined British Airways £20 million for the breach. The direct cost was significant, but the reputational damage was worse. Customers lost trust, and the incident raised questions about the airline's ability to protect sensitive payment information.
Cookies carry session identifiers and authentication tokens. If they are not configured correctly, attackers can steal sessions, hijack accounts, and access sensitive data. Missing the Secure flag means cookies can be sent over unencrypted HTTP. Missing HttpOnly means JavaScript can access cookies, making XSS attacks more dangerous. Missing SameSite means cookies can be sent in cross-site requests, enabling CSRF attacks.
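To see what those flags look like on the wire, here is a small illustrative snippet that builds a Set-Cookie header value. The cookie name and lifetime are made up for the example.

```typescript
// Builds a Set-Cookie header value with the protective flags discussed above.
// "sid" and the one-hour lifetime are illustrative choices, not recommendations.
function secureSessionCookie(sessionId: string): string {
  return [
    `sid=${sessionId}`,
    "Secure",        // only sent over HTTPS
    "HttpOnly",      // not readable by JavaScript, limiting XSS damage
    "SameSite=Lax",  // not sent on most cross-site requests, limiting CSRF
    "Path=/",
    "Max-Age=3600",  // expires instead of living forever
  ].join("; ");
}

// An insecure equivalent would be just `sid=${sessionId}` with no flags:
// sendable over plain HTTP, readable by scripts, and attached to cross-site requests.
```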
Use the cookie checker tool to analyse real cookie configurations. See what secure cookies look like versus insecure ones. Understand how missing flags create vulnerabilities. This hands-on practice builds the skills you need to spot cookie security issues in real applications.
Quick check. Identity and sessions
How do authentication and authorisation differ
Why are the Secure and HttpOnly flags important
What breaks when you skip authorisation
Scenario: A session token is stolen and replayed from another device. What controls reduce the damage window
How does session rotation help
Name one sign of privilege escalation
What should happen on logout
📊Module A6. Logging and detection basics
Prevention is never perfect. Good detection shortens the time between a bad action and your response.
I log what I fear: authentication, authorisation, sensitive access, privilege changes, unusual network patterns.
In simple terms, logs are the footprints. Monitoring is deciding which footprints matter. Response is what you do when you see them.
In the real world, this is triage. You see a burst of failed logins, a login from a new country, an API key used at 3am, or a privilege change followed by data export. You decide what to investigate first, and you decide what evidence to preserve.
How it fails: alert fatigue, missing context, and no owner. Teams create rules that fire constantly, then everyone ignores the channel. Or they log everything but cannot search it. Or the person who gets paged has no permission to do anything meaningful.
How to do it well. Start with a few high-value signals tied to real harm, tune them, and write a short playbook. Decide your trade-offs: more alerts can catch more badness but will also burn people out. A smaller set of reliable signals often beats a thousand noisy ones.
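As one example of a high-value signal, here is a hedged sketch of a single detection rule: a burst of failed logins for an account followed by a success. The event shape and thresholds are assumptions you would tune against your own baseline.

```typescript
// Minimal event shape for the example; real logs carry much more context.
interface AuthEvent {
  timestamp: number; // epoch milliseconds
  account: string;
  outcome: "success" | "failure";
}

const WINDOW_MS = 10 * 60 * 1000; // 10 minutes, an illustrative window
const FAILURE_THRESHOLD = 20;     // also illustrative; tune against your baseline

// Flags accounts where many failures inside the window are followed by a success.
function suspiciousLogins(events: AuthEvent[]): string[] {
  const flagged = new Set<string>();
  const byAccount = new Map<string, AuthEvent[]>();

  for (const event of events) {
    const list = byAccount.get(event.account) ?? [];
    list.push(event);
    byAccount.set(event.account, list);
  }

  for (const [account, list] of byAccount) {
    list.sort((a, b) => a.timestamp - b.timestamp);
    for (const event of list) {
      if (event.outcome !== "success") continue;
      const recentFailures = list.filter(
        (e) =>
          e.outcome === "failure" &&
          e.timestamp <= event.timestamp &&
          event.timestamp - e.timestamp <= WINDOW_MS
      ).length;
      if (recentFailures >= FAILURE_THRESHOLD) flagged.add(account);
    }
  }
  return [...flagged];
}
```

The code is less important than the story it encodes: failures followed by a success maps directly onto the scenario in the quick check below.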
Risk thinking joins likelihood and impact. Controls reduce one or both but add cost and friction. This is why a good risk decision is not "maximum security." It is "enough security for what we are protecting, given constraints."
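A deliberately oversimplified way to make that visible is to put rough numbers on it. Everything below is invented for illustration; real estimates are ranges with wide error bars, not single points.

```typescript
// Simple model: expected annual loss = likelihood per year × impact.
// All figures are invented for illustration.
const likelihoodPerYear = 0.3; // rough chance of the bad event in a year
const impact = 50_000;         // rough cost if it happens

const expectedLoss = likelihoodPerYear * impact; // 15,000

// A control that halves the likelihood but costs 4,000 a year to run.
const residualLoss = (likelihoodPerYear / 2) * impact; // 7,500
const controlCost = 4_000;

const netBenefit = expectedLoss - residualLoss - controlCost; // 3,500
// Residual risk is still 7,500: the control helped, it did not take risk to zero.
```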
From events to action
Systems emit logs, you filter for signal, then alert a human or playbook.
Before you use the log triage tool, do not try to read every line. Try to spot the pattern and the story. Ask what changed, who did it, and what is the plausible next step for an attacker.
After you use it, check your priorities. A common mistake is chasing the weirdest looking event instead of the highest impact path. Another mistake is ignoring baseline behaviour. "Unusual" only makes sense when you know what normal looks like.
Before you use the risk trade-off tool, remember that this is about making trade-offs explicit. You are practising how to talk about risk without sounding like a robot or a fortune teller.
After you use it, focus on residual risk. The common mistake is assuming one control takes risk to zero. Another mistake is ignoring cost and usability until users bypass the control. Good controls reduce harm without breaking the business.
Quick check. Logging and risk
Scenario: You see 200 failed logins followed by one success, then a password reset and a data export. What should you investigate first
Why log authentication and authorisation events
What makes a good signal
Why do too many alerts hurt
How does likelihood differ from impact
What is residual risk
Who owns responding to alerts
🧪Module A5. Verification and release gates
Applied learning is only useful if you can verify it. Verification is how you check that a control really works. It is also how you avoid security theatre.
In practice, you verify three things.
- The control exists where it should exist
- The control fails safely when something is wrong
- The control is monitored so failure does not stay hidden
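Those three checks can often be expressed as tests. Here is a hedged sketch using Node's built-in test runner (Node 18 or later, so fetch is available); the URLs and expected responses are assumptions for the example, not your real service.

```typescript
import test from "node:test";
import assert from "node:assert/strict";

const BASE_URL = "http://localhost:3000"; // hypothetical service under test

// 1. The control exists where it should exist: on the server, not just in the UI.
test("protected endpoint rejects unauthenticated requests", async () => {
  const response = await fetch(`${BASE_URL}/api/admin/users`);
  assert.equal(response.status, 401);
});

// 2. The control fails safely: a malformed request is refused, not half-processed.
test("malformed input is rejected outright", async () => {
  const response = await fetch(`${BASE_URL}/api/search`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: "{ not valid json",
  });
  assert.ok(response.status >= 400 && response.status < 500);
});

// 3. The control is monitored: rejected requests should leave evidence.
// A real pipeline would query the log store here; the reminder is that
// "no log entry" is itself a finding.
```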
Quick check. Verification
What is verification in security
Scenario: A control exists only in the UI (front-end) and not on the server. Why is that a problem
Why is verification important
Scenario: You ship a new auth rule but you do not log failures. What risk did you create
🔌Module A4. API and service security
APIs are how systems talk to each other. This makes them powerful. It also makes them a common path for abuse.
When you review an API, focus on these ideas.
- Authentication and authorisation for each action
- Rate limits and abuse controls
- Key scoping and rotation
- Replay and idempotency for sensitive actions
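To make the replay and idempotency point concrete, here is a hedged sketch of an idempotency key check in front of a charge action. The store and identifiers are illustrative; a real system would use a shared, durable store.

```typescript
// Illustrative idempotency store; a real one would be shared and durable.
const processedKeys = new Map<string, { chargeId: string }>();

interface ChargeRequest {
  idempotencyKey: string; // supplied by the client, unique per intended charge
  accountId: string;
  amountPence: number;
}

// If a timeout causes the client to retry, the same key returns the original
// result instead of charging twice.
function handleCharge(request: ChargeRequest): { chargeId: string; replayed: boolean } {
  const existing = processedKeys.get(request.idempotencyKey);
  if (existing) {
    return { chargeId: existing.chargeId, replayed: true };
  }
  const chargeId = `ch_${Math.random().toString(36).slice(2)}`; // placeholder for a real charge
  processedKeys.set(request.idempotencyKey, { chargeId });
  return { chargeId, replayed: false };
}
```

This is the property the payment quick check below is pointing at: a retry after a timeout returns the original result instead of charging the customer twice.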
Quick check. APIs
Why are APIs high impact
Scenario: An API key leaks. What should have limited the blast radius
Why do rate limits matter
Scenario: A payment endpoint is retried because of a timeout. What design property prevents double-charging
🏁Module A7. Applied capstone
This capstone is a short design review. Choose one feature you understand. Write down assets, entry points, and what could go wrong. Then choose controls that reduce harm and describe how you would verify them.
Quick check. Capstone
What is the point of the capstone
Scenario: You pick a feature that touches personal data. What should you write down first
Scenario: Give one preventive control and one detective control for a risky endpoint
What makes a capstone defensible
