AI track
Notes, labs and CPD
- Foundations: data, vectors and honest accuracy.
- Intermediate: evaluation, leakage and simple pipelines.
- Advanced: transformers, agents and responsible use.
- Summary: recaps, games and quick labs.
The track stays in simple sentences while building serious skills. Use the levels below as a checklist or jump straight to the labs if you want to test a hunch.
🧠Levels and routes
Foundations
AI Foundations
Start with what AI is, how data is turned into numbers and why simple models matter.
Intermediate
AI Intermediate
Work with evaluation, overfitting, simple pipelines and where models break in the real world.
Advanced
AI Advanced
Dig into transformers, agents, diffusion and how to combine them into serious systems.
Summary
Summary and games
Test everything you know with games, scenarios and recap dashboards.
📦What you will build
🧭Mapping and evidence
How this course stays defensible
This maps the four things CPD reviewers care about: what you learn, how you practise, how you are assessed, and what evidence you can show.
- NIST AI Risk Management Framework (AI RMF 1.0) (NIST)
- ISO/IEC 23894 (AI risk management) (ISO/IEC)
- Foundations: correct mental models for data, training, evaluation, and common pitfalls.
- Applied: scenario based evaluation and pipeline decisions, including drift and governance basics.
- Modern systems: agentic systems, safety, monitoring, and governance aligned to NIST AI RMF and ISO/IEC 23894.
🧩Module coverage matrix
Module-level coverage
This matrix makes the course defensible: each module is tied to an outcome focus, the anchor standards, and the evidence you can produce.
All modules anchor to the NIST AI Risk Management Framework (AI RMF 1.0) and ISO/IEC 23894 (AI risk management); the Alignment column names the specific AI RMF functions.

| Level | Module | Outcome focus | Domains | Alignment | Assessment | Evidence |
|---|---|---|---|---|---|---|
| Foundations | What is AI (ai-foundations-what-is-ai) | Define AI clearly, separate hype from capability, and choose realistic use cases. | foundations | NIST AI RMF 1.0: Map | Practice assessment | Template + rubric |
| Foundations | Data and representation (ai-foundations-data-and-representation) | Explain how data is represented and why representation choices change model behaviour. | data | NIST AI RMF 1.0: Map | Practice assessment | Template + rubric |
| Foundations | Learning paradigms (ai-foundations-learning-paradigms) | Choose learning paradigms appropriately and understand common failure modes. | foundations | NIST AI RMF 1.0: Map | Practice assessment | Template + rubric |
| Foundations | Responsible AI basics (ai-foundations-responsible-ai-basics) | Identify AI risks early and choose governance defaults that reduce harm. | governance | NIST AI RMF 1.0: Govern · Manage | Practice assessment | Template + rubric |
| Intermediate | Prompts and patterns (ai-intermediate-prompts-and-patterns) | Use prompting as an interface: test, version, and evaluate behaviour changes. | prompts | NIST AI RMF 1.0: Measure | Formative checkpoints | Template + rubric |
| Intermediate | Embeddings and search (ai-intermediate-embeddings-and-search) | Use embeddings and search to improve retrieval and relevance with evidence. | rag | NIST AI RMF 1.0: Measure | Formative checkpoints | Template + rubric |
| Intermediate | RAG with docs (ai-intermediate-rag-with-docs) | Build RAG systems that ground answers and fail safely when retrieval is weak. | rag | NIST AI RMF 1.0: Manage · Measure | Formative checkpoints | Template + rubric |
| Intermediate | Simple agents (ai-intermediate-simple-agents) | Introduce agent/tool use safely with scoped permissions, logging, and stop conditions. | agents | NIST AI RMF 1.0: Manage | Formative checkpoints | Template + rubric |
| Advanced | Transformers and agents (ai-advanced-transformers-and-agents) | Understand transformers and agents as systems, not magic, and control failure modes. | agents | NIST AI RMF 1.0: Map · Manage | Formative checkpoints | Template + rubric |
| Advanced | Diffusion and generation (ai-advanced-diffusion-and-generation) | Reason about generation risks (misinformation, safety) and choose mitigations. | governance | NIST AI RMF 1.0: Manage | Formative checkpoints | Template + rubric |
| Advanced | Production and monitoring (ai-advanced-production-and-monitoring) | Monitor drift and system performance, and define signals that trigger intervention. | monitoring | NIST AI RMF 1.0: Measure · Manage | Formative checkpoints | Template + rubric |
| Advanced | Governance and strategy (ai-advanced-governance-and-strategy) | Set decision rights, evidence expectations, and review triggers for AI in production. | governance | NIST AI RMF 1.0: Govern | Formative checkpoints | Template + rubric |
| Summary | Concepts (ai-summary-concepts) | Consolidate the core mental models and vocabulary across the track. | foundations | NIST AI RMF 1.0: Map | Formative checkpoints | Template + rubric |
| Summary | Scenarios (ai-summary-scenarios) | Practise scenario judgement across evaluation, drift, and governance decisions. | evaluation, governance | NIST AI RMF 1.0: Measure · Manage | Formative checkpoints | Template + rubric |
| Summary | Create (ai-summary-create) | Turn learning into an artefact you can defend: plan, evidence, and next steps. | evidence | NIST AI RMF 1.0: Govern | Formative checkpoints | Template + rubric |
| Summary | Master (ai-summary-master) | Identify remaining weak areas and set a revision loop using evidence and monitoring. | monitoring | NIST AI RMF 1.0: Manage | Formative checkpoints | Template + rubric |
📚How to use this track
- Move through the four levels in order, or dip into a section you need for work right now.
- Record a few minutes when you practise so CPD stays honest and local to your browser.
- Share the labs with friends or a class and compare answers before you peek at mine.
- Keep notes on what surprised you. That is where the real learning hides.
⏱️CPD timing
Time estimate (transparent)
I publish time estimates because CPD needs to be defensible. The goal is honesty, not marketing.
- Guided learning: 30h (core levels, structured learning)
- Practice and consolidation: 3h (summary, drills, revisits)
- Notional range: 20 to 45 hours
Quick: core concepts + one exercise per module. Standard: exercises + reflections for CPD evidence. Deep: extra drills and portfolio artefacts.
How I estimate time
I use a notional learning hours approach and I keep the assumptions visible. Where modules are content heavy, I add practice so the hours are earned, not claimed.
- Reading: 225 words per minute, multiplied by 1.3 for note taking and checking understanding.
- Labs and practice: about 15 minutes per guided activity, including at least one retry.
- Reflection for CPD: about 8 minutes per module for a short defensible note and evidence link.
- Assessments: about 1.4 minutes per question for reading, thinking, and review.
If you study faster or slower, your hours will differ. What matters is that the method is consistent and the activities are real.
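The bookkeeping above fits in a few lines. A minimal sketch of the method: the constants are the ones stated in the list, while the track totals passed in (word count, activity count, and so on) are hypothetical, purely to show the arithmetic.

```python
# Notional learning hours, using the assumptions stated above.
READING_WPM = 225          # reading speed
NOTES_MULTIPLIER = 1.3     # note taking and checking understanding
LAB_MINUTES = 15           # per guided activity, including one retry
REFLECTION_MINUTES = 8     # per module, for a short CPD note
QUESTION_MINUTES = 1.4     # per assessment question

def notional_hours(words: int, activities: int, modules: int, questions: int) -> float:
    """Total notional hours for a track, rounded to one decimal place."""
    minutes = (
        words / READING_WPM * NOTES_MULTIPLIER
        + activities * LAB_MINUTES
        + modules * REFLECTION_MINUTES
        + questions * QUESTION_MINUTES
    )
    return round(minutes / 60, 1)

# Hypothetical totals, not the real track figures:
print(notional_hours(words=90_000, activities=40, modules=16, questions=120))
```

Changing any assumption shifts the result, which is exactly why the assumptions are published rather than hidden.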
📚Standards and certifications
The map we anchor to
I map each course to reputable standards so your learning is defensible at work. I also show common certifications and how their language differs.
Important: This content aligns with these standards and certifications for learning purposes. This is guidance, not endorsement. We are not affiliated with certification providers unless explicitly stated.
Primary anchor standards
- NIST AI Risk Management Framework (AI RMF 1.0) (NIST)
A practical way to talk about AI risk, governance, and controls across the lifecycle. Official reference.
- ISO/IEC 23894 (AI risk management) (ISO/IEC)
Risk management framing that is recognisable in governance and assurance conversations. Official reference.
Certification routes
This course is not endorsed by certification bodies. It is built to prepare you honestly, including where exams simplify reality.
- Foundation: Microsoft AI-900 and AI-102 (Microsoft)
A structured route that helps learners translate concepts into real cloud services and constraints.
- Practitioner: Cloud AI practitioner tracks (AWS, Google Cloud)
Useful if your goal is building and operating AI systems in production rather than only studying theory.
Organisations and resources
These are the kinds of organisations professionals reference. If you learn how to use them properly, you become harder to mislead.
- NIST
What it is: A standards body whose AI risk and cybersecurity work is widely referenced.
Why it matters: It provides shared language that is acceptable in industry and government conversations.
- ISO/IEC
What it is: International standards bodies that publish management system and risk standards.
Why it matters: Useful for governance and assurance framing, especially when an organisation needs audit-friendly controls.
- Academic research institutions
What it is: Where many AI methods are first validated, challenged, and refined.
Why it matters: It keeps the course honest about limits, evidence, and what is still uncertain.
🧪Assessment blueprint
Assessment and practice assessment
AI assessment blueprint (planned)
The AI assessment system is being upgraded to match the same blueprint discipline as the other tracks. Until then, use the checkpoints and the labs as your practice loop.
Foundations
Mixed format: correct mental models for data, training, evaluation, and common pitfalls.
Applied
Scenario format: scenario based evaluation and pipeline decisions, including drift and governance basics.
Modern systems
Mixed format: agentic systems, safety, monitoring, and governance aligned to NIST AI RMF and ISO 23894.
Design rules
- Every tier must include at least one question that tests system thinking, not only model trivia.
- Where the learner must make a judgement call, the marking should reward correct assumptions and defensible reasoning.
🧾Terminology translation
Terminology translation
AI risk and evaluation in production
AI can look brilliant in a demo and still be unsafe in a real system. The words below stop you lying to yourself by accident.
Model versus system
Plain English
The model is the maths. The system is everything around it: data pipelines, UI, humans, policies, monitoring, and incentives.
How standards use it
NIST AI RMF 1.0
Risk lives in the socio-technical system, not only the model.
ISO/IEC 23894
Treats risk management as lifecycle control, including context and deployment conditions.
Common mistake
Passing a benchmark and declaring the system safe.
My take
If you only test the model, you are testing the easiest part.
Quick check
Give one example of a model that is fine but a system that is unsafe.
Verification versus validation
Plain English
Verification asks whether you built it right. Validation asks whether you built the right thing.
How standards use it
Engineering quality practice
A common discipline split that stops teams from confusing correctness with usefulness.
Common mistake
Using "validated" to mean "it ran once and did not crash".
My take
If you cannot state what right means, you have neither.
Quick check
What would you verify and what would you validate for a model used in hiring?
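One way to make the split concrete in code. This is a toy sketch: the `predict` scorer, the 0.5 decision threshold, and the 0.75 usefulness bar are all hypothetical stand-ins, not anything from the course or a real hiring system.

```python
def predict(text: str) -> float:
    """Toy deterministic scorer standing in for a real model."""
    return min(len(text) / 100, 1.0)

# Verification: the pipeline behaves as specified ("built right").
assert 0.0 <= predict("short note") <= 1.0            # output stays in range
assert predict("same input") == predict("same input")  # deterministic

# Validation: behaviour meets the agreed requirement ("right thing built").
# Labelled examples and the accuracy bar are illustrative.
labelled = [("a" * 90, 1), ("hi", 0), ("b" * 95, 1), ("no", 0)]
accuracy = sum((predict(t) >= 0.5) == bool(y) for t, y in labelled) / len(labelled)
assert accuracy >= 0.75, "fails validation: does not meet the usefulness bar"
```

Note that the verification asserts can all pass while validation fails, which is the whole point of keeping the two words separate.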
Drift
Plain English
The world changes, so the model’s performance changes.
How standards use it
Production ML practice
Monitoring must cover quality and distribution shifts, not only uptime and cost.
Common mistake
Monitoring only uptime and acting surprised when quality collapses.
My take
If a model touches real decisions, you monitor quality like you monitor money, because both will leak.
Quick check
Name one drift signal you would measure for a text classifier.
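One common answer to that quick check is a shift in the distribution of the classifier's prediction scores. A minimal pure-Python sketch using the population stability index (PSI), a standard drift signal; the bin count, the sample values, and any alert threshold you pick are illustrative choices, not fixed rules.

```python
import math

def psi(reference: list[float], live: list[float], bins: int = 10) -> float:
    """Population stability index between two samples of scores in [0, 1].

    Larger values mean the live distribution has drifted further
    from the reference (training-time) distribution.
    """
    eps = 1e-6  # floor for empty bins so log() is defined

    def proportions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int(x * bins), bins - 1)  # clamp x == 1.0 into last bin
            counts[idx] += 1
        return [max(c / len(sample), eps) for c in counts]

    ref_p, live_p = proportions(reference), proportions(live)
    return sum((l - r) * math.log(l / r) for r, l in zip(ref_p, live_p))

ref = [0.1, 0.2, 0.2, 0.3, 0.8, 0.9]          # scores at deployment time
shifted = [0.7, 0.8, 0.8, 0.9, 0.9, 0.95]     # scores after the world changed
print(psi(ref, ref))      # ~0 for identical samples
print(psi(ref, shifted))  # clearly larger: a signal worth investigating
```

This is deliberately about the score distribution, not accuracy: you can compute it before labels arrive, which is usually the point of a drift signal.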
🛠️Quick practice
Checkpoint
- Why keep AI notes plain and compact?
- Which three stops does the track cover?
