AI course
Notes, labs, and CPD
Start with the basics of data and small models, then move through model behaviour, pipelines, and modern model families. Everything stays in plain language with quick labs so you can build confidence without guessing.
- Foundations: Data, vectors, and honest accuracy.
- Applied: Evaluation, leakage, and simple pipelines.
- Practice & Strategy: Transformers, agents, and responsible use.
- Summary: Recaps, games, and quick labs.
What you will learn
Overview
I explain what a model is, what it can and cannot do, and how to test ideas safely. When you see a claim about AI, you should have a simple way to check it rather than taking confidence as evidence.
Your progress
0 of 16 sections complete
Time estimate
Move steadily and keep short notes on what surprised you. Speed comes from understanding, not from skipping.
AI learning flow
A repeatable cycle you can use for any AI project
Use this flow to avoid skipping evaluation and safety checks.
🧠 Core path
- AI Foundations: Start with what AI is, how data is turned into numbers, and why simple models matter.
- Applied AI: Work with evaluation, overfitting, simple pipelines, and where models break in the real world.
- Practice & Strategy: Dig into transformers, agents, diffusion, and how to combine them into serious systems.
- Summary and games: Test everything you know with games, scenarios, and recap labs.
Getting started
How to use this course
Use the AI course with confidence
Keep your pace steady and apply each level to a real question.
1. Follow the four levels in sequence: start with foundations and move forward so terms, methods, and judgement stay consistent.
2. Practise with small weekly sessions: record short, honest CPD entries after each session so your progress remains useful.
3. Test your assumptions with a lab: after each concept, run one lab and explain the result in plain language.
4. Keep a reflection note: write what surprised you and what changed your decision. This builds expert intuition.
Hands-on
Further practice
Optional tool
Plan a tiny AI habit
Pick one daily habit to practise from this course.
Open this when you are ready. It reinforces learning rather than replacing it.
Read the explanation above, then try the tool, then compare your output with the example. If you are new, it is fine to skip and return later.
Quick check
Checkpoint
2 questions
For auditors and CPD
Reference and standards
These panels are for CPD defensibility, standards alignment, and audit evidence. Most learners can skip these entirely and return when they need formal documentation.
7 sections: timing, artefacts, assessment, terminology, standards, mapping, coverage.
CPD timing
Time estimate (transparent)
I publish time estimates because CPD needs to be defensible. The goal is honesty, not marketing.
- Guided learning: 30h (core levels, structured learning)
- Practice and consolidation: 3h (summary, drills, revisits)
- Notional range: 20 to 45 hours
- Quick: core concepts plus one exercise per module.
- Standard: exercises plus reflections for CPD evidence.
- Deep: extra drills and portfolio artefacts.
How I estimate time
I use a notional learning hours approach and keep the assumptions visible. Where modules are content-heavy, I add practice so the hours are earned, not claimed.
- Reading: 225 words per minute, multiplied by 1.3 for note taking and checking understanding.
- Labs and practice: about 15 minutes per guided activity, including at least one retry.
- Reflection for CPD: about 8 minutes per module for a short defensible note and evidence link.
- Assessments: about 1.4 minutes per question for reading, thinking, and review.
If you study faster or slower, your hours will differ. What matters is that the method is consistent and the activities are real.
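The method above can be sketched as a small calculator. The per-activity rates come straight from the list; the example module figures (word count, number of labs and questions) are made-up placeholders, not real course data.

```python
# Sketch of the notional-learning-hours method described above.
# Rates are taken from the course text; module inputs are hypothetical.

READING_WPM = 225          # reading speed, words per minute
NOTE_TAKING_FACTOR = 1.3   # multiplier for note taking and checking understanding
LAB_MINUTES = 15           # per guided activity, including at least one retry
REFLECTION_MINUTES = 8     # per module, for a short defensible CPD note
QUESTION_MINUTES = 1.4     # per assessment question (reading, thinking, review)

def module_minutes(words: int, labs: int, questions: int) -> float:
    """Estimate notional learning minutes for one module."""
    reading = words / READING_WPM * NOTE_TAKING_FACTOR
    practice = labs * LAB_MINUTES
    assessment = questions * QUESTION_MINUTES
    return reading + practice + REFLECTION_MINUTES + assessment

# Hypothetical module: 4,500 words, 3 labs, 10 checkpoint questions.
minutes = module_minutes(words=4500, labs=3, questions=10)
print(f"{minutes:.0f} minutes (~{minutes / 60:.1f} hours)")
```

Adjusting the reading speed or lab time shifts the total, which is how the 20-to-45-hour notional range arises from the same consistent method.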
Assessment and practice assessment
AI assessment blueprint (planned)
The AI assessment system is being upgraded to match the same blueprint discipline as the other tracks. Until then, use the checkpoints and the labs as your practice loop.
Foundations
Mixed: Correct mental models for data, training, evaluation, and common pitfalls.
Applied
Scenario: Scenario-based evaluation and pipeline decisions, including drift and governance basics.
Modern systems
Mixed: Agentic systems, safety, monitoring, and governance aligned to NIST AI RMF and ISO 23894.
Design rules
- Every tier must include at least one question that tests system thinking, not only model trivia.
- Where the learner must make a judgement call, the marking should reward correct assumptions and defensible reasoning.
Mapping
How this course stays defensible
This links the four things CPD reviewers care about: what you learn, how you practise, how you are assessed, and what evidence you can show.
- Foundations: Correct mental models for data, training, evaluation, and common pitfalls.
- Applied: Scenario-based evaluation and pipeline decisions, including drift and governance basics.
- Modern systems: Agentic systems, safety, monitoring, and governance aligned to NIST AI RMF and ISO 23894.
Coverage matrix
Module-level coverage
This matrix makes the course defensible: each module is tied to an outcome focus, the anchor standards, and the evidence you can produce.
All modules share the same anchors: NIST AI Risk Management Framework (AI RMF 1.0) and ISO/IEC 23894 (AI risk management).

| Level | Module | Outcome focus | Domains | Alignment | Assessment | Evidence |
|---|---|---|---|---|---|---|
| Foundations | What is AI (`ai-foundations-what-is-ai`) | Define AI clearly, separate hype from capability, and choose realistic use cases. | foundations | NIST AI RMF 1.0: Map | Practice assessment | Template + rubric |
| Foundations | Data and Representation (`ai-foundations-data-and-representation`) | Explain how data is represented and why representation choices change model behaviour. | data | NIST AI RMF 1.0: Map | Practice assessment | Template + rubric |
| Foundations | Learning Paradigms (`ai-foundations-learning-paradigms`) | Choose learning paradigms appropriately and understand common failure modes. | foundations | NIST AI RMF 1.0: Map | Practice assessment | Template + rubric |
| Foundations | Responsible AI Basics (`ai-foundations-responsible-ai-basics`) | Identify AI risks early and choose governance defaults that reduce harm. | governance | NIST AI RMF 1.0: Govern · Manage | Practice assessment | Template + rubric |
| Applied | Models and Training (`ai-intermediate-models-and-training`) | Mental models and applied judgement | - | - | Practice assessment | Template + rubric |
| Applied | Data, Features, Representation (`ai-intermediate-data-features-representation`) | Mental models and applied judgement | - | - | Practice assessment | Template + rubric |
| Applied | Evaluation (`ai-intermediate-evaluation`) | Evaluation and slice testing | - | - | Practice assessment | Template + rubric |
| Applied | Deployment (`ai-intermediate-deployment`) | Mental models and applied judgement | - | - | Practice assessment | Template + rubric |
| Applied | Governance (`ai-intermediate-governance`) | Governance and decision rights | - | - | Practice assessment | Template + rubric |
| Practice & Strategy | Systems and Architectures (`ai-advanced-systems-and-architectures`) | Design AI systems with clear boundaries, data paths, and safe defaults that survive real operations. | systems | NIST AI RMF 1.0: Map · Manage | Practice assessment | Template + rubric |
| Practice & Strategy | Scaling, Cost, Reliability (`ai-advanced-scaling-cost-reliability`) | Trade off cost, latency, and reliability using evidence rather than guesswork. | operations | NIST AI RMF 1.0: Measure · Manage | Practice assessment | Template + rubric |
| Practice & Strategy | Evaluation, Monitoring, Governance (`ai-advanced-evaluation-monitoring-governance`) | Define evaluation signals, monitoring triggers, and governance checks for production AI. | monitoring, governance | NIST AI RMF 1.0: Measure · Govern | Practice assessment | Template + rubric |
| Summary | Concepts (`ai-summary-concepts`) | Consolidate the core mental models and vocabulary across the track. | foundations | NIST AI RMF 1.0: Map | Formative checkpoints | Template + rubric |
| Summary | Scenarios (`ai-summary-scenarios`) | Practise scenario judgement across evaluation, drift, and governance decisions. | evaluation, governance | NIST AI RMF 1.0: Measure · Manage | Formative checkpoints | Template + rubric |
| Summary | Create (`ai-summary-create`) | Turn learning into an artefact you can defend: plan, evidence, and next steps. | evidence | NIST AI RMF 1.0: Govern | Formative checkpoints | Template + rubric |
| Summary | Master (`ai-summary-master`) | Identify remaining weak areas and set a revision loop using evidence and monitoring. | monitoring | NIST AI RMF 1.0: Manage | Formative checkpoints | Template + rubric |
All content is protected. By enrolling, you agree to our terms.
View Course Terms & IP Policy