AI track

Notes, labs and CPD

We start with the basics of data and small models, then move through model behaviour, pipelines and modern model families. Everything stays in plain language with quick labs so non-technical brains do not feel lost.
  • Foundations: Data, vectors and honest accuracy.
  • Intermediate: Evaluation, leakage and simple pipelines.
  • Advanced: Transformers, agents and responsible use.
  • Summary: Recaps, games and quick labs.

AI track progress

0 of 16 sections complete (0%)

CPD hours (AI): 30 hours
Hours are fixed by the course design. Time for the timed assessment is counted once, on a pass.

The track stays in simple sentences while building serious skills. Use the levels below as a checklist or jump straight to the labs if you want to test a hunch.

🧠

Levels and routes

Foundations

AI Foundations

Start with what AI is, how data is turned into numbers and why simple models matter.

8 hrs
Level progress: 0%

Intermediate

AI Intermediate

Work with evaluation, overfitting, simple pipelines and where models break in the real world.

10 hrs
Level progress: 0%

Advanced

AI Advanced

Dig into transformers, agents, diffusion and how to combine them into serious systems.

12 hrs
Level progress: 0%

Summary

Summary and games

Test everything you know with games, scenarios and recap dashboards.

3 hrs
Level progress: 0%

📦

What you will build

Use case boundaries memo
Foundations output
A one-page note: what the system is allowed to do, what it must not do, and what happens on a bad day.
Evaluation and red flag checklist
Intermediate output
A defensible evaluation plan: metrics, slice tests, leakage checks, and the failure you most want to catch (a short slice-test sketch follows this list).
Monitoring and governance pack
Advanced output
A lightweight governance pack: drift signals, tool use logging, incident triggers, and safe rollback steps.
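
To make the slice tests in the evaluation checklist concrete, here is a minimal sketch in Python. It computes accuracy per subgroup and flags weak slices. The records, the "region" slice key, and the 0.85 threshold are all hypothetical examples, not values from the course.

    # Minimal slice-test sketch: accuracy per subgroup, flagging weak slices.
    # The records, the "region" slice key, and the 0.85 threshold are hypothetical.
    from collections import defaultdict

    records = [
        # (slice value, true label, predicted label)
        ("uk", 1, 1), ("uk", 0, 0), ("uk", 1, 1), ("uk", 0, 0),
        ("eu", 1, 0), ("eu", 0, 0), ("eu", 1, 0), ("eu", 0, 0),
    ]

    hits, totals = defaultdict(int), defaultdict(int)
    for slice_value, truth, pred in records:
        totals[slice_value] += 1
        hits[slice_value] += int(truth == pred)

    THRESHOLD = 0.85  # minimum acceptable per-slice accuracy
    for slice_value in sorted(totals):
        accuracy = hits[slice_value] / totals[slice_value]
        flag = "RED FLAG" if accuracy < THRESHOLD else "ok"
        print(f"region={slice_value}: accuracy={accuracy:.2f} ({flag})")

The point of the slice view: the aggregate accuracy here is 0.75, which can look acceptable while one subgroup sits at 0.50.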

🧭

Mapping and evidence

Mapping

How this course stays defensible

This links the four things CPD reviewers care about: what you learn, how you practise, how you are assessed, and what evidence you can show.

Primary anchor standards
  • NIST AI Risk Management Framework (AI RMF 1.0) (NIST)
  • ISO/IEC 23894 (AI risk management) (ISO/IEC)

Foundations: Correct mental models for data, training, evaluation, and common pitfalls.

Evidence artefact: Use case boundaries memo
A one-page note: what the system is allowed to do, what it must not do, and what happens on a bad day.

Intermediate: Scenario-based evaluation and pipeline decisions, including drift and governance basics.

Evidence artefact: Evaluation and red flag checklist
A defensible evaluation plan: metrics, slice tests, leakage checks, and the failure you most want to catch.

Advanced: Agentic systems, safety, monitoring, and governance aligned to NIST AI RMF and ISO/IEC 23894.

Evidence artefact: Monitoring and governance pack
A lightweight governance pack: drift signals, tool use logging, incident triggers, and safe rollback steps.

🧩

Module coverage matrix

Coverage matrix

Module-level coverage

This matrix makes the course defensible: each module is tied to an outcome focus, the anchor standards, and the evidence you can produce.

Artefact templates
Level · Module · Outcome focus · Domains · Alignment · Assessment · Evidence

Foundations · What Is AI (ai-foundations-what-is-ai)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Define AI clearly, separate hype from capability, and choose realistic use cases. · Domains: foundations · Alignment: NIST AI RMF 1.0: Map · Assessment: Practice assessment · Evidence: Template + rubric

Foundations · Data and Representation (ai-foundations-data-and-representation)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Explain how data is represented and why representation choices change model behaviour. · Domains: data · Alignment: NIST AI RMF 1.0: Map · Assessment: Practice assessment · Evidence: Template + rubric

Foundations · Learning Paradigms (ai-foundations-learning-paradigms)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Choose learning paradigms appropriately and understand common failure modes. · Domains: foundations · Alignment: NIST AI RMF 1.0: Map · Assessment: Practice assessment · Evidence: Template + rubric

Foundations · Responsible AI Basics (ai-foundations-responsible-ai-basics)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Identify AI risks early and choose governance defaults that reduce harm. · Domains: governance · Alignment: NIST AI RMF 1.0: Govern · NIST AI RMF 1.0: Manage · Assessment: Practice assessment · Evidence: Template + rubric

Intermediate · Prompts and Patterns (ai-intermediate-prompts-and-patterns)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Use prompting as an interface: test, version, and evaluate behaviour changes. · Domains: prompts · Alignment: NIST AI RMF 1.0: Measure · Assessment: Formative checkpoints · Evidence: Template + rubric

Intermediate · Embeddings and Search (ai-intermediate-embeddings-and-search)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Use embeddings and search to improve retrieval and relevance with evidence. · Domains: rag · Alignment: NIST AI RMF 1.0: Measure · Assessment: Formative checkpoints · Evidence: Template + rubric

Intermediate · RAG with Docs (ai-intermediate-rag-with-docs)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Build RAG systems that ground answers and fail safely when retrieval is weak. · Domains: rag · Alignment: NIST AI RMF 1.0: Manage · NIST AI RMF 1.0: Measure · Assessment: Formative checkpoints · Evidence: Template + rubric

Intermediate · Simple Agents (ai-intermediate-simple-agents)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Introduce agent/tool use safely with scoped permissions, logging, and stop conditions. · Domains: agents · Alignment: NIST AI RMF 1.0: Manage · Assessment: Formative checkpoints · Evidence: Template + rubric

Advanced · Transformers and Agents (ai-advanced-transformers-and-agents)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Understand transformers and agents as systems, not magic, and control failure modes. · Domains: agents · Alignment: NIST AI RMF 1.0: Map · NIST AI RMF 1.0: Manage · Assessment: Formative checkpoints · Evidence: Template + rubric

Advanced · Diffusion and Generation (ai-advanced-diffusion-and-generation)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Reason about generation risks (misinformation, safety) and choose mitigations. · Domains: governance · Alignment: NIST AI RMF 1.0: Manage · Assessment: Formative checkpoints · Evidence: Template + rubric

Advanced · Production and Monitoring (ai-advanced-production-and-monitoring)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Monitor drift and system performance, and define signals that trigger intervention. · Domains: monitoring · Alignment: NIST AI RMF 1.0: Measure · NIST AI RMF 1.0: Manage · Assessment: Formative checkpoints · Evidence: Template + rubric

Advanced · Governance and Strategy (ai-advanced-governance-and-strategy)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Set decision rights, evidence expectations, and review triggers for AI in production. · Domains: governance · Alignment: NIST AI RMF 1.0: Govern · Assessment: Formative checkpoints · Evidence: Template + rubric

Summary · Concepts (ai-summary-concepts)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Consolidate the core mental models and vocabulary across the track. · Domains: foundations · Alignment: NIST AI RMF 1.0: Map · Assessment: Formative checkpoints · Evidence: Template + rubric

Summary · Scenarios (ai-summary-scenarios)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Practise scenario judgement across evaluation, drift, and governance decisions. · Domains: evaluation, governance · Alignment: NIST AI RMF 1.0: Measure · NIST AI RMF 1.0: Manage · Assessment: Formative checkpoints · Evidence: Template + rubric

Summary · Create (ai-summary-create)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Turn learning into an artefact you can defend: plan, evidence, and next steps. · Domains: evidence · Alignment: NIST AI RMF 1.0: Govern · Assessment: Formative checkpoints · Evidence: Template + rubric

Summary · Master (ai-summary-master)
Anchors: NIST AI Risk Management Framework (AI RMF 1.0), ISO/IEC 23894 (AI risk management)
Outcome focus: Identify remaining weak areas and set a revision loop using evidence and monitoring. · Domains: monitoring · Alignment: NIST AI RMF 1.0: Manage · Assessment: Formative checkpoints · Evidence: Template + rubric

📚

How to use this track

  1. Move through the four levels in order, or dip into a section you need for work right now.
  2. Record a few minutes each time you practise so your CPD log stays honest; records stay local to your browser.
  3. Share the labs with friends or a class and compare answers before you peek at mine.
  4. Keep notes on what surprised you. That is where the real learning hides.

⏱️

CPD timing


Time estimate (transparent)

I publish time estimates because CPD needs to be defensible. The goal is honesty, not marketing.

Guided learning: 30h (core levels, structured learning)

Practice and consolidation: 3h (summary, drills, revisits)

Notional range: 20 to 45 hours

  • Quick: core concepts + one exercise per module.
  • Standard: exercises + reflections for CPD evidence.
  • Deep: extra drills and portfolio artefacts.

How I estimate time

I use a notional learning hours approach and keep the assumptions visible. Where modules are content-heavy, I add practice so the hours are earned, not claimed.

  • Reading: 225 words per minute, multiplied by 1.3 for note taking and checking understanding.
  • Labs and practice: about 15 minutes per guided activity, including at least one retry.
  • Reflection for CPD: about 8 minutes per module for a short defensible note and evidence link.
  • Assessments: about 1.4 minutes per question for reading, thinking, and review.

If you study faster or slower, your hours will differ. What matters is that the method is consistent and the activities are real.
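
As a rough illustration of that method, here is a minimal sketch in Python. The per-activity rates come from the list above; the word counts, lab counts, and question counts in the example are hypothetical placeholders, not the real figures behind this track.

    # Minimal sketch of the notional-hours method above.
    # Rates come from the bullets; the example inputs are hypothetical placeholders.
    WORDS_PER_MINUTE = 225        # reading speed
    NOTE_TAKING_FACTOR = 1.3      # note taking and checking understanding
    LAB_MINUTES = 15              # per guided activity, including one retry
    REFLECTION_MINUTES = 8        # per module, for a short CPD note
    MINUTES_PER_QUESTION = 1.4    # reading, thinking, and review

    def notional_minutes(words: int, labs: int, modules: int, questions: int) -> float:
        """Estimated study minutes for a block of content."""
        reading = (words / WORDS_PER_MINUTE) * NOTE_TAKING_FACTOR
        practice = labs * LAB_MINUTES
        reflection = modules * REFLECTION_MINUTES
        assessment = questions * MINUTES_PER_QUESTION
        return reading + practice + reflection + assessment

    # Hypothetical level: 40,000 words, 20 labs, 4 modules, 60 questions.
    minutes = notional_minutes(words=40_000, labs=20, modules=4, questions=60)
    print(f"about {minutes / 60:.1f} notional hours")  # about 10.8 notional hours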

📚

Standards and certifications


The map we anchor to

I map each course to reputable standards so your learning is defensible at work. I also show common certifications and how their language differs.

Important: This content aligns with these standards and certifications for learning purposes. This is guidance, not endorsement. We are not affiliated with certification providers unless explicitly stated.

Primary anchor standards

  • NIST AI Risk Management Framework (AI RMF 1.0)
    NIST

    A practical way to talk about AI risk, governance, and controls across the lifecycle.

    Official reference
  • ISO/IEC 23894 (AI risk management)
    ISO/IEC

    Risk management framing that is recognisable in governance and assurance conversations.

    Official reference

Certification routes

This course is not endorsed by certification bodies. It is built to prepare you honestly, including where exams simplify reality.

  • Microsoft AI-900 and AI-102
    Microsoft
    foundation

    A structured route that helps learners translate concepts into real cloud services and constraints.

  • Cloud AI practitioner tracks (AWS, Google Cloud)
    Major cloud vendors
    practitioner

    Useful if your goal is building and operating AI systems in production rather than only studying theory.

Organisations and resources

These are the kinds of organisations professionals reference. If you learn how to use them properly, you become harder to mislead.

  • NIST

    What it is: A standards body whose AI risk and cybersecurity work is widely referenced.

    Why it matters: It provides shared language that is acceptable in industry and government conversations.

  • ISO/IEC

    What it is: International standards bodies that publish management system and risk standards.

    Why it matters: Useful for governance and assurance framing, especially when an organisation needs audit-friendly controls.

  • Academic research institutions

    What it is: Where many AI methods are first validated, challenged, and refined.

    Why it matters: It keeps the course honest about limits, evidence, and what is still uncertain.

🧪

Assessment blueprint

Assessment and practice assessment

AI assessment blueprint (planned)

The AI assessment system is being upgraded to match the same blueprint discipline as the other tracks. Until then, use the checkpoints and the labs as your practice loop.

Foundations (mixed): Correct mental models for data, training, evaluation, and common pitfalls.

Applied (scenario): Scenario-based evaluation and pipeline decisions, including drift and governance basics.

Modern systems (mixed): Agentic systems, safety, monitoring, and governance aligned to NIST AI RMF and ISO/IEC 23894.

Design rules
  • Every tier must include at least one question that tests system thinking, not only model trivia.
  • Where the learner must make a judgement call, the marking should reward correct assumptions and defensible reasoning.

🧾

Terminology translation


AI risk and evaluation in production

AI can look brilliant in a demo and still be unsafe in a real system. The words below stop you lying to yourself by accident.

Model versus system

Plain English

The model is the maths. The system is everything around it: data pipelines, UI, humans, policies, monitoring, and incentives.

How standards use it

  • NIST AI RMF 1.0

    Risk lives in the socio-technical system, not only the model.

  • ISO/IEC 23894

    Treats risk management as lifecycle control, including context and deployment conditions.

Common mistake

Passing a benchmark and declaring the system safe.

My take

If you only test the model, you are testing the easiest part.

Quick check

Give one example of a model that is fine but a system that is unsafe.

Verification versus validation

Plain English

Verification asks: did we build it right? Validation asks: did we build the right thing?

How standards use it

  • Engineering quality practice

    A common discipline split that stops teams from confusing correctness with usefulness.

Common mistake

Using “validated” to mean “it ran once and did not crash”.

My take

If you cannot state what “right” means, you have neither.

Quick check

What would you verify and what would you validate for a model used in hiring?

Drift

Plain English

The world changes, so the model’s performance changes.

How standards use it

  • Production ML practice

    Monitoring must cover quality and distribution shifts, not only uptime and cost.

Common mistake

Monitoring only uptime and acting surprised when quality collapses.

My take

If a model touches real decisions, you monitor quality like you monitor money, because both will leak.

Quick check

Name one drift signal you would measure for a text classifier.
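
One concrete answer, as a minimal sketch in Python: track the distribution of predicted labels week over week and compute a population stability index (PSI) against a reference week. The label names and counts here are hypothetical, and PSI is one common choice among several drift measures.

    import math

    def psi(reference: dict[str, int], current: dict[str, int], eps: float = 1e-6) -> float:
        """Population stability index between two label-count distributions.
        Rule of thumb: below 0.1 stable, 0.1 to 0.25 drifting, above 0.25 investigate."""
        labels = set(reference) | set(current)
        ref_total = sum(reference.values())
        cur_total = sum(current.values())
        score = 0.0
        for label in labels:
            p = reference.get(label, 0) / ref_total + eps  # reference share
            q = current.get(label, 0) / cur_total + eps    # current share
            score += (q - p) * math.log(q / p)
        return score

    # Hypothetical weekly prediction counts for a text classifier.
    baseline = {"complaint": 400, "query": 500, "praise": 100}
    this_week = {"complaint": 650, "query": 300, "praise": 50}
    print(f"PSI = {psi(baseline, this_week):.3f}")  # about 0.26: investigate

A shift like this says nothing about why quality changed, only that the outputs no longer look like the world the model was evaluated on, which is exactly the trigger for a human look.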

🛠️

Quick practice

Checkpoint

Why keep AI notes plain and compact?

Which three stops does the track cover?

Quick feedback

Optional. This helps improve accuracy and usefulness. No account required.