What this course is (and what it is not)
I treat AI as applied statistics plus engineering. That means we care about vocabulary, data, evaluation, and operational behaviour, not only impressive demos.
This course guides you from basics to advanced topics using clear definitions (including acronyms) and examples from sectors like energy. You will learn classic AI ideas (search, problem solving, and knowledge representation) and modern AI (machine learning, deep learning, transformers, generative AI, NLP, and computer vision). You will also learn the part that makes systems trustworthy: ethics, governance, standards, and cybersecurity.
This course prefers accuracy over theatrics. If a topic is fast moving, I will say what is stable, what is contested, and what you should verify in your own context.
The learning path (Foundations → Intermediate → Advanced)
AI as a system, not a model
Models sit inside products with data, boundaries, and monitoring.
- Data: collection, labels, noise, leakage.
- Models: training, evaluation, generalisation.
- Systems: deployment, retrieval, latency, cost.
- Trust: safety, security, governance, evidence.
My rule: if you cannot explain what happens on the model’s bad day, you are not ready to automate anything important.
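To make that rule concrete, here is a minimal sketch of a model call wrapped in system concerns. It is illustrative only: `model.predict`, the feature names, and the 0.6 confidence threshold are all assumptions, not course code or a real API.

```python
from dataclasses import dataclass
import logging

logger = logging.getLogger("ai_system")

@dataclass
class Prediction:
    label: str
    confidence: float
    fallback_used: bool = False

def predict_with_guardrails(model, features: dict) -> Prediction:
    """Wrap a bare model call with the data/systems/trust concerns around it."""
    # Data: reject inputs the model was never trained to handle.
    # (Feature names are made up for illustration.)
    required = ("temperature", "load_mw")
    if any(k not in features for k in required):
        raise ValueError(f"missing required features: {required}")

    try:
        # Hypothetical interface: returns (label, confidence).
        label, confidence = model.predict(features)
    except Exception:
        # Trust: a planned bad day beats an unplanned one.
        logger.exception("model call failed; using fallback")
        return Prediction("defer_to_human", 0.0, fallback_used=True)

    # Systems: low-confidence outputs are routed, not silently returned.
    if confidence < 0.6:  # assumed threshold; set yours from evaluation data
        return Prediction("defer_to_human", confidence, fallback_used=True)

    logger.info("prediction=%s confidence=%.2f", label, confidence)
    return Prediction(label, confidence)
```

The point is structural: the interesting decisions (what counts as a valid input, where low confidence goes) live outside the model.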
Modules you will cover (the comprehensive map)
This is the big map, so you can see where everything fits:
- Module 1. Introduction to AI: what AI is, what ML is, narrow vs general AI, and how to talk about models without mysticism.
- Module 2. History of AI: why the field cycles between hype and disappointment, and what changed with data, compute, and scale.
- Module 3. Search and problem solving: BFS (breadth-first search), DFS (depth-first search), A*, and adversarial search. The point is not trivia. The point is how to frame problems; see the short sketch after this list.
- Module 4. Knowledge representation and agents: rules, logic, ontologies, agents, and why uncertainty matters.
- Module 5. Supervised and unsupervised learning: classification, regression, clustering, leakage, and evaluation.
- Module 6. Reinforcement learning: reward, policies, and what goes wrong in practice.
- Module 7. Neural networks and deep learning: gradients, overfitting, CNNs, sequence models, and practical training instincts.
- Module 8. Attention and transformers: why attention changed NLP, and how to reason about capabilities and limits.
- Module 9. Generative AI: language models, diffusion, and the kinds of errors you should expect.
- Module 10. NLP: embeddings, tokenisation, retrieval, and evaluation.
- Module 11. Computer vision: classification, detection, segmentation, and operational pitfalls.
- Module 12. Ethics, bias, and fairness: definitions, measurement, and what “responsible” means when trade-offs conflict.
- Module 13. Interpretability: when you need explanations, what you can and cannot claim, and how to audit models.
- Module 14. Standards, regulation, and governance: how to build documentation and controls that survive scrutiny.
- Module 15. Cybersecurity for AI: adversarial manipulation, data poisoning, model theft, and pipeline security.
- Module 16. Case studies: especially energy, but also healthcare, finance, and the public sector.
- Module 17. Engineering and architecture for AI (MLOps): versioning, evaluation gates, deployment patterns, and monitoring.
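As a preview of the Module 3 framing, here is a minimal breadth-first search. The grid world and function names are illustrative, not course code; the takeaway is that once a problem is phrased as states, moves, and a goal test, a generic solver does the rest.

```python
from collections import deque

def bfs(start, goal, neighbours):
    """Breadth-first search: explore states level by level.

    `neighbours` maps a state to its successor states; that
    function is where the problem framing lives.
    """
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path  # shortest path in number of moves
        for nxt in neighbours(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal unreachable from start

def grid_moves(state):
    # Toy 4x4 grid: step one unit in x or y, staying in bounds.
    x, y = state
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in candidates if 0 <= nx <= 3 and 0 <= ny <= 3]

print(bfs((0, 0), (3, 3), grid_moves))  # a 7-state path from corner to corner
```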
Interactive practice (what you will actually do)
You will not only read. You will practice with tools that make mistakes visible.
Safety, ethics, and governance (built in, not bolted on)
AI systems fail in ways traditional software does not. That changes the control strategy.
- Privacy: treat personal data as toxic until proven safe.
- Integrity: validate inputs, version data and models, and log decisions.
- Availability: protect budgets and degrade safely under load.
Safety first
In practice this aligns with the CIA triad: confidentiality (handling of personally identifiable information, PII), integrity (validation and tamper evidence), and availability (rate limiting and graceful degradation), as the sketch below shows.
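Here is a minimal sketch of one concrete check per property. The regex and the limiter are deliberately naive stand-ins to show the shape of each control, not production-grade implementations.

```python
import hashlib
import re
import time

# Confidentiality: redact obvious PII before logging or storage.
# (A naive email regex; real systems need a proper PII pipeline.)
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> str:
    return EMAIL_RE.sub("[REDACTED_EMAIL]", text)

# Integrity: fingerprint the model artefact so tampering is evident.
def artefact_fingerprint(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Availability: a crude token-bucket rate limiter, so one caller cannot
# exhaust the budget; reject politely instead of falling over.
class RateLimiter:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```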
What you will produce (portfolio artefacts)
- A one-page AI system sketch showing data flow, model boundary, and safety checks.
- An evaluation plan with metrics, thresholds, and what you will do when results degrade.
- A short risk note covering bias, misuse, and security threats for one scenario.
- A monitoring checklist for drift, quality, and cost; a minimal drift check is sketched below.
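To give one of these artefacts a concrete shape, here is a minimal drift check, sketched as the Population Stability Index (PSI). The thresholds in the docstring are a common rule of thumb, not a standard.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray,
                               bins: int = 10) -> float:
    """PSI: a simple, widely used score for distribution drift.

    Bins are fixed from the reference (training-time) data, then the live
    distribution is compared bin by bin. Rule of thumb, not a standard:
    < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    live_counts, _ = np.histogram(live, bins=edges)
    # Clip empty bins to avoid division by zero in the log term.
    ref_pct = np.clip(ref_counts / ref_counts.sum(), 1e-6, None)
    live_pct = np.clip(live_counts / live_counts.sum(), 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

# Example: a shifted feature scores noticeably higher than sampling noise.
rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)
print(population_stability_index(baseline, rng.normal(0.0, 1.0, 10_000)))  # near 0
print(population_stability_index(baseline, rng.normal(0.5, 1.0, 10_000)))  # clearly larger
```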
