Foundations · Module 1

What AI is and why it matters now

AI is a way of learning patterns from data so a system can make predictions, rank options, or automate decisions.


Why this matters

AI systems increasingly sit inside decisions that used to be manual. Understanding what a model actually does, and what it cannot do, is the difference between using these tools well and trusting them blindly.

What you will be able to do

  • Explain what AI is and why it matters now, in your own words, and apply it to a realistic scenario.
  • Describe a model as one component inside a system, with decisions, guardrails, and feedback living outside it.
  • Check the assumption "the model is not the product" and explain what changes if it is false.
  • Check the assumption "there is a fallback path" and explain what changes if it is false.

Before you begin

  • No previous technical background required
  • Read the section explanation before using tools

Common ways people get this wrong

  • Fluent but wrong outputs. A confident tone can hide weak evidence. The system must be able to say what it used and what it does not know.
  • Automation bias. People trust outputs because they are fast and formatted. The system should make uncertainty visible and keep humans in control where needed.
  • Silent drift. Inputs change, users change, and the world changes. Without monitoring, quality degrades quietly until the damage is obvious.

Main idea at a glance

Rules-based software vs model-based systems

Rules are explicit. Models are learned from data.


Rules-based approach

I write conditions explicitly. If this, then that. It is predictable, explainable, and works well for simple cases.

I think rules are useful when the logic is well understood and stable. But they do not adapt.
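The rules-based side is easy to picture in code. Here is a minimal sketch; the keyword list and link threshold are illustrative, not taken from any real filter:

```python
# A rules-based check: every condition is written by hand.
# Keywords and threshold are illustrative, not from a real product.
SPAM_KEYWORDS = {"winner", "free", "claim"}

def is_spam_by_rules(subject: str, link_count: int) -> bool:
    """Explicit, predictable, easy to explain -- and it never adapts."""
    words = set(subject.lower().split())
    return bool(words & SPAM_KEYWORDS) or link_count > 5
```

Every behaviour here is traceable to a line someone wrote, which is exactly why rules suit stable, well-understood logic.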

When people say AI, they often mean a system that takes input, applies a learned pattern, and produces an output. The learned pattern is the model. The act of learning that pattern is training. Using the trained model to produce results is inference.

The difference between training and inference matters because the risks are different. Training is where you bake in assumptions from the dataset. Inference is where the model meets reality. If reality changes, the model can behave badly even if training looked perfect.
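One way to see the split in code is a toy nearest-mean classifier over a single made-up numeric feature. `train` fits the pattern; `infer` only applies it:

```python
def train(examples):
    """Training: fit the pattern from labelled data.
    Here the whole 'model' is just two class means."""
    pos = [x for x, label in examples if label]
    neg = [x for x, label in examples if not label]
    return sum(pos) / len(pos), sum(neg) / len(neg)

def infer(model, x):
    """Inference: apply the frozen pattern to a new input.
    No learning happens here -- and no adapting, either."""
    pos_mean, neg_mean = model
    return abs(x - pos_mean) < abs(x - neg_mean)

# Toy data: (feature value, label). The numbers are invented.
model = train([(10, True), (12, True), (2, False), (4, False)])
```

If reality shifts, say new positive cases start arriving near the old negative mean, `infer` keeps applying the stale pattern and degrades silently.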

AI is powerful because it can learn patterns too complex to write as hand made rules. It is also fragile because it can learn the wrong pattern. A model can look clever while failing quietly. The skill is to ask what it is really using as evidence.

AI matters now because systems touch decisions that used to be manual. Hiring screens, fraud checks, support routing, and medical triage all use models to move faster. That speed is useful, but it can also amplify mistakes at scale. This is why foundations matter. I need to know what the model is doing before I trust it.

Imagine a support team that uses an AI model to route urgent messages. If the model learns that certain phrases usually mean “urgent”, it may quietly miss urgent messages written in a different style. The risk is not only wrong answers. The risk is wrong priorities at speed.

In practice, the first useful question is “What happens on the model’s bad day?” If the answer is “we do not know”, the system is not ready to be trusted for anything high impact.

My calm view on hype is this. AI is not magic. It is applied statistics plus engineering. Modern systems can already draft text, generate code, search, summarise, and call tools in ways that feel impressive. They also still fail quietly, overfit to shortcuts, and sound more certain than the evidence deserves. If you learn the basics, you can use the tools well without borrowing the vendor marketing.

Worked example. A spam filter that looks good on paper and fails in real life


Let’s build a mental model you can reuse. Imagine we want to classify emails as spam or not spam. We choose features that feel sensible: subject length, number of links, the sender domain, a few keywords. We train, it looks great, and everyone celebrates.

Then it ships. A week later, someone complains that customer invoices are being flagged as spam. Not because the model is “stupid”, but because it learned a shortcut. In training, spam often had many links, and invoices also have many links. So the model did what you rewarded it for: it used link count as a strong signal.

The lesson: the model does not understand what spam is. It understands what was correlated with the label in your training set. If your training set had different kinds of invoices, or if you measured the wrong thing, the model will happily optimise the wrong target.
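The shortcut is easy to reproduce with made-up numbers. In this synthetic training set, link count alone separates the labels, so the simplest possible "model" latches onto it:

```python
# Synthetic training set: (link_count, is_spam).
# In this data spam always has many links, so link count
# looks like a perfect signal -- that is the trap.
train_set = [(0, False), (1, False), (2, False),
             (7, True), (8, True), (9, True)]

# "Training": the learned pattern is just the highest
# link count seen on any non-spam example.
threshold = max(links for links, is_spam in train_set if not is_spam)

def looks_like_spam(link_count: int) -> bool:
    return link_count > threshold

# A legitimate customer invoice with six links inherits the shortcut.
flagged = looks_like_spam(6)  # True -- wrong, but exactly what the
                              # training data rewarded.
```

Nothing here is broken in a debugging sense. The model optimised the target it was given; the target was the problem.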

Common mistakes I see (and what I do instead)

Mistake: worshipping accuracy
High accuracy can hide the mistakes you actually care about. I always ask what the false positives and false negatives cost in real life.
Mistake: no bad-day plan
If the model is wrong, who notices, who decides, and how do we roll back? If you cannot answer that, it is not ready for anything important.
Mistake: treating the model output as the decision
A model output is usually one input into a decision. The system still needs thresholds, escalation paths, and evidence.
Mistake: pretending errors are symmetric
False positives and false negatives often cause different harms. I pick metrics and thresholds based on the harm profile, not what looks clean on a chart.
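To make the asymmetry concrete, here is a sketch with invented error counts and costs. Two systems with identical accuracy can carry very different harm:

```python
def total_harm(fp, fn, cost_fp, cost_fn):
    """Harm depends on how many of each error you make AND what
    each one costs -- not on a single accuracy number."""
    return fp * cost_fp + fn * cost_fn

# Both systems make 45 errors on the same volume (same accuracy),
# but a miss (false negative) is assumed 50x worse than a false alarm.
mostly_false_alarms = total_harm(fp=40, fn=5, cost_fp=1.0, cost_fn=50.0)
mostly_misses = total_harm(fp=5, fn=40, cost_fp=1.0, cost_fn=50.0)
# mostly_false_alarms -> 290.0, mostly_misses -> 2005.0
```

Same error count, roughly sevenfold difference in harm. That gap is invisible on an accuracy chart.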

Verification. Can you explain the system, not just the model?

Verification checklist before you trust this model

Keep the explanation concrete so a non-technical reviewer can audit your decision.

  1. Define the input, output, and harm cost in one sentence each

    If you cannot explain these clearly, your system boundary is still unclear.

  2. List at least two bad-day inputs

    Name realistic edge cases your training data probably under-represents.

  3. Set a low-confidence behaviour now

    Choose escalation, additional data request, or safe refusal before release.
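Choosing the low-confidence behaviour up front can be as simple as a routing function. The thresholds below are placeholders; set real ones from your harm profile:

```python
def route(confidence: float,
          auto_threshold: float = 0.90,
          review_threshold: float = 0.60) -> str:
    """Map model confidence to a behaviour decided before release."""
    if confidence >= auto_threshold:
        return "auto-decide"
    if confidence >= review_threshold:
        return "human review"
    return "safe refusal"  # or: request additional data
```

The point is not the specific numbers; it is that the behaviour exists, is explicit, and was chosen before anything shipped.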

After this section you should be able to

Section outcomes

  1. Explain what a model is and why it exists in a system

    Describe model output as one input into a wider operational decision.

  2. Explain what breaks when training data and real inputs diverge

    Spot distribution and context mismatch before users feel the failure.

  3. Explain the trade-off between automation speed and human judgement

    Choose escalation points for high-impact and uncertain decisions.

CPD evidence prompt (copy friendly)

If you are logging CPD, keep the entry short and specific. If you can attach an artefact, do it.

CPD note template

What I studied
Core AI vocabulary, the difference between training and inference, and why AI systems fail when data and reality diverge.
What I practised
I used at least one practice activity to inspect inputs and outputs and wrote down one failure mode I could now recognise earlier.
What changed in my practice
I now ask what happens on the model’s bad day before I trust any automated decision.
Evidence artefact
A one-page note describing the system boundary, the decision, the main failure costs, and the fallback path.

Mental model

AI system boundary

A model is one component inside a system. Decisions, guardrails, and feedback live outside the model.

  1. User
  2. Input
  3. Model
  4. Policy and guardrails
  5. Outcome
  6. Monitoring and review
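The same boundary can be sketched as a pipeline. The function names are hypothetical and the scoring heuristic is a stand-in for a real model; everything around that stub is the system:

```python
import logging

log = logging.getLogger("ai_system")

def validate(text: str) -> str:
    """Input checks live outside the model."""
    cleaned = text.strip()
    if not cleaned:
        raise ValueError("empty input")
    return cleaned

def model_score(text: str) -> float:
    """Stand-in for the model: a trivial heuristic, not a real one."""
    return 0.9 if "refund" in text.lower() else 0.2

def apply_policy(score: float) -> str:
    """Policy and guardrails turn a raw score into an outcome."""
    return "escalate to human" if score >= 0.8 else "standard queue"

def handle(message: str) -> str:
    score = model_score(validate(message))
    outcome = apply_policy(score)
    log.info("score=%.2f outcome=%s", score, outcome)  # monitoring hook
    return outcome
```

Swap the stub for a real model and the shape stays the same: the model produces a score, and the system decides what that score means.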

Assumptions to keep in mind

  • The model is not the product. The product includes input checks, policy, and a way to review outcomes. If you only ship a model, you ship a guess.
  • There is a fallback path. When uncertainty is high or the stakes are high, the system needs a safe next step. That might be a human review, a simpler rule, or a refusal to answer.
  • We can observe outcomes. If you cannot see what the system causes, you cannot improve it. Observability is part of correctness.


Key terms

model
A function learned from data that maps inputs to outputs.
training
The process of fitting a model to data so it can learn patterns.
inference
Running a trained model to produce predictions on new inputs.
dataset
A collection of examples used to train and evaluate a model.

Check yourself

Quick check. What is AI?


Scenario. Someone says 'the AI decided'. What is the most accurate way to describe what a model actually does?

It is a learned function that maps inputs to outputs. The decision is a system choice built around that output.

Scenario. A model flags a legitimate invoice email as spam. Give one likely data reason

The training data associated link-heavy emails with spam, so the model learned a shortcut and misclassified invoices that also contain multiple links.

Scenario. You are building a spam filter. What is 'training' in plain terms?

Fitting a model to examples so it learns patterns that map inputs to the label.

Scenario. The spam filter is live and scoring new emails. What is 'inference'?

Using the trained model to make predictions on new inputs.

Scenario. Your model is 98% accurate but still causes harm. How can both be true?

Accuracy can hide who is harmed. If the errors concentrate in a minority group or in high impact cases, the headline number looks good while real outcomes are bad.

Scenario. In most products, what does an AI system actually output into the wider workflow?

A label, score, ranking, or a generated response that the system then uses as one input into a decision.

Why do models need monitoring after launch

Because real inputs change, behaviour drifts, and failure can become silent if nobody measures it.

What is one reason a rule-based system may be preferred?

It is easier to explain and can be safer for simple, high assurance requirements.

What is a practical habit when reading AI claims?

Ask what data it learned from, what it outputs, what the error costs, and what the fallback is.

Scenario. A model correctly identifies 99% of emails as spam but the 1% error rate includes critical customer messages. Why is accuracy alone insufficient?

Accuracy does not reveal where errors occur. Critical failures can be rare but still unacceptable if they block important communications.

Scenario. Your model performs perfectly in testing but fails on real user inputs. What is one likely cause?

The test data may not represent real-world distribution, or training data may be missing important edge cases that appear in production.

Why should you ask 'what happens on the model's bad day' before trusting an AI system

It reveals failure modes and helps you understand when the system should not be trusted, requiring human oversight or fallback procedures.

Artefact and reflection

Artefact

A short module note with one key definition and one practical example

Reflection

Where in your work would being able to explain what AI is and why it matters now, in your own words and applied to a realistic scenario, change a decision, and what evidence would make you trust that change?

Optional practice

Compare examples such as spam filters, recommendation systems, and chat assistants, and note what each one takes as input and gives as output.