100% Free with Unlimited Retries

AI Agents Foundations Assessment

Finished the content? Take the assessment for free. Retry as many times as you need. It is timed, properly invigilated, and actually means something.

Sign in to track progress and get your certificate

You can take the assessment without signing in, but your progress will not be tracked and you will not receive a certificate of completion. If you complete the course without signing in, you will need to sign in and complete it again to get your certificate. Sign in now

  • Unlimited retries – take the assessment as many times as you need
  • Free certificate – get a detailed certificate when you pass
  • Donation supported – we run on donations to keep everything free

Everything is free – If you find this useful and can afford to, please consider making a donation to help us keep courses free, update content regularly, and support learners who cannot pay.

Timed assessment · Detailed feedback · No credit card required

CPD timing for this level

Foundations time breakdown

This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.

  • Reading: 28m (4,094 words · base 21m × 1.3)
  • Labs: 0m (0 activities × 15m)
  • Checkpoints: 10m (2 blocks × 5m)
  • Reflection: 40m (5 modules × 8m)

Estimated guided time: 1h 18m, based on page content and disclosed assumptions.

Claimed level hours: 20h. The claim includes reattempts, deeper practice, and capstone work.
The claimed hours are higher than the current on-page estimate by about 19h. That gap is where I will add more guided practice and assessment-grade work so the hours are earned, not declared.

What changes at this level

Level expectations

I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.

Anchor standards (course wide)
OWASP Top 10 for LLM Applications (2025) · OWASP Top 10 for Agentic Applications (2026) · NIST AI Risk Management Framework (AI RMF 1.0) · ISO/IEC 42001
Assessment intent (Foundations): understand the core ReAct, tool, and memory patterns.

Assessment style: mixed format.

Pass standard: coming next.

Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.

CPD tracking

Fixed hours for this level: not specified. Timed assessment time is included once on pass.


Stage 1: Foundations

Welcome to the beginning of your journey into AI agents. I have designed this stage for people who have never written a line of code, never opened a terminal, and think machine learning is something that happens in factories. By the end of these 20 hours, you will have run your first AI model on your own computer and had a conversation with it.

A promise to you

I will never assume you know something I have not taught you. If I use a technical term, I will explain it. If I ask you to do something on your computer, I will show you exactly how. You belong here, regardless of your background.


Module 1.1: Understanding AI (4 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Define artificial intelligence, machine learning, and deep learning
  2. Explain how neural networks process information (at a conceptual level)
  3. Identify AI applications in everyday life
  4. Distinguish between narrow AI, general AI, and superintelligence
  5. Assess claims about AI capabilities critically

1.1.1 What is Artificial Intelligence?

Let me start with what AI is not. AI is not a conscious being. It does not think the way you think. It does not have feelings, desires, or goals of its own. Understanding this from the start will help you build better systems and avoid common mistakes.

The Human Analogy

Think of AI like teaching a child to recognise cats. You do not give the child a rulebook saying "cats have four legs, whiskers, and pointy ears." Instead, you show them thousands of pictures of cats and non-cats until they develop an internal understanding of what makes something a cat.

This is precisely how modern AI works. Instead of programming explicit rules (if whiskers AND four legs AND pointy ears THEN cat), we show the AI millions of examples and let it discover the patterns itself.
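If you are curious what those explicit rules would look like as code, here is a tiny sketch. The function and its inputs are purely illustrative, and deliberately brittle: real-world cases break rules like these quickly, which is why modern AI learns patterns from examples instead. Do not worry about the syntax yet; we cover running Python later in this stage:

# A hypothetical, deliberately brittle rule-based "cat detector".
def is_cat_rule_based(has_whiskers: bool, leg_count: int, has_pointy_ears: bool) -> bool:
    return has_whiskers and leg_count == 4 and has_pointy_ears

print(is_cat_rule_based(True, 4, True))   # True
print(is_cat_rule_based(True, 3, True))   # False -- but a three-legged cat is still a cat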

The Three Layers of AI

Artificial Intelligence

Any system that appears to exhibit intelligence. This includes everything from simple rule-based systems ("if temperature above 25 degrees, turn on air conditioning") to systems that can generate creative content like essays and images.

Machine Learning

A subset of AI where systems improve through experience. Instead of explicit programming, we provide data and the system learns patterns from it.

Deep Learning

A subset of machine learning that uses artificial neural networks with multiple layers. These "deep" networks can learn complex patterns in large datasets.

What AI Can and Cannot Do (As of 2026)

I think it is important to be honest about current capabilities. AI has made remarkable progress, but it also has significant limitations.

AI Capabilities and Limitations

Understanding what is realistic

✅ AI Does Well

  • Pattern recognition in data
  • Language generation and translation
  • Image and speech recognition
  • Game playing and strategy
  • Data analysis and prediction
  • Code generation and debugging

❌ AI Struggles With

  • • True understanding and reasoning
  • • Common sense in novel situations
  • • Physical world interaction
  • • Emotional intelligence
  • • Original creativity (truly new ideas)
  • • Explaining its own reasoning

1.1.2 Key Terminology

Before we go further, let me define some terms you will encounter throughout this course.

Model

The trained AI system itself. Think of it as a brain that has learned from examples. When you use ChatGPT or Claude, you are interacting with a model.

Training

The process of teaching an AI using examples. During training, the model adjusts millions (or billions) of internal numbers to get better at its task.

Inference

When the AI uses what it learned to respond to new inputs. This is what happens when you ask ChatGPT a question. The training is already done. The model is now making inferences.

Parameters

The internal "knowledge" of a model, stored as numbers. GPT-4 is reported to have hundreds of billions of parameters. These numbers collectively encode everything the model knows.

Context Window

How much information the AI can consider at once. If a model has a 100,000 token context window, it can "see" roughly 75,000 words of text at a time.

Token

A piece of text, roughly 4 characters or 3/4 of a word on average. AI models process text as tokens, not as letters or words.
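If you want a feel for the arithmetic, here is a minimal sketch applying those rules of thumb. The function name is illustrative, and real tokenisers differ by model, so treat this only as a rough planning aid:

# Back-of-the-envelope token estimate: ~4 characters or ~3/4 of a word per token.
def estimate_tokens(text: str) -> int:
    by_characters = len(text) / 4          # ~4 characters per token
    by_words = len(text.split()) / 0.75    # ~0.75 words per token
    return round((by_characters + by_words) / 2)

print(estimate_tokens("AI models process text as tokens, not as letters or words."))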

Hallucination

When AI generates plausible-sounding but incorrect information. The model is not lying. It genuinely does not know the difference between what it invented and what is true.


1.1.3 Common Misconceptions

I want to address some things you may have heard about AI that are not quite accurate.

AI understands what it reads

Reality: AI identifies patterns in text. It processes language mathematically, not semantically. When you ask an AI about a book, it is not recalling the experience of reading. It is predicting what text should come next based on patterns.

AI is conscious or sentient

Reality: AI is sophisticated pattern matching. It has no subjective experience, desires, or consciousness. It may simulate emotions in text, but it does not feel them.

AI will replace all human jobs

Reality: AI augments human capabilities. Tasks, not entire jobs, get automated. New roles emerge. The Industrial Revolution did not eliminate work. It transformed it. AI will do the same.

More parameters equals better AI

Reality: Architecture, training data quality, and fine-tuning matter more than raw size. A well-trained smaller model can outperform a poorly trained larger one on specific tasks.


Module 1.2: From LLMs to Agents (4 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Explain what Large Language Models are and how they work
  2. Distinguish between chatbots, LLMs, and AI agents
  3. Understand why agents represent an evolution beyond chatbots
  4. Identify use cases where agents are more appropriate than simple LLMs

1.2.1 Understanding Large Language Models

A Large Language Model (LLM) is like an incredibly well-read assistant who has consumed most of human knowledge available on the internet. When you ask it a question, it does not "look up" the answer. It predicts what text should come next based on patterns learned during training.

How LLMs Process Your Questions

Key LLM Characteristics:

  • Stateless: Each conversation starts fresh unless you provide the history
  • Reactive: They respond to prompts but do not initiate action
  • Knowledge-bounded: Limited to what was in their training data
  • Single-turn focus: Optimised for question-answer exchanges
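To make the "stateless" point concrete, here is a small sketch that fakes memory by resending the whole conversation each time. It assumes the local Ollama setup we build in Modules 1.4 and 1.5 (the server running and llama3.2:3b pulled), so feel free to come back to it later:

# Because the model is stateless, "memory" is just resending the conversation so far.
import requests

def ask(prompt: str) -> str:
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
    )
    return response.json().get("response", "")

history = ""                                    # the conversation so far, as plain text
for user_turn in ["My name is Sam.", "What is my name?"]:
    history += f"User: {user_turn}\nAssistant: "
    reply = ask(history)                         # the model only "knows" what we resend
    history += reply + "\n"
    print(f"User: {user_turn}\nAssistant: {reply.strip()}\n")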

1.2.2 What Makes an AI Agent Different?

Here is where things get interesting. An AI agent is not just a chatbot that can answer questions. It is a system that can:

  1. Perceive: Sense and understand its environment
  2. Reason: Plan and make decisions
  3. Act: Execute tasks in the real world

The AI Agent Loop

LLM vs Agent comparison

You ask: "What meetings do I have tomorrow?"

LLM Response: "I don't have access to your calendar. You would need to check your calendar application directly."

Agent Response: The agent checks your Google Calendar API, finds you have a 10am standup and a 2pm client call, and tells you: "Tomorrow you have two meetings: a standup at 10am and a client call at 2pm. Would you like me to prepare any materials for either?"

The difference is action. The agent does not just tell you it cannot help. It goes and gets the information.

🎯 Interactive: Explore the Differences

Use this interactive tool to explore the fundamental differences between LLMs, chatbots, and AI agents. Click on each concept to compare their capabilities, test your understanding with scenarios, and see concrete examples.

LLM vs Chatbot vs Agent

Explore the architectural and capability differences between these three AI system types through interactive visualisation.


1.2.3 The Anatomy of an AI Agent

Every AI agent has four core components:

Core Agent Components

The building blocks of every AI agent

🧠 Brain (LLM)

The reasoning engine. Processes inputs, makes decisions, and generates responses. This is usually a Large Language Model like GPT-4, Claude, or Llama.

🔧 Tools

Capabilities the agent can use. Email sending, web browsing, file reading, code execution, API calls, database queries, and more.

💾 Memory

What the agent remembers. Short-term memory for the current conversation. Long-term memory for facts that persist across sessions.

📋 Planning

How the agent decides what to do. Breaking complex tasks into steps, choosing which tools to use, and determining when the goal is achieved.
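As a rough mental model, here is a bare-bones sketch of those four components in Python. The names and structure are purely illustrative; real agent frameworks use richer abstractions:

# Sketch: the four components as a minimal Python structure (illustrative only).
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    brain: str                                                 # which LLM does the reasoning
    tools: dict[str, Callable] = field(default_factory=dict)   # name -> capability
    memory: list[str] = field(default_factory=list)            # facts and conversation so far
    plan: list[str] = field(default_factory=list)              # remaining steps toward the goal

calendar_agent = Agent(
    brain="llama3.2:3b",
    tools={"check_calendar": lambda day: ["10am standup", "2pm client call"]},
)
print(calendar_agent.tools["check_calendar"]("tomorrow"))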

🎯 Interactive: ReAct Pattern in Action

The ReAct pattern is fundamental to how modern AI agents work. This simulator shows you, step by step, how an agent thinks through a problem.

ReAct Pattern Simulator

Based on "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022)

Interactive demonstration of how AI agents interleave thinking with tool execution


Example scenario (user query):

"Calculate the Value at Risk (VaR) for our portfolio given last month's returns: [2.3%, -1.5%, 0.8%, -3.2%, 1.1%] at a 95% confidence level."


💡 Understanding the ReAct Pattern

The ReAct pattern interleaves reasoning (Thought) with actions (Tool calls), allowing agents to dynamically adjust their approach based on observations.

Thought: the agent reasons about the task, formulates plans, and decides the next step.

Action: the agent selects and invokes a tool with specific parameters.

Observation: the agent receives and processes the tool's response.

Answer: the agent synthesises its findings into a final response.

📐 Core Decision Function (Simplified)

P(action | state) = softmax(W × [thought_embedding, context])

Where:
  softmax(x_i) = exp(x_i) / Σ exp(x_j)  // Normalises to probability
  W = learned weight matrix
  
Decision threshold: P > 0.85 → execute action
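If you would like to see the shape of that loop in code, here is a minimal, hand-scripted sketch. The thoughts are hard-coded and the tool is a toy; in a real agent the LLM writes each Thought and chooses which Action to take:

# Hand-scripted walk through one ReAct cycle (illustrative only).
def calculate_average(numbers: list[float]) -> float:
    """A toy tool the agent can call."""
    return sum(numbers) / len(numbers)

TOOLS = {"calculate_average": calculate_average}

returns = [2.3, -1.5, 0.8, -3.2, 1.1]   # last month's returns, in percent

print("Thought: I need the average of last month's returns before I can assess risk.")
print(f"Action: calculate_average({returns})")

observation = TOOLS["calculate_average"](returns)
print(f"Observation: {observation:.2f}%")

print(f"Answer: The average monthly return was {observation:.2f}%.")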

Module 1.3: Your Computer's Command Line (4 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Open a terminal on Windows or macOS
  2. Navigate the file system using basic commands
  3. Run Python scripts from the command line
  4. Understand what environment variables are

1.3.1 Why Learn the Command Line?

I know the command line can look intimidating. A black screen with blinking text feels like something from a 1980s film. But here is the truth: the command line is the most powerful way to interact with your computer, and almost all AI development happens here.

A word of encouragement

Everyone who is good at the command line was once staring at a blank terminal wondering what to type. The difference is they kept trying. You will make mistakes. You will type commands that do nothing or throw errors. That is normal. That is learning.


1.3.2 Opening Your Terminal

On macOS:

  1. Press Cmd + Space to open Spotlight
  2. Type "Terminal"
  3. Press Enter

You will see a window with text ending in a $ or % symbol. This is called the prompt. It is waiting for your command.

On Windows:

  1. Press Win + X
  2. Select "Windows Terminal" or "PowerShell"

You will see a window with text ending in >. This is your prompt.


1.3.3 Essential Commands

Here are the commands you will use most often. I am giving you both macOS/Linux and Windows versions.

Seeing where you are:

# macOS/Linux
pwd

# Windows (PowerShell)
pwd   # in the older cmd shell, use: cd

This shows your current directory (folder). When you first open the terminal, you are usually in your home directory.

Listing files:

# macOS/Linux
ls

# Windows
dir

This shows all files and folders in your current location.

Changing directories:

# Both platforms
cd Documents

# Go up one level
cd ..

# Go to home directory
cd ~

Creating a folder:

# macOS/Linux
mkdir my-ai-project

# Windows
mkdir my-ai-project

Creating a file:

# macOS/Linux
touch hello.py

# Windows (PowerShell)
New-Item hello.py -ItemType File   # in the older cmd shell: echo. > hello.py

Case sensitivity

On macOS and Linux, file names are case-sensitive. Hello.py and hello.py are different files. On Windows, they are the same. This catches people out more than you might expect.


Module 1.4: Setting Up Your Environment (4 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Install Python on your computer
  2. Create and activate virtual environments
  3. Install Ollama for local AI model inference
  4. Set up VS Code for AI development

1.4.1 Installing Python

Python is the language of AI development. Almost every AI framework, library, and tool is written in or has bindings for Python. Let us install it.

On macOS:

# Check if Python is already installed
python3 --version

# If not, install Homebrew first, then Python
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
brew install python@3.12

On Windows:

  1. Go to python.org/downloads
  2. Download Python 3.12 or later
  3. Run the installer
  4. Important: Check "Add Python to PATH" before clicking Install

Verify the installation:

python3 --version    # on Windows, use: python --version
# Should output something like: Python 3.12.x

1.4.2 Virtual Environments

A virtual environment is an isolated space for your project's dependencies. Without it, different projects might conflict with each other.

Creating a virtual environment:

# Navigate to your project folder
cd my-ai-project

# Create the environment
python3 -m venv venv

# Activate it (macOS/Linux)
source venv/bin/activate

# Activate it (Windows)
.\venv\Scripts\activate

When activated, you will see (venv) at the start of your prompt. This means you are inside the virtual environment.

Always activate

Before working on your project, always activate the virtual environment. It is easy to forget, and then you will install packages in the wrong place.


1.4.3 Installing Ollama

Ollama lets you run powerful AI models locally on your own computer. No API keys needed. No data leaving your machine. Free and private.

On macOS:

# Using Homebrew
brew install ollama

# Or download from ollama.com

On Windows:

  1. Go to ollama.com/download
  2. Download and run the Windows installer
  3. Follow the setup wizard

Verify the installation:

ollama --version

Download your first model:

# Start the Ollama server (runs in background)
ollama serve

# In a new terminal, download a model
ollama pull llama3.2:3b

This downloads the Llama 3.2 model with 3 billion parameters. It is about 2GB and runs well on most modern computers.


Module 1.5: Your First AI Interaction (4 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Run a local LLM using Ollama
  2. Have a conversation with an AI model
  3. Write and run a Python script that talks to an LLM
  4. Understand what prompts are and how to craft them

1.5.1 Chatting with Ollama

Let us have your first conversation with a local AI model.

Start a chat:

ollama run llama3.2:3b

You will see a prompt like >>>. Type your message and press Enter.

>>> Hello! Can you explain what you are?

The model will respond. Try a few more questions:

  • "What is the capital of France?"
  • "Write a haiku about programming."
  • "Explain machine learning to a 10-year-old."

To exit, type /bye or press Ctrl+D.


1.5.2 Your First Python Script

Now let us write a Python script that talks to Ollama. This is your first step toward building AI agents.

Create the file:

cd my-ai-project
touch first_llm.py

Write the code:

"""
My First LLM Interaction
=========================
A simple script that sends a prompt to a local LLM
and prints the response.

Author: [Your Name]
Date: January 2026
"""

import requests


def ask_llm(prompt: str, model: str = "llama3.2:3b") -> str:
    """
    Send a prompt to the local Ollama server and get a response.
    
    Args:
        prompt: The question or instruction to send
        model: Which model to use (default: llama3.2:3b)
        
    Returns:
        The model's response as a string
    """
    response = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": prompt,
            "stream": False
        }
    )
    
    if response.status_code == 200:
        return response.json().get("response", "No response received")
    else:
        return f"Error: {response.status_code}"


def main():
    """Main function to demonstrate LLM interaction."""
    print("Welcome to your first LLM interaction!")
    print("=" * 50)
    print()
    
    # Ask the model a question
    question = "What are three interesting facts about the moon?"
    
    print(f"Question: {question}")
    print()
    print("Thinking...")
    print()
    
    answer = ask_llm(question)
    
    print("Answer:")
    print(answer)


if __name__ == "__main__":
    main()

Install the required package:

pip install requests

Run the script:

python first_llm.py

Congratulations! You have just written a program that talks to an AI. This is the foundation of everything we will build.


1.5.3 Understanding Prompts

A prompt is the text you send to an AI model. Crafting good prompts is a skill that takes practice. Here are some principles:

Prompt Engineering Basics

How to communicate effectively with LLMs

1. Be Specific

Bad: "Tell me about Python"

Good: "Explain how Python's list comprehensions work with two examples"

2. Provide Context

Bad: "How do I fix this error?"

Good: "I'm running Python 3.12 on macOS and getting 'ModuleNotFoundError'. Here is my code: ..."

3. Specify Format

Bad: "Give me some ideas"

Good: "Give me 5 project ideas, each with a one-sentence description and difficulty rating"

4. Set Role

Bad: "Explain databases"

Good: "You are a patient teacher. Explain databases to someone who has never used a computer before"

🎯 Interactive: Build Your Own Tool Schema

AI agents use tools by calling functions with specific parameters. Understanding how to design clear tool schemas is essential for building reliable agents. Use this interactive builder to explore and create tool definitions.
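Before you try the builder, here is a sketch of what a tool definition usually looks like, written as a Python dictionary in the JSON-Schema style most agent frameworks use. The tool name, fields, and values are illustrative, not any particular vendor's required format:

# Sketch: an illustrative tool definition for a hypothetical get_weather tool.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'London'"},
            "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

print(get_weather_tool["name"], "->", sorted(get_weather_tool["parameters"]["properties"]))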

Select an example tool to analyse:

Prompt Engineering Best Practices

✅ Provide specific context: include relevant details about your situation, environment, and what you have already tried. Example: "I am using Python 3.12 on macOS with VS Code. I installed pandas using pip but get ImportError when importing."

❌ Avoid vague requests.

⭐ Set role and format: tell the AI what role to adopt and exactly how you want the response structured. Example: "You are a Python tutor. Explain list comprehensions in 3 steps with examples for a beginner. Format as a numbered list."

Stage 1 Assessment

Module 1.1-1.2: AI Fundamentals Quiz

What is the relationship between AI, Machine Learning, and Deep Learning?

What is a hallucination in the context of AI?

What is the main difference between an LLM and an AI Agent?

What is a token in the context of LLMs?

Which is NOT a core component of an AI Agent?

Module 1.3-1.5: Practical Skills Quiz

What command shows your current directory on macOS?

Why should you use virtual environments for Python projects?

What does Ollama allow you to do?

Which is an example of a GOOD prompt?

What port does the Ollama server typically run on?


Summary

In this stage, you have learned:

  1. What AI really is: Pattern matching at scale, not consciousness or understanding

  2. The difference between LLMs and Agents: Agents can perceive, reason, and act. LLMs can only generate text.

  3. Command line basics: Navigating your file system and running commands

  4. Environment setup: Python, virtual environments, and Ollama

  5. Your first AI interaction: Talking to a local model and writing Python code to do it programmatically

You are ready

You now have all the foundational knowledge and tools to start building AI agents. In Stage 2, we will dive deeper into how agents actually think and make decisions.

Ready to test your knowledge?

AI Agents Foundations Assessment

Validate your learning with practice questions and earn a certificate to evidence your CPD. Try three preview questions below, then take the full assessment.

50+ questions · 45 minutes · PDF certificate

Everything is free with unlimited retries

  • Take the full assessment completely free, as many times as you need
  • Detailed feedback on every question explaining why answers are correct or incorrect
  • Free downloadable PDF certificate with details of what you learned and hours completed
  • Personalised recommendations based on topics you found challenging

Sign in to get tracking and your certificate

You can complete this course without signing in, but your progress will not be saved and you will not receive a certificate. If you complete the course without signing in, you will need to sign in and complete it again to get your certificate.

We run on donations. Everything here is free because we believe education should be accessible to everyone. If you have found this useful and can afford to, please consider making a donation to help us keep courses free, update content regularly, and support learners who cannot pay. Your support makes a real difference.

During timed assessments, copy actions are restricted and AI assistance is paused to ensure fair evaluation. Your certificate will include a verification URL that employers can use to confirm authenticity.

Course materials are protected by intellectual property rights. View terms.

