100% Free with Unlimited Retries

AI Agents Practical Building Assessment

Finished the content? Take the assessment for free. Retry as many times as you need. It is timed, properly invigilated, and actually means something.

Sign in to track progress and get your certificate

You can take the assessment without signing in, but your progress will not be tracked and you will not receive a certificate of completion. If you complete the course without signing in, you will need to sign in and complete it again to get your certificate. Sign in now

Unlimited retries

Take the assessment as many times as you need

Free certificate

Get a detailed certificate when you pass

Donation supported

We run on donations to keep everything free

Everything is free. If you find this useful and can afford to, please consider making a donation to help us keep courses free, update content regularly, and support learners who cannot pay.

Timed assessment
Detailed feedback
No credit card required

CPD timing for this level

Practical Building time breakdown

This is the first pass of a defensible timing model for this level, based on what is actually on the page: reading, labs, checkpoints, and reflection.

Reading: 26m (3,979 words · base 20m × 1.3)
Labs: 0m (0 activities × 15m)
Checkpoints: 10m (2 blocks × 5m)
Reflection: 40m (5 modules × 8m)

Estimated guided time: 1h 16m, based on page content and disclosed assumptions.
Claimed level hours: 30h. The claim includes reattempts, deeper practice, and capstone work.
The claimed hours are higher than the current on-page estimate by about 29h. That gap is where I will add more guided practice and assessment-grade work so the hours are earned, not declared.

What changes at this level

Level expectations

I want each level to feel independent, but also clearly deeper than the last. This panel makes the jump explicit so the value is obvious.

Anchor standards (course wide)
  • OWASP Top 10 for LLM Applications 2025
  • OWASP Top 10 for Agentic Applications 2026
  • NIST AI Risk Management Framework (AI RMF 1.0)
  • ISO/IEC 42001
Assessment intent: Practical building. Build and orchestrate agents with safe tool use and reliable workflows.

Assessment style: mixed format.

Pass standard: coming next.

Not endorsed by a certification body. This is my marking standard for consistency and CPD evidence.

Level progress: 0%

CPD tracking

Fixed hours for this level: not specified. Timed assessment time is included once on pass. Progress so far: 0.0 hours.

Stage 3: Practical Building

This is where the rubber meets the road. Everything you have learned so far comes together as we build real, working AI agents. By the end of this stage, you will have created:

  • A complete ReAct agent from scratch
  • A multi-agent system with specialised roles
  • Visual workflows using n8n
  • An MCP server that connects to Claude Desktop

Learning by doing

I believe you cannot truly understand something until you build it. This stage is intentionally heavy on code. Type it out. Run it. Break it. Fix it. That is how you learn.


Module 3.1: Building Your First Agent (6 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Build a complete agent from scratch using Python
  2. Implement the ReAct pattern with real tools
  3. Debug and troubleshoot agent behaviour
  4. Deploy an agent locally

3.1.1 The Complete Agent Implementation

Let us build a production-ready single agent step by step. This is not a toy example. This is the foundation for real applications.

"""
Complete AI Agent Implementation
================================
A production-ready single agent using the ReAct pattern.

This module provides everything you need to build an agent
that can reason about problems and use tools to solve them.

Author: Ransford Amponsah
Course: AI Agents - From Foundation to Mastery
License: MIT

Requirements:
- Python 3.10+
- requests library
- Ollama running locally (ollama serve)
"""

from typing import Dict, List, Any, Optional, Callable
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime
import json
import re
import requests


class AgentStatus(Enum):
    """Current state of the agent."""
    IDLE = "idle"
    THINKING = "thinking"
    ACTING = "acting"
    WAITING = "waiting"
    COMPLETE = "complete"
    ERROR = "error"


@dataclass
class Tool:
    """
    Definition of a tool that the agent can use.
    
    A tool is a capability we give to the agent. It could be
    searching the web, doing calculations, reading files, or
    anything else you can express as a Python function.
    
    Attributes:
        name: Unique identifier for the tool (use snake_case)
        description: Human-readable description (shown to LLM)
        function: The Python function to call
        parameters: JSON Schema describing expected inputs
    """
    name: str
    description: str
    function: Callable
    parameters: Dict[str, Any]
    
    def to_prompt_format(self) -> str:
        """Format tool for inclusion in agent prompt."""
        params = ", ".join(
            f"{k}: {v.get('description', 'no description')}"
            for k, v in self.parameters.get("properties", {}).items()
        )
        return f"- {self.name}({params}): {self.description}"
    
    def execute(self, **kwargs) -> Any:
        """Execute the tool with given arguments."""
        return self.function(**kwargs)


@dataclass
class AgentState:
    """
    Complete state of the agent at any point.
    
    We track everything the agent has done and seen.
    This makes debugging much easier.
    """
    messages: List[Dict[str, str]] = field(default_factory=list)
    status: AgentStatus = AgentStatus.IDLE
    current_thought: Optional[str] = None
    pending_action: Optional[Dict[str, Any]] = None
    observations: List[str] = field(default_factory=list)
    iterations: int = 0
    error: Optional[str] = None


class ReActAgent:
    """
    A ReAct (Reasoning + Acting) Agent implementation.
    
    This agent follows the pattern:
    1. Thought: Reason about what to do
    2. Action: Choose and execute a tool
    3. Observation: Process the result
    4. Repeat until goal achieved or max iterations
    
    Example usage:
        agent = ReActAgent(
            model="llama3.2:3b",
            system_prompt="You are a helpful research assistant.",
            tools=[search_tool, calculator_tool]
        )
        
        result = agent.run("What is the population of London multiplied by 2?")
        print(result)
    """
    
    REACT_PROMPT_TEMPLATE = '''You are an AI assistant using the ReAct pattern.
You have access to these tools:

{tools}

For EVERY response, you MUST use this EXACT format:

Thought: [Your reasoning about what to do next]
Action: [tool_name]
Action Input: [input for the tool as valid JSON]

OR when you have the final answer:

Thought: [Your reasoning about why you are done]
Final Answer: [Your complete response to the user]

RULES:
1. Always start with "Thought:"
2. Only use tools that are listed above
3. Action Input must be valid JSON
4. Only output "Final Answer:" when you truly have the answer
5. Be concise but complete

{system_prompt}

User Query: {query}
'''
    
    def __init__(
        self,
        model: str = "llama3.2:3b",
        system_prompt: str = "You are a helpful assistant.",
        tools: Optional[List[Tool]] = None,
        max_iterations: int = 10,
        ollama_url: str = "http://localhost:11434"
    ):
        """
        Initialise the ReAct agent.
        
        Args:
            model: Ollama model name
            system_prompt: Custom instructions for the agent
            tools: List of Tool objects the agent can use
            max_iterations: Maximum reasoning loops
            ollama_url: URL of Ollama server
        """
        self.model = model
        self.system_prompt = system_prompt
        self.tools = tools or []
        self.max_iterations = max_iterations
        self.ollama_url = ollama_url
        self.state = AgentState()
        
        # Create tool lookup dictionary
        self.tool_map = {tool.name: tool for tool in self.tools}
    
    def _format_tools(self) -> str:
        """Format all tools for the prompt."""
        if not self.tools:
            return "No tools available. Answer using only your knowledge."
        return "\n".join(tool.to_prompt_format() for tool in self.tools)
    
    def _call_llm(self, prompt: str) -> str:
        """
        Call the Ollama LLM with a prompt.
        
        Args:
            prompt: The complete prompt to send
            
        Returns:
            The model's response text
        """
        try:
            response = requests.post(
                f"{self.ollama_url}/api/generate",
                json={
                    "model": self.model,
                    "prompt": prompt,
                    "stream": False
                },
                timeout=60
            )
            response.raise_for_status()
            return response.json().get("response", "")
        except requests.exceptions.ConnectionError:
            raise RuntimeError(
                "Cannot connect to Ollama. Is it running? "
                "Start with: ollama serve"
            )
        except requests.exceptions.Timeout:
            raise RuntimeError("Ollama request timed out")
    
    def _parse_response(self, response: str) -> Dict[str, Any]:
        """
        Parse the LLM's ReAct-formatted response.
        
        Args:
            response: Raw text from LLM
            
        Returns:
            Dictionary with thought, action (optional), or final_answer
        """
        result = {
            "thought": None, 
            "action": None, 
            "action_input": None, 
            "final_answer": None
        }
        
        # Extract Thought
        thought_match = re.search(
            r"Thought:\s*(.+?)(?=Action:|Final Answer:|$)", 
            response, 
            re.DOTALL
        )
        if thought_match:
            result["thought"] = thought_match.group(1).strip()
        
        # Check for Final Answer
        final_match = re.search(
            r"Final Answer:\s*(.+?)$", 
            response, 
            re.DOTALL
        )
        if final_match:
            result["final_answer"] = final_match.group(1).strip()
            return result
        
        # Extract Action
        action_match = re.search(r"Action:\s*(\w+)", response)
        if action_match:
            result["action"] = action_match.group(1).strip()
        
        # Extract Action Input
        input_match = re.search(
            r"Action Input:\s*(.+?)(?=Thought:|Action:|$)", 
            response, 
            re.DOTALL
        )
        if input_match:
            try:
                result["action_input"] = json.loads(
                    input_match.group(1).strip()
                )
            except json.JSONDecodeError:
                # If not valid JSON, treat as string
                result["action_input"] = {
                    "input": input_match.group(1).strip()
                }
        
        return result
    
    def _execute_action(
        self, 
        action: str, 
        action_input: Dict[str, Any]
    ) -> str:
        """
        Execute a tool action.
        
        Args:
            action: Name of the tool to execute
            action_input: Arguments to pass to the tool
            
        Returns:
            String observation of the result
        """
        if action not in self.tool_map:
            return f"Error: Unknown tool '{action}'. Available: {list(self.tool_map.keys())}"
        
        tool = self.tool_map[action]
        
        try:
            result = tool.execute(**action_input)
            if isinstance(result, (dict, list)):
                return f"Success: {json.dumps(result)}"
            return f"Success: {str(result)}"
        except TypeError as e:
            return f"Error: Invalid arguments for {action}: {e}"
        except Exception as e:
            return f"Error executing {action}: {e}"
    
    def run(self, query: str) -> str:
        """
        Run the agent on a user query.
        
        Args:
            query: The user's question or task
            
        Returns:
            The agent's final answer
        """
        # Build initial prompt
        prompt = self.REACT_PROMPT_TEMPLATE.format(
            tools=self._format_tools(),
            system_prompt=self.system_prompt,
            query=query
        )
        
        conversation = prompt
        
        for iteration in range(self.max_iterations):
            print(f"\n--- Iteration {iteration + 1} ---")
            
            # Get LLM response
            response = self._call_llm(conversation)
            print(f"Agent: {response[:200]}...")
            
            # Parse the response
            parsed = self._parse_response(response)
            
            if parsed["thought"]:
                print(f"Thought: {parsed['thought']}")
            
            # Check for final answer
            if parsed["final_answer"]:
                print(f"Final Answer: {parsed['final_answer']}")
                return parsed["final_answer"]
            
            # Execute action if present
            if parsed["action"]:
                print(f"Action: {parsed['action']}")
                print(f"Action Input: {parsed['action_input']}")
                
                observation = self._execute_action(
                    parsed["action"],
                    parsed["action_input"] or {}
                )
                print(f"Observation: {observation}")
                
                # Add to conversation
                conversation += f"\n{response}\nObservation: {observation}\n"
            else:
                # No action, might be stuck
                conversation += f"\n{response}\n"
                conversation += (
                    "\nYou must either use a tool (Action: ...) "
                    "or provide a Final Answer.\n"
                )
        
        return "I was unable to complete this task within the allowed iterations."

3.1.2 Creating Tools for Your Agent

Now let us create some useful tools for our agent.

"""
Agent Tools
===========
Practical tools that extend agent capabilities.
"""

import ast
import operator
from datetime import datetime


def calculator(expression: str) -> float:
    """
    Evaluate a mathematical expression safely.
    
    We use Python's AST to parse and evaluate expressions
    without using eval() which would be dangerous.
    """
    ops = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
    }
    
    def _eval(node):
        if isinstance(node, ast.Constant):
            # ast.Num is deprecated; numeric literals arrive as ast.Constant
            if isinstance(node.value, (int, float)):
                return node.value
            raise ValueError(f"Unsupported constant: {node.value!r}")
        elif isinstance(node, ast.BinOp):
            return ops[type(node.op)](
                _eval(node.left),
                _eval(node.right)
            )
        elif isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            # Allow negative numbers such as "-3 + 5"
            return -_eval(node.operand)
        else:
            raise ValueError(f"Unsupported: {type(node).__name__}")
    
    return _eval(ast.parse(expression, mode='eval').body)


def get_current_time(timezone: str = "UTC") -> str:
    """Get the current time.

    Note: this simplified version returns local time and only labels it
    with the requested timezone name; it does not convert between zones.
    """
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S") + f" ({timezone})"


def search_knowledge_base(query: str) -> str:
    """
    Search a knowledge base.
    
    In production, this would query a vector database or API.
    Here we use a simple simulation.
    """
    kb = {
        "london population": "London has a population of approximately 9 million people.",
        "python": "Python is a high-level programming language created by Guido van Rossum.",
        "ai agents": "AI Agents are systems that perceive their environment and take actions.",
        "react pattern": "ReAct combines reasoning and acting in an interleaved manner.",
    }
    
    query_lower = query.lower()
    for key, value in kb.items():
        if key in query_lower:
            return value
    
    return f"No information found for: {query}"


# Create tool instances

calculator_tool = Tool(
    name="calculator",
    description="Perform mathematical calculations. Use for any maths.",
    function=lambda expression: {"result": calculator(expression)},
    parameters={
        "type": "object",
        "properties": {
            "expression": {
                "type": "string",
                "description": "Mathematical expression, e.g., '2 + 3 * 4'"
            }
        },
        "required": ["expression"]
    }
)

time_tool = Tool(
    name="get_time",
    description="Get the current date and time.",
    function=lambda timezone="UTC": {"time": get_current_time(timezone)},
    parameters={
        "type": "object",
        "properties": {
            "timezone": {
                "type": "string",
                "description": "Timezone name (default: UTC)"
            }
        }
    }
)

search_tool = Tool(
    name="search",
    description="Search knowledge base for information about topics.",
    function=lambda query: {"result": search_knowledge_base(query)},
    parameters={
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "What to search for"
            }
        },
        "required": ["query"]
    }
)


# Example usage
if __name__ == "__main__":
    # Create agent with tools
    agent = ReActAgent(
        model="llama3.2:3b",
        system_prompt="You are a helpful research assistant. Be concise.",
        tools=[calculator_tool, time_tool, search_tool],
        max_iterations=5
    )
    
    # Test query
    result = agent.run("What is 25 multiplied by 4?")
    print(f"\nFinal Result: {result}")

Module 3.2: Multi-Agent Systems (6 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Understand why multi-agent systems are needed
  2. Implement the Supervisor pattern
  3. Implement the Swarm pattern
  4. Design agent communication protocols

3.2.1 Why Multiple Agents?

Single vs Multi-Agent Performance

In practice, single agents tend to degrade quickly once a task spans more than a couple of domains: one generalist prompt has to juggle competing instructions and context. Multi-agent systems maintain more consistent quality because each specialised agent stays focused on the part of the task it does best.


3.2.2 The Supervisor Pattern

A central supervisor routes requests to specialised sub-agents.

Supervisor Architecture
"""
Multi-Agent Supervisor Pattern
==============================
A supervisor agent coordinates specialised sub-agents.
"""

from typing import List, Dict, Any, Optional
from dataclasses import dataclass
from enum import Enum


class AgentRole(Enum):
    """Roles for specialised agents."""
    RESEARCHER = "researcher"
    WRITER = "writer"
    CODER = "coder"
    ANALYST = "analyst"


@dataclass
class AgentMessage:
    """Message passed between agents."""
    sender: str
    recipient: str
    content: str
    message_type: str  # "task", "result", "query", "status"
    metadata: Optional[Dict[str, Any]] = None


class SpecialisedAgent:
    """A specialised agent with a specific focus area."""
    
    def __init__(
        self,
        name: str,
        role: AgentRole,
        system_prompt: str,
        tools: List = None
    ):
        self.name = name
        self.role = role
        self.system_prompt = system_prompt
        self.tools = tools or []
    
    def process(self, message: AgentMessage) -> AgentMessage:
        """Process an incoming message and return a response."""
        prompt = f"""You are {self.name}, a specialist {self.role.value}.

{self.system_prompt}

Task from supervisor:
{message.content}

Provide your expert response:
"""
        
        response = self._call_llm(prompt)
        
        return AgentMessage(
            sender=self.name,
            recipient=message.sender,
            content=response,
            message_type="result"
        )
    
    def _call_llm(self, prompt: str) -> str:
        """Call the underlying LLM."""
        import requests
        response = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
            timeout=60
        )
        response.raise_for_status()
        return response.json().get("response", "")


class SupervisorAgent:
    """Supervisor that coordinates multiple specialised agents."""
    
    ROUTING_PROMPT = """You are a supervisor coordinating a team of specialists.

Available specialists:
{agents}

User request:
{query}

Decide which specialist should handle this. Respond with JSON:
{{"agent": "agent_name", "task": "specific task for them"}}
"""
    
    def __init__(self, agents: List[SpecialisedAgent]):
        self.agents = {agent.name: agent for agent in agents}
    
    def run(self, query: str) -> str:
        """Process a user query through the multi-agent system."""
        # Route to appropriate agent
        routing = self._route_request(query)
        
        agent_name = routing.get("agent")
        task = routing.get("task", query)
        
        if agent_name in self.agents:
            agent = self.agents[agent_name]
            message = AgentMessage(
                sender="supervisor",
                recipient=agent_name,
                content=task,
                message_type="task"
            )
            response = agent.process(message)
            return response.content
        
        return "No suitable agent found for this request."
    
    def _route_request(self, query: str) -> Dict:
        """Determine which agent should handle the request."""
        agents_desc = "\n".join(
            f"- {name}: {agent.role.value}"
            for name, agent in self.agents.items()
        )
        
        prompt = self.ROUTING_PROMPT.format(agents=agents_desc, query=query)
        
        import requests
        response = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
            timeout=60
        )
        
        import json
        try:
            return json.loads(response.json().get("response", "{}"))
        except json.JSONDecodeError:
            # Fall back to the first registered agent if routing output is not valid JSON
            return {"agent": list(self.agents.keys())[0], "task": query}

3.2.3 The Swarm Pattern

In a swarm, agents hand off directly to each other without going through a supervisor.

Swarm Architecture
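The supervisor code above does not cover the swarm case, so here is a minimal sketch of the handoff idea. All names here (`SwarmAgent`, `run_swarm`, the toy handlers) are hypothetical illustrations, not a library API: each agent returns either a final answer or the name of a peer to hand off to.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Optional, Tuple

# Each handler returns (output, next_agent); next_agent is None when finished.
Handler = Callable[[str], Tuple[str, Optional[str]]]


@dataclass
class SwarmAgent:
    name: str
    handle: Handler


def run_swarm(
    agents: Dict[str, SwarmAgent],
    start: str,
    task: str,
    max_hops: int = 5,
) -> str:
    """Pass work from agent to agent until one declines to hand off."""
    current, payload = start, task
    for _ in range(max_hops):
        output, target = agents[current].handle(payload)
        if target is None:
            return output  # this agent produced the final answer
        current, payload = target, output  # direct handoff, no supervisor
    return "Swarm exceeded the maximum number of handoffs."


# Toy handlers standing in for LLM-backed agents
researcher = SwarmAgent("researcher", lambda t: (f"notes on {t}", "writer"))
writer = SwarmAgent("writer", lambda t: (f"report from {t}", None))

result = run_swarm(
    {"researcher": researcher, "writer": writer}, "researcher", "AI agents"
)
```

The contrast with the Supervisor pattern is that routing decisions live inside each agent rather than in a central coordinator.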

🎯 Interactive: Agent Workflow Designer

Before diving into visual tools like n8n, use this interactive designer to understand how agent workflows are structured. Explore template workflows or design your own multi-step agent processes.

Research Report Generator

Agent workflow that researches a topic and produces a structured report

Step types: 📥 Input · 📤 Output · 🔧 Tool Call · 🧠 Reasoning · Condition

Workflow Steps

  1. 📥 Input: Receive Topic. User provides research topic.
  2. 🔧 Tool Call: Web Search. Search for authoritative sources. (Tool: search_web())
  3. 🧠 Reasoning: Evaluate Sources. Filter for credible, recent sources.
  4. 🔧 Tool Call: Read Articles. Extract key information from top sources. (Tool: read_url())
  5. 🧠 Reasoning: Synthesise Findings. Combine information into a coherent narrative.
  6. 🔧 Tool Call: Generate Outline. Create a structured report outline. (Tool: write_file())
  7. 📤 Output: Return Report. Deliver the formatted research report.

Workflow Analysis: 7 total steps · 3 tool calls · 0 decision points

Design tips:

  • Start with clear inputs and end with clear outputs
  • Add reasoning steps to explain complex decisions
  • Use conditions for error handling and edge cases
  • Keep tool calls focused on single responsibilities

Module 3.3: Workflow Automation with n8n (6 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Understand n8n's architecture and capabilities
  2. Build AI-powered workflows visually
  3. Integrate with external services
  4. Implement human-in-the-loop patterns

3.3.1 What is n8n?

n8n (pronounced "n-eight-n") is a workflow automation platform that lets you connect different apps and services. Think of it as building with LEGO blocks, but for software.

Key Features:

  • Visual drag-and-drop interface
  • 400+ built-in integrations
  • Native AI capabilities
  • Self-hosted or cloud options
  • Fair-code license (free for personal use)

3.3.2 Installing n8n

Using Docker (Recommended):

# macOS / Linux
docker run -it --rm --name n8n \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n
# Windows PowerShell
docker run -it --rm --name n8n `
  -p 5678:5678 `
  -v n8n_data:/home/node/.n8n `
  n8nio/n8n

Access n8n at: http://localhost:5678


3.3.3 Building an AI Workflow

Let us build an email classification and auto-responder.

Email Classification Workflow
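In n8n this workflow is built visually, so no code appears on the page. As a minimal sketch of the human-in-the-loop idea named in this module's objectives (every name below is hypothetical, standing in for workflow nodes), an approval gate can look like this: classify, draft, and hold anything urgent until a human approves it.

```python
def classify_email(subject: str) -> str:
    """Toy classifier standing in for an LLM call."""
    return "urgent" if "outage" in subject.lower() else "routine"


def auto_respond(subject: str, approve) -> str:
    """Draft a reply, but require human approval before sending urgent ones."""
    category = classify_email(subject)
    draft = f"[{category}] Re: {subject}"
    if category == "urgent" and not approve(draft):
        return "held for human review"  # human rejected the draft
    return f"sent: {draft}"


# A human approver is just a callable; here we simulate a rejection.
result = auto_respond("Server outage in EU region", approve=lambda draft: False)
```

The same gate in n8n would be a condition node followed by a wait-for-approval step; the point is that the automated path only proceeds when a human signs off.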

Module 3.4: Model Context Protocol (MCP) (6 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Understand MCP architecture and purpose
  2. Build a simple MCP server
  3. Connect MCP to AI clients like Claude Desktop
  4. Implement security best practices

3.4.1 What is MCP?

The Model Context Protocol (MCP) is an open standard for connecting AI models to external tools and data sources. Think of it as "USB-C for AI".

MCP Architecture

Before MCP: Every AI app needed custom integrations with every tool. 10 apps x 100 tools = 1,000 integrations

With MCP: Each app implements MCP once, each tool implements MCP once. 10 apps + 100 tools = 110 implementations


3.4.2 Building an MCP Server

"""
Simple MCP Server
=================
An MCP server providing weather information.

Run with: python weather_mcp_server.py
Then connect from an MCP client (Claude Desktop, etc.)
"""

import asyncio
import json
from typing import Any, Dict
from dataclasses import dataclass


@dataclass
class Tool:
    """An MCP tool definition."""
    name: str
    description: str
    input_schema: Dict[str, Any]


class MCPServer:
    """Basic MCP Server implementation."""
    
    def __init__(self, name: str, version: str = "1.0.0"):
        self.name = name
        self.version = version
        self.tools: Dict[str, Tool] = {}
        self._tool_handlers: Dict[str, callable] = {}
    
    def register_tool(
        self,
        name: str,
        description: str,
        input_schema: Dict[str, Any],
        handler: callable
    ):
        """Register a tool with the server."""
        self.tools[name] = Tool(name, description, input_schema)
        self._tool_handlers[name] = handler
    
    async def handle_request(self, request: Dict) -> Dict:
        """Handle an incoming MCP request (JSON-RPC 2.0)."""
        method = request.get("method")
        params = request.get("params", {})
        request_id = request.get("id")
        
        if method == "tools/list":
            result = {
                "tools": [
                    {
                        "name": t.name,
                        "description": t.description,
                        "inputSchema": t.input_schema
                    }
                    for t in self.tools.values()
                ]
            }
        elif method == "tools/call":
            tool_name = params.get("name")
            arguments = params.get("arguments", {})
            
            if tool_name in self._tool_handlers:
                handler = self._tool_handlers[tool_name]
                result_data = handler(**arguments)
                result = {
                    "content": [{"type": "text", "text": json.dumps(result_data)}]
                }
            else:
                # JSON-RPC 2.0 error responses must echo the envelope and id
                return {
                    "jsonrpc": "2.0", "id": request_id,
                    "error": {"code": -32601, "message": f"Unknown tool: {tool_name}"}
                }
        else:
            return {
                "jsonrpc": "2.0", "id": request_id,
                "error": {"code": -32601, "message": f"Unknown method: {method}"}
            }
        
        return {"jsonrpc": "2.0", "id": request_id, "result": result}


# Create server and register tools
server = MCPServer("weather-server")

WEATHER_DATA = {
    "london": {"temp": 12, "condition": "cloudy"},
    "paris": {"temp": 15, "condition": "sunny"},
    "tokyo": {"temp": 18, "condition": "clear"},
}

def get_weather(location: str) -> Dict:
    """Get weather for a location."""
    loc = location.lower()
    if loc in WEATHER_DATA:
        return {"location": location, **WEATHER_DATA[loc]}
    return {"error": f"No weather data for: {location}"}

server.register_tool(
    name="get_weather",
    description="Get current weather for a city",
    input_schema={
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
    },
    handler=get_weather
)

3.4.3 Connecting to Claude Desktop

Create a configuration file:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_mcp_server.py"]
    }
  }
}

Restart Claude Desktop. Claude now has access to your weather tools!


Module 3.5: Integration and APIs (6 hours)

Learning Objectives

By the end of this module, you will be able to:

  1. Connect agents to external APIs
  2. Handle authentication and rate limits
  3. Build robust error handling
  4. Create production-ready integrations

3.5.1 API Integration Best Practices

API Integration Checklist

Building reliable integrations

🔐 Authentication

  • Store API keys in environment variables
  • Never commit secrets to version control
  • Use OAuth where available
  • Rotate keys regularly

⏱️ Rate Limiting

  • Implement exponential backoff
  • Track rate limit headers
  • Queue requests when near limits
  • Cache responses when possible

🔄 Error Handling

  • Retry transient failures
  • Log errors for debugging
  • Provide meaningful error messages
  • Fail gracefully, not catastrophically

📊 Monitoring

  • Track request latency
  • Monitor error rates
  • Set up alerts for failures
  • Log API usage for billing
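The retry and backoff bullets above can be sketched as a small helper. This is a generic sketch, not tied to a specific HTTP library: `call` stands in for any request function that raises on transient failure.

```python
import random
import time


def with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Retry `call` on failure with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            # Double the wait each attempt, cap it, and add jitter so
            # many clients do not all retry in lockstep.
            delay = min(base_delay * (2 ** attempt), max_delay)
            time.sleep(delay + random.uniform(0, delay * 0.1))


# Example: a flaky call that succeeds on the third attempt
attempts = {"n": 0}


def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"


result = with_backoff(flaky, base_delay=0.01)
```

In a real integration you would also honour the API's rate-limit headers (such as `Retry-After`) instead of relying on the computed delay alone.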

Stage 3 Assessment

Module 3.1-3.2: Agent Building Quiz

What is the purpose of the to_prompt_format method in a Tool class?

In the Supervisor pattern, what is the supervisor's main role?

What happens when an agent tries to use a tool that does not exist?

Why do we use JSON for Action Input in the ReAct pattern?

What is the main advantage of the Swarm pattern over Supervisor?

Module 3.3-3.5: Integration Quiz

What protocol does MCP use for communication?

What is the main benefit of MCP over custom integrations?

What is n8n best suited for?

Why should API keys be stored in environment variables?

What is exponential backoff?


Summary

In this stage, you have built:

  1. A complete ReAct agent with tools for calculation, search, and more

  2. Multi-agent systems using both Supervisor and Swarm patterns

  3. Visual workflows with n8n for no-code AI automation

  4. An MCP server that connects to Claude Desktop

  5. Robust API integrations with proper authentication and error handling

You can build things now

You now have the practical skills to build real AI agents. In Stage 4, we will ensure your agents are secure and ethical.

Ready to test your knowledge?

AI Agents Practical Building Assessment

Validate your learning with practice questions and earn a certificate to evidence your CPD. Try three preview questions below, then take the full assessment.

50+ questions · 45 minutes · PDF certificate

Everything is free with unlimited retries

  • Take the full assessment completely free, as many times as you need
  • Detailed feedback on every question explaining why answers are correct or incorrect
  • Free downloadable PDF certificate with details of what you learned and hours completed
  • Personalised recommendations based on topics you found challenging


During timed assessments, copy actions are restricted and AI assistance is paused to ensure fair evaluation. Your certificate will include a verification URL that employers can use to confirm authenticity.

Course materials are protected by intellectual property rights. View terms.


Related Architecture Templates (4)

Production-ready templates aligned with industry frameworks. Download in multiple formats.

  • Password and Passphrase Coach (Foundation): Scores passwords and suggests stronger passphrases.
  • MFA Method Picker (Foundation): Chooses MFA methods based on threat fit and device context.
  • Session and Token Hygiene Checker (Practitioner): Evaluates session lifetimes, refresh, rotation, and cookie settings.
  • URL Risk Triage Tool (Foundation): Checks URLs for risky patterns and produces a quick decision.

Related categories: Security, Integration, Emerging