Practical building · Module 1
Building your first agent
Let us build a production-ready single agent step by step.
Previously
Start with Practical building
Hands-on implementation of real-world agent systems.
Next
Multi-agent systems
In practice, when tasks cross more than a couple of domains, a single agent often degrades quickly.
Why this matters
A working single agent is the foundation for every system that follows. Building one end to end shows you where reasoning, tool use, and error handling actually break, before you add the complexity of multiple agents.
What you will be able to do
1. Build a complete ReAct-style agent in Python.
2. Wire at least two tools and handle tool failures safely.
3. Debug a run using a clear thought, action, observation log.
Before you begin
- Core concepts completed or equivalent understanding
- Basic confidence with workflow and integration terms
Common ways people get this wrong
- Hidden complexity. Adding tools and memory too early makes failures harder to locate.
- No fallback path. If the agent fails, users still need a clear next step.
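The second failure mode, no fallback path, is cheap to guard against. A minimal sketch, assuming your agent exposes a `run(query)` method that may raise; the wrapper and `FALLBACK_MESSAGE` are illustrative names, not part of the implementation below:

```python
FALLBACK_MESSAGE = (
    "I could not complete this automatically. "
    "Please rephrase your request, or contact support."
)

def run_with_fallback(agent, query: str) -> str:
    """Always return something actionable, even when the agent fails."""
    try:
        return agent.run(query)
    except Exception as exc:
        # Log the real error for operators; show users a clear next step.
        print(f"[agent-error] {exc!r}")
        return FALLBACK_MESSAGE
```

The point is that the user-facing path never ends in a stack trace: the error is logged for you, and the user gets a clear next step.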
3.1.1 The Complete Agent Implementation
Let us build a production-ready single agent step by step. This is not a toy example. This is the foundation for real applications.
"""
Complete AI Agent Implementation
================================
A production-ready single agent using the ReAct pattern.
This module provides everything you need to build an agent
that can reason about problems and use tools to solve them.
Author: Ransford Amponsah
Course: AI Agents - From Foundation to Mastery
License: MIT
Requirements:
- Python 3.10+
- requests library
- Ollama running locally (ollama serve)
"""
from typing import Dict, List, Any, Optional, Callable
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime
import json
import re
import requests
class AgentStatus(Enum):
"""Current state of the agent."""
IDLE = "idle"
THINKING = "thinking"
ACTING = "acting"
WAITING = "waiting"
COMPLETE = "complete"
ERROR = "error"
@dataclass
class Tool:
"""
Definition of a tool that the agent can use.
A tool is a capability we give to the agent. It could be
searching the web, doing calculations, reading files, or
anything else you can express as a Python function.
Attributes:
name: Unique identifier for the tool (use snake_case)
description: Human-readable description (shown to LLM)
function: The Python function to call
parameters: JSON Schema describing expected inputs
"""
name: str
description: str
function: Callable
parameters: Dict[str, Any]
def to_prompt_format(self) -> str:
"""Format tool for inclusion in agent prompt."""
params = ", ".join(
f"{k}: {v.get('description', 'no description')}"
for k, v in self.parameters.get("properties", {}).items()
)
return f"- {self.name}({params}): {self.description}"
def execute(self, **kwargs) -> Any:
"""Execute the tool with given arguments."""
return self.function(**kwargs)
@dataclass
class AgentState:
"""
Complete state of the agent at any point.
We track everything the agent has done and seen.
This makes debugging much easier.
"""
messages: List[Dict[str, str]] = field(default_factory=list)
status: AgentStatus = AgentStatus.IDLE
current_thought: Optional[str] = None
pending_action: Optional[Dict[str, Any]] = None
observations: List[str] = field(default_factory=list)
iterations: int = 0
error: Optional[str] = None
class ReActAgent:
"""
A ReAct (Reasoning + Acting) Agent implementation.
This agent follows the pattern:
1. Thought: Reason about what to do
2. Action: Choose and execute a tool
3. Observation: Process the result
4. Repeat until goal achieved or max iterations
Example usage:
agent = ReActAgent(
model="llama3.2:3b",
system_prompt="You are a helpful research assistant.",
tools=[search_tool, calculator_tool]
)
result = agent.run("What is the population of London multiplied by 2?")
print(result)
"""
REACT_PROMPT_TEMPLATE = '''You are an AI assistant using the ReAct pattern.
You have access to these tools:
{tools}
For EVERY response, you MUST use this EXACT format:
Thought: [Your reasoning about what to do next]
Action: [tool_name]
Action Input: [input for the tool as valid JSON]
OR when you have the final answer:
Thought: [Your reasoning about why you are done]
Final Answer: [Your complete response to the user]
RULES:
1. Always start with "Thought:"
2. Only use tools that are listed above
3. Action Input must be valid JSON
4. Only output "Final Answer:" when you truly have the answer
5. Be concise but complete
{system_prompt}
User Query: {query}
'''
def __init__(
self,
model: str = "llama3.2:3b",
system_prompt: str = "You are a helpful assistant.",
tools: Optional[List[Tool]] = None,
max_iterations: int = 10,
ollama_url: str = "http://localhost:11434"
):
"""
Initialise the ReAct agent.
Args:
model: Ollama model name
system_prompt: Custom instructions for the agent
tools: List of Tool objects the agent can use
max_iterations: Maximum reasoning loops
ollama_url: URL of Ollama server
"""
self.model = model
self.system_prompt = system_prompt
self.tools = tools or []
self.max_iterations = max_iterations
self.ollama_url = ollama_url
self.state = AgentState()
# Create tool lookup dictionary
self.tool_map = {tool.name: tool for tool in self.tools}
def _format_tools(self) -> str:
"""Format all tools for the prompt."""
if not self.tools:
return "No tools available. Answer using only your knowledge."
return "\n".join(tool.to_prompt_format() for tool in self.tools)
def _call_llm(self, prompt: str) -> str:
"""
Call the Ollama LLM with a prompt.
Args:
prompt: The complete prompt to send
Returns:
The model's response text
"""
try:
response = requests.post(
f"{self.ollama_url}/api/generate",
json={
"model": self.model,
"prompt": prompt,
"stream": False
},
timeout=60
)
response.raise_for_status()
return response.json().get("response", "")
except requests.exceptions.ConnectionError:
raise RuntimeError(
"Cannot connect to Ollama. Is it running? "
"Start with: ollama serve"
)
except requests.exceptions.Timeout:
raise RuntimeError("Ollama request timed out")
def _parse_response(self, response: str) -> Dict[str, Any]:
"""
Parse the LLM's ReAct-formatted response.
Args:
response: Raw text from LLM
Returns:
Dictionary with thought, action (optional), or final_answer
"""
result = {
"thought": None,
"action": None,
"action_input": None,
"final_answer": None
}
# Extract Thought
thought_match = re.search(
r"Thought:\s*(.+?)(?=Action:|Final Answer:|$)",
response,
re.DOTALL
)
if thought_match:
result["thought"] = thought_match.group(1).strip()
# Check for Final Answer
final_match = re.search(
r"Final Answer:\s*(.+?)$",
response,
re.DOTALL
)
if final_match:
result["final_answer"] = final_match.group(1).strip()
return result
# Extract Action
action_match = re.search(r"Action:\s*(\w+)", response)
if action_match:
result["action"] = action_match.group(1).strip()
# Extract Action Input
input_match = re.search(
r"Action Input:\s*(.+?)(?=Thought:|Action:|$)",
response,
re.DOTALL
)
if input_match:
try:
result["action_input"] = json.loads(
input_match.group(1).strip()
)
except json.JSONDecodeError:
# If not valid JSON, treat as string
result["action_input"] = {
"input": input_match.group(1).strip()
}
return result
def _execute_action(
self,
action: str,
action_input: Dict[str, Any]
) -> str:
"""
Execute a tool action.
Args:
action: Name of the tool to execute
action_input: Arguments to pass to the tool
Returns:
String observation of the result
"""
if action not in self.tool_map:
return f"Error: Unknown tool '{action}'. Available: {list(self.tool_map.keys())}"
tool = self.tool_map[action]
try:
result = tool.execute(**action_input)
if isinstance(result, (dict, list)):
return f"Success: {json.dumps(result)}"
return f"Success: {str(result)}"
except TypeError as e:
return f"Error: Invalid arguments for {action}: {e}"
except Exception as e:
return f"Error executing {action}: {e}"
def run(self, query: str) -> str:
"""
Run the agent on a user query.
Args:
query: The user's question or task
Returns:
The agent's final answer
"""
# Build initial prompt
prompt = self.REACT_PROMPT_TEMPLATE.format(
tools=self._format_tools(),
system_prompt=self.system_prompt,
query=query
)
conversation = prompt
for iteration in range(self.max_iterations):
print(f"\n--- Iteration {iteration + 1} ---")
# Get LLM response
response = self._call_llm(conversation)
print(f"Agent: {response[:200]}...")
# Parse the response
parsed = self._parse_response(response)
if parsed["thought"]:
print(f"Thought: {parsed['thought']}")
# Check for final answer
if parsed["final_answer"]:
print(f"Final Answer: {parsed['final_answer']}")
return parsed["final_answer"]
# Execute action if present
if parsed["action"]:
print(f"Action: {parsed['action']}")
print(f"Action Input: {parsed['action_input']}")
observation = self._execute_action(
parsed["action"],
parsed["action_input"] or {}
)
print(f"Observation: {observation}")
# Add to conversation
conversation += f"\n{response}\nObservation: {observation}\n"
else:
# No action, might be stuck
conversation += f"\n{response}\n"
conversation += (
"\nYou must either use a tool (Action: ...) "
"or provide a Final Answer.\n"
)
return "I was unable to complete this task within the allowed iterations."3.1.2 Creating Tools for Your Agent
Now let us create some useful tools for our agent.
"""
Agent Tools
===========
Practical tools that extend agent capabilities.
"""
import ast
import operator
from datetime import datetime
def calculator(expression: str) -> float:
"""
Evaluate a mathematical expression safely.
We use Python's AST to parse and evaluate expressions
without using eval() which would be dangerous.
"""
ops = {
ast.Add: operator.add,
ast.Sub: operator.sub,
ast.Mult: operator.mul,
ast.Div: operator.truediv,
ast.Pow: operator.pow,
}
def _eval(node):
if isinstance(node, ast.Num):
return node.n
elif isinstance(node, ast.BinOp):
return ops[type(node.op)](
_eval(node.left),
_eval(node.right)
)
else:
raise ValueError(f"Unsupported: {type(node)}")
return _eval(ast.parse(expression, mode='eval').body)
def get_current_time(timezone: str = "UTC") -> str:
"""Get the current time."""
return datetime.now().strftime("%Y-%m-%d %H:%M:%S") + f" ({timezone})"
def search_knowledge_base(query: str) -> str:
"""
Search a knowledge base.
In production, this would query a vector database or API.
Here we use a simple simulation.
"""
kb = {
"london population": "London has a population of approximately 9 million people.",
"python": "Python is a high-level programming language created by Guido van Rossum.",
"ai agents": "AI Agents are systems that perceive their environment and take actions.",
"react pattern": "ReAct combines reasoning and acting in an interleaved manner.",
}
query_lower = query.lower()
for key, value in kb.items():
if key in query_lower:
return value
return f"No information found for: {query}"
# Create tool instances
calculator_tool = Tool(
name="calculator",
description="Perform mathematical calculations. Use for any maths.",
function=lambda expression: {"result": calculator(expression)},
parameters={
"type": "object",
"properties": {
"expression": {
"type": "string",
"description": "Mathematical expression, e.g., '2 + 3 * 4'"
}
},
"required": ["expression"]
}
)
time_tool = Tool(
name="get_time",
description="Get the current date and time.",
function=lambda **kwargs: {"time": get_current_time()},
parameters={
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "Timezone name (default: UTC)"
}
}
}
)
search_tool = Tool(
name="search",
description="Search knowledge base for information about topics.",
function=lambda query: {"result": search_knowledge_base(query)},
parameters={
"type": "object",
"properties": {
"query": {
"type": "string",
"description": "What to search for"
}
},
"required": ["query"]
}
)
# Example usage
if __name__ == "__main__":
# Create agent with tools
agent = ReActAgent(
model="llama3.2:3b",
system_prompt="You are a helpful research assistant. Be concise.",
tools=[calculator_tool, time_tool, search_tool],
max_iterations=5
)
# Test query
result = agent.run("What is 25 multiplied by 4?")
print(f"\nFinal Result: {result}")Mental model
A minimal agent pipeline
Start with a small loop you can test. Complexity is earned.
1. Input
2. Prompt and constraints
3. Model
4. One tool
5. Output
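The five stages above can be wired into a loop small enough to test in isolation. This is a sketch, not the full implementation: `call_llm` and `tool` are stand-ins you inject, and the `TOOL:` convention is an illustrative stand-in for real ReAct parsing:

```python
def minimal_agent(query: str, call_llm, tool, max_steps: int = 3) -> str:
    """One input, one prompt, one model, one tool, one output."""
    prompt = f"Answer the question. You may call the tool.\nQ: {query}"  # 1-2. Input, prompt
    for _ in range(max_steps):
        reply = call_llm(prompt)                       # 3. Model
        if reply.startswith("TOOL:"):
            observation = tool(reply[len("TOOL:"):])   # 4. One tool
            prompt += f"\nObservation: {observation}"
        else:
            return reply                               # 5. Output
    return "No answer within step budget."
```

Because the model and tool are injected, you can test this loop with fakes before any LLM is involved, which is exactly the "you can test the loop" assumption below.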
Assumptions to keep in mind
- You can test the loop. If you cannot reproduce behaviour, you cannot improve it safely.
- Side effects are limited. Start with read only tools before you allow writes.
Failure modes to notice
- Hidden complexity. Adding tools and memory too early makes failures harder to locate.
- No fallback path. If the agent fails, users still need a clear next step.
Check yourself
Quick check. Build your first agent
What are the three repeating steps in a ReAct loop?
Thought, action, observation.
Why should tool inputs be validated?
To prevent misuse, reduce accidents, and keep the agent inside safe boundaries.
Scenario: the agent calls the wrong tool repeatedly. What is a practical fix?
Tighten tool descriptions, reduce overlap, and add a stopping condition or a clarifying question.
What should you record to make runs debuggable?
The prompt, the tool calls, the observations, the final output, and any errors.
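That last answer translates directly into a small record you can attach to every run. A minimal sketch; the `RunLog` class and its field names are illustrative, not part of the implementation above:

```python
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class RunLog:
    """Everything needed to replay and debug a single agent run."""
    prompt: str
    tool_calls: list = field(default_factory=list)  # (tool name, input, observation)
    final_output: Optional[str] = None
    error: Optional[str] = None

    def record_tool(self, name: str, args: Any, observation: str) -> None:
        self.tool_calls.append((name, args, observation))
```

Persist one of these per run (even just as JSON lines) and "the agent did something weird" becomes "here is the exact prompt, call sequence, and error".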
Artefact and reflection
Artefact
A working agent script you can reuse later.
Reflection
Where in your work would a complete ReAct-style agent in Python change a decision, and what evidence would make you trust that change?
Optional practice
Run the agent on three tasks and record one failure honestly.