Foundations · Module 5
Your first AI interaction
Let us have your first conversation with a local AI model.
Previously
Setting up your environment
Python is the language of AI development.
This module
Your first AI interaction
Let us have your first conversation with a local AI model.
Next
Foundations practice test
Test recall and judgement against the governed stage question bank before you move on.
Progress
Mark this module complete when you can explain it without rereading every paragraph.
Why this matters
Talking to a local model by hand, and then from a Python script, is your first step toward building AI agents. It is the foundation of everything we will build.
What you will be able to do
1. Run a local model with Ollama and hold a short conversation.
2. Write a tiny Python script that calls a local model endpoint.
3. Explain what a prompt is and why context limits exist.
Before you begin
- No previous technical background required
- Read the section explanation before using tools
Common ways people get this wrong
- Confident mistakes. Models can sound certain while being wrong. Confidence is not a guarantee.
- Over trust. If you treat output as authority, you outsource judgement and take on hidden risk.
1.5.1 Chatting with Ollama
Let us have your first conversation with a local AI model.
Start a chat:
ollama run llama3.2:3b
You will see a prompt like >>>. Type your message and press Enter.
>>> Hello! Can you explain what you are?
The model will respond. Try a few more questions:
"What is the capital of France?"
"Write a haiku about programming."
"Explain machine learning to a 10-year-old."
To exit, type /bye or press Ctrl+D.
1.5.2 Your First Python Script
Now let us write a Python script that talks to Ollama. This is your first step toward building AI agents.
Create the file:
cd my-ai-project
touch first_llm.py
Write the code:
"""
My First LLM Interaction
=========================
A simple script that sends a prompt to a local LLM
and prints the response.
Author: [Your Name]
Date: 2026-03-17
"""
import requests
import json
def ask_llm(prompt: str, model: str = "llama3.2:3b") -> str:
"""
Send a prompt to the local Ollama server and get a response.
Args:
prompt: The question or instruction to send
model: Which model to use (default: llama3.2:3b)
Returns:
The model's response as a string
"""
response = requests.post(
"http://localhost:11434/api/generate",
json={
"model": model,
"prompt": prompt,
"stream": False
}
)
if response.status_code == 200:
return response.json().get("response", "No response received")
else:
return f"Error: {response.status_code}"
def main():
"""Main function to demonstrate LLM interaction."""
print("Welcome to your first LLM interaction!")
print("=" * 50)
print()
# Ask the model a question
question = "What are three interesting facts about the moon?"
print(f"Question: {question}")
print()
print("Thinking...")
print()
answer = ask_llm(question)
print("Answer:")
print(answer)
if __name__ == "__main__":
main()Install the required package:
pip install requests
Run the script:
python first_llm.py
Congratulations! You have just written a program that talks to an AI. This is the foundation of everything we will build.
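The script above sets "stream": False for simplicity, so the whole answer arrives in one JSON object. With streaming enabled, the Ollama endpoint instead sends one JSON object per line, each carrying a fragment of the reply, ending with an object where "done" is true. Here is a sketch of how such a stream can be reassembled; the sample lines are made up for illustration, not real server output:

```python
import json

# Illustrative sample of a streaming response: one JSON object per line,
# each with a "response" fragment, and a final object with "done": true.
sample_stream = [
    '{"response": "The moon ", "done": false}',
    '{"response": "is drifting away.", "done": false}',
    '{"response": "", "done": true}',
]

def assemble(lines):
    """Join the "response" fragments from newline-delimited JSON lines."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

print(assemble(sample_stream))  # -> The moon is drifting away.
```

Streaming matters for user experience: you can print each fragment as it arrives instead of staring at "Thinking..." until the full answer is ready.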
1.5.3 Understanding prompts
A prompt is the text you send to an AI model. Crafting good prompts is a skill that takes practice. Here are some principles:
🎯 Interactive. Build your own tool schema
AI agents use tools by calling functions with specific parameters. Understanding how to design clear tool schemas is essential for building reliable agents. Use this interactive builder to explore and create tool definitions.
Interactive lab
Tool Schema Builder
This module includes an interactive practice component. Open the interactive tool or workspace step when you want to test the idea rather than only read about it.
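To make the idea concrete before you open the builder, here is a sketch of what a tool definition can look like. It loosely follows the JSON Schema style used by several agent frameworks; the tool name and fields are our own illustrative choices, not a specific vendor's format:

```python
# A hypothetical tool schema: what the tool is called, what it does,
# and exactly which parameters a model may fill in when calling it.
weather_tool = {
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. Paris"},
            "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

print(weather_tool["name"])  # -> get_weather
```

Notice that the schema is mostly description: clear names, types, and a `required` list are what let a model call the tool reliably instead of guessing at parameters.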
Prompt Engineering Best Practices
- ✅ Provide Specific Context
- Include relevant details about your situation, environment, and what you have already tried. Example: 'I am using Python 3.12 on macOS with VS Code. I installed pandas using pip but get ImportError when importing.'
- ❌ Avoid Vague Requests
- Asking open-ended questions without context leads to generic, unhelpful responses. Bad example: 'My code does not work. Can you help?'
- ⭐ Set Role and Format
- Tell the AI what role to adopt and exactly how you want the response structured. Example: 'You are a Python tutor. Explain list comprehensions in 3 steps with examples for a beginner. Format as numbered list.'
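The three principles above can be folded into a small helper that assembles a prompt from labelled parts. This is one possible structure, not an official template; the field names are our own:

```python
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Compose a prompt from the pieces recommended above: a role for
    the model, specific context, the task itself, and the desired
    response format."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

prompt = build_prompt(
    role="a Python tutor",
    context="the reader is a beginner using Python 3.12",
    task="explain list comprehensions in 3 steps with examples",
    fmt="numbered list",
)
print(prompt)
```

You could pass the resulting string straight to the `ask_llm` function from the script above; the point is that a structured prompt is just carefully assembled text.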
Stage 1 Assessment
You have now completed the foundation explanations and guided practice. This assessment checks whether you can separate model basics from agent behaviour and apply safe first steps in practice.
Summary
In this stage, you have learned:
- What AI really is: pattern matching at scale, not consciousness or understanding
- The difference between LLMs and agents: agents can perceive, reason, and act; LLMs can only generate text
- Command line basics: navigating your file system and running commands
- Environment setup: Python, virtual environments, and Ollama
- Your first AI interaction: talking to a local model and writing Python code to do it programmatically
Mental model
Prompt, response, verification
A useful first habit is to treat model output as a draft that you verify, not a fact you obey.
1. Prompt
2. Model
3. Draft output
4. Check and revise
5. Final decision
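The steps above can be sketched as plain control flow. In this sketch, ask and check are hypothetical stand-ins: ask would call a model, and check is whatever verification fits your task (a unit test, a source lookup, a quick calculation):

```python
def verify_loop(question, ask, check, max_rounds=3):
    """Treat each model answer as a draft: ask, check, and only accept
    an answer that passes verification."""
    for _ in range(max_rounds):
        draft = ask(question)   # steps 1-3: prompt -> model -> draft output
        if check(draft):        # step 4: check and revise
            return draft        # step 5: final decision
        question = f"{question}\nYour last answer failed a check. Revise it."
    return None  # no draft survived verification

# Toy usage with stand-in functions: the "model" counts upward,
# and the check accepts only even numbers.
import itertools
counter = itertools.count(1)
answer = verify_loop("pick an even number",
                     ask=lambda q: next(counter),
                     check=lambda d: d % 2 == 0)
print(answer)  # -> 2
```

The shape of the loop is the habit worth keeping: the model's output only becomes your answer after it passes a check you chose.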
Assumptions to keep in mind
- You validate claims. If a claim matters, check it. Look for sources, run the test, or ask for evidence.
- You state constraints. Clear constraints reduce wandering answers and make checking easier.
Failure modes to notice
- Confident mistakes. Models can sound certain while being wrong. Confidence is not a guarantee.
- Over trust. If you treat output as authority, you outsource judgement and take on hidden risk.
Check yourself
Quick check. Your first AI interaction
What does ollama run do?
It starts a local chat session with a model so you can send prompts and get responses.
Why do prompts benefit from specificity and context?
Because the model is guessing what you mean from text alone. Clear constraints reduce vague or unhelpful answers.
Scenario. The model gives a confident answer that feels wrong. What is the safe next step?
Treat it as unverified, ask for the reasoning or sources, and check against a trusted reference or a quick test.
What is a context window?
The amount of text the model can consider at once. If the conversation is longer than that window, earlier details drop out.
Why is a local model useful for early learning?
You can experiment freely, keep data on your machine, and learn the basics without worrying about API keys.
Modules 1.1 to 1.2. AI fundamentals quiz
What is the relationship between AI, Machine Learning, and Deep Learning?
Correct answer: AI is the broadest term, containing ML, which contains Deep Learning
AI is the broadest category encompassing all intelligent systems. Machine Learning is a subset of AI where systems learn from data. Deep Learning is a subset of ML using neural networks with multiple layers.
What is a hallucination in the context of AI?
Correct answer: When the AI generates plausible-sounding but incorrect information
A hallucination is when an AI generates plausible-sounding but incorrect information. The model is not lying. It genuinely cannot distinguish between what it invented and what is true.
What is the main difference between an LLM and an AI Agent?
Correct answer: Agents can take actions in the real world using tools
The key difference is that AI Agents can perceive their environment, reason about goals, and take actions using tools. LLMs can only generate text responses. Agents add the ability to actually do things.
What is a token in the context of LLMs?
Correct answer: A piece of text, roughly 4 characters on average
A token is a piece of text, roughly 4 characters or 3/4 of a word on average. AI models process text as tokens, not as letters or words. Context windows are measured in tokens.
Which is NOT a core component of an AI Agent?
Correct answer: Cryptocurrency wallet
The four core components of an AI Agent are: Brain (LLM) for reasoning, Tools for taking actions, Memory for persistence, and Planning for decision-making. Cryptocurrency is not a core component.
Modules 1.3 to 1.5. Practical skills quiz
What command shows your current directory on macOS?
Correct answer: pwd
The 'pwd' command (print working directory) shows your current location in the file system on macOS and Linux. On Windows, 'cd' without arguments shows the current directory.
Why should you use virtual environments for Python projects?
Correct answer: They isolate project dependencies to prevent conflicts
Virtual environments isolate project dependencies. Without them, different projects might require conflicting versions of the same package, causing errors.
What does Ollama allow you to do?
Correct answer: Run AI models locally on your computer
Ollama lets you run AI models locally on your own computer. No API keys needed, no data leaving your machine. It is free and private.
Which is an example of a GOOD prompt?
Correct answer: Explain how Python list comprehensions work with two examples
Good prompts are specific and provide context. 'Explain how Python list comprehensions work with two examples' tells the AI exactly what you want and in what format.
What port does the Ollama server typically run on?
Correct answer: 11434
The Ollama server runs on port 11434 by default. When making API calls, you connect to http://localhost:11434.
Artefact and reflection
Artefact
Your first_llm.py script plus one prompt you are proud of.
Reflection
Where in your work would running a local model with Ollama and holding a short conversation change a decision, and what evidence would make you trust that change?
Optional practice
Ask the model three questions, then reflect on one wrong or vague answer.