LLM & Agent Lab

This is the part of the building where language models show their workings. Try prompts, wire simple tools together, watch an agent pick a path, and see where things break, all in a low-risk setting.

Security reminder: This studio is for education and experimentation. Do not upload production data or secrets. Outputs are demos; review them before use anywhere safety-critical or financial.

1. Prompt bench

Prompts are instructions. Temperature nudges how varied the output is; max tokens caps its length. Try variations to feel the shifts.
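To make those two knobs concrete, here is a minimal sketch of a prompt-bench call in TypeScript. The endpoint path and the `text` field of the response are assumptions for illustration, not this lab's actual API.

```ts
// Minimal prompt-bench sketch. The endpoint and response shape are assumed.
interface BenchRequest {
  prompt: string;
  temperature: number; // lower = more repeatable, higher = more varied
  maxTokens: number;   // hard cap on output length
}

async function runPrompt(req: BenchRequest): Promise<string> {
  const res = await fetch("/api/llm/complete", {   // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  const data = await res.json();
  return data.text ?? "";                          // assumed response field
}

// Same prompt, two temperatures, to feel the shift described above.
runPrompt({ prompt: "Name three colors.", temperature: 0.2, maxTokens: 64 });
runPrompt({ prompt: "Name three colors.", temperature: 1.0, maxTokens: 64 });
```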

2. Tool calling sandbox

Tools are abilities the model can call. We show each step so you can see when a tool is invoked and how the answer is composed.
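The loop below is a sketch of what that step-by-step view captures, under the assumption that each model turn returns either a final answer or a single tool request; the tool names and step shape are made up for illustration.

```ts
// Sketch of a visible tool-call loop. Each executed tool call is pushed onto
// `trace`, which is the per-step record you would see in the sandbox.
type ToolCall = { tool: string; args: Record<string, unknown> };
type ModelStep = { answer?: string; toolCall?: ToolCall };

const tools: Record<string, (args: Record<string, unknown>) => string> = {
  clock: () => new Date().toISOString(),
  calculator: (args) => String(Number(args.a) + Number(args.b)), // toy add-only calculator
};

async function runWithTools(
  prompt: string,
  callModel: (prompt: string, toolResults: string[]) => Promise<ModelStep>,
): Promise<string> {
  const trace: string[] = [];
  for (let step = 0; step < 5; step++) {           // cap the loop so it cannot run forever
    const out = await callModel(prompt, trace);
    if (out.answer) return out.answer;             // model composed a final answer
    if (out.toolCall) {
      const fn = tools[out.toolCall.tool];
      const result = fn ? fn(out.toolCall.args) : "unknown tool";
      trace.push(`${out.toolCall.tool} -> ${result}`); // the step shown in the sandbox
    }
  }
  return "Gave up after 5 steps.";
}
```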

3. Agent flow builder

A sketch of routing logic: classify intent, pick a tool if allowed, then compose an answer. Good for building intuition before heavier agent frameworks.

  • Allow calculator: On
  • Allow clock: On
  • Allow Docs search: On

Flow sketch

  • 1. User input
  • 2. Intent classifier (math, time, docs, general)
  • 3. Tool node (calculator, clock, docs search)
  • 4. Answer composer

This is a simplified routing demo. Real agents add memory, retries, guards and better selection logic.
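The sketch below condenses that flow into code: classify intent, gate each tool on its allow toggle, then compose an answer. The keyword classifier and the toy calculator are illustrative stand-ins, not the builder's actual logic.

```ts
// Condensed version of the flow sketch: classify -> gated tool -> compose.
type Intent = "math" | "time" | "docs" | "general";

function classifyIntent(input: string): Intent {
  if (/\d\s*[+\-*/]\s*\d/.test(input)) return "math";
  if (/\b(time|clock|now)\b/i.test(input)) return "time";
  if (/\b(docs?|notes?)\b/i.test(input)) return "docs";
  return "general";
}

interface AllowToggles { calculator: boolean; clock: boolean; docsSearch: boolean }

function routeAndCompose(input: string, allow: AllowToggles): string {
  switch (classifyIntent(input)) {
    case "math": {
      if (!allow.calculator) break;                   // tool off: fall through to general
      const m = input.match(/(\d+)\s*\+\s*(\d+)/);    // toy: addition only
      return m ? `Calculator: ${Number(m[1]) + Number(m[2])}` : "Could not parse the math.";
    }
    case "time": {
      if (!allow.clock) break;
      return `Clock: ${new Date().toLocaleTimeString()}`;
    }
    case "docs": {
      if (!allow.docsSearch) break;
      return "Docs search: (matching note chunks would be retrieved here)";
    }
  }
  return "General: fall back to a plain model answer.";
}

// With all tools on, "what is 2 + 3" routes through the calculator node.
console.log(routeAndCompose("what is 2 + 3", { calculator: true, clock: true, docsSearch: true }));
```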

4. Grounding on your own notes

Ask a question, retrieve relevant chunks from your notes, and draft an answer. Stays in-browser for safety.
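As a rough picture of what "retrieve relevant chunks" can mean entirely in the browser, here is a bag-of-words scorer that ranks note chunks by word overlap with the question; the real panel may use embeddings instead, so treat this as an assumption-laden sketch.

```ts
// Score each chunk by how many of its words appear in the question, keep the top k.
interface Chunk { title: string; text: string }

function scoreChunk(question: string, chunk: Chunk): number {
  const qWords = new Set(question.toLowerCase().split(/\W+/).filter(Boolean));
  const cWords = chunk.text.toLowerCase().split(/\W+/);
  return cWords.filter((w) => qWords.has(w)).length;  // count of shared words
}

function retrieve(question: string, chunks: Chunk[], k = 2): Chunk[] {
  return [...chunks]
    .map((c) => ({ c, s: scoreChunk(question, c) }))
    .sort((a, b) => b.s - a.s)
    .filter((x) => x.s > 0)
    .slice(0, k)
    .map((x) => x.c);
}

// The retrieved chunks would then be pasted into the prompt that drafts the answer.
```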

Show retrieved chunks: On

Add a custom chunk (optional)

Answer

No answer yet.

Retrieved chunks

  • Sample note: Availability is often expressed as nines; 99.9% means ~8.76 hours down per year.
  • Sample note: In vector search, cosine similarity measures the angle between embeddings.
  • Sample note: Hashing turns data into fixed-length digests for integrity checks.
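As a quick check on two of those notes, the snippet below computes cosine similarity from dot products and vector lengths, and converts an availability figure into downtime hours per year (0.1% of 8,760 hours ≈ 8.76 hours).

```ts
// Cosine similarity: dot product divided by the product of vector lengths.
function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, ai, i) => sum + ai * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Downtime per year = (1 - availability) * 24 hours * 365 days.
const downtimeHoursPerYear = (availability: number) => (1 - availability) * 24 * 365;

console.log(cosineSimilarity([1, 0, 1], [1, 1, 1]).toFixed(3)); // ≈ 0.816
console.log(downtimeHoursPerYear(0.999).toFixed(2));            // ≈ 8.76
```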

5. Recent LLM runs

Activity from the prompt bench, tool sandbox, agent builder, and grounding panel is logged here.
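For intuition, a run-log entry might look something like the record below; the field names are assumptions, not the lab's actual schema.

```ts
// Hypothetical shape of one logged run.
interface RunLogEntry {
  id: string;
  kind: "prompt-bench" | "tools" | "agent" | "grounding";
  startedAt: string;        // ISO timestamp
  prompt: string;
  toolsUsed: string[];      // empty for plain prompt-bench runs
  outputPreview: string;    // first line or two of the answer
}

const runLog: RunLogEntry[] = [];  // newest runs would be appended here and rendered below
```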

No runs yet. Try the prompt bench to create your first log.