Core concepts · Module 4

Design patterns

For complex tasks, planning before acting often works better than interleaved reasoning.

45 min · 3 outcomes · Core concepts

Previously

Memory and context

Short-Term Memory.

This module

Design patterns

For complex tasks, planning before acting often works better than interleaved reasoning.

Next

Architecture fundamentals

State is everything your agent needs to remember to complete its task.

Progress

Mark this module complete when you can explain it without rereading every paragraph.

Why this matters

Design patterns give agent behaviour a known, repeatable shape. When you can name the pattern, you can predict its cost, test its outputs, and debug its failures; without one, every agent is a bespoke system that only its author can reason about.

What you will be able to do

  1. Choose a pattern that matches the task and the risk.
  2. Explain when to use planning, reflection, and retries, and when not to.
  3. Combine patterns without creating an untestable mess.

Before you begin

  • Foundations-level understanding of this course
  • Confidence with key terms introduced in Stage 1

Common ways people get this wrong

  • Pattern mismatch. A complex pattern for a simple task adds cost and new ways to fail.
  • Hidden coupling. If multiple roles share hidden state, debugging becomes storytelling.

Main idea at a glance

Plan-and-Execute Pattern

Stage 1 · Complex Task

A multi-step goal that benefits from upfront planning before execution.

2.4.2 Plan-and-Execute Pattern

For complex tasks, planning before acting often works better than interleaved reasoning.
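A minimal sketch of the pattern, assuming a `planner_fn` that returns an ordered list of steps and an `executor_fn` that runs one step (both hypothetical names, standing in for LLM calls):

```python
def plan_and_execute(goal, planner_fn, executor_fn):
    """Plan first, then execute each step in order."""
    plan = planner_fn(goal)  # upfront plan you can inspect or log before acting
    results = []
    for step in plan:
        # Later steps can see earlier results, but the plan itself is fixed.
        results.append(executor_fn(step, results))
    return results
```

The key property is that the plan exists as data before any action runs, so it can be reviewed, logged, or rejected, unlike interleaved reasoning where each step is decided on the fly.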

2.4.3 Reflection Pattern

The agent reviews and improves its own output before presenting it.

"""
Reflection Pattern Implementation
==================================
Agent reviews and improves its output.
"""


def generate_with_reflection(
    query: str,
    generator_fn,
    critic_fn,
    max_iterations: int = 3
) -> str:
    """
    Generate content with self-reflection loop.
    
    Args:
        query: The user's request
        generator_fn: Function that generates initial content
        critic_fn: Function that critiques and suggests improvements
        max_iterations: Maximum improvement iterations
        
    Returns:
        Final improved content
    """
    # Generate initial response
    content = generator_fn(query)
    
    for i in range(max_iterations):
        # Get critique
        critique = critic_fn(
            query=query,
            content=content
        )
        
        # Check if good enough
        if critique.get("is_satisfactory", False):
            break
        
        # Improve based on feedback
        improvements = critique.get("suggestions", [])
        content = generator_fn(
            f"{query}\n\n"
            f"Previous attempt: {content}\n\n"
            f"Improvements needed: {improvements}\n\n"
            f"Please provide an improved version."
        )
    
    return content
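A toy end-to-end run of the reflection loop. The loop is restated compactly so this snippet stands alone, and the stub functions stand in for real LLM calls (all names here are hypothetical):

```python
def generate_with_reflection(query, generator_fn, critic_fn, max_iterations=3):
    # Compact restatement of the loop above so this snippet runs standalone.
    content = generator_fn(query)
    for _ in range(max_iterations):
        critique = critic_fn(query=query, content=content)
        if critique.get("is_satisfactory", False):
            break
        content = generator_fn(
            f"{query}\n\nPrevious attempt: {content}\n\n"
            f"Improvements needed: {critique.get('suggestions', [])}"
        )
    return content

def toy_generator(prompt: str) -> str:
    # A real generator would call a model; this stub revises when it sees feedback.
    return "draft v2" if "Improvements needed" in prompt else "draft v1"

def toy_critic(query: str, content: str) -> dict:
    # Approve only the revised draft.
    return {"is_satisfactory": content == "draft v2", "suggestions": ["add detail"]}

print(generate_with_reflection("Explain caching", toy_generator, toy_critic))
```

The run makes the loop's shape visible: one initial draft, one critique, one revision, then the critic approves and the loop exits early.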

2.4.4 Additional patterns worth knowing

Human-in-the-Loop

Not every decision should be automated. The Human-in-the-Loop pattern inserts approval checkpoints at critical moments. The agent does the research and drafts a recommendation. A human reviews and approves before the action executes. This is essential for high-stakes workflows like financial transactions, medical decisions, or public communications.
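A minimal sketch of the checkpoint, assuming a hypothetical `approve_fn` callback; in production that callback might be a ticket, a chat message, or a review UI rather than an in-process function:

```python
def execute_with_approval(action: dict, approve_fn, execute_fn):
    """Run execute_fn(action) only if a human approves the drafted action."""
    if not approve_fn(action):  # human checkpoint before any side effect
        return {"status": "rejected", "action": action}
    return {"status": "done", "result": execute_fn(action)}
```

The important design choice is that the checkpoint sits between drafting and execution: the agent can prepare everything, but nothing irreversible happens without the approval returning true.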

Swarm

The Swarm pattern uses lightweight agents that hand off tasks to each other dynamically. Instead of a central supervisor routing work, agents directly transfer control when they recognise that another agent is better suited for the current sub-task. OpenAI's Agents SDK popularised this pattern with its "handoffs" primitive.
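A toy handoff loop in the spirit of this pattern (this is an illustrative sketch, not the OpenAI Agents SDK API). Each agent either finishes the task or names the agent it hands off to:

```python
def run_swarm(agents: dict, start: str, task: str, max_handoffs: int = 5):
    """agents maps name -> fn(task) returning ('done', result) or ('handoff', next_name)."""
    current = start
    for _ in range(max_handoffs):
        kind, value = agents[current](task)
        if kind == "done":
            return current, value
        current = value  # direct transfer of control; no central supervisor routes work
    raise RuntimeError("handoff limit reached")
```

Note the bounded handoff count: without it, two agents that each believe the other is better suited can pass the task back and forth forever.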

Meta's Rule of Two

A design principle from Meta (October 2025) stating that an agent should hold no more than two of three properties in a single session: processing untrustworthy inputs, access to sensitive systems or private data, and the ability to change state or communicate externally. An agent that holds all three is inherently vulnerable to prompt-injection exploitation. This is a useful constraint when deciding what capabilities to give an agent.
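The rule can be sketched as a configuration-time capability count; this helper is a hypothetical illustration, not part of any Meta library. Pass the set of risky properties the agent holds and refuse to deploy if all three are present:

```python
def violates_rule_of_two(properties: set) -> bool:
    """True if an agent holds more than two risky properties at once."""
    return len(properties) > 2
```

A deployment script could call this with whatever labels your team uses for the three properties and fail fast when the combination is unsafe.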

Andrew Ng's four agent design patterns

Andrew Ng, through DeepLearning.AI, identified four foundational patterns for agentic AI that align with what we have covered.

  1. Reflection. The agent reviews its own output and improves it before presenting a final answer.

  2. Tool use. The agent calls external tools to ground its reasoning in real data.

  3. Planning. The agent breaks complex tasks into sub-steps before executing.

  4. Multi-agent collaboration. Multiple specialised agents work together, each handling a different aspect of the problem.

These are not competing patterns. Production systems typically combine several of them.

Mental model

Patterns reduce surprises

Design patterns give you a known shape so you can test, monitor, and debug agent behaviour.

  1. Pattern
  2. Roles
  3. State
  4. Tools
  5. Checks
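The five checklist items above can be captured as a small reusable record you fill in per agent; the structure and field names here are one possible sketch, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class PatternMap:
    """One entry per checklist item: pattern, roles, state, tools, checks."""
    pattern: str                                # e.g. "plan-and-execute"
    roles: list = field(default_factory=list)   # who does what, with no overlap
    state: list = field(default_factory=list)   # what is shared, and where it lives
    tools: list = field(default_factory=list)   # external capabilities granted
    checks: list = field(default_factory=list)  # validations that run by default
```

Writing the map down before building forces the questions the checklist asks: if you cannot name the roles or the checks, the design is not ready to implement.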

Assumptions to keep in mind

  • Roles are clear. If roles overlap, responsibility blurs and the system becomes hard to reason about.
  • Checks run by default. A check that can be skipped becomes a check that will be skipped.

Failure modes to notice

  • Pattern mismatch. A complex pattern for a simple task adds cost and new ways to fail.
  • Hidden coupling. If multiple roles share hidden state, debugging becomes storytelling.

Key terms

Meta's Rule of Two
A design principle from Meta (October 2025) stating that an agent should hold no more than two of three properties in a single session: processing untrustworthy inputs, access to sensitive systems or private data, and the ability to change state or communicate externally. An agent that holds all three is inherently vulnerable to prompt-injection exploitation. This is a useful constraint when deciding what capabilities to give an agent.

Check yourself

Quick check. Design patterns


When is plan-and-execute a better choice than a single ReAct loop?

When the task has clear sub-steps and you want a plan you can inspect before running actions.

What is the point of a reflection step?

To catch mistakes and improve quality by reviewing output against a checklist or criteria.

Scenario: a task is low risk and must be fast. Which pattern choice is sensible?

Keep it simple. A lighter loop with bounded retries, not heavy reflection.

What is a common way patterns fail when combined badly?

They become hard to test and debug. The agent does too much, hides where it went wrong, and costs explode.

Artefact and reflection

Artefact

A simple pattern map you can reuse when you build agents.

Reflection

Where in your work would choosing a pattern that matches the task and the risk change a decision, and what evidence would make you trust that change?

Optional practice

Map one real task to a pattern and justify the choice.