
Business automation · 2023 and onwards
A mid-sized digital marketing agency in Berlin was spending roughly 15 hours per week on a single process: a team member would check each client's Google Analytics account, copy key metrics into a spreadsheet, write a summary paragraph, and email it to the client. The task required no judgement, only patience. It was exactly the kind of work that automation should handle.
The agency built a single n8n workflow. A schedule trigger ran every Monday at 08:00. An HTTP Request node fetched the Analytics data for each client. An AI Agent node, connected to Claude, drafted a personalised summary from the numbers. A Gmail node sent the summary to the client. A PostgreSQL node logged each run with a timestamp and status. The whole workflow took two days to build and test.
The key architectural decisions were not about the visual editor. They were the same decisions you would make in code: what happens when the Analytics API is rate-limited? What if Claude returns an empty response? What if the Gmail send fails? Each of those failure modes required an explicit answer before the workflow went anywhere near production.
n8n is a visual tool, but the decisions behind a production workflow are the same as in code. What makes the difference between a workflow that runs reliably and one that fails silently under production load?
Not every agent system requires code. n8n provides a visual workflow builder with AI nodes, human-in-the-loop approval, and integrations with hundreds of services. This module shows you when and how to use visual automation instead of code-first agent building.
The module begins by examining what n8n is and when to use it.
n8n (pronounced "n-eight-n") is an open-source workflow automation platform that provides a visual node editor for connecting application programming interfaces (APIs), databases, and AI models. Each "node" represents one operation. Nodes connect to form a workflow that runs in sequence, with branching and looping supported through logic nodes.
Unlike fully no-code platforms, n8n allows JavaScript in any node when the visual approach is insufficient. It can be self-hosted (free, under the n8n fair-code licence) or used via n8n cloud (paid). For most teams, self-hosting via Docker is the starting point because it keeps data and credentials on your own infrastructure.
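To make the "JavaScript in any node" escape hatch concrete, here is a minimal sketch of the kind of per-item transformation a Code node performs. The item shape (`{ json: {...} }`) follows n8n's documented convention; the input data is simulated here so the snippet runs on its own, and the field names are illustrative.

```javascript
// Sketch of the logic a Code node might run ("Run Once for All Items" mode).
// In n8n the items would come from $input.all(); here we simulate them so
// the snippet is self-contained. Field names are illustrative.
const items = [
  { json: { client: "acme", sessions: 1200, conversions: 36 } },
  { json: { client: "globex", sessions: 800, conversions: 12 } },
];

// Derive a conversion rate for each item, keeping n8n's { json: ... } shape
// so downstream nodes can read the new field.
const transformed = items.map((item) => ({
  json: {
    ...item.json,
    conversionRate: +((item.json.conversions / item.json.sessions) * 100).toFixed(1),
  },
}));

console.log(transformed);
// In a real Code node you would `return transformed;` instead of logging.
```

The point is that the visual editor and the code escape hatch share the same data model: every node passes an array of items, and a Code node is just one more node in the chain.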
Use n8n when you are connecting existing tools and services and the people maintaining the workflow are not engineers. Use pure code when you need complex agent logic with deep branching, high-volume production throughput, or fine-grained control over retry behaviour. Both are valid choices; the question is who will maintain the system.
“n8n is source-available under the Sustainable Use License. Self-hosted instances are free for personal and internal business use. Production commercial use of n8n requires a licence.”
n8n Fair-code Licence - github.com/n8n-io/n8n/blob/master/LICENSE.md
Understanding the licence matters before you build critical business processes on n8n. Self-hosted internal use is free. If you are building a product that uses n8n as infrastructure for paying customers, review the licence terms carefully.
With that foundation in place, the next step is setting up n8n locally.
Docker is the recommended installation method because it isolates n8n's dependencies from your local environment and makes upgrades straightforward. The command below starts n8n with persistent data stored in ~/.n8n and exposes the interface on port 5678.
docker run -it --rm \
--name n8n \
-p 5678:5678 \
-v ~/.n8n:/home/node/.n8n \
docker.n8n.io/n8nio/n8n
# n8n is available at http://localhost:5678

For production, set an explicit encryption key. Without one, n8n generates a key automatically and stores it in the data directory; if that file is lost when you move or rebuild the deployment, all stored credentials become unreadable.
docker run -it --rm \
--name n8n \
-p 5678:5678 \
-e N8N_ENCRYPTION_KEY=your-secure-random-string \
-v ~/.n8n:/home/node/.n8n \
docker.n8n.io/n8nio/n8n

Common misconception
“Credentials stored in n8n are safe from all internal users once the admin account is secured.”
n8n encrypts credentials at rest, but any n8n admin can use stored credentials in workflows. Encrypting the storage does not prevent an admin from creating a workflow that exfiltrates a stored API key. Treat n8n admin access as equivalent to direct access to the credentials themselves. Rotate API keys if admin access is revoked.
With n8n running locally, the discussion turns to the AI Agent node and its configuration.
The AI Agent node in n8n implements the agent loop internally. You configure the chat model (Claude, GPT-4, or a local model), the memory strategy (how conversation history is stored between runs), and the tools (sub-nodes connected to the agent that it can call). The Max Iterations setting is the equivalent of MAX_STEPS in a coded agent: it prevents the agent from looping indefinitely.
To connect Claude: go to Credentials in the left sidebar, click Add Credential, select Anthropic, enter your ANTHROPIC_API_KEY, and save. In the AI Agent node, set Chat Model to Anthropic and select the model. Tools are connected as sub-nodes via the "Tools" input connector: HTTP Request nodes, Code nodes, PostgreSQL nodes, and Slack nodes are all valid tool sub-nodes.
Keep Max Iterations at 5 to 10 for most production workflows. A high limit (above 20) makes runaway agent loops expensive before they are detected.
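The arithmetic behind that guidance is simple enough to sketch. The function below estimates the worst-case output of a stuck loop; the per-1,000-token price and tokens-per-step figures are illustrative assumptions, not current pricing.

```javascript
// Worst-case output of a runaway agent loop: every iteration up to the
// limit produces output tokens, and nothing stops the loop early.
// Price per 1,000 output tokens and tokens per step are assumed figures.
function runawayWorstCase(maxIterations, tokensPerStep, pricePer1kTokens) {
  const tokens = maxIterations * tokensPerStep;
  return { tokens, cost: (tokens / 1000) * pricePer1kTokens };
}

// A conservative limit versus a high one, at 200 tokens per step:
console.log(runawayWorstCase(5, 200, 0.015));  // limit within the 5-10 range
console.log(runawayWorstCase(25, 200, 0.015)); // limit above 20: 5x the waste
```

The cost of a single stuck run is small either way; the danger is a trigger that fires repeatedly, multiplying that worst case on every run until someone notices.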
“The AI Agent node in n8n runs the full ReAct loop internally: the model reasons, calls a tool via the connected sub-node, observes the result, and continues until it reaches end_turn or the iteration limit.”
n8n AI Agent node documentation - docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent
It matters that the AI Agent node runs the same agent loop you built in Module 11. The visual editor hides the implementation, but the same failure modes apply: vague tool descriptions cause wrong tool selection, and missing stop conditions cause runaway loops.
With the AI Agent node configured, the next topic is triggers and webhooks.
A workflow needs a trigger: the event that starts it. n8n supports two main trigger types for AI workflows. The Webhook Trigger creates an HTTPS endpoint that external systems can POST to. The Schedule Trigger runs the workflow at a defined time using a cron expression, a compact notation for specifying recurring schedules.
Reference incoming webhook data in workflow expressions using n8n's expression syntax: {{ $json.customer_id }} reads the customer_id field from the POST body. This passes data from the trigger into downstream nodes without writing any code.
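To make the expression concrete, the sketch below simulates how {{ $json.customer_id }} resolves against a webhook's POST body. This is a simplified illustration of the lookup, not n8n's actual expression engine.

```javascript
// Simplified illustration of n8n expression resolution against a webhook
// body. This is NOT n8n's real expression engine, just a sketch of the
// field lookup that {{ $json.field }} performs.
const webhookBody = { customer_id: "cust_042", plan: "pro" };

function resolve(expression, json) {
  // Match a single top-level field reference like {{ $json.customer_id }}.
  const match = expression.match(/\{\{\s*\$json\.(\w+)\s*\}\}/);
  return match ? json[match[1]] : undefined;
}

console.log(resolve("{{ $json.customer_id }}", webhookBody)); // "cust_042"
```

In a real workflow you would type the expression directly into a node parameter field; n8n evaluates it against the current item at run time.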
Common cron expressions for business workflows:
0 9 * * * # Every day at 09:00
0 8 * * 1 # Every Monday at 08:00
0 8 * * 1-5 # Every weekday at 08:00
0 */4 * * * # Every 4 hours

With triggers and webhooks covered, the discussion turns to error handling in production workflows.
A workflow with no error handling silently fails when the AI API is rate-limited or a downstream service is unavailable. Without an Error Trigger node, failed runs disappear into the execution log without alerting anyone. In a customer-facing workflow, this means customers simply do not receive responses.
Three error handling mechanisms should be present in every production n8n workflow. First, the Error Trigger node: connect it to the main workflow and add a Slack or email notification that captures the workflow name, the failing node, and the error message. Second, Retry on Fail: enable this on every node that calls an external API, with a maximum of 2 to 3 tries and a wait of at least 1,000 milliseconds between attempts. Third, an IF node after the AI Agent node that checks for an error field in the response and routes failures to a human review queue rather than directly to the customer.
Common misconception
“If n8n logs show the workflow completed successfully, the customer received the correct response.”
A workflow can complete without error at the n8n level and still produce an incorrect AI response, an empty response, or a response sent to the wrong recipient. Add a step that validates the AI Agent output before sending: check that it is non-empty, within expected length, and does not contain an error marker. Log the content sent, not just the send status.
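The validation step described above could live in a Code node placed between the AI Agent and the send. The sketch below is one possible version; the length bound and the error-marker pattern are illustrative assumptions to tune against your workflow's actual responses.

```javascript
// Sketch of an AI-output validation step for a Code node placed before the
// send. The length bound and the error-marker regex are illustrative
// assumptions, not n8n defaults.
function validateAiOutput(text) {
  const problems = [];
  if (!text || text.trim().length === 0) problems.push("empty response");
  if (text && text.length > 4000) problems.push("unexpectedly long response");
  if (text && /\b(error|exception|traceback)\b/i.test(text)) {
    problems.push("contains an error marker");
  }
  return { ok: problems.length === 0, problems };
}

console.log(validateAiOutput(""));
console.log(validateAiOutput("Thanks for reaching out. We'll reply today."));
console.log(validateAiOutput("Internal error: quota exceeded"));
```

A failing result should route the item to the human review queue via an IF node, and both branches should log the content that was (or was not) sent.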
With error handling in place, everything comes together in a complete example: the email triage agent.
This workflow reads incoming emails, uses an AI agent to classify and draft responses, and routes them based on urgency. It demonstrates all the patterns covered in this module: a real-time trigger, an AI Agent node with tools, conditional routing, and database logging.
Gmail Trigger (new email arrives)
|
Set node: extract subject, body, sender
|
AI Agent node:
System: "Classify this email as urgent/normal/spam.
Draft a short response if not spam."
Input: {{ $json.subject }} {{ $json.body }}
Max Iterations: 5
|
IF node: classification == "urgent"
True:
-> Slack node: notify #support-team
-> Gmail node: add label URGENT
False:
-> IF node: classification == "spam"
True: Gmail node: move to spam
False: Gmail node: send AI-drafted reply
|
PostgreSQL node: log email, classification, response, timestamp

You have built an n8n workflow processing customer support emails. It ran for two days without issue and then started failing silently. Customers stopped receiving responses. You had not added an Error Trigger. What is the first thing you should add, and what information must it capture?
Your AI Agent node's Max Iterations is set to 50. The AI API costs $0.015 per 1,000 output tokens. A runaway loop produces 200 tokens per step. How many tokens could a single stuck workflow produce before hitting the limit?
You need a workflow to run every weekday (Monday to Friday) at 08:00. Which node type and cron expression is correct?
You are building an n8n workflow for a client that requires the AI Agent node to access their internal PostgreSQL database. Where should you store the database credentials?
docs.n8n.io: Getting Started and Core Concepts
Official reference for all nodes, workflow anatomy, credentials management, and trigger configuration used throughout this module.
n8n AI Agent node documentation
docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent
Specific reference for AI Agent node configuration, Max Iterations, memory settings, and connecting tool sub-nodes.
docs.n8n.io/hosting: Docker installation with security configuration
Reference for N8N_ENCRYPTION_KEY configuration and persistent volume setup discussed in Section 13.2.
Interactive cron expression editor
Interactive tool for building and verifying cron expressions. Used to validate the weekday schedule expression in the knowledge check.
github.com/n8n-io/n8n/blob/master/LICENSE.md
Cited in Section 13.1 to clarify that self-hosted internal use is free but commercial redistribution requires a licence.
Module 13 of 25 · Practical Building