n8n AI Agents, from Basics to Advanced

n8n's AI agent features are built on LangChain under the hood, which makes them very powerful.
Below we explore n8n's AI capabilities by splitting the learning path into five stages, from the simplest one-question-one-answer setup to complex multi-agent collaboration.

Stage 1: Hello World (Basic LLM Chain)

Description

Goal: have the AI process a piece of text (e.g. summarize an article or translate it). This is the most basic input -> process -> output pattern.
Core concepts
1. Model: the AI's brain (e.g. GPT-4).
2. Chain: the logic that connects the prompt to the model.
Build steps

  1. Add a Manual Trigger. (The exported workflow below actually uses a Chat Trigger, which gives you a chat box for input.)
  2. Add a Basic LLM Chain node: the simplest AI node, suited to single-shot tasks.
  3. Add an OpenAI Chat Model node and connect it to the Basic LLM Chain's "Model" input (you will see a dedicated small connector dot on the node).
  4. Configure the Basic LLM Chain. Prompt: "请把下面这段话翻译成中文,并用鲁迅的风格" (translate the following passage into Chinese, in the style of Lu Xun).
  5. Test: feed the trigger the sample input {"text": "Hello world, hope you are doing well."} and run the workflow to check the result. A conceptual sketch of what the chain does follows these steps.
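
To make the prompt -> model -> output flow concrete, here is a minimal, library-free Python sketch of what a basic LLM chain does conceptually. The call_model function is a hypothetical stand-in for the attached OpenAI Chat Model node; this is not n8n or LangChain code.

# Conceptual sketch of a "Basic LLM Chain": prompt template -> model -> output.
# call_model() is a hypothetical placeholder for the connected chat model node.
def call_model(messages):
    # In n8n this is the attached chat model; here we just echo for illustration.
    return f"[model reply to: {messages[-1]['content']}]"
def basic_llm_chain(user_text, instruction):
    # The chain simply combines the configured prompt with the incoming item
    # and sends both to the connected model -- no memory, no tools.
    messages = [
        {"role": "system", "content": instruction},
        {"role": "user", "content": user_text},
    ]
    return call_model(messages)
print(basic_llm_chain(
    "Hello world, hope you are doing well.",
    "请把下面这段话翻译成中文,并用鲁迅的风格",
))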

Source code

{
"name": "My workflow 6",
"nodes": [
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.3,
"position": [
-240,
-32
],
"id": "49f381a2-4c9d-4993-adfa-e1d5d63beb2b",
"name": "When chat message received",
"webhookId": "c7fc669a-63de-4a1e-8401-e5f7c61108e6"
},
{
"parameters": {
"messages": {
"messageValues": [
{
"message": "请把下面这段话翻译成中文,并用鲁迅的风格"
}
]
},
"batching": {}
},
"type": "@n8n/n8n-nodes-langchain.chainLlm",
"typeVersion": 1.7,
"position": [
64,
-32
],
"id": "c4576b18-19c0-4f34-9d4f-eee4cc906cbd",
"name": "Basic LLM Chain"
},
{
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
-16,
176
],
"id": "197e1928-ab2e-4b9d-a266-db41d573d299",
"name": "OpenAI Chat Model",
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
}
],
"pinData": {},
"connections": {
"When chat message received": {
"main": [
[
{
"node": "Basic LLM Chain",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "Basic LLM Chain",
"type": "ai_languageModel",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "5950690f-7731-4edf-ba2e-1c005a7acc7d",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "24e8ce0d76b3b71be4393c3e9c1e6f0c2fe2f9b84660c2d47973ae9ce115bd19"
},
"id": "KCa7sIf3r7M3KXrQ",
"tags": []
}

Stage 2: A Chatbot That Remembers (Memory)

Description

Goal: build a conversational bot that remembers context. A basic chain has no memory: ask it "who am I?" and it has no idea what you said in the previous turn.

Core concepts

  • AI Agent: a step up from the Basic Chain; it can make its own decisions, call tools, and manage memory.
  • Memory: stores the conversation history.

Build steps

  1. Add a Chat Trigger (a trigger with a built-in chat UI for testing).
  2. Add an AI Agent node.
    • Connect the trigger to the agent.
  3. Add an OpenAI Chat Model -> connect it to the agent's Model port.
  4. Add a Simple Memory node.
    • Connection: attach it to the agent's "Memory" port.
    • What it does: it automatically feeds the most recent K messages (say, 5) of the conversation back to the AI, so the model knows the context (a sketch of this sliding window follows these steps).
  5. Test
    • Click the Chat Trigger to open the chat window.
    • Type: "My name is Xiao Ming."
    • Then type: "What is my name?" -> the AI answers "Your name is Xiao Ming." (Without memory it could not answer this.)
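
A minimal Python sketch of the sliding-window idea behind the Simple Memory node (a buffer-window memory). This is a conceptual illustration only, not n8n's actual implementation; the class and method names are made up.

# Conceptual sketch of a buffer-window memory: keep only the last K exchanges
# and prepend them to each new request so the model sees recent context.
class WindowMemory:
    def __init__(self, k=5):
        self.k = k           # number of past exchanges to keep
        self.history = []    # list of (role, text) tuples
    def add(self, role, text):
        self.history.append((role, text))
    def context(self):
        # Only the most recent k exchanges (2 * k messages) survive.
        return self.history[-2 * self.k:]
memory = WindowMemory(k=5)
memory.add("user", "我叫小明。")
memory.add("assistant", "你好,小明!")
memory.add("user", "我叫什么名字?")
# The agent sends memory.context() plus the new question to the model,
# which is why it can answer "你叫小明".
print(memory.context())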

Workflow

{
"name": "My workflow 6",
"nodes": [
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.3,
"position": [
0,
0
],
"id": "ee165563-69fc-41d5-be8d-3f0522440d17",
"name": "When chat message received",
"webhookId": "6987d39b-c474-4023-89b5-d519deba4170"
},
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 2.2,
"position": [
208,
0
],
"id": "d2bb54d9-4090-42d9-ba74-70dbf6fb5720",
"name": "AI Agent"
},
{
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
80,
208
],
"id": "2947a1e1-02ae-45f3-b32a-b69b1bf50816",
"name": "OpenAI Chat Model",
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {},
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"typeVersion": 1.3,
"position": [
224,
208
],
"id": "9012e19d-ac87-4a47-90f0-5505cf85bbb3",
"name": "Simple Memory"
}
],
"pinData": {},
"connections": {
"When chat message received": {
"main": [
[
{
"node": "AI Agent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "AI Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Simple Memory": {
"ai_memory": [
[
{
"node": "AI Agent",
"type": "ai_memory",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "2b9e1772-6b0c-4fa2-a98a-c8051519dd6d",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "24e8ce0d76b3b71be4393c3e9c1e6f0c2fe2f9b84660c2d47973ae9ce115bd19"
},
"id": "KCa7sIf3r7M3KXrQ",
"tags": []
}

Stage 3: Giving the AI "Hands" (Tools / Function Calling)

Description

Goal: let the AI do things it cannot do on its own (e.g. solve hard math, look up live weather, query a database). A large language model has no access to real-time information and is unreliable at arithmetic, so we give it "tools".

Core concepts

  • Tools: capabilities the AI can decide to call.

Build steps

  1. Keep the Stage 2 structure (Chat Trigger + AI Agent + Model + Memory).
  2. Adjust the AI Agent configuration.
    • Make sure the agent type is set to "Tools Agent" (or OpenAI Functions Agent).
  3. Add a Calculator node (an n8n built-in tool).
    • Connection: attach it to the agent's "Tools" port.
  4. (Optional) Add a Wikipedia node.
    • Connection: attach it to the agent's "Tools" port.
  5. Test
    • Ask in the chat box: "What is 253 to the power of 45?"
    • What happens: the AI realizes it cannot compute this reliably -> decides to call the Calculator tool -> gets the result -> phrases an answer for you (a sketch of this loop follows these steps).
    • Ask: "Who won the 2022 World Cup?" -> the AI calls Wikipedia -> answers "Argentina".
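
Here is a simplified, library-free Python sketch of the think -> call tool -> observe -> answer loop an agent runs. The decide() heuristic stands in for the LLM's function-calling decision; the names are hypothetical and this is not how n8n implements it internally.

# Conceptual sketch of an agent's tool-calling loop.
# decide() is a stand-in for the model choosing whether (and how) to call a tool.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only
}
def decide(question):
    # A real agent asks the model; a crude heuristic is enough for illustration.
    if any(ch.isdigit() for ch in question):
        return ("calculator", "253 ** 45")
    return (None, None)
def agent(question):
    tool_name, tool_input = decide(question)
    if tool_name:
        observation = TOOLS[tool_name](tool_input)   # act: run the tool
        return f"The answer is {observation}."       # the model rephrases the result
    return "I'll answer that directly."              # no tool needed
print(agent("What is 253 to the power of 45?"))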

n8n workflow

{
"name": "My workflow 6",
"nodes": [
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.3,
"position": [
0,
0
],
"id": "ee165563-69fc-41d5-be8d-3f0522440d17",
"name": "When chat message received",
"webhookId": "6987d39b-c474-4023-89b5-d519deba4170"
},
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 2.2,
"position": [
208,
0
],
"id": "d2bb54d9-4090-42d9-ba74-70dbf6fb5720",
"name": "AI Agent"
},
{
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
80,
208
],
"id": "2947a1e1-02ae-45f3-b32a-b69b1bf50816",
"name": "OpenAI Chat Model",
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {},
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"typeVersion": 1.3,
"position": [
224,
208
],
"id": "9012e19d-ac87-4a47-90f0-5505cf85bbb3",
"name": "Simple Memory"
},
{
"parameters": {},
"type": "@n8n/n8n-nodes-langchain.toolWikipedia",
"typeVersion": 1,
"position": [
464,
304
],
"id": "216df2bc-7621-4b4b-be4f-b935df2a351c",
"name": "Wikipedia"
},
{
"parameters": {},
"type": "@n8n/n8n-nodes-langchain.toolCalculator",
"typeVersion": 1,
"position": [
368,
208
],
"id": "8dcfcd32-ad68-49f9-9581-96c4a714daf6",
"name": "Calculator"
}
],
"pinData": {},
"connections": {
"When chat message received": {
"main": [
[
{
"node": "AI Agent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "AI Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Simple Memory": {
"ai_memory": [
[
{
"node": "AI Agent",
"type": "ai_memory",
"index": 0
}
]
]
},
"Wikipedia": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
},
"Calculator": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "9620cf18-211c-43c3-b511-65cdf9c4a620",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "24e8ce0d76b3b71be4393c3e9c1e6f0c2fe2f9b84660c2d47973ae9ce115bd19"
},
"id": "KCa7sIf3r7M3KXrQ",
"tags": []
}

Stage 4: Letting the AI Read Your Private Data (RAG)

Description

Goal: have the AI answer questions from your PDFs, Notion documents, or company handbook rather than from the public knowledge it was trained on. This is RAG (Retrieval-Augmented Generation).

Core concepts

  • Vector Store: where the data lives (e.g. Pinecone, Qdrant, or n8n's built-in In-Memory store).
  • Embeddings: turn text into numeric vectors so it can be searched by similarity (see the sketch below).
  • Retriever: the tool the AI uses to look things up in the store.
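
To make "embeddings + similarity search" concrete, here is a toy Python sketch. Real RAG systems use a learned embedding model (such as the OpenAI Embeddings node in the workflow below); the bag-of-words vectors here are purely illustrative.

import math
# Toy "embedding": a bag-of-words vector over a shared vocabulary.
# This only illustrates the idea of "text -> vector -> similarity search".
def tokenize(text):
    return [w.strip("?,.!:").lower() for w in text.split()]
docs = [
    "Spring Festival holiday: 8 days off",
    "Annual leave: 5 days per year",
    "National Day holiday: 7 days off",
]
vocab = sorted({w for d in docs for w in tokenize(d)})
def embed(text):
    words = tokenize(text)
    return [float(words.count(w)) for w in vocab]
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0
store = [(d, embed(d)) for d in docs]     # "vector store": each chunk plus its vector
question = "How many days off for the Spring Festival holiday?"
q_vec = embed(question)                   # retriever: embed the question...
best = max(store, key=lambda item: cosine(q_vec, item[1]))   # ...find the nearest chunk
print("Most relevant chunk:", best[0])
# The agent then pastes this chunk into the prompt before answering.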

Build steps (simplified)

  1. Build the knowledge base (ingestion).
    • On Form Submission (receives an uploaded .txt file) -> Default Data Loader -> Recursive Character Text Splitter (splits long text into chunks; see the chunking sketch after these steps) -> Pinecone Vector Store with the operation set to Insert (for the demo here, the In-Memory store is actually used instead).
    • This step writes your documents into the vector store.
  2. Build the Q&A bot.
    • Same structure as Stage 3 (Agent + Model + Memory).
    • Add a Vector Store Tool node.
    • Connection: attach it to the agent's "Tools" port.
    • Configuration: point the Vector Store Tool at your vector store (Pinecone, or the In-Memory store here) and set the operation to "Retrieve".
  3. Test
    • Suppose you uploaded "公司 2024 年放假安排.txt" (the company's 2024 holiday schedule).
    • Ask the AI: "How many days off do we get for Spring Festival?"
    • The AI searches the relevant passages via the Vector Store Tool, reads them, and then answers you.
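
The splitter in the workflow below is configured with chunkSize 500 and chunkOverlap 50. Here is a minimal Python sketch of fixed-size chunking with overlap (the real Recursive Character Text Splitter additionally prefers to cut on paragraph and sentence boundaries before falling back to a hard character limit):

# Minimal fixed-size chunking with overlap, mirroring chunkSize=500 / chunkOverlap=50.
def split_text(text, chunk_size=500, chunk_overlap=50):
    chunks = []
    step = chunk_size - chunk_overlap     # consecutive chunks share 50 characters
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks
document = "春节放假 8 天。" * 200          # pretend this is the uploaded .txt file
print(len(split_text(document)), "chunks; the overlap preserves context across cuts")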

n8n workflow

{
"name": "My workflow 6",
"nodes": [
{
"parameters": {
"mode": "insert"
},
"id": "3fdbef0d-5785-4376-8047-b67302b7301e",
"name": "In-Memory Vector Store (Insert)",
"type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"typeVersion": 1,
"position": [
-1120,
48
]
},
{
"parameters": {
"dataType": "binary",
"options": {}
},
"id": "8881fb1e-0d1b-46b8-95ca-7052e74beccf",
"name": "Default Data Loader",
"type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"typeVersion": 1,
"position": [
-1280,
320
]
},
{
"parameters": {
"chunkSize": 500,
"chunkOverlap": 50,
"options": {}
},
"id": "cfc88ab7-39da-4758-9b19-35cb4aa898c0",
"name": "Recursive Character Text Splitter",
"type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"typeVersion": 1,
"position": [
-1280,
512
]
},
{
"parameters": {
"options": {}
},
"id": "96a656b5-0106-497b-9f8a-827bee90513c",
"name": "OpenAI Embeddings",
"type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"typeVersion": 1,
"position": [
-928,
400
],
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"options": {
"systemMessage": "你是一个文档分析助手。请根据从 Vector Store 检索到的内容回答问题。如果文档中没有相关信息,请直接说不知道。"
}
},
"id": "2cc82c62-0b96-4e98-88f3-541bc148a105",
"name": "AI Agent (Summarizer)",
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 1.7,
"position": [
-592,
96
]
},
{
"parameters": {
"options": {}
},
"id": "a7b5b2f7-0d52-447a-a9ea-0fdb1d6bbf1c",
"name": "OpenAI Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1,
"position": [
-592,
336
],
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"name": "uploaded_document",
"description": "User uploaded document content"
},
"id": "f5380b06-9213-4a8c-afc2-68a2c76f26b5",
"name": "Vector Store Tool",
"type": "@n8n/n8n-nodes-langchain.toolVectorStore",
"typeVersion": 1,
"position": [
-432,
256
]
},
{
"parameters": {},
"id": "b7287970-48fa-4320-8598-9f47aec7bd06",
"name": "In-Memory Vector Store (Retrieve)",
"type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"typeVersion": 1,
"position": [
-608,
544
]
},
{
"parameters": {
"formTitle": "Upload",
"formFields": {
"values": [
{
"fieldLabel": "my_file",
"fieldType": "file"
}
]
},
"options": {}
},
"type": "n8n-nodes-base.formTrigger",
"typeVersion": 2.3,
"position": [
-1360,
80
],
"id": "5877db05-ddb5-4c2d-97ee-873daa605caf",
"name": "On form submission",
"webhookId": "023e15e9-4725-4aa5-ad5c-7a10711f5789"
},
{
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4.1-mini"
},
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1.2,
"position": [
-256,
448
],
"id": "7b5f5f0a-3cea-4656-ae2a-6ff895f09594",
"name": "OpenAI Chat Model1",
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"options": {}
},
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.3,
"position": [
-768,
-96
],
"id": "bc6b881c-923a-424a-97ff-7dc2d0f4f22b",
"name": "When chat message received",
"webhookId": "66fbe2df-3670-43d0-af69-dbae96341e1d"
}
],
"pinData": {},
"connections": {
"Default Data Loader": {
"ai_document": [
[
{
"node": "In-Memory Vector Store (Insert)",
"type": "ai_document",
"index": 0
}
]
]
},
"Recursive Character Text Splitter": {
"ai_textSplitter": [
[
{
"node": "Default Data Loader",
"type": "ai_textSplitter",
"index": 0
}
]
]
},
"OpenAI Embeddings": {
"ai_embedding": [
[
{
"node": "In-Memory Vector Store (Insert)",
"type": "ai_embedding",
"index": 0
},
{
"node": "In-Memory Vector Store (Retrieve)",
"type": "ai_embedding",
"index": 0
}
]
]
},
"In-Memory Vector Store (Insert)": {
"main": [
[]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "AI Agent (Summarizer)",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Vector Store Tool": {
"ai_tool": [
[
{
"node": "AI Agent (Summarizer)",
"type": "ai_tool",
"index": 0
}
]
]
},
"In-Memory Vector Store (Retrieve)": {
"ai_vectorStore": [
[
{
"node": "Vector Store Tool",
"type": "ai_vectorStore",
"index": 0
}
]
]
},
"On form submission": {
"main": [
[
{
"node": "In-Memory Vector Store (Insert)",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model1": {
"ai_languageModel": [
[
{
"node": "Vector Store Tool",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"When chat message received": {
"main": [
[
{
"node": "AI Agent (Summarizer)",
"type": "main",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "6ddcf6f3-9ad0-4c3e-ac8b-b957f998e915",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "24e8ce0d76b3b71be4393c3e9c1e6f0c2fe2f9b84660c2d47973ae9ce115bd19"
},
"id": "KCa7sIf3r7M3KXrQ",
"tags": []
}

Stage 5: Multi-Agent Collaboration

This is a minimal multi-agent example based on the "Workflow as Tool" pattern from the official n8n documentation.

The core idea of this pattern: a supervisor agent delegates tasks by calling a "tool", and that "tool" is in fact another n8n workflow (a worker agent).

To keep it directly runnable, we will build a "math and small-talk team" (a conceptual sketch of the pattern follows the two roles below):

  1. Worker agent: a math specialist dedicated to calculation (equipped with a Calculator tool).
  2. Supervisor agent: greets the user, answers small talk itself, and hands arithmetic questions off to the worker.
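
Below is a minimal Python sketch of the "workflow as tool" idea: the sub-workflow is exposed to the supervisor as just another callable tool with a name and a description. The function names and the routing heuristic are hypothetical; in n8n the call actually goes through the Call n8n Workflow Tool node.

# Conceptual sketch of "workflow as tool": the worker workflow looks like an
# ordinary tool to the supervisor. Names and routing here are hypothetical.
def math_worker(payload):
    # Stands in for executing the "Math Worker" n8n workflow with {"query": ...}.
    return f"[Math Worker result for: {payload['query']}]"
TOOLS = {
    "call_math_expert": {
        "description": "Call this for math questions. Input: {'query': <problem>}",
        "run": math_worker,
    },
}
def supervisor(user_message):
    # A real supervisor lets the model decide; a crude digit check suffices here.
    if any(ch.isdigit() for ch in user_message):
        tool = TOOLS["call_math_expert"]
        return tool["run"]({"query": user_message})
    return "Just small talk -- the supervisor answers this itself."
print(supervisor("你好,你叫什么?"))
print(supervisor("(123 * 45) + 888 等于多少?"))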

Step 1: Create the worker agent (Worker)

This workflow is the "math expert" that does the actual work: it receives a question, computes the answer, and returns it.

Steps

  1. Copy the JSON below.
  2. Paste it onto the n8n canvas.
  3. Save the workflow and name it Math Worker.
  4. Note its ID (after saving, the xyz part of the workflow/xyz... URL in the browser address bar is the ID; or simply remember the name).

Worker agent JSON (Math Worker)

{
"name": "Math Worker",
"nodes": [
{
"parameters": {},
"id": "73260b4b-2fd8-47f4-bce8-28c15c57052a",
"name": "Execute Workflow Trigger",
"type": "n8n-nodes-base.executeWorkflowTrigger",
"typeVersion": 1,
"position": [
-464,
208
]
},
{
"parameters": {
"assignments": {
"assignments": [
{
"id": "fix-input-name",
"name": "chatInput",
"value": "={{ $json.query || $json.input || $json.message }}",
"type": "string"
}
]
},
"options": {}
},
"id": "94a6532b-9a98-4b7e-9af3-bc7cee7e38a5",
"name": "Adapt Input to chatInput",
"type": "n8n-nodes-base.set",
"typeVersion": 3.4,
"position": [
-240,
208
]
},
{
"parameters": {
"options": {
"systemMessage": "你是一个数学专家。通过 Calculator 工具解决数学问题。请直接输出数字结果。"
}
},
"id": "24ed21fa-6c0c-4a2e-aee5-0619188be12b",
"name": "Math Agent",
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 1.7,
"position": [
-32,
208
]
},
{
"parameters": {
"options": {}
},
"id": "fb53a73e-2350-4e4f-8812-f52d640d9699",
"name": "OpenAI Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1,
"position": [
-32,
432
],
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {},
"id": "13ffc8ab-a9d8-4945-8a08-80642ba4f239",
"name": "Calculator",
"type": "@n8n/n8n-nodes-langchain.toolCalculator",
"typeVersion": 1,
"position": [
128,
432
]
}
],
"pinData": {},
"connections": {
"Execute Workflow Trigger": {
"main": [
[
{
"node": "Adapt Input to chatInput",
"type": "main",
"index": 0
}
]
]
},
"Adapt Input to chatInput": {
"main": [
[
{
"node": "Math Agent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "Math Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Calculator": {
"ai_tool": [
[
{
"node": "Math Agent",
"type": "ai_tool",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "3e0e70a3-0b0a-4610-9ea7-c8ee73d88bb9",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "24e8ce0d76b3b71be4393c3e9c1e6f0c2fe2f9b84660c2d47973ae9ce115bd19"
},
"id": "g4Ty6ILA2hHX0Ozr",
"tags": []
}


Step 2: Create the supervisor agent (Supervisor)

This workflow is the "manager". It has one special tool, Call n8n Workflow Tool, which connects it to the worker agent above.

Steps

  1. Create a new workflow.
  2. Copy the JSON below and paste it in.
  3. Key configuration (see the hand-off sketch after these steps)
    • Find the "Call n8n Workflow Tool" node (it hangs off the Supervisor Agent).
    • Double-click to open it.
    • In the Workflow ID dropdown, select the Math Worker workflow you just saved.
  4. Configure the OpenAI credentials (if the node shows red).
  5. Click the Chat Trigger to test.
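
Step 3 is where the two workflows meet: the supervisor's tool call produces a JSON object such as {"query": "..."}, and the worker's "Adapt Input to chatInput" Set node normalizes whichever field arrives into the chatInput field its agent expects. A tiny Python sketch of that mapping (the field names come from the two JSON exports; everything else is illustrative):

# Sketch of the hand-off between supervisor and worker, mirroring the Set node
# expression {{ $json.query || $json.input || $json.message }} in Math Worker.
def adapt_input(item):
    return {"chatInput": item.get("query") or item.get("input") or item.get("message")}
payload = {"query": "(123 * 45) + 888"}   # what the supervisor's tool call sends
print(adapt_input(payload))               # {'chatInput': '(123 * 45) + 888'}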

Supervisor agent JSON (Supervisor)

{
"name": "My workflow 7",
"nodes": [
{
"parameters": {
"options": {}
},
"id": "3bfc3b38-9f78-4bba-a8b0-37a8f3fb75c5",
"name": "Chat Trigger",
"type": "@n8n/n8n-nodes-langchain.chatTrigger",
"typeVersion": 1.1,
"position": [
-304,
-96
],
"webhookId": "supervisor-chat-fixed"
},
{
"parameters": {
"options": {
"systemMessage": "你是一个主管。对于普通聊天,请直接友好回复。如果用户问数学问题,请务必调用 'call_math_expert' 工具来获得答案。"
}
},
"id": "f1e6fe8b-f6f9-4c4a-8b3f-83a6496ffd51",
"name": "Supervisor Agent",
"type": "@n8n/n8n-nodes-langchain.agent",
"typeVersion": 1.7,
"position": [
-80,
-96
]
},
{
"parameters": {
"options": {}
},
"id": "7b3cc76a-9e20-4c76-8987-de52b5c585ef",
"name": "OpenAI Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"typeVersion": 1,
"position": [
-80,
144
],
"credentials": {
"openAiApi": {
"id": "xfOE2dYd3ERD65Dr",
"name": "OpenAi account"
}
}
},
{
"parameters": {
"name": "call_math_expert",
"description": "当遇到需要精确计算的数学问题时调用此工具。Inputs must be a JSON object with a property 'query' containing the math problem.",
"workflowId": "g4Ty6ILA2hHX0Ozr"
},
"id": "f7813c2a-ff57-4bd1-90fc-3d336d498b1d",
"name": "Call n8n Workflow Tool",
"type": "@n8n/n8n-nodes-langchain.toolWorkflow",
"typeVersion": 1,
"position": [
160,
144
]
}
],
"pinData": {},
"connections": {
"Chat Trigger": {
"main": [
[
{
"node": "Supervisor Agent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "Supervisor Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"Call n8n Workflow Tool": {
"ai_tool": [
[
{
"node": "Supervisor Agent",
"type": "ai_tool",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "718ac71f-7d11-460d-b407-2bc0b5153d97",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "24e8ce0d76b3b71be4393c3e9c1e6f0c2fe2f9b84660c2d47973ae9ce115bd19"
},
"id": "H9pULLyUQxNhewLm",
"tags": []
}

How to test

In the Supervisor Agent's chat window:

  1. Type: "Hi, what's your name?"
    • Result: the supervisor answers directly (without calling the worker).
  2. Type: "Please calculate (123 * 45) + 888."
    • Result: the supervisor shows "Used tool: call_math_expert", meaning it successfully dispatched the task to the Math Worker, received the result (6423), and relayed it back to you.

Why does this count as multi-agent?

In a simple example like this it looks like an ordinary tool call, but in more complex scenarios:

  • A worker can have its own Memory.
  • A worker can have its own Tools (e.g. Google search, database queries).
  • You can run several workers (say, one that writes code, one that does research, one that generates images), with the supervisor coordinating between them. That is the most basic Hierarchical Agent Team architecture.