Telegram automation with n8n: building AI agents
This n8n workflow builds an intelligent agent on Telegram, capable of answering a wide range of requests using advanced AI models. In a context where instant communication is essential, this kind of n8n automation lets businesses provide fast, efficient customer support while integrating tools such as OpenAI and Google Gemini. Use cases include handling frequently asked questions, providing real-time information, and offering personalized assistance.
- Step 1: The flow starts with a Telegram trigger that captures incoming messages.
- Step 2: A Switch node evaluates the message content to determine the appropriate response path.
- Step 3: Depending on the request type, the workflow can call OpenAI or Google Gemini to generate responses.
- Step 4: Responses are then sent back via Telegram, ensuring a smooth interaction. This workflow delivers significant benefits for businesses, notably shorter response times and a better customer experience, while freeing up time for human teams.
n8n Telegram customer support workflow: overview
Diagram of this workflow's nodes and connections, generated from the n8n JSON.
n8n Telegram customer support workflow: node details
{
"id": "WjyQKQIrpF9AO1Zf",
"meta": {
"instanceId": "044779692a3324ef2f6b23bb7a885c96eeeb4570ffe4cda096e1b9cb0126214c",
"templateCredsSetupCompleted": true
},
"name": "DSP Agent",
"tags": [],
"nodes": [
{
"id": "44c8327c-2317-4661-871c-e83f0e0c99dc",
"name": "Telegram Trigger",
"type": "n8n-nodes-base.telegramTrigger",
"position": [
-80,
20
],
"webhookId": "ece1b7c8-0758-4c1f-8db2-6a14ba1ed182",
"parameters": {
"updates": [
"message"
],
"additionalFields": {
"download": false
}
},
"credentials": {
"telegramApi": {
"id": "jo0nQp1JkF7jiljY",
"name": "Telegram account"
}
},
"typeVersion": 1.1
},
{
"id": "7754451c-5859-4667-bfd4-34d5c0f9fe71",
"name": "Switch",
"type": "n8n-nodes-base.switch",
"position": [
200,
-320
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "text",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "b8cc5586-5c76-4295-b8ba-1cecfa47cc5d",
"operator": {
"type": "string",
"operation": "exists",
"singleValue": true
},
"leftValue": "={{ $json.message.text }}",
"rightValue": ""
}
]
},
"renameOutput": true
},
{
"outputKey": "voice",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "66856d79-632e-4e2d-9e54-6e28df629aeb",
"operator": {
"type": "string",
"operation": "exists",
"singleValue": true
},
"leftValue": "={{ $json.message.voice.file_id }}",
"rightValue": ""
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"retryOnFail": false,
"typeVersion": 3.2,
"alwaysOutputData": false
},
{
"id": "8ce621b6-8546-4454-b658-675130342d9c",
"name": "Edit Fields",
"type": "n8n-nodes-base.set",
"position": [
520,
-480
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "4e2b9056-34d7-4867-8f1e-4265fe80bb8c",
"name": "text",
"type": "string",
"value": "={{ $('Telegram Trigger').item.json.message.text }}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "e3bfc970-b16b-4a78-8864-19c476274b26",
"name": "Telegram",
"type": "n8n-nodes-base.telegram",
"position": [
420,
-220
],
"webhookId": "21933f09-43da-413d-ab94-a6af068c35b6",
"parameters": {
"fileId": "={{ $json.message.voice.file_id }}",
"resource": "file"
},
"credentials": {
"telegramApi": {
"id": "XyQMIzmMm1P4BOPV",
"name": "Telegram account 2"
}
},
"typeVersion": 1.2
},
{
"id": "6473e7bd-6abf-4c49-adaa-68cb78484824",
"name": "OpenAI",
"type": "@n8n/n8n-nodes-langchain.openAi",
"position": [
560,
-220
],
"parameters": {
"options": {},
"resource": "audio",
"operation": "transcribe"
},
"credentials": {
"openAiApi": {
"id": "hdG9YDSe5xnemDwc",
"name": "OpenAi account"
}
},
"typeVersion": 1.8
},
{
"id": "e7b1d605-ef8e-4d3f-898a-9f947d445630",
"name": "AI Agent",
"type": "@n8n/n8n-nodes-langchain.agent",
"position": [
1040,
0
],
"parameters": {
"text": "={{ $json.text }}",
"options": {
"systemMessage": "=\n**Current time and date:** {{$now}} \n\nHey there! You are an advanced study assistant, built to help students tackle complex problems in signal processing. You’re not just here to give answers—you’re here to **guide the user, deepen their understanding, and make learning more interactive**. \n\nYou have access to several powerful tools, and knowing when and how to use them is key to being truly effective. Here’s what you can do and how you should approach each situation: \n\n### **Your Capabilities and How to Use Them** \n\n#### **1. Language Model (LLM) – Your Core Intelligence** \n- You analyze questions, provide explanations, refine wording, and help the user grasp key signal processing concepts. \n- Your job is to **guide the user toward the solution** rather than just giving direct answers—ask the right questions to encourage deeper thinking. \n\n#### **2. Wikipedia Access – Your Knowledge Base** \n- When a user asks about theoretical concepts, mathematical principles, or physics-related topics, you can **retrieve summarized, reliable information** from Wikipedia. \n- This is great for definitions, historical context, and fundamental principles that support problem-solving. \n\n#### **3. Calculator – Your Instant Problem Solver** \n- You can quickly compute mathematical expressions, integrals, derivatives, and more. \n- Use this tool when the user needs a quick numerical solution or when breaking down an equation. \n\n#### **4. Memory Storage – Your Personalization Engine** \n- You **remember relevant user details** to provide a more personalized experience. \n- This allows you to track learning progress, recall previous topics, and offer tailored recommendations. \n\n#### **5. (Coming Soon) Python / MATLAB Code Generation – Your Computational Power** \n- Once integrated, you’ll be able to **generate Python and MATLAB code** to solve signal processing problems. \n- This will include tasks like designing filters, performing Fourier transforms, and running simulations to analyze data. \n\n- contentCreatorAgent: Use this tool to create blog posts\n---\n\n### **How You Should Interact with the User** \n\n#### **Step 1: Understand the User’s Needs** \n- If the question is unclear, don’t assume—**ask for clarification** or guide them with follow-up questions. \n- Figure out if they need a **theoretical explanation, a step-by-step solution, or just study guidance**. \n\n#### **Step 2: Choose the Right Approach** \n- If it’s a **theory-based question**, pull relevant knowledge from Wikipedia or explain it in your own words. \n- If it’s a **numerical problem**, use the calculator or suggest an appropriate method to solve it. \n- If it requires **MATLAB or Python-based solutions**, propose an implementation and (once available) generate the code. \n\n#### **Step 3: Encourage Learning and Retention** \n- Always check if the user **fully understands the topic**—ask follow-up questions if necessary. \n- If they struggle, offer alternative explanations or different ways to approach the problem. \n- Use your memory storage to **connect topics and build continuity**, so the learning experience feels more cohesive over time. \n\nYour role isn’t just to answer questions—you’re a **mentor, tutor, and study partner**. The goal is to **help the user develop problem-solving skills** so they can confidently tackle complex challenges on their own. \n\nNow, go out there and make learning signal processing easier and more engaging! "
},
"promptType": "define"
},
"typeVersion": 1.8
},
{
"id": "6ff240ec-b6f6-4775-966f-09191e8692f6",
"name": "Google Gemini Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatGoogleGemini",
"position": [
740,
440
],
"parameters": {
"options": {},
"modelName": "models/gemini-1.5-flash-001"
},
"credentials": {
"googlePalmApi": {
"id": "Pw2Xdm6s2G3GQ4kf",
"name": "Google Gemini(PaLM) Api account"
}
},
"typeVersion": 1
},
{
"id": "aa0e7fcf-c816-4b8c-a777-26206a934608",
"name": "Telegram1",
"type": "n8n-nodes-base.telegram",
"onError": "continueRegularOutput",
"position": [
1400,
0
],
"webhookId": "e1966a9e-b402-4d56-92ff-7042f181ed35",
"parameters": {
"text": "={{ $json.output }}",
"chatId": "={{ $('Telegram Trigger').item.json.message.chat.id }}",
"additionalFields": {
"appendAttribution": false
}
},
"credentials": {
"telegramApi": {
"id": "XyQMIzmMm1P4BOPV",
"name": "Telegram account 2"
}
},
"typeVersion": 1.2
},
{
"id": "a634f8e6-adb4-4bcf-a9d3-770e4ed61374",
"name": "Calculator",
"type": "@n8n/n8n-nodes-langchain.toolCalculator",
"position": [
1360,
260
],
"parameters": {},
"typeVersion": 1
},
{
"id": "3ad47acf-5188-4129-b451-3bb066dd103e",
"name": "Wikipedia",
"type": "@n8n/n8n-nodes-langchain.toolWikipedia",
"position": [
1480,
260
],
"parameters": {},
"typeVersion": 1
},
{
"id": "c032dabb-f14b-4656-8bc4-a60315f59436",
"name": "Airtable",
"type": "n8n-nodes-base.airtable",
"position": [
160,
180
],
"parameters": {
"base": {
"__rl": true,
"mode": "list",
"value": "appoBzMsCIm3Bno0X",
"cachedResultUrl": "https://airtable.com/appoBzMsCIm3Bno0X",
"cachedResultName": "Agent memory"
},
"limit": 50,
"table": {
"__rl": true,
"mode": "list",
"value": "tblb5AH2UtMVj3HLZ",
"cachedResultUrl": "https://airtable.com/appoBzMsCIm3Bno0X/tblb5AH2UtMVj3HLZ",
"cachedResultName": "Memory"
},
"options": {},
"operation": "search",
"returnAll": false
},
"credentials": {
"airtableTokenApi": {
"id": "halRA2KiS4b7O1X0",
"name": "Airtable Personal Access Token account"
}
},
"typeVersion": 2.1
},
{
"id": "5613ac95-fafb-40e5-a1b9-00daeec32e9e",
"name": "Aggregate",
"type": "n8n-nodes-base.aggregate",
"position": [
460,
180
],
"parameters": {
"options": {},
"fieldsToAggregate": {
"fieldToAggregate": [
{
"fieldToAggregate": "Memory"
}
]
}
},
"typeVersion": 1
},
{
"id": "1b83f257-539b-40dc-bdf4-fd3a0d83cbcc",
"name": "Merge",
"type": "n8n-nodes-base.merge",
"position": [
840,
0
],
"parameters": {
"mode": "combine",
"options": {},
"combineBy": "combineAll"
},
"typeVersion": 3
},
{
"id": "677cd8fe-74f4-4a7d-8bab-b54df7b0dc78",
"name": "Simple Memory",
"type": "@n8n/n8n-nodes-langchain.memoryBufferWindow",
"position": [
1160,
200
],
"parameters": {
"sessionKey": "={{ $('Telegram Trigger').item.json.message.chat.id }}",
"sessionIdType": "customKey"
},
"typeVersion": 1.3
},
{
"id": "349f4676-0c3a-4432-a541-61835f20d9e6",
"name": "OpenAI Chat Model",
"type": "@n8n/n8n-nodes-langchain.lmChatOpenAi",
"position": [
1000,
200
],
"parameters": {
"model": {
"__rl": true,
"mode": "list",
"value": "gpt-4o-mini",
"cachedResultName": "gpt-4o-mini"
},
"options": {}
},
"credentials": {
"openAiApi": {
"id": "XYV4P1NXYGCO76nI",
"name": "n8n free OpenAI API credits"
}
},
"typeVersion": 1.2
},
{
"id": "0dce63bd-262c-477e-951d-8b598ad74617",
"name": "memory_tool",
"type": "n8n-nodes-base.airtableTool",
"position": [
1600,
220
],
"parameters": {
"base": {
"__rl": true,
"mode": "list",
"value": "appoBzMsCIm3Bno0X",
"cachedResultUrl": "https://airtable.com/appoBzMsCIm3Bno0X",
"cachedResultName": "Agent memory"
},
"table": {
"__rl": true,
"mode": "list",
"value": "tblb5AH2UtMVj3HLZ",
"cachedResultUrl": "https://airtable.com/appoBzMsCIm3Bno0X/tblb5AH2UtMVj3HLZ",
"cachedResultName": "Memory"
},
"columns": {
"value": {
"Memory": "={{ $fromAI('add_Memory', `Write a memory about the user for future reference in 140 characters `, 'string') }}"
},
"schema": [
{
"id": "Memory",
"type": "string",
"display": true,
"removed": false,
"readOnly": false,
"required": false,
"displayName": "Memory",
"defaultMatch": false,
"canBeUsedToMatch": true
}
],
"mappingMode": "defineBelow",
"matchingColumns": [
"id"
],
"attemptToConvertTypes": false,
"convertFieldsToString": false
},
"options": {},
"operation": "create"
},
"credentials": {
"airtableTokenApi": {
"id": "halRA2KiS4b7O1X0",
"name": "Airtable Personal Access Token account"
}
},
"typeVersion": 2.1
},
{
"id": "ac3de286-ccc4-44ae-b3b7-9f169e91253e",
"name": "contentCreatorAgent",
"type": "@n8n/n8n-nodes-langchain.toolWorkflow",
"position": [
1800,
220
],
"parameters": {
"name": "contentCreatorAgent",
"workflowId": {
"__rl": true,
"mode": "list",
"value": "ma0fuAza3j9sB4PL",
"cachedResultName": "My project — contact creator agent"
},
"description": "call this tool when you need to create a contact, post, or blog",
"workflowInputs": {
"value": {},
"schema": [],
"mappingMode": "defineBelow",
"matchingColumns": [],
"attemptToConvertTypes": false,
"convertFieldsToString": false
}
},
"typeVersion": 2.1
}
],
"active": false,
"pinData": {},
"settings": {
"executionOrder": "v1"
},
"versionId": "0e1fa96d-3ab3-4155-9468-c28936ca427d",
"connections": {
"Merge": {
"main": [
[
{
"node": "AI Agent",
"type": "main",
"index": 0
}
]
]
},
"OpenAI": {
"main": [
[
{
"node": "Merge",
"type": "main",
"index": 0
}
]
]
},
"Switch": {
"main": [
[
{
"node": "Edit Fields",
"type": "main",
"index": 0
}
],
[
{
"node": "Telegram",
"type": "main",
"index": 0
}
]
]
},
"AI Agent": {
"main": [
[
{
"node": "Telegram1",
"type": "main",
"index": 0
}
]
]
},
"Airtable": {
"main": [
[
{
"node": "Aggregate",
"type": "main",
"index": 0
}
]
]
},
"Telegram": {
"main": [
[
{
"node": "OpenAI",
"type": "main",
"index": 0
}
]
]
},
"Aggregate": {
"main": [
[
{
"node": "Merge",
"type": "main",
"index": 1
}
]
]
},
"Wikipedia": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
},
"Calculator": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
},
"Edit Fields": {
"main": [
[
{
"node": "Merge",
"type": "main",
"index": 0
}
]
]
},
"memory_tool": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
},
"Simple Memory": {
"ai_memory": [
[
{
"node": "AI Agent",
"type": "ai_memory",
"index": 0
}
]
]
},
"Telegram Trigger": {
"main": [
[
{
"node": "Airtable",
"type": "main",
"index": 0
},
{
"node": "Switch",
"type": "main",
"index": 0
}
]
]
},
"OpenAI Chat Model": {
"ai_languageModel": [
[
{
"node": "AI Agent",
"type": "ai_languageModel",
"index": 0
}
]
]
},
"contentCreatorAgent": {
"ai_tool": [
[
{
"node": "AI Agent",
"type": "ai_tool",
"index": 0
}
]
]
},
"Google Gemini Chat Model": {
"ai_languageModel": [
[]
]
}
}
}
n8n Telegram customer support workflow: who is this workflow for?
This workflow is aimed at businesses that want to automate their customer support on Telegram, especially startups and SMBs looking to improve responsiveness. Basic familiarity with automation tools and APIs is recommended.
n8n Telegram customer support workflow: problem solved
This workflow solves the problem of slow customer responses on Telegram by automating interactions with AI agents. It removes the frustration of long wait times and reduces the risk of human error in replies. Users get instant assistance, which improves customer satisfaction and strengthens loyalty.
n8n Telegram customer support workflow: workflow steps
- Step 1: The workflow starts with a Telegram trigger that captures user messages.
- Step 2: A Switch node analyzes the message content to determine the appropriate response path.
- Step 3: Depending on the request, the flow can call OpenAI or Google Gemini to generate relevant responses.
- Step 4: Responses are then sent back via Telegram, ensuring smooth communication.
- Step 5: Additional tools such as Airtable can be used to store and manage interactions, while memory nodes preserve the conversation context.
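The context-keeping in Step 5 can be sketched as a small Python class. This is an illustrative sketch, not the actual Simple Memory node: `WindowMemory` is a hypothetical name, but it mirrors the workflow's approach of keeping a sliding window of recent messages keyed by the Telegram chat id, which the JSON above uses as the session key.

```python
from collections import defaultdict, deque


class WindowMemory:
    """Sliding-window conversation memory, one window per chat id."""

    def __init__(self, window=5):
        # Each session is a deque that silently discards the oldest
        # entries once `window` messages have been stored.
        self.sessions = defaultdict(lambda: deque(maxlen=window))

    def add(self, chat_id, role, text):
        self.sessions[chat_id].append((role, text))

    def history(self, chat_id):
        # Returned oldest-first, ready to prepend to the next prompt.
        return list(self.sessions[chat_id])
```

Keying on the chat id means each Telegram conversation gets its own independent context, which is exactly why the workflow sets `sessionIdType` to a custom key.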
n8n Telegram customer support workflow: customization guide
To customize this workflow, you can modify the Telegram trigger by adjusting your bot's settings. The AI models can be swapped out by replacing the OpenAI and Google Gemini nodes with other AI services if needed. You can also adjust the Switch node's rules to better match the specific request types your users send. Finally, make sure to secure your API keys and monitor the flow's performance to guarantee an optimal user experience.
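As a reference point while customizing, the final Telegram node boils down to a `sendMessage` call against the Bot API. The sketch below only builds that request as data: `build_send_message` is a hypothetical helper and the token is a placeholder, but the endpoint and payload fields match the public Telegram Bot API.

```python
def build_send_message(bot_token, chat_id, text):
    """Build the URL and JSON payload for a Telegram sendMessage call.

    Matches what the workflow's last node does: post the agent's output
    back to the chat that triggered the run.
    """
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    payload = {"chat_id": chat_id, "text": text}
    return url, payload
```

Keeping the request construction in one place like this makes it easier to test routing changes without hitting the live API; in production, keep the bot token in a credential store rather than in the workflow itself.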