n8n automation: deploying Docker actions
- This n8n workflow automates the deployment and management of Docker actions, simplifying container administration in a development environment. Teams that run their applications on Docker can use it to streamline tasks such as starting, stopping, and managing containers. By combining SSH command execution with webhook responses, the workflow provides a clean interface to the container-management layer.
- The process begins with a Webhook trigger that receives requests to initiate actions. Several condition (If) nodes then check specific criteria before commands are executed over SSH. Set nodes define the parameters needed for each action, such as starting or stopping containers, and Respond to Webhook nodes return feedback to the caller about the outcome of the operation.
- The benefits of this workflow include a significant reduction in the time spent managing containers, fewer human errors, and better traceability of the actions performed. By automating these processes, teams can focus on higher-value work and improve their operational efficiency.
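The flow described above can be exercised from the command line. A minimal sketch of a request to the workflow's webhook follows; the base URL and credentials are placeholders, while the path (`docker-immich`), the Basic Auth requirement, and the body fields (`command`, `domain`, `server_domain`, `disk`) come from the node definitions in the JSON below. The sketch prints the `curl` command instead of sending it, so it has no side effects:

```shell
# Hypothetical n8n instance URL and credentials — adjust to your deployment.
N8N_BASE="https://n8n.example.com/webhook"
# Body fields as referenced by the workflow's expressions
# (e.g. $('API').item.json.body.command); values are illustrative.
PAYLOAD='{
  "command": "container_start",
  "domain": "client1.example.com",
  "server_domain": "d01-test.uuq.pl",
  "disk": 10
}'
# Print the request instead of executing it, so this sketch is side-effect free:
echo curl -u "user:password" -X POST \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" \
  "$N8N_BASE/webhook/docker-immich"
```

Dropping the leading `echo` (and substituting real credentials) would send the request; the `API answer` node then returns the JSON produced by the matching branch.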
n8n Docker workflow, development: overview
Diagram of this n8n workflow's nodes and connections, generated from the n8n JSON.
n8n Docker workflow, development: node details
{
"id": "qps97Q4NEet1Pkm4",
"meta": {
"instanceId": "ffb0782f8b2cf4278577cb919e0cd26141bc9ff8774294348146d454633aa4e3",
"templateCredsSetupCompleted": true
},
"name": "puq-docker-immich-deploy",
"tags": [],
"nodes": [
{
"id": "4831f6e3-50ba-40e8-a58d-948b2aa30d9e",
"name": "If",
"type": "n8n-nodes-base.if",
"position": [
-2060,
-320
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "or",
"conditions": [
{
"id": "b702e607-888a-42c9-b9a7-f9d2a64dfccd",
"operator": {
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $json.server_domain }}",
"rightValue": "={{ $('API').item.json.body.server_domain }}"
}
]
}
},
"typeVersion": 2.2
},
{
"id": "d71b72fb-c9af-4de0-8731-010031c1364c",
"name": "Parametrs",
"type": "n8n-nodes-base.set",
"position": [
-2280,
-320
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "a6328600-7ee0-4031-9bdb-fcee99b79658",
"name": "server_domain",
"type": "string",
"value": "d01-test.uuq.pl"
},
{
"id": "370ddc4e-0fc0-48f6-9b30-ebdfba72c62f",
"name": "clients_dir",
"type": "string",
"value": "/opt/docker/clients"
},
{
"id": "92202bb8-6113-4bc5-9a29-79d238456df2",
"name": "mount_dir",
"type": "string",
"value": "/mnt"
},
{
"id": "baa52df2-9c10-42b2-939f-f05ea85ea2be",
"name": "screen_left",
"type": "string",
"value": "{{"
},
{
"id": "2b19ed99-2630-412a-98b6-4be44d35d2e7",
"name": "screen_right",
"type": "string",
"value": "}}"
}
]
}
},
"typeVersion": 3.4
},
{
"id": "0b195ac8-9eaa-4804-955c-713060806dfe",
"name": "API",
"type": "n8n-nodes-base.webhook",
"position": [
-2600,
-320
],
"webhookId": "718dc487-4899-4589-98be-784c22ebdce0",
"parameters": {
"path": "docker-immich",
"options": {},
"httpMethod": [
"POST"
],
"responseMode": "responseNode",
"authentication": "basicAuth",
"multipleMethods": true
},
"credentials": {
"httpBasicAuth": {
"id": "X3pvXrQxQUWFtpab",
"name": "Immich"
}
},
"typeVersion": 2
},
{
"id": "d47c8d61-4c75-45d9-9424-bec7ec9577c3",
"name": "422-Invalid server domain",
"type": "n8n-nodes-base.respondToWebhook",
"position": [
-2100,
0
],
"parameters": {
"options": {
"responseCode": 422
},
"respondWith": "json",
"responseBody": "[{\n \"status\": \"error\",\n \"error\": \"Invalid server domain\"\n}]"
},
"typeVersion": 1.1,
"alwaysOutputData": false
},
{
"id": "9a9bb067-c75a-483a-a248-e34012dec1bc",
"name": "Code1",
"type": "n8n-nodes-base.code",
"position": [
800,
-240
],
"parameters": {
"mode": "runOnceForEachItem",
"jsCode": "try {\n if ($json.stdout === 'success') {\n return {\n json: {\n status: 'success',\n message: '',\n data: '',\n }\n };\n }\n\n const parsedData = JSON.parse($json.stdout);\n\n return {\n json: {\n status: parsedData.status === 'error' ? 'error' : 'success',\n message: parsedData.message || (parsedData.status === 'error' ? 'An error occurred' : ''),\n data: parsedData || '',\n }\n };\n\n} catch (error) {\n return {\n json: {\n status: 'error',\n message: $json.stdout??$json.error,\n data: '',\n }\n };\n}"
},
"executeOnce": false,
"retryOnFail": false,
"typeVersion": 2,
"alwaysOutputData": false
},
{
"id": "c628384c-d101-485f-9df6-9bbaeeac74aa",
"name": "SSH",
"type": "n8n-nodes-base.ssh",
"onError": "continueErrorOutput",
"position": [
500,
-240
],
"parameters": {
"cwd": "=/",
"command": "={{ $json.sh }}"
},
"credentials": {
"sshPassword": {
"id": "Cyjy61UWHwD2Xcd8",
"name": "d01-test.uuq.pl-puq"
}
},
"executeOnce": true,
"typeVersion": 1
},
{
"id": "d7659a09-4ac5-49aa-bc0a-09a1a6e1e82e",
"name": "Container Actions",
"type": "n8n-nodes-base.switch",
"position": [
-1680,
160
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "start",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "66ad264d-5393-410c-bfa3-011ab8eb234a",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_start"
}
]
},
"renameOutput": true
},
{
"outputKey": "stop",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "b48957a0-22c0-4ac0-82ef-abd9e7ab0207",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_stop"
}
]
},
"renameOutput": true
},
{
"outputKey": "mount_disk",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "727971bf-4218-41c1-9b07-22df4b947852",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_mount_disk"
}
]
},
"renameOutput": true
},
{
"outputKey": "unmount_disk",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "0c80b1d9-e7ca-4cf3-b3ac-b40fdf4dd8f8",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_unmount_disk"
}
]
},
"renameOutput": true
},
{
"outputKey": "container_get_acl",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "755e1a9f-667a-4022-9cb5-3f8153f62e95",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_get_acl"
}
]
},
"renameOutput": true
},
{
"outputKey": "container_set_acl",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "8d75626f-789e-42fc-be5e-3a4e93a9bbc6",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_set_acl"
}
]
},
"renameOutput": true
},
{
"outputKey": "container_get_net",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "c49d811a-735c-42f4-8b77-d0cd47b3d2b8",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_get_net"
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"typeVersion": 3.2
},
{
"id": "d641a4ff-fafb-4ad1-9ced-dc1037c95eb9",
"name": "Service Actions",
"type": "n8n-nodes-base.switch",
"position": [
-900,
-1300
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "test_connection",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "3afdd2f1-fe93-47c2-95cd-bac9b1d94eeb",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "test_connection"
}
]
},
"renameOutput": true
},
{
"outputKey": "create",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "102f10e9-ec6c-4e63-ba95-0fe6c7dc0bd1",
"operator": {
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "create"
}
]
},
"renameOutput": true
},
{
"outputKey": "suspend",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "f62dfa34-6751-4b34-adcc-3d6ba1b21a8c",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "suspend"
}
]
},
"renameOutput": true
},
{
"outputKey": "unsuspend",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "384d2026-b753-4c27-94c2-8f4fc189eb5f",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "unsuspend"
}
]
},
"renameOutput": true
},
{
"outputKey": "terminate",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "0e190a97-827a-4e87-8222-093ff7048b21",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "terminate"
}
]
},
"renameOutput": true
},
{
"outputKey": "change_package",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "6f7832f3-b61d-4517-ab6b-6007998136dd",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "change_package"
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"typeVersion": 3.2
},
{
"id": "6a494672-a498-4aaf-96a3-07c95376d422",
"name": "API answer",
"type": "n8n-nodes-base.respondToWebhook",
"position": [
820,
0
],
"parameters": {
"options": {
"responseCode": 200
},
"respondWith": "allIncomingItems"
},
"typeVersion": 1.1,
"alwaysOutputData": true
},
{
"id": "ca615470-af60-4f52-b7ca-3aefc3308dbc",
"name": "Inspect",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1140,
-580
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\n\nINSPECT_JSON=\"{}\"\nif sudo docker ps -a --filter \"name=$CONTAINER_NAME\" | grep -q \"$CONTAINER_NAME\"; then\n INSPECT_JSON=$(sudo docker inspect \"$CONTAINER_NAME\")\nfi\n\necho \"{\\\"inspect\\\": $INSPECT_JSON}\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "5f4a6163-d4dc-45f3-97ad-1c36297c6f0c",
"name": "Stat",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-980,
-480
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}\"\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\n\n# Initialize empty container data\nINSPECT_JSON=\"{}\"\nSTATS_JSON=\"{}\"\n\n# Check if container is running\nif sudo docker ps -a --filter \"name=$CONTAINER_NAME\" | grep -q \"$CONTAINER_NAME\"; then\n # Get Docker inspect info in JSON (as raw string)\n INSPECT_JSON=$(sudo docker inspect \"$CONTAINER_NAME\")\n\n # Get Docker stats info in JSON (as raw string)\n STATS_JSON=$(sudo docker stats --no-stream --format \"{{ $('Parametrs').item.json.screen_left }}json .{{ $('Parametrs').item.json.screen_right }}\" \"$CONTAINER_NAME\")\n STATS_JSON=${STATS_JSON:-'{}'}\nfi\n\n# Initialize disk info variables\nMOUNT_USED=\"N/A\"\nMOUNT_FREE=\"N/A\"\nMOUNT_TOTAL=\"N/A\"\nMOUNT_PERCENT=\"N/A\"\nIMG_SIZE=\"N/A\"\nIMG_PERCENT=\"N/A\"\nDISK_STATS_IMG=\"N/A\"\n\n# Check if mount directory exists and is accessible\nif [ -d \"$MOUNT_DIR\" ]; then\n if mount | grep -q \"$MOUNT_DIR\"; then\n # Get disk usage for mounted directory\n DISK_STATS_MOUNT=$(df -h \"$MOUNT_DIR\" | tail -n 1)\n MOUNT_USED=$(echo \"$DISK_STATS_MOUNT\" | awk '{print $3}')\n MOUNT_FREE=$(echo \"$DISK_STATS_MOUNT\" | awk '{print $4}')\n MOUNT_TOTAL=$(echo \"$DISK_STATS_MOUNT\" | awk '{print $2}')\n MOUNT_PERCENT=$(echo \"$DISK_STATS_MOUNT\" | awk '{print $5}')\n fi\nfi\n\n# Check if image file exists\nif [ -f \"$IMG_FILE\" ]; then\n # Get disk usage for image file\n IMG_SIZE=$(du -sh \"$IMG_FILE\" | awk '{print $1}')\nfi\n\n# Manually create a combined JSON object\nFINAL_JSON=\"{\\\"inspect\\\": $INSPECT_JSON, \\\"stats\\\": $STATS_JSON, \\\"disk\\\": {\\\"mounted\\\": {\\\"used\\\": \\\"$MOUNT_USED\\\", \\\"free\\\": \\\"$MOUNT_FREE\\\", \\\"total\\\": \\\"$MOUNT_TOTAL\\\", \\\"percent\\\": \\\"$MOUNT_PERCENT\\\"}, \\\"img_file\\\": {\\\"size\\\": \\\"$IMG_SIZE\\\"}}}\"\n\n# Output the result\necho \"$FINAL_JSON\"\n\nexit 0"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "45dbb5b3-24e2-4b4f-bab4-4d4d6784e2d5",
"name": "Start",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1180,
140
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}\"\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\n\n# Function to log an error, write to status file, and print to console\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\nif ! df -h | grep -q \"$MOUNT_DIR\"; then\n handle_error \"The file $IMG_FILE is not mounted to $MOUNT_DIR\"\nfi\n\nif sudo docker ps --filter \"name=$CONTAINER_NAME\" --filter \"status=running\" -q | grep -q .; then\n handle_error \"$CONTAINER_NAME container is running\"\nfi\n\n# Change to the compose directory\ncd \"$COMPOSE_DIR\" > /dev/null 2>&1 || handle_error \"Failed to change directory to $COMPOSE_DIR\"\n\n# Start the Docker containers\nif ! sudo docker compose up -d > /dev/null 2>error.log; then\n ERROR_MSG=$(tail -n 10 error.log)\n handle_error \"Docker-compose failed: $ERROR_MSG\"\nfi\n\n# Success\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "52ecdeda-eaf0-40f0-a0df-2cab5848bc3f",
"name": "Stop",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1060,
240
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}\"\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\n\n# Function to log an error, write to status file, and print to console\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\n# Check if Docker container is running\nif ! sudo docker ps --filter \"name=$CONTAINER_NAME\" --filter \"status=running\" -q | grep -q .; then\n handle_error \"$CONTAINER_NAME container is not running\"\nfi\n\n# Stop and remove the Docker containers (also remove associated volumes)\nif ! sudo docker compose -f \"$COMPOSE_DIR/docker-compose.yml\" down > /dev/null 2>&1; then\n handle_error \"Failed to stop and remove docker-compose containers\"\nfi\n\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "8a552039-fefd-439e-97e7-412d8eab5486",
"name": "Test Connection1",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-220,
-1320
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\n# Function to log an error, print to console\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\n# Check if Docker is installed\nif ! command -v docker &> /dev/null; then\n handle_error \"Docker is not installed\"\nfi\n\n# Check if Docker service is running\nif ! systemctl is-active --quiet docker; then\n handle_error \"Docker service is not running\"\nfi\n\n# Check if nginx-proxy container is running\nif ! sudo docker ps --filter \"name=nginx-proxy\" --filter \"status=running\" -q > /dev/null; then\n handle_error \"nginx-proxy container is not running\"\nfi\n\n# Check if letsencrypt-nginx-proxy-companion container is running\nif ! sudo docker ps --filter \"name=letsencrypt-nginx-proxy-companion\" --filter \"status=running\" -q > /dev/null; then\n handle_error \"letsencrypt-nginx-proxy-companion container is not running\"\nfi\n\n# If everything is successful\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "23195584-afb7-4a78-92a5-2433b3887888",
"name": "Deploy",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-220,
-1120
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\n# Get values for variables from templates\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nCOMPOSE_FILE=\"$COMPOSE_DIR/docker-compose.yml\"\nSTATUS_FILE=\"$COMPOSE_DIR/status\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nVHOST_DIR=\"/opt/docker/nginx-proxy/nginx/vhost.d\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/$DOMAIN\"\nDOCKER_COMPOSE_TEXT='{{ JSON.stringify($('Deploy-docker-compose').item.json['docker-compose']).base64Encode() }}'\n\nNGINX_MAIN_ACL_FILE=\"$NGINX_DIR/$DOMAIN\"_acl\n\nNGINX_MAIN_TEXT='{{ JSON.stringify($('nginx').item.json['main']).base64Encode() }}'\nNGINX_MAIN_FILE=\"$NGINX_DIR/$DOMAIN\"\nVHOST_MAIN_FILE=\"$VHOST_DIR/$DOMAIN\"\n\nNGINX_MAIN_LOCATION_TEXT='{{ JSON.stringify($('nginx').item.json['main_location']).base64Encode() }}'\nNGINX_MAIN_LOCATION_FILE=\"$NGINX_DIR/$DOMAIN\"_location\nVHOST_MAIN_LOCATION_FILE=\"$VHOST_DIR/$DOMAIN\"_location\n\n\nDISK_SIZE=\"{{ $('API').item.json.body.disk }}\"\n\n# Function to handle errors: write to the status file and print the message to console\nhandle_error() {\n STATUS_JSON=\"{\\\"status\\\": \\\"error\\\", \\\"message\\\": \\\"$1\\\"}\"\n echo \"$STATUS_JSON\" | sudo tee \"$STATUS_FILE\" > /dev/null # Write error to the status file\n echo \"error: $1\" # Print the error message to the console\n exit 1 # Exit the script with an error code\n}\n\n# Check if the directory already exists. If yes, exit with an error.\nif [ -d \"$COMPOSE_DIR\" ]; then\n echo \"error: Directory $COMPOSE_DIR already exists\"\n exit 1\nfi\n\n# Create necessary directories with permissions\nsudo mkdir -p \"$COMPOSE_DIR\" > /dev/null 2>&1 || handle_error \"Failed to create $COMPOSE_DIR\"\nsudo mkdir -p \"$NGINX_DIR\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_DIR\"\nsudo mkdir -p \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to create $MOUNT_DIR\"\n\n# Set permissions on the created directories\nsudo chmod -R 777 \"$COMPOSE_DIR\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $COMPOSE_DIR\"\nsudo chmod -R 777 \"$NGINX_DIR\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $NGINX_DIR\"\nsudo chmod -R 777 \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $MOUNT_DIR\"\n\n# Create docker-compose.yml file\necho -e \"$DOCKER_COMPOSE_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$COMPOSE_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $COMPOSE_FILE\"\n\n# Create NGINX configuration files\necho \"\" | sudo tee \"$NGINX_MAIN_ACL_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_ACL_FILE\"\n\necho -e \"$NGINX_MAIN_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$NGINX_MAIN_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_FILE\"\necho -e \"$NGINX_MAIN_LOCATION_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$NGINX_MAIN_LOCATION_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_LOCATION_FILE\"\n\n# Change to the compose directory\ncd \"$COMPOSE_DIR\" > /dev/null 2>&1 || handle_error \"Failed to change directory to $COMPOSE_DIR\"\n\n# Create data.img file if it doesn't exist\nif [ ! -f \"$IMG_FILE\" ]; then\n sudo fallocate -l \"$DISK_SIZE\"G \"$IMG_FILE\" > /dev/null 2>&1 || sudo truncate -s \"$DISK_SIZE\"G \"$IMG_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $IMG_FILE\"\n sudo mkfs.ext4 \"$IMG_FILE\" > /dev/null 2>&1 || handle_error \"Failed to format $IMG_FILE\" # Format the image as ext4\n sync # Synchronize the data to disk\nfi\n\n# Add an entry to /etc/fstab for mounting if not already present\nif ! grep -q \"$IMG_FILE\" /etc/fstab; then\n echo \"$IMG_FILE $MOUNT_DIR ext4 loop 0 0\" | sudo tee -a /etc/fstab > /dev/null || handle_error \"Failed to add entry to /etc/fstab\"\nfi\n\n# Mount all entries in /etc/fstab\nsudo mount -a || handle_error \"Failed to mount entries from /etc/fstab\"\n\n# Set permissions on the mount directory\nsudo chmod -R 777 \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $MOUNT_DIR\"\n\nsudo mkdir -p \"$MOUNT_DIR/library\" > /dev/null 2>&1 || handle_error \"Failed to create $MOUNT_DIR/library\"\nsudo chmod -R 777 \"$MOUNT_DIR/library\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $MOUNT_DIR/library\"\n\nsudo mkdir -p \"$MOUNT_DIR/postgres\" > /dev/null 2>&1 || handle_error \"Failed to create $MOUNT_DIR/postgres\"\nsudo chmod -R 777 \"$MOUNT_DIR/postgres\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $MOUNT_DIR/postgres\"\n\nsudo mkdir -p \"$MOUNT_DIR/cache\" > /dev/null 2>&1 || handle_error \"Failed to create $MOUNT_DIR/cache\"\nsudo chmod -R 777 \"$MOUNT_DIR/cache\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $MOUNT_DIR/cache\"\n\n# Copy NGINX configuration files instead of creating symbolic links\nsudo cp -f \"$NGINX_MAIN_FILE\" \"$VHOST_MAIN_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_FILE to $VHOST_MAIN_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_FILE\"\n\nsudo cp -f \"$NGINX_MAIN_LOCATION_FILE\" \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_LOCATION_FILE to $VHOST_MAIN_LOCATION_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_LOCATION_FILE\"\n\n# Start Docker containers using docker-compose\nif ! sudo docker compose up -d > /dev/null 2>error.log; then\n ERROR_MSG=$(tail -n 10 error.log) # Read the last 10 lines from error.log\n handle_error \"Docker-compose failed: $ERROR_MSG\"\nfi\n\n# If everything is successful, update the status file and print success message\necho \"active\" | sudo tee \"$STATUS_FILE\" > /dev/null\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "10eda6b6-4d9e-4725-a1fb-53196efc0727",
"name": "Suspend",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-220,
-960
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nCOMPOSE_FILE=\"$COMPOSE_DIR/docker-compose.yml\"\nSTATUS_FILE=\"$COMPOSE_DIR/status\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nVHOST_DIR=\"/opt/docker/nginx-proxy/nginx/vhost.d\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/$DOMAIN\"\n\nVHOST_MAIN_FILE=\"$VHOST_DIR/$DOMAIN\"\nVHOST_MAIN_LOCATION_FILE=\"$VHOST_DIR/$DOMAIN\"_location\n\n# Function to log an error, write to status file, and print to console\nhandle_error() {\n echo \"$1\" | sudo tee \"$STATUS_FILE\" > /dev/null\n echo \"error: $1\"\n exit 1\n}\n\n# Stop and remove Docker containers (also remove associated volumes)\nif [ -f \"$COMPOSE_FILE\" ]; then\n if ! sudo docker compose -f \"$COMPOSE_FILE\" down > /dev/null 2>&1; then\n handle_error \"Failed to stop and remove docker-compose containers\"\n fi\nelse\n echo \"Warning: docker-compose.yml not found, skipping container stop.\"\nfi\n\n# Remove mount entry from /etc/fstab if it exists\nif grep -q \"$IMG_FILE\" /etc/fstab; then\n sudo sed -i \"\\|$(printf '%s\\n' \"$IMG_FILE\" | sed 's/[.[\\*^$]/\\\\&/g')|d\" /etc/fstab\nfi\n\n# Unmount the image if it is mounted\nif mount | grep -q \"$MOUNT_DIR\"; then\n sudo umount \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to unmount $MOUNT_DIR\"\nfi\n\n# Remove the mount directory\nif [ -d \"$MOUNT_DIR\" ]; then\n sudo rm -rf \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to remove $MOUNT_DIR\"\nfi\n\n# Remove NGINX configuration files\n[ -f \"$VHOST_MAIN_FILE\" ] && sudo rm -f \"$VHOST_MAIN_FILE\" || handle_error \"Warning: $VHOST_MAIN_FILE not found.\"\n[ -f \"$VHOST_MAIN_LOCATION_FILE\" ] && sudo rm -f \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Warning: $VHOST_MAIN_LOCATION_FILE not found.\"\n\n# Update status\necho \"suspended\" | sudo tee \"$STATUS_FILE\" > /dev/null\n\n# Success\necho \"success\"\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "4a3958f5-f4bb-4079-a56e-08d3a0047cb2",
"name": "Terminated",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-220,
-620
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nCOMPOSE_FILE=\"$COMPOSE_DIR/docker-compose.yml\"\nSTATUS_FILE=\"$COMPOSE_DIR/status\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nVHOST_DIR=\"/opt/docker/nginx-proxy/nginx/vhost.d\"\n\nVHOST_MAIN_FILE=\"$VHOST_DIR/$DOMAIN\"\nVHOST_MAIN_LOCATION_FILE=\"$VHOST_DIR/$DOMAIN\"_location\nVHOST_CONSOLE_FILE=\"$VHOST_DIR/console.$DOMAIN\"\nVHOST_CONSOLE_LOCATION_FILE=\"$VHOST_DIR/console.$DOMAIN\"_location\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/$DOMAIN\"\n\n# Function to log an error, write to status file, and print to console\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\n# Stop and remove the Docker containers\nif [ -f \"$COMPOSE_FILE\" ]; then\n sudo docker compose -f \"$COMPOSE_FILE\" down > /dev/null 2>&1\nfi\n\n# Remove the mount entry from /etc/fstab if it exists\nif grep -q \"$IMG_FILE\" /etc/fstab; then\n sudo sed -i \"\\|$(printf '%s\\n' \"$IMG_FILE\" | sed 's/[.[\\*^$]/\\\\&/g')|d\" /etc/fstab\nfi\n\n# Unmount the image if it is still mounted\nif mount | grep -q \"$MOUNT_DIR\"; then\n sudo umount \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to unmount $MOUNT_DIR\"\nfi\n\n# Remove all related directories and files\nfor item in \"$MOUNT_DIR\" \"$COMPOSE_DIR\" \"$VHOST_MAIN_FILE\" \"$VHOST_MAIN_LOCATION_FILE\" \"$VHOST_CONSOLE_FILE\" \"$VHOST_CONSOLE_LOCATION_FILE\"; do\n if [ -e \"$item\" ]; then\n sudo rm -rf \"$item\" || handle_error \"Failed to remove $item\"\n fi\ndone\n\necho \"success\"\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "48111c3c-6d9d-46e0-8c65-19ec818ccec0",
"name": "Unsuspend",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-220,
-800
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nCOMPOSE_FILE=\"$COMPOSE_DIR/docker-compose.yml\"\nSTATUS_FILE=\"$COMPOSE_DIR/status\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nVHOST_DIR=\"/opt/docker/nginx-proxy/nginx/vhost.d\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/$DOMAIN\"\nDOCKER_COMPOSE_TEXT='{{ JSON.stringify($('Deploy-docker-compose').item.json['docker-compose']).base64Encode() }}'\n\nNGINX_MAIN_ACL_FILE=\"$NGINX_DIR/$DOMAIN\"_acl\n\nNGINX_MAIN_TEXT='{{ JSON.stringify($('nginx').item.json['main']).base64Encode() }}'\nNGINX_MAIN_FILE=\"$NGINX_DIR/$DOMAIN\"\nVHOST_MAIN_FILE=\"$VHOST_DIR/$DOMAIN\"\n\nNGINX_MAIN_LOCATION_TEXT='{{ JSON.stringify($('nginx').item.json['main_location']).base64Encode() }}'\nNGINX_MAIN_LOCATION_FILE=\"$NGINX_DIR/$DOMAIN\"_location\nVHOST_MAIN_LOCATION_FILE=\"$VHOST_DIR/$DOMAIN\"_location\n\nDISK_SIZE=\"{{ $('API').item.json.body.disk }}\"\n\n# Function to log an error, write to status file, and print to console\nhandle_error() {\n echo \"$1\" | sudo tee \"$STATUS_FILE\" > /dev/null\n echo \"error: $1\"\n exit 1\n}\n\nupdate_nginx_acl() {\n ACL_FILE=$1\n LOCATION_FILE=$2\n \n if [ -s \"$ACL_FILE\" ]; then # Check that the file exists and is not empty\n VALID_LINES=$(grep -vE '^\\s*$' \"$ACL_FILE\") # Strip empty lines\n if [ -n \"$VALID_LINES\" ]; then # If there are non-empty lines\n while IFS= read -r line; do\n echo \"allow $line;\" | sudo tee -a \"$LOCATION_FILE\" > /dev/null || handle_error \"Failed to update $LOCATION_FILE\"\n done <<< \"$VALID_LINES\"\n echo \"deny all;\" | sudo tee -a \"$LOCATION_FILE\" > /dev/null || handle_error \"Failed to update $LOCATION_FILE\"\n fi\n fi\n}\n\n# Create necessary directories with permissions\nfor dir in \"$COMPOSE_DIR\" \"$NGINX_DIR\" \"$MOUNT_DIR\"; do\n sudo mkdir -p \"$dir\" || handle_error \"Failed to create $dir\"\n sudo chmod -R 777 \"$dir\" || handle_error \"Failed to set permissions on $dir\"\ndone\n\n# Check if the image is already mounted using fstab\nif ! grep -q \"$IMG_FILE\" /etc/fstab; then\n echo \"$IMG_FILE $MOUNT_DIR ext4 loop 0 0\" | sudo tee -a /etc/fstab > /dev/null || handle_error \"Failed to add fstab entry for $IMG_FILE\"\nfi\n\n# Apply the fstab changes and mount the image\nif ! mount | grep -q \"$MOUNT_DIR\"; then\n sudo mount -a || handle_error \"Failed to mount image using fstab\"\nfi\n\n# Create docker-compose.yml file\necho -e \"$DOCKER_COMPOSE_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$COMPOSE_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $COMPOSE_FILE\"\n\n# Create NGINX configuration files\necho -e \"$NGINX_MAIN_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$NGINX_MAIN_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_FILE\"\necho -e \"$NGINX_MAIN_LOCATION_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$NGINX_MAIN_LOCATION_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_LOCATION_FILE\"\n\n# Copy NGINX configuration files instead of creating symbolic links\nsudo cp -f \"$NGINX_MAIN_FILE\" \"$VHOST_MAIN_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_FILE to $VHOST_MAIN_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_FILE\"\n\nsudo cp -f \"$NGINX_MAIN_LOCATION_FILE\" \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_LOCATION_FILE to $VHOST_MAIN_LOCATION_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_LOCATION_FILE\"\n\nupdate_nginx_acl \"$NGINX_MAIN_ACL_FILE\" \"$VHOST_MAIN_LOCATION_FILE\"\n\n# Change to the compose directory\ncd \"$COMPOSE_DIR\" || handle_error \"Failed to change directory to $COMPOSE_DIR\"\n\n# Start Docker containers using docker-compose\n> error.log\nif ! sudo docker compose up -d > error.log 2>&1; then\n ERROR_MSG=$(tail -n 10 error.log) # Read the last 10 lines from error.log\n handle_error \"Docker-compose failed: $ERROR_MSG\"\nfi\n\n# If everything is successful, update the status file and print success message\necho \"active\" | sudo tee \"$STATUS_FILE\" > /dev/null\necho \"success\"\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "7d0852f4-6fae-40cb-a2a3-a9d3b6ec3ae7",
"name": "Mount Disk",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1180,
360
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}\"\n\n# Function to log an error and exit\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\n# Create necessary directories with permissions\nsudo mkdir -p \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to create $MOUNT_DIR\"\nsudo chmod 777 \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to set permissions on $MOUNT_DIR\"\n\nif df -h | grep -q \"$MOUNT_DIR\"; then\n handle_error \"The file $IMG_FILE is already mounted to $MOUNT_DIR\"\nfi\n\nif ! grep -q \"$IMG_FILE\" /etc/fstab; then\n echo \"$IMG_FILE $MOUNT_DIR ext4 loop 0 0\" | sudo tee -a /etc/fstab > /dev/null || handle_error \"Failed to add entry to /etc/fstab\"\nfi\n\nsudo mount -a || handle_error \"Failed to mount entries from /etc/fstab\"\n\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "659022ef-b3c2-4785-b22c-c6f26f5af400",
"name": "Unmount Disk",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1060,
460
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}\"\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\n\n# Function to log an error and exit\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\nif ! df -h | grep -q \"$MOUNT_DIR\"; then\n handle_error \"The file $IMG_FILE is not mounted to $MOUNT_DIR\"\nfi\n\n# Remove the mount entry from /etc/fstab if it exists\nif grep -q \"$IMG_FILE\" /etc/fstab; then\n sudo sed -i \"\\|$(printf '%s\\n' \"$IMG_FILE\" | sed 's/[.[\\*^$]/\\\\&/g')|d\" /etc/fstab\nfi\n\n# Unmount the image if it is mounted (using fstab)\nif mount | grep -q \"$MOUNT_DIR\"; then\n sudo umount \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to unmount $MOUNT_DIR\"\nfi\n\n# Remove the mount directory (if needed)\nif ! sudo rm -rf \"$MOUNT_DIR\" > /dev/null 2>&1; then\n handle_error \"Failed to remove $MOUNT_DIR\"\nfi\n\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "bb2283d3-792f-43e3-93d2-b3469979ac14",
"name": "Log",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-840,
-380
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\nLOGS_JSON=\"{}\"\n\n# Function to return error in JSON format\nhandle_error() {\n echo \"{\\\"status\\\": \\\"error\\\", \\\"message\\\": \\\"$1\\\"}\"\n exit 1\n}\n\n# Check if the container exists\nif ! sudo docker ps -a | grep -q \"$CONTAINER_NAME\" > /dev/null 2>&1; then\n handle_error \"Container $CONTAINER_NAME not found\"\nfi\n\n# Get logs of the container\nLOGS=$(sudo docker logs --tail 1000 \"$CONTAINER_NAME\" 2>&1)\nif [ $? -ne 0 ]; then\n handle_error \"Failed to retrieve logs for $CONTAINER_NAME\"\nfi\n\n# Format logs as JSON\necho \"$LOGS\" | jq -R -s '{\"logs\": .}'\n\nexit 0"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "3a49cc69-9ec6-47a9-a070-4f763bc29189",
"name": "ChangePackage",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-220,
-440
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\n# Get values for variables from templates\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nCOMPOSE_FILE=\"$COMPOSE_DIR/docker-compose.yml\"\nSTATUS_FILE=\"$COMPOSE_DIR/status\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nVHOST_DIR=\"/opt/docker/nginx-proxy/nginx/vhost.d\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/$DOMAIN\"\nDOCKER_COMPOSE_TEXT='{{ JSON.stringify($('Deploy-docker-compose').item.json['docker-compose']).base64Encode() }}'\n\nNGINX_MAIN_TEXT='{{ JSON.stringify($('nginx').item.json['main']).base64Encode() }}'\nNGINX_MAIN_FILE=\"$NGINX_DIR/$DOMAIN\"\nVHOST_MAIN_FILE=\"$VHOST_DIR/$DOMAIN\"\n\nNGINX_MAIN_LOCATION_TEXT='{{ JSON.stringify($('nginx').item.json['main_location']).base64Encode() }}'\nNGINX_MAIN_LOCATION_FILE=\"$NGINX_DIR/$DOMAIN\"_location\nVHOST_MAIN_LOCATION_FILE=\"$VHOST_DIR/$DOMAIN\"_location\n\n\nDISK_SIZE=\"{{ $('API').item.json.body.disk }}\"\n\n# Function to log an error, write to status file, and print to console\nhandle_error() {\n STATUS_JSON=\"{\\\"status\\\": \\\"error\\\", \\\"message\\\": \\\"$1\\\"}\"\n echo \"$STATUS_JSON\" | sudo tee \"$STATUS_FILE\" > /dev/null\n echo \"error: $1\"\n exit 1\n}\n\n# Create docker-compose.yml file\necho -e \"$DOCKER_COMPOSE_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$COMPOSE_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $COMPOSE_FILE\"\n\n# Check if the compose file exists before stopping the container\nif [ -f \"$COMPOSE_FILE\" ]; then\n sudo docker-compose -f \"$COMPOSE_FILE\" down > /dev/null 2>&1 || handle_error \"Failed to stop containers\"\nelse\n handle_error \"docker-compose.yml not found\"\nfi\n\n# Unmount the image if it is currently mounted\nif mount | grep -q \"$MOUNT_DIR\"; then\n sudo umount \"$MOUNT_DIR\" > /dev/null 2>&1 || handle_error \"Failed to unmount 
$MOUNT_DIR\"\nfi\n\n# Create NGINX configuration files\necho -e \"$NGINX_MAIN_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$NGINX_MAIN_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_FILE\"\necho -e \"$NGINX_MAIN_LOCATION_TEXT\" | base64 --decode | sed 's/\\\\n/\\n/g' | sed 's/\\\\\"/\"/g' | sed '1s/^\"//' | sed '$s/\"$//' | sudo tee \"$NGINX_MAIN_LOCATION_FILE\" > /dev/null 2>&1 || handle_error \"Failed to create $NGINX_MAIN_LOCATION_FILE\"\n\n# Resize the disk image if it exists\nif [ -f \"$IMG_FILE\" ]; then\n sudo truncate -s \"$DISK_SIZE\"G \"$IMG_FILE\" > /dev/null 2>&1 || handle_error \"Failed to resize $IMG_FILE (truncate)\"\n sudo e2fsck -fy \"$IMG_FILE\" > /dev/null 2>&1 || handle_error \"Filesystem check failed on $IMG_FILE\"\n sudo resize2fs \"$IMG_FILE\" > /dev/null 2>&1 || handle_error \"Failed to resize filesystem on $IMG_FILE\"\nelse\n handle_error \"Disk image $IMG_FILE does not exist\"\nfi\n\n# Mount the disk only if it is not already mounted\nif ! mount | grep -q \"$MOUNT_DIR\"; then\n sudo mount -a || handle_error \"Failed to mount entries from /etc/fstab\"\nfi\n\n# Change to the compose directory\ncd \"$COMPOSE_DIR\" > /dev/null 2>&1 || handle_error \"Failed to change directory to $COMPOSE_DIR\"\n\n# Copy NGINX configuration files instead of creating symbolic links\nsudo cp -f \"$NGINX_MAIN_FILE\" \"$VHOST_MAIN_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_FILE to $VHOST_MAIN_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_FILE\"\n\nsudo cp -f \"$NGINX_MAIN_LOCATION_FILE\" \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_LOCATION_FILE to $VHOST_MAIN_LOCATION_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_LOCATION_FILE\"\n\n# Start Docker containers using docker-compose\nif ! sudo docker-compose up -d > /dev/null 2>error.log; then\n ERROR_MSG=$(tail -n 10 error.log) # Read the last 10 lines from error.log\n handle_error \"Docker-compose failed: $ERROR_MSG\"\nfi\n\n# Update status file\necho \"active\" | sudo tee \"$STATUS_FILE\" > /dev/null\n\necho \"success\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "4d8352f8-c78c-4450-beb4-70260285928d",
"name": "Sticky Note",
"type": "n8n-nodes-base.stickyNote",
"position": [
-2640,
-1280
],
"parameters": {
"color": 6,
"width": 639,
"height": 909,
"content": "## 👋 Welcome to PUQ Docker Immich deploy!\n## Template for Immich: API Backend for WHMCS/WISECP by PUQcloud\n\nv.1\n\nThis is an n8n template that creates an API backend for the WHMCS/WISECP module developed by PUQcloud.\n\n## Setup Instructions\n\n### 1. Configure API Webhook and SSH Access\n- Create a Credential (Basic Auth) for the **Webhook API Block** in n8n.\n- Create a Credential for **SSH access** to a server with Docker installed (**SSH Block**).\n\n### 2. Modify Template Parameters\nIn the **Parameters** block of the template, update the following settings:\n\n- `server_domain` – must match the domain of the WHMCS/WISECP Docker server.\n- `clients_dir` – directory where user data related to Docker and disks will be stored.\n- `mount_dir` – default mount point for the container disk (recommended not to change).\n\n**Do not modify** the following technical parameters:\n\n- `screen_left`\n- `screen_right`\n\n## Additional Resources\n- Full documentation: [https://doc.puq.info/books/docker-immich-whmcs-module](https://doc.puq.info/books/docker-immich-whmcs-module)\n- WHMCS module: [https://puqcloud.com/whmcs-module-docker-immich.php](https://puqcloud.com/whmcs-module-docker-immich.php)\n\n"
},
"typeVersion": 1
},
{
"id": "e403bcab-056e-48d6-8b61-0fb9a3871dc2",
"name": "Deploy-docker-compose",
"type": "n8n-nodes-base.set",
"position": [
-1240,
-1400
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "docker-compose",
"type": "string",
"value": "=name: \"{{ $('API').item.json.body.domain }}\"\n\nservices:\n {{ $('API').item.json.body.domain }}_immich:\n container_name: {{ $('API').item.json.body.domain }}_immich\n image: ghcr.io/immich-app/immich-server:release\n restart: unless-stopped\n volumes:\n - {{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}/library:/usr/src/app/upload\n - /etc/localtime:/etc/localtime:ro\n environment:\n - LETSENCRYPT_HOST={{ $('API').item.json.body.domain }}\n - VIRTUAL_HOST={{ $('API').item.json.body.domain }}\n - DB_HOSTNAME={{ $('API').item.json.body.domain }}_db\n - DB_PASSWORD={{ $('API').item.json.body.password }}\n - DB_USERNAME={{ $('API').item.json.body.username }}\n - DB_DATABASE_NAME=immich\n - REDIS_HOSTNAME={{ $('API').item.json.body.domain }}_redis\n - IMMICH_MACHINE_LEARNING_URL=http://{{ $('API').item.json.body.domain }}_ml:3003\n depends_on:\n - {{ $('API').item.json.body.domain }}_redis\n - {{ $('API').item.json.body.domain }}_db\n healthcheck:\n disable: false\n networks:\n - nginx-proxy_web\n mem_limit: \"{{ $('API').item.json.body.ram }}G\"\n cpus: \"{{ $('API').item.json.body.cpu }}\"\n\n {{ $('API').item.json.body.domain }}_ml:\n container_name: {{ $('API').item.json.body.domain }}_ml\n image: ghcr.io/immich-app/immich-machine-learning:release\n volumes:\n - {{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}/cache:/cache\n restart: always\n healthcheck:\n disable: false\n networks:\n - nginx-proxy_web\n mem_limit: \"{{ $('API').item.json.body.ram }}G\"\n cpus: \"{{ $('API').item.json.body.cpu }}\"\n\n {{ $('API').item.json.body.domain }}_redis:\n container_name: {{ $('API').item.json.body.domain }}_redis\n image: docker.io/redis:6.2-alpine@sha256:148bb5411c184abd288d9aaed139c98123eeb8824c5d3fce03cf721db58066d8\n healthcheck:\n test: redis-cli ping || exit 1\n restart: always\n networks:\n - nginx-proxy_web\n mem_limit: \"{{ $('API').item.json.body.ram }}G\"\n cpus: \"{{ 
$('API').item.json.body.cpu }}\"\n\n {{ $('API').item.json.body.domain }}_db:\n container_name: {{ $('API').item.json.body.domain }}_db\n image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:739cdd626151ff1f796dc95a6591b55a714f341c737e27f045019ceabf8e8c52\n environment:\n POSTGRES_PASSWORD: {{ $('API').item.json.body.password }}\n POSTGRES_USER: {{ $('API').item.json.body.username }}\n POSTGRES_DB: immich\n POSTGRES_INITDB_ARGS: '--data-checksums'\n volumes:\n - {{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}/postgres:/var/lib/postgresql/data\n healthcheck:\n test: >-\n pg_isready --dbname=\"immich\" --username=\"{{ $('API').item.json.body.username }}\" || exit 1;\n Chksum=\"$$(psql --dbname=\"immich\" --username=\"{{ $('API').item.json.body.username }}\" --tuples-only --no-align\n --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')\";\n echo \"checksum failure count is $$Chksum\";\n [ \"$$Chksum\" = '0' ] || exit 1\n interval: 5m\n start_interval: 30s\n start_period: 5m\n command: >-\n postgres\n -c shared_preload_libraries=vectors.so\n -c 'search_path=\"$$user\", public, vectors'\n -c logging_collector=on\n -c max_wal_size=2GB\n -c shared_buffers=512MB\n -c wal_compression=on\n restart: always\n networks:\n - nginx-proxy_web\n mem_limit: \"{{ $('API').item.json.body.ram }}G\"\n cpus: \"{{ $('API').item.json.body.cpu }}\"\n\nnetworks:\n nginx-proxy_web:\n external: true\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "991b6d61-6d73-4df8-ab1d-31e53d945a8a",
"name": "Version",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1080,
1300
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\nVERSION_JSON=\"{}\"\n\n# Function to return error in JSON format\nhandle_error() {\n echo \"{\\\"status\\\": \\\"error\\\", \\\"message\\\": \\\"$1\\\"}\"\n exit 1\n}\n\n# Check if the container exists\nif ! sudo docker ps -a | grep -q \"$CONTAINER_NAME\" > /dev/null 2>&1; then\n handle_error \"Container $CONTAINER_NAME not found\"\nfi\n\n# Get the Immich version from the container\nVERSION=$(sudo docker exec \"$CONTAINER_NAME\" immich --version)\n\n# Format version as JSON\nVERSION_JSON=\"{\\\"version\\\": \\\"$VERSION\\\"}\"\n\necho \"$VERSION_JSON\"\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "97c4026f-f14f-4c06-b676-fc5ea47d7fff",
"name": "Users",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1140,
1460
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_db\"\nUSERNAME=\"{{ $('API').item.json.body.username }}\"\n\n# Function to return error in JSON format\nhandle_error() {\n echo \"{\\\"status\\\": \\\"error\\\", \\\"message\\\": \\\"$1\\\"}\"\n exit 1\n}\n\n# Run query inside container and format JSON output\nUSERS=$(sudo docker exec -i $CONTAINER_NAME psql -U $USERNAME -d immich -t -A -c \"SELECT COALESCE(json_agg(users), '[]') FROM users;\" 2>&1)\nif [ $? -ne 0 ] || [[ $USERS == *\"ERROR\"* ]]; then\n handle_error \"Failed to retrieve users from database: $USERS\"\nfi\n\n# Trim whitespace and construct JSON response\nUSERS_JSON=\"{\\\"status\\\": \\\"success\\\", \\\"users\\\": $USERS}\"\n\necho \"$USERS_JSON\"\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "979eebfb-47f7-4250-9a6a-d87f78682686",
"name": "If1",
"type": "n8n-nodes-base.if",
"position": [
-1780,
-1260
],
"parameters": {
"options": {},
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "or",
"conditions": [
{
"id": "8602bd4c-9693-4d5f-9e7d-5ee62210baca",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "create"
},
{
"id": "1c630b59-0e5a-441d-8aa5-70b31338d897",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "change_package"
},
{
"id": "b3eb7052-a70f-438e-befd-8c5240df32c7",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "unsuspend"
}
]
}
},
"typeVersion": 2.2
},
{
"id": "5ae1c86d-ca50-4db5-8fb3-4e8bcf70b482",
"name": "nginx",
"type": "n8n-nodes-base.set",
"position": [
-1520,
-1400
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "main",
"type": "string",
"value": "= client_max_body_size 50000M;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n proxy_redirect off;\n proxy_read_timeout 600s;\n proxy_send_timeout 600s;\n send_timeout 600s;"
},
{
"id": "6507763a-21d4-4ff0-84d2-5dc9d21b7430",
"name": "main_location",
"type": "string",
"value": "=proxy_pass_request_headers on;\nproxy_set_header Host $host;\nproxy_set_header X-Forwarded-Host $http_host;\nproxy_set_header X-Forwarded-Proto $scheme;\nproxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; \nproxy_set_header Upgrade $http_upgrade;\nproxy_set_header Connection \"upgrade\";\nproxy_read_timeout 86400;"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "92968208-7eb5-4138-bd11-d7ae2f83e6a8",
"name": "Container Stat",
"type": "n8n-nodes-base.switch",
"position": [
-1620,
-480
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "inspect",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "66ad264d-5393-410c-bfa3-011ab8eb234a",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_information_inspect"
}
]
},
"renameOutput": true
},
{
"outputKey": "stats",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "b48957a0-22c0-4ac0-82ef-abd9e7ab0207",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_information_stats"
}
]
},
"renameOutput": true
},
{
"outputKey": "log",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "50ede522-af22-4b7a-b1fd-34b27fd3fadd",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "container_log"
}
]
},
"renameOutput": true
},
{
"outputKey": "dependent_containers_information_stats",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "9f6b0bb4-e402-401f-8980-27aa38619627",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "dependent_containers_information_stats"
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"typeVersion": 3.2
},
{
"id": "8889e19f-580a-4358-a423-3b6b42bdb970",
"name": "GET ACL",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1180,
560
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\n# Get values for variables from templates\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\n\nNGINX_MAIN_ACL_FILE=\"$NGINX_DIR/$DOMAIN\"_acl\n\n# Function to log an error and exit\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\n# Read files if they exist, else assign empty array\nif [[ -f \"$NGINX_MAIN_ACL_FILE\" ]]; then\n MAIN_IPS=$(cat \"$NGINX_MAIN_ACL_FILE\" | jq -R -s 'split(\"\\n\") | map(select(length > 0))')\nelse\n MAIN_IPS=\"[]\"\nfi\n\n# Output JSON\necho \"{ \\\"main_ips\\\": $MAIN_IPS}\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "3f07ea30-20e6-475e-87a1-c17b4ebf9938",
"name": "SET ACL",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1060,
700
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\n# Get values for variables from templates\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nVHOST_DIR=\"/opt/docker/nginx-proxy/nginx/vhost.d\"\n\nNGINX_MAIN_ACL_FILE=\"$NGINX_DIR/$DOMAIN\"_acl\nNGINX_MAIN_ACL_TEXT=\"{{ $('API').item.json.body.main_ips }}\"\nVHOST_MAIN_LOCATION_FILE=\"$VHOST_DIR/$DOMAIN\"_location\nNGINX_MAIN_LOCATION_FILE=\"$NGINX_DIR/$DOMAIN\"_location\n\n# Function to log an error and exit\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\nupdate_nginx_acl() {\n ACL_FILE=$1\n LOCATION_FILE=$2\n \n if [ -s \"$ACL_FILE\" ]; then\n VALID_LINES=$(grep -vE '^\\s*$' \"$ACL_FILE\")\n if [ -n \"$VALID_LINES\" ]; then\n while IFS= read -r line; do\n echo \"allow $line;\" | sudo tee -a \"$LOCATION_FILE\" > /dev/null || handle_error \"Failed to update $LOCATION_FILE\"\n done <<< \"$VALID_LINES\"\n echo \"deny all;\" | sudo tee -a \"$LOCATION_FILE\" > /dev/null || handle_error \"Failed to update $LOCATION_FILE\"\n fi\n fi\n}\n\n# Create or overwrite the file with the content from variables\necho \"$NGINX_MAIN_ACL_TEXT\" | sudo tee \"$NGINX_MAIN_ACL_FILE\" > /dev/null\n\nsudo cp -f \"$NGINX_MAIN_LOCATION_FILE\" \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to copy $NGINX_MAIN_LOCATION_FILE to $VHOST_MAIN_LOCATION_FILE\"\nsudo chmod 777 \"$VHOST_MAIN_LOCATION_FILE\" || handle_error \"Failed to set permissions on $VHOST_MAIN_LOCATION_FILE\"\n\nupdate_nginx_acl \"$NGINX_MAIN_ACL_FILE\" \"$VHOST_MAIN_LOCATION_FILE\"\n\n# Reload Nginx with sudo\nif sudo docker exec nginx-proxy nginx -s reload; then\n echo \"success\"\nelse\n handle_error \"Failed to reload Nginx.\"\nfi\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "3ed565af-3349-4078-a100-80e795d8ce43",
"name": "GET NET",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1180,
840
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\n# Get values for variables from templates\nDOMAIN=\"{{ $('API').item.json.body.domain }}\"\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/$DOMAIN\"\nNGINX_DIR=\"$COMPOSE_DIR/nginx\"\nNET_IN_FILE=\"$COMPOSE_DIR/net_in\"\nNET_OUT_FILE=\"$COMPOSE_DIR/net_out\"\n\n# Function to log an error and exit\nhandle_error() {\n echo \"error: $1\"\n exit 1\n}\n\n# Get current network statistics from container\nSTATS=$(sudo docker exec \"$CONTAINER_NAME\" cat /proc/net/dev | grep eth0) || handle_error \"Failed to get network stats\"\nNET_IN_NEW=$(echo \"$STATS\" | awk '{print $2}') # RX bytes (received)\nNET_OUT_NEW=$(echo \"$STATS\" | awk '{print $10}') # TX bytes (transmitted)\n\n# Ensure directory exists\nmkdir -p \"$COMPOSE_DIR\"\n\n# Read old values, create files if they don't exist\nif [[ -f \"$NET_IN_FILE\" ]]; then\n NET_IN_OLD=$(sudo cat \"$NET_IN_FILE\")\nelse\n NET_IN_OLD=0\nfi\n\nif [[ -f \"$NET_OUT_FILE\" ]]; then\n NET_OUT_OLD=$(sudo cat \"$NET_OUT_FILE\")\nelse\n NET_OUT_OLD=0\nfi\n\n# Save new values\necho \"$NET_IN_NEW\" | sudo tee \"$NET_IN_FILE\" > /dev/null\necho \"$NET_OUT_NEW\" | sudo tee \"$NET_OUT_FILE\" > /dev/null\n\n# Output JSON\necho \"{ \\\"net_in_new\\\": $NET_IN_NEW, \\\"net_out_new\\\": $NET_OUT_NEW, \\\"net_in_old\\\": $NET_IN_OLD, \\\"net_out_old\\\": $NET_OUT_OLD }\"\n\nexit 0\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "5be9a4c6-1eb5-4bd4-b802-f820c90a53a4",
"name": "Dependent containers Stat",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1100,
-260
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCOMPOSE_DIR=\"{{ $('Parametrs').item.json.clients_dir }}/{{ $('API').item.json.body.domain }}\"\nIMG_FILE=\"$COMPOSE_DIR/data.img\"\nMOUNT_DIR=\"{{ $('Parametrs').item.json.mount_dir }}/{{ $('API').item.json.body.domain }}\"\n\nCONTAINER_NAME_ML=\"{{ $('API').item.json.body.domain }}_ml\"\nCONTAINER_NAME_DB=\"{{ $('API').item.json.body.domain }}_db\"\nCONTAINER_NAME_REDIS=\"{{ $('API').item.json.body.domain }}_redis\"\n\n# Initialize empty container data\nINSPECT_JSON_ML=\"{}\"\nSTATS_JSON_ML=\"{}\"\n\nINSPECT_JSON_DB=\"{}\"\nSTATS_JSON_DB=\"{}\"\n\nINSPECT_JSON_REDIS=\"{}\"\nSTATS_JSON_REDIS=\"{}\"\n\n# Check if container is running\nif sudo docker ps -a --filter \"name=$CONTAINER_NAME_ML\" | grep -q \"$CONTAINER_NAME_ML\"; then\n # Get Docker inspect info in JSON (as raw string)\n INSPECT_JSON_ML=$(sudo docker inspect \"$CONTAINER_NAME_ML\")\n # Get Docker stats info in JSON (as raw string)\n STATS_JSON_ML=$(sudo docker stats --no-stream --format \"{{ $('Parametrs').item.json.screen_left }}json .{{ $('Parametrs').item.json.screen_right }}\" \"$CONTAINER_NAME_ML\")\n STATS_JSON_ML=${STATS_JSON_ML:-'{}'}\nfi\n\n# Check if container is running\nif sudo docker ps -a --filter \"name=$CONTAINER_NAME_DB\" | grep -q \"$CONTAINER_NAME_DB\"; then\n # Get Docker inspect info in JSON (as raw string)\n INSPECT_JSON_DB=$(sudo docker inspect \"$CONTAINER_NAME_DB\")\n # Get Docker stats info in JSON (as raw string)\n STATS_JSON_DB=$(sudo docker stats --no-stream --format \"{{ $('Parametrs').item.json.screen_left }}json .{{ $('Parametrs').item.json.screen_right }}\" \"$CONTAINER_NAME_DB\")\n STATS_JSON_DB=${STATS_JSON_DB:-'{}'}\nfi\n\n# Check if container is running\nif sudo docker ps -a --filter \"name=$CONTAINER_NAME_REDIS\" | grep -q \"$CONTAINER_NAME_REDIS\"; then\n # Get Docker inspect info in JSON (as raw string)\n INSPECT_JSON_REDIS=$(sudo docker inspect \"$CONTAINER_NAME_REDIS\")\n # Get Docker stats info in JSON (as raw string)\n 
STATS_JSON_REDIS=$(sudo docker stats --no-stream --format \"{{ $('Parametrs').item.json.screen_left }}json .{{ $('Parametrs').item.json.screen_right }}\" \"$CONTAINER_NAME_REDIS\")\n STATS_JSON_REDIS=${STATS_JSON_REDIS:-'{}'}\nfi\n\n# Manually create a combined JSON object\nFINAL_JSON=\"{\\\"inspect_ml\\\": $INSPECT_JSON_ML, \\\"stats_ml\\\": $STATS_JSON_ML,\\\"inspect_db\\\": $INSPECT_JSON_DB, \\\"stats_db\\\": $STATS_JSON_DB,\\\"inspect_redis\\\": $INSPECT_JSON_REDIS, \\\"stats_redis\\\": $STATS_JSON_REDIS}\"\n\n# Output the result\necho \"$FINAL_JSON\"\n\nexit 0"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "dd2d7265-3b13-468b-b80b-e769572c7d26",
"name": "Change Password",
"type": "n8n-nodes-base.set",
"onError": "continueRegularOutput",
"position": [
-1140,
1660
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "21f4453e-c136-4388-be90-1411ae78e8a5",
"name": "sh",
"type": "string",
"value": "=#!/bin/bash\n\nCONTAINER_NAME=\"{{ $('API').item.json.body.domain }}_immich\"\nNEW_PASSWORD=\"{{ $('API').item.json.body.password }}\"\n\n# Function to return error in JSON format\nhandle_error() {\n echo \"{\\\"status\\\": \\\"error\\\", \\\"message\\\": \\\"$1\\\"}\"\n exit 1\n}\n\n# Run the password reset command with auto-input\nRESET_RESULT=$(sudo docker exec -i $CONTAINER_NAME bin/immich-admin reset-admin-password <<EOF\n$NEW_PASSWORD\nEOF\n)\n\n# Check if the reset was successful\nif [[ $RESET_RESULT == *\"The admin password has been updated.\"* ]]; then\n echo \"{\\\"status\\\": \\\"success\\\"}\"\n exit 0\nelse\n handle_error \"Failed to reset admin password: $RESET_RESULT\"\nfi\n"
}
]
}
},
"typeVersion": 3.4,
"alwaysOutputData": true
},
{
"id": "58598b7a-20cb-4643-b7fb-db4b27fbfdec",
"name": "Immich",
"type": "n8n-nodes-base.switch",
"position": [
-1680,
1380
],
"parameters": {
"rules": {
"values": [
{
"outputKey": "version",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "66ad264d-5393-410c-bfa3-011ab8eb234a",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "app_version"
}
]
},
"renameOutput": true
},
{
"outputKey": "users",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "b48957a0-22c0-4ac0-82ef-abd9e7ab0207",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "app_users"
}
]
},
"renameOutput": true
},
{
"outputKey": "change_password",
"conditions": {
"options": {
"version": 2,
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "and",
"conditions": [
{
"id": "7df93a6e-b308-4703-9df8-24ea296a1443",
"operator": {
"name": "filter.operator.equals",
"type": "string",
"operation": "equals"
},
"leftValue": "={{ $('API').item.json.body.command }}",
"rightValue": "change_password"
}
]
},
"renameOutput": true
}
]
},
"options": {}
},
"typeVersion": 3.2
}
],
"active": true,
"pinData": {},
"settings": {
"timezone": "America/Winnipeg",
"callerPolicy": "workflowsFromSameOwner",
"executionOrder": "v1"
},
"versionId": "e5fc53af-d1a6-42ec-b795-4381c782d159",
"connections": {
"If": {
"main": [
[
{
"node": "Container Stat",
"type": "main",
"index": 0
},
{
"node": "Container Actions",
"type": "main",
"index": 0
},
{
"node": "Immich",
"type": "main",
"index": 0
},
{
"node": "If1",
"type": "main",
"index": 0
}
],
[
{
"node": "422-Invalid server domain",
"type": "main",
"index": 0
}
]
]
},
"API": {
"main": [
[
{
"node": "Parametrs",
"type": "main",
"index": 0
}
],
[]
]
},
"If1": {
"main": [
[
{
"node": "nginx",
"type": "main",
"index": 0
}
],
[
{
"node": "Service Actions",
"type": "main",
"index": 0
}
]
]
},
"Log": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"SSH": {
"main": [
[
{
"node": "Code1",
"type": "main",
"index": 0
}
],
[
{
"node": "Code1",
"type": "main",
"index": 0
}
]
]
},
"Stat": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Stop": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Code1": {
"main": [
[
{
"node": "API answer",
"type": "main",
"index": 0
}
]
]
},
"Start": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Users": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"nginx": {
"main": [
[
{
"node": "Deploy-docker-compose",
"type": "main",
"index": 0
}
]
]
},
"Deploy": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Immich": {
"main": [
[
{
"node": "Version",
"type": "main",
"index": 0
}
],
[
{
"node": "Users",
"type": "main",
"index": 0
}
],
[
{
"node": "Change Password",
"type": "main",
"index": 0
}
]
]
},
"GET ACL": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"GET NET": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Inspect": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"SET ACL": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Suspend": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
],
[]
]
},
"Version": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Parametrs": {
"main": [
[
{
"node": "If",
"type": "main",
"index": 0
}
]
]
},
"Unsuspend": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Mount Disk": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Terminated": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Unmount Disk": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"ChangePackage": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Container Stat": {
"main": [
[
{
"node": "Inspect",
"type": "main",
"index": 0
}
],
[
{
"node": "Stat",
"type": "main",
"index": 0
}
],
[
{
"node": "Log",
"type": "main",
"index": 0
}
],
[
{
"node": "Dependent containers Stat",
"type": "main",
"index": 0
}
]
]
},
"Change Password": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Service Actions": {
"main": [
[
{
"node": "Test Connection1",
"type": "main",
"index": 0
}
],
[
{
"node": "Deploy",
"type": "main",
"index": 0
}
],
[
{
"node": "Suspend",
"type": "main",
"index": 0
}
],
[
{
"node": "Unsuspend",
"type": "main",
"index": 0
}
],
[
{
"node": "Terminated",
"type": "main",
"index": 0
}
],
[
{
"node": "ChangePackage",
"type": "main",
"index": 0
}
]
]
},
"Test Connection1": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
},
"Container Actions": {
"main": [
[
{
"node": "Start",
"type": "main",
"index": 0
}
],
[
{
"node": "Stop",
"type": "main",
"index": 0
}
],
[
{
"node": "Mount Disk",
"type": "main",
"index": 0
}
],
[
{
"node": "Unmount Disk",
"type": "main",
"index": 0
}
],
[
{
"node": "GET ACL",
"type": "main",
"index": 0
}
],
[
{
"node": "SET ACL",
"type": "main",
"index": 0
}
],
[
{
"node": "GET NET",
"type": "main",
"index": 0
}
]
]
},
"Deploy-docker-compose": {
"main": [
[
{
"node": "Service Actions",
"type": "main",
"index": 0
}
]
]
},
"Dependent containers Stat": {
"main": [
[
{
"node": "SSH",
"type": "main",
"index": 0
}
]
]
}
}
}
n8n Docker workflow, development: who is this workflow for?
This workflow is aimed primarily at development and operations teams that use Docker for container management. It is well suited to medium-sized and large companies looking to automate their deployment processes and improve operational efficiency. An intermediate technical level is recommended for customizing and integrating this workflow.
n8n Docker workflow, development: problem solved
This workflow addresses the problem of manual Docker container management, which is error-prone and time-consuming. By automating actions such as starting, stopping, and managing containers, it removes the frustration of repetitive manual intervention. Users get faster, smoother management of their Docker environments, freeing them to focus on developing and optimizing their applications.
n8n Docker workflow, development: workflow steps
- Step 1: The workflow starts with a Webhook trigger that receives requests to initiate Docker actions.
- Step 2: Condition (If) nodes check specific criteria before any commands are executed.
- Step 3: SSH nodes run commands to start or stop containers as needed.
- Step 4: 'set' nodes define the parameters required for each action.
- Step 5: Webhook response nodes return feedback to the caller on the outcome of the operation.
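The dispatch in the steps above hinges on the `body.command` field: the conditions in the workflow JSON match it against values such as `app_version`, `app_users`, and `change_password`. As a minimal sketch (the webhook URL and the `domain` field are assumptions for illustration, not taken from the workflow), a caller might build and POST the payload like this:

```shell
# Build the JSON body a caller would POST to the Webhook node.
# "command" is matched by the workflow's conditions (app_version,
# app_users, change_password, ...); "domain" is a hypothetical
# example parameter.
build_payload() {
  printf '{"command": "%s", "domain": "%s"}' "$1" "$2"
}

build_payload app_version client1.example.com
# The actual call would then be something like (URL is hypothetical):
#   curl -s -X POST "https://n8n.example.com/webhook/puq-docker-immich-deploy" \
#     -H "Content-Type: application/json" \
#     -d "$(build_payload app_version client1.example.com)"
```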
n8n Docker workflow, development: customization guide
To customize this workflow, you can edit the parameters of the 'set' nodes to adapt the SSH commands to the specifics of your Docker environment. Be sure to update the paths and connection credentials the actions require. You can also add extra nodes to integrate other services or tools as needed. To secure the flow, consider using appropriate authentication methods for both the webhooks and the SSH connections.
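As an illustration of the kind of SSH-side adaptation described above, the following sketch maps workflow actions to plausible docker CLI commands. The container name and the exact flags are assumptions for illustration; they are not the workflow's actual scripts.

```shell
# Map a workflow action to the docker command an SSH node might run.
# These are plausible equivalents of the Start/Stop/Stat/Inspect/Log
# nodes, not the commands embedded in the workflow itself.
docker_cmd() {
  action="$1"; container="$2"
  case "$action" in
    start)   echo "docker start $container" ;;
    stop)    echo "docker stop $container" ;;
    stat)    echo "docker stats --no-stream $container" ;;
    inspect) echo "docker inspect $container" ;;
    log)     echo "docker logs --tail 100 $container" ;;
    *)       echo "unknown action: $action" >&2; return 1 ;;
  esac
}

# Hypothetical container name, e.g. derived from the client domain
# by the 'set' (Parametrs) node:
docker_cmd start immich-client1
```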