
How to Self-Host n8n: Complete Guide to Installation, Security, Webhooks, and Remote Access

n8n is the most powerful open-source workflow automation platform available. Unlike Zapier or Make, self-hosting n8n means no per-task fees, no execution limits, and your credentials never leave your own hardware.

n8n · Workflow Automation · Docker · Raspberry Pi · 2026


This guide covers every installation method (Docker Compose with PostgreSQL, npm on Raspberry Pi), every critical environment variable including N8N_ENCRYPTION_KEY and WEBHOOK_URL, how to secure your instance before exposing it, how to receive webhooks from external services like Stripe and GitHub, how to back up your workflows and credentials, and how to make n8n accessible from anywhere using a Localtonet tunnel, without port forwarding or a static IP.

🐳 Docker Compose · PostgreSQL 🍓 Raspberry Pi 4 / 5 · npm · PM2 🌐 Webhooks · Remote Access · No Port Forwarding

What Is n8n?

n8n (pronounced "nodemation") is an open-source, fair-code workflow automation platform. It was first released in 2019 and has grown to over 230,000 active users. In mid-2025, n8n's valuation reached $1.5 billion, driven by its rapid growth and expanding enterprise user base. It provides a node-based visual editor where each node represents an integration or function. You connect nodes on a canvas to build automation workflows, and you can write JavaScript or Python inside Code nodes when visual nodes are not enough.

n8n supports over 400 integrations natively and gives you access to 900+ community workflow templates. It is also the most AI-native automation platform currently available, with nearly 70 dedicated nodes for LangChain, LLM orchestration, RAG pipelines, and AI agents.

🔗 400+ Integrations HTTP, Webhook, Slack, GitHub, Stripe, Google Sheets, PostgreSQL, MySQL, OpenAI, and hundreds more. Build HTTP Request nodes to connect any REST API.
🤖 AI-Native Platform Native LangChain integration with ~70 AI nodes. Build AI agent workflows, RAG pipelines, and multi-model orchestration on your own hardware.
💻 Code When You Need It Write JavaScript or Python directly inside Code nodes. Install npm packages in self-hosted mode. No restrictions on custom logic.
📡 Webhook Triggers The Webhook node creates a public endpoint that triggers workflows from GitHub, Stripe, Twilio, Shopify, or any service that sends HTTP events.
⏰ Schedule and Event Triggers Cron-based scheduling, polling triggers, and event-driven execution. Workflows run automatically without manual intervention.
🔒 Fair-Code License Free to self-host with unlimited workflows and executions. No per-task fees. Only infrastructure costs. Enterprise features available under a paid license.

n8n vs Zapier vs Make: Which Should You Choose?

Before committing to self-hosting n8n, it is worth understanding how it compares to the two major cloud alternatives. The differences are fundamental, not just cosmetic.

| Feature | n8n (Self-Hosted) | Zapier | Make (formerly Integromat) |
|---|---|---|---|
| Pricing model | Free (infrastructure cost only) | Per task (each action = 1 task) | Per operation (each step = 1 operation) |
| Free tier limits | Unlimited when self-hosted | 100 tasks/month, 5 Zaps | 1,000 operations/month |
| Cloud paid starting price | $22/month (2,500 executions) | $19.99/month (750 tasks) | ~$9/month (10,000 operations) |
| High-volume cost | Fixed infrastructure cost (e.g., $5-10/month VPS or Pi) | Scales steeply: more tasks = much higher bill | Better than Zapier but still operation-based |
| Data privacy | 100% on your hardware; data never leaves your network | US cloud servers, subject to US law | EU-based, better GDPR compliance |
| Hosting | Self-hosted or n8n Cloud | Cloud only | Cloud only |
| Custom code | Full JavaScript and Python, install npm packages | Limited JavaScript, no packages, 6 MB limit | No custom code execution |
| AI workflows | ~70 native LangChain nodes, self-hosted LLMs | Basic AI steps, no local LLM support | Basic AI integrations |
| Integrations | 400+ native + any HTTP API via HTTP Request node | 8,000+ native integrations | 2,000+ integrations |
| Technical skill required | Medium to high (server management + JavaScript helpful) | Low (designed for non-technical users) | Medium (automation concepts needed) |
| Execution counting | Per workflow run (not per step) | Per task (each step in each run) | Per operation (each step) |

When self-hosting n8n makes financial sense

n8n's execution-based pricing is a fundamentally different model from Zapier's task-based model. A workflow that processes 10,000 records counts as 1 execution in n8n, but as 10,000 tasks in Zapier. For any workflow processing significant data volumes, self-hosted n8n becomes dramatically cheaper. A common scenario: a team spending $200/month on Zapier can run the same workflows on a $10/month VPS or a $70 Raspberry Pi with no monthly software fee.
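The counting difference can be made concrete with a little arithmetic. This sketch only restates the two billing models; the record count is illustrative:

```shell
# Billing-model arithmetic (illustrative figures):
# One run that touches 10,000 records:
records_per_run=10000
zapier_tasks=$((records_per_run))   # Zapier: every record processed = a billed task
n8n_executions=1                    # n8n: the whole run = 1 execution
echo "Zapier tasks billed:   $zapier_tasks"
echo "n8n executions billed: $n8n_executions"
```

At any realistic per-task price, task-based billing grows linearly with data volume while the self-hosted infrastructure cost stays flat.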

System Requirements

| Component | Minimum | Recommended for Production |
|---|---|---|
| CPU | 1 core, 64-bit | 2+ cores (Raspberry Pi 4/5 or x86_64) |
| RAM | 1 GB | 2-4 GB (more if running AI workflows) |
| Storage | 8 GB | SSD recommended for PostgreSQL performance |
| Node.js | v20.19 (minimum) | v20.x LTS or v22.x (n8n supports up to v24) |
| Database | SQLite (built-in, development only) | PostgreSQL (required for production and queue mode) |
| OS | Any Linux (amd64 or arm64), macOS, Windows | Ubuntu 22.04+ or Raspberry Pi OS Bookworm 64-bit |

SQLite is for development only

n8n defaults to SQLite, which is fine for testing on your local machine. For any production use, switch to PostgreSQL. SQLite causes slow editor performance, database lock errors, and does not support n8n's queue mode for scaled deployments. The Docker Compose setup below uses PostgreSQL from the start.

Installation Method 1: Docker Compose with PostgreSQL (Recommended)

Docker Compose is the method recommended by n8n for most self-hosting scenarios. It isolates n8n and its database in containers, makes upgrades a one-command operation, and handles all dependency management automatically.

Step 1: Install Docker

# Linux (Ubuntu, Debian):
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
# Log out and back in for the group change to take effect

# Verify:
docker --version
docker compose version

Expected output:

Docker version 27.x.x, build xxxxxxx
Docker Compose version v2.x.x

Step 2: Generate the encryption key

The N8N_ENCRYPTION_KEY is the most critical configuration value in any n8n deployment. n8n uses it to encrypt all credentials (API keys, OAuth tokens, passwords) before storing them in the database. Generate it now, before first launch, and store it somewhere safe:

openssl rand -hex 32

Expected output (yours will be different):

a3f8d2e1b4c7f9a0e2d5c8b1a4f7e0d3c6b9a2e5d8c1b4f7a0e3d6c9b2a5f8e1
If you lose N8N_ENCRYPTION_KEY, all credentials are permanently lost

This is the most common disaster in self-hosted n8n deployments. If you redeploy n8n without this key, or if n8n generates a new one on startup (which it does if the key is not explicitly set), every stored credential shows "could not be decrypted." There is no recovery. Store this key in a password manager or secrets vault immediately. The key must be identical across all n8n instances if you run queue mode workers.
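One way to guard against this is a small pre-start check that compares the current key against a copy saved on first launch. This is a sketch; the backup-file path is an assumption, and you should point it at any protected location:

```shell
# Pre-start guard sketch: refuse to launch n8n if the encryption key is
# missing or differs from a copy saved on first launch.
# The backup-file path is an assumption; use any protected location.
check_encryption_key() {
  key="$1"
  key_file="${2:-$HOME/.n8n-key-backup}"
  if [ -z "$key" ]; then
    echo "N8N_ENCRYPTION_KEY is not set; refusing to start" >&2
    return 1
  fi
  if [ -f "$key_file" ]; then
    if [ "$(cat "$key_file")" != "$key" ]; then
      echo "key differs from the saved copy; aborting" >&2
      return 1
    fi
  else
    (umask 077; printf '%s' "$key" > "$key_file")   # save the first-seen key
  fi
  echo "encryption key OK"
}

# Usage: check_encryption_key "$N8N_ENCRYPTION_KEY" && n8n start
```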

Step 3: Create the project directory and .env file

mkdir ~/n8n && cd ~/n8n
cat > .env << 'EOF'
# Database
POSTGRES_USER=n8n
POSTGRES_PASSWORD=change_this_db_password
POSTGRES_DB=n8n

# n8n Core (replace with your actual values)
N8N_ENCRYPTION_KEY=your_64_hex_key_from_step_2
N8N_HOST=localhost
N8N_PORT=5678
N8N_PROTOCOL=http

# Timezone (change to your timezone)
GENERIC_TIMEZONE=Europe/Istanbul
TZ=Europe/Istanbul

# Execution data pruning (recommended for production)
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EOF
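Before first launch, a quick grep can catch placeholder values accidentally left in .env. A minimal sketch; the patterns match the placeholders used in the file above:

```shell
# Sanity-check sketch: warn if .env still contains the placeholder values
# from the template above.
check_placeholders() {
  if grep -qE 'change_this|your_64_hex_key' "$1"; then
    echo "WARNING: $1 still contains placeholder values"
    return 1
  fi
  echo "$1 looks ready"
}

# Usage: check_placeholders .env
```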

Step 4: Create docker-compose.yml

cat > docker-compose.yml << 'EOF'
services:

  postgres:
    image: postgres:16
    container_name: n8n-postgres
    restart: unless-stopped
    env_file: .env
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -h localhost -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 5s
      timeout: 5s
      retries: 10

  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    env_file: .env
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=${N8N_PORT}
      - N8N_PROTOCOL=${N8N_PROTOCOL}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${TZ}
      - EXECUTIONS_DATA_PRUNE=${EXECUTIONS_DATA_PRUNE}
      - EXECUTIONS_DATA_MAX_AGE=${EXECUTIONS_DATA_MAX_AGE}
      - N8N_RUNNERS_ENABLED=true
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/files
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres_data:
  n8n_data:
EOF
# Create the shared files directory:
mkdir -p ./local-files

Step 5: Start n8n

docker compose up -d

Docker pulls the images and starts both containers. Check status:

docker compose ps

Expected output:

NAME           IMAGE                          STATUS
n8n            docker.n8n.io/n8nio/n8n:latest  Up 30 seconds
n8n-postgres   postgres:16                    Up 30 seconds (healthy)

Open http://localhost:5678 in your browser. The n8n setup wizard asks you to create the first owner account. Fill in your name, email, and password. This is not basic auth — it is n8n's built-in user management system.

# View live logs to confirm n8n started correctly:
docker compose logs -f n8n

Expected startup log:

n8n ready on 0.0.0.0, port 5678
Version: 1.xx.x
Workflow execution: running in main mode
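Instead of watching logs, a small polling helper can wait for n8n's /healthz endpoint to answer before you run anything that depends on it. The URL and retry count here are assumptions; adjust them for your setup:

```shell
# Readiness-probe sketch: poll the /healthz endpoint until n8n answers
# or the retry budget is exhausted.
wait_for_n8n() {
  url="${1:-http://localhost:5678/healthz}"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS --max-time 2 "$url" > /dev/null 2>&1; then
      echo "n8n is ready"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "n8n did not become ready after $tries attempts" >&2
  return 1
}

# Usage after docker compose up -d:
# wait_for_n8n && docker compose logs --tail=5 n8n
```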

Installation Method 2: Raspberry Pi with npm

The Raspberry Pi 4 (4 GB RAM) and Raspberry Pi 5 are excellent platforms for running a personal or small-team n8n instance. The npm installation method is lightweight and uses less RAM than Docker, which is important on a Pi. n8n requires Node.js v20.19 or higher. The default apt repository on Raspberry Pi OS ships with older versions, so you need to install Node.js manually.

Pi Zero, Pi 1, and Pi 2 are not compatible

n8n requires Node.js v20+. Node.js v20+ only supports ARMv7 and ARM64 architectures. Raspberry Pi Zero, Pi 1, and Pi 2 use ARMv6, which is not supported. You need at least a Raspberry Pi 3 (ARMv7, 32-bit) or, for the best experience, a Pi 4 or Pi 5 with the 64-bit OS.

Step 1: Check your architecture

uname -m

Expected output:

# Raspberry Pi 4 / 5 with 64-bit OS:
aarch64

# Raspberry Pi 3 / 4 with 32-bit OS:
armv7l

For the best experience, use the 64-bit version of Raspberry Pi OS (available in Raspberry Pi Imager). n8n runs on both armv7l and aarch64, but 64-bit is more stable and better supported.
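For provisioning scripts, the architecture rules above can be wrapped in a small helper. The mapping follows the compatibility notes in this section:

```shell
# Map a `uname -m` value to n8n / Node.js v20 support, per the notes above.
arch_status() {
  case "$1" in
    aarch64|arm64|x86_64) echo "supported" ;;        # 64-bit: best experience
    armv7l)               echo "supported-32bit" ;;  # Pi 3/4 with 32-bit OS
    armv6l)               echo "unsupported" ;;      # Pi Zero / 1 / 2
    *)                    echo "unknown" ;;
  esac
}

arch_status "$(uname -m)"
```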

Step 2: Install Node.js v20 via NodeSource

sudo apt update && sudo apt upgrade -y

# Add NodeSource repository for Node.js 20.x:
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs

# Verify:
node --version
npm --version

Expected output:

v20.x.x
10.x.x

Step 3: Configure npm global directory (avoid sudo issues)

mkdir -p ~/.npm-global
npm config set prefix '~/.npm-global'
echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
source ~/.bashrc

Step 4: Install n8n

npm install -g n8n

Installation takes 2-5 minutes on a Pi 4. Expected completion output:

added 589 packages in 3m

# Verify:
n8n --version

Expected output:

1.xx.x

Step 5: Create an environment file

mkdir -p ~/.n8n
cat > ~/.n8n/.env << 'EOF'
N8N_ENCRYPTION_KEY=your_64_hex_key_here
DB_TYPE=sqlite
GENERIC_TIMEZONE=Europe/Istanbul
TZ=Europe/Istanbul
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
N8N_RUNNERS_ENABLED=true
EOF

For SQLite (default, fine for personal use on Pi), no additional database setup is needed. The database is stored at ~/.n8n/database.sqlite.

Step 6: Test n8n starts correctly

export $(cat ~/.n8n/.env | xargs)
n8n start

Expected startup output:

n8n ready on 0.0.0.0, port 5678
Version: 1.xx.x

Open http://<pi-ip>:5678 from another device on the same network. Complete the setup wizard. Then stop n8n with Ctrl+C and set it up as a background service.

Step 7: Run n8n as a background service

You have two options: PM2 (recommended for Pi) or systemd.

Option A: PM2 (recommended for Raspberry Pi)

Raspberry Pi · PM2

PM2 is a Node.js process manager. It does not require root access, handles restarts on crash, and integrates well with Node.js apps like n8n.

npm install -g pm2

# Load the environment into the shell, then start n8n with PM2
# (PM2 snapshots the environment at start time):
export $(cat ~/.n8n/.env | xargs)
pm2 start n8n --name n8n -- start

# Save the process list:
pm2 save

# Generate systemd startup script (run the printed command):
pm2 startup

# Useful PM2 commands:
pm2 list          # Show all processes and their status
pm2 logs n8n      # Live logs
pm2 restart n8n   # Restart n8n
pm2 stop n8n      # Stop n8n

Option B: systemd service

Linux · Raspberry Pi
sudo tee /etc/systemd/system/n8n.service << 'EOF'
[Unit]
Description=n8n Workflow Automation
After=network.target

[Service]
Type=simple
User=pi
WorkingDirectory=/home/pi
EnvironmentFile=/home/pi/.n8n/.env
ExecStart=/home/pi/.npm-global/bin/n8n start
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable n8n
sudo systemctl start n8n
sudo systemctl status n8n

Critical Environment Variables Explained

Most n8n configuration is done via environment variables. Understanding the critical ones is the difference between a stable production instance and a fragile deployment that breaks silently.

| Variable | Required | What It Does | Example Value |
|---|---|---|---|
| N8N_ENCRYPTION_KEY | Critical | Encrypts all stored credentials. If lost or changed, all credentials become permanently unreadable. Set before first launch and never change. | openssl rand -hex 32 |
| WEBHOOK_URL | Critical for webhooks | Sets the public URL n8n uses to generate webhook URLs in the editor. Without this, webhooks show localhost:5678/webhook/... instead of your public domain; they work in test mode but fail when external services call them. | https://myflows.localto.net/ |
| N8N_PROTOCOL | Behind proxy | Set to https when n8n is behind a reverse proxy or tunnel that handles TLS. Tells n8n to generate HTTPS links. | https |
| N8N_HOST | Behind proxy | The public hostname. Used to generate correct URLs in the editor. | myflows.localto.net |
| N8N_PROXY_HOPS | Behind proxy | Number of reverse proxy hops in front of n8n. Set to 1 when using a single tunnel or proxy layer. | 1 |
| DB_TYPE | Production | Database type. Default is sqlite. Use postgresdb for production. | postgresdb |
| GENERIC_TIMEZONE | Recommended | Timezone for the Schedule trigger node. Without this, scheduled workflows run in UTC. | Europe/Istanbul |
| EXECUTIONS_DATA_PRUNE | Recommended | Enables automatic deletion of old execution data. Without this, the database grows without limit. | true |
| EXECUTIONS_DATA_MAX_AGE | With pruning | Hours to keep execution data. 168 hours = 7 days. | 168 |
| N8N_RUNNERS_ENABLED | Recommended | Enables the task runner architecture, recommended for stability in n8n 1.x+. | true |
| EXECUTIONS_MODE | Multi-worker only | Set to queue to enable distributed execution via Redis. Requires PostgreSQL and a Redis instance. For single-server setups, leave unset (defaults to main mode). | queue |

Securing Your n8n Instance

An unclaimed n8n instance can be taken over by anyone who can reach it

n8n v1.0+ uses a built-in user management system where the first user to access the instance becomes the owner. The setup wizard runs on first access. Complete it immediately after installation to secure your instance before exposing it to any network. Until the setup wizard is completed, anyone who can reach port 5678 can take ownership of your n8n instance.

Complete the setup wizard

Open http://localhost:5678 immediately after starting n8n for the first time. The wizard asks for:

  • First name and last name
  • Email address (used as login username)
  • Password (minimum 8 characters, mixed case and numbers)

After completing the wizard, n8n requires login to access the editor. You can invite additional users from Settings → Users.

Block environment variable access in nodes

By default, Code nodes and Execute Command nodes can access environment variables, which may include your database password and encryption key. Block this in production:

# Add to your .env:
N8N_BLOCK_ENV_ACCESS_IN_NODE=true

Set execution timeout

Without a timeout, stuck workflows consume memory and CPU indefinitely, eventually crashing the instance:

# Add to your .env:
# Maximum execution time in seconds (3600 = 1 hour):
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_TIMEOUT_MAX=7200

Receiving Webhooks: How n8n Webhooks Work

Webhooks are one of the most powerful n8n features. The Webhook node creates an HTTP endpoint on your n8n instance that external services can call to trigger workflows. To use webhooks with external services (GitHub, Stripe, Twilio, Shopify, etc.), n8n must be reachable from the internet with a public URL.

Test vs Production webhook URLs

Every Webhook node in n8n has two URLs visible in the node panel:

| URL Type | When It Works | Use Case |
|---|---|---|
| Test URL (/webhook-test/...) | Only when the workflow editor is open and you click "Listen for Test Event" | Testing during workflow development |
| Production URL (/webhook/...) | When the workflow is Active (toggle in top right) | Live webhooks from GitHub, Stripe, etc. |

Activate the workflow to use the production webhook URL

A common mistake is configuring an external service with the production webhook URL but leaving the n8n workflow inactive. The production URL only responds when the workflow is toggled to Active. Test webhooks only work while the editor is open and you are in test mode.

Building a webhook workflow

Step 1: Add a Webhook node

In the editor, click the + button or search for "Webhook". Set the HTTP Method (GET or POST depending on what the sender uses) and set a URL path (e.g., stripe-events). The full URL will be https://yourpublicdomain/webhook/stripe-events.

Step 2: Add processing nodes

Connect nodes to process the incoming data. Use an IF node to branch based on the event type, a Code node to transform data, or direct integrations like Slack, Google Sheets, or HTTP Request to forward the data.

Step 3: Activate the workflow

Click the toggle in the top-right corner to set the workflow to Active. The production webhook URL is now live and will receive events even when you are not in the editor.

Step 4: Give the external service the production URL

Copy the production webhook URL from the Webhook node (the /webhook/... URL, not /webhook-test/...) and paste it into the external service's webhook configuration. After setting WEBHOOK_URL in your environment (covered in the Localtonet section below), this URL will show your public domain instead of localhost.
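Once a workflow is Active, you can simulate the external service from the command line. This sketch builds a minimal fake event body; the domain, path, and field names are examples for illustration, not real Stripe payloads:

```shell
# Build a minimal JSON body resembling an event payload.
# $1 = event type, $2 = integer amount.
make_event_payload() {
  printf '{"type":"%s","data":{"amount":%d}}' "$1" "$2"
}

payload=$(make_event_payload payment_intent.succeeded 1999)
echo "$payload"

# Fire it at the production webhook (uncomment once the tunnel and
# workflow are live; URL is an example):
# curl -X POST "https://myflows.localto.net/webhook/stripe-events" \
#   -H "Content-Type: application/json" -d "$payload"
```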

Practical Workflow Examples

🔔 GitHub Push Notification to Slack

Trigger: Webhook node (POST, path: github-push)
Flow: IF node checks {{ $json.ref }} equals refs/heads/main → Slack node sends message to #deployments channel with commit message and author.
Setup: In GitHub repository → Settings → Webhooks → Add webhook. Paste the n8n production URL. Content type: application/json. Select "Just the push event".

💳 Stripe Payment to Google Sheets

Trigger: Webhook node (POST, path: stripe-payments)
Flow: IF node checks {{ $json.type }} equals payment_intent.succeeded → Code node extracts amount, customer email, and timestamp → Google Sheets node appends a new row.
Setup: Stripe Dashboard → Developers → Webhooks → Add endpoint. Paste the production URL. Select "payment_intent.succeeded" event.

📧 Contact Form to CRM

Trigger: Webhook node (POST, path: contact-form)
Flow: Code node validates and cleans the form data → HTTP Request node creates a contact in HubSpot/Pipedrive via their API → Email node sends a confirmation to the submitter.
Setup: Configure your website's form to POST to the n8n webhook URL instead of a form service.

🤖 AI Document Processing

Trigger: Schedule node (every hour) or Webhook node
Flow: HTTP Request node fetches new documents from an S3 bucket or email inbox → OpenAI or Ollama node summarizes or classifies each document → Google Sheets or database node stores the results.
Note: For local LLMs (Ollama running on the same machine), connect the Ollama node to http://host.docker.internal:11434 (Docker; on Linux this hostname only resolves if you add extra_hosts: ["host.docker.internal:host-gateway"] to the n8n service) or http://localhost:11434 (npm install).

Expose n8n to the Internet with Localtonet

For webhooks to work with external services, n8n must be reachable from the internet. Localtonet creates an encrypted outbound tunnel from your machine to a public HTTPS endpoint. No port forwarding required, works behind CGNAT, and the tunnel URL becomes your permanent webhook base URL.

Step 1: Install Localtonet

# Linux / Raspberry Pi:
curl -fsSL https://localtonet.com/install.sh | sh

# Verify:
localtonet --version
Step 2: Authenticate with your AuthToken

Register at localtonet.com, go to Dashboard → My Tokens, copy your token:

localtonet --authtoken YOUR_TOKEN_HERE
Step 3: Create an HTTP tunnel for port 5678

Go to localtonet.com/tunnel/http. Fill in:

  • IP: 127.0.0.1
  • Port: 5678
  • Custom Subdomain (optional): e.g. myflows

Click Create, then Start. Note your public URL: e.g. https://myflows.localto.net.

Step 4: Set WEBHOOK_URL and N8N_PROXY_HOPS in your .env

This is the most important step. Without it, webhook URLs in the n8n editor show localhost:5678 instead of your public URL, and external services cannot reach your webhooks.

# Add to your .env file:
WEBHOOK_URL=https://myflows.localto.net/
N8N_PROTOCOL=https
N8N_HOST=myflows.localto.net
N8N_PROXY_HOPS=1

Restart n8n after changing the environment:

# Docker Compose:
docker compose restart n8n

# PM2 on Raspberry Pi:
pm2 restart n8n

# systemd:
sudo systemctl restart n8n
Step 5: Verify webhook URLs are correct

Open the n8n editor at your public URL (https://myflows.localto.net). Open or create a Webhook node. The production webhook URL should now show your public domain:

Production URL: https://myflows.localto.net/webhook/your-path

Use this URL in GitHub, Stripe, or any other service.
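The URL shown in the editor follows a predictable pattern, which is handy in deployment scripts. This helper sketch derives a production webhook URL from WEBHOOK_URL and a node's path, tolerating the trailing slash:

```shell
# Derive the production webhook URL from a WEBHOOK_URL base and a path.
webhook_url() {
  base="${1%/}"            # tolerate a trailing slash on WEBHOOK_URL
  echo "$base/webhook/$2"
}

webhook_url "https://myflows.localto.net/" stripe-events
# -> https://myflows.localto.net/webhook/stripe-events
```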

Step 6: Run Localtonet as a service (always-on tunnel)

sudo localtonet --install-service --authtoken YOUR_TOKEN_HERE
sudo localtonet --start-service --authtoken YOUR_TOKEN_HERE

# Check status:
sudo localtonet --status-service

The tunnel now starts automatically on every reboot alongside n8n.

Backing Up n8n: Workflows, Credentials, and Database

n8n stores everything in three places: the database (PostgreSQL or SQLite), the data directory (/home/node/.n8n in Docker, ~/.n8n on npm), and the encryption key in your environment. All three must be backed up together. If you have the database but lost the encryption key, credentials are gone. If you have the encryption key but lost the database, workflows are gone.
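A small audit helper can confirm that a backed-up .env still defines the variables a restore depends on. A sketch; the variable list follows this guide, so extend it for your own deployment:

```shell
# Audit sketch: verify a saved .env defines the variables a restore needs.
check_env_file() {
  file="$1"
  missing=0
  for var in N8N_ENCRYPTION_KEY POSTGRES_PASSWORD; do
    if ! grep -q "^${var}=" "$file"; then
      echo "missing: $var"
      missing=1
    fi
  done
  [ "$missing" -eq 0 ] && echo "env file looks complete"
  return "$missing"
}

# Example: check_env_file ~/n8n-backups/2026-01-01/.env.backup
```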

CLI export commands

n8n provides CLI commands to export workflows and credentials as JSON files. These are portable and can be imported into any n8n instance.

# Docker Compose: export all workflows to a JSON file:
docker exec -u node n8n n8n export:workflow --all --output=/files/workflows-backup.json

# Docker Compose: export all credentials (encrypted with N8N_ENCRYPTION_KEY):
docker exec -u node n8n n8n export:credentials --all --output=/files/credentials-backup.json

# npm / Raspberry Pi: export all workflows:
mkdir -p ~/n8n-backup
n8n export:workflow --all --output=~/n8n-backup/workflows-backup.json

# npm / Raspberry Pi: export credentials (encrypted):
n8n export:credentials --all --output=~/n8n-backup/credentials-backup.json
Exported credentials are encrypted with N8N_ENCRYPTION_KEY

When you import credentials into a different n8n instance, that instance must have the same N8N_ENCRYPTION_KEY to decrypt them. If the new instance uses a different key, export from the old instance with the --decrypted flag to get the credentials in plain text, then import them on the new instance, where they are re-encrypted with its key. Plain-text credential files contain all your API keys and OAuth tokens; treat them like passwords.

Full backup script (Docker Compose)

#!/bin/bash
# Save as ~/n8n/backup.sh
# Add to cron: 0 3 * * * ~/n8n/backup.sh

BACKUP_DIR=~/n8n-backups/$(date +%F)
mkdir -p "$BACKUP_DIR"

cd ~/n8n

# Export workflows:
docker exec -u node n8n n8n export:workflow --all \
  --output=/files/workflows.json
cp ./local-files/workflows.json "$BACKUP_DIR/"

# Export credentials (encrypted):
docker exec -u node n8n n8n export:credentials --all \
  --output=/files/credentials.json
cp ./local-files/credentials.json "$BACKUP_DIR/"

# Dump PostgreSQL database:
docker exec n8n-postgres pg_dump \
  -U n8n -d n8n > "$BACKUP_DIR/n8n-db.sql"

# Back up the .env file (contains encryption key!):
cp .env "$BACKUP_DIR/.env.backup"

echo "Backup complete: $BACKUP_DIR"

# Keep only the last 14 days of backups (-mindepth 1 protects the parent dir):
find ~/n8n-backups -mindepth 1 -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +

Import (restore) commands

# Docker Compose: import workflows:
docker exec -u node n8n n8n import:workflow --input=/files/workflows.json

# Docker Compose: import credentials:
docker exec -u node n8n n8n import:credentials --input=/files/credentials.json

# npm: import workflows:
n8n import:workflow --input=~/n8n-backup/workflows-backup.json

# npm: import credentials:
n8n import:credentials --input=~/n8n-backup/credentials-backup.json

Updating n8n

n8n releases a new minor version almost every week. The latest tag, used in the Compose file above, always points to the most recent production release. Update regularly to get security fixes and new features.

Update with Docker Compose

Docker
cd ~/n8n

# Pull latest image:
docker compose pull n8n

# Restart with new image:
docker compose up -d n8n

# Check the new version:
docker exec n8n n8n --version

Update with npm (Raspberry Pi)

Raspberry Pi · npm
# Stop n8n:
pm2 stop n8n   # or: sudo systemctl stop n8n

# Update:
npm update -g n8n

# Verify new version:
n8n --version

# Start n8n:
pm2 start n8n   # or: sudo systemctl start n8n
Always back up before updating

Run your backup script before pulling a new version. n8n performs automatic database migrations on startup, but if something goes wrong with a migration on a major version update, you need the backup to restore. Check the n8n release notes before major version updates for any breaking changes or manual migration steps.

Troubleshooting

| Problem | Cause | Fix |
|---|---|---|
| "Credential could not be decrypted" | N8N_ENCRYPTION_KEY changed or not set; n8n auto-generated a new key on restart | Restore the original encryption key value in your .env and restart. If the key is lost, credentials must be re-entered manually; there is no other recovery. |
| Webhook URLs show localhost:5678 instead of public domain | WEBHOOK_URL environment variable not set | Set WEBHOOK_URL=https://yourpublicdomain/ (trailing slash required), set N8N_PROXY_HOPS=1, and restart n8n. |
| Production webhook returns 404 or times out | Workflow is not Active | Open the workflow and toggle the Active switch in the top-right corner. Production webhooks only work when the workflow is Active. |
| n8n container restarts in a loop | PostgreSQL not ready when n8n starts, or wrong database credentials in .env | Check logs: docker compose logs n8n. The healthcheck on the postgres service should prevent this; if it still happens, confirm POSTGRES_PASSWORD in .env matches the value DB_POSTGRESDB_PASSWORD resolves to. |
| n8n is slow or unresponsive on Raspberry Pi | SQLite database I/O on SD card, or not enough RAM | Move to a USB SSD. Add swap space: sudo fallocate -l 2G /swapfile && sudo chmod 600 /swapfile && sudo mkswap /swapfile && sudo swapon /swapfile. Consider switching to PostgreSQL. |
| Schedule trigger runs at wrong time | GENERIC_TIMEZONE not set; schedules run in UTC | Set GENERIC_TIMEZONE=Europe/Istanbul (or your timezone) in .env and restart. Existing Schedule nodes may need to be saved again to pick up the timezone. |
| Database grows too large | EXECUTIONS_DATA_PRUNE not enabled; execution history accumulates indefinitely | Set EXECUTIONS_DATA_PRUNE=true and EXECUTIONS_DATA_MAX_AGE=168 in .env and restart. |
| "n8n is not yet initialized" on first access | Database migration still running after first container start | Wait 30-60 seconds and refresh. The first start runs database migrations, which can take time depending on the hardware. Check docker compose logs -f n8n for migration progress. |

Frequently Asked Questions

Is n8n really free to self-host?

Yes. n8n's Community Edition is free to self-host under the fair-code license, which allows use for any purpose including commercial, with no software-imposed limits on workflows, executions, or users. Your only costs are the hardware or VPS you run it on. n8n does offer a paid cloud plan starting at $22/month for 2,500 executions per month, and an Enterprise license for self-hosted instances that adds SSO, advanced RBAC, and dedicated support. For most individuals and small teams, the free self-hosted Community Edition is fully sufficient.

What happens if I lose my N8N_ENCRYPTION_KEY?

All stored credentials become permanently unreadable. There is no recovery mechanism. n8n encrypts credentials before storing them in the database using this key. Without the exact key that was used during encryption, the data cannot be decrypted. You would need to re-enter every API key, OAuth token, and password in every credential stored in n8n. This is the most common and most damaging configuration mistake in self-hosted n8n deployments. Store the key in a password manager immediately after generating it, and include it in every deployment configuration file.

Why do my webhook URLs show localhost instead of my public domain?

The WEBHOOK_URL environment variable is not set. n8n uses this variable to generate the webhook URLs displayed in the editor. If it is not set, n8n generates URLs based on what it sees internally, which is localhost:5678. Set WEBHOOK_URL=https://yourpublicdomain/ (include the trailing slash), set N8N_PROXY_HOPS=1, and restart n8n. After restarting, open a Webhook node and the production URL should show your public domain.

Can I run n8n on a Raspberry Pi 3?

Yes, but with limitations. Raspberry Pi 3 uses ARMv7 architecture (armv7l), which is supported by Node.js v20 and n8n. However, with only 1 GB of RAM, the Pi 3 can run n8n for simple personal workflows but will struggle with complex flows, AI workflows, or multiple concurrent executions. Add swap space (sudo fallocate -l 2G /swapfile) to prevent out-of-memory crashes. Raspberry Pi 4 (4 GB RAM) or Pi 5 are strongly recommended for a smooth experience. Pi Zero and Pi 1 (ARMv6) are not compatible with Node.js v20+.

Should I use SQLite or PostgreSQL?

SQLite for development and personal testing only. PostgreSQL for any production use. SQLite works fine when you are building and testing workflows on your local machine, but it causes slow editor performance, database lock errors under concurrent load, and does not support n8n's queue mode for scaling. If you plan to run n8n 24/7, have multiple active workflows, or process significant data volumes, start with PostgreSQL from the beginning. Migrating from SQLite to PostgreSQL later is possible but involves manual data transfer.

How do I access my self-hosted n8n from outside my home network?

Use a tunnel service like Localtonet. It creates an outbound connection from your machine to a public HTTPS endpoint, giving your n8n instance a public URL without port forwarding or a static IP. This also works behind CGNAT, which is common with home ISPs. After setting up the tunnel, set the WEBHOOK_URL environment variable to the public URL and restart n8n. Your webhook URLs in the editor will then show the correct public address for use with external services.

Give Your Self-Hosted n8n a Public Webhook URL

Localtonet creates a secure HTTPS tunnel for your n8n instance so external services like Stripe, GitHub, and Twilio can reach your webhooks. No port forwarding. No static IP. Works behind CGNAT. Free to start.

Get Started Free →

Localtonet is a secure multi-protocol tunneling and proxy platform designed to expose localhost, devices, private services, and AI agents to the public internet, supporting HTTP/HTTPS tunnels, TCP/UDP forwarding, mobile proxy infrastructure, file server publishing, latency-optimized game connectivity, and developer-ready AI agent endpoint exposure from a single unified control plane.
