25 min read

How to Self-Host Uptime Kuma and Access It Remotely


Uptime Kuma · Monitoring · Docker · Raspberry Pi · 2026

How to Self-Host Uptime Kuma: Complete Guide to Installation, Monitor Types, Status Pages, and Remote Access

Uptime Kuma is the most popular self-hosted uptime monitoring tool available, with over 76,000 GitHub stars. It monitors websites, APIs, TCP ports, DNS records, Docker containers, and more, and sends alerts through 90+ notification channels including Telegram, Discord, Slack, and email. Version 2.0 (released October 2025) added MariaDB support, rootless Docker images, and a refreshed UI. Version 2.1 (February 2026) added Globalping support for worldwide probes and domain expiry monitoring. This guide covers every aspect of self-hosting Uptime Kuma: Docker Compose installation with the correct v2 image tag, every monitor type including the powerful Push monitor for services that are not publicly reachable, setting up Telegram and email notifications, creating and customizing a public status page, backing up your data correctly (the JSON backup/restore feature was removed in v2), updating safely, and using Localtonet to access your dashboard and status page remotely from any network.

🐳 Docker Compose · v2.x 🍓 Raspberry Pi 3 / 4 / 5 🔔 90+ Notification Channels 🌐 Public Status Page · Remote Access

What Is Uptime Kuma?

Uptime Kuma is an open-source, self-hosted monitoring tool created by Louis Lam and first released in 2021. It is designed as a self-hosted alternative to cloud monitoring services like UptimeRobot, Pingdom, and Better Uptime. Built with Vue 3 on the frontend and Node.js on the backend, it communicates over WebSocket for real-time status updates in the browser without page refreshes.

The core function is simple: Uptime Kuma periodically checks whether your services are up and sends you an alert when something goes down or comes back up. But the breadth of what it can monitor, and the number of ways it can notify you, make it significantly more capable than most cloud alternatives.

🌐 HTTP/HTTPS Monitoring: Check websites, APIs, and web services. Monitor status codes, response body keywords, JSON query results, and SSL certificate expiry dates.
🔌 TCP Port Monitoring: Check whether any TCP port is open and accepting connections. Works for databases, mail servers, game servers, and any TCP-based service.
📡 Ping / ICMP Monitoring: Ping any IP address or hostname and track latency over time with interactive charts. Detects network-level reachability issues.
🔁 Push Monitor (Passive): Inverts the check: your service sends a heartbeat to Uptime Kuma instead of Uptime Kuma reaching out to your service. Critical for monitoring services behind NAT or CGNAT that cannot receive incoming connections.
🐳 Docker Container Monitoring: Monitor the health status of Docker containers. Uptime Kuma reads the Docker health check result and alerts when a container goes unhealthy or stops.
🌍 DNS Record Monitoring: Monitor DNS record resolution for A, AAAA, CNAME, MX, TXT, and other record types. Detects DNS propagation issues and misconfigured records.
🔒 SSL Certificate Expiry: Monitor SSL/TLS certificate expiry dates and receive alerts before certificates expire. Available as part of HTTP monitors and as a dedicated certificate monitor.
📊 Public Status Pages: Create one or more public-facing status pages that show the current status and uptime history of selected services. Share with your users or keep for internal use.

Uptime Kuma v1 vs v2: What Changed

Uptime Kuma v2.0 was released in October 2025 and is a major version update with breaking changes. If you are running v1 and considering upgrading, read this section carefully before proceeding.

| Feature | v1 | v2 |
| --- | --- | --- |
| Database | SQLite only | SQLite (default) or MariaDB/MySQL (new in v2) |
| Docker image | louislam/uptime-kuma:1 | louislam/uptime-kuma:2 |
| Rootless Docker | Not available | Available: louislam/uptime-kuma:2-rootless |
| Backup/Restore via JSON | Supported | Removed; only data directory backup is supported |
| Badge endpoint durations | Any value | Only 24, 24h, 30d, 1y accepted |
| Alpine-based Docker images | Available | Dropped |
| Node.js requirement (non-Docker) | v14+ | v20.4+ required |
| Email notification templating | Custom regex | LiquidJS (variables are now case-sensitive) |
| DNS Cache for HTTP monitors | Available | Removed; use bundled nscd for Docker |
| v2.1 additions (Feb 2026) | N/A | Globalping multi-region probes, domain expiry monitoring, Jira Service Management integration, Google Sheets notifications |
v1 to v2 migration can take hours and must not be interrupted

The v1 to v2 upgrade rewrites the heartbeat table into a new optimized format. This migration runs automatically on first startup of the v2 container, but it can take minutes to hours depending on how much historical data you have. Louis Lam (the creator) reported that 20 monitors with 90 days of data took about 7 minutes. With 1.5 GB of data, users have reported 20 to 30 minutes. If the migration is interrupted (container killed, power loss), the database can be left in a broken state. Always take a full backup of the /app/data directory before upgrading, and never stop the container while logs show migration in progress.
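
A cold copy of the data directory before upgrading is the cheapest insurance. The sketch below assumes the Docker Compose layout used in this guide (data mounted at ./data); the `cold_backup` helper is illustrative, not part of Uptime Kuma:

```shell
# Illustrative helper: archive a stopped instance's data directory.
# Usage: cold_backup <data-dir> <output-tarball>
cold_backup() {
  tar -czf "$2" -C "$(dirname -- "$1")" "$(basename -- "$1")"
}

# Typical pre-upgrade sequence (stop first so SQLite is quiescent):
#   cd ~/uptime-kuma
#   docker compose down
#   cold_backup ./data ../uptime-kuma-pre-v2-$(date +%F).tar.gz
#   # then switch the image tag to :2 and run: docker compose up -d
```

Keep the tarball until the v2 migration has finished and the dashboard works; only then is it safe to delete.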

Requirements

| Component | Minimum | Notes |
| --- | --- | --- |
| CPU | 1 core | Uptime Kuma is lightweight; even a Raspberry Pi 3 handles 50+ monitors comfortably. |
| RAM | 256 MB | With 25 monitors, expect ~100-200 MB RAM usage. More monitors means more RAM. |
| Storage | 1 GB | The SQLite database grows with historical data. Default retention is fine for most uses. |
| Docker | Docker 24+, Compose v2 | Use docker compose (no hyphen), not docker-compose |
| File system | Local or Docker volume | NFS file systems are NOT supported: Uptime Kuma requires POSIX file locks, which NFS does not reliably provide, causing SQLite database corruption. |

Installation: Docker Compose

Step 1: Install Docker

sudo apt update
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker $USER
# Log out and back in, then verify:
docker --version
docker compose version

Expected output:

Docker version 27.x.x, build xxxxxxx
Docker Compose version v2.x.x

Step 2: Create the project directory and docker-compose.yml

mkdir ~/uptime-kuma && cd ~/uptime-kuma
cat > docker-compose.yml << 'EOF'
services:
  uptime-kuma:
    image: louislam/uptime-kuma:2
    container_name: uptime-kuma
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - ./data:/app/data
    environment:
      - TZ=Europe/Istanbul
EOF
Use louislam/uptime-kuma:2, not :latest

Pinning to the major version tag :2 keeps you on the v2 branch and ensures you always get the latest v2 patches without accidentally jumping to v3 when it releases. Using :latest means a future docker compose pull could pull v3, triggering another major migration unexpectedly.

The port binding 127.0.0.1:3001:3001 makes Uptime Kuma only accessible from the local machine. Access from other devices and networks goes through the Localtonet tunnel (covered below), which is more secure than binding to all interfaces.

Step 3: Start Uptime Kuma

docker compose up -d

# Check status:
docker compose ps

Expected output:

NAME           IMAGE                    STATUS
uptime-kuma    louislam/uptime-kuma:2   Up 30 seconds
# Watch startup logs:
docker compose logs -f uptime-kuma

Expected startup log:

Welcome to Uptime Kuma
Uptime Kuma Version: 2.x.x
Listening on 0.0.0.0:3001

Step 4: Create the admin account

Open http://localhost:3001 in your browser. You are prompted to create the first admin account with a username and password. This account has full access to all monitors, settings, and the admin panel. Use a strong password.

Create the admin account immediately after installation

Until the first account is created, anyone who can reach port 3001 can create the admin account and take control of your instance. If you are using the 127.0.0.1:3001:3001 port binding (as in the compose file above), only connections from localhost are accepted, so this is less urgent. But if you have bound to 0.0.0.0:3001:3001 or have exposed the port via a tunnel, create the admin account before doing anything else.

Monitor Types: A Complete Reference

Uptime Kuma supports many monitor types. Choosing the right type for each service is important: the wrong type can produce false positives (alerts for services that are actually up) or miss real outages.

| Monitor Type | What It Checks | Best For | Considered "Up" When |
| --- | --- | --- | --- |
| HTTP(s) | Sends an HTTP or HTTPS request to a URL and checks the response status code | Websites, REST APIs, web apps | Response code is in the accepted range (default: 2xx) |
| HTTP(s) - Keyword | Like HTTP(s), but also checks the response body for a specific keyword or phrase | Pages where a 200 OK is not enough: login pages that return 200 even when the app is broken, CMS error pages | Response is 200 AND the keyword is present (or absent, if inverted) |
| HTTP(s) - JSON Query | Sends an HTTP request and evaluates a JSONPath expression against the response body | REST APIs that return status in a JSON field (e.g. {"status": "ok"}) | JSONPath expression evaluates to the expected value |
| TCP Port | Attempts to open a TCP connection to a host and port | Databases (MySQL 3306, PostgreSQL 5432), mail servers (SMTP 25/587), any non-HTTP TCP service | TCP connection succeeds |
| Ping (ICMP) | Sends ICMP echo requests to an IP or hostname and measures round-trip time | Network devices (routers, switches, NAS), servers where only network reachability matters | ICMP reply is received |
| Push | Passive/inverted: Uptime Kuma waits for your service to send a heartbeat to a unique URL; if none arrives within the configured interval, the service is marked down | Services that cannot receive inbound connections (behind NAT/CGNAT), cron jobs, scripts, backups, internal services | A heartbeat request was received within the configured interval |
| DNS | Performs a DNS query for a specified hostname and record type, and optionally checks the result against an expected value | DNS servers, authoritative nameservers, monitoring for DNS hijacking or incorrect records | DNS query returns a valid response (and matches the expected value if set) |
| Docker Container | Reads the Docker daemon health status for a named container | Monitoring specific containers on the same Docker host | Container status is "healthy" (requires a HEALTHCHECK in the Dockerfile) |
| Steam Game Server | Queries a Steam game server using the Steam A2S protocol | Self-hosted game servers (CS, TF2, Rust, Valheim, etc.) | Server responds to an A2S_INFO query |
| WebSocket | Attempts to establish a WebSocket connection and optionally checks a response message | Real-time apps, chat servers, notification services using WebSocket | WebSocket handshake succeeds |
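
To make the semantics concrete, the HTTP(s) - Keyword check boils down to roughly this shell sketch (the URL and keyword below are placeholders; the real monitor additionally handles timeouts, redirects, accepted status-code ranges, and inverted matching):

```shell
# Rough sketch of an HTTP(s) - Keyword check: the request must succeed
# (curl -f fails on HTTP error codes) AND the body must contain the keyword.
check_keyword() {
  url=$1
  keyword=$2
  curl -fsS "$url" | grep -q "$keyword"
}

# Example (placeholder URL and keyword):
#   check_keyword "https://example.com/login" "Sign in" && echo up || echo down
```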

The Push Monitor: Monitoring Services Behind NAT and CGNAT

The Push monitor is arguably the most powerful and underused monitor type. Most monitors are active: Uptime Kuma reaches out to your service. The Push monitor is passive: your service reaches out to Uptime Kuma.

This matters enormously for home server operators. If your services are behind CGNAT, Uptime Kuma cannot reach them for active checks because they have no public IP. The Push monitor inverts the relationship: your internal service sends an outbound HTTP request to the Uptime Kuma public URL. Outbound connections always work behind NAT and CGNAT.

Setting up a Push monitor

1

Add a new monitor and select "Push" as the type

In the Uptime Kuma dashboard, click Add New Monitor. Set the Monitor Type to Push. Give it a name (e.g. "Home NAS Backup Job"). Set the Heartbeat Interval to how often (in seconds) your service will send heartbeats. If a heartbeat is not received within this interval, the monitor goes to Down status.

2

Copy the Push URL

After saving, Uptime Kuma shows a unique Push URL for this monitor. It looks like:
https://yourpublicdomain/api/push/abcd1234?status=up&msg=OK&ping=
This URL is unique per monitor. When your service calls it, the monitor records a successful heartbeat.

3

Configure your service to call the Push URL

Add a cron job, script, or application call that hits the Push URL at the configured interval. Any HTTP GET request to the URL counts as a heartbeat.

# Cron job example: call push URL every 5 minutes:
# Add to crontab with: crontab -e
*/5 * * * * curl -fsS "https://yourpublicdomain/api/push/abcd1234?status=up&msg=OK" > /dev/null

# For a backup script, call at the end of a successful run:
# Only send heartbeat if the backup succeeded
backup_my_files && curl -fsS "https://yourpublicdomain/api/push/abcd1234?status=up&msg=backup_ok"
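
The push endpoint also accepts status=down, so a wrapper can report failures explicitly instead of waiting for the heartbeat interval to expire. A sketch (PUSH_URL and the msg values are placeholders; substitute the URL shown on your monitor's detail page):

```shell
#!/bin/sh
# Report both outcomes to the same Push monitor: status=up on success,
# status=down (with a message) on failure. PUSH_URL is a placeholder.
PUSH_URL="https://yourpublicdomain/api/push/abcd1234"

run_and_report() {
  if "$@"; then
    curl -fsS "${PUSH_URL}?status=up&msg=ok" > /dev/null
  else
    curl -fsS "${PUSH_URL}?status=down&msg=failed" > /dev/null
  fi
}

# Example: run_and_report /usr/local/bin/backup_my_files
```

The advantage over the success-only pattern above is immediacy: a failed run raises the alert on the next heartbeat rather than after the interval times out.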

Push monitor use cases for home servers

| Service | Why Push monitor? | Heartbeat call point |
| --- | --- | --- |
| Nightly backup script | Monitors whether the backup ran successfully (an active check cannot verify this) | End of the backup script, only on success |
| Home NAS behind CGNAT | NAS has no public IP for active monitoring | Cron job on the NAS every 5 minutes |
| Raspberry Pi sensor reader | Pi is on a home network, not publicly accessible | Python script after each successful sensor read |
| Local cron job | Verify a scheduled task actually ran | End of the cron script |
| Database backup job | Check that the DB dump completed without errors | After a successful pg_dump or mysqldump |

Setting Up Notifications

Uptime Kuma supports over 90 notification providers. Notifications are configured once and then assigned to one or more monitors. When a monitor changes status (down or recovered), all assigned notifications fire.

To add a notification, go to Settings → Notifications → Add Notification.

Telegram Notification Setup

Telegram
1

Open Telegram and message @BotFather. Send /newbot, choose a name and username for your bot. BotFather gives you an API token (looks like 123456789:ABCDEFabcdefGHIJKLMNOP). Copy it.

2

Find your Chat ID: message your new bot once, then open https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates in a browser. The chat.id field in the JSON response is your Chat ID: a positive number for a direct chat, a negative number for a group chat.
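
If the getUpdates JSON is hard to read in the browser, a small shell helper can pull the id out. This helper is illustrative (not part of Telegram or Uptime Kuma) and assumes the compact JSON the Bot API returns; a JSON-aware tool like jq is more robust if you have it installed:

```shell
# Extract the first chat id from a getUpdates JSON response on stdin.
parse_chat_id() {
  sed -n 's/.*"chat":{"id":\(-\{0,1\}[0-9]*\).*/\1/p' | head -n 1
}

# Usage (replace <YOUR_TOKEN> with the token from BotFather):
#   curl -s "https://api.telegram.org/bot<YOUR_TOKEN>/getUpdates" | parse_chat_id
```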

3

In Uptime Kuma, go to Settings → Notifications → Add Notification. Select Telegram. Paste the Bot Token and Chat ID. Click Test to verify a message arrives in Telegram. Click Save.

Email (SMTP) Notification Setup

Email

Go to Settings → Notifications → Add Notification, select Email (SMTP), and fill in:

| Field | Gmail | Generic SMTP |
| --- | --- | --- |
| Hostname | smtp.gmail.com | Your SMTP server |
| Port | 465 (SSL) or 587 (STARTTLS) | 465 or 587 |
| Security | SSL/TLS (port 465) | Match your provider |
| Username | your@gmail.com | SMTP username |
| Password | App password (not your account password) | SMTP password |
| From | your@gmail.com | Sender address |
| To | alert-recipient@email.com | Where alerts go |
Gmail requires an App Password, not your Google account password

If you have 2-Step Verification enabled on your Google account (which you should), Gmail will reject your account password for SMTP. Go to myaccount.google.com/apppasswords, generate a 16-character App Password for "Mail," and use that in Uptime Kuma instead of your account password.

Discord Notification Setup

Discord

In your Discord server, go to Server Settings → Integrations → Create Webhook. Select the channel and copy the webhook URL. In Uptime Kuma, add a notification, select Discord, paste the webhook URL, click Test, and Save. Discord webhooks require no bot token or OAuth setup.

Creating a Public Status Page

Uptime Kuma's status page feature lets you create a public-facing page that shows the current status and uptime history of selected services. This is the same concept as status.github.com or status.stripe.com, but self-hosted and fully under your control.

1

Create a new status page

In the left sidebar, click Status Pages, then New Status Page. Enter a title (shown as the page heading) and a slug (used in the URL path, e.g. services results in https://yourdomain/status/services).

2

Add monitors to the status page

Click Add Group to create a logical grouping (e.g. "Core Services," "APIs"). Inside each group, click Add Monitor and select which monitors to display. You choose which monitors are public-facing; monitors not added to any status page remain private.

3

Customize the appearance

You can set a custom logo, description text, and custom CSS. Toggle Show Tags to display monitor tags on the page. Turn Search Engine Visibility off if the page is internal-only.

4

Set the domain (optional)

If you want the status page to be served directly at a custom domain or subdomain (e.g. status.yourcompany.com), set the Domain field in the status page settings and point your DNS to the Localtonet public URL. Without this, the page is accessible at https://yourpublicdomain/status/your-slug.

Installation on Raspberry Pi

Uptime Kuma is an excellent fit for a Raspberry Pi. The resource usage is low enough that it runs on a Pi 3 or Pi Zero 2W alongside other services. The Docker Compose setup above works on Raspberry Pi OS 64-bit (Bookworm) without any changes.

# Install Docker on Raspberry Pi OS:
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker pi
# Log out and back in

# Create the project and compose file exactly as above:
mkdir ~/uptime-kuma && cd ~/uptime-kuma
# Create docker-compose.yml with louislam/uptime-kuma:2 image
docker compose up -d
# Check RAM usage after startup (Pi 4, ~25 monitors):
docker stats uptime-kuma --no-stream

Expected output (Pi 4, 25 monitors):

NAME          CPU %   MEM USAGE / LIMIT   MEM %
uptime-kuma   0.10%   152MiB / 3.7GiB     4.1%

Do not store the data volume on an NFS mount or SMB share

Uptime Kuma's SQLite database requires POSIX file lock support. NFS (Network File System) does not reliably implement POSIX file locks, and using an NFS-mounted path for /app/data is a common cause of database corruption. This applies whether you are running on a Pi, a VPS, or any Linux machine. Always map /app/data to a local directory (./data) or a named Docker volume. If you want the data on a NAS or external drive, mount the physical drive directly (USB or SATA), not over the network.
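
A quick way to confirm what actually backs the data directory is to ask df for the filesystem type (this sketch assumes GNU df, as shipped on Raspberry Pi OS and Debian):

```shell
# Print the filesystem type backing a path. "nfs", "nfs4", or "cifs"
# means the data is on network storage and should be moved to local disk.
fs_type() {
  df -T "$1" | awk 'NR==2 {print $2}'
}

# Example: fs_type ~/uptime-kuma/data
```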

Remote Access: Dashboard and Status Page via Localtonet

There are two reasons to expose Uptime Kuma publicly with Localtonet:

  • Access the dashboard from anywhere: Check monitor status and manage your setup from your phone, laptop, or any device outside your home network without VPN.
  • Make the status page public: Share the status page URL with users, team members, or clients so they can see service status without any login. Push monitors also require a public URL so that your internal services can send heartbeats to Uptime Kuma.
1

Install Localtonet

curl -fsSL https://localtonet.com/install.sh | sh
localtonet --version
2

Authenticate

localtonet --authtoken YOUR_TOKEN_HERE
3

Create an HTTP tunnel for port 3001

Go to localtonet.com/tunnel/http. Set IP to 127.0.0.1 and Port to 3001, and optionally set a custom subdomain (e.g. mystatus). Click Create, then Start, and note the public URL (e.g. https://mystatus.localto.net).

4

Set the Primary Base URL in Uptime Kuma settings

In Uptime Kuma, go to Settings → General and set Primary Base URL to your Localtonet public URL. This ensures Push monitor URLs, status page links, and notification links all use the correct public address instead of localhost.

Primary Base URL: https://mystatus.localto.net
5

Run Localtonet as a service

sudo localtonet --install-service --authtoken YOUR_TOKEN_HERE
sudo localtonet --start-service --authtoken YOUR_TOKEN_HERE

The tunnel starts automatically on every reboot.

Backing Up Uptime Kuma

The JSON Backup/Restore feature was removed in v2

In Uptime Kuma v1, you could export and import all monitors and settings as a JSON file from the Settings page. This feature was removed in v2. The only supported backup method in v2 is to copy the entire /app/data directory (mounted as ./data in the compose file). Backing up only the SQLite database file without the surrounding directory misses config files and certificate files that may also be needed for a complete restore.

All Uptime Kuma data lives in the /app/data directory inside the container, mounted to ./data on the host. A complete backup copies this entire directory.

#!/bin/bash
# Save as ~/uptime-kuma/backup.sh
# Add to cron: 0 2 * * * ~/uptime-kuma/backup.sh

BACKUP_DIR=~/uptime-kuma-backups/$(date +%F_%H%M)
mkdir -p "$BACKUP_DIR"

# Uptime Kuma can be backed up while running because SQLite WAL mode
# handles concurrent reads, but a brief stop gives a cleaner backup:
cd ~/uptime-kuma
docker compose stop uptime-kuma

# Copy the entire data directory:
rsync -a ./data/ "$BACKUP_DIR/data/"

# Restart:
docker compose start uptime-kuma

echo "Backup complete: $BACKUP_DIR"

# Keep only last 14 days:
find ~/uptime-kuma-backups -maxdepth 1 -type d -mtime +14 -exec rm -rf {} +
chmod +x ~/uptime-kuma/backup.sh
# Test it:
~/uptime-kuma/backup.sh

Restore from backup

# Stop Uptime Kuma:
cd ~/uptime-kuma
docker compose down

# Replace the data directory with the backup:
rm -rf ./data
cp -r ~/uptime-kuma-backups/YYYY-MM-DD_HHMM/data ./data

# Start:
docker compose up -d
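
A small sanity check after restoring catches a wrong backup path before you rely on the instance again. This sketch assumes the default SQLite setup, where the database file is named kuma.db inside the data directory (skip it if you use MariaDB):

```shell
# Fail loudly if the restored data directory is missing its database file.
# Assumes the default SQLite database name, kuma.db.
verify_restore() {
  if [ -f "$1/kuma.db" ]; then
    echo "restore looks OK"
  else
    echo "kuma.db missing -- check the backup path" >&2
    return 1
  fi
}

# Example: verify_restore ./data
```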

Updating Uptime Kuma

Uptime Kuma releases updates frequently. Minor version updates (2.x to 2.y) are safe to apply directly. For the v1 to v2 major upgrade, see the migration warning at the top of this guide.

cd ~/uptime-kuma

# Back up before updating:
~/uptime-kuma/backup.sh

# Pull the latest v2 image:
docker compose pull uptime-kuma

# Restart with the new image:
docker compose up -d uptime-kuma

# Watch logs to confirm successful start (and any migration progress):
docker compose logs -f uptime-kuma

Expected log after a clean update:

Welcome to Uptime Kuma
Uptime Kuma Version: 2.x.y
Listening on 0.0.0.0:3001

Resetting the Admin Password via CLI

If you forget the admin password, Uptime Kuma provides a CLI tool to reset it without losing any data:

# Docker Compose:
docker exec -it uptime-kuma npm run reset-password

You will be prompted to enter a new password. The tool modifies the database directly. Restart Uptime Kuma after resetting:

docker compose restart uptime-kuma

Troubleshooting

| Problem | Cause | Fix |
| --- | --- | --- |
| Container starts but immediately crashes or restarts in a loop | Data directory on an NFS mount, or wrong file permissions on the data directory | Verify ./data is on a local filesystem, not NFS or SMB. For rootless images, set ownership: sudo chown -R 1000:1000 ./data |
| v1 to v2 migration stuck or shows "Aggregate table migration is already in progress" | Migration was interrupted (container killed mid-migration) | Restore from the backup you took before upgrading and retry. Do not interrupt the migration; it can take 20+ minutes for large datasets. |
| Monitor shows Down but the service is actually up | Firewall blocking the check from the Uptime Kuma host, or the service only accepts connections from certain IPs | Test from the Uptime Kuma host: curl -v http://service-url. Check firewall rules on the monitored service. For Docker-hosted services on the same host, use http://host.docker.internal:port instead of localhost. |
| Push monitor never receives heartbeats (stays in "Pending") | Primary Base URL not set, so Push URLs show localhost instead of the public domain | Go to Settings → General and set Primary Base URL to your Localtonet public URL. Copy the Push URL again from the monitor detail page after saving. |
| Email notifications not sending | Wrong SMTP credentials, or Gmail rejecting the account password instead of an App Password | Use an App Password for Gmail (generated at myaccount.google.com/apppasswords). Click the Test button in the notification settings to see the exact error message. |
| Docker container monitor shows Pending even though the container is running | Container has no HEALTHCHECK defined in its Dockerfile | The Docker container monitor reads Docker's built-in health status; without a HEALTHCHECK directive the container never reports "healthy". Either add a HEALTHCHECK to the Dockerfile or use a TCP monitor against the container's port instead. |
| Status page shows "No monitors" or does not update | No monitors were added to the status page, or the Primary Base URL is wrong | Open the status page editor and add monitors to groups. Ensure Primary Base URL in Settings matches the URL you are accessing the status page from. |
| WebSocket connection lost / dashboard disconnects frequently | Proxy timeout or missing WebSocket upgrade headers | If using Nginx as a reverse proxy, add proxy_set_header Upgrade $http_upgrade; and proxy_set_header Connection "upgrade";. Localtonet handles WebSocket forwarding natively without extra configuration. |
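
For the Nginx case, a minimal proxy block with the WebSocket upgrade headers looks like this (the server_name and listen port are assumptions; adjust to your setup, and add TLS in production):

```nginx
server {
    listen 80;
    server_name status.example.com;  # assumption: your chosen hostname

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        # Required for Uptime Kuma's real-time WebSocket connection:
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```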

Frequently Asked Questions

Can Uptime Kuma monitor services on my home LAN that have no public IP?

Yes, in two ways. First, if Uptime Kuma itself runs on the same home network, it can reach any device on the LAN directly using private IP addresses (e.g. 192.168.1.100:8080) for active monitors like HTTP, TCP, and Ping. Second, for services on remote networks without public IPs, use the Push monitor: configure a cron job or script on the remote machine to call the Push URL over an outbound connection, which always works regardless of NAT or CGNAT.

What is the difference between Uptime Kuma and UptimeRobot?

UptimeRobot is a cloud-based service that hosts the monitoring infrastructure for you. It is simpler to start with, but your data lives on UptimeRobot's servers, the free tier checks only every 5 minutes, and the free plan caps you at 50 monitors. Uptime Kuma is self-hosted: you run it on your own hardware and all data stays with you. It has no plan limits, supports check intervals as low as 20 seconds, and adds monitor types (Push, Docker, Game Server, WebSocket) that UptimeRobot does not have. The tradeoff is that you are responsible for keeping Uptime Kuma running and backed up.

Does Uptime Kuma alert when a service comes back up?

Yes. Uptime Kuma sends notifications both when a monitor goes down and when it recovers. The recovery notification is sent automatically to all notification channels attached to that monitor. You can customize the notification content for down and recovery events separately using the LiquidJS templating system in v2.

How do I avoid false-positive alerts from brief blips?

Uptime Kuma has a Retries setting per monitor (called "Retries before down"). Set this to 2 or 3 to require that the monitor fail the configured number of consecutive checks before sending a down alert. In v2, the default retries value for newly created monitors was changed from 1 to 0. Setting it to 2 means Uptime Kuma checks twice more after the first failure before alerting, filtering out brief timeouts and transient network hiccups.

Can I have multiple status pages with different monitors on each?

Yes. Uptime Kuma supports multiple status pages, each with its own slug, monitors, groups, and appearance settings. You could have one status page for external users showing only public-facing services, and another for internal use showing all infrastructure monitors. Each status page has its own URL. Monitors can appear on multiple status pages simultaneously.

Can Uptime Kuma itself go down and miss alerts?

Yes, this is the fundamental limitation of self-hosted monitoring: if the machine running Uptime Kuma goes down, Uptime Kuma stops sending alerts. For home server use, this is usually acceptable: you will notice if your home server is completely offline. For more critical monitoring, consider running a second lightweight Uptime Kuma instance on a separate machine or VPS that monitors your primary Uptime Kuma instance, or use a cloud service (even a free UptimeRobot account) as a secondary check on your most critical services.

Make Your Uptime Kuma Dashboard and Status Page Publicly Accessible

Localtonet gives your self-hosted Uptime Kuma a public HTTPS URL so you can access the dashboard from anywhere and share your status page with users. Works behind CGNAT without port forwarding. Push monitors require a public URL to receive heartbeats from services on private networks. Free to start.

Get Your Free Tunnel →

Localtonet is a secure multi-protocol tunneling and proxy platform designed to expose localhost, devices, private services, and AI agents to the public internet. It supports HTTP/HTTPS tunnels, TCP/UDP forwarding, mobile proxy infrastructure, file server publishing, latency-optimized game connectivity, and developer-ready AI agent endpoint exposure from a single unified control plane.
