Author Nejat Hakan
eMail nejat.hakan@outlook.de
PayPal Me https://paypal.me/nejathakan


Network/Server Monitoring Uptime Kuma

Introduction to Uptime Kuma

Welcome to the world of self-hosted monitoring with Uptime Kuma! In an era where digital services are paramount, ensuring their availability and performance is crucial. Uptime Kuma emerges as a user-friendly, open-source solution that empowers you to take control of your monitoring needs without relying on third-party services. This section will introduce you to Uptime Kuma, explain the rationale behind self-hosting your monitoring, briefly compare it with other tools, and cover some fundamental monitoring concepts.

What is Uptime Kuma?

Uptime Kuma describes itself as a "fancy self-hosted monitoring tool," and the description fits. At its core, it is designed to be incredibly easy to set up and use, even for individuals who may not have extensive experience with complex monitoring systems. It provides a clean, intuitive web interface where you can add, manage, and view the status of your monitored services.

Key features of Uptime Kuma include:

  • Open Source: Uptime Kuma is freely available under an open-source license. This means you can use it without any licensing fees, inspect its source code, and even contribute to its development.
  • Self-Hostable: You can run Uptime Kuma on your own server, whether it's a physical machine, a virtual private server (VPS), or even a Raspberry Pi. This gives you complete control over your data and configuration.
  • User-Friendly UI: One of its standout features is its beautiful and responsive web interface. Adding new monitors, configuring notifications, and viewing status dashboards are straightforward processes.
  • Versatile Monitor Types: Uptime Kuma supports a wide array of monitor types, including:
    • HTTP(s): For websites and web applications.
    • TCP Port: For any service listening on a TCP port (e.g., SSH, databases).
    • Ping (ICMP): For basic host reachability.
    • DNS: To check DNS resolution.
    • Keyword: To verify specific content on a webpage.
    • Push (Heartbeat): For services that can actively report their status.
    • And many more, including Steam Game Server, Docker Container, etc.
  • Notifications: It integrates with over 90 notification services, such as Email (SMTP), Telegram, Discord, Slack, Webhook, Gotify, and ntfy.sh, ensuring you're alerted promptly when a service goes down or recovers.
  • Status Pages: You can create public or private status pages to display the real-time status of your monitored services to users or stakeholders. These pages are customizable and can even be hosted on a custom domain.
  • Lightweight: Uptime Kuma is relatively lightweight and doesn't require significant server resources to run effectively.

Why Self-Host Your Monitoring?

While many commercial monitoring services exist (e.g., UptimeRobot, Pingdom), self-hosting your monitoring solution like Uptime Kuma offers several distinct advantages, particularly for technically inclined users, students, and organizations that value control:

  1. Control and Ownership: When you self-host, you own the data. There are no third parties involved in collecting or storing your monitoring metrics or configuration details. You decide how long data is retained and who has access to it.
  2. Privacy: For internal services or applications that are not publicly accessible, self-hosting ensures that monitoring probes originate from within your network or a trusted environment, rather than from external commercial servers. This can be critical for security and privacy.
  3. Cost-Effectiveness: Commercial monitoring services often have subscription fees that can escalate with the number of monitors or frequency of checks. Uptime Kuma, being open-source, is free to use. Your only costs are related to the infrastructure you run it on, which can be very minimal.
  4. Learning Experience: Setting up and managing your own monitoring tool provides an invaluable learning opportunity. You'll gain deeper insights into network protocols, server administration, and the intricacies of ensuring service reliability. This is particularly beneficial for students and aspiring IT professionals.
  5. Customization and Flexibility: Self-hosting allows for greater customization. While Uptime Kuma itself offers many options, running it on your own infrastructure means you can integrate it more deeply with your existing systems, perhaps using its API for custom automation or reporting.
  6. No Vendor Lock-in: You are not tied to a specific commercial vendor's ecosystem. If your needs change, you can migrate or adapt your self-hosted solution more easily.

Uptime Kuma vs Other Monitoring Tools

The monitoring landscape is vast, with tools catering to different needs and scales. Here's a brief comparison to understand Uptime Kuma's niche:

  • Nagios/Icinga: These are powerful, enterprise-grade monitoring systems known for their extensive plugin architecture and configurability. However, they typically have a steeper learning curve and can be more complex to set up and manage than Uptime Kuma. They are often used for comprehensive infrastructure monitoring.
  • Zabbix: Similar to Nagios, Zabbix is a feature-rich, open-source monitoring solution for networks, servers, applications, and services. It offers advanced features like trending, anomaly detection, and distributed monitoring but requires more resources and expertise to deploy effectively.
  • Prometheus & Grafana: This combination is very popular for metrics collection and visualization, especially in cloud-native and microservices environments. Prometheus is a time-series database and alerting toolkit, while Grafana provides powerful dashboarding. While extremely capable, setting up Prometheus for basic uptime monitoring can be more involved than Uptime Kuma, which is specifically designed for ease of use in uptime/reachability checks. Uptime Kuma can even expose its metrics to Prometheus.
  • Commercial Services (e.g., UptimeRobot, Pingdom, StatusCake): These services offer convenience and managed infrastructure. They are quick to set up and often provide a global network of monitoring probes. However, they come with subscription costs, potential limitations on free tiers, and less control over data and privacy compared to a self-hosted solution like Uptime Kuma.

Uptime Kuma's Sweet Spot:
Uptime Kuma excels in its simplicity, ease of use, and focus on uptime and reachability monitoring. It's an excellent choice for:

  • Individuals monitoring personal projects, blogs, or home servers.
  • Small to medium-sized businesses needing a straightforward way to track their web services.
  • Developers wanting a quick tool to monitor APIs and development environments.
  • Anyone who prefers a self-hosted, open-source solution with a modern UI and a rich set of notification options without a steep learning curve.

It's not meant to replace comprehensive APM (Application Performance Monitoring) tools or deep infrastructure metric collectors for large enterprises, but it perfectly fills the need for an accessible and efficient uptime monitor.

Core Concepts in Monitoring

Before diving into Uptime Kuma, let's clarify some fundamental monitoring terms:

  • Uptime/Downtime:
    • Uptime: The period during which a service is operational and accessible. Often expressed as a percentage (e.g., 99.9% uptime).
    • Downtime: The period during which a service is unavailable or not functioning correctly.
  • Latency (Response Time): The time it takes for a service to respond to a request. High latency can make a service feel slow, even if it's technically "up." Uptime Kuma measures and displays this.
  • SLA (Service Level Agreement): A commitment between a service provider and a client that defines agreed levels of service quality, availability, and responsibilities. Uptime percentages are often part of SLAs.
  • Polling (Probing): The act of actively checking the status of a service. Uptime Kuma periodically sends requests (probes) to your configured monitors to determine their status. The frequency of these checks is the "heartbeat interval."
  • Alerting: The process of notifying relevant personnel when a monitored service changes state, particularly when it goes down or recovers.

Understanding these concepts will help you make informed decisions when configuring your monitors and interpreting the data Uptime Kuma provides.
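As a back-of-the-envelope illustration of the uptime percentages above, a small shell loop can translate an SLA target into the downtime it permits per 30-day month (the targets below are illustrative examples, not values taken from Uptime Kuma):

```shell
# Allowed downtime per 30-day month (43,200 minutes) for common SLA targets.
for sla in 99 99.9 99.99; do
  awk -v s="$sla" 'BEGIN { printf "%s%% uptime allows %.1f minutes of downtime per month\n", s, (100 - s) / 100 * 30 * 24 * 60 }'
done
```

At 99.9% uptime, only about 43 minutes of downtime per month are permitted, which is why even short outages matter at that level.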

Workshop Introduction Set Up a Basic Ping Monitor

This initial mini-workshop aims to give you a conceptual feel for Uptime Kuma's interface by setting up a very simple monitor. We'll cover the actual installation of Uptime Kuma in detail later. For now, imagine Uptime Kuma is already running and accessible.

Objective: Familiarize yourself with the Uptime Kuma dashboard and the process of adding a basic "Ping" monitor. Ping monitors are used to check if a host is reachable on the network by sending ICMP (Internet Control Message Protocol) echo requests, much like the ping command in your terminal.

Prerequisites (Conceptual for this introduction):

  • Access to a running Uptime Kuma instance's web dashboard.
  • Knowledge of an IP address or hostname you want to monitor (e.g., a public DNS server like 8.8.8.8).

Steps:

  1. Access the Uptime Kuma Dashboard: Imagine you've navigated to your Uptime Kuma instance in a web browser (e.g., http://localhost:3001). You would be greeted by the main dashboard, which initially would be empty or show any existing monitors.

  2. Add a New Monitor: Locate and click the "+ Add New Monitor" button. This button is typically prominently displayed on the dashboard.

  3. Select Monitor Type: A dialog or page will appear asking you to choose the "Monitor Type." For this introductory exercise, select "Ping (ICMP)".

  4. Configure the Monitor:
    You will now see a form with several fields to configure your Ping monitor.

    • Friendly Name: This is a descriptive name for your monitor that will be displayed on the dashboard.
      • Enter: Google DNS Ping
    • Hostname / IP Address: This is the target host you want to ping.
      • Enter: 8.8.8.8 (This is one of Google's public DNS servers, known for its high availability.)
    • Packet Size (bytes): The size of the ICMP payload to send. The default is often 56 bytes of payload, which ping reports as a 64-byte ICMP packet once the 8-byte header is added. For this conceptual exercise, leave it at the default.
    • Other options like "Heartbeat Interval," "Retries," etc., are available. For now, we'd conceptually leave them at their default values.
  5. Save the Monitor: After filling in the necessary fields, click the "Save" button.

  6. Observe the Monitor on the Dashboard: You will be taken back to the dashboard, and your new "Google DNS Ping" monitor will appear.

    • Initially, its status might be "Pending" or show a spinning icon as Uptime Kuma performs its first check.
    • After a few seconds, it should turn "Up" (usually indicated by a green color) if 8.8.8.8 is reachable.
    • You'll see information like the current status, average response time (ping latency), and an uptime percentage graph that will start populating over time.

    Conceptual Uptime Kuma Monitor Status (This is a placeholder image representing the monitor on the dashboard)

Explanation of What You'd See:

  • Status Indicator: A clear visual cue (e.g., green for "Up," red for "Down," yellow for "Pending/Warning").
  • Response Time (Latency): How quickly the target host (8.8.8.8) responded to the ping. This is a key performance metric.
  • Uptime Graph: A visual history of the monitor's status over time.
  • Event Log: Clicking on the monitor would typically reveal a log of status changes (e.g., when it went up or down).

This conceptual walkthrough illustrates the simplicity of adding a monitor in Uptime Kuma. The actual installation and more detailed configurations will be covered in the subsequent sections. The goal here was to provide a gentle first look at the core functionality.


Basic

This section covers the foundational knowledge required to get Uptime Kuma installed and to understand its basic monitoring capabilities. We will walk through the installation process, explore the different types of monitors Uptime Kuma offers, and set up some initial checks.

1. Getting Started with Uptime Kuma

Before you can begin monitoring your services, you need to install Uptime Kuma. This sub-section details the prerequisites, common installation methods, and the initial configuration steps to get your monitoring dashboard up and running.

Prerequisites for Installation

To ensure a smooth installation and operation of Uptime Kuma, your system should meet certain prerequisites. While Uptime Kuma is quite flexible, here are the common requirements:

  • Operating System:
    • Linux (Recommended): Most distributions like Ubuntu, Debian, CentOS, Fedora, etc., are well-suited. Linux is generally preferred for server applications due to its stability and performance. The workshops will primarily assume a Linux environment.
    • Windows: Uptime Kuma can run on Windows, typically via Docker or Node.js directly. Windows Subsystem for Linux (WSL2) is also a good option for running Docker on Windows.
    • macOS: Similar to Windows, Docker or a direct Node.js installation are viable.
  • Node.js (if installing from source or without Docker):
    • Uptime Kuma is built with Node.js. If you are not using Docker, you will need a compatible version of Node.js installed. Refer to the official Uptime Kuma documentation or its package.json file for the specific version requirements (usually a recent LTS version).
    • npm (Node Package Manager) or yarn is also required, which comes bundled with Node.js.
  • Docker (Recommended for ease of use and management):
    • Docker allows you to run Uptime Kuma in an isolated container, simplifying installation, updates, and dependency management.
    • If you choose this route, you'll need Docker Engine installed on your host system.
  • Hardware Resources:
    • CPU: A single modern CPU core is generally sufficient for many instances.
    • RAM: At least 512MB of RAM is a good starting point, but 1GB or more is recommended, especially if you plan to have many monitors or run other services on the same machine. Uptime Kuma itself is not overly memory-hungry.
    • Disk Space: A few gigabytes of disk space should be adequate for the application and its data (SQLite database). The data size will grow over time depending on the number of monitors and data retention settings.
  • Basic Command-Line Knowledge:
    • You will need to interact with your server's terminal or command prompt to execute installation commands, manage Docker containers, or edit configuration files. Familiarity with basic commands like cd, ls, mkdir, nano (or your preferred text editor), and system service management (like systemctl) will be beneficial.
  • Network Access:
    • The server running Uptime Kuma needs network access to reach the services you intend to monitor (which might be on the internet or your local network).
    • You'll also need network access to the Uptime Kuma web interface from your client machine (e.g., via a web browser).

Installation Methods

Uptime Kuma offers several installation methods. The Docker method is highly recommended for its simplicity and robustness.

Docker encapsulates Uptime Kuma and all its dependencies into a standardized unit called a container. This makes installation consistent across different operating systems and simplifies updates and data management.

  • Why Docker?

    • Isolation: Uptime Kuma runs in its own environment, preventing conflicts with other applications or system libraries on your host.
    • Portability: You can easily move your Uptime Kuma instance to another Docker-enabled host.
    • Ease of Updates: Updating Uptime Kuma is often as simple as pulling a new Docker image and restarting the container.
    • Dependency Management: Docker handles all of Uptime Kuma's Node.js and other dependencies internally.
    • Reproducibility: Ensures that Uptime Kuma runs the same way regardless of the underlying host OS specifics (as long as Docker is supported).
  • Docker Installation (Brief Overview): If you don't have Docker installed, you'll need to install it first. The process varies by OS:

    • Linux: Generally, you can use your distribution's package manager (e.g., sudo apt install docker.io on Debian/Ubuntu, or follow the official Docker installation guide for your specific distro at docs.docker.com).
    • Windows: Install Docker Desktop for Windows.
    • macOS: Install Docker Desktop for Mac. Ensure the Docker service is running after installation.
  • Pulling the Uptime Kuma Docker Image: Open your terminal or command prompt and run:

    docker pull louislam/uptime-kuma:1
    
    This command downloads the latest stable version 1.x.x of the Uptime Kuma image from Docker Hub. The :1 tag ensures you get the latest patch and minor releases within version 1, providing stability.

  • Running the Uptime Kuma Container: To run Uptime Kuma, you execute a docker run command. A typical command looks like this:

    docker run -d --restart=always -p 3001:3001 -v uptime_kuma_data:/app/data --name uptime-kuma louislam/uptime-kuma:1
    
    Let's break down this command:

    • docker run: The command to create and start a new container.
    • -d or --detach: Runs the container in the background (detached mode) and prints the container ID.
    • --restart=always: Configures the container to restart automatically if it stops or if the Docker daemon restarts (e.g., after a server reboot). This is crucial for a monitoring tool.
    • -p 3001:3001: This maps port 3001 on your host machine to port 3001 inside the container. Uptime Kuma listens on port 3001 by default. If port 3001 is already in use on your host, you can change the host port (e.g., -p 3000:3001 would make Uptime Kuma accessible on http://<your_host_ip>:3000).
    • -v uptime_kuma_data:/app/data: This is a critical part for data persistence. It creates (or uses an existing) Docker named volume called uptime_kuma_data and mounts it to the /app/data directory inside the container. Uptime Kuma stores all its configuration, monitors, and historical data in /app/data. Using a named volume ensures that your data persists even if you remove and recreate the container (e.g., during an update). You can choose any name for the volume, e.g., my_kuma_data.
    • --name uptime-kuma: Assigns a human-readable name to your container, making it easier to manage (e.g., docker stop uptime-kuma, docker logs uptime-kuma).
    • louislam/uptime-kuma:1: Specifies the Docker image and tag to use.
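If you prefer Docker Compose, the same run command can be expressed as a compose file. This is a sketch of the equivalent configuration; adjust the port mapping and volume name to your environment:

```yaml
# docker-compose.yml -- equivalent of the docker run command above
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    restart: always
    ports:
      - "3001:3001"                  # host:container; change the left side if 3001 is taken
    volumes:
      - uptime_kuma_data:/app/data   # named volume for persistent data

volumes:
  uptime_kuma_data:
```

Start it with docker compose up -d from the directory containing the file.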

Using Node.js/npm (Source Installation)

This method is for users who prefer not to use Docker or want more direct control over the installation process. It requires Node.js and npm (or yarn) to be installed on the system.

  • Install Node.js and npm: If not already installed, download Node.js from the official website (nodejs.org) or use a version manager like nvm (Node Version Manager). Ensure you install a version compatible with Uptime Kuma (check their GitHub repository for current requirements, usually an LTS version). npm is included with Node.js. For example, on Linux using nvm:

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
    # Close and reopen your terminal, or source your profile file (e.g., source ~/.bashrc)
    nvm install --lts # Installs the latest LTS version of Node.js
    nvm use --lts
    node -v # Check Node.js version
    npm -v  # Check npm version
    

  • Clone the Uptime Kuma Repository: Clone the official Uptime Kuma repository from GitHub:

    git clone https://github.com/louislam/uptime-kuma.git
    cd uptime-kuma
    

  • Install Dependencies: Inside the uptime-kuma directory, install the necessary Node.js packages:

    npm install --production
    # or if you prefer yarn:
    # yarn install --production
    
    The --production flag ensures only production dependencies are installed, skipping development tools.

  • Running the Server: You can start the Uptime Kuma server using npm:

    npm start
    
    This will typically start Uptime Kuma in the foreground, listening on port 3001. For production use, you'd want to run it as a background service using a process manager like pm2.

    Using pm2 (Recommended for Node.js installs): pm2 is a powerful process manager for Node.js applications that provides features like auto-restart, logging, and monitoring.

    1. Install pm2 globally: sudo npm install pm2 -g
    2. Start Uptime Kuma with pm2: pm2 start server/server.js --name uptime-kuma
     3. To make pm2 start Uptime Kuma on system boot, run pm2 startup. It prints a command you must execute manually (usually with sudo); after running it, run pm2 save so the current process list is restored on reboot.

Using Pre-built Binaries

For some platforms or specific use cases, pre-built binaries might be available or community-provided solutions might exist (e.g., packages for NAS systems like Synology). Always refer to the official Uptime Kuma GitHub page for the most up-to-date and supported installation methods beyond Docker and source.

Initial Configuration

Once Uptime Kuma is installed and running, you need to perform a brief initial setup.

  1. Accessing the Web Interface: Open your web browser and navigate to the address where Uptime Kuma is running.

    • If using Docker with the example command: http://<your_server_ip>:3001 or http://localhost:3001 if running on your local machine.
    • If using Node.js directly: It will also typically be http://<your_server_ip>:3001 unless you configured a different port.
  2. Creating the First Admin User: The first time you access Uptime Kuma, you'll be greeted with a setup screen. It will prompt you to create an administrator account.

    • Choose a strong Username.
    • Enter a strong Password and confirm it.
    • Click "Create." This user will have full administrative privileges over your Uptime Kuma instance. Remember these credentials!
  3. Understanding the Dashboard Layout: After creating the admin user and logging in, you'll see the main Uptime Kuma dashboard.

    • Top Navigation Bar: Contains links to "Dashboard," "Status Pages," "Maintenance," "Settings," and the user/logout menu.
    • Main Area: This is where your monitors will be displayed. Initially, it will be empty.
    • "+ Add New Monitor" Button: The primary button for adding new services to monitor.
    • Footer: May contain links to Uptime Kuma's GitHub page, documentation, etc.

Take a few moments to click around and familiarize yourself with the interface. It's designed to be intuitive.

Workshop Basic Installation and First Monitor

Objective: Install Uptime Kuma using the recommended Docker method on a Linux server and set up a simple HTTP monitor for a public website to verify the installation.

Prerequisites:

  • A Linux server (can be a local VM, a cloud server, or even your local Linux machine) with internet access.
  • Docker installed and running on the server. If you don't have Docker, refer to the official Docker documentation to install it for your distribution (e.g., sudo apt update && sudo apt install docker.io -y then sudo systemctl start docker && sudo systemctl enable docker for Debian/Ubuntu).
  • Sudo/root privileges on the server for Docker commands.

Steps:

  1. Verify Docker Installation:
    Open a terminal on your Linux server and run:

    docker --version
    
    You should see the installed Docker version. A "command not found" error means Docker is not installed correctly. A "permission denied" error means your user is not in the docker group; either prefix commands with sudo (e.g., sudo docker --version) or add your user to the group with sudo usermod -aG docker $USER and then log out and back in.

  2. Pull the Uptime Kuma Docker Image:
    Download the official Uptime Kuma image from Docker Hub:

    sudo docker pull louislam/uptime-kuma:1
    
    This command fetches the image tagged 1, which refers to the latest stable version 1.x.x. You'll see output as Docker downloads the image layers.

  3. Run the Uptime Kuma Container:
    Execute the following command to start Uptime Kuma:

    sudo docker run -d --restart=always -p 3001:3001 -v uptime_kuma_data:/app/data --name uptime-kuma louislam/uptime-kuma:1
    
    Let's re-iterate the meaning of these Docker flags in this practical context:

    • sudo docker run: We use sudo because the Docker daemon typically requires root privileges to manage containers and network ports (unless your user is in the docker group and configured for rootless Docker, which is more advanced).
    • -d: Detached mode. The container runs in the background.
    • --restart=always: Ensures Uptime Kuma automatically restarts if the container crashes or the server reboots. This is vital for a monitoring service.
    • -p 3001:3001: Maps port 3001 of your host server to port 3001 inside the Uptime Kuma container. This is how you'll access the Uptime Kuma web UI. If port 3001 on your host is busy, change the first 3001 (e.g., -p 8080:3001 would make it accessible via http://<your_server_ip>:8080).
    • -v uptime_kuma_data:/app/data: This creates a Docker named volume called uptime_kuma_data. Docker manages this volume, and it's where Uptime Kuma will store all its persistent data (settings, monitor configurations, history). This is crucial for data persistence. If you stop and remove the container, then run a new one with the same volume mapping, your data will still be there. If you omit this or use an anonymous volume, your data might be lost if the container is removed.
    • --name uptime-kuma: Gives your container a friendly name, uptime-kuma, so you can easily refer to it in other Docker commands (e.g., sudo docker logs uptime-kuma, sudo docker stop uptime-kuma).
    • louislam/uptime-kuma:1: The image to use.

    After running the command, Docker will output a long string, which is the container ID. You can verify the container is running:

    sudo docker ps
    
    You should see uptime-kuma listed with status "Up".

  4. Access Uptime Kuma in a Web Browser:
    Open your favorite web browser on your computer (or any device that can reach your server). Navigate to http://<your_server_ip>:3001. Replace <your_server_ip> with the actual IP address of the server where you installed Uptime Kuma. If you installed it on your local machine, you can use http://localhost:3001. If your server has a firewall (like ufw on Ubuntu), you might need to allow traffic on port 3001:

    sudo ufw allow 3001/tcp
    sudo ufw reload
    

  5. Complete the Initial Setup (Create Admin User):
    You should see the Uptime Kuma setup page.

    • Enter a Username (e.g., admin).
    • Enter a strong Password and confirm it.
    • Click "Create." You will be logged in and taken to the main dashboard.
  6. Add a New HTTP(s) Monitor:
    Let's add your first monitor to check if a public website is up.

    • On the dashboard, click the large green "+ Add New Monitor" button.
    • Monitor Type: Select "HTTP(s)".
    • Friendly Name: Give it a descriptive name, e.g., Google Search Homepage.
    • URL: Enter the URL of the website you want to monitor, e.g., https://google.com.
    • Heartbeat Interval: This defines how often Uptime Kuma checks the URL. The default is 60 seconds. You can leave it for now.
    • There are many other options (Retries, Page Timeout, Authentication, etc.), but for this basic monitor, the defaults are fine.
    • Click "Save" at the bottom of the form.
  7. Observe the Monitor Status:
    You'll be redirected to the dashboard. Your new "Google Search Homepage" monitor will appear.

    • It might briefly show "Pending" status.
    • Within a few seconds to a minute (depending on the heartbeat interval), it should change to "Up" (green color) if https://google.com is accessible.
    • You will see the response time (latency) and a small graph will start to form showing its uptime.
  8. Check Container Logs (Optional but good practice):
    If you want to see what Uptime Kuma is doing in the background, you can view its container logs:

    sudo docker logs uptime-kuma
    
    To follow the logs in real-time:
    sudo docker logs -f uptime-kuma
    
    Press Ctrl+C to stop following. This is useful for troubleshooting.

  9. Test Data Persistence (Optional but important to understand):
    Let's verify that your data (the monitor you just created) persists if the container is restarted.

    • Stop the container: sudo docker stop uptime-kuma
    • Wait a few seconds. If you try to access Uptime Kuma in your browser now, it will fail.
    • Start the container again: sudo docker start uptime-kuma
    • Wait a few seconds for it to initialize, then refresh Uptime Kuma in your browser.
    • You should see your "Google Search Homepage" monitor still there, with its history intact. This is because of the -v uptime_kuma_data:/app/data volume mapping we used.

Congratulations! You have successfully installed Uptime Kuma using Docker and configured your first monitor. This setup provides a robust and easily manageable monitoring solution.

2. Understanding Monitor Types

Uptime Kuma's strength lies in its versatility, offering a range of monitor types to check different aspects of your services. Each type works differently and is suited for specific use cases. Understanding these will allow you to effectively monitor your infrastructure.

HTTP(s) Monitor

This is one of the most common monitor types, used for checking the availability and responsiveness of websites, web applications, and APIs.

  • How it works: The HTTP(s) monitor sends an HTTP or HTTPS request (typically a GET request, but POST, PUT, etc., are supported) to the specified URL. It then analyzes the response based on several criteria:

    1. Connectivity: Can it establish a connection to the server and port?
    2. HTTP Status Code: Does the server respond with an expected status code? By default, Uptime Kuma considers status codes in the 200-299 range as "Up." You can customize this. For example, a 200 OK means success, while 404 Not Found or 500 Internal Server Error indicate problems.
    3. Response Time (Latency): How long did it take to receive the response? This is recorded and displayed.
    4. Keyword Checking (Optional): It can check if the response body (the HTML, JSON, or text content of the page) contains or does not contain a specific keyword or phrase.
    5. JSON Query (Optional): For API endpoints returning JSON, you can check values within the JSON structure.
  • Key Configuration Options:

    • Friendly Name: A descriptive name for the monitor.
    • URL: The full URL to monitor (e.g., https://myservice.com/api/health).
    • Request Method: GET (default), POST, PUT, DELETE, HEAD, OPTIONS, PATCH.
    • Request Body: For methods like POST or PUT, you can specify the data to send (e.g., JSON, form data).
    • Request Headers: Add custom HTTP headers (e.g., for authentication Authorization: Bearer <token>, or Content-Type: application/json).
    • Accepted Status Codes: Define which HTTP status codes indicate an "Up" state (e.g., 200-299, 200,302).
    • Keyword Checking: Specify a keyword and whether its presence or absence in the response body means "Up."
    • Ignore TLS/SSL Error for HTTPS: Allows monitoring sites with self-signed or invalid SSL certificates. Use with extreme caution as it bypasses security checks and can mask man-in-the-middle attacks. Only use for trusted internal services where you understand the risk.
    • Max Redirects: How many HTTP redirects (e.g., 301, 302) to follow.
    • Timeout: How long to wait for a response before considering it a failure.
  • Use Cases:

    • Monitoring availability of public websites (e.g., your blog, company homepage).
    • Checking health check endpoints of your web applications (e.g., /health, /status).
    • Verifying API endpoints are responding correctly.
    • Ensuring critical e-commerce pages are loading.
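Conceptually, an HTTP(s) check boils down to a timed request plus a status-code test. The following Python sketch illustrates the idea (a deliberate simplification, not Uptime Kuma's actual implementation — real checks also handle redirects, keywords, custom headers, and more):

```python
import time
import urllib.error
import urllib.request


def check_http(url: str, timeout: float = 10.0):
    """Fetch a URL and report (is_up, latency_ms).

    "Up" here means the server answered with a 2xx status code
    within the timeout -- Uptime Kuma's default behaviour.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            up = 200 <= resp.status <= 299
    except urllib.error.HTTPError:
        up = False  # server answered, but with 4xx/5xx
    except (urllib.error.URLError, OSError):
        up = False  # DNS failure, refused connection, timeout, ...
    latency_ms = (time.monotonic() - start) * 1000
    return up, latency_ms
```

Calling `check_http("https://myservice.com/api/health")` (a placeholder URL) would return whether the endpoint is up and how long the round trip took.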

Ping (ICMP) Monitor

This monitor checks basic network reachability of a host using ICMP echo requests, the same protocol used by the ping command-line utility.

  • How it works: Uptime Kuma sends an ICMP "echo request" packet to the specified hostname or IP address. If the host is reachable and configured to respond to pings, it will send back an ICMP "echo reply" packet. Uptime Kuma measures if a reply is received and the time it took (latency).

    • Important Note: ICMP can be blocked by firewalls (on the target host, or intermediary network devices). A failed ping doesn't always mean the host is completely down; it might just mean ICMP requests are being ignored or dropped. However, if a service should respond to pings (like most servers do by default), a failed ping is a strong indicator of a problem.
  • Key Configuration Options:

    • Friendly Name: A descriptive name.
    • Hostname / IP Address: The target to ping (e.g., 192.168.1.1, myserver.internal.lan).
    • Packet Size (bytes): The size of the data payload in the ICMP packet. Default is usually 56 bytes.
    • Timeout: How long to wait for an echo reply.
  • Use Cases:

    • Checking if servers (internal or external) are online and responding on the network.
    • Monitoring network devices like routers, switches, firewalls (if they are configured to respond to pings).
    • Quickly assessing basic connectivity to a host before checking specific services on it.
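A ping check can be approximated by shelling out to the system `ping` utility, since raw ICMP sockets normally require root privileges. This is only an illustrative sketch (Linux-style flags assumed; macOS and Windows use different options, and Uptime Kuma's own implementation differs):

```python
import subprocess


def check_ping(host: str, count: int = 1, timeout_s: int = 2) -> bool:
    """Return True if `host` answers at least one ICMP echo request.

    Delegates to the system `ping` binary with Linux-style flags
    (-c count, -W per-reply timeout in seconds).
    """
    try:
        result = subprocess.run(
            ["ping", "-c", str(count), "-W", str(timeout_s), host],
            capture_output=True,
            timeout=timeout_s * count + 5,
        )
        return result.returncode == 0  # 0 => at least one reply received
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False  # no reply in time, or no ping binary available
```

Remember the caveat above: `check_ping(...)` returning False may just mean ICMP is filtered, not that the host is down.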

TCP Port Monitor

This monitor checks if a specific TCP port on a host is open and accepting connections. This is useful for services that don't use HTTP but listen on a network port.

  • How it works: Uptime Kuma attempts to establish a TCP connection to the specified hostname/IP address and port number.

    • If the connection is successfully established (a TCP three-way handshake completes), the port is considered open, and the service is likely listening. Uptime Kuma then immediately closes the connection.
    • If the connection is refused, times out, or encounters other errors, the port is considered closed or the service is not responding.
  • Key Configuration Options:

    • Friendly Name: A descriptive name.
    • Hostname / IP Address: The target host.
    • Port: The TCP port number to check (e.g., 22 for SSH, 3306 for MySQL, 5432 for PostgreSQL).
    • Timeout: How long to wait for the connection attempt.
  • Use Cases:

    • Monitoring SSH service (port 22).
    • Checking database server availability (e.g., MySQL on port 3306, PostgreSQL on 5432, MongoDB on 27017).
    • Monitoring mail servers (SMTP on ports 25/587, IMAP on ports 143/993).
    • Verifying game servers or other custom TCP-based applications are listening.
    • Ensuring firewalls are correctly configured to allow access to a specific port.
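The handshake-then-close behaviour described above is easy to reproduce with the standard library; a minimal sketch (illustrative, not Uptime Kuma's code):

```python
import socket


def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port can be established.

    Mirrors a TCP Port monitor: complete the three-way handshake,
    then immediately close the connection.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True  # handshake completed; socket closed on exit
    except OSError:
        return False  # refused, timed out, unreachable, ...
```

For example, `check_tcp("myserver.internal.lan", 22)` (hostname is a placeholder) would report whether SSH is accepting connections.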

DNS Monitor

This monitor checks the health and responsiveness of a DNS server and verifies if it can resolve a specific hostname to an expected IP address or record type.

  • How it works: Uptime Kuma sends a DNS query to a specified DNS server (or the system's default resolver if not specified) for a given hostname and record type (e.g., A, AAAA, CNAME, MX, TXT).

    • It checks if the DNS server responds within the timeout.
    • It verifies if the response contains the expected result (e.g., if you query for google.com type A record, you might expect an IP address like 172.217.160.142, though this can change and is region-dependent).
  • Key Configuration Options:

    • Friendly Name: A descriptive name.
    • Hostname to Resolve: The domain name you want to query (e.g., mydomain.com).
    • DNS Server: The IP address of the DNS server to query. If left blank, Uptime Kuma uses the system's configured DNS resolver(s). You can specify a public DNS server like 1.1.1.1 (Cloudflare) or 8.8.8.8 (Google), or your internal DNS server.
    • Record Type: The type of DNS record to query for:
      • A: IPv4 address.
      • AAAA: IPv6 address.
      • CNAME: Canonical Name (alias).
      • MX: Mail Exchange records.
      • NS: Name Server records.
      • PTR: Pointer record (reverse DNS).
      • SOA: Start of Authority.
      • SRV: Service record.
      • TXT: Text record.
    • Expected Result: You can specify a particular value that the DNS query should return for the monitor to be "Up". This can be an IP address for A/AAAA records, a hostname for CNAME/MX, or specific text for TXT records. Regular expressions can often be used here for more flexible matching.
    • Timeout: How long to wait for the DNS server's response.
  • Use Cases:

    • Monitoring the availability and correctness of your own DNS servers.
    • Verifying that critical DNS records for your domains (e.g., A, MX records) are resolving correctly from a specific DNS server's perspective.
    • Checking for DNS propagation after making changes to your DNS records (though propagation time can vary).
    • Ensuring external DNS resolvers used by your applications are functioning.
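For a flavour of what an A-record lookup involves, here is a stdlib-only sketch. Note its limitation: Python's standard library can only use the system's configured resolver, whereas querying a *specific* DNS server (as Uptime Kuma's DNS monitor does) requires a dedicated DNS library such as dnspython:

```python
import socket


def resolve_a(hostname: str):
    """Return the IPv4 (A-record) addresses found by the system resolver.

    Returns an empty list if the name does not exist or the
    resolver is unreachable.
    """
    try:
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    except socket.gaierror:
        return []
    # getaddrinfo entries are (family, type, proto, canonname, sockaddr);
    # the IP address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})
```

A non-empty result from `resolve_a("mydomain.com")` corresponds to the monitor's default "any valid A record" success condition.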

Keyword Monitor (subtype of HTTP(s))

While not a distinct top-level monitor type in the selection list, "Keyword Checking" is a powerful feature within the HTTP(s) monitor type.

  • How it works: After an HTTP(s) monitor successfully fetches the content of a URL (receives a 2xx status code by default), it can then scan the HTML, JSON, XML, or plain text response body.

    • Keyword Exists: The monitor is considered "Up" if the specified keyword or phrase is found in the response.
    • Keyword Not Exists (Invert Check): The monitor is considered "Up" if the specified keyword or phrase is not found in the response. This is useful for detecting error messages or unwanted content.
  • Key Configuration Options (within HTTP(s) monitor settings):

    • Keyword: The exact string or regular expression to search for.
    • Invert Keyword Check: A checkbox. If checked, the monitor is "Up" if the keyword is not found.
  • Use Cases:

    • Content Verification: Ensuring specific text (e.g., "Welcome to our store," "Copyright 2024") is present on a webpage, indicating the page loaded correctly beyond just a 200 OK status.
    • Error Detection: Checking for the absence of error messages (e.g., "Internal Server Error," "Database Connection Failed"). You would use "Invert Keyword Check" for this.
    • API Response Validation: Confirming that a specific field or value exists in a JSON/XML API response.
    • Detecting Defacement: Alerting if critical content is missing or unexpected content appears on a sensitive page.
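The Up/Down decision rule for keyword checking reduces to a tiny function (an illustrative sketch of the logic, not Uptime Kuma's implementation):

```python
def keyword_up(body: str, keyword: str, invert: bool = False) -> bool:
    """Decide Up/Down from a response body.

    invert=False: "Up" when the keyword IS present (content verification).
    invert=True:  "Up" when the keyword is ABSENT (error detection).
    """
    found = keyword in body
    return not found if invert else found
```

So a page containing "Internal Server Error" evaluates as Down when the invert check is enabled, even if it was served with a 200 status.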

Other Monitor Types

Uptime Kuma supports several other specialized monitor types, which are worth noting:

  • Push Monitor (Heartbeat Monitor): This type works differently. Instead of Uptime Kuma actively polling a service, the monitored service or a script is responsible for sending a heartbeat (an HTTP GET request) to a unique URL provided by Uptime Kuma. If Uptime Kuma doesn't receive a heartbeat within a configured interval, it marks the service as "Down."

    • Use Cases: Monitoring cron jobs, batch processes, services behind restrictive firewalls that Uptime Kuma can't reach directly, or any application that can be modified to send an HTTP request upon successful completion or periodically.
  • Steam Game Server Monitor: Queries a game server using the Steam server query protocol to check its status, current map, number of players, etc.

    • Use Cases: Monitoring your favorite game servers or servers you host.
  • Docker Container Monitor: Checks whether a specific Docker container is running, via the local Docker socket (the daemon on the same host as Uptime Kuma) or a remote Docker socket.

    • Use Cases: Ensuring critical application containers are up and running on your Docker host.
  • MQTT Monitor: Connects to an MQTT broker and optionally subscribes to a topic to check for messages or broker availability.

    • Use Cases: Monitoring IoT infrastructure or message queues.
  • SQL Server / PostgreSQL / MySQL / MongoDB Monitor: These allow direct connection to database servers to check their availability by attempting to connect and optionally run a simple query.

    • Use Cases: More direct database health checks than just a TCP port check.
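The Push monitor described above can be fed from any script or cron job. The sketch below builds and fires a heartbeat URL; the host and token are placeholders, and the query parameters shown (`status`, `msg`, `ping`) should be verified against the example URL your own Uptime Kuma instance generates for the monitor:

```python
import urllib.parse
import urllib.request


def heartbeat_url(base_push_url: str, status: str = "up",
                  msg: str = "OK", ping_ms=None) -> str:
    """Build a push-monitor heartbeat URL.

    `base_push_url` is the unique URL Uptime Kuma generates, e.g.
    https://kuma.example.com/api/push/<token> (placeholder values).
    """
    params = {"status": status, "msg": msg}
    if ping_ms is not None:
        params["ping"] = str(ping_ms)
    return base_push_url + "?" + urllib.parse.urlencode(params)


def send_heartbeat(base_push_url: str) -> None:
    """Fire the heartbeat; call this at the end of a cron job."""
    urllib.request.urlopen(heartbeat_url(base_push_url), timeout=10)
```

A nightly backup script, for instance, could call `send_heartbeat(...)` as its final step; if the backup hangs or crashes, no heartbeat arrives and Uptime Kuma marks the monitor "Down".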

The choice of monitor type depends entirely on what service you are monitoring and what aspect of its health you want to verify.

Workshop Configuring Diverse Monitors

Objective: Set up at least three different types of monitors in your Uptime Kuma instance to monitor a mix of public and/or local services, demonstrating the versatility of Uptime Kuma.

Prerequisites:

  • Uptime Kuma installed and running (as per Workshop in "1. Getting Started with Uptime Kuma").
  • Access to the Uptime Kuma web dashboard.
  • (Optional) A local network device whose IP address you know (e.g., your home router).

Scenario:
We will configure the following monitors:

  1. An HTTP(s) Monitor for a well-known news website, including keyword checking.
  2. A Ping Monitor for a local network device (like your router) or a public server.
  3. A DNS Monitor to check Google's public DNS server's ability to resolve a common domain.

Steps:

  1. HTTP(s) Monitor for a News Website (with Keyword Check):
    Let's monitor the BBC News website and check for a common word typically found on its homepage.

    • On the Uptime Kuma dashboard, click "+ Add New Monitor".
    • Monitor Type: Select HTTP(s).
    • Friendly Name: Enter BBC News Homepage.
    • URL: Enter https://www.bbc.com/news.
    • Scroll down to the Keyword section.
      • Keyword: Enter World (this word is very likely to be on the BBC News homepage).
      • Keyword Exists / Not Exists: Ensure it's set to Keyword Exists (the default).
    • Review other settings like "Heartbeat Interval" (default 60s is fine for this).
    • Click "Save".
    • Explanation: This monitor will not only check if bbc.com/news returns a 200 OK status but also verify that the word "World" is present in the page content. This adds an extra layer of confidence that the page is rendering meaningful content. If the site were up but serving a blank page or an error page without that keyword, the monitor would (correctly) go "Down," even though the HTTP status code alone looked healthy.
  2. Ping Monitor for a Local or Public Device:
    We'll ping a reliable public server, Google's DNS server 8.8.8.8. If you prefer, and know its IP, you can ping your home router (e.g., 192.168.1.1, 192.168.0.1 - check your computer's network settings for "gateway" or "router" IP).

    • On the dashboard, click "+ Add New Monitor".
    • Monitor Type: Select Ping (ICMP).
    • Friendly Name: Enter Google DNS Primary Ping (or Home Router Ping if using your router's IP).
    • Hostname / IP Address: Enter 8.8.8.8 (or your router's IP address).
    • Leave "Packet Size" and other settings at their defaults unless you have a specific reason to change them.
    • Click "Save".
    • Explanation: This monitor will periodically send ICMP echo requests to 8.8.8.8. If it receives replies, the monitor status will be "Up." This confirms basic network layer reachability to the target.
    • Potential Issue Note: If monitoring a device on your local network (like a router), ensure its firewall isn't blocking incoming ICMP echo requests. Most consumer routers respond to pings on their LAN interface by default. Some public servers or enterprise networks might block ICMP for security reasons, so a failed ping doesn't always mean the server itself is down, just that it's not responding to pings.
  3. DNS Monitor for Google's Public DNS:
    Let's verify that Google's public DNS server (8.8.8.8) can correctly resolve cloudflare.com to an A record (IPv4 address).

    • On the dashboard, click "+ Add New Monitor".
    • Monitor Type: Select DNS.
    • Friendly Name: Enter Google DNS resolving Cloudflare.
    • Hostname to Resolve: Enter cloudflare.com.
    • DNS Server: Enter 8.8.8.8 (to explicitly query Google's DNS).
    • Record Type: Select A from the dropdown.
    • Expected Result (Optional but good for precision): You could put a known Cloudflare IP here, but IPs can change. For this workshop, leaving it blank is fine; the monitor will be "Up" if any valid A record is returned. If you wanted to be very specific, you could first look up an IP for cloudflare.com (e.g., using nslookup cloudflare.com 8.8.8.8 in your terminal) and paste one of the IPs into this field.
    • Click "Save".
    • Explanation: This monitor sends a DNS query to 8.8.8.8 asking for the A record(s) of cloudflare.com. It checks if 8.8.8.8 responds correctly. This is useful for ensuring that DNS resolution, a fundamental internet service, is working as expected through a specific resolver.
  4. Observe Monitor Statuses:
    Go back to your Uptime Kuma dashboard. You should now see your three new monitors:

    • BBC News Homepage
    • Google DNS Primary Ping
    • Google DNS resolving Cloudflare

    After their initial checks, they should all turn green ("Up"). Observe their response times and how the mini-graphs start to populate.

    Discuss potential reasons for a monitor to be "Down":

    • HTTP(s) Monitor:
      • Website is actually down (e.g., server error, application crash).
      • Network connectivity issue between Uptime Kuma server and the target website.
      • Incorrect URL.
      • Firewall blocking Uptime Kuma's IP.
      • Expected status code not met (e.g., site returns 403 Forbidden).
      • Keyword not found (or found, if inverted).
      • SSL certificate issue (if not ignoring TLS errors).
    • Ping Monitor:
      • Target host is offline or unreachable.
      • Firewall on the target host or network path is blocking ICMP packets.
      • Incorrect IP address or hostname.
      • High network latency or packet loss causing timeouts.
    • DNS Monitor:
      • DNS server (8.8.8.8 in our case) is down or unreachable.
      • The hostname (cloudflare.com) does not exist or has no records of the specified type.
      • Network issue preventing Uptime Kuma from reaching the DNS server.
      • Expected result (if specified) does not match the actual DNS response.

This workshop has given you hands-on experience with configuring different types of monitors, which forms the core of Uptime Kuma's functionality. As you monitor more of your own services, you'll select the monitor type that best fits the nature of each service.


Intermediate

Having mastered the basics of installing Uptime Kuma and setting up various monitor types, we now move to intermediate topics. This section will delve into advanced monitor configuration options, setting up notifications to be alerted of incidents, and creating status pages to communicate service availability. These features will help you refine your monitoring strategy and make it more proactive and communicative.

3. Advanced Monitor Configuration

Beyond the basic setup, Uptime Kuma offers several advanced configuration options for each monitor. These allow you to fine-tune how checks are performed, how failures are handled, and how monitors are organized. Understanding these settings is key to creating a robust and efficient monitoring setup that accurately reflects your service health and minimizes false positives.

Heartbeat Interval and Retries

These two settings are fundamental to how Uptime Kuma determines if a service is genuinely down or just experiencing a transient glitch.

  • Heartbeat Interval (Polling Frequency):

    • Explanation: This setting defines how often Uptime Kuma will check the status of the monitored service. It's essentially the "pulse" of your monitoring. For example, a heartbeat interval of 60 seconds means Uptime Kuma will probe the service once every minute.
    • Impact:
      • Shorter Interval (e.g., 10-30 seconds): Faster detection of downtime. You'll know about issues more quickly. However, this also means more frequent requests to your monitored services, which can add a small amount of load. It also consumes more resources on the Uptime Kuma server itself (CPU, network). For highly critical services, a shorter interval is often preferred.
      • Longer Interval (e.g., 5-15 minutes): Slower detection of downtime. This reduces the load on both the monitored services and the Uptime Kuma server. Suitable for less critical services or services where brief, intermittent unavailability is tolerable or expected.
    • Considerations: Choose an interval that balances the need for timely alerts with the acceptable load on your services and Uptime Kuma. For public APIs with rate limits, be mindful not to exceed them with overly frequent checks.
  • Retries:

    • Explanation: When a monitor check fails (e.g., an HTTP request times out or returns a 500 error), Uptime Kuma doesn't immediately mark the service as "Down." Instead, it will attempt to re-check the service a specified number of times. This "Retries" setting defines how many additional attempts are made after the initial failure before the status is officially changed to "Down."
    • How it works: If Retries is set to 2, and an initial check fails:
      1. Initial check: Fails.
      2. Retry 1: Performed shortly after the initial failure (the retry interval is usually shorter than the main heartbeat interval). If it succeeds, the service remains "Up." If it fails...
      3. Retry 2: Performed. If it succeeds, the service remains "Up." If it fails...
      4. The service is marked as "Down," and notifications (if configured) are sent.
    • Impact: Setting retries helps prevent "false positive" alerts caused by transient network blips or very short-lived service interruptions. A service might be momentarily unavailable but recover before the retries are exhausted.
    • Considerations: A common value for retries is 1 to 3. Too many retries can delay legitimate downtime notifications. Too few (or zero) can lead to an increase in alerts for very brief issues. The optimal number depends on the stability of the service and network.
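The retry semantics above can be sketched as a small function (a simplification — real Uptime Kuma also shows a transitional "Pending" state and uses a separate, shorter retry interval):

```python
def effective_status(check, retries: int) -> bool:
    """Apply retry semantics to a check function.

    `check` is any zero-argument callable returning True (up) or
    False (down). The service is only declared Down after the
    initial failure AND `retries` further failed attempts.
    """
    for _ in range(retries + 1):  # initial check + retries
        if check():
            return True  # any success keeps the monitor Up
    return False


# Example: a transient glitch -- the first probe fails but the retry
# succeeds, so with retries=2 the monitor stays Up.
probes = iter([False, True])
print(effective_status(lambda: next(probes), retries=2))  # True
```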

Timeouts

  • Explanation: The "Timeout" setting specifies the maximum amount of time Uptime Kuma will wait for a response from the monitored service during a single check. If the service doesn't respond within this duration, the check is considered failed (which might then trigger the retry mechanism).
  • Setting Appropriate Timeouts:
    • This value should be set based on the expected performance of the service being monitored.
    • For a fast local service or a responsive API, a timeout of 5-10 seconds might be appropriate.
    • For services that are known to be slower or are accessed over less reliable networks, you might need a longer timeout (e.g., 30 seconds).
    • Too Short: A timeout that's too short can lead to false "Down" alerts if the service is just a bit slower than usual but still operational.
    • Too Long: A timeout that's too long means Uptime Kuma will wait unnecessarily for a non-responsive service, potentially delaying the detection of actual downtime if retries are also high.
  • Per-Monitor Type: The timeout is specific to the monitor type. For example, an HTTP(s) monitor has a page timeout, while a Ping monitor has a timeout for ICMP replies.

HTTP(s) Specifics

HTTP(s) monitors have a rich set of advanced options due to the complexity and versatility of the HTTP protocol.

  • Authentication:

    • Basic Authentication: If your website or API requires HTTP Basic Authentication, you can provide the username and password directly in the monitor settings. Uptime Kuma will automatically include the necessary Authorization header.
    • Bearer Token / Custom Headers: For token-based authentication (like OAuth 2.0 Bearer tokens) or other custom authentication schemes, use the "Request Headers" section. You can add an Authorization header with your token:
      • Header Name: Authorization
      • Header Value: Bearer your_very_long_token_here
    • NTLM Authentication: Support for NTLM may also be present for Windows-based services.
  • Custom Headers: You can add any custom HTTP headers to your requests. This is useful for:

    • Sending API keys (e.g., X-Api-Key: yourkey).
    • Specifying Content-Type (e.g., application/json) or Accept headers.
    • Setting a custom User-Agent string if the server behaves differently for specific user agents.
    • Any other header required by the target service.
  • Request Body (for POST/PUT/PATCH requests): If you select POST, PUT, or PATCH as the "Request Method," a "Request Body" section will appear. Here you can specify the data to be sent with the request.

    • Content Type: Choose the type of data you're sending (e.g., JSON, XML, Text, Form Data).
    • Body Content: Enter the actual payload. For JSON, it would be a valid JSON object:
      {
          "key": "value",
          "another_key": 123
      }
      
  • Ignoring TLS/SSL errors (HTTPS only):

    • Option: "Ignore TLS/SSL Error for HTTPS" or similar wording.
    • Functionality: When checked, Uptime Kuma will not validate the SSL/TLS certificate of the HTTPS site it's connecting to. This means it will proceed with the connection even if the certificate is self-signed, expired, issued for a different domain name, or has an untrusted Certificate Authority (CA).
    • Security Implications (CRITICAL): Enabling this option significantly reduces security. It bypasses the mechanisms that ensure you are connecting to the legitimate server and that the communication is encrypted securely without tampering. It makes your monitoring check vulnerable to Man-in-the-Middle (MitM) attacks.
    • When to (cautiously) use:
      • Internal, trusted development/testing environments: Where you are using self-signed certificates for convenience and fully understand the risks within that controlled environment.
      • Legacy internal systems: Where updating certificates is not feasible, and the risk is assessed and accepted.
    • When NOT to use: Never enable this for public-facing websites or any service handling sensitive data if you can avoid it. The proper solution is always to fix the SSL certificate issue on the server side.
    • Recommendation: Use only as a last resort and with full awareness of the security trade-offs.
  • Accepted Status Codes:

    • By default, Uptime Kuma considers HTTP status codes in the range 200 (OK) to 299 (various success codes) as an "Up" state.
    • You can customize this. For example, if you have an API endpoint that correctly returns a 302 Found (redirect) as part of its normal operation and you want to consider that "Up," you can add 302 to the list of accepted codes (e.g., 200-299,302).
    • This allows for more nuanced monitoring where non-2xx codes are expected behavior.
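The accepted-status-codes notation (comma-separated single codes and inclusive ranges, e.g. 200-299,302) can be evaluated with a few lines — an illustrative sketch of the notation, not Uptime Kuma's actual parser:

```python
def status_accepted(code: int, accepted: str = "200-299") -> bool:
    """Check an HTTP status code against an accepted-codes spec.

    Each comma-separated entry is either a single code ("302")
    or an inclusive range ("200-299").
    """
    for entry in accepted.split(","):
        entry = entry.strip()
        if "-" in entry:
            low, high = entry.split("-", 1)
            if int(low) <= code <= int(high):
                return True
        elif entry and code == int(entry):
            return True
    return False
```

With the default spec, a 302 counts as Down; with "200-299,302" it counts as Up.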

Maintenance Windows

Services often require planned maintenance (updates, restarts, hardware changes). During these periods, the service will be intentionally unavailable. You don't want Uptime Kuma to flood you with "Down" alerts during scheduled maintenance.

  • Defining Maintenance Periods: Uptime Kuma allows you to schedule maintenance windows. You can specify:
    • Start date and time.
    • End date and time (or duration).
    • Which monitors are affected (all monitors, or specific ones selected by tags or individually).
    • A description or reason for the maintenance.
  • How Uptime Kuma Handles Monitors During Maintenance:
    • When a monitor enters a scheduled maintenance window, Uptime Kuma typically pauses active polling for it or continues polling but suppresses notifications.
    • The status of the monitor during maintenance might be displayed differently (e.g., a special "In Maintenance" status or icon).
    • No "Down" alerts are sent for services that go down if they are covered by an active maintenance window.
    • Once the maintenance window ends, normal monitoring and alerting resume.
  • Benefits:
    • Prevents alert fatigue during planned outages.
    • Provides a record of scheduled maintenance periods, which can be useful for reporting and historical analysis.
    • Improves the accuracy of your uptime statistics by excluding planned downtime.
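At its core, the suppression rule is a time-interval containment test. A minimal sketch with a hypothetical helper (not Uptime Kuma's internals):

```python
from datetime import datetime


def in_maintenance(now: datetime, windows) -> bool:
    """Return True if `now` falls inside any scheduled window.

    `windows` is a list of (start, end) datetime pairs; checks that
    fail inside a window should be suppressed, not alerted on.
    """
    return any(start <= now < end for start, end in windows)
```

An alerting pipeline would consult this before dispatching a "Down" notification.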

Tags and Grouping

As your number of monitored services grows, keeping them organized becomes essential. Uptime Kuma provides tags for this purpose.

  • Organizing Monitors:
    • Tags: You can assign one or more tags to each monitor (e.g., production, staging, web-servers, databases, customer-facing, internal-tools).
    • Purpose: Tags act like labels, allowing you to categorize and group your monitors.
  • Using Tags:
    • Filtering on the Dashboard: The main dashboard often allows filtering monitors by tags, so you can quickly view only the services belonging to a specific category.
    • Status Pages: When creating status pages, you can choose to display monitors based on their tags. This allows you to create different status pages for different audiences (e.g., a public status page showing only customer-facing services, and an internal one showing all production services).
    • Notifications: You might be able to apply notification rules based on tags (though this depends on the specific notification setup options).
    • Maintenance Windows: You can schedule maintenance for all monitors having a particular tag.
  • Best Practices:
    • Develop a consistent tagging strategy.
    • Use meaningful and descriptive tags.
    • Don't use too many overly granular tags, as it can become hard to manage. Find a balance.

By leveraging these advanced configuration options, you can create a monitoring setup that is more precise, less noisy, and better organized, leading to more effective incident response and service management.

Workshop Fine-Tuning an API Monitor

Objective: Configure an HTTP(s) monitor for a public JSON API endpoint. This workshop will involve setting a shorter heartbeat interval for quicker feedback, configuring retries and a timeout, adding a custom request header, and checking for a specific keyword within the JSON response.

Prerequisites:

  • Uptime Kuma installed and running.
  • Access to the Uptime Kuma web dashboard.
  • A public API endpoint to test against. We'll use https://jsonplaceholder.typicode.com/todos/1, which returns a simple JSON object.

Steps:

  1. Choose the Public API Endpoint: We will use https://jsonplaceholder.typicode.com/todos/1. Open this URL in your browser or use a tool like curl to inspect its output. You should see something like:

    {
      "userId": 1,
      "id": 1,
      "title": "delectus aut autem",
      "completed": false
    }
    
    This is a free fake API useful for testing.

  2. Create a New HTTP(s) Monitor in Uptime Kuma:

    • On the Uptime Kuma dashboard, click "+ Add New Monitor".
    • Monitor Type: Select HTTP(s).
    • Friendly Name: Enter JSONPlaceholder Todo #1 API.
    • URL: Enter https://jsonplaceholder.typicode.com/todos/1.
  3. Configure Advanced Settings (Timing and Retries):

    • Heartbeat Interval: Change this from the default (e.g., 60 seconds) to 30 seconds. This will make Uptime Kuma check the API every 30 seconds, providing faster feedback during this workshop.
      • Explanation: For a real API, especially a public one, you'd choose this based on how critical the API is and any rate limits it might have. 30 seconds is quite frequent for general use.
    • Retries: Set this to 2.
      • Explanation: If the API fails the first check, Uptime Kuma will try two more times before marking it as "Down." This helps avoid false alarms from temporary network glitches.
    • Page Timeout / Timeout: Set this to 5000 milliseconds (which is 5 seconds).
      • Explanation: If the API doesn't respond within 5 seconds, the check will be considered a failure. This is a reasonable timeout for a well-performing API.
  4. Add a Custom Request Header:
    While jsonplaceholder.typicode.com doesn't require custom headers, we'll add one for demonstration purposes. Many real-world APIs require Authorization or X-Api-Key headers.

    • Scroll down to the "Headers" section.
    • Click "+ Add Header".
    • Header Name: Enter X-Workshop-Test
    • Header Value: Enter UptimeKumaDemo-123
    • Explanation: This demonstrates how you would add necessary headers for authentication, content type negotiation (e.g., Accept: application/json), or any other purpose.
  5. Configure Keyword Checking for JSON Response:
    We want to verify not just that the API returns a 200 OK, but also that the content is as expected. Let's check for a part of the JSON response.

    • Scroll down to the "Keyword" section.
    • Keyword Type: Ensure it's "Keyword" (Plain Text).
    • Keyword: Enter "completed": false
      • Explanation: We are looking for this exact string within the JSON response. It's important to choose a keyword that is:
        • Stable: Unlikely to change frequently unless the underlying data truly changes in a way you want to detect.
        • Specific enough: To confirm the response structure/content is what you expect. Avoid overly generic keywords.
        • Note the quotes around "completed" and the space after the colon – be precise with JSON string fragments.
    • Keyword Exists / Not Exists: Ensure it's set to Keyword Exists.
  6. Save and Observe the Monitor:

    • Click "Save" at the bottom of the form.
    • You'll be taken to the dashboard. Observe your JSONPlaceholder Todo #1 API monitor.
    • It should quickly turn "Up" (green) because the API is responsive, returns a 200 OK, and contains the specified keyword.
    • Note the response time. Click on the monitor name to see more details, including the history of checks.
  7. Demonstrate Failure by Changing Expected Keyword:
    Now, let's see what happens if the keyword check fails.

    • Edit the JSONPlaceholder Todo #1 API monitor (click the pencil/edit icon).
    • Go back to the "Keyword" section.
    • Change the Keyword to something incorrect that won't be in the response, for example: "status": "active" (this key-value pair doesn't exist in the sample JSON).
    • Click "Save".
    • Wait for the next check (up to 30 seconds, per our heartbeat interval).
    • The monitor should now turn "Down" (red). If you click on it, the event log should indicate that the keyword was not found, even though the API itself likely returned a 200 OK status code.
    • Explanation: This demonstrates that keyword checking provides a deeper validation than just the HTTP status code. The API is "up" in terms of reachability, but it's not returning the content Uptime Kuma expects, which in many scenarios is a failure condition.
  8. Revert the Keyword and Discuss API Authentication:

    • Edit the monitor again.
    • Change the Keyword back to "completed": false.
    • Click "Save". The monitor should return to "Up" status after the next check.
    • Discussion Point: If this API required authentication (e.g., a Bearer token), you would add it in the "Headers" section:
      • Header Name: Authorization
      • Header Value: Bearer your_actual_api_token
        This is crucial for monitoring protected API endpoints.

This workshop illustrated how to use advanced settings like custom heartbeat intervals, retries, timeouts, custom headers, and keyword checking to create a more robust and specific monitor for an API. These techniques are invaluable for ensuring your services are not just reachable but are also functioning correctly.

4. Notifications and Alerting

Detecting downtime is only half the battle; being promptly notified of incidents is crucial for a swift response. Uptime Kuma excels in its wide range of notification integrations, allowing you to receive alerts through your preferred communication channels. This sub-section covers the importance of notifications, the supported services, and how to set them up.

Importance of Timely Notifications

When a critical service goes down, every minute counts. Timely notifications enable you to:

  • Minimize Downtime: The sooner you know about a problem, the sooner you can start investigating and resolving it, reducing the impact on users.
  • Proactive Response: Instead of waiting for users to report issues, you can be ahead of the curve, potentially fixing problems before they are widely noticed.
  • Maintain SLAs: For businesses with Service Level Agreements, quick detection and resolution are key to meeting uptime commitments.
  • Improve Reliability: Analyzing alert patterns can help identify recurring issues or unstable services that need attention.
  • Peace of Mind: Knowing you'll be alerted if something breaks allows you to focus on other tasks without constantly checking dashboards manually.

Supported Notification Services

Uptime Kuma boasts an impressive list of over 90 (and growing) integrated notification providers. This flexibility ensures you can likely use a service you're already familiar with. Some of the most popular and commonly used ones include:

  • Email (SMTP): A classic. Send alerts to any email address via an SMTP server.
  • Telegram: A popular messaging app, great for instant mobile notifications to individuals or groups.
  • Slack: Widely used team collaboration platform; Uptime Kuma can send messages to specific channels.
  • Discord: Another popular communication platform for communities and teams.
  • Webhook: A generic and powerful option. Uptime Kuma can send an HTTP POST request with event data to any URL. This allows integration with countless third-party services or custom scripts (e.g., PagerDuty, Opsgenie, custom incident management systems).
  • Gotify: A self-hostable push notification server.
  • ntfy.sh: A free and open-source pub-sub notification service that supports web push and mobile apps.
  • Signal: Secure messaging app.
  • Microsoft Teams: Collaboration platform.
  • Google Chat: Google's team messaging service.
  • Rocket.Chat: Open-source team communication platform.
  • Pushbullet, Pushover, Bark: Various push notification services for mobile devices.
  • Many more: Including specific services like Twilio (SMS), Apprise (which itself supports many more services), Matrix, etc.

You can find the full, up-to-date list within Uptime Kuma's settings or on its official GitHub page.
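
For the generic Webhook provider, your receiving endpoint gets a JSON POST. The sketch below condenses such a payload into a one-line summary a custom script might log or forward; the exact payload shape assumed here (a monitor object plus a heartbeat object with a numeric status) should be verified against a test delivery from your own instance.

```python
# Parse an Uptime Kuma-style webhook payload into a short summary line.
# Field names ("monitor", "heartbeat", "status") are assumptions to
# check against a real test delivery.
import json

def summarize_kuma_webhook(raw_body: bytes) -> str:
    """Turn a webhook payload into a one-line summary for logging or paging."""
    payload = json.loads(raw_body)
    monitor = payload.get("monitor", {}).get("name", "unknown monitor")
    # In heartbeat data, status 1 conventionally means "up" and 0 "down";
    # treat anything else as unknown.
    status_code = payload.get("heartbeat", {}).get("status")
    status = {1: "UP", 0: "DOWN"}.get(status_code, "UNKNOWN")
    return f"{monitor}: {status}"

# Example delivery (hypothetical values):
example = json.dumps({
    "msg": "[My API] [DOWN] timeout",
    "monitor": {"name": "My API"},
    "heartbeat": {"status": 0},
}).encode()
print(summarize_kuma_webhook(example))  # My API: DOWN
```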

Setting up a Notification Provider

The general process for setting up a new notification provider in Uptime Kuma is consistent:

  1. Navigate to Settings: In the Uptime Kuma web interface, click on "Settings" in the top navigation bar.
  2. Select "Notifications": In the settings menu (usually on the left sidebar), click on "Notifications."
  3. Setup New Notification: Click the button labeled "Setup New Notification" or similar.
  4. Choose Notification Type: A dropdown list or selection area will show all supported notification services. Select the one you want to configure.
  5. Configure Provider-Specific Details: Each notification type will have its own set of required fields. You'll need to provide API keys, tokens, server addresses, channel IDs, etc., depending on the service.
  6. Test Notification: Most providers offer a "Test" button. Use this to send a sample notification to confirm your settings are correct and Uptime Kuma can communicate with the service.
  7. Save Configuration: Once tested and configured, save the notification provider setup.

Let's look at a couple of common examples:

Example: Email (SMTP)

  • Friendly Name: A name for this notification setup (e.g., "Admin Email Alerts").
  • Hostname: Your SMTP server's address (e.g., smtp.example.com, smtp.gmail.com).
  • Port: The SMTP port (e.g., 587 for STARTTLS, 465 for implicit SSL/TLS, or 25 for unencrypted mail, though an encrypted option is strongly recommended).
  • Security: Choose the encryption method: None, SSL/TLS, or STARTTLS.
  • Username: Your SMTP account username.
  • Password: Your SMTP account password.
  • From Email: The email address alerts will appear to be sent from.
  • To Email: The email address(es) where alerts will be sent.
  • Custom Subject (Optional): You can customize the subject line of the notification emails using template variables (e.g., {{monitorName}} is {{status}}).

Example: Telegram

  • Friendly Name: A name for this setup (e.g., "Telegram Ops Team").
  • Bot Token: The API token for your Telegram Bot. You get this from BotFather on Telegram when you create a bot.
  • Chat ID: The unique ID of the Telegram chat (user, group, or channel) where notifications should be sent. You can get this from various helper bots like @userinfobot for your personal chat ID, or by adding your bot to a group and using other methods to find the group's ID.
    • Getting Bot Token:
      1. In Telegram, search for "BotFather" and start a chat.
      2. Send /newbot command.
      3. Follow instructions to name your bot and choose a username for it.
      4. BotFather will provide an HTTP API token. Copy and save this token securely.
    • Getting Chat ID (for personal chat):
      1. In Telegram, search for "userinfobot" and start a chat.
      2. Send any message (e.g., /start or hello).
      3. The bot will reply with your user information, including your "Id". This is your Chat ID.
    • Getting Chat ID (for a group):
      1. Add your newly created bot to the Telegram group.
      2. Send any message to the group so the bot has something to receive. (Commands like /my_id @your_bot_username only work with certain helper bots; the principle is simply to get a message to the bot in the group.)
      3. Open the Telegram Bot API URL https://api.telegram.org/botYOUR_BOT_TOKEN/getUpdates in a browser, substituting your bot token. In the JSON response, find the message you sent and read its chat.id (for groups this is usually a negative number). Alternatively, use a helper bot that can report group IDs.
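
The getUpdates lookup described above can also be scripted. This sketch extracts every chat ID from a saved getUpdates response; the sample JSON is illustrative, and the function name is ours, not part of any Telegram library.

```python
# Pull all chat.id values out of a Bot API getUpdates JSON response.
# Feed it the JSON you fetched from the getUpdates URL (browser or curl).
import json

def extract_chat_ids(get_updates_json: str) -> set:
    """Collect every chat.id seen in a getUpdates response."""
    data = json.loads(get_updates_json)
    ids = set()
    for update in data.get("result", []):
        message = update.get("message") or update.get("channel_post") or {}
        chat = message.get("chat", {})
        if "id" in chat:
            ids.add(chat["id"])
    return ids

# Illustrative sample response with one private chat and one group:
sample = json.dumps({
    "ok": True,
    "result": [
        {"update_id": 1, "message": {"chat": {"id": 123456789, "type": "private"}}},
        {"update_id": 2, "message": {"chat": {"id": -987654321, "type": "group"}}},
    ],
})
print(sorted(extract_chat_ids(sample)))  # [-987654321, 123456789]
```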

Linking Notifications to Monitors

Once a notification provider is set up and saved, you need to tell Uptime Kuma which monitors should use it.

  • Apply Globally (Default Notification):
    • When setting up a notification provider, there's usually an option like "Default - Send this notification for all existing and new monitors" or "Apply on all existing monitors and enable by default for new monitors."
    • If checked, this notification provider will be automatically used for all current and future monitors unless overridden at the individual monitor level. This is convenient for a primary alert channel.
  • Apply Per-Monitor:
    • When editing an individual monitor's settings, there's a "Notifications" section.
    • Here, you can select which specific configured notification providers should be used for that particular monitor.
    • This allows for granular control. For example:
      • Critical production services might alert via Telegram, Email, and PagerDuty.
      • Less critical staging services might only alert via a specific Slack channel.
      • Personal project monitors might only alert your personal email.
  • Delaying Notifications:
    • Some notification setups might allow you to specify a delay before a notification is sent after a service is confirmed "Down." This can be useful if you want to give a service a few extra minutes to recover automatically before an alert is triggered, especially if it's prone to very short, self-correcting outages.

Customizing Notification Messages

Uptime Kuma generally provides good default notification messages that include essential information like the monitor name, its status (Up/Down), and sometimes the error message or response time.

  • Template Variables: For some notification types (like Email or Webhook), Uptime Kuma might allow you to customize the message body or subject using template variables. These variables are placeholders that get replaced with actual data at the time of notification. Common variables include:
    • {{monitorName}}: The friendly name of the monitor.
    • {{monitorURL}}: The URL or hostname of the monitor.
    • {{status}}: The current status (e.g., "UP", "DOWN").
    • {{time}}: The timestamp of the event.
    • {{error}}: The error message if the monitor is down.
    • {{latency}}: The response time.
  • Consult Documentation: Check the Uptime Kuma documentation or the specific notification provider's settings within Uptime Kuma to see which variables are available and how to use them. This allows you to tailor notifications to include exactly the information you need in the format you prefer.
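
The substitution mechanism itself is simple to picture: each placeholder is replaced with event data at the moment the notification fires. The sketch below is a standalone illustration of the idea, not Uptime Kuma's actual templating code.

```python
# Replace {{variable}} placeholders in a message template with values
# from an event dictionary. Event keys mirror the variables listed above.
def render_template(template: str, event: dict) -> str:
    """Substitute every {{key}} placeholder with its event value."""
    message = template
    for key, value in event.items():
        message = message.replace("{{" + key + "}}", str(value))
    return message

event = {"monitorName": "Payments API", "status": "DOWN", "error": "timeout"}
print(render_template("{{monitorName}} is {{status}}: {{error}}", event))
# Payments API is DOWN: timeout
```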

Effective notification setup is key to transforming Uptime Kuma from a passive dashboard into an active alerting system that helps you maintain service reliability.

Workshop Setting up Telegram Notifications

Objective:
Configure Uptime Kuma to send alert messages to your Telegram account when a monitored service goes down or comes back up.

Prerequisites:

  • Uptime Kuma installed and running.
  • A Telegram account.
  • Access to the Uptime Kuma web dashboard.

Steps:

  1. Create a Telegram Bot with BotFather:

    • Open your Telegram app (on desktop or mobile).
    • In the search bar, type BotFather and select the official BotFather account (it usually has a blue checkmark).
    • Start a chat with BotFather by clicking "Start" or sending /start.
    • To create a new bot, send the command: /newbot
    • BotFather will ask for a display name for your bot. Choose any name, for example: My Kuma Alerter
    • Next, BotFather will ask for a username for your bot. This username must be unique and end in bot. For example: MyKumaAlerterBot or MyPersonalKumaBot.
    • If the username is valid and available, BotFather will congratulate you and provide an HTTP API token. It will look something like 1234567890:ABCdefGhIJKLmnopQRSTuvwxyz123456789.
    • IMPORTANT: Copy this token immediately and save it in a secure place (like a password manager). This token is like a password for your bot; anyone with it can control your bot.
  2. Get Your Telegram Chat ID:
    You need to tell your bot where to send messages. For personal alerts, this will be your personal chat ID.

    • In the Telegram search bar, type userinfobot and select the "UserInfo Bot" (or a similar bot that provides user details).
    • Start a chat with UserInfo Bot by clicking "Start" or sending /start.
    • The bot will immediately reply with your user information, including your ID. This numeric ID is your Chat ID.
    • Copy this Chat ID and save it.
    • (Alternative for sending to a group: add your newly created bot to the Telegram group and send any message in it. Then open https://api.telegram.org/bot<YOUR_BOT_TOKEN>/getUpdates in a web browser, replacing <YOUR_BOT_TOKEN> with the token from Step 1. In the JSON response, find the message you sent and read message.chat.id; group IDs are usually negative numbers. The bot must have received at least one message in the group for it to appear in the getUpdates output.)
  3. Configure Telegram Notification Provider in Uptime Kuma:

    • Open your Uptime Kuma dashboard in a web browser.
    • Click on "Settings" in the top navigation bar.
    • In the left sidebar of the Settings page, click on "Notifications".
    • Click the "+ Setup New Notification" button.
    • Notification Type: Select Telegram from the dropdown list.
    • Friendly Name: Give this notification setup a descriptive name, e.g., My Personal Telegram.
    • Bot Token: Paste the HTTP API token you got from BotFather in Step 1.
    • Chat ID: Paste the Chat ID you got from UserInfo Bot (or your group chat ID) in Step 2.
    • Send Silently (Optional): You can choose to send notifications silently if you prefer.
    • Default - Apply on all existing monitors...: For this workshop, check this box. This means any monitor going down will use this Telegram notification by default.
    • Click the "Test" button.
      • You should receive a test message in your Telegram chat from the bot you created (e.g., "My Kuma Alerter").
      • Troubleshooting Test Failure:
        • Double-check the Bot Token: Ensure no extra spaces or missing characters.
        • Double-check the Chat ID: Ensure it's correct.
        • Ensure your bot can send messages to you: If it's for personal chat, you might need to send a /start message to your bot first from your Telegram account to initiate the chat. (Search for your bot's username in Telegram, open the chat, and send /start).
        • If sending to a group, ensure the bot is a member of the group and has permission to send messages.
    • Once the test is successful, click "Save".
  4. Test with an Actual Monitor Event:
    Now, let's trigger an alert by making a monitor go "Down."

    • Go back to your Uptime Kuma Dashboard.
    • If you still have the JSONPlaceholder Todo #1 API monitor from the previous workshop, edit it. If not, create a new temporary HTTP(s) monitor.
    • Change its URL to something that is guaranteed to fail, e.g., https://thissitedefinitelydoesnotexist12345abc.com.
    • Alternatively, if it's an HTTP monitor with keyword checking, change the keyword to something that won't be found.
    • Save the monitor.
    • Wait for Uptime Kuma to perform its next check (this depends on the monitor's "Heartbeat Interval" and "Retries" settings).
    • Once Uptime Kuma detects the monitor as "Down" (it will turn red on the dashboard), you should receive a Telegram notification from your bot. The message will typically state which monitor is down.
    • Now, fix the monitor:
      • Edit the failing monitor again.
      • Change the URL back to a working one (e.g., https://jsonplaceholder.typicode.com/todos/1) or fix the keyword.
      • Save the monitor.
    • Wait for the next check. Once Uptime Kuma detects the monitor is "Up" again, you should receive another Telegram notification indicating that the service has recovered.
  5. Discuss Best Practices for Notification Fatigue:

    • While instant notifications are great, receiving too many can lead to "alert fatigue," where people start ignoring them.
    • Strategies:
      • Use "Default" wisely: Only apply truly critical notification channels as default.
      • Granular control: For less critical services, consider sending notifications to less intrusive channels (e.g., a specific Slack channel instead of direct Telegram to everyone) or not at all.
      • Appropriate Retries: Use the "Retries" setting on monitors to avoid alerts for very brief, transient issues.
      • Maintenance Windows: Use maintenance windows for planned downtime to suppress alerts.
      • Notification Delays: If available for the provider, a slight delay before sending an alert can sometimes allow for auto-recovery.
      • Tiered Notifications: For very complex systems (though Uptime Kuma is simpler), you might use different notification channels based on severity or duration of outage (this usually requires external tools integrating with Uptime Kuma's webhooks).

You have now successfully integrated Uptime Kuma with Telegram for real-time alerts. This significantly enhances your ability to respond quickly to service disruptions. You can repeat similar steps to add other notification providers like Email or Slack if needed.
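
If the in-app "Test" button ever fails without a useful error, calling the Bot API directly can surface Telegram's own error message. The sendMessage endpoint and its chat_id/text parameters are part of the official Bot API; the token and chat ID below are placeholders you must replace with your own.

```python
# Build a Telegram Bot API sendMessage request URL by hand, so the raw
# API response (including any error description) can be inspected.
from urllib.parse import urlencode

def build_send_message_url(bot_token: str, chat_id: str, text: str) -> str:
    """Construct a Bot API sendMessage URL (GET form, fine for short texts)."""
    query = urlencode({"chat_id": chat_id, "text": text})
    return f"https://api.telegram.org/bot{bot_token}/sendMessage?{query}"

url = build_send_message_url("123456:EXAMPLE_TOKEN", "987654321", "Kuma test")
print(url.split("?")[0])  # https://api.telegram.org/bot123456:EXAMPLE_TOKEN/sendMessage

# To actually send the message (requires a real token and chat ID):
# from urllib.request import urlopen
# print(urlopen(url).read().decode())  # Telegram replies with JSON, errors included
```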

5. Status Pages

While notifications are for administrators and technical teams, status pages serve a different purpose: communicating the current operational status of your services to a broader audience, such as end-users, customers, or internal stakeholders. Uptime Kuma allows you to create customizable, public-facing (or private) status pages easily.

Purpose of Status Pages

Status pages are a vital component of transparency and trust-building for any service provider. Their main purposes include:

  • Real-time Service Status:
    Provide a centralized, easy-to-understand view of whether your services are operational, experiencing issues, or undergoing maintenance.
  • Incident Communication:
    When an outage occurs, a status page is the primary place to post updates about the issue, expected resolution times, and what's being done to fix it. This reduces the flood of support tickets and queries.
  • Transparency and Trust:
    Being open about service health, even during incidents, builds trust with your users. It shows you are aware of issues and are working on them.
  • Historical Performance:
    Many status pages also show historical uptime data, further demonstrating service reliability over time.
  • Reduced Support Load:
    During an outage, users can check the status page first before contacting support, significantly reducing the burden on your support team.
  • Proactive Announcements:
    You can use status pages to announce planned maintenance or other events that might affect service availability.

Creating a Status Page

Setting up a new status page in Uptime Kuma is straightforward:

  1. Navigate to "Status Pages": In the Uptime Kuma web interface, click on "Status Pages" in the top navigation bar.
  2. Add New Status Page: Click the button typically labeled "+ Add New Status Page" or similar.
  3. Basic Configuration: You'll be presented with a form to configure the new status page:
    • Slug: This is the unique path component for the URL of your status page. For example, if your Uptime Kuma is at http://kuma.example.com, and you set the slug to myservices, the status page will be accessible at http://kuma.example.com/status/myservices. Choose a short, descriptive, URL-friendly slug (lowercase letters, numbers, hyphens).
    • Title: The main title displayed at the top of the status page (e.g., "My Awesome Service Status," "Contoso Corp System Status").
    • Description (Optional): A brief description that appears below the title, providing more context.
    • Theme: Uptime Kuma might offer different visual themes (e.g., Light, Dark, Auto).
    • Show Powered By Uptime Kuma (Optional): You can choose whether to display a small "Powered by Uptime Kuma" link in the footer.

Customizing Status Pages

Uptime Kuma provides several options to tailor the appearance and content of your status pages:

  • Adding a Logo: You can upload your company or project logo, which will typically be displayed in the header of the status page, reinforcing branding.
  • Custom CSS: For more advanced visual customization, Uptime Kuma allows you to add custom CSS rules. This lets you change fonts, colors, spacing, and almost any other visual aspect of the page to match your branding. You'll need some CSS knowledge for this. You can use your browser's developer tools (Inspect Element) to identify CSS selectors on the status page and then override their styles.
  • Selecting Which Monitors to Display:
    • You don't necessarily want to show all your internal monitors on a public status page.
    • Uptime Kuma allows you to select which monitors are included. This is often done by:
      • Adding Monitors Individually: Manually picking monitors from your list.
      • Using Tags: If you've tagged your monitors (e.g., public-facing, critical-api), you can often specify that all monitors with a certain tag should be displayed. This is a very flexible approach.
  • Grouping Monitors on the Status Page:
    • To improve clarity, especially if you have many services, you can group related monitors. For example, you could have groups like "Website Services," "API Endpoints," "Database Services."
    • You create a group, give it a name, and then assign selected monitors to that group. Each group will then be displayed as a section on the status page.
  • Announcements and Incident History:
    • Announcements/Notices: You can post general announcements at the top of your status page (e.g., "We are performing scheduled maintenance on X service tonight from 2 AM to 4 AM.").
    • Incidents: When a service is down, you can create an "incident" associated with it. This allows you to post updates as you investigate and resolve the issue (e.g., "Investigating - We are aware of an issue affecting the login service and are currently investigating.", "Monitoring - A fix has been implemented, and we are monitoring the situation.", "Resolved - The issue with the login service has been resolved."). This creates a timeline of the incident for users to follow.
    • Uptime Kuma typically displays a history of past incidents, contributing to transparency.

Password Protection for Status Pages

Sometimes you may want a status page that is not fully public but accessible only to certain individuals (e.g., an internal status page for employees). Uptime Kuma provides an option to password-protect a status page.

  • You set a username and password in the status page configuration.
  • When someone tries to access the status page URL, they will be prompted for these credentials via HTTP Basic Authentication.
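
Behind that prompt is plain HTTP Basic Authentication: the client base64-encodes "username:password" into an Authorization header. A script polling a protected status page would attach the header like this; the credentials and URL below are placeholders.

```python
# Build the value of an HTTP Basic Authorization header, as a browser
# does when you fill in the credentials prompt.
import base64

def basic_auth_header(username: str, password: str) -> str:
    """Base64-encode 'username:password' into a Basic auth header value."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

header = basic_auth_header("statusviewer", "s3cret")
print(header)

# Example request against a protected page (hypothetical URL):
# from urllib.request import Request, urlopen
# req = Request("http://kuma.example.com/status/internal",
#               headers={"Authorization": header})
# print(urlopen(req).status)
```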

Custom Domain Setup (Brief Mention)

While Uptime Kuma serves status pages from its own URL (e.g., http://<kuma_ip_or_domain>:3001/status/<slug>), you might want to host it on a custom domain or subdomain for a more professional look (e.g., status.yourcompany.com).

  • This is typically achieved by setting up a reverse proxy (like Nginx, Apache, or Caddy) in front of Uptime Kuma.
  • The reverse proxy would be configured to listen on your custom domain (e.g., status.yourcompany.com) and forward requests for the status page path to the Uptime Kuma instance.
  • You would also configure DNS: create a CNAME or A record for status.yourcompany.com pointing to the server running your reverse proxy.
  • SSL/TLS certificates (e.g., from Let's Encrypt) should also be handled by the reverse proxy to secure the custom domain with HTTPS.
  • This is a more advanced setup and will be covered in more detail in the "Security and Reverse Proxy" section.

Status pages are a powerful tool for user communication. By thoughtfully configuring them, you can significantly improve user experience, especially during service disruptions.

Workshop Creating a Public Status Page

Objective:
Create a publicly accessible status page in Uptime Kuma that displays the status of a selection of your monitors, customize its appearance slightly, and add an announcement.

Prerequisites:

  • Uptime Kuma installed and running.
  • At least 2-3 monitors configured (e.g., from previous workshops like "Google Search Homepage," "BBC News Homepage," "JSONPlaceholder Todo #1 API").
  • Access to the Uptime Kuma web dashboard.

Steps:

  1. Navigate to Status Pages in Uptime Kuma:

    • In your Uptime Kuma dashboard, click on "Status Pages" in the top navigation menu.
  2. Add a New Status Page:

    • Click the "+ Add New Status Page" button.
  3. Configure Basic Status Page Details:

    • Slug: Enter a URL-friendly slug. For this workshop, use my-public-services.
      • (This means your status page will be accessible at http://<your_kuma_ip_or_domain>:3001/status/my-public-services)
    • Title: Enter a descriptive title, e.g., My Awesome Services - Live Status.
    • Description: Add a short description, e.g., Real-time operational status and incident reports for my key services.
    • Theme: Choose "Auto" or your preferred theme (Light/Dark).
    • Leave "Show 'Powered by Uptime Kuma'" checked for now (it's good to give credit!).
  4. Customize Appearance (Optional but Recommended):

    • Logo: If you have a simple logo image URL handy, you can paste it in the "Logo URL" field. Otherwise, skip this for now.
    • Custom CSS: Let's add a little custom CSS to change the background color of the page header (where the title is).
      • In the "Custom CSS" text area, paste the following:
        /* Target the main header of the status page */
        .status-page .main-header {
            background-color: #2c3e50; /* A dark blue-gray color */
            color: #ecf0f1; /* A light gray for text */
        }
        
        .status-page .main-header h1 { /* Target the title within the header */
            color: #ffffff; /* White color for the main title */
        }
        
        .status-page .main-header p { /* Target the description within the header */
            color: #bdc3c7; /* A softer gray for the description */
        }
        
      • Explanation of CSS:
        • .status-page .main-header: This CSS selector targets the div element that typically acts as the main header of the status page. We're changing its background and default text color.
        • .status-page .main-header h1 and .status-page .main-header p: These target the <h1> (title) and <p> (description) elements specifically within that header to give them distinct colors.
        • You can find CSS selectors by right-clicking on elements on a webpage and choosing "Inspect" or "Inspect Element" in your browser's developer tools. This opens a panel where you can see the HTML structure and the CSS applied to it.
  5. Manage Monitors and Create a Group:
    We want to select specific monitors for this public page and organize them.

    • Scroll down to the section for managing monitors on the status page (it might be labeled "Service Groups" or "Displayed Monitors").
    • Click on "+ Add Group".
      • Group Name: Enter Public Web Services.
      • Click "Create" or "Save" for the group.
    • Now, you need to add monitors to this "Public Web Services" group.
      • You should see a list of your available monitors. Find monitors like Google Search Homepage or BBC News Homepage (or any other public-facing HTTP(s) monitors you've created).
      • Drag these monitors from the "Available Monitors" list into your newly created "Public Web Services" group.
    • Ensure the option "Show this group on the status page" (or similar wording) is checked for your "Public Web Services" group. If there's an option to show monitors not in any group, you might want to uncheck that if you only want grouped monitors.
  6. Add an Announcement (Optional):
    Let's add a welcome message.

    • Find the section for "Announcements" or "Incidents" on the status page configuration. It might be a separate tab or section.
    • Click on "+ Add Announcement" or "Create Incident/Notice".
    • Style/Type: Select Info or Announcement.
    • Title: Enter Welcome to Our New Status Page!
    • Content/Description: Enter We are pleased to provide real-time updates on our service availability through this new status page. Please check back here for any operational announcements or incident reports.
    • You might have options for start/end dates for the announcement. For a permanent welcome, these might not be needed or can be set to far future dates.
    • Post or Save the announcement.
  7. Save the Status Page Configuration:

    • Click the main "Save" button for the entire status page configuration.
  8. Access and Review Your New Status Page:

    • Open a new browser tab or window.
    • Navigate to the URL of your status page: http://<your_kuma_ip_or_domain>:3001/status/my-public-services (replace <your_kuma_ip_or_domain> with your Uptime Kuma server's IP address or domain name).
    • You should see:
      • Your title and description with the custom CSS applied to the header.
      • The "Public Web Services" group with the monitors you added to it, showing their current status.
      • The announcement you posted.
      • The "Powered by Uptime Kuma" footer.
  9. Share and Discuss:

    • This URL is now "public" in the sense that anyone who can reach your Uptime Kuma server and knows the URL can view it.
    • Discussion Points:
      • For a truly public-facing status page for a real business, you would typically set this up on a custom subdomain (e.g., status.yourcompany.com) using a reverse proxy and secure it with HTTPS. This enhances professionalism and trust.
      • Consider what information is appropriate for a public page. Avoid exposing internal hostnames or sensitive details.
      • Regularly review and update the monitors displayed and any announcements.

You have successfully created and customized a status page! This is a valuable tool for communicating with your users. Experiment with adding more monitors, creating different groups, or posting mock incident updates to familiarize yourself further with its capabilities.


Advanced

With a solid understanding of Uptime Kuma's intermediate features, we now venture into advanced topics. This section will cover securing your Uptime Kuma instance using a reverse proxy, implementing backup and recovery strategies, and exploring more sophisticated use cases like API interactions and push monitoring. These advanced skills will help you create a production-ready, resilient, and more deeply integrated monitoring system.

6. Security and Reverse Proxy

While Uptime Kuma itself has authentication for its admin interface, exposing any web application directly to the internet, especially on a non-standard port, can have security implications and usability drawbacks. A reverse proxy is a crucial component for enhancing security, enabling custom domains, and managing SSL/TLS certificates for Uptime Kuma in a production environment.

Why Use a Reverse Proxy?

A reverse proxy is a server that sits in front of web servers (like Uptime Kuma) and forwards client requests (from web browsers) to those web servers. It offers numerous benefits:

  1. HTTPS/SSL Termination (Security):
    • Uptime Kuma itself runs on HTTP by default. A reverse proxy can handle incoming HTTPS connections, decrypt the SSL/TLS traffic, and then forward the request to Uptime Kuma over plain HTTP on the local network. This encrypts the traffic between the client and your server, protecting sensitive data like login credentials.
    • It centralizes SSL certificate management (e.g., using Let's Encrypt for free certificates).
  2. Custom Domain Names & Clean URLs:
    • Instead of accessing Uptime Kuma via http://<your_server_ip>:3001, you can use a user-friendly custom domain like status.yourcompany.com or uptime.yourdomain.net. The reverse proxy maps this domain to your Uptime Kuma instance, hiding the port number.
  3. Load Balancing (Less relevant for a single Uptime Kuma instance):
    • For high-traffic applications, reverse proxies can distribute load across multiple backend servers. While Uptime Kuma typically runs as a single instance, this is a general benefit of reverse proxies.
  4. Hiding Internal IP/Port:
    • The reverse proxy acts as the public entry point. The actual IP address and port where Uptime Kuma is listening (e.g., 127.0.0.1:3001) are not directly exposed to the internet, adding a layer of obscurity.
  5. Path-Based Routing / Consolidating Services:
    • You can host multiple web services on the same server IP and port, with the reverse proxy routing traffic based on the URL path (e.g., yourdomain.com/uptime-kuma goes to Uptime Kuma, yourdomain.com/other-app goes to another application).
  6. Security Enhancements:
    • Web Application Firewall (WAF): Some reverse proxies can integrate WAF capabilities (e.g., ModSecurity for Nginx/Apache) to filter malicious requests.
    • Rate Limiting & Access Control: Configure rules to limit request rates or restrict access based on IP address at the proxy level.
    • HTTP Header Manipulation: Add or remove HTTP headers for security (e.g., Strict-Transport-Security, Content-Security-Policy).
  7. Caching:
    • For static content, a reverse proxy can cache responses, reducing load on the backend Uptime Kuma server (though Uptime Kuma's dashboard is dynamic, status pages might benefit slightly).

Several excellent open-source reverse proxy servers are available:

  • Nginx:
    • Extremely popular, high-performance, and stable. Known for its efficiency and rich feature set.
    • Configuration can be a bit complex for beginners but is very powerful.
    • Nginx Proxy Manager (NPM): A Docker-based application that provides a user-friendly web UI for managing Nginx proxy hosts, including SSL certificate generation with Let's Encrypt. Highly recommended for ease of use if you're already using Docker.
  • Apache HTTP Server (httpd):
    • Another veteran web server that can also function as a very capable reverse proxy (using modules like mod_proxy).
    • Often chosen by those already familiar with Apache configuration.
  • Caddy Server:
    • Modern web server known for its automatic HTTPS (it provisions SSL certificates from Let's Encrypt by default and renews them).
    • Its configuration file (Caddyfile) is generally considered simpler than Nginx's for many common use cases.
  • Traefik Proxy:
    • A cloud-native edge router/reverse proxy that's particularly popular in Docker and Kubernetes environments due to its automatic service discovery capabilities.
    • Can automatically configure itself based on container labels.

The choice often depends on your familiarity, existing infrastructure, and specific needs. For this guide, we'll focus conceptually on Nginx, as it's widely used, and mention Nginx Proxy Manager for a simpler UI-driven approach.

Basic Nginx Configuration for Uptime Kuma

Here's a conceptual example of an Nginx server block to reverse proxy Uptime Kuma, assuming Uptime Kuma is running on http://127.0.0.1:3001 on the same server as Nginx.

# /etc/nginx/sites-available/uptime-kuma.yourdomain.com

server {
    listen 80;
    server_name uptime-kuma.yourdomain.com; # Your desired domain/subdomain

    # Optional: Redirect HTTP to HTTPS once SSL is set up
    # location / {
    #     return 301 https://$host$request_uri;
    # }

    # For Let's Encrypt ACME challenge (if handling SSL with Certbot directly on Nginx)
    location ~ /.well-known/acme-challenge/ {
        allow all;
        root /var/www/html; # Or a directory Nginx can write to for challenges
    }

    # Main proxy configuration after SSL is set up
    # (If not using SSL yet, this location block can be directly under listen 80)
    location / {
        proxy_pass http://127.0.0.1:3001; # Uptime Kuma's local address and port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket support - CRUCIAL for Uptime Kuma's real-time updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        proxy_set_header X-Forwarded-Host $host; # Some apps need this
        proxy_set_header X-Forwarded-Server $host; # Some apps need this
        proxy_read_timeout 86400s; # Optional: longer timeout if needed for large data transfers (less relevant for Kuma)
        proxy_send_timeout 86400s; # Optional
    }
}

# After obtaining SSL certificate (e.g., with Certbot), your config will look more like:
# server {
#     listen 443 ssl http2;
#     listen [::]:443 ssl http2;
#     server_name uptime-kuma.yourdomain.com;

#     ssl_certificate /etc/letsencrypt/live/uptime-kuma.yourdomain.com/fullchain.pem;
#     ssl_certificate_key /etc/letsencrypt/live/uptime-kuma.yourdomain.com/privkey.pem;
#     # Include other SSL best practices (e.g., from certbot, or Mozilla SSL Config Generator)

#     # (location / block with proxy_pass and WebSocket settings as above)
#     location / {
#         proxy_pass http://127.0.0.1:3001;
#         proxy_set_header Host $host;
#         # ... other proxy_set_header lines ...
#         proxy_http_version 1.1;
#         proxy_set_header Upgrade $http_upgrade;
#         proxy_set_header Connection "upgrade";
#     }
# }
# server {
#    listen 80;
#    server_name uptime-kuma.yourdomain.com;
#    location / {
#        return 301 https://$host$request_uri; # Redirect HTTP to HTTPS
#    }
# }

Key Nginx Directives Explained:

  • listen 80; / listen 443 ssl;: Tells Nginx to listen on port 80 (HTTP) or 443 (HTTPS).
  • server_name uptime-kuma.yourdomain.com;: Specifies which domain this server block applies to.
  • proxy_pass http://127.0.0.1:3001;: The core directive. It forwards requests to Uptime Kuma.
  • proxy_set_header Host $host;: Passes the original host header from the client to Uptime Kuma.
  • proxy_set_header X-Real-IP $remote_addr;: Passes the client's real IP address.
  • proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;: Appends client IP to list of IPs request has passed through.
  • proxy_set_header X-Forwarded-Proto $scheme;: Tells Uptime Kuma if the original request was HTTP or HTTPS.
  • WebSocket Support (Critical for Uptime Kuma):
    • proxy_http_version 1.1;
    • proxy_set_header Upgrade $http_upgrade;
    • proxy_set_header Connection "upgrade";
    These three lines are essential because Uptime Kuma's web interface uses WebSockets for real-time communication (live dashboard updates, log streaming). Without them, the dashboard might load, but live updates will fail.
  • SSL with Let's Encrypt (Certbot):
    • Certbot is a command-line tool that automates obtaining and renewing free SSL/TLS certificates from Let's Encrypt.
    • You'd typically run sudo certbot --nginx -d uptime-kuma.yourdomain.com. Certbot will then modify your Nginx configuration to include the SSL certificate paths and set up HTTPS.

Securing the Uptime Kuma Instance

Beyond the reverse proxy, consider these security measures for Uptime Kuma itself:

  • Strong Admin Password:
    • Use a long, complex, and unique password for your Uptime Kuma admin account. Use a password manager.
  • 2FA (Two-Factor Authentication):
    • Uptime Kuma supports 2FA (using TOTP authenticator apps like Authy, Google Authenticator, etc.). Enable this!
    • Go to Settings > Security > Enable Two-Factor Authentication and follow the on-screen instructions (scan QR code with your app, enter code, save recovery codes).
    • This adds a significant layer of security; even if your password is compromised, attackers can't log in without the second factor.
  • Restricting Access (IP Whitelisting):
    • If Uptime Kuma should only be accessible from specific IP addresses or ranges (e.g., your office network), you can configure this at the reverse proxy level or with firewall rules (e.g., ufw on Linux).
    • Example Nginx IP restriction within a location block:
      location / {
          allow 192.168.1.0/24; # Allow your local network
          allow your_office_static_ip;
          deny all; # Deny all other IPs
      
          # ... your proxy_pass and other directives ...
      }
      
  • Keeping Uptime Kuma Updated:
    • The Uptime Kuma developers regularly release updates that include new features, bug fixes, and potentially security patches.
    • If using Docker: Updating is usually as simple as:
      sudo docker pull louislam/uptime-kuma:1  # Pull the latest image
      sudo docker stop uptime-kuma             # Stop the current container
      sudo docker rm uptime-kuma               # Remove the old container
      # Re-run your 'docker run' command with the same volume and port mappings
      sudo docker run -d --restart=always -p 3001:3001 -v uptime_kuma_data:/app/data --name uptime-kuma louislam/uptime-kuma:1
      
      (Tools like Watchtower can automate Docker container updates, but use with caution for critical services.)
    • If using Node.js/npm:
      cd /path/to/uptime-kuma
      git fetch --all
      git checkout 1.x.x # Replace with the latest tag/version, or 'master' if you follow it
      git pull
      npm install --production
      # Restart with pm2 if you use it
      pm2 restart uptime-kuma
      
  • Regular Backups: (Covered in detail in the next section) Critical for recovering from any security incident or data loss.
  • Secure the Host System:
    • Ensure the underlying server OS is hardened: regular updates, strong passwords, firewall configured, unnecessary services disabled.

By implementing a reverse proxy and these additional security measures, you can significantly improve the security posture of your Uptime Kuma installation.

Workshop Securing Uptime Kuma with Nginx and Let's Encrypt

Objective: Place an existing Uptime Kuma Docker installation behind an Nginx reverse proxy, enable HTTPS using a free Let's Encrypt certificate obtained via Certbot, and access Uptime Kuma via a custom domain/subdomain.

Prerequisites:

  • Uptime Kuma installed via Docker and running (e.g., on http://127.0.0.1:3001 or http://localhost:3001).
  • A Linux server where Nginx will be installed (this can be the same server running Docker and Uptime Kuma).
  • A registered domain name (e.g., yourdomain.com) or a subdomain (e.g., kuma.yourdomain.com) that you control.
  • DNS A or CNAME record for your chosen domain/subdomain pointing to the public IP address of your Nginx server. This must be set up before running Certbot. Allow time for DNS propagation.
  • sudo or root privileges on the Nginx server.
  • Ports 80 and 443 open on your server's firewall to allow Nginx to receive HTTP and HTTPS traffic from the internet.
    • Example using ufw on Ubuntu/Debian:
      sudo ufw allow 'Nginx Full' # Allows both HTTP (80) and HTTPS (443)
      sudo ufw reload
      # Or more specifically:
      # sudo ufw allow 80/tcp
      # sudo ufw allow 443/tcp
      

Steps:

  1. Ensure Uptime Kuma is Running:
    Verify that your Uptime Kuma Docker container is running and reachable locally on its port. From the server, curl http://localhost:3001 should return Uptime Kuma's HTML; alternatively, browse to http://<server_ip>:3001 if that port is not firewalled off externally. This workshop assumes Uptime Kuma is listening on 127.0.0.1:3001. Publishing the port on all interfaces (e.g., -p 3001:3001) is also fine, because the service remains reachable from Nginx at 127.0.0.1:3001.

  2. Install Nginx and Certbot:
    On your Linux server (where your domain points):

    # For Debian/Ubuntu
    sudo apt update
    sudo apt install nginx certbot python3-certbot-nginx -y
    
    # For CentOS/RHEL (may need EPEL repository)
    # sudo yum install epel-release -y
    # sudo yum install nginx certbot python3-certbot-nginx -y
    # sudo systemctl enable --now nginx
    
    Verify Nginx is running: sudo systemctl status nginx (should be active/running).

  3. Initial Nginx Configuration for Uptime Kuma (HTTP Only First):
    Create an Nginx server block configuration file for your Uptime Kuma domain. Replace kuma.yourdomain.com with your actual domain/subdomain.

    sudo nano /etc/nginx/sites-available/kuma.yourdomain.com
    
    Paste the following configuration, replacing kuma.yourdomain.com with your domain and ensuring proxy_pass points to where Uptime Kuma is listening (likely http://127.0.0.1:3001 if Nginx and Docker are on the same machine):

    server {
        listen 80;
        listen [::]:80; # For IPv6
        server_name kuma.yourdomain.com; # YOUR DOMAIN HERE
    
        # This location block is temporary for Certbot ACME challenge
        location ~ /.well-known/acme-challenge/ {
            allow all;
            root /var/www/html; # Default Nginx webroot, ensure it exists and Nginx can read
        }
    
        # We will add the main proxy location block after Certbot runs
        # or you can add it now commented out if you prefer
    }
    
    • Explanation:
      • listen 80;: Nginx listens on port 80 for HTTP requests for this domain.
      • server_name kuma.yourdomain.com;: Tells Nginx this block is for requests to this specific domain.
      • location ~ /.well-known/acme-challenge/: This is crucial. Let's Encrypt's Certbot uses this path to place a temporary file to verify you control the domain. root /var/www/html; specifies where Nginx should look for these files. Ensure /var/www/html exists (sudo mkdir -p /var/www/html).

    Enable this Nginx site configuration by creating a symbolic link:

    sudo ln -s /etc/nginx/sites-available/kuma.yourdomain.com /etc/nginx/sites-enabled/
    
    Test Nginx configuration for syntax errors:
    sudo nginx -t
    
    If it says "syntax is ok" and "test is successful," reload Nginx to apply changes:
    sudo systemctl reload nginx
    

  4. Obtain SSL Certificate with Certbot:
    Now, run Certbot to get an SSL certificate for your domain. Certbot will automatically detect your Nginx configuration for kuma.yourdomain.com (because of the server_name directive) and offer to configure HTTPS for it.

    sudo certbot --nginx -d kuma.yourdomain.com
    

    • Follow the prompts:
      • Enter your email address (for renewal notices and urgent security updates).
      • Agree to the Terms of Service.
      • Choose whether to share your email with EFF (optional).
    • Certbot will attempt to verify your domain ownership using the ACME challenge (placing a file in /var/www/html/.well-known/acme-challenge/).
    • If successful, Certbot may ask whether you want to redirect HTTP traffic to HTTPS; choose option 2 (Redirect), which is recommended for security. (Newer Certbot releases configure the redirect by default without prompting.)
    • Certbot will automatically update your Nginx configuration file (/etc/nginx/sites-available/kuma.yourdomain.com) with the SSL certificate paths and the redirect. It will also reload Nginx.
  5. Update Nginx Configuration to Proxy Uptime Kuma and Add WebSocket Support:
    Now that Certbot has set up the SSL parts, edit your Nginx configuration file again:

    sudo nano /etc/nginx/sites-available/kuma.yourdomain.com
    
    Your file will now look something like this (Certbot adds a lot):
    server {
        server_name kuma.yourdomain.com; # YOUR DOMAIN HERE
    
        # THIS IS WHERE YOU ADD THE PROXY DIRECTIVES
        location / {
            proxy_pass http://127.0.0.1:3001; # Point to Uptime Kuma
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
    
            # WebSocket support - CRUCIAL
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    
        # SSL configuration managed by Certbot
        listen [::]:443 ssl ipv6only=on; # managed by Certbot
        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/kuma.yourdomain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/kuma.yourdomain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }
    
    # This server block is for HTTP to HTTPS redirect, also managed by Certbot
    server {
        if ($host = kuma.yourdomain.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot
    
        listen 80;
        listen [::]:80; # For IPv6
        server_name kuma.yourdomain.com;
        return 404; # managed by Certbot
    }
    

    • Key Change: Inside the server { ... } block that has listen 443 ssl;, add the location / { ... } block with the proxy_pass and WebSocket proxy_set_header directives as shown above. Make sure proxy_pass points to your Uptime Kuma instance (e.g., http://127.0.0.1:3001).

    Test Nginx configuration again:

    sudo nginx -t
    
    Reload Nginx if successful:
    sudo systemctl reload nginx
    

  6. Access Uptime Kuma via Your Secure Domain:
    Open your web browser and navigate to https://kuma.yourdomain.com (using your actual domain).

    • You should see the Uptime Kuma login page, now served over HTTPS.
    • Check the browser's address bar for the padlock icon, indicating a secure connection.
    • Log in. The dashboard should load, and importantly, the live updates (like monitor status changes, graphs updating without a page refresh) should work. If they don't, the WebSocket proxy configuration is likely incorrect or missing.
  7. Enable 2FA in Uptime Kuma (Highly Recommended):
    Now that your Uptime Kuma is accessible via a public domain:

    • In Uptime Kuma, go to Settings > Security.
    • Find the "Two-Factor Authentication (2FA)" section and click "Setup".
    • Follow the on-screen instructions:
      1. Scan the QR code with your preferred authenticator app (e.g., Google Authenticator, Authy, Microsoft Authenticator).
      2. Enter the 6-digit code generated by your app into Uptime Kuma.
      3. Crucially, save the provided recovery codes in a very safe place. If you lose access to your authenticator app, these codes are the only way to regain access.
    • Once enabled, future logins will require your password and a code from your authenticator app.

Congratulations! Your Uptime Kuma instance is now significantly more secure, accessible via a custom domain over HTTPS, and has 2FA enabled. The reverse proxy handles SSL termination and ensures WebSocket traffic, essential for Uptime Kuma's real-time features, is correctly proxied. Remember that Certbot will automatically renew your SSL certificates. You can test renewal with sudo certbot renew --dry-run.

7. Backup and Recovery

Even with the best monitoring and security, data loss can occur due to hardware failure, software bugs, accidental deletion, or security breaches. Implementing a robust backup and recovery strategy for your Uptime Kuma instance is crucial to protect your configuration, historical monitoring data, and status page setups.

Importance of Backups

  • Data Protection: Safeguards your valuable monitoring configurations (all your monitors, notification setups, status pages) and historical uptime/downtime data. Re-creating this from scratch can be time-consuming and means loss of historical trends.
  • Disaster Recovery: Enables you to restore your Uptime Kuma service quickly after a critical failure of the server or storage.
  • Migration: Backups can facilitate migrating Uptime Kuma to a new server or a different environment.
  • Testing Upgrades: Before a major Uptime Kuma upgrade, taking a backup allows you to roll back if the upgrade causes issues.
  • Peace of Mind: Knowing your data is backed up reduces stress and allows for more confident system administration.

What to Backup

The primary target for Uptime Kuma backups is its data directory.

  • Uptime Kuma Data Directory:
    • If using Docker with a named volume (recommended): Uptime Kuma stores all its data (SQLite database, uploaded files like logos, etc.) inside the Docker volume you mapped to /app/data in the container. The path to this volume on the host system needs to be backed up.
      • You can find the host path of a Docker named volume using:
        sudo docker volume inspect <your_volume_name>
        # Or print only the path:
        # sudo docker volume inspect --format '{{ .Mountpoint }}' <your_volume_name>
        
        (e.g., sudo docker volume inspect uptime_kuma_data). Look for the Mountpoint value.
    • If using a host bind mount with Docker (e.g., -v /path/on/host/kuma_data:/app/data): You need to back up the /path/on/host/kuma_data directory.
    • If using a native Node.js installation: Uptime Kuma typically stores its data in a data subdirectory within its installation folder (e.g., /path/to/uptime-kuma/data). This data directory is what you need to back up.
  • Configuration Files (External to Uptime Kuma):
    • Reverse Proxy Configuration: If you're using Nginx, Apache, Caddy, etc., as a reverse proxy, back up their configuration files (e.g., /etc/nginx/sites-available/, /etc/caddy/Caddyfile).
    • Docker Compose File: If you deployed Uptime Kuma using docker-compose.yml, back up this file as it defines your service configuration.
    • Custom Scripts: Any custom scripts related to Uptime Kuma (e.g., backup scripts themselves, scripts that interact with its API).

Backup Methods

Choose a backup method that suits your environment and technical comfort level. Automating backups is highly recommended.

  1. Manual Backup (Copying the Data Directory):

    • The simplest method, but prone to being forgotten.
    • Steps:
      1. (Optional but recommended) Stop the Uptime Kuma container/service to ensure data consistency:
        sudo docker stop uptime-kuma # If using Docker
        # or pm2 stop uptime-kuma # If using PM2
        
      2. Archive and compress the data directory:
        # Example for Docker volume mountpoint found via 'docker volume inspect'
        # sudo tar -czvf /path/to/your/backups/uptime-kuma-backup-YYYYMMDD.tar.gz -C /var/lib/docker/volumes/uptime_kuma_data/ _data
        # Example for a native install's data directory
        # sudo tar -czvf /path/to/your/backups/uptime-kuma-backup-YYYYMMDD.tar.gz -C /path/to/uptime-kuma/ data
        
        • Replace /path/to/your/backups/ with your backup storage location.
        • Replace YYYYMMDD with the current date.
        • The -C /path/to/parent_dir_of_data/ data_dir_name part is important. It changes to the parent directory before archiving data_dir_name, so the archive contains data_dir_name at its root, making restoration easier. For Docker volumes where the mountpoint is like /var/lib/docker/volumes/my_volume/_data, you'd use -C /var/lib/docker/volumes/my_volume/ _data.
      3. Restart Uptime Kuma:
        sudo docker start uptime-kuma
        # or pm2 start uptime-kuma
        
    • Pros: Simple to understand.
    • Cons: Manual, easy to forget, potential for data inconsistency if Uptime Kuma is not stopped (though SQLite is often resilient, stopping is safer).
  2. Automated Backup Scripts (e.g., using cron and tar or rsync):

    • This is the recommended approach for regular, unattended backups.
    • Create a shell script that performs the backup steps (similar to manual, but can include date stamping, rotation of old backups, and logging).
    • Schedule this script to run automatically using cron.
    • Example Script components:
      • Define source data directory and backup destination.
      • Stop Uptime Kuma (optional but safer).
      • Create a timestamped tar.gz archive.
      • Start Uptime Kuma.
      • Prune old backups to save disk space (e.g., keep the last 7 daily backups).
      • Log output to a file.
    • rsync: Can also be used for backups, especially for incremental backups to a remote location. rsync -a --delete /path/to/kuma_data/ /path/to/backup_location/ would mirror the data.
  3. Docker Volume Backup Strategies:

    • If using Docker named volumes, you can use Docker-centric backup methods:
      • Backup the volume's host path: As described above, find the Mountpoint and back up that directory. This is the most common way.
      • Using a temporary container: Run a temporary container that mounts the Uptime Kuma data volume and another volume for backups, then uses tar inside this container to create the archive.
        # sudo docker run --rm -v uptime_kuma_data:/data -v /path/to/your/backups:/backup alpine \
        #   tar -czvf /backup/uptime-kuma-backup-$(date +%Y%m%d%H%M%S).tar.gz -C /data .
        
        This command creates a temporary Alpine Linux container, mounts the uptime_kuma_data volume to /data inside it, mounts your host's backup directory to /backup, and then runs tar to archive the contents of /data (which is your Uptime Kuma data) into the /backup directory. The container is removed automatically (--rm). This can be done while Uptime Kuma is running, though stopping it first is still the safest for data consistency.
  4. Uptime Kuma's Built-in Export/Import (Limited Use for Full Backup):

    • Uptime Kuma's settings interface (Settings > Backup) usually has options to "Export" and "Import" settings.
    • What it does: This typically exports your monitor configurations, notifications, and status pages into a JSON file.
    • Limitations for Full Backup: This method usually does not include historical data (the uptime/downtime event history, ping times, etc.). It's more for migrating configurations or as a lightweight settings backup.
    • Recommendation: Use this as a supplementary, easy way to back up just your settings, but rely on file-system level backups (of the data directory/volume) for complete data protection including history.
  5. Off-site Backups:

    • For critical data, always store a copy of your backups off-site (e.g., a different physical location, cloud storage like AWS S3, Backblaze B2, Google Cloud Storage).
    • Tools like rclone are excellent for syncing local backup files to various cloud storage providers.
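The retention logic mentioned in the automated-script approach above can be sketched with throwaway files before wiring it into a real backup script. The snippet below assumes GNU touch and find (standard on most Linux distributions); all paths are temporary stand-ins, not real backups:

```shell
#!/bin/sh
# Sketch of age-based backup pruning: delete archives older than N days.
# Uses a temp directory with a backdated file instead of real backups.
set -eu
DAYS_TO_KEEP=7
dest=$(mktemp -d)
touch "$dest/uptime-kuma-backup-new.tar.gz"
touch "$dest/uptime-kuma-backup-old.tar.gz"
# Backdate one archive by 10 days so it falls outside the retention window
touch -d '10 days ago' "$dest/uptime-kuma-backup-old.tar.gz"
# -mtime +7 matches files modified more than 7 days ago; -delete removes them
find "$dest" -maxdepth 1 -name 'uptime-kuma-backup-*.tar.gz' -type f \
    -mtime +"$DAYS_TO_KEEP" -print -delete
```

Running this leaves only the fresh archive in place; the workshop script later in this section uses the same find invocation with logging added.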

Restoring from Backup

The restoration process depends on how the backup was made. The general steps involve:

  1. Prepare the Environment:
    • Ensure Uptime Kuma (and its container/service) is stopped.
    • If restoring to a new server, ensure Docker/Node.js and any other dependencies are installed.
  2. Locate the Backup: Identify the backup archive (.tar.gz) or directory you want to restore.
  3. Remove or Rename Existing (Corrupted/Old) Data:
    • If there's an existing Uptime Kuma data directory or volume that's problematic, either delete it or rename it (e.g., mv /var/lib/docker/volumes/uptime_kuma_data /var/lib/docker/volumes/uptime_kuma_data_old).
  4. Extract the Backup:
    • Restore the backed-up data directory/volume to its original location.
      # Example: Restoring a tar.gz archive to a Docker volume's parent directory
      # Ensure the target directory for extraction results in the correct final path,
      # e.g., if archive contains '_data/', extract into '/var/lib/docker/volumes/uptime_kuma_data/'
      # If archive contains 'data/', extract into '/path/to/uptime-kuma/'
      # sudo tar -xzvf /path/to/your/backups/uptime-kuma-backup-YYYYMMDD.tar.gz -C /destination/path/
      
      • The -C /destination/path/ is crucial. For a Docker volume named uptime_kuma_data whose host mountpoint is /var/lib/docker/volumes/uptime_kuma_data/_data, and if your backup tar.gz file contains the _data directory at its root, you would extract it to /var/lib/docker/volumes/uptime_kuma_data/.
      • Example: sudo tar -xzvf backup.tar.gz -C /var/lib/docker/volumes/uptime_kuma_data/ (if backup.tar.gz contains _data/...)
  5. Check Permissions and Ownership:
    • Ensure the restored files and directories have ownership and permissions that allow Uptime Kuma to read and write them (the process often runs as the node user inside the Docker container, or as your own user in a native install). For Docker named volumes, Docker usually handles this; for host bind mounts or native installs, you may need chown or chmod.
  6. Restart Uptime Kuma:
    • Start the Uptime Kuma container or service:
      sudo docker start uptime-kuma
      # or pm2 start uptime-kuma
      
  7. Verify:
    • Access Uptime Kuma via your browser. Check if all monitors, settings, and historical data are restored correctly.
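The extraction step above hinges on how the archive was created with -C. A self-contained round trip (throwaway paths only, standing in for the real volume mountpoint) demonstrates the layout:

```shell
#!/bin/sh
# Round trip demonstrating the -C semantics used for backup and restore.
# All paths are throwaway stand-ins for the real Docker volume mountpoint.
set -eu
work=$(mktemp -d)
mkdir -p "$work/volume/_data"
printf 'sqlite-placeholder' > "$work/volume/_data/kuma.db"

# Backup: -C changes into the parent, so '_data' sits at the archive root
tar -czf "$work/backup.tar.gz" -C "$work/volume" _data

# Restore: extracting into a fresh directory recreates '_data' directly under it
mkdir "$work/restore"
tar -xzf "$work/backup.tar.gz" -C "$work/restore"
cat "$work/restore/_data/kuma.db"   # prints: sqlite-placeholder
```

Because _data is at the archive root, extracting into /var/lib/docker/volumes/uptime_kuma_data/ lands the files exactly where Docker expects them.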

Testing the Restoration Process

Crucially, periodically test your backup and restoration process.
A backup is useless if it can't be restored successfully.

  • Set up a test environment (e.g., a separate VM or Docker instance).
  • Try restoring your latest backup to this test environment to ensure it works as expected.
  • This practice helps identify any issues with your backup strategy before a real disaster occurs.
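Before attempting a restore in your test environment, it is also worth confirming the archive is readable and contains the expected files. A minimal check looks like this (a synthetic archive stands in here, since your real backup path will differ):

```shell
#!/bin/sh
# List an archive's contents without extracting, and confirm the DB is inside.
# A synthetic archive stands in for a real backup file.
set -eu
work=$(mktemp -d)
mkdir -p "$work/src"
printf 'placeholder' > "$work/src/kuma.db"
tar -czf "$work/uptime-kuma-backup-test.tar.gz" -C "$work/src" .

# 'tar -tzf' lists entries; grep -q succeeds only if kuma.db is present
if tar -tzf "$work/uptime-kuma-backup-test.tar.gz" | grep -q 'kuma.db'; then
    echo "archive contains kuma.db"
fi
```

A corrupted or truncated archive will make tar -tzf fail, so this catches the most common backup problems cheaply.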

Workshop Implementing Automated Backups

Objective: Create a shell script to automatically back up the Uptime Kuma Docker data volume and schedule this script to run daily using cron. The script will also prune old backups.

Prerequisites:

  • Uptime Kuma installed using Docker with a named volume (e.g., uptime_kuma_data as used in previous workshops).
  • cron daemon installed and running on the host server (standard on most Linux distributions).
  • sudo privileges for managing Docker and creating cron jobs.
  • A directory to store backups (e.g., /opt/uptime-kuma-backups).

Scenario: Uptime Kuma is running in Docker, using the named volume uptime_kuma_data. We will create a script to back up the contents of this volume.

Steps:

  1. Identify the Uptime Kuma Data Volume's Host Mountpoint: First, confirm the name of your Uptime Kuma data volume. If you followed earlier workshops, it's likely uptime_kuma_data. Then, find its actual location on the host filesystem:

    sudo docker volume inspect uptime_kuma_data
    
    Look for the "Mountpoint" line in the JSON output. It will be something like /var/lib/docker/volumes/uptime_kuma_data/_data. Note this full path accurately. This is the directory whose contents we need to back up.

  2. Create a Backup Directory on the Host: Choose a location on your host system to store the backup archives.

    sudo mkdir -p /opt/uptime-kuma-backups
    sudo chown $USER:$USER /opt/uptime-kuma-backups # Or a dedicated backup user
    
    (Adjust ownership to match the user that will run the cron job. If you schedule the script via sudo crontab -e, it runs as root, so root needs write access to this directory; alternatively, the script can invoke sudo internally when writing.)

  3. Create the Backup Script: Create a new shell script file. We'll place it in /usr/local/bin to make it system-wide accessible.

    sudo nano /usr/local/bin/backup_uptime_kuma.sh
    
    Paste the following script content. Carefully replace YOUR_VOLUME_MOUNTPOINT_HERE with the actual Mountpoint path you found in Step 1.

    #!/bin/bash
    
    # === Configuration ===
    # Source directory: The actual directory on the host where Docker stores the volume data.
    # IMPORTANT: This is the Mountpoint from 'docker volume inspect <volume_name>'
    # For a volume named 'uptime_kuma_data', this is often '/var/lib/docker/volumes/uptime_kuma_data/_data'
    # The script will back up the *contents* of this directory.
    UPTIME_KUMA_DATA_DIR="YOUR_VOLUME_MOUNTPOINT_HERE" # e.g., "/var/lib/docker/volumes/uptime_kuma_data/_data"
    
    # Backup storage directory
    BACKUP_DEST_DIR="/opt/uptime-kuma-backups"
    
    # Uptime Kuma Docker container name
    DOCKER_CONTAINER_NAME="uptime-kuma"
    
    # Delete backup archives older than this many days
    DAYS_TO_KEEP=7
    
    # Timestamp for backup filename
    TIMESTAMP=$(date +"%Y%m%d-%H%M%S")
    BACKUP_FILENAME="uptime-kuma-backup-${TIMESTAMP}.tar.gz"
    FULL_BACKUP_PATH="${BACKUP_DEST_DIR}/${BACKUP_FILENAME}"
    
    # Log file for this script's output
    LOG_FILE="/var/log/uptime_kuma_backup.log"
    
    # === Functions ===
    log_message() {
        echo "$(date +"%Y-%m-%d %H:%M:%S") - $1" | sudo tee -a "${LOG_FILE}"
    }
    
    # === Main Script ===
    log_message "Starting Uptime Kuma backup process..."
    
    # Ensure backup destination directory exists
    sudo mkdir -p "${BACKUP_DEST_DIR}"
    if [ ! -d "${BACKUP_DEST_DIR}" ]; then
        log_message "ERROR: Backup destination directory ${BACKUP_DEST_DIR} could not be created."
        exit 1
    fi
    
    # Check if source data directory exists
    if [ ! -d "${UPTIME_KUMA_DATA_DIR}" ]; then
        log_message "ERROR: Uptime Kuma data directory ${UPTIME_KUMA_DATA_DIR} not found. Check UPTIME_KUMA_DATA_DIR variable."
        exit 1
    fi
    
    # Optional: Stop Uptime Kuma container for data consistency (safer)
    # log_message "Stopping Uptime Kuma container: ${DOCKER_CONTAINER_NAME}..."
    # sudo docker stop "${DOCKER_CONTAINER_NAME}"
    # if [ $? -ne 0 ]; then
    #     log_message "WARNING: Failed to stop Uptime Kuma container. Proceeding with backup, but data might be inconsistent."
    # else
    #     sleep 5 # Give it a few seconds to shut down
    # fi
    
    log_message "Creating backup archive: ${FULL_BACKUP_PATH}"
    # We use -C to change directory so the archive paths are relative to UPTIME_KUMA_DATA_DIR
    # For example, if UPTIME_KUMA_DATA_DIR is /var/lib/docker/volumes/uptime_kuma_data/_data,
    # this will archive the contents of _data (like kuma.db) directly into the tarball.
    # The `.` at the end means "archive everything in the current directory (which -C changed to)".
    sudo tar -czvf "${FULL_BACKUP_PATH}" -C "${UPTIME_KUMA_DATA_DIR}" .
    
    if [ $? -eq 0 ]; then
        log_message "Backup created successfully: ${FULL_BACKUP_PATH}"
    else
        log_message "ERROR: Backup creation failed!"
        # Optional: Restart Uptime Kuma container even if backup failed (if stopped)
        # log_message "Restarting Uptime Kuma container: ${DOCKER_CONTAINER_NAME} (after failed backup)..."
        # sudo docker start "${DOCKER_CONTAINER_NAME}"
        exit 1
    fi
    
    # Optional: Restart Uptime Kuma container (if it was stopped)
    # log_message "Starting Uptime Kuma container: ${DOCKER_CONTAINER_NAME}..."
    # sudo docker start "${DOCKER_CONTAINER_NAME}"
    # if [ $? -ne 0 ]; then
    #     log_message "ERROR: Failed to start Uptime Kuma container."
    #     # Decide if this is a critical error for the script
    # fi
    
    log_message "Pruning backups older than ${DAYS_TO_KEEP} days from ${BACKUP_DEST_DIR}..."
    # -mtime +N matches files last modified more than N whole days ago.
    # Using find for safer, age-based deletion (no fragile parsing of ls output):
    sudo find "${BACKUP_DEST_DIR}" -maxdepth 1 -name "uptime-kuma-backup-*.tar.gz" -type f -mtime +"${DAYS_TO_KEEP}" -print -delete | while read -r file; do log_message "Deleted old backup: $file"; done
    # An alternative simpler ls/tail/xargs method (can be risky with spaces in filenames, though our timestamped names are safe):
    # ls -1dt "${BACKUP_DEST_DIR}"/uptime-kuma-backup-*.tar.gz | tail -n +$((${DAYS_TO_KEEP} + 1)) | sudo xargs -r rm -f
    
    log_message "Backup process finished."
    echo "----------------------------------------" | sudo tee -a "${LOG_FILE}"
    
    exit 0
    
    • Key parts of the script:
      • UPTIME_KUMA_DATA_DIR: Crucial! Set this correctly. This is the directory whose contents will be backed up.
      • BACKUP_DEST_DIR: Where .tar.gz files will be stored.
      • DOCKER_CONTAINER_NAME: Name of your Uptime Kuma Docker container.
      • DAYS_TO_KEEP: How many days' worth of backups to retain; older archives are deleted.
      • LOG_FILE: Where the script logs its actions.
      • sudo tar -czvf "${FULL_BACKUP_PATH}" -C "${UPTIME_KUMA_DATA_DIR}" .: This is the core backup command.
        • -C "${UPTIME_KUMA_DATA_DIR}": Changes directory to UPTIME_KUMA_DATA_DIR before archiving.
        • .: Archives all files and subdirectories within that UPTIME_KUMA_DATA_DIR. This ensures paths inside the archive are relative to UPTIME_KUMA_DATA_DIR (e.g., kuma.db will be at the root of the archive).
      • Stopping/Starting container: The lines for stopping and starting the Docker container are commented out. Copying the SQLite database while Uptime Kuma is actively writing to it can produce an inconsistent archive, so stopping the container first is the safest approach; the trade-off is a brief gap in monitoring while the backup runs. Uncomment those lines if you prefer safety over continuity.
      • Pruning old backups: The find ... -mtime +"${DAYS_TO_KEEP}" -delete command safely removes backups older than DAYS_TO_KEEP days.
      • sudo tee -a ${LOG_FILE}: Used to write log messages to both console (if run manually) and the log file. Requires sudo because /var/log is usually root-owned.
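You can rehearse the age-based pruning safely in a throwaway directory before trusting it with real backups. This sketch assumes GNU coreutils/findutils (for `touch -d` and `find -delete`); the filenames are hypothetical:

```shell
# Rehearse the retention policy in a scratch directory (GNU coreutils assumed).
demo_dir=$(mktemp -d)
# One archive backdated past the 7-day window, one fresh archive.
touch -d "10 days ago" "${demo_dir}/uptime-kuma-backup-20000101-000000.tar.gz"
touch "${demo_dir}/uptime-kuma-backup-fresh.tar.gz"
# Same find expression as the backup script: delete archives older than 7 whole days.
find "${demo_dir}" -maxdepth 1 -name "uptime-kuma-backup-*.tar.gz" -type f -mtime +7 -print -delete
ls "${demo_dir}"   # only the fresh archive should remain
```

Because `-mtime +7` only matches files older than seven full days, the freshly touched archive survives while the backdated one is removed.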

    Make the script executable:

    sudo chmod +x /usr/local/bin/backup_uptime_kuma.sh
    

  4. Test the Backup Script Manually: Run the script once manually to ensure it works:

    sudo /usr/local/bin/backup_uptime_kuma.sh
    

    • Check the output.
    • Verify that an uptime-kuma-backup-YYYYMMDD-HHMMSS.tar.gz file is created in /opt/uptime-kuma-backups/.
    • Inspect the contents of the archive (optional but good):
      tar -ztvf /opt/uptime-kuma-backups/uptime-kuma-backup-*.tar.gz
      
      You should see files like kuma.db inside.
    • Check the log file: sudo cat /var/log/uptime_kuma_backup.log.
  5. Schedule the Backup Script with Cron: We'll schedule this script to run daily, for example, at 2:00 AM. Open the root user's crontab for editing:

    sudo crontab -e
    
    If prompted, choose an editor (like nano). Add the following line at the end of the file:

    0 2 * * * /usr/local/bin/backup_uptime_kuma.sh
    
    • Cron syntax explanation:
      • 0: Minute (0-59) -> 0th minute
      • 2: Hour (0-23) -> 2 AM
      • *: Day of month (1-31) -> every day
      • *: Month (1-12) -> every month
      • *: Day of week (0-7, Sunday is 0 or 7) -> every day of the week
      • /usr/local/bin/backup_uptime_kuma.sh: The command to run.
    • The script already handles its own logging to /var/log/uptime_kuma_backup.log.

    Save the crontab file and exit the editor. Cron will now automatically execute your backup script at 2:00 AM every day.
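One loose end: the script's log file at /var/log/uptime_kuma_backup.log grows without bound. A small logrotate drop-in keeps it in check; the path and retention values below are illustrative, adjust to taste:

```
# /etc/logrotate.d/uptime-kuma-backup  (illustrative)
/var/log/uptime_kuma_backup.log {
    weekly
    rotate 8
    compress
    missingok
    notifempty
}
```

On most distributions, logrotate runs daily via cron or a systemd timer and will pick this file up automatically.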

  6. Discuss Restoration (Theory based on this backup script):
    To restore from a backup created by this script:

    1. Stop Uptime Kuma: sudo docker stop uptime-kuma.
    2. (Optional) If the current data volume uptime_kuma_data is corrupted, you might want to remove and recreate it, or just clear its contents:
      • To clear contents: sudo rm -rf /var/lib/docker/volumes/uptime_kuma_data/_data/* (Use with extreme caution!).
      • Or, more safely, rename the old data: sudo mv /var/lib/docker/volumes/uptime_kuma_data/_data /var/lib/docker/volumes/uptime_kuma_data/_data_broken_$(date +%F)
      • And recreate the mountpoint if needed: sudo mkdir -p /var/lib/docker/volumes/uptime_kuma_data/_data and ensure Docker re-owns it or permissions are correct for the container user.
    3. Identify the backup file you want to restore from /opt/uptime-kuma-backups/ (e.g., uptime-kuma-backup-LATEST.tar.gz).
    4. Extract the backup archive directly into the Docker volume's host mountpoint:
      # Ensure UPTIME_KUMA_DATA_DIR is the same as in the backup script (e.g., /var/lib/docker/volumes/uptime_kuma_data/_data)
      UPTIME_KUMA_DATA_DIR="/var/lib/docker/volumes/uptime_kuma_data/_data" # Or your actual path
      LATEST_BACKUP_FILE=$(ls -t /opt/uptime-kuma-backups/uptime-kuma-backup-*.tar.gz | head -1)
      
      sudo tar -xzvf "${LATEST_BACKUP_FILE}" -C "${UPTIME_KUMA_DATA_DIR}"
      
      • This command extracts the contents of the archive. Since the archive was created with paths relative to the data directory (using -C ... .), it will place files like kuma.db directly into UPTIME_KUMA_DATA_DIR.
    5. (If necessary) Ensure correct ownership/permissions for the Docker container to access the restored files. Usually, Docker manages volume permissions, but if UPTIME_KUMA_DATA_DIR was recreated, ensure the _data part is usable by the container. Often the container runs as user node (UID 1000). sudo chown -R 1000:1000 "${UPTIME_KUMA_DATA_DIR}" might be needed in some edge cases if permissions are wrong.
    6. Start Uptime Kuma: sudo docker start uptime-kuma.
    7. Verify that your monitors and history are restored.

    Important: Always test your restoration process in a non-production environment first!

You now have an automated daily backup system for your Uptime Kuma data, complete with automatic pruning of old archives. This significantly improves the resilience of your monitoring setup. Remember to also back up your reverse proxy configurations and the backup script itself. Consider sending backups off-site for ultimate protection.

8. Advanced Topics and Integrations

Beyond core monitoring, notifications, and status pages, Uptime Kuma offers capabilities for deeper integration and more specialized use cases. This section explores interacting with Uptime Kuma's API, exposing metrics for Prometheus, leveraging push monitors for passive checks, and conceptual scaling.

API Usage

Uptime Kuma is primarily controlled via its web interface. However, for automation, custom integrations, or programmatic management, Uptime Kuma exposes an API.

  • API Type: Socket.IO
    • Uptime Kuma's real-time communication, including its API for programmatic interaction, is built on Socket.IO. Socket.IO is a library that enables real-time, bidirectional, and event-based communication between web clients and servers. It typically starts with HTTP long-polling and may upgrade to WebSockets if available.
    • This means interacting with the API is not as simple as sending standard REST HTTP requests to predefined endpoints for all actions. You'll need a Socket.IO client library in your programming language of choice (e.g., Python, Node.js).
  • Authentication:
    • API interactions usually require authentication. When you log in to the Uptime Kuma UI, a session or token is established. Programmatic clients will need to replicate this login process to authenticate their Socket.IO connection or use API keys if Uptime Kuma implements them (check official documentation for the latest on API key support).
    • The login process typically involves emitting a login event with username and password and listening for a success/failure response.
  • Key API Capabilities (Conceptual - specific events/methods depend on Uptime Kuma version):
    • Listing Monitors: Get a list of all configured monitors and their current status.
    • Adding/Editing/Deleting Monitors: Programmatically create new monitors, modify existing ones, or remove them.
    • Pausing/Resuming Monitors: Temporarily disable or re-enable checks for specific monitors.
    • Getting Monitor History/Events: Retrieve historical uptime data or event logs for a monitor.
    • Managing Status Pages: Potentially create or update status pages (less common via API, usually UI-driven).
    • Managing Notifications: Add or modify notification providers.
    • Subscribing to Events: Listen for real-time events like monitorDown, monitorUp, newAvgPing, etc.
  • Use Cases for API Interaction:
    • Custom Dashboards: Build specialized dashboards using data fetched from Uptime Kuma's API, perhaps combining it with metrics from other systems.
    • Automation Scripts:
      • Automatically add new services to Uptime Kuma when they are deployed (e.g., from a CI/CD pipeline).
      • Automatically pause monitors for servers that are being taken down for scheduled maintenance by an orchestration tool.
    • Integration with Other Tools: Feed Uptime Kuma status data into other incident management or reporting systems that don't have direct Uptime Kuma integration.
    • Bulk Operations: Perform actions on many monitors at once that might be tedious through the UI.
  • Finding API Details:
    • The official Uptime Kuma GitHub repository and documentation are the primary sources for API details. Look for developer documentation, API specifications, or examples.
    • Browser Developer Tools: You can also observe the Socket.IO messages exchanged between your browser and the Uptime Kuma server when you use the web UI. This can give you insights into the events and data structures used, but relying on official documentation is always better.
  • Example (Conceptual Python with python-socketio):
    import socketio  # pip install "python-socketio[client]"

    sio = socketio.Client()

    def handle_login_response(response):
        # The response shape is version-dependent; recent versions return a dict
        # like {'ok': True, 'token': '...'} or {'ok': False, 'msg': '...'}.
        if isinstance(response, dict) and response.get('ok'):
            print('Login successful')
            # Now you can emit other commands, e.g., to list monitors:
            # sio.emit('getMonitorList', callback=handle_monitor_list)
        else:
            print('Login failed:', response)

    @sio.event
    def connect():
        print('Connected to Uptime Kuma')
        # Attempt to log in (event name and payload depend on your Uptime Kuma version)
        sio.emit('login',
                 {'username': 'your_admin_user', 'password': 'your_password'},
                 callback=handle_login_response)

    @sio.event
    def disconnect():
        print('Disconnected from Uptime Kuma')

    try:
        # Replace with your Uptime Kuma URL; ensure the path is correct if behind a reverse proxy
        sio.connect('http://localhost:3001', socketio_path='/socket.io')  # default path
        sio.wait()
    except Exception as e:
        print(f"Connection error: {e}")
    This is highly conceptual and would need to be adapted based on the actual API events and data structures of your Uptime Kuma version.

Prometheus Integration

Prometheus is a very popular open-source systems monitoring and alerting toolkit. Many applications expose their metrics in a format that Prometheus can "scrape" (collect). Uptime Kuma provides a Prometheus metrics endpoint.

  • Metrics Endpoint:
    • Uptime Kuma exposes metrics at the HTTP endpoint /metrics. So, if Uptime Kuma is at http://localhost:3001, the metrics are at http://localhost:3001/metrics.
    • Recent versions protect this endpoint with HTTP Basic authentication (your dashboard credentials, or an API key supplied as the password); check the documentation for your version.
    • The endpoint returns data in the Prometheus text-based exposition format.
  • Exposed Metrics:
    In recent versions, the metrics include:
    • monitor_status: A gauge indicating a monitor's state (1 = up, 0 = down, with additional values for pending/maintenance in newer versions). It carries labels such as monitor_name, monitor_type, and monitor_url.
    • monitor_response_time: A gauge showing the last recorded response time for a monitor, in milliseconds, with the same labels.
    • Certificate-related metrics such as monitor_cert_days_remaining and monitor_cert_is_valid for HTTPS monitors.
    Inspect the /metrics output of your own instance for the authoritative list of metric names and labels.
  • Scraping with Prometheus:
    1. Install Prometheus: If you don't have it, set up a Prometheus server.
    2. Configure Prometheus Target: In your prometheus.yml configuration file, add a scrape job for Uptime Kuma:
      scrape_configs:
        - job_name: 'uptime-kuma'
          static_configs:
            - targets: ['<uptime_kuma_host>:<port>'] # e.g., 'localhost:3001' or 'kuma.yourdomain.com' if behind proxy
          # Prometheus scrapes /metrics by default, which matches Uptime Kuma's endpoint,
          # so metrics_path normally needs no override.
          # If your version protects /metrics, add a basic_auth block here with your
          # dashboard credentials or an API key.
      
      If Uptime Kuma is behind a reverse proxy with HTTPS, set scheme: https in the scrape job, use kuma.yourdomain.com as the target, and ensure Prometheus can validate the SSL certificate.
    3. Prometheus will then periodically fetch data from Uptime Kuma's /metrics endpoint.
  • Visualizing in Grafana:
    • Once Prometheus is scraping Uptime Kuma metrics, you can connect Grafana (a popular visualization tool) to your Prometheus instance as a data source.
    • In Grafana, you can then build dashboards using PromQL (Prometheus Query Language) to visualize:
      • The status of your monitors (e.g., using a Stat panel or Singlestat).
      • Latency trends over time.
      • Alerts based on Prometheus rules (e.g., if monitor_status == 0 for a certain period).
    • There might be pre-built Grafana dashboards available for Uptime Kuma metrics on the Grafana Dashboards website.
  • Benefits:
    • Integrates Uptime Kuma's status into a broader monitoring ecosystem if you already use Prometheus.
    • Allows for more complex alerting rules and long-term metric storage/analysis via Prometheus and Grafana.
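As a sketch of the alerting angle, a Prometheus rule file could page when any monitor reports down for five minutes. The metric name monitor_status and the monitor_name label below are assumptions based on recent Uptime Kuma versions; verify them against your own /metrics output:

```yaml
# uptime-kuma-alerts.yml (illustrative; reference it from prometheus.yml via rule_files)
groups:
  - name: uptime-kuma
    rules:
      - alert: MonitorDown
        expr: monitor_status == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Monitor {{ $labels.monitor_name }} is down"
```

The `for: 5m` clause suppresses alerts for brief blips, analogous to Uptime Kuma's own retry logic.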

Push Monitors (Heartbeats)

Standard monitors in Uptime Kuma are "active" – Uptime Kuma polls the service. Push monitors are "passive" – the monitored service or job is responsible for telling Uptime Kuma it's alive by sending an HTTP GET request (a "heartbeat") to a unique URL.

  • Concept:
    1. You create a "Push" type monitor in Uptime Kuma.
    2. Uptime Kuma generates a unique Push URL for this monitor.
    3. You configure your service, cron job, script, or application to send an HTTP GET request to this Push URL periodically or upon successful completion of a task.
    4. You also configure an "Expected Heartbeat Interval" in Uptime Kuma (e.g., every 1 hour, every 24 hours).
    5. If Uptime Kuma does not receive a request (a push) at that URL within the expected interval (plus some grace period), it marks the monitor as "Down."
  • Push URL Parameters (Optional): When sending the heartbeat, you can often include query parameters to provide more information:
    • ?status=up (default if not specified): Explicitly tell Kuma the push indicates an "up" state.
    • ?status=down&msg=ErrorDescription: You can also actively push a "down" status with a message if your script detects a failure.
    • &ping=1200: Report a "ping" or processing time in milliseconds.
  • Setting up a Push Monitor:
    1. In Uptime Kuma, add a new monitor.
    2. Select type: "Push".
    3. Give it a Friendly Name.
    4. Set the "Expected Heartbeat Interval" (e.g., 1 hour, 5 minutes, 24 hours, depending on how often your job runs or should report).
    5. Save. Uptime Kuma will display the unique Push URL. Copy this URL.
  • Integrating with Cron Jobs or Applications:
    • Cron Job: Append && curl -fsS "YOUR_PUSH_URL" to your cron job command. The && ensures curl only runs if the preceding command was successful.
      0 3 * * * /path/to/your/script.sh && curl -fsS "https://kuma.example.com/api/push/xyz123abc?status=up&msg=OK" > /dev/null
      
      The curl flags mean: -f makes curl return a non-zero exit code on HTTP errors (instead of printing the error page), -s suppresses the progress meter, and -S still shows error messages even with -s. > /dev/null discards curl's normal output.
    • Application Code: In your application (Python, Node.js, Java, etc.), after a critical task completes successfully, use an HTTP client library to make a GET request to the Push URL.
  • Use Cases:
    • Monitoring Cron Jobs: Ensure nightly backups, data processing scripts, or other scheduled tasks are actually running and completing successfully.
    • Services Behind Firewalls: Monitor services that Uptime Kuma cannot reach directly if those services can make outbound HTTP requests.
    • Application Health: An application can push a heartbeat after successfully completing a set of internal health checks.
    • IoT Devices: Low-power devices can send a heartbeat when they wake up and connect to the network.
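The "Application Code" option above boils down to a GET request with URL-encoded query parameters. A minimal sketch using only the Python standard library (the push URL shown is a placeholder, not a real endpoint):

```python
import urllib.parse
import urllib.request

def build_heartbeat_url(push_url, status="up", msg="OK", ping_ms=None):
    """Build an Uptime Kuma push URL with properly encoded query parameters."""
    params = {"status": status, "msg": msg}
    if ping_ms is not None:
        params["ping"] = str(ping_ms)
    return f"{push_url}?{urllib.parse.urlencode(params)}"

def send_heartbeat(push_url, **kwargs):
    """Fire the heartbeat; returns True on HTTP 200, False on any network error."""
    try:
        with urllib.request.urlopen(build_heartbeat_url(push_url, **kwargs), timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

# Example with a placeholder URL:
# send_heartbeat("https://kuma.example.com/api/push/xyz123abc", msg="Job OK", ping_ms=1200)
```

Calling send_heartbeat at the end of a task (and only on success) mirrors the `&& curl` pattern used in the cron example.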

Scaling and High Availability (Conceptual)

Uptime Kuma is designed for simplicity and ease of use, and it's generally very stable for monitoring hundreds or even a few thousand services on adequate hardware. However, it's not inherently designed as a distributed, highly available system out-of-the-box like some enterprise monitoring solutions.

  • Scaling Considerations:
    • Vertical Scaling: Running Uptime Kuma on a server with more CPU, RAM, and faster I/O will allow it to handle more monitors and a higher frequency of checks.
    • Resource Usage: The primary resource consumed is CPU for performing checks (especially HTTP/S checks with SSL) and network bandwidth. The internal SQLite database is usually not a bottleneck for typical Uptime Kuma loads.
    • Number of Monitors vs. Interval: Many monitors with very short heartbeat intervals will significantly increase load.
  • High Availability (HA):

    • Single Point of Failure: A single Uptime Kuma instance is a single point of failure for your monitoring. If the Uptime Kuma server goes down, your monitoring stops.
    • Achieving HA (Advanced/DIY):
      • Redundant Instances (Not Natively Supported for Shared State): You could run two independent Uptime Kuma instances monitoring the same set of services. This gives you redundant alerting but not a unified dashboard or history unless you build custom synchronization.
      • Database Replication (Complex for SQLite): SQLite is a file-based database, making active-active replication complex. Tools like Litestream or rqlite could theoretically be used for SQLite replication, but integrating them seamlessly with Uptime Kuma would be a significant custom effort.
      • Failover with a Load Balancer/Floating IP: You might set up a primary Uptime Kuma instance and a hot standby. If the primary fails (detected by another system), traffic could be redirected to the standby. Data synchronization between primary and standby would still be a challenge.
      • Docker Orchestration (Kubernetes, Swarm): Running Uptime Kuma in a Docker orchestration environment can provide process-level HA (if the container dies, it gets restarted). For data persistence, you'd use persistent volumes. True HA of the application state itself is still limited by Uptime Kuma's single-instance nature regarding its database.
    • Focus: For most users, ensuring the Uptime Kuma host is reliable, using automated backups, and having quick recovery procedures is a more practical approach than attempting full HA for Uptime Kuma itself. If you need extremely high availability for the monitoring system itself, enterprise tools designed for distributed operation might be more suitable, or you accept the small risk of monitoring downtime.
  • Distributed Monitoring (Probing from different locations):

    • Uptime Kuma itself polls from the server it's running on. If you need to check your services' accessibility from multiple geographic locations, you would typically need to run multiple Uptime Kuma instances in those locations. There isn't a built-in "remote probe" feature that reports back to a central Uptime Kuma server.

Contributing to Uptime Kuma (Open Source Aspect)

Uptime Kuma is an open-source project, thriving on community contributions.

  • GitHub: The project is hosted on GitHub (louislam/uptime-kuma).
  • Ways to Contribute:
    • Reporting Bugs: If you find an issue, report it clearly on GitHub Issues.
    • Suggesting Features: Propose new ideas or improvements.
    • Improving Documentation: Help make the docs clearer or more comprehensive.
    • Translations: Uptime Kuma supports multiple languages; you can help translate it.
    • Developing Code: If you're a developer, you can fix bugs, implement new features, or add new notification providers by submitting Pull Requests.
    • Community Support: Help other users on forums, Discord, or GitHub Discussions.

Contributing to open source is a great way to learn, give back, and become part of a community.

Workshop Monitoring a Cron Job with a Push Monitor

Objective:
Use Uptime Kuma's "Push" monitor type to ensure a critical (simulated) nightly cron job is running successfully and reporting back to Uptime Kuma.

Prerequisites:

  • Uptime Kuma installed and running.
  • curl command-line utility installed on the server where the cron job will run (usually installed by default on Linux).
  • Access to Uptime Kuma web dashboard.
  • Permission to add cron jobs on the server.

Scenario:
We'll create a dummy shell script that simulates a task. If the task "succeeds," it will send a heartbeat to Uptime Kuma. If it "fails," it won't (or could optionally send a "down" signal). We'll schedule this script with cron.

Steps:

  1. Create a Push Monitor in Uptime Kuma:

    • In your Uptime Kuma dashboard, click "+ Add New Monitor".
    • Monitor Type: Select Push.
    • Friendly Name: Enter Nightly Data Processing Job.
    • Expected Heartbeat Interval: This is how often Uptime Kuma expects to receive a heartbeat. For a real nightly job, you might set this to 25 hours (to allow some leeway). For this workshop, to see results faster, set it to 5 minutes. This means if Uptime Kuma doesn't get a push within 5 minutes of the last one (or since creation), it will mark the monitor as "Down."
    • Retry (for Push Monitors): In current versions this controls how many missed heartbeats beyond the expected interval are tolerated before the monitor is marked "Down"; check the field's help text in your UI version for the exact semantics.
    • Click "Save".
    • After saving, Uptime Kuma will display the Push URL. It will look something like http://<your_kuma_domain_or_ip>:3001/api/push/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx.
    • Copy this Push URL carefully. You'll need it for your script.
  2. Create the Dummy Cron Job Script: On the server where your cron job will run (can be the Uptime Kuma server itself for this test), create a shell script.

    sudo nano /usr/local/bin/simulated_nightly_job.sh
    
    Paste the following content into the script. Replace YOUR_UPTIME_KUMA_PUSH_URL_HERE with the actual Push URL you copied from Uptime Kuma.

    #!/bin/bash
    
    # Replace with your actual Uptime Kuma Push URL
    PUSH_URL="YOUR_UPTIME_KUMA_PUSH_URL_HERE"
    
    # Path to a log file for this script's actions
    JOB_LOG_FILE="/var/log/simulated_nightly_job.log"
    
    echo "$(date): Simulated nightly job started." >> "${JOB_LOG_FILE}"
    
    # Simulate some work (e.g., data processing, backup)
    echo "$(date): Doing important work..." >> "${JOB_LOG_FILE}"
    sleep 20 # Simulate work taking 20 seconds
    
    # Simulate success or failure of the task
    # To test failure, change TASK_SUCCESSFUL to false
    TASK_SUCCESSFUL=true
    
    if $TASK_SUCCESSFUL; then
        echo "$(date): Task completed successfully. Sending UP heartbeat to Uptime Kuma." >> "${JOB_LOG_FILE}"
        # Append parameters: status=up (optional, default), msg=OK (custom message), ping= (optional, duration in ms)
        # The -m 60 flag for curl sets a maximum time of 60 seconds for the operation.
        curl -fsS -m 60 "${PUSH_URL}?status=up&msg=Nightly%20Job%20OK" >> "${JOB_LOG_FILE}" 2>&1
        if [ $? -ne 0 ]; then
            echo "$(date): ERROR - Failed to send UP heartbeat to Uptime Kuma." >> "${JOB_LOG_FILE}"
        else
            echo "$(date): Successfully sent UP heartbeat." >> "${JOB_LOG_FILE}"
        fi
    else
        echo "$(date): Task FAILED. Sending DOWN heartbeat to Uptime Kuma." >> "${JOB_LOG_FILE}"
        # Optionally, you can explicitly send a "down" signal if the script knows it failed.
        curl -fsS -m 60 "${PUSH_URL}?status=down&msg=Nightly%20Job%20Failed%20Simulated" >> "${JOB_LOG_FILE}" 2>&1
        if [ $? -ne 0 ]; then
            echo "$(date): ERROR - Failed to send DOWN heartbeat to Uptime Kuma for failure." >> "${JOB_LOG_FILE}"
        else
            echo "$(date): Successfully sent DOWN heartbeat for failure." >> "${JOB_LOG_FILE}"
        fi
    fi
    
    echo "$(date): Simulated nightly job finished." >> "${JOB_LOG_FILE}"
    echo "---" >> "${JOB_LOG_FILE}"
    exit 0
    
    • Explanation:
      • PUSH_URL: Your unique URL from Uptime Kuma.
      • TASK_SUCCESSFUL: A variable to easily simulate success or failure.
      • curl -fsS -m 60 "${PUSH_URL}?status=up&msg=Nightly%20Job%20OK": If successful, it sends a GET request to the push URL.
        • ?status=up&msg=Nightly%20Job%20OK: Sends an "up" status and a URL-encoded message "Nightly Job OK".
        • -m 60: curl will timeout after 60 seconds for this operation.
        • >> "${JOB_LOG_FILE}" 2>&1: Appends curl's output (if any) and errors to the job log.
      • If TASK_SUCCESSFUL is false, it sends status=down (or you could just have it not send any curl command, and Uptime Kuma would eventually time out).
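Note that the msg parameter must be URL-encoded (spaces become %20, as in the script above). If you build messages dynamically, one way to produce a safe value from the shell (assuming python3 is available) is:

```shell
# URL-encode an arbitrary heartbeat message before appending it to the push URL
msg_encoded=$(python3 -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1]))" "Nightly Job OK")
echo "${msg_encoded}"   # Nightly%20Job%20OK
```

You could then send the heartbeat with `curl -fsS "${PUSH_URL}?status=up&msg=${msg_encoded}"`.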

    Make the script executable:

    sudo chmod +x /usr/local/bin/simulated_nightly_job.sh
    

  3. Test the Script Manually:
    Run the script once from the command line to see if it sends the heartbeat.

    sudo /usr/local/bin/simulated_nightly_job.sh
    

    • Check Uptime Kuma: Your "Nightly Data Processing Job" monitor should turn green ("Up") and show a recent heartbeat. Click on it to see details; it might show your "Nightly Job OK" message.
    • Check the job log: sudo cat /var/log/simulated_nightly_job.log.
  4. Schedule the Script with Cron:
    For testing, we'll run it every minute. For a real nightly job, you'd change the schedule. Open the crontab for editing (usually for the root user, as the script writes to /var/log and might need other permissions):

    sudo crontab -e
    
    Add the following line:
    * * * * * /usr/local/bin/simulated_nightly_job.sh
    

    • This * * * * * means "run every minute."
    • For a real job running at 3:05 AM daily: 5 3 * * * /usr/local/bin/simulated_nightly_job.sh

    Save and exit the crontab editor.

  5. Observe Uptime Kuma:

    • Within a minute or two, Uptime Kuma should receive the first heartbeat from the cron job. The "Nightly Data Processing Job" monitor will remain "Up" and show regular heartbeats (every minute, in this test setup).
  6. Simulate a Job Failure:
    Now, let's make the job "fail" so it doesn't send the "up" heartbeat (or sends a "down" one).

    • Edit the script: sudo nano /usr/local/bin/simulated_nightly_job.sh
    • Change the line TASK_SUCCESSFUL=true to TASK_SUCCESSFUL=false.
    • Save the script.
    • Wait: Cron will run the script again within the next minute. This time, it will either not send an "up" heartbeat or send an explicit "down" signal (depending on how you configured the else block in the script).
    • Observe Uptime Kuma:
      • If your script sends status=down, the monitor should immediately turn red ("Down").
      • If your script simply doesn't send an "up" heartbeat on failure, Uptime Kuma will wait for the "Expected Heartbeat Interval" (5 minutes in our workshop setup) to pass without receiving a push. After that, it will mark the monitor as "Down."
    • You should receive a notification if you have them configured for this monitor.
  7. Restore to Success and Cleanup:

    • Edit the script again: sudo nano /usr/local/bin/simulated_nightly_job.sh
    • Change TASK_SUCCESSFUL=false back to TASK_SUCCESSFUL=true.
    • Save the script.
    • The next time cron runs it, an "up" heartbeat will be sent, and the Uptime Kuma monitor will recover.
    • Important Cleanup: Once you're done with the workshop, remove or comment out the cron job to stop it from running every minute!
      sudo crontab -e
      # Put a '#' at the beginning of the line:
      # * * * * * /usr/local/bin/simulated_nightly_job.sh
      # Save and exit.
      
      You can also delete the test monitor from Uptime Kuma if desired.

This workshop demonstrates the power of push monitors for tasks that Uptime Kuma can't actively poll. A heartbeat sent at the end of the job confirms not merely that the server is online, but that the job actually ran to the point of reporting success, which makes this a far more reliable way to monitor batch jobs than host-level checks alone.

Conclusion

Throughout this comprehensive guide, we've journeyed from the fundamentals of Uptime Kuma to its more advanced configurations and integrations. You've learned how to install it, set up a variety of monitor types to check everything from website availability to DNS resolution, and fine-tune these monitors with advanced settings like custom heartbeats, retries, and timeouts.

We delved into the critical aspects of receiving timely alerts through a multitude of notification channels, with a practical workshop on setting up Telegram notifications. You've also seen how to create and customize public-facing status pages, enhancing transparency and communication with your users, especially during incidents.

The advanced sections equipped you with the knowledge to bolster your Uptime Kuma installation's security using a reverse proxy like Nginx, complete with SSL/TLS encryption via Let's Encrypt. We emphasized the paramount importance of regular, automated backups and a tested recovery strategy, providing a hands-on workshop to implement this for Docker volumes. Finally, we explored sophisticated topics such as leveraging the Socket.IO API for programmatic control, integrating with Prometheus for broader metrics collection, and effectively using push monitors to track the health of cron jobs and other passive services.

Uptime Kuma stands out as an exceptionally user-friendly, yet powerful, self-hosted monitoring solution. Its simplicity does not detract from its capability to provide essential oversight for your critical services. By mastering the concepts and techniques discussed, you are now well-equipped to deploy, manage, and scale your own Uptime Kuma instance effectively, ensuring you have a reliable eye on your digital infrastructure.

The world of self-hosting is one of continuous learning and improvement. We encourage you to continue exploring Uptime Kuma's features, consult its official documentation on GitHub, and engage with its vibrant community. As your needs evolve, you'll find Uptime Kuma to be a versatile and dependable partner in maintaining the availability and performance of your self-hosted services. Happy monitoring!