Author: Nejat Hakan
Email: nejat.hakan@outlook.de
PayPal Me: https://paypal.me/nejathakan
Reverse Proxy Nginx Proxy Manager
Introduction to Reverse Proxies and Nginx Proxy Manager
Welcome to this comprehensive guide on Nginx Proxy Manager (NPM), a powerful yet user-friendly tool for managing reverse proxy services, especially in self-hosting environments. This document will take you from the fundamental concepts of reverse proxies to advanced configurations and troubleshooting techniques, empowering you to securely and efficiently expose your self-hosted applications to the internet. We assume you are comfortable with basic Linux command-line operations and have a foundational understanding of networking concepts.
What is a Reverse Proxy?
Imagine a large office building with many different departments, each offering a specific service. Instead of visitors wandering around trying to find the right department, they first approach a central reception desk. The receptionist knows where every department is located and directs visitors accordingly. The receptionist also handles initial inquiries, filters out unwanted visitors, and ensures that only authorized personnel reach sensitive areas.
In the digital world, a reverse proxy acts much like this receptionist for your web servers and applications. When a user tries to access one of your services from the internet, their request doesn't go directly to the application server. Instead, it first hits the reverse proxy. The reverse proxy then forwards this request to the appropriate backend server or service based on the requested domain name, path, or other criteria.
Key Benefits of Using a Reverse Proxy:
- Centralized Access and Simplified URLs: Instead of remembering http://yourserver_ip:8080 for one service and http://yourserver_ip:9000 for another, you can access them via user-friendly subdomains like service1.yourdomain.com and service2.yourdomain.com, all running on standard web ports (80 for HTTP, 443 for HTTPS). The reverse proxy handles the internal port mapping.
- SSL/TLS Termination: Implementing HTTPS (SSL/TLS encryption) can be complex to configure for each individual application. A reverse proxy can handle SSL/TLS termination centrally. This means encrypted HTTPS traffic from the internet is decrypted at the reverse proxy, and traffic can then be forwarded to backend services as unencrypted HTTP if they are on a secure internal network. This simplifies certificate management, as you only need to manage certificates on the reverse proxy.
- Enhanced Security:
- Hiding Backend Server Information: The reverse proxy acts as a facade, obscuring the IP addresses and characteristics of your internal backend servers from direct public exposure.
- Protection against Common Web Attacks: Many reverse proxies can be configured to block common web exploits, filter malicious requests, and provide a basic layer of web application firewall (WAF) functionality.
- Access Control: You can implement access restrictions, IP whitelisting/blacklisting, or basic authentication at the reverse proxy level before requests even reach your applications.
- Load Balancing: If you have multiple instances of an application running on different servers for high availability or performance, a reverse proxy can distribute incoming traffic among them. This prevents any single server from being overwhelmed and improves overall service reliability. (While Nginx Proxy Manager can do this, it's often simpler with dedicated load balancers for very complex setups).
- Caching: A reverse proxy can cache frequently requested static content (like images, CSS, or JavaScript files). When another user requests the same content, the reverse proxy can serve it directly from its cache instead of bothering the backend server, leading to faster response times and reduced load on your applications.
- Compression: Reverse proxies can compress server responses before sending them to the client (e.g., using Gzip or Brotli), reducing bandwidth usage and improving load times for users.
Why Nginx?
Nginx (pronounced "engine-x") is an open-source, high-performance web server, reverse proxy, load balancer, and HTTP cache. It's renowned for its:
- Performance and Scalability: Nginx uses an event-driven, asynchronous architecture, allowing it to handle a vast number of concurrent connections with minimal resource consumption (CPU and memory). This makes it ideal for high-traffic websites and services.
- Stability and Reliability: Nginx is known for its robustness and has been battle-tested in production by many of the world's largest websites.
- Rich Feature Set: It offers a wide array of features crucial for modern web delivery, including SSL/TLS termination, HTTP/2 support, WebSocket proxying, URL rewriting, access control, and extensive customization options through its flexible configuration language.
- Active Community and Development: Nginx benefits from a large, active community and continuous development, ensuring it stays up-to-date with the latest web technologies and security practices.
While Nginx is incredibly powerful, its traditional configuration involves editing text-based configuration files, which can be daunting for beginners or cumbersome for managing many sites.
Introducing Nginx Proxy Manager (NPM)
Nginx Proxy Manager (NPM) is a Docker-based application that provides a clean, user-friendly web interface for managing Nginx reverse proxy configurations. It simplifies many of the common tasks associated with setting up and maintaining a reverse proxy, especially for self-hosters.
Why use NPM over manual Nginx configuration?
- Simplicity and Ease of Use: NPM abstracts away the complexity of Nginx configuration files. You can set up proxy hosts, request SSL certificates, and configure access lists through an intuitive graphical user interface (GUI) without needing to write Nginx syntax directly (though you can add custom Nginx snippets if needed).
- Integrated Let's Encrypt SSL Management: One of NPM's standout features is its seamless integration with Let's Encrypt. You can request, install, and automatically renew free SSL/TLS certificates for your domains with just a few clicks. This is a huge time-saver and lowers the barrier to implementing HTTPS.
- Docker-Native: Being a Docker application, NPM is easy to deploy, manage, and update. It isolates its dependencies and simplifies the setup process across different host systems.
- Visual Overview: The dashboard provides a clear overview of all your proxy hosts, their status, and SSL certificate information.
Core Features of Nginx Proxy Manager:
- Easy creation and management of proxy hosts (routing domain names to backend services).
- Automated SSL certificate generation and renewal via Let's Encrypt.
- Support for wildcard SSL certificates.
- Option to upload custom SSL certificates.
- Access lists for IP whitelisting/blacklisting and HTTP Basic Authentication.
- Redirection hosts.
- 404 hosts (custom error pages for unhandled domains).
- Stream hosts for TCP/UDP proxying (e.g., for game servers, SSH).
- Basic protection against common exploits.
- Websocket support.
- User management for accessing the NPM interface.
- An option to add custom Nginx configurations for advanced users.
Prerequisites for this Guide
To make the most of this guide and follow along with the workshops, you should ideally have:
- Basic Linux Knowledge: Familiarity with navigating the command line, editing files, and managing services.
- Docker and Docker Compose Installed: NPM is deployed as a Docker container, and Docker Compose simplifies its management. Ensure these are installed and working on your server.
  - You can typically check with docker --version and docker-compose --version.
- A Domain Name: You'll need a registered domain name (e.g., yourdomain.com) that you can manage DNS records for. Free dynamic DNS services like DuckDNS can work for home setups, but a paid domain offers more flexibility.
- A Server: This could be:
- A Virtual Private Server (VPS) from a cloud provider (e.g., DigitalOcean, Linode, Vultr, Hetzner).
- A Home Server: A Raspberry Pi, an old PC, a Network Attached Storage (NAS) device with Docker capabilities, or any machine you dedicate to self-hosting.
- This server needs a publicly accessible IP address for Let's Encrypt to verify your domain and for users to access your services.
- Firewall Access: You must be able to open ports 80 (HTTP) and 443 (HTTPS) on your server's firewall and, if applicable, on your home router (port forwarding) to allow internet traffic to reach NPM. Port 81 is also used by default for accessing the NPM admin interface.
This guide will provide verbose explanations, assuming a university student audience eager to dive deep into the concepts and practical applications of Nginx Proxy Manager.
Workshop Understanding Reverse Proxy Concepts
Objective:
To conceptualize how a reverse proxy directs traffic and its benefits in a multi-service environment.
Scenario:
Imagine you have a home server (e.g., a Raspberry Pi or an old desktop) where you're running several self-hosted applications:
- A personal blog (e.g., WordPress) running on port 8080.
- A photo gallery (e.g., PhotoPrism) running on port 8090.
- Personal cloud storage (e.g., Nextcloud) running on port 8100.
Without a reverse proxy, to access these services from your local network, you'd use URLs like:
- http://<your_server_local_ip>:8080 for the blog.
- http://<your_server_local_ip>:8090 for the photo gallery.
- http://<your_server_local_ip>:8100 for the cloud storage.
If you wanted to make these accessible from the internet, you'd have to:
- Open ports 8080, 8090, and 8100 on your router and firewall, pointing them to your server. This increases your server's attack surface.
- Users would need to remember these specific port numbers in the URL (e.g., http://your_public_ip:8080), which is not user-friendly.
- Managing SSL certificates for each service individually would be a hassle.
Task:
- Draw a Diagram:
On a piece of paper or using a simple drawing tool, create a diagram that illustrates the following:
- The Internet (representing external users).
- Your router/firewall (with only ports 80 and 443 open to the Nginx Proxy Manager server).
- Your Nginx Proxy Manager server.
- Your three backend services (Blog, Photo Gallery, Cloud Storage) running on their respective internal ports (8080, 8090, 8100) on the same or different internal servers/containers.
- Illustrate Traffic Flow:
  - Show how a user request for https://blog.yourdomain.com (port 443) reaches NPM.
  - Show how NPM, after handling SSL, forwards this request to the Blog service on its internal port 8080.
  - Repeat this for https://photos.yourdomain.com (to port 8090) and https://cloud.yourdomain.com (to port 8100).
  - Indicate that communication between NPM and the backend services can be plain HTTP if they are on a trusted internal network.
Diagram Sketch Example:
Internet User
|
| (HTTPS: blog.yourdomain.com on port 443)
v
+---------------------+
| Router/Firewall | (Ports 80, 443 open to NPM)
+---------------------+
|
v
+--------------------------+
| Nginx Proxy Manager |
| (Listens on 80, 443) |
| |
| blog.yourdomain.com --->|--- (HTTP) ---> [Blog Service (port 8080)]
| photos.yourdomain.com--->|--- (HTTP) ---> [Photo Gallery (port 8090)]
| cloud.yourdomain.com --->|--- (HTTP) ---> [Cloud Storage (port 8100)]
+--------------------------+
Discussion Points (Consider these as you draw):
- Simplified Access: How do the URLs https://blog.yourdomain.com, https://photos.yourdomain.com, and https://cloud.yourdomain.com compare to accessing services by IP and port?
- Security:
  - How many ports are exposed to the internet in the "with NPM" scenario versus the "without NPM" scenario?
  - How does NPM hide the internal structure (IPs, ports) of your backend services?
- SSL Management: Where are SSL certificates managed in this setup? How does this simplify things?
- Scalability/Flexibility: If you wanted to change the port of your Blog service from 8080 to 8888, what would you need to update? Would users notice? (Answer: Only the NPM configuration for that proxy host; the user-facing URL remains the same.)
- Adding New Services: How easy would it be to add a fourth service, say wiki.yourdomain.com, using this setup?
This conceptual workshop should help solidify the role and benefits of a reverse proxy like Nginx Proxy Manager before we dive into the practical installation and configuration.
1. Setting Up Nginx Proxy Manager
Now that we understand the "why" and "what" of Nginx Proxy Manager (NPM), let's get our hands dirty and set it up. NPM is designed to run as a Docker application, which greatly simplifies its installation and management. We'll primarily use Docker Compose, a tool for defining and running multi-container Docker applications.
Understanding the NPM Docker Compose Stack
A typical Nginx Proxy Manager setup using Docker Compose involves a few key components defined in a docker-compose.yml file:
- The app service: This is the main Nginx Proxy Manager application container.
  - Image: jc21/nginx-proxy-manager:latest is the official Docker image.
  - Ports: It exposes several ports:
    - 80:80: This maps port 80 on your host machine to port 80 inside the NPM container. This is for standard HTTP traffic that your proxy hosts will serve.
    - 443:443: Maps port 443 on your host to port 443 in the container. This is for HTTPS traffic.
    - 81:81: Maps port 81 on your host to port 81 in the container. This is the port for accessing the NPM web administration interface.
  - Volumes: Persistent storage is crucial for NPM:
    - ./data:/data: This mounts a local directory named data (in the same directory as your docker-compose.yml file) to the /data directory inside the container. This volume stores NPM's configuration, including the SQLite database (by default), access lists, and proxy host settings.
    - ./letsencrypt:/etc/letsencrypt: This mounts a local directory named letsencrypt to /etc/letsencrypt inside the container. This is where Let's Encrypt SSL certificates and related files are stored.
  - Environment Variables:
    - DB_SQLITE_FILE: "/data/database.sqlite": Tells NPM where to store its SQLite database file within its /data volume.
    - (Optional) You can configure NPM to use an external MySQL/MariaDB database using other environment variables (e.g., DB_MYSQL_HOST, DB_MYSQL_USER, etc.). For most home users and small setups, the default SQLite is sufficient and simpler.
- The db service (optional, but recommended for larger setups), if you opt for MariaDB/MySQL:
  - Image: Typically jc21/mariadb-aria:latest or an official mariadb image.
  - Environment Variables: To set up the root password, database name, user, and password for NPM.
  - Volume: A volume to persist the database data (e.g., ./mysql_data:/var/lib/mysql).
  - The app service would then have a depends_on: - db directive.
- Networks:
  - It's good practice to define a custom Docker network (e.g., proxy-net) for your NPM stack. This allows the app and db containers (if used) to communicate easily and can also be used to connect other application containers that NPM will proxy to.
Choosing Your Setup Environment:
- VPS (Virtual Private Server): Ideal for publicly accessible services. Providers like DigitalOcean, Linode, Vultr, Hetzner offer various plans. You'll get a static public IP address.
- Home Server: A Raspberry Pi, an old PC, or a NAS can host NPM. If your home internet connection has a dynamic IP, you might need a Dynamic DNS (DDNS) service. You'll also need to configure port forwarding on your home router (forwarding external ports 80 and 443 to the internal IP of your NPM server on ports 80 and 443, and external port 81 to internal port 81 for the admin UI).
For simplicity in this basic setup, we will use the default SQLite database, which doesn't require a separate db service in the Docker Compose file.
Prerequisites for Installation
Before you proceed, ensure the following are in place on your chosen server:
- Docker Installed:
  - Verify with: docker --version
  - If not installed, follow the official Docker installation guide for your Linux distribution (e.g., sudo apt install docker.io on Debian/Ubuntu, or use the convenience script from get.docker.com).
  - Ensure your user is part of the docker group to run Docker commands without sudo (e.g., sudo usermod -aG docker $USER, then log out and log back in).
- Docker Compose Installed:
  - Verify with: docker-compose --version (or docker compose version for Docker Compose V2).
  - If not installed, follow the official Docker Compose installation guide. For Linux, this often involves downloading the binary from GitHub.
- Firewall Configuration:
  - Ensure that incoming traffic on TCP ports 80, 443, and 81 is allowed on your server's firewall (e.g., ufw, firewalld). An example using ufw (Uncomplicated Firewall) on Ubuntu/Debian is shown below.
  - If you're behind a home router, set up port forwarding for these ports from your router's public IP to your server's internal IP.
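The original firewall example was not preserved here; assuming ufw on Ubuntu/Debian, a minimal sketch covering the ports described above looks like this:
```bash
# Allow NPM's public and admin ports (TCP)
sudo ufw allow 80/tcp    # HTTP (also used for Let's Encrypt HTTP-01 challenges)
sudo ufw allow 443/tcp   # HTTPS
sudo ufw allow 81/tcp    # NPM admin UI (consider restricting this to your LAN later)
sudo ufw status          # Confirm the rules are active
```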
Step-by-step Installation using Docker Compose (SQLite version)
We'll use the simpler SQLite setup for NPM, which is perfectly adequate for most self-hosting needs.
- Create a Directory for NPM:
  It's good practice to keep your Docker Compose projects organized. You can choose any directory name and location (e.g., /opt/npm-stack); this guide uses ~/npm-stack.
- Create the docker-compose.yml file:
  Inside the ~/npm-stack directory, create a file named docker-compose.yml using a text editor like nano or vim. Paste the following content into the file:
```yaml
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: npm-app  # Optional: gives a predictable container name
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80'    # Public HTTP Port
      - '443:443'  # Public HTTPS Port
      - '81:81'    # Admin Web Port for NPM UI
    environment:
      # SQLite is the default database if DB_MYSQL_HOST is not specified
      DB_SQLITE_FILE: "/data/database.sqlite"
      # Uncomment the line below if you want to disable NPM's anonymous data collection
      # DISABLE_TELEMETRICS: 'true'
    volumes:
      # Persist NPM data and Let's Encrypt certificates on the host
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    # If you want NPM to connect to other containers, they should often be on the same custom network
    # For now, we'll keep it simple. We'll discuss networks more in later sections.
    # networks:
    #   - proxy-net

# Define a custom network if you plan to link other service containers directly by name
# networks:
#   proxy-net:
#     driver: bridge
```
Explanation of the docker-compose.yml:
- version: '3.8': Specifies the Docker Compose file format version.
- services: Defines the different application components (containers).
- app: This is the name we're giving to our Nginx Proxy Manager service.
- image: 'jc21/nginx-proxy-manager:latest': Tells Docker to use the latest version of the official NPM image from Docker Hub.
- container_name: npm-app: Assigns a fixed, human-readable name to the container. This can be useful for docker commands.
- restart: unless-stopped: Ensures the container will automatically restart if it crashes or if the Docker daemon restarts, unless it was manually stopped.
- ports: Defines port mappings.
  - '80:80': Maps port 80 on your host server to port 80 inside the NPM container. This is where Nginx inside NPM listens for incoming HTTP traffic.
  - '443:443': Maps port 443 on your host to port 443 in the container for HTTPS traffic.
  - '81:81': Maps port 81 on your host to port 81 in the container, where the NPM admin web interface is served.
- environment: Sets environment variables inside the container.
  - DB_SQLITE_FILE: "/data/database.sqlite": Specifies that NPM should use an SQLite database and where to store its file within the /data volume.
  - DISABLE_TELEMETRICS: 'true' (Optional): If you uncomment this, it disables anonymous statistics collection by NPM.
- volumes: Defines how data is persisted.
  - ./data:/data: Creates a directory named data in your current host directory (~/npm-stack/data) and mounts it to /data inside the container. All NPM configurations, proxy host settings, user accounts, etc., will be stored here. This ensures your data persists even if the container is removed and recreated.
  - ./letsencrypt:/etc/letsencrypt: Similarly, creates ~/npm-stack/letsencrypt on the host and mounts it to /etc/letsencrypt in the container. This is where all your SSL certificates from Let's Encrypt will be stored.
- networks: (Commented out for now): This section, if uncommented, would define a custom Docker bridge network named proxy-net. While highly recommended for connecting other service containers to NPM by their container names, we will introduce this concept more thoroughly in an intermediate section to keep the initial setup straightforward. For now, NPM can still proxy to services exposed on the host's IP or other IPs.
-
Run Nginx Proxy Manager:
Save the docker-compose.yml file (Ctrl+X, then Y, then Enter in nano). Now, from within the ~/npm-stack directory (where your docker-compose.yml file is), run the following command (shown below):
  - docker-compose up: This command starts the services defined in your docker-compose.yml.
  - -d: Runs the containers in "detached" mode, meaning they run in the background, and you get your command prompt back.
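Putting those two parts together, the command is:
```bash
# Start NPM in the background from the directory containing docker-compose.yml
docker-compose up -d
```
(On newer Docker installations using Compose V2, the equivalent is docker compose up -d.)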
Docker Compose will first pull the jc21/nginx-proxy-manager:latest image if it's not already on your system. Then, it will create and start the npm-app container. You will also see two new directories, data and letsencrypt, created in your ~/npm-stack directory. These are your persistent storage volumes.
- Verify the Installation:
  You can check if the container is running correctly with docker ps. You should see output indicating the npm-app container is up and running, with ports 80, 81, and 443 mapped. You can also check the logs of the container (docker logs npm-app) if you suspect any issues during startup.
Initial NPM Configuration
With the NPM container running, it's time to access its web UI and perform the initial setup.
-
Access the NPM Web UI:
Open your web browser and navigate to http://<your_server_ip>:81. Replace <your_server_ip> with the actual public or private IP address of the server where you installed NPM.
- If you installed it on a local VM for testing, use the VM's IP address.
- If on a VPS, use its public IP address.
- If on a home server, use its local network IP if accessing from within your home network.
-
Default Administrator Credentials:
You will be greeted with the Nginx Proxy Manager login screen. The default credentials are:
  - Email: admin@example.com
  - Password: changeme
  Enter these credentials and click "Sign In".
-
Change Default Admin Details (CRITICAL):
Immediately after your first login, NPM will prompt you to change the default administrator details. This is a critical security step.- Full Name: Enter your name or a descriptive name for the admin user.
- Nickname: A shorter alias.
- Email: Change admin@example.com to your actual email address. This is important for notifications, especially for Let's Encrypt certificate expiry warnings.
-
Change Admin Password (CRITICAL):
Next, you'll be prompted to change the default password.- Current Password:
changeme
- New Password: Choose a strong, unique password.
- Confirm New Password: Re-enter your new password.
- Click "Save".
- Current Password:
You are now logged into the Nginx Proxy Manager dashboard with your updated credentials. Congratulations, NPM is installed and ready for use!
Workshop Installing Nginx Proxy Manager
Objective: To successfully install Nginx Proxy Manager on your server using Docker Compose and access its web administration interface.
Prerequisites:
- A server (local Virtual Machine, Raspberry Pi, or cloud VPS) with a Linux-based operating system.
- Docker and Docker Compose installed on the server.
- Your user account on the server should have permissions to run Docker commands (ideally by being part of the docker group).
- Ports 80, 443, and 81 (TCP) must be open on your server's firewall. If your server is behind a NAT router (like a home router), these ports must also be forwarded from the router to your server's internal IP address.
Steps:
-
SSH into Your Server: Connect to your server's command line interface using SSH.
-
Create a Project Directory: Choose a location for your NPM configuration. We'll use /opt/npm for this workshop, which is a common location for optional software. You might need sudo to create a directory here.
```bash
sudo mkdir -p /opt/npm
sudo chown $USER:$USER /opt/npm   # Give your current user ownership
cd /opt/npm
```
  (Note: If you prefer, use ~/npm-stack as in the tutorial text; the principle is the same. Adjust paths accordingly if you use a different location.)
- Create the docker-compose.yml File: Use a text editor like nano to create the docker-compose.yml file in the /opt/npm directory, and paste the following configuration into it:
```yaml
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: npm_app_workshop  # A unique name for this workshop
    restart: unless-stopped
    ports:
      - '80:80'    # HTTP
      - '443:443'  # HTTPS
      - '81:81'    # NPM Admin UI
    environment:
      DB_SQLITE_FILE: "/data/database.sqlite"
      # For this workshop, we'll leave telemetrics enabled (default)
      # You can add DISABLE_TELEMETRICS: 'true' later if you wish
    volumes:
      - ./data:/data                    # Maps /opt/npm/data on host to /data in container
      - ./letsencrypt:/etc/letsencrypt  # Maps /opt/npm/letsencrypt on host
```
  Save the file and exit the editor (Ctrl+X, then Y, then Enter in nano).
- Start Nginx Proxy Manager: From the /opt/npm directory, execute the Docker Compose command docker-compose up -d. This will download the image (if not already present) and start the NPM container.
- Verify Container Status: Check that the container is running with docker ps. You should see npm_app_workshop (or whatever container_name you chose) in the list with status "Up". Also, check if the data and letsencrypt directories were created in /opt/npm (for example with ls /opt/npm): you should see data and letsencrypt directories.
- Access the NPM Admin UI: Open a web browser on your local computer and navigate to: http://<your_server_ip>:81 (replace <your_server_ip> with the actual IP address of your server).
- Initial Login and Configuration:
  - You should see the NPM login page.
  - Log in with the default credentials:
    - Email: admin@example.com
    - Password: changeme
  - Follow the on-screen prompts to:
    - Change the admin user's Full Name, Nickname, and Email Address (use your real email).
    - Change the admin user's Password to something strong and memorable.
Verification:
- You are successfully logged into the Nginx Proxy Manager dashboard.
- You can see sections like "Dashboard", "Hosts", "SSL Certificates", etc.
- If you check the /opt/npm/data directory on your server, you should find a database.sqlite file, among other files and folders, confirming that persistent storage is working.
If you encountered any issues:
- Double-check firewall rules (ports 80, 443, 81).
- Verify port forwarding if behind a NAT router.
- Check Docker and Docker Compose installation.
- Inspect NPM container logs: docker logs npm_app_workshop.
You have now successfully installed Nginx Proxy Manager! In the next section, we'll configure our first proxy host.
2. Your First Proxy Host
With Nginx Proxy Manager (NPM) installed and running, it's time to put it to work. The primary function of NPM is to act as a reverse proxy, routing traffic from a public-facing domain name to an internal backend service. This is achieved by creating "Proxy Hosts."
Understanding Proxy Hosts in NPM
A "Proxy Host" in NPM is essentially a set of rules that tells Nginx how to handle incoming requests for a specific domain or subdomain and where to forward them.
Core Components of a Proxy Host:
- Domain Names: This is the public domain or subdomain (e.g., myapp.yourdomain.com, blog.yourdomain.com) that users will type into their browser. NPM will listen for requests matching these domain names. You can specify multiple domain names or aliases for a single proxy host entry.
- Scheme: This defines the protocol NPM will use to communicate with your backend service. It can be:
  - http: NPM will forward requests to the backend service using unencrypted HTTP. This is common for backend services running on a trusted internal network.
  - https: NPM will forward requests to the backend service using encrypted HTTPS. This is necessary if your backend service itself expects HTTPS traffic and has its own SSL certificate (e.g., for end-to-end encryption).
- Forward Hostname / IP: This is the address of your backend service. It can be:
  - An IP address (e.g., 192.168.1.100 if the service is on another machine in your local network, or 172.17.0.1, which is often the Docker host's IP from within a container on the default bridge network).
  - A hostname (e.g., myservice-container-name if your backend service is a Docker container running on the same custom Docker network as NPM, or internal-server.local if it's a resolvable hostname on your network).
- Forward Port: This is the port number on which your backend service is listening (e.g., 8080, 3000, 80).
Setting Up a Simple Backend Service for Demonstration
To create our first proxy host, we need a backend service to proxy to. For simplicity, we'll use a very basic web server container that just tells us who it is. The containous/whoami Docker image is perfect for this.
- Run the whoami Docker Container:
  Open a terminal on your server (where Docker is installed) and run the following command: docker run -d --name whoami-app -p 8000:80 containous/whoami
  - docker run: Command to run a new container.
  - -d: Run the container in detached mode (in the background).
  - --name whoami-app: Assigns a friendly name, whoami-app, to this container.
  - -p 8000:80: Maps port 8000 on your host server to port 80 inside the whoami-app container. The whoami application inside the container listens on port 80 by default. We use host port 8000 to avoid conflict with NPM, which is already using host port 80.
  - containous/whoami: The Docker image to use.
- Verify the Backend Service:
  You can verify that the whoami-app service is running and accessible locally on your server with curl http://localhost:8000, or, if your server has a specific internal IP (e.g., 192.168.1.50), with curl http://192.168.1.50:8000. You should see output similar to this, showing details about the request and the container:
```text
Hostname: <container_id>
IP: 127.0.0.1
IP: ::1
IP: 172.17.0.2                 # This is the container's IP on the Docker bridge network
RemoteAddr: 172.17.0.1:43066   # This is the Docker host's IP and a random port
GET / HTTP/1.1
Host: localhost:8000
User-Agent: curl/7.74.0
Accept: */*
```
  This confirms our simple backend web service is running and listening on port 8000 of our Docker host.
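When you later need a "Forward Hostname / IP" for NPM, it helps to know which addresses your Docker host answers on. Two commonly available commands for that (treat their availability on your distribution as an assumption):
```bash
# IP of the docker0 bridge (often 172.17.0.1) - reachable from containers on the default bridge
ip addr show docker0

# All IP addresses assigned to the host's interfaces (pick your LAN IP from this list)
hostname -I
```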
Configuring DNS
Before NPM can route traffic for a domain like app1.yourdomain.com, the internet's Domain Name System (DNS) needs to know that app1.yourdomain.com points to the public IP address of your server where NPM is running.
-
Access Your DNS Provider's Control Panel:
Log in to the website of your domain registrar or DNS provider (e.g., GoDaddy, Namecheap, Cloudflare, Google Domains). -
Create an 'A' Record:
  You need to add an A record for the subdomain you want to use.
  - Type: A
  - Name/Host: Enter the subdomain part. For app1.yourdomain.com, you would enter app1. Some providers might require the full app1.yourdomain.com.
  - Value/Points to: Enter the public IP address of your server where NPM is running.
  - TTL (Time To Live): You can usually leave this at the default (e.g., 1 hour or "Automatic"). Lower TTLs (like 5 minutes) are useful during testing as changes propagate faster, but can increase DNS query load.
  For example, if your server's public IP is 203.0.113.45 and your domain is example.com, you'd create an A record for app1 pointing to 203.0.113.45.
- Wait for DNS Propagation: DNS changes can take some time to propagate across the internet. This can range from a few minutes to several hours (though usually much quicker for new records). You can use online tools like dnschecker.org to check if your new A record is visible from different parts of the world.
Creating a Proxy Host in NPM UI
Once your DNS record is set up (or at least you've initiated the setup and are waiting for propagation), you can configure the Proxy Host in Nginx Proxy Manager.
-
Log in to NPM: Open your browser and go to http://<your_server_ip>:81, then log in.
Navigate to Proxy Hosts: In the NPM dashboard, click on "Hosts" in the top menu, then select "Proxy Hosts".
-
Add Proxy Host: Click the "Add Proxy Host" button. You'll see a dialog with several tabs. We'll focus on the "Details" tab for now.
Fill in the "Details" Tab:
- Domain Names:
  - Enter the full subdomain you configured in DNS, e.g., app1.yourdomain.com.
  - You can add multiple domain names here if you want them all to point to the same backend service (e.g., www.app1.yourdomain.com). Add one per line.
- Scheme:
  - Select http. Our whoami-app container is listening for HTTP traffic.
- Forward Hostname / IP:
  - Enter the IP address of your Docker host (the server where whoami-app is running and exposing port 8000).
  - You can often use 127.0.0.1 or localhost if NPM can resolve it to the host machine from within its container. However, a more robust method is often to use the host's specific internal IP address on the Docker bridge network (commonly 172.17.0.1 by default if NPM is on the default bridge network) or the host's primary network interface IP.
  - For this workshop, try your server's main network interface IP (e.g., 192.168.1.50 if that's your server's LAN IP, or its public IP if you are certain no firewall blocks this internal communication path, though using an internal IP is preferred).
  - A safer bet for services running on the same Docker host but exposed via host ports is usually the host's IP on the docker0 bridge or a specific LAN IP.
- Forward Port:
  - Enter 8000 (this is the host port we mapped for whoami-app).
- Block Common Exploits:
  - It's generally a good idea to toggle this ON. This enables some basic Nginx rules to protect against common web vulnerabilities.
- Websockets Support:
  - Leave this OFF for now. The whoami-app service does not use WebSockets. You would enable this for applications that require real-time, two-way communication (like chat apps, live dashboards).
-
Save the Configuration: Click the "Save" button.
Your new proxy host will appear in the list.
Testing Your First Proxy Host
-
Open Your Browser: Navigate to http://app1.yourdomain.com (using the domain you configured, over HTTP for now, as we haven't set up SSL yet).
- Check the Result: If everything is configured correctly (NPM, backend service, and DNS), you should see the output from the whoami-app service, similar to what you saw with curl http://localhost:8000, but this time served through your domain name via Nginx Proxy Manager. The "Host" header in the whoami output should now reflect app1.yourdomain.com.
Troubleshooting Common Issues
If it doesn't work, here are some common things to check:
- DNS Propagation:
  - Use ping app1.yourdomain.com from your computer or an online DNS checker. Does it resolve to your server's public IP address? If not, wait longer or double-check your DNS record. (See the command examples after this list.)
- Firewall:
  - Is port 80 (HTTP) open on your server's firewall and forwarded correctly from your router if applicable?
- NPM Container:
  - Is the NPM container (npm-app or similar) running? Check with docker ps.
  - Check the NPM logs: docker logs npm-app. Look for any error messages related to app1.yourdomain.com.
- Backend Service (whoami-app):
  - Is the whoami-app container running? Check with docker ps.
  - Is it accessible directly on the host at http://<your_server_ip>:8000? If not, the backend service itself has a problem.
- NPM Proxy Host Configuration:
  - Double-check the Forward Hostname / IP and Forward Port in NPM.
  - If NPM and whoami-app are on the same host, the Forward Hostname / IP should be an IP address through which NPM can reach the host's port 8000. This could be the host's main LAN IP or the IP of the docker0 bridge (often 172.17.0.1). Avoid using localhost or 127.0.0.1 directly in NPM's forward host field unless you are certain of your Docker networking setup, as localhost inside the NPM container refers to the NPM container itself, not the Docker host.
- Browser Cache:
  - Try clearing your browser cache or using an incognito/private window, especially if you were trying to access the domain before DNS fully propagated.
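Two command-line checks that often speed up this kind of troubleshooting (dig is part of the dnsutils/bind-utils package on most distributions, so treat its availability as an assumption):
```bash
# Does the name resolve to your server's public IP yet?
dig +short app1.yourdomain.com

# Ask NPM directly for the site, bypassing DNS, by setting the Host header yourself
# (replace <your_server_public_ip> with your server's address)
curl -H "Host: app1.yourdomain.com" http://<your_server_public_ip>/
```
If the curl request returns the whoami output while the browser does not, the problem is almost certainly DNS rather than NPM.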
Once you have http://app1.yourdomain.com working, you've successfully set up your first reverse proxy rule! The next step is to secure it with SSL/TLS.
Workshop Exposing a Simple Web Service
Objective: To make a simple web service (the containous/whoami application) accessible from the internet via a subdomain using Nginx Proxy Manager, initially over HTTP.
Prerequisites:
- Nginx Proxy Manager installed and running (from the Workshop in section 1).
- You have a registered domain name (e.g., yourworkshopdomain.com) for which you can manage DNS records.
- The server running NPM has a public IP address, and ports 80 and 443 are open and forwarded to it.
Steps:
-
Deploy the whoami Web Service: If you haven't already, SSH into your server and run the containous/whoami Docker container, exposing it on host port 8000, with docker run -d --name whoami-service -p 8000:80 containous/whoami.
  (If you already ran this as whoami-app, you can reuse it, or stop and remove the old one (docker stop whoami-app && docker rm whoami-app) before running this command to avoid name conflicts.)
- Verify Local Access to whoami: On your server, confirm you can access the whoami service with curl http://localhost:8000. You should see output from whoami. Note down your server's primary LAN IP address (e.g., 192.168.1.X) or the IP address of its docker0 interface (often 172.17.0.1, as shown by ip addr show docker0). This will be used as the "Forward Hostname / IP".
Configure DNS:
- Log in to your domain registrar's or DNS provider's control panel.
- Create a new A record for a subdomain. Let's use proxytest.yourworkshopdomain.com.
  - Type: A
  - Name/Host: proxytest (or proxytest.yourworkshopdomain.com)
  - Value/Points to: Your server's public IP address.
  - TTL: Set to a low value like 300 seconds (5 minutes) for faster testing, or leave as default.
- Save the DNS record. Wait a few minutes for it to start propagating. You can check its status using a tool like https://dnschecker.org/#A/proxytest.yourworkshopdomain.com
-
Create the Proxy Host in NPM:
- Open your browser and log in to the Nginx Proxy Manager admin UI (http://<your_server_public_ip>:81).
- Go to "Hosts" -> "Proxy Hosts".
- Click "Add Proxy Host".
- Details Tab:
  - Domain Names: Enter proxytest.yourworkshopdomain.com (use your actual domain).
  - Scheme: Select http.
  - Forward Hostname / IP: Enter the IP address of your server where port 8000 is exposed by the whoami-service container. This should be an IP address reachable by the NPM container.
    - Commonly, this is the Docker host's IP on the docker0 bridge network (e.g., 172.17.0.1).
    - Alternatively, use your server's primary LAN IP if NPM can reach it.
    - Avoid using localhost or 127.0.0.1 here unless you are very sure about your Docker network configuration, as these resolve to the NPM container itself, not the host.
  - Forward Port: Enter 8000.
  - Block Common Exploits: Toggle ON.
  - Websockets Support: Leave OFF.
- Click "Save".
-
Test Public Access:
  - Wait for DNS propagation to complete (this might take a few minutes to an hour, depending on your TTL and DNS provider).
  - Open a new browser tab or an incognito window and navigate to: http://proxytest.yourworkshopdomain.com (using HTTP, not HTTPS yet).
  - You should see the output from the whoami-service application. The "Host" field in the output should now display proxytest.yourworkshopdomain.com.
Verification:
- You can successfully access the whoami service using http://proxytest.yourworkshopdomain.com.
- The information displayed by whoami reflects that the request came via your configured subdomain.
Troubleshooting during the workshop:
- "Site can't be reached" / DNS error:
DNS record is not yet propagated or is incorrect. Double-check the A record and usednschecker.org
. - NPM's default "Congratulations" page:
Your DNS is likely pointing to the server, but NPM doesn't have a proxy host entry for the exact domain name you typed, or there's an issue with the proxy host config. Check for typos in the "Domain Names" field in NPM. - "502 Bad Gateway":
NPM is running and received the request, but it cannot connect to theForward Hostname / IP
andForward Port
you specified.- Is the
whoami-service
container running (docker ps
)? - Is the
Forward Hostname / IP
correct and reachable from within the NPM container? - Is the
Forward Port
(8000
) correct? - Check the NPM logs (
docker logs <npm_container_name>
) for more specific error messages.
- Is the
- "404 Not Found" from
whoami
:
This is unlikely withwhoami
unless the path is wrong, but if you were proxying a real app, this could mean the app itself is returning a 404.
Once this HTTP access is confirmed, you are ready to secure this connection with SSL in the next section.
3. Securing Your Services with SSL/TLS (Let's Encrypt)
Exposing services over HTTP is functional, but it's insecure. Any data transmitted between the user's browser and your server (including passwords or sensitive information) is sent in plain text, vulnerable to interception. HTTPS (HTTP Secure), which uses SSL/TLS (Secure Sockets Layer/Transport Layer Security) protocols, encrypts this communication, ensuring privacy and data integrity.
Importance of HTTPS
- Encryption: Protects data in transit from eavesdropping. If an attacker intercepts the communication, they'll only see scrambled, unreadable data.
- Authentication: Verifies that the user is communicating with the legitimate server they intended to reach, not an imposter. This is done through SSL certificates issued by trusted Certificate Authorities (CAs).
- Data Integrity: Ensures that the data exchanged has not been tampered with during transmission.
- User Trust: Browsers display visual cues (like a padlock icon) for HTTPS sites, reassuring users that their connection is secure. Modern browsers actively warn users about non-HTTPS sites, especially if they handle forms.
- SEO Benefits: Search engines like Google give a slight ranking boost to HTTPS-enabled websites.
- Modern Web Features: Many new browser features and APIs are available only over HTTPS connections.
Introduction to Let's Encrypt
Traditionally, obtaining SSL certificates involved purchasing them from commercial CAs, which could be costly and sometimes a cumbersome process. Let's Encrypt revolutionized this by providing:
- Free Certificates: Let's Encrypt is a non-profit Certificate Authority that provides SSL/TLS certificates at no cost.
- Automated Process: The entire process of obtaining, installing, and renewing certificates can be automated using software that implements the ACME (Automated Certificate Management Environment) protocol.
- Open and Transparent: It's an open initiative with a strong focus on security and transparency.
How Let's Encrypt Works (Simplified for HTTP-01 Challenge):
- Request: Your ACME client (in our case, Nginx Proxy Manager) tells Let's Encrypt you want a certificate for yourdomain.com.
- Challenge: Let's Encrypt gives your client a challenge to prove you actually control yourdomain.com. For the HTTP-01 challenge (the most common one used by NPM for non-wildcard domains), Let's Encrypt asks your client to place a specific file with specific content at a known URL on your web server (e.g., http://yourdomain.com/.well-known/acme-challenge/<random_token>).
- Verification: Let's Encrypt's servers attempt to download this file from that URL.
- Issuance: If the file is found and the content matches, Let's Encrypt issues the SSL certificate for yourdomain.com.
- Installation & Renewal: Your client (NPM) installs the certificate. Let's Encrypt certificates are typically valid for 90 days. Well-behaved ACME clients like NPM will automatically attempt to renew them (usually around 30 days before expiry) by repeating a similar challenge process.
For the HTTP-01 challenge to work, your server must be reachable from the internet on port 80, as Let's Encrypt will make an HTTP request to validate domain ownership.
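A quick sanity check before requesting a certificate is to confirm that port 80 actually reaches NPM from the outside. One rough way, run from a machine outside your network (or via an online HTTP checker), is sketched below; any HTTP response at all, even a 404 or NPM's default page, proves reachability:
```bash
# A response means port 80 is reachable from the internet;
# a timeout or "connection refused" points to firewall or port-forwarding issues.
curl -I http://yourdomain.com/
```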
NPM's Integrated Let's Encrypt Support
One of the most compelling features of Nginx Proxy Manager is its built-in, user-friendly support for Let's Encrypt. It handles:
- Requesting new SSL certificates.
- Automatically performing the HTTP-01 challenge.
- Installing the issued certificates.
- Automatically renewing certificates before they expire.
This abstracts away almost all the manual work typically associated with SSL certificate management.
Requesting an SSL Certificate in NPM
Let's secure the app1.yourdomain.com (or proxytest.yourworkshopdomain.com from the workshop) proxy host we set up earlier.
-
Edit Your Existing Proxy Host:
- Log in to your NPM admin UI (
http://<your_server_ip>:81
). - Go to "Hosts" -> "Proxy Hosts".
- Find the proxy host you created (e.g.,
app1.yourdomain.com
). - Click the three-dot menu icon (â‹®) on the right side of its entry and select "Edit".
- Log in to your NPM admin UI (
-
Navigate to the "SSL" Tab: In the "Edit Proxy Host" dialog, click on the "SSL" tab.
-
Configure SSL Settings:
- SSL Certificate:
  - Click the dropdown menu. It will likely say "None".
  - Select "Request a new SSL Certificate".
- Let's Encrypt Email Address for Expiry Notifications:
  - Enter a valid email address. Let's Encrypt will use this to send you notifications if your certificate is nearing expiration and automatic renewal is failing for some reason. This should ideally be the same email you configured for your NPM admin user.
- Force SSL:
  - Toggle this ON. This option will automatically redirect all HTTP requests for this domain to HTTPS. This is highly recommended.
- HTTP/2 Support:
  - Toggle this ON. HTTP/2 is a newer version of the HTTP protocol that offers performance improvements (like multiplexing and header compression) over HTTP/1.1. Most modern browsers support it, and it's generally beneficial to enable.
- HSTS Enabled (HTTP Strict Transport Security):
  - For now, you might want to leave this OFF during initial testing, or understand its implications fully before enabling it.
  - Explanation: If enabled, NPM will send a Strict-Transport-Security header to browsers. This header tells the browser that it should only communicate with this site using HTTPS for a specified period (max-age). Even if the user types http:// or clicks an HTTP link, the browser will automatically upgrade the request to HTTPS before sending it.
  - Implication: Once a browser receives an HSTS header, it will remember this policy. If you later have issues with your SSL setup or want to disable HTTPS (which is highly discouraged), users whose browsers have seen the HSTS header will be unable to access your site over HTTP until the HSTS max-age expires. This can make troubleshooting SSL issues difficult.
  - Recommendation: Enable HSTS once you are confident your HTTPS setup is stable and you intend to use HTTPS permanently. NPM also has an "HSTS Subdomains" option; be cautious with this, as it applies the HSTS policy to all subdomains.
- Agree to Let's Encrypt Terms of Service:
  - You must toggle this ON to agree to the Let's Encrypt Subscriber Agreement.
-
Save and Automated Process:
- Click the "Save" button.
- NPM will now automatically communicate with Let's Encrypt to request and validate the certificate. This process usually takes a few seconds to a minute.
- During this time, NPM will temporarily configure Nginx to serve the challenge file required by Let's Encrypt over HTTP (port 80).
- If successful, NPM will install the certificate, and your proxy host will be configured to serve traffic over HTTPS.
-
Test Your Secure Site:
- Open your browser and navigate to https://app1.yourdomain.com (note the https://).
- You should see the padlock icon in your browser's address bar, indicating a secure connection.
- Click the padlock to view certificate details; it should be issued by Let's Encrypt.
- Try navigating to http://app1.yourdomain.com. If you enabled "Force SSL", you should be automatically redirected to the https:// version.
Understanding Certificate Renewal
Let's Encrypt certificates are typically valid for 90 days. Nginx Proxy Manager automatically handles the renewal process. By default, it checks for certificates due for renewal periodically (usually daily) and attempts to renew any that are within about 30 days of expiry.
For automatic renewal to work:
- Your Nginx Proxy Manager container must be running.
- Your server must remain accessible from the internet on port 80 (for the HTTP-01 challenge, even if you force all traffic to HTTPS). Some configurations might also require port 443.
- Your DNS records must still correctly point your domain to your server's IP.
You generally don't need to do anything manually for renewals; NPM takes care of it.
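If you ever want to confirm when the currently served certificate expires (for example, to verify that a renewal went through), you can query it from any machine with OpenSSL installed; a minimal sketch:
```bash
# Print the notBefore/notAfter dates of the certificate served for your domain
echo | openssl s_client -connect app1.yourdomain.com:443 -servername app1.yourdomain.com 2>/dev/null \
  | openssl x509 -noout -dates
```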
Troubleshooting SSL Issues
If certificate issuance fails, NPM will usually revert to the previous setting (no SSL or the old certificate). Here are common causes:
- DNS Propagation Not Complete: Let's Encrypt's servers couldn't resolve your domain name to your server's IP address. Wait longer or verify your DNS record.
- Port 80 Blocked: Your server's firewall, or your router's firewall/port forwarding, is blocking incoming traffic on port 80. Let's Encrypt needs to access your server on port 80 for the HTTP-01 challenge.
- Temporarily allow port 80 if it's blocked, try issuing the certificate, and then you can decide on your port 80 policy (though keeping it open for renewals is best).
- Incorrect Web Root or Challenge Path: NPM usually handles this correctly, but if there's a misconfiguration, the challenge file might not be served from the expected location.
- Rate Limits from Let's Encrypt: Let's Encrypt has rate limits to prevent abuse (e.g., too many failed attempts, too many certificates for the same domain in a short period). If you hit a rate limit, you'll need to wait (often an hour or up to a week for some limits) before trying again. Check the Let's Encrypt documentation for current rate limits.
- NPM Logs: Always check the NPM container logs for error messages (docker logs <npm_container_name>). Look for lines related to "certbot", "Let's Encrypt", or your domain name around the time you tried to issue the certificate. The logs often provide specific reasons for failure.
- CAA Records: If you have DNS Certification Authority Authorization (CAA) records configured for your domain, ensure they permit Let's Encrypt (letsencrypt.org) to issue certificates. If you don't know what CAA records are, you probably don't have them, or they are not causing the issue.
By successfully adding an SSL certificate, you've made a significant step in securing your self-hosted service!
Workshop Adding SSL to Your Proxy Host
Objective: To secure the proxytest.yourworkshopdomain.com service (created in the previous workshop) with a free SSL certificate from Let's Encrypt using Nginx Proxy Manager.
Prerequisites:
- The proxytest.yourworkshopdomain.com proxy host from the previous workshop is working correctly over HTTP.
- Your server is reachable from the internet on both port 80 and port 443.
  - Port 80 is required for Let's Encrypt's HTTP-01 challenge to validate domain ownership.
  - Port 443 will be used for the actual HTTPS traffic once the certificate is installed.
- DNS for proxytest.yourworkshopdomain.com is correctly pointing to your server's public IP.
Steps:
-
Log in to Nginx Proxy Manager: Access your NPM admin UI at http://<your_server_public_ip>:81.
Edit the Proxy Host:
- Navigate to "Hosts" -> "Proxy Hosts".
- Locate the entry for
proxytest.yourworkshopdomain.com
. - Click the three-dot menu (â‹®) on its right and select "Edit".
-
Configure SSL Settings in the "SSL" Tab:
- Click on the "SSL" tab.
- SSL Certificate: From the dropdown, select "Request a new SSL Certificate".
- Let's Encrypt Email Address: Enter your valid email address (e.g., the one you used for the NPM admin user). This is for important renewal notifications.
- Force SSL: Toggle this ON. This ensures users are always redirected to the secure HTTPS version.
- HTTP/2 Support: Toggle this ON for better performance.
- HSTS Enabled: For this workshop, let's enable it to see its effect, but be mindful of its "stickiness." If you prefer to be cautious, you can leave it off for now and enable it later once you're fully comfortable. If you do enable it, use the default max-age. Do not enable "HSTS Subdomains" unless you understand the full implications for all your other subdomains.
- I Agree to the Let's Encrypt Terms of Service: Toggle this ON. You must agree to their terms.
-
Save and Initiate Certificate Request:
- Review your settings.
- Click the "Save" button.
- NPM will now attempt to obtain the SSL certificate from Let's Encrypt. This may take a few moments. You might see a "Processing..." indicator.
-
Test the Secure Connection:
- Once the process completes (the dialog closes or you see a success message/status update), open a new browser tab or an incognito window.
- Navigate to
https://proxytest.yourworkshopdomain.com
(ensure you usehttps://
). - Check for the Padlock: Your browser should display a padlock icon in the address bar, indicating a secure connection.
- Inspect Certificate Details: Click on the padlock icon. You should be able to view the certificate details. Verify that it was issued by "Let's Encrypt" (or R3, which is an intermediate CA for Let's Encrypt) and is valid for
proxytest.yourworkshopdomain.com
. Check the validity dates. - Test HTTP to HTTPS Redirection: Try navigating to
http://proxytest.yourworkshopdomain.com
(withhttp
). If "Force SSL" is working, you should be automatically redirected tohttps://proxytest.yourworkshopdomain.com
. - (If HSTS was enabled) After successfully accessing the HTTPS site, try typing
http://proxytest.yourworkshopdomain.com
again. Your browser (if it supports HSTS and has processed the header) might directly go to HTTPS without even making an initial HTTP request.
Verification:
- You can access https://proxytest.yourworkshopdomain.com successfully.
- The browser shows a valid SSL certificate issued by Let's Encrypt.
- HTTP requests are automatically redirected to HTTPS.
- (Optional) Check the NPM logs (docker logs <npm_container_name>) to see the interaction with Let's Encrypt (you might see lines from certbot).
Troubleshooting during the workshop:
- Certificate request fails:
- Port 80 not accessible: This is the most common issue. Double-check your firewall and router port forwarding for port 80 TCP to your NPM server. Let's Encrypt must be able to reach your server on port 80 over the public internet.
- DNS issues: Ensure proxytest.yourworkshopdomain.com resolves correctly to your public IP from external locations (use an online DNS checker).
- Let's Encrypt Rate Limits: If you've tried too many times or have other issues, you might be temporarily rate-limited. The NPM logs should indicate this.
- Typos in domain name: Ensure the domain name in NPM matches your DNS record exactly.
- Mixed Content Warnings (after SSL is active): If your backend application (the whoami service in this case, though it's too simple to cause this) tries to load resources (images, scripts, CSS) over HTTP while the main page is HTTPS, browsers will show "mixed content" warnings. This is an application-level issue, not an NPM SSL issue. The solution is to ensure your backend application serves all content over HTTPS or uses relative URLs correctly.
Congratulations! Your self-hosted service is now securely accessible over HTTPS, thanks to Nginx Proxy Manager and Let's Encrypt.
4. Managing Multiple Services and Subdomains
As your self-hosting journey progresses, you'll likely want to expose more than just one application. Nginx Proxy Manager (NPM) excels at managing multiple services, each accessible via its own unique subdomain, and all centrally managed through its user interface.
Organizing Your Services
A common and clean way to organize access to multiple services is by using subdomains. For example, if your main domain is yourdomain.com, you might set up:
- blog.yourdomain.com for your blogging platform.
- cloud.yourdomain.com for your personal cloud storage (e.g., Nextcloud).
- git.yourdomain.com for your self-hosted Git server (e.g., Gitea).
- photos.yourdomain.com for your photo gallery.
This approach is user-friendly and allows for distinct configurations (including SSL certificates and access controls) for each service.
Adding More Proxy Hosts
For each new service you want to expose:
- Deploy the Backend Service: Ensure the application is running (e.g., as a Docker container or a native service) and listening on a specific internal IP address and port.
- Configure DNS: Create a new A record (or CNAME if appropriate) for the chosen subdomain (e.g., serviceX.yourdomain.com), pointing to the public IP address of your NPM server. Wait for DNS propagation.
- Create a New Proxy Host in NPM:
- In NPM, go to "Hosts" -> "Proxy Hosts" and click "Add Proxy Host".
- Details Tab:
  - Domain Names: Enter the new subdomain (e.g., serviceX.yourdomain.com).
  - Scheme: http (most common for internal backends) or https (if the backend itself requires HTTPS).
  - Forward Hostname / IP: The internal IP or Docker container name of your new backend service.
  - Forward Port: The port your new backend service is listening on.
  - Toggle "Block Common Exploits" and "Websockets Support" as needed.
- SSL Tab:
- Select "Request a new SSL Certificate".
- Ensure "Force SSL" and "HTTP/2 Support" are enabled.
- Provide your Let's Encrypt email and agree to the ToS.
- Save the proxy host. NPM will attempt to obtain the SSL certificate.
Repeat this process for every service you wish to make accessible.
Using Wildcard Certificates
If you plan to host many services under the same parent domain (e.g., numerous subdomains of yourdomain.com), managing individual SSL certificates for each can become repetitive. A wildcard certificate, denoted as *.yourdomain.com, covers all first-level subdomains (e.g., blog.yourdomain.com, cloud.yourdomain.com, but NOT test.blog.yourdomain.com). Note that the wildcard name by itself does not cover the bare domain yourdomain.com; that is why NPM requests the certificate with both the root domain and the wildcard, as shown in the steps below.
Pros of Wildcard Certificates:
- Simplified Management: One certificate covers multiple subdomains. You only need to go through the issuance and renewal process once for all associated services.
- Faster Subdomain Deployment: Once you have a wildcard certificate, adding a new proxy host for a new subdomain under its coverage doesn't require a new SSL certificate request specifically for that subdomain; you just assign the existing wildcard certificate.
Cons and Considerations for Wildcard Certificates:
- DNS Challenge Required: Let's Encrypt requires you to prove domain ownership using a DNS-01 challenge for wildcard certificates. The HTTP-01 challenge (placing a file on your web server) is not sufficient because it cannot prove control over all potential subdomains.
- The DNS-01 challenge involves programmatically creating a specific TXT DNS record for your domain. NPM can automate this if your DNS provider is supported and you provide API credentials.
- Security Implications: If the private key of your wildcard certificate is compromised, all subdomains covered by it are also compromised. With individual certificates, the compromise of one key affects only that specific subdomain.
- Supported DNS Providers: NPM has built-in support for automating DNS challenges with several popular DNS providers (e.g., Cloudflare, DigitalOcean, GoDaddy, Namecheap, AWS Route 53, and many more). If your provider is not directly supported, you might need to use a more manual method or a third-party tool like
acme.sh
with a DNS alias mode, which is more advanced. - API Credentials Security: You'll need to provide NPM with API keys or tokens for your DNS provider. These credentials must be protected. It's crucial to use API tokens with the minimum necessary permissions (e.g., only permission to edit DNS records for the specific zone/domain).
Setting Up a Wildcard Certificate in NPM (using DNS Challenge):
-
Obtain API Credentials from Your DNS Provider:
- Log in to your DNS provider's dashboard.
- Look for API access or token generation. Create an API key/token that has permission to create and delete TXT records for your domain zone.
- Crucially, follow the principle of least privilege. The token should only have the permissions it absolutely needs. Securely copy the API key/token.
- NPM's documentation (or the UI itself when you select a provider) often shows the format needed for the credentials.
-
Add SSL Certificate in NPM:
- In NPM, go to "SSL Certificates" in the top menu.
- Click "Add SSL Certificate" and choose "Let's Encrypt".
- Domain Names:
  - Enter your root domain on the first line (e.g., yourdomain.com).
  - Enter your wildcard domain on the second line (e.g., *.yourdomain.com).
- Use a DNS Challenge: Toggle this ON.
- DNS Provider: Select your DNS provider from the dropdown list.
- Credentials File Content / API Key Fields:
- NPM will show fields specific to the selected DNS provider. Enter your API key/token and any other required information (e.g., email, secret) in the format specified. For some providers, you might need to create a small INI-style text snippet.
- Example for Cloudflare using an API Token: see the credentials snippet after this list.
- Propagation Seconds (Optional): The time (in seconds) NPM should wait for DNS changes to propagate before asking Let's Encrypt to verify the TXT record. The default is often fine, but you might need to increase it for some slower DNS providers.
- Let's Encrypt Email Address: Enter your email for notifications.
- I Agree to the Let's Encrypt Terms of Service: Toggle ON.
- Click "Save".
NPM will use the provided API credentials to automatically create the necessary TXT DNS record, wait for propagation, ask Let's Encrypt to verify it, and then (if successful) remove the TXT record and save the certificate. This process might take a bit longer than an HTTP-01 challenge due to DNS propagation delays.
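For reference, the Cloudflare credentials content is typically a single line in the format used by certbot's Cloudflare plugin (the token value is a placeholder you replace with your own):

```ini
# Cloudflare API token with Zone / DNS / Edit permission for your zone
dns_cloudflare_api_token = YOUR_CLOUDFLARE_API_TOKEN
```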
-
Applying the Wildcard Certificate to Proxy Hosts:
- Once the wildcard certificate is successfully obtained, it will appear in your list of SSL certificates.
- When creating or editing a Proxy Host for a subdomain covered by the wildcard (e.g.,
service1.yourdomain.com
,service2.yourdomain.com
):- Go to the "SSL" tab.
- In the "SSL Certificate" dropdown, instead of "Request a new SSL Certificate," select your newly created wildcard certificate (e.g.,
yourdomain.com (*.yourdomain.com)
). - Ensure "Force SSL" and "HTTP/2 Support" are enabled.
- Save the proxy host.
Now, service1.yourdomain.com
will use the wildcard certificate. You don't need to request a separate certificate for it.
Docker Networking for NPM and Services
When NPM and your backend services are all running as Docker containers on the same host, using Docker's networking features can simplify configuration and enhance security.
-
Create a Custom Docker Network: It's best practice to create a custom bridge network for your proxy stack and related services. If you followed the initial NPM setup with docker-compose.yml, you might have already defined one (e.g., proxy-net). If not, you can create one manually with docker network create, or add it to your NPM docker-compose.yml:

```yaml
version: '3.8'
services:
  app:
    # ... other npm app configurations ...
    networks:
      - proxy-net   # Add this line
  # ... other services ...
networks:
  proxy-net:
    driver: bridge
```

Then run docker-compose up -d
to apply the change (it might recreate the NPM container to attach it to the new network). -
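If you prefer to create the network manually instead of declaring it in Compose, a minimal sketch (the name proxy-net matches the example above):

```bash
# Create a user-defined bridge network that NPM and your backend containers will share
docker network create proxy-net

# Confirm it exists
docker network ls | grep proxy-net
```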
Attach Backend Service Containers to the Same Network: When you run your backend service containers (e.g., Gitea, Nextcloud), ensure they are also connected to this
proxy-net
network.- If using
docker run
: - If using Docker Compose for your backend services (in a separate
docker-compose.yml
or the same one):

```yaml
services:
  my-gitea-service:
    image: gitea/gitea:latest
    networks:
      - proxy-net   # Assumes proxy-net is defined in this file or as an external network

networks:
  proxy-net:
    external: true   # If proxy-net was created by another compose file or manually
                     # Or define it here if this is the main compose file creating it
```
- If using
-
Use Container Names for Forwarding in NPM: Once NPM and a backend service (e.g.,
my-gitea-service
) are on the same custom Docker network, Docker's embedded DNS server allows containers to resolve each other by their container names.- In NPM's Proxy Host configuration for
git.yourdomain.com
:- Forward Hostname / IP:
my-gitea-service
(the name of the Gitea container). - Forward Port: The port Gitea listens on inside its container (e.g.,
3000
). - Scheme:
http
(as communication is internal to the Docker network).
- Forward Hostname / IP:
- In NPM's Proxy Host configuration for
Benefits of Using Custom Docker Networks:
- Simplified Configuration: No need to figure out host IPs or deal with
172.17.0.1
. Container names are more stable. - Enhanced Security: You don't need to expose the backend service's ports on the Docker host machine (
-p host_port:container_port
). The service is only accessible via NPM through the Docker network, reducing the host's attack surface. Communication happens over the isolated Docker network. - Cleaner Setup: Avoids port conflicts on the host if multiple services internally use common ports like 80 or 8080.
This approach is highly recommended for managing multiple containerized services with NPM.
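As a concrete sketch of this pattern (the container name my-gitea-service, the network name proxy-net, and the image tag are illustrative; adjust them to your setup):

```bash
# Option A: attach an already-running backend container to the shared proxy network
docker network connect proxy-net my-gitea-service

# Option B: start a new backend container directly on that network,
# without publishing any ports on the host
docker run -d \
  --name my-gitea-service \
  --network proxy-net \
  gitea/gitea:latest
```

In the matching Proxy Host you would then set Forward Hostname / IP to my-gitea-service, Forward Port to 3000, and Scheme to http.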
Workshop Exposing Multiple Services with Individual SSL Certificates
Objective: Expose two distinct simple web services on different subdomains (app1.yourworkshopdomain.com
and app2.yourworkshopdomain.com
), each secured with its own individual Let's Encrypt SSL certificate, and utilizing a shared Docker network for communication.
Prerequisites:
- NPM installed and running.
- A registered domain (
yourworkshopdomain.com
) where you can add DNS records. - Ports 80 and 443 open and forwarded to your NPM server.
- A custom Docker network (e.g.,
npm_proxy-net
) that your NPM container is connected to. If you used thedocker-compose.yml
from section 1 which definedproxy-net
, and your compose project was namednpm-stack
, the network might be namednpm-stack_proxy-net
. You can check withdocker network ls
anddocker inspect <npm_container_name>
to find the network NPM is on. For this workshop, let's assume your NPMdocker-compose.yml
is in a directory namednpm
and it definesproxy-net
, so the network becomesnpm_proxy-net
.
Steps:
-
Ensure NPM is on a Defined Network: Modify your NPM
docker-compose.yml
(e.g., in/opt/npm/docker-compose.yml
) if it doesn't already explicitly define and use a network:

```yaml
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: npm_app_workshop
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    environment:
      DB_SQLITE_FILE: "/data/database.sqlite"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:            # Add this section
      - proxy-net
networks:                # Add this section
  proxy-net:
    driver: bridge
    name: workshop_proxy_net   # Explicitly name the network
```

If you make changes, navigate to your NPM compose directory and run
docker-compose up -d
. This will create/recreate NPM on theworkshop_proxy_net
network. -
Deploy First Service (
whoami-1
): This will beapp1
.--network workshop_proxy_net
: Connects this container to our shared network.- Note: No
-p
port mapping is needed as NPM will access it via the Docker network.whoami
listens on port 80 internally.
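The exact command was not shown above; a minimal sketch, assuming the traefik/whoami image (any small HTTP test container works) and the network created in step 1:

```bash
docker run -d \
  --name whoami-app1 \
  --network workshop_proxy_net \
  traefik/whoami
```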
-
Deploy Second Service (Simple Nginx Page -
nginx-app2
): This will beapp2
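A minimal sketch for this second container (the local directory path is illustrative; the page text matches what the test step later expects):

```bash
# Create a tiny page to serve
mkdir -p ~/npm-workshop/app2
echo '<h1>Hello from Nginx App 2 via NPM!</h1>' > ~/npm-workshop/app2/index.html

# Run Nginx on the shared network, serving that page (no host ports published)
docker run -d \
  --name nginx-app2 \
  --network workshop_proxy_net \
  -v ~/npm-workshop/app2:/usr/share/nginx/html:ro \
  nginx:alpine
```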
. -
Verify Internal Service Accessibility (from NPM container): Find your NPM container name (e.g.,
Both commands should return the respective service's output, confirming network connectivity.npm_app_workshop
). -
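The checks themselves are along these lines (a sketch; it assumes the NPM container is named npm_app_workshop and that curl is available inside the image, otherwise wget -qO- works similarly):

```bash
docker exec npm_app_workshop curl -s http://whoami-app1:80
docker exec npm_app_workshop curl -s http://nginx-app2:80
```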
Configure DNS Records:
- Go to your DNS provider.
- Create an
A
record forapp1.yourworkshopdomain.com
pointing to your server's public IP. - Create an
A
record forapp2.yourworkshopdomain.com
pointing to your server's public IP. - Wait for DNS propagation.
-
Configure Proxy Host for
app1.yourworkshopdomain.com
:- In NPM, go to "Proxy Hosts" -> "Add Proxy Host".
- Details Tab:
- Domain Names:
app1.yourworkshopdomain.com
- Scheme:
http
- Forward Hostname / IP:
whoami-app1
(the container name) - Forward Port:
80
(internal port ofwhoami
) - Enable "Block Common Exploits".
- Domain Names:
- SSL Tab:
- Select "Request a new SSL Certificate".
- Enable "Force SSL", "HTTP/2 Support".
- Enter your email, agree to ToS.
- Save.
-
Configure Proxy Host for
app2.yourworkshopdomain.com
:- In NPM, "Add Proxy Host" again.
- Details Tab:
- Domain Names:
app2.yourworkshopdomain.com
- Scheme:
http
- Forward Hostname / IP:
nginx-app2
(the container name) - Forward Port:
80
(internal port ofnginx
) - Enable "Block Common Exploits".
- Domain Names:
- SSL Tab:
- Select "Request a new SSL Certificate".
- Enable "Force SSL", "HTTP/2 Support".
- Enter your email, agree to ToS.
- Save.
-
Test Public Access:
- Open
https://app1.yourworkshopdomain.com
in your browser. You should see thewhoami
output. - Open
https://app2.yourworkshopdomain.com
in your browser. You should see "Hello from Nginx App 2 via NPM!". - Verify SSL certificates are valid and issued by Let's Encrypt for each respective domain.
- Open
Verification:
Both subdomains are accessible via HTTPS, served by their respective backend containers, using individual SSL certificates, and leveraging Docker networking.
Workshop Setting Up a Wildcard Certificate (using Cloudflare for DNS Challenge)
Objective: Create a wildcard SSL certificate for *.yourworkshopdomain.com
using Cloudflare's DNS challenge and apply it to a new service app3.yourworkshopdomain.com
.
Prerequisites:
- Your domain
yourworkshopdomain.com
is managed by Cloudflare. - You have a Cloudflare account.
- NPM is running and connected to the
workshop_proxy_net
Docker network.
Steps:
-
Create a Cloudflare API Token:
- Log in to your Cloudflare dashboard.
- Go to "My Profile" (from the top right user icon) -> "API Tokens".
- Click "Create Token".
- Find the "Edit zone DNS" template and click "Use template".
- Permissions:
- Zone Resources:
Zone
-DNS
-Edit
(this should be pre-selected).
- Zone Resources:
- Zone Resources:
- Select
Include
-Specific zone
-yourworkshopdomain.com
.
- Select
- You can optionally set Client IP Address Filtering or TTL for the token's validity.
- Click "Continue to summary".
- Click "Create Token".
- Copy the generated API token immediately. You will not be able to see it again. Store it securely.
-
Add Wildcard SSL Certificate in NPM:
- In NPM, go to "SSL Certificates" -> "Add SSL Certificate" -> "Let's Encrypt".
- Domain Names:
- Line 1:
yourworkshopdomain.com
- Line 2:
*.yourworkshopdomain.com
- Line 1:
- Toggle ON "Use a DNS Challenge".
- DNS Provider: Select "Cloudflare" from the dropdown.
- Credentials File Content: Paste the following, replacing
YOUR_CLOUDFLARE_API_TOKEN
with the token you just copied: - Propagation Seconds: Leave default (e.g.,
120
) or adjust if your Cloudflare DNS updates are known to be slower/faster. - Let's Encrypt Email Address: Your email.
- Toggle ON "I Agree to the Let's Encrypt Terms of Service".
- Click "Save".
- NPM will now attempt to obtain the wildcard certificate. This might take a minute or two as it interacts with Cloudflare's API to set and verify TXT records. Monitor the NPM logs (
docker logs npm_app_workshop
) if curious or if it fails.
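For reference, the Credentials File Content pasted in this step is a single line (certbot Cloudflare-plugin format, with the placeholder replaced by your own token):

```ini
dns_cloudflare_api_token = YOUR_CLOUDFLARE_API_TOKEN
```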
-
Deploy a Third Service (
whoami-app3
): -
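The deployment command for this third container is not shown in the UI steps; a minimal sketch, again assuming the traefik/whoami image and the shared workshop network:

```bash
docker run -d \
  --name whoami-app3 \
  --network workshop_proxy_net \
  traefik/whoami
```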
Configure DNS for
app3
:- In Cloudflare, add an
A
record forapp3.yourworkshopdomain.com
pointing to your server's public IP. Ensure it's "Proxied" (orange cloud) if you want Cloudflare's CDN benefits, or "DNS Only" (grey cloud) if you want direct connection to NPM for this test. For simplicity with Let's Encrypt via NPM, "DNS Only" is often easier initially unless you specifically configure Cloudflare SSL modes carefully. Let's assume "DNS Only" for now.
- In Cloudflare, add an
-
Create Proxy Host for
app3.yourworkshopdomain.com
using the Wildcard Certificate:- In NPM, go to "Proxy Hosts" -> "Add Proxy Host".
- Details Tab:
- Domain Names:
app3.yourworkshopdomain.com
- Scheme:
http
- Forward Hostname / IP:
whoami-app3
- Forward Port:
80
- Enable "Block Common Exploits".
- Domain Names:
- SSL Tab:
- SSL Certificate: From the dropdown, select your newly created wildcard certificate (it will likely be named
yourworkshopdomain.com (*.yourworkshopdomain.com)
). - Enable "Force SSL" and "HTTP/2 Support".
- SSL Certificate: From the dropdown, select your newly created wildcard certificate (it will likely be named
- Click "Save".
-
Test Public Access for
app3
:- Open
https://app3.yourworkshopdomain.com
in your browser. - You should see the
whoami
output. - Inspect the SSL certificate. It should be the wildcard certificate covering
*.yourworkshopdomain.com
, issued by Let's Encrypt.
- Open
Verification:
app3.yourworkshopdomain.com
is accessible via HTTPS.- The SSL certificate used for
app3
is the wildcard certificate you created. - The services
app1
andapp2
should still be working with their individual certificates.
This demonstrates the power of wildcard certificates for managing multiple subdomains under a single SSL certificate, automated through NPM's DNS challenge feature.
5. Access Lists and Basic Authentication
While making your services publicly accessible is often the goal, some applications or specific parts of them (like admin panels) should be restricted to authorized users only. Nginx Proxy Manager provides "Access Lists" to control who can reach your proxied services, offering IP-based filtering and HTTP Basic Authentication.
Controlling Access to Your Services
Why restrict access?
- Private Services: You might host services intended only for personal use, family, or a small team (e.g., a personal document server, a private Git repository, a development environment).
- Admin Interfaces: Many web applications have administrative dashboards that should not be exposed to the general internet.
- Staging/Testing Environments: Services under development might need to be accessible to testers but not the public.
- Geographic Restrictions: Though more advanced, the principle of limiting access applies.
NPM's Access Lists provide a straightforward way to implement these controls at the reverse proxy level, before traffic even reaches your backend application.
NPM Access Lists
An Access List in NPM is a reusable set of rules that can be applied to one or more Proxy Hosts. It combines two main types of controls:
-
IP Whitelisting/Blacklisting (Access Tab):
- You can define lists of IP addresses or network ranges (using CIDR notation, e.g.,
192.168.1.0/24
or203.0.113.55
) that are either allowed (allow
) or denied (deny
) access. - The rules are processed in order. An Nginx
satisfy all
directive means a client must pass allallow
rules and not match anydeny
rules. NPM's "Satisfy Any" option (explained below) changes this logic.
- You can define lists of IP addresses or network ranges (using CIDR notation, e.g.,
-
HTTP Basic Authentication (Authorization Tab):
- You can define a list of username and password combinations.
- If a Proxy Host uses an Access List with Basic Auth configured, users attempting to access the service will be prompted by their browser for a username and password.
Creating an Access List in NPM:
- Log in to NPM.
- Navigate to "Access Lists" from the top menu.
- Click "Add Access List".
- Details Tab:
- Name: Give your Access List a descriptive name (e.g.,
AdminOnly
,LocalNetworkAccess
). - Satisfy Any: This is an important toggle.
- If ON (default): A client is granted access if they satisfy either the IP address criteria (e.g., their IP is in an
allow
list) OR they provide valid Basic Authentication credentials. This is useful if you want to allow access from specific IPs without a password, but require a password from anywhere else. - If OFF: A client must satisfy both the IP address criteria AND provide valid Basic Authentication credentials (if both are configured). This is more restrictive.
- If ON (default): A client is granted access if they satisfy either the IP address criteria (e.g., their IP is in an
- Name: Give your Access List a descriptive name (e.g.,
- Access Tab (IP Controls):
- Click "Add Whitelist Entry" or "Add Blacklist Entry".
- Enter the IP address or CIDR range.
- Click "Save Item".
- You can add multiple entries. The
Order
column determines processing sequence.Deny
rules are typically processed beforeAllow
rules if they overlap for the same IP. Nginx's default is that the first matchingallow
ordeny
rule wins, unless overridden by more specific rules. NPM's interface simplifies this; generally, if an IP matches adeny
rule, it's blocked. If it matches anallow
rule and nodeny
rule, it's permitted (subject to "Satisfy Any" and Basic Auth).
- Authorization Tab (Basic Auth):
- Click "Add User".
- Username: Enter the desired username.
- Password: Enter a strong password for this user.
- Click "Save Item".
- You can add multiple user accounts.
- Click "Save" to save the Access List.
- Details Tab:
Applying Access Lists to Proxy Hosts:
- Go to "Hosts" -> "Proxy Hosts".
- Edit the Proxy Host you want to protect.
- Go to the "Access List" tab.
- Select your newly created Access List from the "Access List" dropdown.
- Click "Save".
Now, when users try to access this Proxy Host, NPM will enforce the rules defined in the selected Access List.
Understanding HTTP Basic Authentication
HTTP Basic Authentication is a simple challenge-response mechanism built into the HTTP protocol.
- Initial Request: A user tries to access a protected resource.
- Challenge (401 Unauthorized): If the request doesn't include authentication credentials, or if they are invalid, the server (NPM in this case) responds with a
401 Unauthorized
status code and aWWW-Authenticate: Basic realm="Your Realm Name"
header. The "realm" is a descriptive string that might be shown to the user (NPM sets a default one). - Browser Prompt: The user's web browser sees the
401
andWWW-Authenticate
header and displays a pop-up dialog asking for a username and password. - Credentials Sent: The user enters their credentials. The browser then re-sends the original request, this time including an
Authorization
header:Authorization: Basic <base64_encoded_username:password>
The<username:password>
string is encoded using Base64. - Verification: The server receives the request, decodes the Base64 string from the
Authorization
header, and checks if the username and password match a configured user.- If valid: The server processes the request and returns the resource (e.g.,
200 OK
). - If invalid: The server responds with another
401 Unauthorized
, and the browser may prompt again.
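To make the encoding step concrete, here is a quick sketch using the example credentials from the workshop later in this section (Base64 is trivially reversible, which is why HTTPS is mandatory):

```bash
# Encode "username:password" exactly as the browser does for the Authorization header
echo -n 'workshopuser:SecureP@sswOrd123' | base64
# d29ya3Nob3B1c2VyOlNlY3VyZVBAc3N3T3JkMTIz

# curl can build the header for you when testing a protected proxy host
curl -u 'workshopuser:SecureP@sswOrd123' https://app1.yourworkshopdomain.com
```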
Security Considerations for Basic Auth:
- HTTPS is CRUCIAL: Base64 encoding is not encryption; it's easily reversible. If you use Basic Auth over an unencrypted HTTP connection, the username and password are sent in plain text (after Base64 decoding) and can be easily sniffed. Always use Basic Authentication only over HTTPS connections. NPM, when configured with SSL for the proxy host, ensures this.
- Not for High-Security Needs: Basic Auth is suitable for simple access control but is not as secure as more modern authentication mechanisms like OAuth2, OpenID Connect, or SAML, especially against brute-force attacks if weak passwords are used. It doesn't offer features like multi-factor authentication (MFA).
- Password Storage: NPM stores the Basic Auth passwords. Ensure your NPM instance itself is secured.
Use Cases for Access Lists
- Restricting Admin Panels:
- Application: Your Nextcloud instance at
cloud.yourdomain.com
. - Access List:
NextcloudAdmins
with Basic Auth users. - Applied to the
cloud.yourdomain.com
proxy host.
- Application: Your Nextcloud instance at
- Internal Network Access Only:
- Application: A development web server at
dev.yourdomain.com
. - Access List:
LocalNetworkOnly
with anallow
rule for your local network's IP range (e.g.,192.168.1.0/24
) and "Satisfy Any" ON, with no Basic Auth users. This effectively denies access from outside your local network.
- Application: A development web server at
- Staging Environment for Specific Clients:
- Application:
staging.yourdomain.com
. - Access List:
ClientAccess
withallow
rules for specific client IP addresses and perhaps Basic Auth as a fallback if their IP changes. "Satisfy Any" would be ON.
- Application:
- Protecting Sensitive Data Viewers:
- Application: A log viewer or monitoring dashboard at
logs.yourdomain.com
. - Access List:
MonitoringTeam
with Basic Auth users.
- Application: A log viewer or monitoring dashboard at
Access Lists provide a powerful first line of defense, managed conveniently through the NPM interface.
Workshop Securing a Service with Basic Authentication
Objective: Protect one of your previously exposed services (e.g., app1.yourworkshopdomain.com
) with HTTP Basic Authentication using an NPM Access List.
Prerequisites:
- At least one service (e.g.,
app1.yourworkshopdomain.com
from the previous workshop) exposed via NPM and secured with SSL (HTTPS). - NPM admin interface accessible.
Steps:
-
Log in to Nginx Proxy Manager: Access your NPM admin UI (
http://<your_server_public_ip>:81
). -
Create an Access List:
- Navigate to "Access Lists" from the top menu.
- Click the "Add Access List" button.
- Details Tab:
- Name: Enter a descriptive name, e.g.,
App1BasicAuthUsers
. - Satisfy Any: Leave this toggled ON for this workshop. This means if we were to add IP whitelists later, users from those IPs wouldn't need the password, but everyone else would. Since we're only doing auth for now, this setting's effect is minimal here.
- Name: Enter a descriptive name, e.g.,
- Authorization Tab:
- Click the "Add User" button.
- Username: Enter
workshopuser
. - Password: Enter a password, e.g.,
SecureP@sswOrd123
. (In a real scenario, use a strong, unique password). - Click "Save Item". You'll see the user added to the list.
- Access Tab:
- For this workshop, we will not add any IP-based rules. Leave this section empty.
- Click the main "Save" button at the bottom to save the entire Access List. You should see
App1BasicAuthUsers
in your list of Access Lists.
-
Apply the Access List to Your Proxy Host:
- Navigate to "Hosts" -> "Proxy Hosts".
- Find the proxy host for
app1.yourworkshopdomain.com
(or whichever service you chose to protect). - Click the three-dot menu (â‹®) on its right and select "Edit".
- Go to the "Access List" tab.
- From the "Access List" dropdown menu, select the
App1BasicAuthUsers
list you just created. - Click "Save".
-
Test the Basic Authentication:
- Open a new incognito/private browser window. This is important to ensure you're not using any cached credentials or sessions from previous visits.
- Navigate to
https://app1.yourworkshopdomain.com
. - Prompt: Your browser should now display a pop-up dialog prompting you for a username and password. The realm might say something like "Restricted Access".
- Test with incorrect credentials:
- Enter a wrong username or password and click "Sign In" / "OK".
- Access should be denied, and you should be re-prompted.
- Test with correct credentials:
- Enter Username:
workshopuser
- Enter Password:
SecureP@sswOrd123
(or whatever you set) - Click "Sign In" / "OK".
- Enter Username:
- You should now be granted access and see the
whoami
output (or your chosen service's page). - Close the incognito window. Open another one and try accessing the page again. It should prompt you again.
Verification:
- Accessing
https://app1.yourworkshopdomain.com
prompts for authentication. - Correct credentials (
workshopuser
/SecureP@sswOrd123
) grant access. - Incorrect credentials deny access.
- The service remains accessible over HTTPS.
You have successfully secured a service using HTTP Basic Authentication managed by Nginx Proxy Manager's Access Lists. This provides an essential layer of control for your self-hosted applications.
6. Custom Nginx Configurations
Nginx Proxy Manager (NPM) offers a user-friendly interface that covers a vast majority of common reverse proxy needs. However, Nginx is an incredibly powerful and flexible web server, and there might be situations where you need to implement specific Nginx directives that aren't directly exposed through NPM's UI. For these scenarios, NPM provides an "Advanced" tab in the Proxy Host settings, allowing you to inject custom Nginx configuration snippets.
When NPM's UI Isn't Enough
You might need custom Nginx configurations for:
- Adding Custom HTTP Headers: Setting security headers like
Content-Security-Policy
(CSP),X-Frame-Options
,Referrer-Policy
, or custom application-specific headers. - Modifying Existing Headers: Changing or removing headers set by the backend application or Nginx itself.
- Setting
client_max_body_size
: Increasing the maximum allowed size of the client request body, essential for applications that handle large file uploads. - Implementing Advanced Rate Limiting: While Nginx can do sophisticated rate limiting, NPM's UI doesn't expose these. Custom configs can set up
limit_req_zone
andlimit_req
. - Custom Error Pages: Although NPM has a basic feature for setting a default 404 page, you might want more granular control over error pages for specific proxy hosts or error codes using
error_page
directives. - URL Rewrites or Redirects with Complex Logic: For rewrites beyond simple redirection hosts, you might use
rewrite
directives. - Fine-tuning Caching Behavior: Implementing specific
proxy_cache_path
,proxy_cache_key
,proxy_cache_valid
directives for server-side caching (though this can be complex). - Specific Location Block Configurations: Applying different rules or proxy settings for specific URL paths within a domain (e.g.,
/api
,/static
).
Accessing the "Advanced" Tab in Proxy Host Settings
For any Proxy Host you've defined in NPM:
- Go to "Hosts" -> "Proxy Hosts".
- Edit the desired Proxy Host.
- Navigate to the "Advanced" tab.
You'll find a text area labeled "Custom Nginx Configuration". Any valid Nginx configuration directives you place here will be injected into the server
block that NPM generates for that specific proxy host.
How NPM Incorporates Custom Configurations:
NPM generates the main Nginx configuration file for each proxy host based on your UI settings. The content from the "Custom Nginx Configuration" box is typically inserted towards the end of the server { ... }
block for that host, but before the final closing brace }
. This means your custom directives will generally apply to the entire server block unless you use location
blocks within your custom config to target specific paths.
Context is Key:
- Directives placed directly in the custom config box are usually in the
server
context. - If you need to apply directives to a specific path, you must define a
location /path/to/target { ... }
block within your custom configuration.
Common Customizations
Here are a few examples of common custom Nginx configurations you might use:
-
Adding Security Headers: These headers instruct browsers on how to handle your content, mitigating certain types of attacks like clickjacking or cross-site scripting (XSS).
```nginx
# In the Custom Nginx Configuration box:

# Prevents the site from being framed (helps against clickjacking)
# SAMEORIGIN: Allows framing only by pages from the same origin.
# DENY: Disallows framing entirely.
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevents browsers from MIME-sniffing the content type
add_header X-Content-Type-Options "nosniff" always;

# Controls how much referrer information is sent with requests
# strict-origin-when-cross-origin: Sends full URL for same-origin, only origin for
# cross-origin HTTPS->HTTPS, no referrer for HTTP.
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Content Security Policy (CSP) - This is powerful but complex.
# Start with a restrictive policy and loosen as needed.
# This example allows resources only from the same origin.
# add_header Content-Security-Policy "default-src 'self';" always;
# CAUTION: A misconfigured CSP can break your site. Test thoroughly.

# The 'always' parameter ensures the header is added regardless of response code.
```
-
Setting client_max_body_size for Large File Uploads: By default, Nginx has a small limit for the client request body size (often 1MB). If your application allows users to upload larger files, you'll need to increase this.
Note: Your backend application may also have its own upload size limits that need to be configured separately (e.g., in PHP's php.ini, upload_max_filesize and post_max_size).
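For example, to allow uploads of up to 100 MB (the value is illustrative; match it to your application's needs):

```nginx
# In the Custom Nginx Configuration box of the proxy host
client_max_body_size 100M;
```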
Custom Error Pages: To show a branded error page instead of Nginx's default for a 502 error:
```nginx
# Assume you have /custom_502.html available at your webroot (defined elsewhere or a full path)
# This is a simplified example; actual path resolution needs care.
# A common way is to define a location block for error pages.
#
# location /custom_error_pages/ {
#     root /path/to/your/error_docs;   # Ensure Nginx can access this path
#     internal;                        # Makes this location only accessible via Nginx internal redirects
# }
#
# error_page 502 /custom_error_pages/my_custom_502_page.html;

# For a simpler approach, if NPM serves from a consistent root for errors:
# error_page 502 /custom_502.html;
# You'd need to ensure this file exists where Nginx expects it.
# NPM itself might have specific ways it handles its own error pages,
# so custom error pages for proxied services might need careful pathing.
```

Note: True custom error pages that reference files on disk can be tricky with NPM's Dockerized setup unless those files are mounted into the NPM container or served by another accessible service. A more straightforward approach might involve proxying to a dedicated error page service or using Nginx's ability to return custom text directly, though the latter is less user-friendly.
-
Basic Rate Limiting (Example): This requires defining a limit_req_zone in the http block, which is outside NPM's direct per-host custom config. However, you can often place limit_req_zone in NPM's global custom Nginx settings if available, or some users modify NPM's Nginx template files (more advanced and not recommended for beginners, as it can break with updates). If a zone is defined globally, you can use limit_req in the custom config. This is more of an advanced Nginx topic than a typical NPM use case without deeper customization.
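A sketch of what that per-host snippet could look like, assuming a zone named per_ip has already been defined at the http level (the zone name and rate are illustrative):

```nginx
# Defined elsewhere, at the http level (e.g. a global/included custom config):
# limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

# Per-host Custom Nginx Configuration, applied to the whole server block:
limit_req zone=per_ip burst=20 nodelay;   # allow short bursts, reject the excess
limit_req_status 429;                     # respond with 429 Too Many Requests instead of 503
```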
Syntax and Testing
- Nginx Configuration Syntax is CRUCIAL: A small typo (like a missing semicolon
;
or mismatched braces{}
) can prevent Nginx from starting or reloading, potentially taking down all your proxied services. - NPM's Internal Check: When you save changes in the Advanced tab, NPM usually performs a basic Nginx syntax check (
nginx -t
). If it detects an error, it often prevents saving the broken configuration or might revert, showing an error message. However, this check might not catch all logical errors. - Manual Testing: After applying custom configurations, thoroughly test the affected proxy host. Check:
- Does the site still load?
- Are the intended changes working (e.g., are headers present, can you upload large files)?
- Check browser developer tools (Network tab for headers, Console for errors).
- NPM Logs: If something breaks, the NPM container logs (
docker logs <npm_container_name>
) are your first place to look for Nginx error messages.
Important Note: Nginx Proxy Manager is designed to manage the primary Nginx configuration for your proxy hosts. Custom configurations are powerful additions but should be used judiciously. Be careful not to add directives that conflict with what NPM generates based on your UI settings, as this can lead to unpredictable behavior. Always start with small, simple custom additions and test them thoroughly.
Workshop Adding Custom Security Headers
Objective: Add X-Frame-Options
, X-Content-Type-Options
, and Referrer-Policy
HTTP security headers to one of your existing services (e.g., app2.yourworkshopdomain.com
) using the Custom Nginx Configuration feature in NPM.
Prerequisites:
- An existing service (e.g.,
app2.yourworkshopdomain.com
from a previous workshop) exposed via NPM and accessible over HTTPS. - NPM admin interface accessible.
Steps:
-
Log in to Nginx Proxy Manager:
Access your NPM admin UI. -
Choose a Service and Edit its Proxy Host:
- Navigate to "Hosts" -> "Proxy Hosts".
- Locate the proxy host for
app2.yourworkshopdomain.com
(or your chosen service). - Click the three-dot menu (â‹®) and select "Edit".
-
Navigate to the "Advanced" Tab:
In the "Edit Proxy Host" dialog, click on the "Advanced" tab. -
Add Custom Nginx Configuration:
In the "Custom Nginx Configuration" text area, carefully paste the following Nginx directives:# Add security headers add_header X-Frame-Options "SAMEORIGIN" always; add_header X-Content-Type-Options "nosniff" always; add_header Referrer-Policy "strict-origin-when-cross-origin" always; # Example: If you also wanted to increase max upload size (optional for this workshop) # client_max_body_size 50M;
Explanation of the headers:
add_header X-Frame-Options "SAMEORIGIN" always;
- This header helps prevent clickjacking attacks by controlling whether your site can be embedded within an
<frame>
,<iframe>
,<embed>
, or<object>
on other websites. "SAMEORIGIN"
allows embedding only if the parent page is from the same origin (scheme, hostname, port) as your site."DENY"
would prevent any framing.
- This header helps prevent clickjacking attacks by controlling whether your site can be embedded within an
add_header X-Content-Type-Options "nosniff" always;
- This header prevents browsers from trying to guess (MIME-sniff) the content type of a resource if the
Content-Type
header sent by the server is different from the actual content. This can protect against attacks where a file (e.g., an image) might be misinterpreted as executable script.
- This header prevents browsers from trying to guess (MIME-sniff) the content type of a resource if the
add_header Referrer-Policy "strict-origin-when-cross-origin" always;
- This header controls how much referrer information (the URL of the page that linked to the current page) is included with requests made from your site.
"strict-origin-when-cross-origin"
is a good default:- Sends the full URL for same-origin requests.
- Sends only the origin (e.g.,
https://yourdomain.com
) when navigating from HTTPS to another HTTPS cross-origin site. - Sends no referrer information when navigating from HTTPS to an HTTP cross-origin site (to prevent leaking secure URLs to insecure sites).
always
: This keyword ensures that Nginx adds these headers to the response regardless of the HTTP status code (e.g., even for error pages like 404 or 500). Withoutalways
, headers are typically only added to 2xx and 3xx responses.
-
Save the Configuration:
Click the "Save" button. NPM will perform a syntax check. If there's an error, it will notify you. If successful, the configuration will be applied. -
Test the Custom Headers:
- Open your web browser and go to the service you modified, e.g.,
https://app2.yourworkshopdomain.com
. - Open your browser's Developer Tools (usually by pressing F12, or right-click -> "Inspect").
- Go to the "Network" tab in the Developer Tools.
- Refresh the page (
Ctrl+R
orCmd+R
). - In the list of network requests, select the main document request for
app2.yourworkshopdomain.com
(it's usually the first one). - In the details pane for that request, find the "Response Headers" section.
- Verify the Headers: Look for the headers you added:
x-frame-options: SAMEORIGIN
x-content-type-options: nosniff
referrer-policy: strict-origin-when-cross-origin
(Note: Header names are case-insensitive in HTTP, so they might appear in lowercase).
You can also use online tools like
securityheaders.com
to scan your site and check for these and other security headers (though it tests your public site, so ensure DNS is pointed correctly). - Open your web browser and go to the service you modified, e.g.,
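A quick command-line alternative (assuming curl is installed on your machine):

```bash
curl -sI https://app2.yourworkshopdomain.com | grep -iE 'x-frame-options|x-content-type-options|referrer-policy'
```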
Verification:
- The website
https://app2.yourworkshopdomain.com
loads correctly. - The specified custom HTTP security headers (
X-Frame-Options
,X-Content-Type-Options
,Referrer-Policy
) are present in the response headers when inspecting the site with browser developer tools.
If the headers are not present or the site breaks:
- Double-check for typos in the "Custom Nginx Configuration" box. Ensure every line ends with a semicolon
;
. - Check the NPM container logs (
docker logs <npm_container_name>
) for any Nginx error messages during reload. - Try removing the custom config, saving, and then adding one header at a time to isolate any problematic directive.
This workshop demonstrates how to enhance your application's security posture by adding important HTTP headers using NPM's custom configuration capabilities.
7. Advanced SSL Management
While Nginx Proxy Manager's integration with Let's Encrypt for automated SSL certificate issuance is a primary draw, there are scenarios where more advanced SSL management is required. This includes using your own custom certificates, fine-tuning SSL/TLS protocols and ciphers, and understanding features like HSTS and OCSP Stapling in greater depth.
Using Your Own SSL Certificates (BYOC - Bring Your Own Certificate)
There are several reasons why you might want to use your own SSL certificates instead of relying solely on Let's Encrypt via NPM:
- Internal Certificate Authority (CA):
In corporate or lab environments, you might operate your own internal CA for issuing certificates to internal services. These certificates are trusted only by devices configured to trust your internal CA. - Certificates from Other Commercial CAs:
You might have purchased Extended Validation (EV) or Organization Validation (OV) certificates from other commercial CAs for specific branding or trust requirements. - Specific Certificate Types or Features:
Some specialized scenarios might require certificates with features not readily available through Let's Encrypt's standard issuance (though Let's Encrypt covers most common needs). - Offline or Air-gapped Environments:
If NPM cannot reach Let's Encrypt servers, you'll need to provide certificates manually.
Uploading Custom Certificates in NPM:
-
Obtain Your Certificate Files:
You will typically have:- Certificate (
.crt
or.pem
): This is your server's public certificate. - Private Key (
.key
or.pem
): This is the secret key corresponding to your public certificate. Keep this secure and private. - Intermediate Certificate(s) / Chain Certificate (
.crt
or.pem
): These are certificates from the issuing CA that chain your server certificate back to a trusted root CA. Often, this is provided as a "fullchain" certificate, which includes your server certificate PLUS the intermediate certificates.
- Certificate (
-
Add Custom Certificate in NPM:
- Log in to NPM.
- Go to "SSL Certificates" -> "Add SSL Certificate" -> "Custom".
- Name: Give your certificate a descriptive name in NPM (e.g.,
MyInternalCAServerCert
,CommercialEV_Cert
). - Certificate Key: Open your private key file (e.g.,
private.key
) with a text editor. Copy the entire content, including the-----BEGIN PRIVATE KEY-----
and-----END PRIVATE KEY-----
lines (or similar, likeBEGIN RSA PRIVATE KEY
), and paste it into this field. - Certificate: Open your server certificate file (e.g.,
certificate.crt
). Copy its entire content, including-----BEGIN CERTIFICATE-----
and-----END CERTIFICATE-----
, and paste it here. - Intermediate Certificate (Optional but usually required for public CAs): Open your intermediate/chain certificate file (e.g.,
chain.pem
orfullchain.pem
).- If you have a
fullchain.pem
that already includes your server certificate, some CAs recommend putting only the intermediate certificates in this field. - If your server certificate file (
certificate.crt
) contains only the end-entity certificate, then you must paste the content of the intermediate certificate(s) file here. This file might contain multiple-----BEGIN CERTIFICATE-----...-----END CERTIFICATE-----
blocks. Paste them all. - The correct chain is crucial for browser trust.
- If you have a
- Click "Save".
-
Apply the Custom Certificate to a Proxy Host:
- Edit the desired Proxy Host.
- Go to the "SSL" tab.
- In the "SSL Certificate" dropdown, select the custom certificate you just uploaded by its name.
- Ensure "Force SSL" and other relevant options are set.
- Save the Proxy Host.
Important Note on Custom Certificates:
- Renewal is MANUAL: NPM will not automatically renew custom certificates you upload. You are responsible for obtaining a renewed certificate from your CA and re-uploading the new certificate (and possibly key/chain) into NPM before the old one expires.
- Private Key Security: The private key is extremely sensitive. Ensure it's handled securely during the copy-paste process.
Forcing Specific SSL/TLS Versions and Ciphers
Nginx Proxy Manager, by virtue of using a modern Nginx version, generally defaults to secure and widely compatible SSL/TLS protocols (like TLSv1.2 and TLSv1.3) and strong cipher suites. However, for specific compliance requirements or if you need to support older clients (not recommended if security is paramount), you might want to customize these.
This is typically done using the "Custom Nginx Configuration" box in the Advanced tab of a Proxy Host, or globally if NPM supports a global custom Nginx config section.
-
ssl_protocols directive:
Specifies the enabled SSL/TLS protocol versions. Warning: Disabling TLSv1.2 might break compatibility with many clients, and enabling older protocols like TLSv1.0 or TLSv1.1 is strongly discouraged due to known vulnerabilities.
Example (NPM's defaults are likely good, this is just illustrative):
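```nginx
# In Custom Nginx Configuration (illustrative; mirrors common modern defaults)
ssl_protocols TLSv1.2 TLSv1.3;
```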
ssl_ciphers
directive:
Specifies the list of cipher suites Nginx should use.
Example (again, NPM's defaults are usually fine):

```nginx
# In Custom Nginx Configuration (this is a common strong cipher suite recommendation):
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers on;   # Server dictates cipher choice from client's list
```

Modifying ciphers requires a deep understanding of SSL/TLS. Incorrect configurations can weaken security or break compatibility.
Tools for Testing:
- Qualys SSL Labs Server Test (
ssllabs.com/ssltest/
): An excellent online tool to analyze your server's SSL/TLS configuration, including protocols, ciphers, certificate chain, and known vulnerabilities. It provides a grade (A+, A, B, etc.). openssl s_client
: A command-line tool to connect to an SSL/TLS server and inspect connection details.
Understanding HSTS (HTTP Strict Transport Security) In Depth
HSTS is a security policy mechanism that helps protect websites against protocol downgrade attacks and cookie hijacking. It tells browsers to only communicate with the site using HTTPS.
NPM's UI has toggles for "HSTS Enabled" and "HSTS Subdomains". When enabled, NPM adds the Strict-Transport-Security
header.
Key directives within the HSTS header:
max-age=<seconds>
: The time, in seconds, that the browser should remember that this site is only to be accessed using HTTPS. A common value is31536000
(1 year). NPM sets a default, likely a long duration.includeSubDomains
(Optional): If this directive is present, the HSTS policy applies to this domain AND all its subdomains. NPM's "HSTS Subdomains" toggle controls this. Use with extreme caution, as it forces all current and future subdomains to use HTTPS. If any subdomain cannot support HTTPS, it will become inaccessible.preload
(Optional): This is a powerful directive. If you addpreload
to your HSTS header and then submit your domain to the HSTS preload list (e.g.,hstspreload.org
), major browsers will eventually hardcode your domain as HTTPS-only. This means even on a user's very first visit, the browser will connect via HTTPS.- Implications of Preloading:
- You commit to HTTPS for the long term. Removing your site from the preload list is a slow process.
- You must ensure HTTPS is perfectly configured and stable across your entire site (and subdomains if
includeSubDomains
is used) before preloading. - NPM's UI does not directly add
preload
. You would need to do this via custom Nginx configuration if desired, after meeting all preload list requirements.
- Implications of Preloading:
NPM's HSTS Handling:
When you enable HSTS in NPM, it adds the Strict-Transport-Security
header with a suitable max-age
. If you also enable "HSTS Subdomains," it adds includeSubDomains
.
Caution with HSTS:
Once a browser has received an HSTS policy for a site, it will strictly enforce HTTPS for the max-age
duration. If your SSL certificate expires or you misconfigure HTTPS, users who have previously visited (and received the HSTS header) will be unable to access your site, even by clicking through browser warnings, until the HSTS policy expires in their browser or their SSL setup is fixed. This makes it critical to ensure your SSL (especially renewals) is solid before enabling HSTS with long max-age
values.
OCSP Stapling
Online Certificate Status Protocol (OCSP) Stapling is a performance and privacy enhancement for SSL/TLS.
- Problem: When a browser connects to an HTTPS site, it might need to check if the site's SSL certificate has been revoked (e.g., if the private key was compromised). Traditionally, the browser would make a separate request to the Certificate Authority's (CA) OCSP responder. This adds latency to the connection and can reveal to the CA which sites the user is visiting.
- Solution (OCSP Stapling): With OCSP stapling, the web server (Nginx in this case) periodically queries the CA's OCSP responder for the revocation status of its own certificate. The server then "staples" (attaches) this time-stamped OCSP response to the SSL/TLS handshake when clients connect.
- The browser receives the stapled OCSP response directly from your server, trusts it (as it's digitally signed by the CA), and doesn't need to make a separate OCSP request to the CA.
Benefits of OCSP Stapling:
- Improved Performance: Reduces connection latency by eliminating the client's need for a separate OCSP query.
- Enhanced Privacy: The CA doesn't see individual user IP addresses making OCSP requests for your site.
- Better Reliability: If the CA's OCSP responder is down or slow, clients can still verify revocation status from the stapled response.
NPM and OCSP Stapling: Nginx (and thus NPM) generally enables OCSP stapling by default when using certificates that support it (like those from Let's Encrypt). You usually don't need to configure anything specific in NPM for OCSP stapling to work.
Verification:
- Qualys SSL Labs Test: The "OCSP stapling" check in the SSL Labs report will indicate if it's enabled and working.
openssl s_client
: Look for "OCSP Response Data" in the output. If present and valid, stapling is working.
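For example (yourdomain.com is a placeholder; the -status flag asks the server for its stapled OCSP response):

```bash
echo | openssl s_client -connect yourdomain.com:443 -servername yourdomain.com -status 2>/dev/null \
  | grep -A 3 "OCSP"
```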
Advanced SSL management gives you finer control but also greater responsibility. Always test thoroughly after making changes to SSL/TLS configurations.
Workshop Uploading a Custom SSL Certificate (Self-Signed for Local Testing)
Objective: Create a self-signed SSL certificate, upload it to Nginx Proxy Manager as a "Custom" certificate, and use it for a proxy host that is only accessible locally (via hosts file modification). This demonstrates the BYOC (Bring Your Own Certificate) feature.
Important Note: Browsers will show prominent security warnings for self-signed certificates because they are not issued by a trusted public Certificate Authority. This workshop is for educational purposes to understand the upload mechanism, not for securing public-facing websites.
Prerequisites:
- OpenSSL command-line tool installed on your local machine or server (most Linux/macOS systems have it; Windows users might need to install it, e.g., via Git Bash or WSL).
- Nginx Proxy Manager installed and running.
- Ability to edit the
hosts
file on your client machine (the computer you'll use to browse).
Steps:
-
Create a Self-Signed Certificate: On your local machine or server, open a terminal and execute the following OpenSSL commands. Create a temporary directory to keep files organized:
```bash
mkdir ~/self-signed-cert-test
cd ~/self-signed-cert-test

# 1. Generate a private key
openssl genpkey -algorithm RSA -out private.key -pkeyopt rsa_keygen_bits:2048
echo "===> Private key (private.key) created."

# 2. Create a Certificate Signing Request (CSR)
# The Common Name (CN) is important. For this test, use a fake local domain.
openssl req -new -key private.key -out certificate.csr -subj "/CN=npm-custom.localtest.com"
echo "===> Certificate Signing Request (certificate.csr) created."

# 3. Sign the CSR with the private key to create the self-signed certificate
# Set validity for 365 days
openssl x509 -req -days 365 -in certificate.csr -signkey private.key -out certificate.crt
echo "===> Self-signed certificate (certificate.crt) created."
```

You will now have three files in
~/self-signed-cert-test
:private.key
: Your private key.certificate.csr
: The certificate signing request (not directly used by NPM, but part of the process).certificate.crt
: Your self-signed public certificate.
-
Upload the Custom Certificate to NPM:
- Log in to your Nginx Proxy Manager admin UI.
- Go to "SSL Certificates" from the top menu.
- Click "Add SSL Certificate" and select "Custom".
- Name: Enter
MyLocalTestCert
. - Certificate Key:
- Open the
private.key
file (e.g.,cat ~/self-signed-cert-test/private.key
). - Copy its entire content (including
-----BEGIN PRIVATE KEY-----
and-----END PRIVATE KEY-----
). - Paste this into the "Certificate Key" field in NPM.
- Open the
- Certificate:
- Open the
certificate.crt
file (e.g.,cat ~/self-signed-cert-test/certificate.crt
). - Copy its entire content (including
-----BEGIN CERTIFICATE-----
and-----END CERTIFICATE-----
). - Paste this into the "Certificate" field in NPM.
- Open the
- Intermediate Certificate: Leave this field blank. Self-signed certificates do not have intermediate certificates.
- Click "Save". Your
MyLocalTestCert
should now appear in the list.
-
Modify Your Client Machine's
hosts
File: To makenpm-custom.localtest.com
resolve to your NPM server's IP address on your local browsing machine only, you need to edit itshosts
file.- Find your NPM server's IP address: This is the IP you use to access NPM's admin UI or your services (e.g.,
192.168.1.100
or your public VPS IP). - Edit the
hosts
file (requires administrator/root privileges):- Linux/macOS:
/etc/hosts
- Windows:
C:\Windows\System32\drivers\etc\hosts
(Open Notepad as Administrator, then File -> Open and navigate to this file).
- Linux/macOS:
- Add the following line to the file, replacing
<NPM_SERVER_IP>
with the actual IP: Example:192.168.1.100 npm-custom.localtest.com
- Save the
hosts
file. Your operating system might cache DNS lookups; a browser restart oripconfig /flushdns
(Windows) /sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
(macOS) might be needed if changes don't take effect immediately.
- Find your NPM server's IP address: This is the IP you use to access NPM's admin UI or your services (e.g.,
-
Create/Edit a Proxy Host in NPM to Use the Custom Certificate:
- In NPM, go to "Hosts" -> "Proxy Hosts".
- Click "Add Proxy Host" (or edit an existing one you don't mind temporarily changing for this test).
- Details Tab:
- Domain Names:
npm-custom.localtest.com
- Scheme, Forward Hostname / IP, Forward Port: Point this to any existing simple backend service you have running (e.g., the
whoami-app1
container on port80
if it's on yourworkshop_proxy_net
network, or forward to172.17.0.1
port8000
ifwhoami-service
is running on host port 8000).
- Domain Names:
- SSL Tab:
- SSL Certificate: From the dropdown, select
MyLocalTestCert
(the custom certificate you uploaded). - Toggle ON "Force SSL".
- You can also toggle on "HTTP/2 Support".
- SSL Certificate: From the dropdown, select
- Click "Save".
-
Test Access in Your Browser:
- Open your web browser.
- Navigate to
https://npm-custom.localtest.com
. - Expect a Warning: Your browser will display a significant security warning (e.g., "Your connection is not private", "NET::ERR_CERT_AUTHORITY_INVALID", "Warning: Potential Security Risk Ahead"). This is because the certificate is self-signed and not trusted by your browser's built-in list of CAs.
- Proceed Past the Warning: Click "Advanced" and then "Proceed to npm-custom.localtest.com (unsafe)" or similar option.
- You should now see your backend service's content (e.g., the
whoami
output). - Inspect the Certificate: Click the (now broken) padlock icon or warning symbol in the address bar, then view the certificate details. You should see that it's issued to
npm-custom.localtest.com
and issued bynpm-custom.localtest.com
(i.e., self-signed), and matches the validity dates you set.
Verification:
- The mechanism for uploading and assigning a custom SSL certificate in NPM works.
- You can access the service at
https://npm-custom.localtest.com
after bypassing the browser's self-signed certificate warning. - The certificate presented by the browser is indeed the self-signed one you created.
Cleanup (Important):
- Remove the entry from your
hosts
file on your client machine to avoid future confusion. - You can delete the
MyLocalTestCert
from NPM ("SSL Certificates" -> three-dots -> Delete). - You can delete or reconfigure the
npm-custom.localtest.com
proxy host in NPM. - Delete the
~/self-signed-cert-test
directory.
This workshop successfully demonstrates how to use custom certificates with NPM, a useful skill for various non-public or specialized SSL scenarios.
8. Stream Hosts (TCP/UDP Proxying)
While Nginx Proxy Manager is primarily known for reverse proxying HTTP and HTTPS traffic (Layer 7 of the OSI model), it also includes a feature to proxy generic TCP and UDP streams (Layer 4). This is handled by Nginx's stream
module and is exposed in NPM through "Stream Hosts."
This capability allows you to use NPM as a unified entry point for services that don't use HTTP/S, such as:
- SSH (Secure Shell): Proxy SSH connections on a non-standard port.
- Databases: Provide access to a database server (e.g., PostgreSQL, MySQL) without exposing its native port directly, perhaps through a specific incoming port.
- MQTT (Message Queuing Telemetry Transport): For IoT applications.
- Game Servers: Many game servers use custom TCP or UDP protocols (e.g., Minecraft, Factorio).
- VPNs: Some VPN protocols like OpenVPN (TCP or UDP mode) or WireGuard (UDP).
- Other TCP/UDP based services: Any application that communicates over raw TCP or UDP sockets.
NPM's Stream Host Feature
How it Works:
When you configure a Stream Host, NPM configures Nginx's stream
block. This block operates at a lower level than the http
block. It simply forwards raw TCP or UDP packets from a specific incoming port on the NPM server to a specified backend hostname/IP and port.
Configuring a Stream Host in NPM UI:
- Log in to NPM.
- Navigate to "Hosts" in the top menu, then select "Stream Hosts".
- Click "Add Stream Host".
- Details Tab:
- Incoming Port: The public-facing port on your NPM server that will listen for incoming connections for this stream. This port must be:
- Unique: Not already used by another service on your NPM server (including ports 80, 443, 81 used by NPM itself or its HTTP proxy hosts).
- Open: Allowed through your server's firewall and any NAT router.
- Forward Hostname / IP: The IP address or hostname of the backend service you want to proxy to. This could be:
- An internal IP on your LAN (e.g.,
192.168.1.200
). localhost
or127.0.0.1
if the service is running on the same machine as NPM.- A Docker container name if the backend container is on the same Docker network as NPM.
- Forward Port: The port on which the backend service is listening.
- Protocol: Choose either
TCP
orUDP
from the dropdown, depending on what the backend service uses.
- Click "Save".
NPM will then configure Nginx to listen on the specified `Incoming Port` and forward traffic to the `Forward Hostname / IP` and `Forward Port`.
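Conceptually, the resulting configuration is similar to the following Nginx `stream` snippet (a sketch only, not the exact file NPM generates; the port and backend address are placeholders):

```nginx
# Sketch of a TCP stream proxy; for UDP the listen directive would be "listen 2222 udp;"
stream {
    server {
        listen 2222;                  # the "Incoming Port" on the NPM host
        proxy_pass 192.168.1.200:22;  # "Forward Hostname / IP" and "Forward Port"
    }
}
```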
Limitations and Considerations for Stream Hosts
-
No Built-in SSL/TLS Termination for Streams: The "Stream Hosts" feature in NPM is for raw TCP/UDP proxying. It does not terminate SSL/TLS for these generic streams in the same way it does for HTTP/S proxy hosts. If your backend TCP service requires TLS (e.g., PostgreSQL with SSL, MQTT with TLS), the backend service itself must handle the TLS handshake. NPM is just passing the encrypted TCP stream through.
- You cannot apply a Let's Encrypt certificate directly to a Stream Host entry in NPM's UI to have NPM handle TLS for a generic TCP stream. SSL certificates in NPM are primarily for the HTTP proxy hosts.
-
Port Conflicts: As mentioned, the
Incoming Port
for a stream host must be unique on your NPM server. You cannot use ports 80 or 443 if they are already used by your HTTP/S proxy hosts, nor port 81 (NPM admin UI). You'll need to choose other available ports (e.g., 2222 for proxied SSH, 54321 for a proxied database). -
Security Implications: Exposing non-HTTP services, even through a proxy, requires careful consideration of security:
- Authentication and Authorization: Ensure the backend service itself has strong authentication and authorization mechanisms. NPM's stream proxying doesn't add an authentication layer like Basic Auth for HTTP.
- Firewalling: Only open the specific
Incoming Port
for the stream host. Restrict access to this port at your firewall to trusted IPs if possible. - Updates: Keep the backend service software updated to patch vulnerabilities.
-
Application Compatibility: Most simple client-server TCP/UDP applications work well behind a stream proxy. However, some complex protocols or applications that embed IP addresses within their protocol data might have issues if not designed to work behind a NAT or proxy.
-
No Hostname-Based Routing for Streams: Unlike HTTP proxying where NPM can route
service1.yourdomain.com
andservice2.yourdomain.com
(both coming in on port 443) to different backends, stream proxying is port-based. Each distinct stream service needs its own uniqueIncoming Port
on NPM.
Despite these considerations, Stream Hosts are a valuable feature for centralizing access to various types of services through a single NPM instance.
Workshop Proxying an SSH Connection (for demonstration)
Objective: Use Nginx Proxy Manager's stream host feature to proxy SSH traffic from a custom public port (e.g., 2222) to your server's actual SSH daemon (usually running on port 22).
WARNING: This workshop is primarily for demonstrating the stream proxy feature. Exposing SSH, even proxied, requires careful security hygiene:
- Always use strong passwords or, much preferably, key-based authentication for SSH.
- Keep your SSH server software updated.
- Consider using tools like
fail2ban
to protect against brute-force attacks. - Restricting firewall access to the proxied SSH port (e.g., 2222) to known IPs is highly recommended if feasible. It's often more secure to expose SSH directly on its standard or a non-standard port with robust security measures rather than proxying it unless you have a specific reason (like bypassing network restrictions that only allow certain ports out).
Prerequisites:
- An SSH server running on your NPM host machine (standard Linux servers usually have
sshd
running on port 22 by default). - Nginx Proxy Manager installed and running.
- A port number that is currently unused on your NPM server for the incoming proxied SSH connection. We'll use
2222
for this workshop. - This chosen port (e.g.,
2222/tcp
) must be opened in your server's firewall and port-forwarded from your router if NPM is behind a NAT.
Steps:
-
Ensure SSH Server is Running: Verify that your SSH daemon is active on the NPM host (see the check below). It should show as active (running) and typically listening on port 22.
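A quick way to check this (the service may be named `ssh` or `sshd` depending on your distribution):

```bash
# Check that the SSH daemon is active
sudo systemctl status ssh      # use "sshd" on RHEL/Fedora-based systems

# Confirm it is listening on port 22
sudo ss -tlnp | grep ':22'
```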
-
Open Port 2222 in Firewall: On your NPM server, allow incoming TCP connections on port `2222` (an example `ufw` rule is shown after this list). If you are behind a home router, log in to your router and add a port forwarding rule:
- External Port: `2222` (TCP)
- Internal IP: The IP address of your NPM server
- Internal Port: `2222` (TCP)
-
Configure the Stream Host in NPM:
- Log in to your NPM admin UI.
- Go to "Hosts" -> "Stream Hosts".
- Click "Add Stream Host".
- Details Tab:
- Incoming Port:
2222
(this is the public port clients will connect to). - Forward Hostname / IP:
localhost
(or127.0.0.1
). This tells NPM to forward the connection to the SSH server running on the same machine as NPM. - Forward Port:
22
(this is the actual port yoursshd
is listening on). - Protocol: Select
TCP
.
- Click "Save".
-
Test the Proxied SSH Connection:
- From a different machine (your client computer, not the NPM server itself), open a terminal or an SSH client (like PuTTY on Windows).
- Attempt to SSH to your server using its public IP address (or a domain name that points to it), but specify the new proxied port `2222` (see the example command after this list). Replace `your_username` with your actual username on the server and `<your_npm_server_public_ip>` with the server's public IP or a domain name pointing to it.
- First Connection: If this is the first time connecting to this host/port combination, you'll be asked to verify the server's host key fingerprint. Type
yes
. - Authentication: You should then be prompted for your password (or your SSH key will be used if configured).
- Upon successful authentication, you should be logged into your server via SSH, but the connection was established through port
2222
and proxied by NPM to the real SSH service on port 22.
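The connection command referenced in the step above would look roughly like this (placeholders as described there):

```bash
# Connect through the NPM stream proxy on the custom port
ssh -p 2222 your_username@<your_npm_server_public_ip>
```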
Verification:
- You can successfully establish an SSH session to your server by connecting to
<your_npm_server_public_ip>:2222
. - The connection behaves like a normal SSH session.
Troubleshooting:
- Connection refused on port 2222:
- Firewall on NPM server: Is port
2222/tcp
open? - Router port forwarding: Is external port
2222
correctly forwarded to internal port2222
on your NPM server's IP? - NPM Stream Host configuration: Is it saved and active? Check NPM logs for errors related to the stream module if issues persist.
- Is NPM itself running?
- Firewall on NPM server: Is port
- Connection times out on port 2222: Similar to "connection refused," often a firewall or routing issue.
- SSH connection works on port 22 directly but not on port 2222: This points to an issue with the NPM proxying or the firewall/forwarding for port 2222.
- Double-check
Forward Hostname / IP
(should belocalhost
or127.0.0.1
ifsshd
is on the same host as NPM) andForward Port
(should be22
) in the NPM Stream Host settings.
- Double-check
- Authentication fails: This is likely an SSH issue (wrong password/key), not an NPM proxying issue, assuming the connection is established up to the authentication prompt.
Discussion:
This workshop demonstrates the utility of stream hosts for TCP services. Consider scenarios where this might be useful:
- Bypassing restrictive firewalls: Some networks might block outbound connections on non-standard ports but allow connections on, for example, port 443. You could theoretically (though not recommended for SSH due to port ambiguity with HTTPS) run a stream proxy on port 443 if it's not used by HTTPS proxy hosts.
- Consolidating access points: Using NPM as a single point of entry for various services, even non-HTTP ones.
- Adding a layer of indirection: The actual backend service port is not directly exposed.
Always prioritize the security of the backend service itself when using stream proxies.
9. Advanced Nginx Proxy Manager Use Cases
Beyond the everyday tasks of proxying web services and managing SSL, Nginx Proxy Manager, by leveraging the power of Nginx, can be a component in more sophisticated setups. While NPM's UI might not directly offer buttons for these, understanding Nginx's capabilities allows you to use NPM creatively or augment it with custom configurations.
Blue/Green Deployments or Canary Releases (Conceptual with NPM)
Blue/Green Deployment:
A strategy to release new software versions with minimal downtime and risk.
- Concept: You have two identical production environments: "Blue" (current live version) and "Green" (new version).
- With NPM:
- Initially, NPM proxies
app.yourdomain.com
to your Blue environment (e.g.,blue-service.internal:8080
). - You deploy the new version to the Green environment (e.g.,
green-service.internal:8081
). Test it internally. - To switch traffic: In NPM, edit the proxy host for
app.yourdomain.com
. Change theForward Hostname / IP
and/orForward Port
to point to the Green environment. - All new traffic now goes to Green. Blue is still running and can be rolled back to instantly if Green has issues.
- This is a manual switch in NPM's UI but effective for simpler setups.
- Initially, NPM proxies
Canary Release:
Gradually roll out a new version to a small subset of users before a full release.
- Concept: Route a small percentage of traffic (e.g., 5%) to the new version (Canary) while the majority still uses the stable version.
- With NPM (and Nginx capabilities):
- NPM's UI doesn't directly support weighted load balancing needed for true canary releases.
- However, Nginx itself can do this using
upstream
blocks withweight
parameters andsplit_clients
module. This would require significant custom Nginx configuration, potentially by:- Defining two proxy hosts in NPM, e.g.,
app-stable.yourdomain.com
andapp-canary.yourdomain.com
, pointing to their respective backends. - Setting up a third proxy host for
app.yourdomain.com
in NPM, and in its "Advanced" Nginx configuration, implementing custom Nginx logic withupstream
andsplit_clients
to distribute traffic between the internal IPs/ports ofapp-stable
andapp-canary
. This is highly advanced and bypasses much of NPM's direct management for that specific host.
- A simpler, non-percentage based canary approach with NPM could be using an Access List to route specific IP addresses (e.g., internal testers) to the canary version while everyone else goes to the stable version.
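For reference, the weighted-split idea mentioned above relies on Nginx's `upstream` module; a conceptual sketch follows. Note that `upstream` blocks belong in the `http` context, so this cannot simply be pasted into a single proxy host's "Advanced" field, and all hostnames and ports below are placeholders:

```nginx
# Conceptual sketch only: weighted canary split using an upstream block
upstream app_weighted {
    server app-stable.internal:8080 weight=95;  # ~95% of requests go to the stable version
    server app-canary.internal:8081 weight=5;   # ~5% of requests go to the canary
}

# Inside the proxied server block you would then use:
# location / {
#     proxy_pass http://app_weighted;
# }
```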
GeoIP Blocking/Filtering (Requires Custom Nginx Config and GeoIP Module)
Concept:
Block or allow traffic based on the geographic origin of the client's IP address.
- Nginx can perform GeoIP lookups if the Nginx GeoIP2 module (
ngx_http_geoip2_module
) is installed and configured with a MaxMind GeoIP2 database. - NPM and GeoIP:
- The standard
jc21/nginx-proxy-manager
Docker image does not include the GeoIP2 module or databases by default. - To use GeoIP features, you would likely need to:
- Create a custom Docker image based on NPM's, adding the Nginx GeoIP2 module during the build.
- Download and regularly update MaxMind GeoLite2 (free) or paid GeoIP2 databases and make them available to the Nginx container (e.g., via a volume mount).
- Use custom Nginx configuration in NPM (either globally or per-host) to load the GeoIP2 module, define `geoip2` variables, and use `if` statements or `map` directives to allow/deny traffic based on country codes.

  ```nginx
  # Example snippet (assumes module and DB are set up)
  # geoip2 /path/to/GeoLite2-Country.mmdb {
  #     $geoip2_data_country_code country iso_code;
  # }
  #
  # map $geoip2_data_country_code $allowed_country {
  #     default no;
  #     US yes; # Allow US
  #     CA yes; # Allow Canada
  # }
  #
  # if ($allowed_country = no) {
  #     return 403; # Forbidden
  # }
  ```
- This is an advanced customization requiring Docker image modification and deeper Nginx knowledge.
Integrating with Fail2Ban
Concept:
Fail2Ban is an intrusion prevention software framework that scans log files and bans IPs showing malicious signs (too many password failures, seeking exploits, etc.).
- With NPM:
- NPM's Nginx access and error logs are generated. If using the default Docker image, these logs are typically sent to Docker's logging driver (viewable with
docker logs <npm_container_name>
). - To make these logs easily accessible to Fail2Ban running on the host:
- You can try to configure Docker to use a logging driver like
syslog
orjournald
and have Fail2Ban read from there. - Alternatively, you can mount Nginx's log directory from within the NPM container to a host path. This requires knowing the internal log paths within the NPM container (e.g.,
/data/logs/
) and adding a volume mount to your `docker-compose.yml`:

  ```yaml
  services:
    app:
      # ... other configs ...
      volumes:
        - ./data:/data
        - ./letsencrypt:/etc/letsencrypt
        - ./nginx_logs:/data/logs # Mount NPM's internal log dir to host
  ```

  Then, configure Fail2Ban on the host to monitor log files in `./nginx_logs`.
- Create Fail2Ban "jails" with filters specific to patterns you want to block (e.g., repeated 403s, 404s from specific IPs, attempts to access sensitive paths). When Fail2Ban detects a match, it will update firewall rules (e.g.,
iptables
,nftables
,ufw
) on the host to block the offending IP. - This effectively protects all services behind NPM at the network level.
Using NPM with Docker Swarm or Kubernetes (Conceptual)
- Docker Swarm:
- NPM can run as a Swarm service. You'd typically make it a global service on manager nodes or nodes with public IPs, publishing ports 80 and 443.
- For backend services, NPM would forward traffic to the Swarm service's virtual IP or use DNS resolution for service names within the Swarm overlay network.
- SSL certificate storage (
./data
,./letsencrypt
) would need to use Swarm-compatible persistent storage (e.g., NFS, GlusterFS, or specific volume drivers) if NPM instances can move between nodes.
- Kubernetes:
- NPM can act as a simple Ingress controller alternative for less complex needs, or run as a standalone proxy.
- It would typically run as a Deployment with a Service of type
LoadBalancer
orNodePort
to expose ports 80/443. - NPM would forward traffic to Kubernetes Service names (e.g.,
my-app-service.namespace.svc.cluster.local
) on their respective ClusterIP ports. - Persistent storage for
/data
and/letsencrypt
would use Kubernetes PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs). - While dedicated Ingress controllers (like Nginx Ingress, Traefik) are more idiomatic in Kubernetes and offer deeper integration, NPM can be a simpler option for users already familiar with it.
Backup and Restore of NPM Configuration
This is critical for disaster recovery. If your server fails or the NPM data gets corrupted, you need to be able to restore your setup.
What to Back Up:
- The
data
volume/directory: This contains the SQLite database (users, proxy hosts, access lists, SSL certificate configurations) and other NPM settings. In ourdocker-compose.yml
examples, this is mapped to./data
on the host. - The
letsencrypt
volume/directory: This contains all your SSL certificate files (private keys, fullchains, renewal configurations). Mapped to./letsencrypt
on the host. - If using an external database (MariaDB/MySQL): You must back up the database separately using appropriate database dump tools (e.g.,
mysqldump
). The NPMdata
volume will not contain this. In our MariaDB example compose, this was./mysql_data
. - Your
docker-compose.yml
file itself: This defines how NPM is run.
Backup Procedure (for host-mounted volumes):
- (Optional but Recommended for data consistency, especially if not using SQLite's WAL mode properly or for other DBs) Stop NPM (the `docker-compose` commands for this are sketched after this list).
- Archive the Data Directories: Create a compressed archive of the `./data` and `./letsencrypt` directories:

  ```bash
  tar -cvzpf npm_backup_$(date +%Y-%m-%d_%H-%M-%S).tar.gz ./data ./letsencrypt ./docker-compose.yml
  # -c: create archive
  # -v: verbose (optional)
  # -z: compress with gzip
  # -p: preserve permissions
  # -f: specify archive filename
  ```

  If you had `./mysql_data` for an external DB managed by the same compose file, include that directory in the archive as well.
- (If you stopped NPM) Restart NPM.
- Move Backup to a Safe Location: Store the
*.tar.gz
file on a different server, cloud storage, or an external drive. Do not keep backups only on the same server.
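The stop/restart steps above are simply the Docker Compose lifecycle commands, run from your NPM project directory; a minimal sketch, assuming the `/opt/npm` path used in the workshops:

```bash
cd /opt/npm                 # your NPM compose project directory (assumed path)
sudo docker-compose stop    # step 1: stop NPM before archiving
# ... create the tar archive as shown above ...
sudo docker-compose start   # step 3: start NPM again after the backup
```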
Restore Procedure (for host-mounted volumes):
- Ensure NPM is stopped and containers (if any) are removed: On the new/restored server, run `docker-compose down` in your NPM project directory.
- Clear Old Data (if any): Remove any existing `./data` and `./letsencrypt` directories (be careful!).
- Extract the Backup: Copy your backup `tar.gz` file to the NPM project directory and extract it there. This should recreate the `./data` and `./letsencrypt` directories and your `docker-compose.yml`. Ensure permissions are sensible (Docker usually handles this if the user running `docker-compose` can write to these dirs).
- Start NPM. (A consolidated command sequence for these steps is sketched below.)
- Verify: Access the NPM UI and check if your proxy hosts, SSL certificates, and users are restored. Test your proxied services.
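Taken together, a restore on a fresh (or wiped) server might look roughly like the following sketch; the project path and backup filename are assumptions from the earlier examples, and the `rm -rf` step is destructive, so double-check it:

```bash
cd /opt/npm                          # NPM project directory (assumed)
sudo docker-compose down             # ensure NPM is stopped and containers are removed
sudo rm -rf ./data ./letsencrypt     # clear any old data (destructive!)
sudo tar -xvzpf /path/to/npm_backup_YYYY-MM-DD_HH-MM-SS.tar.gz -C /opt/npm
sudo docker-compose up -d            # start NPM from the restored configuration
```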
NPM itself does not have a built-in "Export/Import Configuration" button in its UI. Backup and restore relies on backing up the persistent Docker volumes. Regular, tested backups are a cornerstone of responsible self-hosting.
Workshop Backing Up and Restoring NPM Configuration
Objective:
Learn the process of backing up Nginx Proxy Manager's persistent data (configuration and SSL certificates) and simulate a restore to ensure the procedure is understood. We'll use the host-mounted volume approach from our previous workshops.
Prerequisites:
- Nginx Proxy Manager installed via Docker Compose and running (e.g., from
/opt/npm
as in previous workshops). - At least a few proxy hosts and SSL certificates configured in NPM.
- Command-line access to the server where NPM is running.
Steps for Backup:
-
Identify Your NPM Docker Compose Project Directory: This is the directory containing your
docker-compose.yml
file and thedata
andletsencrypt
subdirectories. For example,/opt/npm
. -
Stop the NPM Stack (Recommended for data integrity): While still in the `/opt/npm` directory, stop the stack and then verify that the NPM container is stopped (see the commands below).
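A minimal sketch of the commands for this step (path assumed from the earlier workshops):

```bash
cd /opt/npm
sudo docker-compose stop   # stop the NPM stack
sudo docker-compose ps     # the NPM container should now be listed as stopped/exited
```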
Create a Backup Archive: We'll create a timestamped
tar.gz
archive of thedata
andletsencrypt
directories, as well as thedocker-compose.yml
file itself.List the backup file to confirm its creation:# Create the backup in the parent directory (/opt) to avoid archiving the archive itself if run multiple times sudo tar -cvzpf /opt/npm_backup_$(date +%Y-%m-%d_%H-%M-%S).tar.gz -C /opt/npm data letsencrypt docker-compose.yml # Explanation: # -c: create # -v: verbose (list files) # -z: gzip compression # -p: preserve permissions # -f /opt/npm_backup_...tar.gz: output archive file name and path # -C /opt/npm: Change to directory /opt/npm before adding files # data letsencrypt docker-compose.yml: files/directories to add (paths relative to -C dir)
-
Restart the NPM Stack (e.g., `sudo docker-compose start` from `/opt/npm`) and verify NPM is running again (e.g., with `sudo docker-compose ps`).
-
Secure the Backup: In a real-world scenario, you would now copy
/opt/npm_backup_YYYY-MM-DD_HH-MM-SS.tar.gz
to a remote, secure location (another server, cloud storage, external drive). For this workshop, just having it in/opt/
is fine.
Steps for Simulated Restore (Illustrative - Be Very Careful with rm -rf
on Real Data):
This simulation will involve removing the current data to mimic a disaster.
-
Stop and Remove Existing NPM Containers AND Data Volumes: This step simulates a complete loss of the current NPM setup.
```bash
cd /opt/npm
sudo docker-compose down # Stop and remove containers

# CRITICAL: The next command deletes your live NPM data for this simulation.
# In a real restore, you'd do this on a new server or after actual data loss.
# Double-check you are in the correct directory (/opt/npm)!
echo "About to delete ./data and ./letsencrypt. Current directory: $(pwd)"
echo "Ensure you have your backup! Press Ctrl+C to abort or Enter to continue."
read dummy_variable

sudo rm -rf ./data
sudo rm -rf ./letsencrypt
# We'll restore docker-compose.yml from the backup, so we can remove it too for a full simulation
# sudo rm -f ./docker-compose.yml

echo "Current data directories removed (simulated disaster)."
ls -l # Should show data and letsencrypt are gone.
```
-
Restore from Backup:
- Ensure your backup file (e.g.,
/opt/npm_backup_YYYY-MM-DD_HH-MM-SS.tar.gz
) is accessible.
- Extract the backup archive into the `/opt/npm` directory:

  ```bash
  # Ensure you are in /opt/npm for the paths to be correct after extraction
  cd /opt/npm
  sudo tar -xvzpf /opt/npm_backup_YYYY-MM-DD_HH-MM-SS.tar.gz -C /opt/npm
  # If your tarball stored files with a leading directory (e.g., 'opt/npm/data'),
  # you might need to adjust extraction or move files.
  # The command used for backup should ensure paths are relative to /opt/npm (data/, letsencrypt/).
  # Let's verify: this should extract data/, letsencrypt/, and docker-compose.yml
  # directly into the current /opt/npm directory.
  ```
- Verify that
data
,letsencrypt
, anddocker-compose.yml
are now present in/opt/npm
: Ensure the ownership and permissions look similar to how they were before. The-p
flag intar
during backup helps preserve them. If Docker has issues with permissions, you might needsudo chown -R youruser:yourgroup ./data ./letsencrypt
(whereyouruser:yourgroup
matches what Docker expects, often related to the user runningdocker-compose
). However, usually, Docker handles this fine if the host directories are writable by the root/Docker user.
-
Start the NPM Stack from Restored Configuration:
-
Verify Restoration:
- Check container status:
sudo docker-compose ps
. It should be running. - Access your Nginx Proxy Manager admin UI (
http://<your_server_ip>:81
). - Log in with your usual NPM admin credentials.
- Check:
- Are your Proxy Hosts listed correctly?
- Are your SSL Certificates (including custom ones if any) present?
- Are your Access Lists and Users there?
- Test one or two of your previously configured proxied services by browsing to their HTTPS URLs. They should work with their SSL certificates intact.
Verification:
All your Nginx Proxy Manager configurations (proxy hosts, SSL certificates, users, access lists) are successfully restored and functional after the simulated data loss and restore process.
Discussion:
- Backup Frequency: How often should you back up? It depends on how frequently you change your NPM configuration. For active setups, weekly or even daily might be appropriate. For static setups, monthly might suffice.
- Backup Automation: Consider automating the backup process using cron jobs and scripts that copy backups to remote storage.
- Testing Restores: Periodically test your restore procedure (e.g., in a staging environment or a VM) to ensure your backups are valid and you know how to use them. A backup is useless if it can't be restored.
- Off-site Backups: Storing backups in a different physical location (or different cloud region) protects against site-wide disasters (fire, flood, major server outage).
This workshop provides a fundamental understanding of safeguarding your NPM setup, a crucial aspect of reliable self-hosting.
10. Troubleshooting and Maintenance
Even with a user-friendly tool like Nginx Proxy Manager, issues can arise. Understanding common problems, how to diagnose them using logs, and proper maintenance practices are key to keeping your reverse proxy running smoothly and securely.
Common NPM Issues and Solutions
-
502 Bad Gateway:
- Meaning: NPM (Nginx) successfully received the request from the client but was unable to connect to or get a valid response from the backend service you specified in the Proxy Host configuration.
- Common Causes & Solutions:
- Backend Service Down/Not Running:
- Check: Is your backend application container or service actually running? Use
docker ps -a
for containers orsystemctl status <service_name>
for system services. - Solution: Start or restart the backend service.
- Incorrect Forward Hostname/IP or Port in NPM:
- Check: In NPM's Proxy Host settings, verify that the
Forward Hostname / IP
andForward Port
point to the correct address and port where your backend service is actually listening. - Solution: Correct the settings in NPM. Remember, if using Docker container names, NPM and the backend service must be on the same custom Docker network.
- Firewall on Backend Service Host: If your backend service is on a different machine (or even the same host but not in Docker), a firewall on that machine might be blocking incoming connections from NPM.
- Check: Temporarily disable the firewall on the backend host or add a rule to allow connections from NPM's IP on the specific port.
- Solution: Adjust firewall rules.
- Docker Networking Issues:
- Check: If both NPM and backend are Docker containers, are they on the same custom Docker network? Can NPM resolve the backend container's name?
- Solution: Ensure they share a network. Test connectivity from within the NPM container:
docker exec <npm_container_name> curl http://<backend_container_name_or_ip>:<backend_port>
.
- Backend Application Crashing or Overloaded: The application might be starting but then crashing or too busy to respond.
- Check: Logs of the backend application.
- Solution: Address the issue within the backend application.
-
503 Service Unavailable:
- Meaning: The server is temporarily unable to handle the request. This could be due to overloading or maintenance. In NPM's context, it might mean Nginx itself is having trouble or cannot find a healthy upstream.
- Common Causes & Solutions:
- Often similar to 502 Bad Gateway causes if NPM can't reach any healthy backend.
- NPM Itself is Overloaded (Rare for typical self-hosting): If NPM is under extreme load, it might return 503s.
- Check: Server resource usage (CPU, memory, disk I/O).
- Solution: Optimize backend services or scale up server resources.
- Misconfiguration in Custom Nginx Snippets: A faulty custom Nginx configuration could lead to 503s.
- Check: Review custom Nginx configs for errors.
- Solution: Correct or remove the faulty custom config.
-
SSL Certificate Errors (Browser Warnings like "Your connection is not private"):
- Common Causes & Solutions:
- Certificate Expired: Let's Encrypt certificates are valid for 90 days. NPM should auto-renew, but if it fails:
- Check: Certificate expiry date in NPM's SSL Certificates section or browser details.
- Solution: Ensure NPM is running and can reach Let's Encrypt (port 80 open externally). Manually try renewing in NPM if needed, check logs for renewal errors.
- Domain Mismatch (NET::ERR_CERT_COMMON_NAME_INVALID): The certificate is for
example.com
but you're accessingwww.example.com
(and www is not on the cert).- Check: Domain names listed on the certificate.
- Solution: Ensure all required domain names (including www and non-www if used) are on the certificate. Reissue if necessary. For Let's Encrypt, add all names when requesting.
- Untrusted Issuer (NET::ERR_CERT_AUTHORITY_INVALID):
- Using a self-signed certificate (as in Workshop 7). Normal for local/test setups.
- Incomplete certificate chain (missing intermediate certificates).
- Check: Certificate chain in browser details or SSL Labs test.
- Solution: If using a custom cert, ensure you upload the full chain (server cert + intermediates). If Let's Encrypt, this is usually handled.
- DNS Not Propagated Correctly During Issuance: Let's Encrypt couldn't verify domain ownership.
- Solution: Ensure DNS is correct and fully propagated, then try reissuing the certificate.
- Port 80 Blocked for Let's Encrypt HTTP-01 Challenge:
- Solution: Ensure port 80 is open externally to your NPM server.
- Let's Encrypt Rate Limits: Too many failed attempts or requests.
- Solution: Wait for the rate limit to expire (check Let's Encrypt docs, often an hour or a week).
- Mixed Content: The main page loads via HTTPS, but it tries to load resources (images, scripts, CSS) via HTTP. Browsers will block this or show warnings.
- Check: Browser console for mixed content errors.
- Solution: This is an application-level issue. Ensure your backend application serves all content over HTTPS or uses relative URLs. A
Content-Security-Policy: upgrade-insecure-requests;
header (added via custom Nginx config) can help some browsers auto-upgrade, but fixing the app is better.
-
NPM Admin UI Not Loading (e.g., on
http://<server_ip>:81
):- Common Causes & Solutions:
- NPM Container Not Running or Crashing:
- Check:
docker ps -a
. Is the NPM container running? If exited, checkdocker logs <npm_container_name>
. - Solution: Address errors in logs (e.g., database issues, port conflicts). Restart the container.
- Port Conflict for Port 81: Another service on the host might be using port 81.
- Check:
sudo netstat -tulnp | grep :81
. - Solution: Stop the conflicting service or change NPM's admin port in
docker-compose.yml
(e.g.,- '8081:81'
) and update firewall.
- Check:
- Firewall Blocking Port 81:
- Check: Server firewall rules and router port forwarding if accessing remotely.
- Solution: Allow port 81.
- Database Issues (if using external DB): If NPM can't connect to its database, the UI won't load.
- Check: NPM logs for database connection errors. Ensure DB server is running and accessible.
- Solution: Fix database connectivity.
- Corrupted
data
Volume: In rare cases, data corruption could prevent startup.- Solution: Restore from backup.
-
Permission Denied Errors (related to Docker volumes):
- Meaning: Docker or the process inside the container doesn't have correct permissions to read/write to the host-mounted volumes (
./data
,./letsencrypt
). - Common Causes & Solutions:
- Usually occurs if the user ID (UID) or group ID (GID) of the process inside the NPM container doesn't match the ownership/permissions of the
./data
and./letsencrypt
directories on the host, or if AppArmor/SELinux policies are too restrictive. - Solution:
- Ensure the directories on the host are writable by the user that Docker runs as (often root, or the user who owns the Docker socket).
- Try
sudo chown -R <UID>:<GID> ./data ./letsencrypt
where UID/GID match what the container expects (often1000:1000
orroot:root
). Finding the exact UID/GID can sometimes involvedocker exec -u root <npm_container_name> id <process_user>
. - For SQLite, the database file and its directory need to be writable by the user running nginx/php-fpm inside the container.
Reading NPM Logs
Logs are your best friend for troubleshooting.
-
NPM Application & Nginx Logs (via Docker): The primary way to get logs is through Docker:
```bash
docker logs <npm_app_container_name_or_id>

# For continuous logs:
docker logs -f <npm_app_container_name_or_id>

# To see last N lines:
docker logs --tail 100 <npm_app_container_name_or_id>
```

These logs contain:
- NPM application startup messages.
- Let's Encrypt (certbot) activity and errors.
- Nginx access logs (requests processed).
- Nginx error logs (problems Nginx encountered).
-
Log Files within the Container: Nginx logs are typically stored within the
/data/logs/
directory inside the NPM container, which is part of the./data
volume mapped to your host. You can inspect these files directly on your host:./data/logs/proxy-host-X_access.log
./data/logs/proxy-host-X_error.log
./data/logs/letsencrypt.log
./data/logs/nginx_error.log
(general Nginx errors)
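For example, to follow one proxy host's logs directly on the host (a sketch; the numeric ID in the filename depends on your setup, and the command assumes you are in your NPM project directory):

```bash
# Follow access and error logs for proxy host ID 1 (adjust the number to your host)
sudo tail -f ./data/logs/proxy-host-1_access.log ./data/logs/proxy-host-1_error.log
```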
Key Information to Look For in Logs:
- Error messages (lines with
[error]
,[emerg]
,[crit]
). - Specific domain names or IP addresses related to the problem.
- Timestamps to correlate with when an issue occurred.
- For SSL issues, look for "certbot" or "Let's Encrypt" messages.
- For 502/503 errors, Nginx error logs often show "upstream connect error" or "no live upstreams".
Updating Nginx Proxy Manager
Keeping NPM updated is crucial for security and new features. Since it's a Docker image:
-
Backup Your Configuration (Highly Recommended Before Any Update): Follow the backup procedure from Section 9 (archive
./data
and./letsencrypt
directories anddocker-compose.yml
). -
Navigate to Your NPM Docker Compose Directory:
-
Pull the Latest Image: This fetches the newest version of `jc21/nginx-proxy-manager` from Docker Hub (the full command sequence is sketched after this list). Or, if you're not using Docker Compose for some reason (not recommended for NPM): `docker pull jc21/nginx-proxy-manager:latest`
-
Recreate and Start the Container with the New Image: Docker Compose makes this easy. It will stop the old container, remove it, and start a new one using the newly pulled image, while reattaching your existing volumes.
-
Verify:
- Check
docker ps
to ensure the new container is running. - Access the NPM UI and test your proxy hosts.
- Check the NPM version in the UI (often in the footer or an "About" section).
-
(Optional) Clean Up Old Images: Docker keeps old, unused images. You can clean them up periodically:
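Putting the update steps together, the sequence typically looks like this sketch (run from your NPM compose directory; the `/opt/npm` path is assumed from earlier examples):

```bash
cd /opt/npm                  # your NPM compose project directory (assumed path)
sudo docker-compose pull     # fetch the latest jc21/nginx-proxy-manager image
sudo docker-compose up -d    # recreate the container with the new image, keeping volumes
sudo docker image prune -f   # optional: remove old, dangling images
```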
Monitoring NPM
- Basic Checks:
- Is the NPM admin UI accessible?
- Are your proxied services working correctly?
- Check SSL certificate expiry dates periodically in the NPM UI or via external monitoring tools.
- Resource Usage: Monitor the CPU, memory, and network usage of the NPM container. NPM is generally lightweight, but high traffic or many complex custom rules could increase resource use.
- External Monitoring Services: Use uptime monitoring services (e.g., UptimeRobot, Better Uptime, StatusCake - many have free tiers) to check the availability of your public-facing proxied domains. They can alert you if a service goes down.
Security Best Practices for NPM
- Keep NPM Updated: As covered above, updates bring security patches.
- Strong Admin Password for NPM UI: Use a unique, complex password for the NPM admin user.
- Restrict Access to NPM Admin UI (Port 81):
- If possible, use your server's firewall to restrict access to port 81 to only trusted IP addresses (e.g., your home IP, office IP, or VPN IP).
- Example
ufw
:sudo ufw allow from <your_trusted_ip> to any port 81 proto tcp
- This can be tricky if your IP changes often or you need access from anywhere.
- Regularly Review Proxy Host Configurations: Ensure only intended services are exposed. Remove old or unused proxy hosts.
- Review Access Lists: Ensure access controls are appropriate and restrictive where needed.
- Keep Host System Secure: NPM runs on your host server. Keep the host OS updated, use a firewall, and follow general server security practices.
- Secure Docker: Follow Docker security best practices (e.g., run as non-root if possible, secure the Docker socket, use trusted images).
- HTTPS Everywhere: Use SSL for all your proxy hosts. Enable "Force SSL". Consider HSTS once stable.
- Backups: Regular, tested, and off-site backups are non-negotiable.
By following these troubleshooting and maintenance guidelines, you can ensure your Nginx Proxy Manager instance remains a reliable and secure gateway to your self-hosted services.
Workshop Diagnosing a "Bad Gateway" Error
Objective: To intentionally cause a "502 Bad Gateway" error for one of your services, then use NPM logs and other diagnostic steps to identify and resolve the issue.
Prerequisites:
- NPM installed and running.
- At least one working proxy host configured (e.g.,
app1.yourworkshopdomain.com
from a previous workshop, pointing to a backend container likewhoami-app1
). Let's assumeapp1.yourworkshopdomain.com
proxies to a Docker container namedwhoami-app1
which is on the same Docker network as NPM.
Steps:
-
Verify Normal Operation:
- Open your browser and navigate to
https://app1.yourworkshopdomain.com
. - Confirm that it loads correctly, showing the
whoami
output.
-
Simulate Backend Service Failure: We will stop the backend Docker container that
app1.yourworkshopdomain.com
is supposed to proxy to.- SSH into your server where Docker is running.
- Stop the
whoami-app1
container (or whichever backend container your test proxy host uses): - Verify it's stopped:
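Assuming the container name from the earlier workshops, the commands would look like this:

```bash
sudo docker stop whoami-app1           # simulate the backend going down
sudo docker ps -a | grep whoami-app1   # should now show the container as Exited
```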
-
Attempt to Access the Service and Observe the Error:
- Go back to your browser and refresh
https://app1.yourworkshopdomain.com
or try to open it in a new tab. - Expected Result: You should now see a "502 Bad Gateway" error page served by Nginx Proxy Manager.
-
Diagnose the Issue using Troubleshooting Steps:
-
Step A: Check NPM Logs for Clues
- On your server, view the logs for your NPM container (see the command below). Let's assume your NPM container is named `npm_app_workshop`.
- Look for error messages related to the failed request. When NPM tries to connect to the (now stopped) `whoami-app1` container, you should see Nginx error log entries pointing to an upstream connection failure; the exact wording varies, but a "Connection refused", "host not found", or "no live upstreams" message is a strong indicator that NPM couldn't reach the backend.
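The log check itself can be as simple as the following sketch (container name as assumed above):

```bash
# Show recent NPM/Nginx output and filter for upstream-related errors
sudo docker logs --tail 200 npm_app_workshop 2>&1 | grep -iE 'error|upstream'
```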
-
Step B: Verify Backend Service Status (as we did in step 2, but this is part of diagnosis)
- Command:
docker ps -a | grep whoami-app1
- Observation: The container is in an "Exited" state. This confirms the backend is down.
-
Step C: Test Internal Connectivity (if backend appeared to be running but still 502'd)
- If
docker ps
had shownwhoami-app1
as running, the next step would be to test if the NPM container can reach it on the Docker network: - If
whoami-app1
were stopped, thiscurl
command would fail (e.g., "Could not resolve host" or "Connection refused"). Ifwhoami-app1
were running but misconfigured (e.g., listening on a different internal port), this would also help pinpoint it.
-
-
Resolve the Issue: The diagnosis clearly points to the `whoami-app1` container being stopped. Restart it and verify that it is running again (see the commands below).
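Again assuming the container name from the earlier workshops:

```bash
sudo docker start whoami-app1        # bring the backend back up
sudo docker ps | grep whoami-app1    # should now show the container as Up
```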
Re-test Access to the Service:
- Go back to your browser and refresh
https://app1.yourworkshopdomain.com
. - Expected Result: The page should now load correctly, showing the
whoami
output again.
Verification:
- You successfully simulated a backend failure leading to a 502 error.
- You were able to use NPM/Nginx logs to identify clues pointing to an upstream connection problem.
- You confirmed the backend service status.
- You resolved the issue by restarting the backend service, and the proxied service became accessible again.
This workshop provides a practical example of a common troubleshooting workflow for Nginx Proxy Manager. The key is a systematic approach: check client access, check NPM logs, check backend status, and verify network connectivity between NPM and the backend.
Conclusion
Throughout this guide, we've journeyed from the foundational concepts of reverse proxies to the practical installation, configuration, and advanced usage of Nginx Proxy Manager. You've learned how to expose your self-hosted services to the internet in a secure, organized, and efficient manner.
Key Takeaways:
- Simplification:
Nginx Proxy Manager dramatically simplifies the complexities of Nginx configuration, making powerful reverse proxy capabilities accessible to a broader audience. - Security:
You've seen how to secure your applications with SSL/TLS certificates from Let's Encrypt, implement access controls, and add custom security headers, forming crucial layers in your self-hosting defense strategy. - Centralization:
NPM allows you to manage access to multiple diverse services through a single point, using user-friendly domain and subdomain structures. - Flexibility:
From basic HTTP proxying and SSL termination to stream hosting for TCP/UDP services and advanced custom Nginx configurations, NPM caters to a wide range of self-hosting needs. - Maintenance and Troubleshooting:
Understanding how to back up your configuration, update the software, and diagnose common issues using logs are vital skills for long-term reliability.
Self-hosting is a rewarding endeavor that gives you control over your data and applications. Tools like Nginx Proxy Manager are instrumental in making this journey more manageable and secure. The knowledge you've gained here provides a solid foundation not only for using NPM effectively but also for understanding broader web serving and security principles.
Further Exploration:
- Dive Deeper into Nginx:
While NPM abstracts much of Nginx, understanding Nginx's own documentation and capabilities can unlock even more advanced customizations. - Explore More Self-Hosted Applications:
With NPM as your gateway, the world of self-hosted software (Nextcloud, Gitea, Home Assistant, Plex, and countless others) is open to you. - Advanced Networking:
Learn more about Docker networking, firewalls, and potentially VLANs if your home lab grows. - Security Hardening:
Continuously learn about web security best practices, intrusion detection systems (like Fail2Ban), and keeping all components of your self-hosting stack updated.
Remember that the self-hosting landscape is ever-evolving. Keep learning, experimenting (safely, perhaps in a test environment), and engaging with communities to share knowledge and stay informed. Your ability to manage and secure your own digital services is a valuable skill in today's interconnected world.
Thank you for following this guide, and happy self-hosting!