Author | Nejat Hakan
Email | nejat.hakan@outlook.de
PayPal Me | https://paypal.me/nejathakan
Reverse Proxy Caddy
Introduction to Reverse Proxies and Caddy
Welcome to the world of reverse proxies, a cornerstone of modern web infrastructure and an invaluable tool for any self-hoster. In this section, we'll demystify what reverse proxies are, why they are so beneficial, and introduce Caddy, a modern, powerful, and exceptionally user-friendly web server that excels as a reverse proxy. We aim to provide you with a deep understanding, making complex concepts accessible, much like a university course would.
What is a Reverse Proxy?
Imagine a large office building. Visitors don't just wander in and knock on any door they please. Instead, they approach a central reception desk. The receptionist greets them, ascertains who or what department they wish to see, verifies their appointment (if necessary), and then directs them to the correct location. The receptionist might also handle mail, screen visitors for security, and provide general information, ensuring the smooth and secure operation of the building.
In the digital world, a reverse proxy acts much like this receptionist for your web services. It's a server that sits in front of one or more web servers (your self-hosted applications), intercepting requests from clients (e.g., web browsers) on the internet. Instead of clients connecting directly to your application servers, they connect to the reverse proxy. The reverse proxy then forwards these requests to the appropriate backend server based on configured rules. To the client, it appears as if the reverse proxy itself is providing the resource.
Key Benefits of Using a Reverse Proxy:
- Unified Access Point & Simplified URLs:
- You can host multiple websites or services on different internal ports or even different internal machines, but expose them all through a single public IP address and standard ports (80 for HTTP, 443 for HTTPS).
- For example, yourdomain.com/service-a could point to localhost:8080 and yourdomain.com/service-b to localhost:8081; or app1.yourdomain.com and app2.yourdomain.com could point to different internal servers. This makes URLs cleaner and easier for users.
- SSL/TLS Termination:
- This is a huge one for self-hosters. Managing SSL/TLS certificates (which enable HTTPS) can be complex for each individual application. A reverse proxy can handle all SSL/TLS encryption and decryption at a single point.
- Your backend applications can then communicate over unencrypted HTTP on your internal network, simplifying their configuration. The reverse proxy ensures all external communication is encrypted. Caddy, as we'll see, makes this incredibly easy with automatic HTTPS.
- Load Balancing:
- If you have a popular application and want to run multiple instances of it for redundancy or to handle more traffic, a reverse proxy can distribute incoming requests across these instances. This improves performance, availability, and scalability.
- Common strategies include round-robin (distributing requests sequentially) or least connections (sending requests to the server with the fewest active connections).
- Enhanced Security:
- Hiding Backend Server Information: The reverse proxy masks the IP addresses and characteristics of your backend servers, making it harder for attackers to directly target them.
- Request Filtering/Firewalling: Some reverse proxies can be configured to block malicious requests, filter by IP address, or integrate with Web Application Firewalls (WAFs).
- Centralized Authentication/Authorization: You can implement access controls (like password protection or OAuth) at the reverse proxy level, protecting multiple backend applications consistently.
- Caching:
- A reverse proxy can cache static content (like images, CSS, and JavaScript files) or even dynamic content. When a client requests cached content, the proxy serves it directly without bothering the backend server. This reduces load on your applications and speeds up response times for users.
- Compression:
- The reverse proxy can compress responses (e.g., using Gzip or Brotli) before sending them to clients, reducing bandwidth usage and improving load times, especially for users on slower connections.
- Serving Static Content:
- While backend applications handle dynamic content, a reverse proxy can efficiently serve static files (images, CSS, JS) directly, offloading this task from potentially slower application servers.
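To make the unified-access-point idea concrete, here is a minimal Caddyfile sketch; the hostnames and internal ports are placeholders, not part of any real setup:

```
# Two services, one public entry point: Caddy terminates TLS
# and forwards each hostname to its own internal backend.
app1.yourdomain.com {
	reverse_proxy localhost:8080
}

app2.yourdomain.com {
	reverse_proxy localhost:8081
}
```

With a configuration like this, both applications share one public IP address and the standard ports 80/443, while each keeps its own internal port.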
How a Reverse Proxy Differs from a Forward Proxy:
It's common to confuse reverse proxies with forward proxies.
- A forward proxy (often just called a "proxy") sits in front of client computers (e.g., within a company network). When a user tries to access an external website, the request goes through the forward proxy. The forward proxy then fetches the content from the internet and returns it to the client. Forward proxies are used for things like bypassing internet filters, caching content for multiple users within a network, or masking the client's IP address. The client configures itself to use the forward proxy.
- A reverse proxy sits in front of servers. It accepts requests from the internet on behalf of those servers. The server administrator configures the reverse proxy. Clients are generally unaware they are talking to a reverse proxy; they think they are talking directly to the end server.
Think of it this way:
- Forward Proxy: Protects/acts on behalf of clients.
- Reverse Proxy: Protects/acts on behalf of servers.
Introducing Caddy Server
Caddy (often called Caddy Web Server) is an open-source, powerful, and modern web server written in Go. While it can function as a general-purpose web server like Apache or Nginx, it has gained immense popularity, especially in the self-hosting and containerization communities, for its simplicity and robust reverse proxy capabilities.
History and Philosophy:
Caddy was first released in 2015 by Matt Holt. Its core philosophy revolves around several key principles:
- Simplicity: Caddy aims to make web server configuration easy and intuitive. Its primary configuration file, the Caddyfile, is designed to be human-readable and much less verbose than traditional server configurations.
- Security by Default: Caddy prioritizes security. The most notable example is its pioneering feature of Automatic HTTPS. It was one of the first web servers to enable HTTPS by default using Let's Encrypt, automatically obtaining and renewing SSL/TLS certificates for your sites.
- Modern Features: Caddy supports modern protocols like HTTP/2 and HTTP/3 out-of-the-box, providing performance benefits without complex setup.
- Extensibility: Caddy has a modular architecture and can be extended with plugins.
Core Features that Make Caddy Shine:
- Automatic HTTPS: This is Caddy's killer feature. If you have a publicly accessible domain name pointing to your Caddy server, Caddy will automatically:
- Obtain SSL/TLS certificates from Let's Encrypt (or ZeroSSL).
- Renew these certificates before they expire.
- Configure your sites to use HTTPS.
- Redirect HTTP traffic to HTTPS.
All of this happens with minimal to no configuration on your part, drastically lowering the barrier to secure web hosting.
- HTTP/2 and HTTP/3 Support: These newer versions of the HTTP protocol offer significant performance improvements over HTTP/1.1, such as multiplexing, header compression, and server push (for HTTP/2), and reduced latency with QUIC (for HTTP/3). Caddy enables them by default.
- Easy Configuration with the Caddyfile: The Caddyfile is Caddy's native configuration format, designed for ease of use. A simple reverse proxy setup can often be achieved in just a few lines. We'll dive deep into the Caddyfile later.
- Powerful Reverse Proxy Capabilities: Caddy's reverse_proxy directive is flexible and powerful, supporting load balancing, health checks, WebSocket proxying, gRPC proxying, and more.
- Extensible via Plugins: Caddy can be extended by adding plugins. This allows for features like DNS provider integrations (for ACME DNS challenges, enabling wildcard certificates or certificates for internal servers), advanced authentication mechanisms, and custom logging formats. Plugins are compiled into a custom Caddy binary.
- API-Driven Configuration: Caddy has a robust JSON API that allows its configuration to be managed dynamically without downtime. This is particularly useful for automated environments or complex setups.
- Cross-Platform: Caddy is a single, statically-linked binary with no external dependencies (unless you use plugins that require them). It runs on Linux, Windows, macOS, BSD, and more.
Why Choose Caddy Over Nginx or Apache for Self-Hosting?
Nginx and Apache are venerable, powerful, and extremely capable web servers that have powered the internet for decades. They are excellent choices and have vast ecosystems. However, for many self-hosters, especially those newer to web server administration or those who prioritize ease of use and modern features, Caddy presents several advantages:
- Simplicity of Configuration: The Caddyfile is generally considered much simpler and more intuitive than Nginx's configuration syntax or Apache's httpd.conf. Achieving common tasks, especially automatic HTTPS and basic reverse proxying, often requires significantly less configuration in Caddy.
- Automatic HTTPS Built-in: While Nginx and Apache can use Let's Encrypt (typically via Certbot), it's an external tool that needs separate setup and management. Caddy integrates this seamlessly and automatically. This is a major convenience and reduces a common point of failure or misconfiguration.
- Modern Defaults: Caddy enables features like HTTP/2 and attempts HTTP/3 by default. Its security defaults are generally very strong.
- Memory Safety: Being written in Go, Caddy benefits from Go's memory safety features, which can reduce the likelihood of certain types of security vulnerabilities (like buffer overflows) compared to C/C++ based servers, though Nginx and Apache are very mature and well-audited.
- Active Development and Community: Caddy has a very active development cycle and a helpful, growing community.
This isn't to say Caddy is always better. Nginx, for example, is renowned for its raw performance in extremely high-traffic scenarios and has a wider array of third-party modules for very specific, advanced use cases. However, for the vast majority of self-hosting needs, Caddy's blend of power, simplicity, and security makes it an outstanding choice.
Throughout this guide, we will explore how to leverage Caddy's capabilities to build a robust, secure, and manageable self-hosted infrastructure.
Workshop Getting Started with Caddy
This first workshop will guide you through installing Caddy, running it for the first time to serve a simple static file, and understanding the very basics of its operation and Caddyfile syntax.
Prerequisites:
- A computer running Linux, macOS, or Windows. We'll provide Linux-focused commands, but the principles apply elsewhere. For Windows, you can use PowerShell or Command Prompt.
- Access to a terminal or command prompt.
- (Optional but recommended) The curl utility for testing. Most systems have it pre-installed.
Step 1: Installing Caddy
Caddy offers several installation methods. We'll cover two common ones for Linux: using a package manager (if available for your distribution) and downloading the pre-compiled binary. Choose the one most appropriate for your system.
- Option A: Using a Package Manager (Debian/Ubuntu Example). Caddy maintains official repositories for popular distributions. This is often the easiest way to install and keep Caddy updated.
  - Install prerequisite packages.
  - Add Caddy's GPG key.
  - Add the Caddy repository.
  - Update the package list and install Caddy.

  This method usually installs Caddy as a systemd service, meaning it can be managed with systemctl (e.g., sudo systemctl start caddy). For this initial workshop, we'll often run Caddy directly from the command line to see its output. If installed as a service, you might want to stop it (sudo systemctl stop caddy) before running Caddy manually for these exercises, to avoid port conflicts on the default ports (80, 443).
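For reference, the commands behind those four steps look like the following, taken from Caddy's install documentation as of this writing. The repository URL and keyring path may change over time, so check https://caddyserver.com/docs/install before copying:

```
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
```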
- Option B: Downloading the Pre-compiled Binary (Generic Linux/macOS/Windows). This method is universal and gives you direct control over the Caddy executable.
- Go to the Caddy download page: https://caddyserver.com/download
- Select your operating system and architecture.
- Download the binary. You can use curl or wget from the command line on Linux/macOS. For example, for a 64-bit Linux system:

```bash
# Replace with the latest version and correct architecture if needed
wget "https://caddyserver.com/api/download?os=linux&arch=amd64" -O caddy
# Or using curl:
# curl -Lo caddy "https://caddyserver.com/api/download?os=linux&arch=amd64"
```

For Windows, you'd typically download the .exe file from the website using your browser.
- Make the binary executable (Linux/macOS). (Windows .exe files are already executable.)
- (Optional) Move the binary to a location in your system's PATH, like /usr/local/bin/ (Linux/macOS), or add its directory to your Windows PATH environment variable, so you can run caddy from any directory. If you don't move it into your PATH, you'll need to run it using ./caddy (Linux/macOS) or caddy.exe (Windows) from the directory where you downloaded it.
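Assuming you downloaded the binary as caddy into the current directory, those last two steps might look like this on Linux/macOS:

```
chmod +x caddy                 # make the binary executable
sudo mv caddy /usr/local/bin/  # optional: put it on your PATH
caddy version                  # verify it runs
```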
Step 2: Running Caddy for the First Time (Simple Static File Serving via Command)
Caddy can serve static files from the current directory with a very simple command, useful for quick testing or sharing.
- Create a new directory for our test and navigate into it.
- Create a simple index.html file in this directory.
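The commands for these two steps were lost in formatting; a minimal sketch (the page text matches what the browser check later in this step expects) could be:

```shell
# Create the workshop directory and switch into it
mkdir -p ~/caddy_intro_workshop
cd ~/caddy_intro_workshop

# Create a simple index page
echo "Hello from Caddy! (via command)" > index.html
```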
- Now, run Caddy with the file-server subcommand. This tells Caddy to act as a static file server for the current directory:

```bash
# If Caddy is in your PATH
caddy file-server --browse --listen :2015

# If Caddy is in the current directory (Linux/macOS)
# ./caddy file-server --browse --listen :2015
```

  - caddy file-server: This is a Caddy command that starts a simple static file server.
  - --browse: This flag enables directory listings if no index file (like index.html) is found. Even with an index file, it doesn't hurt.
  - --listen :2015: This tells Caddy to listen on port 2015. We use a non-standard port (above 1023) to avoid needing root/administrator privileges and to prevent conflicts with other services potentially using the standard ports 80/443.
You should see output similar to this (the log format might vary slightly with Caddy versions, but the core information will be there):

```
{"level":"info","ts":1678886401.12345,"msg":"using Caddyfile adapter to configure Caddy"}
{"level":"warn","ts":1678886401.12355,"msg":"Caddyfile input is not formatted; run 'caddy fmt --overwrite' to fix inconsistencies","adapter":"Caddyfile"}
{"level":"info","ts":1678886401.12365,"logger":"admin","msg":"admin endpoint started","address":"localhost:2019","enforce_origin":false,"origins":["//localhost:2019","//[::1]:2019","//127.0.0.1:2019"]}
{"level":"info","ts":1678886401.12375,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":2015}
{"level":"info","ts":1678886401.12385,"logger":"tls.cache.maintenance","msg":"started background certificate maintenance","cache":"0xSOMEMEMORYADDRESS"}
{"level":"info","ts":1678886401.12395,"msg":"autosaved config (load with --resume)","file":"/home/user/.config/caddy/autosave.json"} // Path varies by OS
{"level":"info","ts":1678886401.12405,"msg":"serving initial configuration"}
```

The important part is that Caddy is running and listening. The line "server is listening only on the HTTP port..." confirms it's serving HTTP on port 2015 and not attempting automatic HTTPS (which requires a domain name).
- Open your web browser and navigate to http://localhost:2015. You should see your message: "Hello from Caddy! (via command)".
- Check the terminal where Caddy is running. You should see new log lines indicating an incoming request and Caddy handling it, similar to:

```
{"level":"info","ts":1678886460.23456,"logger":"http.log.access.log0","msg":"handled request","request":{"remote_ip":"127.0.0.1","remote_port":"54321","proto":"HTTP/1.1","method":"GET","host":"localhost:2015","uri":"/","headers":{"User-Agent":["Mozilla/5.0..."],"Accept":["text/html..."]}},"bytes_read":0,"user_id":"","duration":0.00012345,"size":45,"status":200,"resp_headers":{"Server":["Caddy"],"Content-Type":["text/html; charset=utf-8"],"Etag":["\"...\""],"Last-Modified":["..."],"Content-Length":["45"]}}
```

Caddy uses structured JSON logging by default. This is excellent for automated processing but can be dense for humans to read at first. We'll touch on customizing logs later.
- Press Ctrl+C in the terminal to stop Caddy.
Step 3: Understanding the Caddyfile
While command-line flags are handy for simple tasks, most Caddy configurations are defined in a text file, conventionally named Caddyfile (capital 'C', no extension).
- In your ~/caddy_intro_workshop directory, create a file named Caddyfile.
- Open Caddyfile in a text editor (like VS Code, nano, Notepad++, etc.) and add the following content:

```
# This is my first Caddyfile
# It serves static files from the current directory on port 2016

localhost:2016 {
	# Set the root directory for files
	root * .

	# Enable static file serving, with directory browsing
	# (browsing is optional, but good for development)
	file_server browse
}
```
Let's break this down meticulously:
- Lines starting with # are comments and are ignored by Caddy.
- localhost:2016: This is the site address (or site label). It tells Caddy which incoming requests this block of configuration should handle.
  - localhost: Matches requests where the Host header is localhost.
  - :2016: Specifies that Caddy should listen on port 2016 for this site. If you omit the port, Caddy uses its defaults: 443 for HTTPS, with port 80 serving HTTP->HTTPS redirects. This applies to localhost too, which gets a certificate from Caddy's internal CA.
- { ... }: The curly braces define a site block. All directives inside these braces apply only to the site defined by localhost:2016.
  - Indentation (usually spaces or tabs) inside the site block is for readability and is good practice, but Caddy's Caddyfile parser is quite flexible.
- root * .: This is a directive.
  - root: The name of the directive. It specifies the root directory from which to serve files for the site.
  - *: This is a request matcher. * is a wildcard matcher that matches all requests to this site. We'll learn about more specific matchers later.
  - .: The argument to the root directive. In this context, . means the current working directory (the directory from which Caddy is run, or where the Caddyfile is located, depending on how Caddy is started).
- file_server: This is another directive. It enables the static file server module. It will look for an index.html file by default.
- browse: In Caddy v2 this is an option of file_server (written file_server browse), not a standalone directive. It enables directory listing: if you request a directory that doesn't have an index.html file, Caddy will show a list of the files in that directory.
- Save the Caddyfile. Ensure your index.html from Step 2 is still in the ~/caddy_intro_workshop directory.
Step 4: Running Caddy with a Caddyfile
When you run caddy run (as opposed to a one-off subcommand like file-server), Caddy looks for a file named Caddyfile in the current directory by default and uses it for configuration.
- Ensure you are still in the ~/caddy_intro_workshop directory (where your Caddyfile and index.html reside).
- Run Caddy using the run subcommand:

```bash
# If Caddy is in your PATH
caddy run

# If Caddy is in the current directory (Linux/macOS)
# ./caddy run
```

  - caddy run: This command loads the configuration (from the Caddyfile in the current directory by default), starts the server, and blocks (keeps running in the foreground, printing logs) until you interrupt it (e.g., with Ctrl+C).

You'll see startup logs. Look for lines indicating it's using your Caddyfile:

```
{"level":"info","ts":1678886402.23456,"msg":"using Caddyfile adapter to configure Caddy","adapter":"Caddyfile","path":"Caddyfile"}
...
{"level":"info","ts":1678886402.23678,"logger":"http","msg":"server running","server_name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":1678886402.23688,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
...
{"level":"info","ts":1678886402.23789,"logger":"http.auto_https","msg":"server is listening only on the HTTP port, so no automatic HTTPS will be applied to this server","server_name":"srv0","http_port":2016}
```

(The HTTP->HTTPS redirect line might appear but won't effectively apply to localhost:2016 without explicit TLS.) The key is that it found and parsed your Caddyfile and is now listening on port 2016.
. -
Open your browser and go to
http://localhost:2016
. You should again see "Hello from Caddy! (via command)" (because it's serving theindex.html
file from the current directory, as configured). -
To test the
browse
directive, renameindex.html
temporarily:Now, refresh# Linux/macOS mv index.html old_index.html # Windows (PowerShell) # Rename-Item -Path "index.html" -NewName "old_index.html"
http://localhost:2016
in your browser. You should see a directory listing showingCaddyfile
andold_index.html
. Rename it back to see the content again: -
Stop Caddy with
Ctrl+C
in the terminal.
Step 5: Validating and Formatting Caddyfiles
Caddy provides helpful command-line tools for working with Caddyfiles, especially as they grow more complex.
- Validating Configuration: Before starting Caddy, especially with a new or modified Caddyfile, it's wise to check its syntax. The caddy validate command does this. Navigate to your ~/caddy_intro_workshop directory if you're not already there.

```bash
# If Caddy is in your PATH
caddy validate --config Caddyfile

# If Caddy is in the current directory (Linux/macOS)
# ./caddy validate --config Caddyfile
```

If your Caddyfile is valid, the command reports a valid configuration and exits successfully (exit code 0). Let's introduce an error to see what happens: edit your Caddyfile, change file_server to file_serveer, save it, and run caddy validate again. Caddy tells you the file, the line number, and the problem, which is incredibly helpful for debugging. Correct the typo back to file_server, save, and run caddy validate once more to confirm it's fixed.
- Formatting Caddyfiles: Caddy has a standard format for Caddyfiles. The caddy fmt command reformats your Caddyfile according to these conventions (consistent indentation, spacing, etc.), making Caddyfiles easier to read and share. Note that caddy fmt takes the file path as a positional argument:

```bash
# Print the formatted result to stdout (dry run)
caddy fmt Caddyfile

# To reformat the file in place
caddy fmt --overwrite Caddyfile
```

Run caddy fmt --overwrite Caddyfile. If your file wasn't perfectly formatted, it will adjust it. Open your Caddyfile to see whether any changes were made (e.g., to spacing). It's good practice to run caddy fmt after making significant changes.
This workshop covered the absolute basics: installing Caddy, serving files with a command, creating a simple Caddyfile, running Caddy with it, and using validation and formatting tools. You're now ready to explore Caddy's core purpose for many self-hosters: reverse proxying.
1. Basic Caddy Usage
Now that you have a foundational understanding of Caddy and can run it, let's delve into the core concepts you'll use most frequently for self-hosting. We'll explore the Caddyfile in more detail and set up your first reverse proxy.
The Caddyfile In-Depth
The Caddyfile is Caddy's native and most user-friendly configuration format. Its design prioritizes simplicity and readability. Understanding its structure, common directives, how requests are matched, and how to use placeholders is key to effectively using Caddy.
Overall Structure:
A Caddyfile is typically composed of one or more site blocks. Each site block defines how Caddy should handle requests for a specific site address.
```
# Global options block (optional, at the very top)
{
    # Global settings like admin endpoint, email for ACME, etc.
    # admin off
    # email your-email@example.com
}

site_address_1 {
    # Directives for site_address_1
    directive1 arg1 arg2
    directive2 {
        sub_directive_option value
    }
    # More directives...
}

site_address_2, site_address_3 {
    # Directives for site_address_2 AND site_address_3
    # (you can define multiple site addresses for one block)
    directive_x
}

# Snippets (reusable configuration blocks)
(my_common_settings) {
    header Cache-Control "public, max-age=3600"
    encode zstd gzip
}

site_address_4 {
    import my_common_settings
    # Other directives for site_address_4
}
```
- Global Options Block: An optional block at the very top of the Caddyfile, enclosed in curly braces {} without a site address preceding it. This is where you configure global Caddy settings, such as:
  - debug: Enables debug logging.
  - admin off or admin localhost:2020: Configures Caddy's admin API endpoint (default is localhost:2019).
  - email your-email@example.com: Sets the email address used for ACME (Let's Encrypt) certificate registration. Essential for public sites.
  - acme_dns <provider_name> ...: Configures a DNS provider for ACME DNS challenges (for wildcard certificates or internal-only ACME).
  - default_sni <hostname>: Sets a default TLS Server Name Indication if a client doesn't provide one.
- Site Address(es): Each site block starts with one or more site addresses. These tell Caddy which requests the subsequent directives apply to. Examples:
  - example.com: Handles requests for http://example.com and https://example.com. Caddy automatically enables HTTPS for public domain names.
  - localhost: Handles requests for http://localhost.
  - :8080: Handles requests to any hostname on port 8080.
  - http://example.com: Explicitly handles HTTP requests for example.com.
  - sub.example.com, *.example.com: Handles sub.example.com and any other subdomain of example.com.
  - If a port is specified (e.g., example.com:8080), Caddy listens on that port for that site. If no port is specified for a public domain, Caddy defaults to 80 (for the HTTP->HTTPS redirect) and 443 (for HTTPS).
- Directives: These are the instructions that tell Caddy what to do with a request.
  - A directive starts with its name (e.g., reverse_proxy, file_server, header, respond).
  - It can be followed by arguments (e.g., localhost:9000 for reverse_proxy).
  - Some directives can have a sub-block of options enclosed in {} for more detailed configuration.
  - The order of most directives within a site block generally does not matter for execution priority. Caddy has a predefined order for directive handlers (e.g., try_files is usually evaluated before file_server). This can be a point of confusion initially but is designed for sensible defaults. You can often influence order using matchers or more specific directive blocks.
- Matchers: Matchers allow you to apply directives conditionally based on characteristics of the incoming request.
  - They can precede a directive or a block of directives.
  - Syntax: either an inline matcher token placed before a directive's arguments (for example a path like /api/*), or a named matcher defined as @name { ... } and then referenced as @name.
  - Common matcher types:
    - path /some/path/*: Matches requests whose URI path starts with /some/path/. * is a wildcard.
    - host sub.example.com: Matches requests for a specific host.
    - method GET POST: Matches requests using the GET or POST HTTP methods.
    - header X-Custom-Header value: Matches if a specific header is present with a given value.
    - not <matcher>: Negates another matcher.
  - If a directive has no explicit matcher token, it often implicitly matches all requests for that site (like *).
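As a sketch of the two matcher syntaxes, here is a hypothetical site block using both an inline path matcher and a named matcher (the hostname and port are placeholders):

```
example.com {
	# Inline path matcher
	redir /old-blog/* https://blog.example.com{uri} permanent

	# Named matcher: combine several conditions under @api
	@api {
		path /api/*
		method GET POST
	}
	reverse_proxy @api localhost:9000
}
```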
- Placeholders (Variables): Caddy provides many placeholders that you can use in directive arguments to insert dynamic values from the request or environment.
  - Examples: {http.request.host}, {http.request.uri.path}, {http.request.remote}, {http.vars.my_var}.
  - These are extremely powerful for dynamic configurations, logging, and header manipulation.
- Snippets: Reusable blocks of configuration, defined with (snippet_name) { ... } and included in site blocks with import snippet_name. This helps keep your Caddyfile DRY (Don't Repeat Yourself).
Common Directives (Basic Set):
We'll cover these and more as we go, but here's an initial list:
- reverse_proxy <upstreams...>: The core directive for proxying requests to backend services. <upstreams...> can be one or more backend addresses like localhost:8000 or 192.168.1.10:3000.
- file_server [browse]: Serves static files from the root directory; browse enables directory listings.
- root <matcher> <path>: Sets the root directory for file operations for requests matching <matcher>. If no matcher is given, * is implied.
- respond <text_or_status> [status]: Responds directly with the given text or HTTP status code, bypassing other handlers.
- redir <to> [code]: Redirects the client to a different URL.
- log: Enables access logging for the site (per-site access logs are off until you add this directive). Can be customized.
- encode <formats...>: Enables HTTP response compression (e.g., encode zstd gzip). Compression is not applied unless you add this directive.
- header <matcher> <field> [value]: Manipulates response headers.
  - header Connection "Upgrade" (adds/sets a response header)
  - header -Server (removes the Server response header)
  - request_header <field> [value] (manipulates request headers sent to the backend)
- handle <matcher> { ... }: A block of directives that only applies if the matcher is satisfied. handle blocks are mutually exclusive and tried in order of appearance in the Caddyfile.
- handle_path <path_prefix> { ... }: A specialized handle that strips the prefix from the URL path before processing the directives within its block.
- handle_errors { ... }: Defines how to handle errors (e.g., 404, 500) by serving custom error pages or proxying to an error-handling service.
- tls <email_address> | internal | <cert_file> <key_file>: Configures TLS/HTTPS.
  - tls your-email@example.com: Enables automatic HTTPS with Let's Encrypt using this email (often not needed if the email is set in the global options).
  - tls internal: Uses Caddy's internal, self-signed certificate authority (useful for local development).
  - tls cert.pem key.pem: Uses a manually provided certificate and key.
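A hypothetical site block combining several of the directives above (the domain, port, and paths are placeholders, not a definitive recipe):

```
www.example.com {
	# Canonicalize to the bare domain
	redir https://example.com{uri} permanent
}

example.com {
	# Compress responses
	encode zstd gzip

	# Hide the Server header
	header -Server

	# Answer health checks directly, without touching the backend
	respond /healthz 200

	# Everything else goes to the app
	reverse_proxy localhost:3000
}
```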
Understanding the Caddyfile syntax and common directives is 80% of the way to mastering basic Caddy usage. The best way to learn is by doing.
Workshop Your First Reverse Proxy
In this workshop, we'll set up Caddy to act as a reverse proxy for a simple backend application. For simplicity, we'll use Python's built-in HTTP server as our "backend service." This service will run on a local port, and Caddy will make it accessible via a different port or, eventually, a domain name.
Prerequisites:
- Caddy installed (from the previous workshop).
- Python 3 installed (most Linux and macOS systems have it; Windows users can install it from python.org).
Step 1: Create and Run a Simple Backend Service
We need something for Caddy to proxy to. Python's http.server module is perfect for a quick demonstration.
- Create a new directory for this workshop and navigate into it.
- Inside ~/caddy_rp_workshop, create a dummy file, backend_page.html, that our backend service will serve.
- Open a new terminal window or tab. This is important because our backend service needs to keep running while we configure and run Caddy in the other terminal.
- In this new terminal, navigate to the ~/caddy_rp_workshop directory (cd ~/caddy_rp_workshop).
- Start the Python HTTP server, telling it to listen on port 8000:

```bash
# Python 3
python3 -m http.server 8000

# If python3 isn't found, try python
# python -m http.server 8000
```

You should see output like:

```
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
```

This server is now running and serving files from the ~/caddy_rp_workshop directory on port 8000.
- Verify the backend service: Open your web browser and go to http://localhost:8000/backend_page.html. You should see "Hello from the Backend Service!". If you go to http://localhost:8000/, you'll see a directory listing (Python's server provides this by default). Keep this Python server running in its terminal.
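The directory- and file-creation commands for the first two steps above didn't survive formatting; a minimal sketch (the page text matches what the verification step expects) could be:

```shell
# Create the workshop directory and switch into it
mkdir -p ~/caddy_rp_workshop
cd ~/caddy_rp_workshop

# Create the dummy page the backend will serve
echo "<h1>Hello from the Backend Service!</h1>" > backend_page.html
```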
Step 2: Create the Caddyfile for Reverse Proxying
Now, let's go back to your original terminal window/tab.
- Ensure you are in the
~/caddy_rp_workshop
directory. -
Create a `Caddyfile` in this directory with the following content:
```
# Caddyfile for our first reverse proxy
localhost:8080 {
    # This tells Caddy to proxy all requests for localhost:8080
    # to our backend service running on localhost:8000
    reverse_proxy localhost:8000

    # Optional: Add a log to see what's happening
    log {
        output stdout  # Print logs to the terminal
        format console # More human-readable format for console
    }
}
```
Let's dissect this
Caddyfile
:localhost:8080
: This is our site address. Caddy will listen on port 8080. Users will access our service viahttp://localhost:8080
.reverse_proxy localhost:8000
: This is the key directive.reverse_proxy
: The directive name.localhost:8000
: The upstream address. This is where Caddy will forward the requests it receives onlocalhost:8080
. It's the address of our Python backend service.
log { ... }
: This block customizes logging for this site.output stdout
: Tells Caddy to print log entries to standard output (the terminal), which is useful when running Caddy directly withcaddy run
.format console
: Changes the log format from the default JSON to a more human-readable single-line format, which is nice for development.
-
Save the
Caddyfile
.
Step 3: Run Caddy and Test the Reverse Proxy
- In your original terminal (where you created the
Caddyfile
), make sure you are in the~/caddy_rp_workshop
directory. -
Validate the Caddyfile (good practice!):
```shell
caddy validate
```
If there are no errors, proceed. -
Run Caddy:
```shell
caddy run
```
You'll see Caddy's startup logs. It should indicate it's listening on port 8080. -
Test the reverse proxy:
- Open your web browser and navigate to
http://localhost:8080/backend_page.html
. - You should see "Hello from the Backend Service!". This content is being served by the Python server on port 8000, but you accessed it through Caddy on port 8080. Caddy proxied your request!
- Try accessing
http://localhost:8080/
. You should see the directory listing from the Python server.
-
Examine the logs:
- Caddy's terminal (original terminal): You should see access logs from Caddy for requests to port 8080; with the `format console` option, they appear as compact, human-readable lines.
- Python server's terminal (the new terminal): You should also see log entries here, but notice the remote address in these logs might be `127.0.0.1` (Caddy's address), not your browser's actual IP. This is because, from the backend's perspective, the request is coming from Caddy. Caddy typically adds headers like `X-Forwarded-For` to tell the backend the original client's IP.
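As an aside, a backend can recover the original client IP from that header. The sketch below is not part of the workshop; the `client_ip` helper and `EchoIPHandler` names are illustrative, not Caddy or stdlib APIs:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def client_ip(xff_header, peer_ip):
    """Original client IP: leftmost X-Forwarded-For entry, else the direct peer."""
    if xff_header:
        # XFF can be a comma-separated chain of proxies; the client comes first.
        return xff_header.split(",")[0].strip()
    return peer_ip

class EchoIPHandler(BaseHTTPRequestHandler):
    """Tiny backend that reports which client IP it believes it is serving."""
    def do_GET(self):
        ip = client_ip(self.headers.get("X-Forwarded-For"),
                       self.client_address[0])
        body = f"client ip: {ip}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To try it behind Caddy instead of python3 -m http.server:
# HTTPServer(("127.0.0.1", 8000), EchoIPHandler).serve_forever()
```

Only trust this header when the request actually arrived through your proxy; a client that can reach the backend directly can forge it.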
Step 4: Understanding What Happened
- Your browser sent a request to
http://localhost:8080/backend_page.html
. - Caddy, listening on port 8080, received this request.
- The
reverse_proxy localhost:8000
directive told Caddy to forward this request to the server running atlocalhost:8000
. - Caddy made a new HTTP request to
http://localhost:8000/backend_page.html
. - The Python server on port 8000 received Caddy's request, found
backend_page.html
, and sent back the content ("Hello from the Backend Service!"). - Caddy received the response from the Python server.
- Caddy forwarded this response back to your browser.
You've successfully set up a basic reverse proxy! Caddy is acting as an intermediary, shielding your backend application.
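The URL rewriting in steps 3 and 4 can be sketched in a few lines. This is purely conceptual; the `upstream_url` helper is a hypothetical illustration, not Caddy code:

```python
from urllib.parse import urlsplit, urlunsplit

def upstream_url(request_url, upstream="localhost:8000"):
    """Rebuild an incoming request URL so it targets the upstream instead."""
    parts = urlsplit(request_url)
    # Keep the path and query string; swap in the upstream host and plain HTTP.
    return urlunsplit(("http", upstream, parts.path, parts.query, ""))
```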
Step 5: Stop the Services
- In Caddy's terminal, press
Ctrl+C
to stop Caddy. - In the Python server's terminal, press
Ctrl+C
to stop the backend service.
This workshop demonstrated the fundamental reverse_proxy
directive. In real-world scenarios, your backend service would be a web application (e.g., a Node.js app, a Django/Flask app, a Docker container running a service), and Caddy would make it accessible securely and manageably.
Serving Static Sites with Caddy
While Caddy excels as a reverse proxy, it's also a very capable static web server. This is useful for hosting documentation, personal websites, or the frontend of single-page applications (SPAs). We've already touched on this with file_server
in the introductory workshop, but let's formalize it.
Key Directives:
root <matcher> <directory>
:- This is fundamental. It specifies the document root – the directory on your server's filesystem where Caddy will look for files to serve.
- Example:
root * /var/www/mysite
tells Caddy to serve files from/var/www/mysite
for all requests to this site block. - The path can be absolute (e.g.,
/srv/html
) or relative to Caddy's working directory or Caddyfile location (e.g.,.
for current directory,public
for a subdirectory namedpublic
). Using absolute paths is generally more robust for production setups.
file_server [browse]
:- This directive enables the static file serving module.
- When a request comes in, Caddy (with
file_server
enabled) will look for a corresponding file in theroot
directory. For example, a request for/about/contact.html
will look for<root>/about/contact.html
. - If a request is for a directory (e.g.,
/blog/
),file_server
will try to serve an index file from that directory. By default, it looks forindex.html
andindex.txt
. - The optional
browse
sub-directive enables directory listings. If no index file is found in a requested directory, Caddy will display a list of its contents. This is often useful for development or private file sharing but usually disabled for public production sites.
try_files <files...>
:- This directive is extremely useful, especially for Single Page Applications (SPAs) or when you want to have "clean URLs" without file extensions.
- It tries to find files in the order listed. If a file is found, it's served. If none are found, typically a 404 is returned, or it can fall back to another handler or a default file.
- Syntax:
try_files <file1> <file2> ... <fallback>
- Example for a SPA (like React, Vue, Angular):
try_files {path} {path}/index.html /index.html
{path}
: Placeholder for the requested URI path.- This tries to serve the exact file if it exists (e.g.,
/assets/image.png
). - If not, it tries to serve
index.html
from a subdirectory if the request was for a directory (e.g.,/blog/
might serve/blog/index.html
). - If neither is found, it serves
/index.html
from the root. This allows your SPA's JavaScript router to handle all non-asset routes.
handle_path <path_prefix> { ... }
:- Useful if your static assets are in a subdirectory but you want to serve them from the root of a URL path.
- Example:
```
handle_path /static/* {
    root * /srv/app/static_assets
    file_server
}
```
- Requests starting with
/static/
will have/static/
stripped from the path, and thenfile_server
will look for the remaining path in/srv/app/static_assets
. So,/static/css/style.css
would serve/srv/app/static_assets/css/style.css
.
encode zstd gzip
:- Enables response compression. Caddy does this by default with common algorithms. This reduces the size of text-based assets (HTML, CSS, JS) sent to the client, speeding up load times. You rarely need to configure this manually unless you want specific algorithms or levels.
header <field> <value>
:- Often used to set caching headers for static assets to improve performance for returning visitors.
- Example:
header /assets/* Cache-Control "public, max-age=31536000, immutable"
- This tells browsers to cache any file under
/assets/
for a long time.
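To build intuition for the `try_files {path} {path}/index.html /index.html` pattern described earlier, here is a small sketch of the same resolution order in Python. The `resolve` function is illustrative only, not Caddy's internals:

```python
import os

def resolve(root, request_path):
    """Mimic try_files {path} {path}/index.html /index.html for a SPA."""
    candidates = [
        request_path,                              # exact file, e.g. /assets/app.js
        request_path.rstrip("/") + "/index.html",  # directory index file
        "/index.html",                             # SPA fallback to the app shell
    ]
    for candidate in candidates:
        if os.path.isfile(os.path.join(root, candidate.lstrip("/"))):
            return candidate
    return None  # nothing matched; Caddy would respond 404
```

Static assets are served as-is, while any unknown route falls through to `/index.html`, where the SPA's JavaScript router takes over.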
Typical Static Site Caddyfile:
mysite.example.com {
# Set the root directory for your website's files
root * /var/www/mysite.example.com/public_html
# Enable static file serving
file_server
# Enable compression (on by default, but explicit doesn't hurt)
encode zstd gzip
# Optional: Custom error pages
handle_errors {
rewrite * /error.html # Show error.html for any error
file_server
}
# Optional: Logging
log {
output file /var/log/caddy/mysite.access.log
}
# Automatic HTTPS is enabled by default for public domains!
# If this were an internal site or for development, you might add:
# tls internal
}
This setup is robust enough for many static websites. Caddy handles the complexities of HTTPS automatically for public domains.
Workshop Hosting a Static Website with a Custom Domain (Simulated)
In this workshop, we'll create a simple static website and configure Caddy to serve it using a custom domain name. Since we might not have a real public domain and DNS set up for this exercise, we'll simulate it using the local `hosts` file. This technique is very useful for local development and testing.
Prerequisites:
- Caddy installed.
- Administrator/root privileges to edit your system's
hosts
file. - A text editor.
Step 1: Create a Simple Static Website
-
Create a directory for your website files:
```shell
# Linux/macOS
mkdir -p ~/my_static_site/public
cd ~/my_static_site/public
# Windows (PowerShell)
# mkdir ~\my_static_site\public -Force
# cd ~\my_static_site\public
```
We're creating a `public` subdirectory. It's good practice to keep your website files in a dedicated folder like `public` or `html` inside your project directory, and then point Caddy's `root` to this `public` folder. -
Create an `index.html` file in the `~/my_static_site/public` directory:
```html
<!-- ~/my_static_site/public/index.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>My Awesome Static Site</title>
  <link rel="stylesheet" href="css/style.css">
</head>
<body>
  <header>
    <h1>Welcome to My Awesome Static Site!</h1>
  </header>
  <nav>
    <a href="/">Home</a>
    <a href="/about.html">About Us</a>
  </nav>
  <main>
    <p>This site is proudly served by Caddy.</p>
    <img src="images/caddy_logo.svg" alt="Caddy Logo" width="150">
    <p>(You'll need to download a Caddy logo for this to show up, or use any other image!)</p>
  </main>
  <footer>
    <p>© 2023 My Awesome Site</p>
  </footer>
</body>
</html>
```
-
Create a subdirectory for CSS and an `about.html` page:
```shell
mkdir css images
```
-
Create a `css/style.css` file in `~/my_static_site/public/css/`:
```css
/* ~/my_static_site/public/css/style.css */
body { font-family: sans-serif; line-height: 1.6; margin: 0; padding: 0; background-color: #f4f4f4; color: #333; }
header, footer { background-color: #333; color: #fff; padding: 1em 0; text-align: center; }
nav { text-align: center; padding: 0.5em; background: #444; }
nav a { color: white; margin: 0 10px; text-decoration: none; }
main { padding: 20px; text-align: center; }
img { margin-top: 20px; }
```
-
Create an `about.html` file in `~/my_static_site/public/`:
```html
<!-- ~/my_static_site/public/about.html -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>About Us - My Awesome Static Site</title>
  <link rel="stylesheet" href="css/style.css">
</head>
<body>
  <header>
    <h1>About My Awesome Static Site</h1>
  </header>
  <nav>
    <a href="/">Home</a>
    <a href="/about.html">About Us</a>
  </nav>
  <main>
    <p>This is a simple static site created to demonstrate Caddy's file serving capabilities.</p>
    <p>We are learning about self-hosting and web servers!</p>
  </main>
  <footer>
    <p>© 2023 My Awesome Site</p>
  </footer>
</body>
</html>
```
-
(Optional) Download the Caddy logo (or any SVG/PNG image) and save it as `caddy_logo.svg` (or adjust the `<img>` tag in `index.html`) inside the `~/my_static_site/public/images/` directory. You can find Caddy logos on their official website or GitHub repository. For simplicity, if you don't add an image, the alt text will show.
You now have a basic multi-page static website structure.
Step 2: Modify Your hosts
File
The hosts
file on your computer allows you to manually map domain names to IP addresses, bypassing public DNS servers for those specific domains. We'll use it to make mysite.local
(a domain that doesn't exist publicly) point to your local machine (127.0.0.1
).
-
Location of the
hosts
file:- Linux/macOS:
/etc/hosts
- Windows:
C:\Windows\System32\drivers\etc\hosts
-
Editing the
hosts
file: You'll need administrator/root privileges.
- Linux/macOS: Open a terminal and use a command-line editor like `nano` or `vim`:
```shell
sudo nano /etc/hosts
```
File -> Open
and navigate toC:\Windows\System32\drivers\etc\
. You might need to change the file type filter from "Text Documents (.txt)" to "All Files (.*)" to see thehosts
file.
-
Add the entry: At the end of the `hosts` file, add the following line:
```
127.0.0.1    mysite.local
```
This tells your computer that whenever you try to access `mysite.local`, it should go to the IP address `127.0.0.1` (your own machine). -
Save the file and exit the editor.
- Nano:
Ctrl+X
, thenY
, thenEnter
. - Vim:
Esc
, then:wq
, thenEnter
. - Notepad:
File -> Save
.
-
Verification (optional): You can try to ping `mysite.local` in your terminal:
```shell
ping mysite.local
```
It should show that it's pinging `127.0.0.1`. Press `Ctrl+C` to stop pinging. Note: Some systems, especially Windows, might heavily cache DNS. If changes don't seem to take effect immediately, a reboot or flushing the DNS cache (`ipconfig /flushdns` in a Windows command prompt) might be necessary, though often it's quick.
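For the curious, the format the resolver reads from this file is simple: an IP address first, then one or more hostnames, with `#` starting a comment. A tiny illustrative parser (the `parse_hosts_line` name is our own):

```python
def parse_hosts_line(line):
    """Parse one hosts-file line into (ip, [names]); None for blanks/comments."""
    line = line.split("#", 1)[0].strip()  # drop trailing comments
    if not line:
        return None
    ip, *names = line.split()
    return ip, names
```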
Step 3: Create the Caddyfile
Now, create the Caddyfile
to serve your static site on mysite.local
. This Caddyfile
should be placed in the ~/my_static_site/
directory (one level above public
).
-
Navigate to the `~/my_static_site/` directory (if you're still in `public`):
```shell
cd ~/my_static_site
```
-
Create a `Caddyfile` with the following content:
```
# ~/my_static_site/Caddyfile
mysite.local {
    # Set the root directory to our 'public' subfolder
    root * public

    # Enable static file serving
    file_server

    # Enable nice, human-readable logs in the console
    log {
        output stdout
        format console
    }

    # For local development with non-public domains like .local,
    # Caddy won't attempt Automatic HTTPS by default.
    # It will serve over HTTP.
    # If you wanted to test HTTPS locally, you could use:
    # tls internal
    # This uses Caddy's self-signed local CA. Your browser will
    # show a warning for self-signed certs unless you trust Caddy's root CA.
    # For this workshop, HTTP is fine.
}
```
mysite.local
: This is our site address, matching what we put in thehosts
file.root * public
: Crucially, we set the document root to thepublic
subdirectory relative to where theCaddyfile
is. This means Caddy will look forindex.html
,css/style.css
, etc., inside~/my_static_site/public/
.file_server
: Enables serving.log
: Configured for console output for easy viewing.- Caddy is smart: for
*.local
,*.localhost
, or IP addresses as site labels, it typically disables automatic HTTPS via public CAs like Let's Encrypt by default, as these are not publicly resolvable. It will serve over HTTP unless you configuretls internal
or provide your own certs.
Step 4: Run Caddy and Test Your Site
- Ensure you are in the `~/my_static_site/` directory (where the `Caddyfile` is).
- Validate the Caddyfile:
```shell
caddy validate
```
-
Run Caddy:
```shell
caddy run
```
Caddy will start. Because `mysite.local` is not a public TLD and we haven't specified a port, Caddy would pick a high-numbered HTTP port only for a bare `localhost` address; for a named host like `mysite.local` (even if resolved locally), it will attempt to bind to the standard HTTP/HTTPS ports (80/443) if it has permissions.
- If you run `caddy run` as a regular user, it might fail to bind to port 80. You might see an error like "permission denied" for binding to port 80.
- To allow Caddy to use port 80 (and 443 for HTTPS later) without running Caddy as root (which is generally not recommended for the main process):
- On Linux, you can grant capabilities to the Caddy binary: `sudo setcap cap_net_bind_service=+ep $(which caddy)`
- Or, for this workshop, you can simply run it with `sudo`: `sudo caddy run`. Be mindful that running web servers as root has security implications for long-term use. For production, you'd use a service manager like systemd, which can handle privileged port binding more safely.
- Alternatively, you can specify a non-privileged port in your Caddyfile:
```
mysite.local:2017 {
    # Listen on port 2017
    root * public
    file_server
    log {
        output stdout
        format console
    }
}
```
If you do this, you'd access the site at `http://mysite.local:2017`. For this workshop, let's assume you can use `sudo caddy run` or have granted capabilities, so Caddy uses the default HTTP port 80.
-
Open your web browser and navigate to
http://mysite.local
(if Caddy is using port 80) orhttp://mysite.local:YOUR_CHOSEN_PORT
if you specified one.- You should see your "Welcome to My Awesome Static Site!" homepage.
- The CSS should be applied.
- The Caddy logo (if you added it) should appear.
- Click the "About Us" link. It should take you to
http://mysite.local/about.html
, and the about page should display correctly.
-
Check Caddy's terminal output. You should see access logs for your requests.
Step 5: (Optional) Testing tls internal
If you want to see Caddy serve this over HTTPS locally (you'll get a browser warning):
- Stop Caddy (
Ctrl+C
). - Modify your
Caddyfile
in~/my_static_site/
to includetls internal
: - Run Caddy again (with
sudo caddy run
if needed for port 443): Caddy will generate a self-signed certificate formysite.local
and its local CA, and install the CA into your system's trust stores if possible (this behavior varies and might require interaction). - Now, try accessing
https://mysite.local
in your browser.- You will likely see a browser warning page ("Your connection is not private," "Warning: Potential Security Risk Ahead," etc.) because the certificate is signed by Caddy's local CA, which your browser doesn't trust by default globally.
- You can usually click "Advanced" or "Proceed anyway" to view the site.
- You'll see the little padlock, but it might have a warning sign on it.
This demonstrates
tls internal
, which is very handy for developing applications that require HTTPS.
Step 6: Cleanup
- Stop Caddy (
Ctrl+C
- Important: Edit your `hosts` file again (with `sudo nano /etc/hosts` or as administrator in Notepad) and remove or comment out the line you added:
```
# 127.0.0.1    mysite.local
```
This is good practice to avoid future confusion if you forget about this local override.
- Save the
hosts
file.
This workshop showed you how to serve a complete static website using Caddy, including assets in subdirectories, and how to use the hosts
file to simulate custom domain names for local development. This is a powerful combination for testing web projects before deploying them publicly.
This completes the "Basic Caddy Usage" section. Next we will cover "Intermediate Caddy Usage".
2. Intermediate Caddy Usage
Having mastered the basics of Caddy for static file serving and simple reverse proxying, we now move into more sophisticated configurations. This section will cover techniques that allow for greater control, flexibility, and robustness in your self-hosted setups. We'll explore how Caddy handles multiple backend services, manages HTTPS in more depth, manipulates request paths and headers, and offers better logging and monitoring capabilities.
Advanced Reverse Proxy Techniques
The simple reverse_proxy <upstream>
directive is powerful, but Caddy offers many more features to handle complex proxying scenarios. These include distributing load across multiple backend instances, ensuring backend services are healthy before sending traffic, and modifying HTTP headers for better integration and information flow.
Load Balancing
Load balancing is the process of distributing incoming network traffic across multiple backend servers (also known as upstreams or instances). This is crucial for:
- High Availability: If one backend server fails, the load balancer can redirect traffic to the remaining healthy servers, minimizing downtime.
- Scalability: As traffic to your application grows, you can add more backend servers to handle the load, and the load balancer will distribute requests among them.
- Performance: By distributing the load, individual servers are less likely to become overwhelmed, leading to faster response times.
Caddy's reverse_proxy
directive natively supports load balancing. If you provide multiple upstream addresses, Caddy will distribute requests among them.
By default, Caddy uses a random load balancing policy. However, it supports several policies you can configure within a reverse_proxy
block:
lb_policy <policy_name>
:random
(default): Chooses an available upstream at random.round_robin
: Cycles through available upstreams sequentially.least_conn
: Chooses the upstream with the fewest active connections. This is often a good choice for long-lived connections.first
: Chooses the first available upstream in the order they are listed. Useful for primary/fallback setups.ip_hash
: Selects an upstream based on a hash of the client's IP address. This ensures that requests from the same client IP are consistently routed to the same backend server (sticky sessions based on IP).uri_hash
: Selects an upstream based on a hash of the request URI.header
: Selects an upstream based on the value of a request header (useful for session stickiness if a session cookie/header is set by the backend).
Example with a specific policy:
```
app.example.com {
    reverse_proxy {
        to app_server1:8001 app_server2:8002
        lb_policy round_robin
    }
}
```
Notice that the `reverse_proxy` directive can now have a block with sub-directives like `to` (for specifying upstreams) and `lb_policy`.
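To build intuition for how these policies choose an upstream, here is a minimal sketch of three of them in Python. This is illustrative only; Caddy's real implementations live in its reverse proxy module:

```python
import itertools
import zlib

class RoundRobin:
    """Cycle through upstreams in order, one per request."""
    def __init__(self, upstreams):
        self._cycle = itertools.cycle(upstreams)

    def pick(self):
        return next(self._cycle)

def least_conn(active_connections):
    """Pick the upstream with the fewest active connections.

    active_connections maps upstream address -> current connection count.
    """
    return min(active_connections, key=active_connections.get)

def ip_hash(client_ip, upstreams):
    """Consistently map a client IP to one upstream (sticky by IP).

    Uses CRC32 rather than hash() so the mapping is stable across runs.
    """
    return upstreams[zlib.crc32(client_ip.encode()) % len(upstreams)]
```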
Health Checks
When load balancing, it's vital to ensure that Caddy only sends traffic to healthy backend servers. If a backend server crashes or becomes unresponsive, Caddy should detect this and temporarily stop sending requests to it. This is achieved through health checks.
Caddy's reverse_proxy
can perform active health checks by periodically sending requests to each upstream.
app.example.com {
reverse_proxy {
to app_server1:8001 app_server2:8002
# Load balancing policy
lb_policy least_conn
# Active health checks configuration
health_uri /healthz # Path to request for health check
health_port 8001 # Port to use for health check (if different from service port)
health_interval 15s # How often to perform health checks
health_timeout 5s # How long to wait for a response
health_status 2xx # Expected HTTP status codes for a healthy response (e.g., 200-299)
health_body "OK" # Optional: substring to expect in the response body
health_headers { # Optional: headers to send with health check requests
Host "internal-health.example.com"
}
# Passive health checks (optional, but good)
# If an upstream fails a certain number of requests within a time window,
# it's marked as unhealthy.
fail_duration 30s # How long to consider a backend down after it fails
max_fails 3 # Number of failures within 'fail_duration' to mark as down
unhealthy_status 5xx # Consider these response codes as failures during normal proxying
}
}
- Active Health Checks:
health_uri
: Caddy will periodically request this path on each upstream. Your backend application should be configured to respond appropriately to this path (e.g., with a200 OK
if healthy).health_interval
: Frequency of checks.health_timeout
: Max time to wait for a health check response.health_status
: Defines what HTTP status code(s) are considered "healthy."health_body
: (Optional) A string that must be present in the health check response body.
- Passive Health Checks:
- Caddy also monitors regular proxied requests. If an upstream returns too many errors (like 5xx status codes) for actual user traffic, it can be marked as unhealthy (
unhealthy_status
,max_fails
,fail_duration
).
If an upstream fails health checks, Caddy will stop sending traffic to it until it becomes healthy again.
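The healthy/unhealthy decision itself is simple. This sketch mirrors the `health_status`/`health_body` semantics described above; the `passes_check` helper is illustrative, not Caddy's code:

```python
def passes_check(status_code, body, expect_status="2xx", expect_body=None):
    """Decide whether a health-check response counts as healthy."""
    if expect_status.endswith("xx"):
        # Class match, e.g. "2xx" accepts any 200-299 response.
        ok = status_code // 100 == int(expect_status[0])
    else:
        ok = status_code == int(expect_status)
    if ok and expect_body is not None:
        # Optionally require a substring in the body, like health_body "OK".
        ok = expect_body in body
    return ok
```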
Modifying Headers
When Caddy proxies a request, it acts as an intermediary. The backend application needs to know certain information about the original client request, such as the client's IP address or the original Host
header. Caddy (and other reverse proxies) typically add or modify headers to convey this information.
X-Forwarded-For
(XFF): Contains the IP address of the client that made the request to Caddy. If the request passed through multiple proxies, this header can be a comma-separated list of IPs, with the original client IP usually being the first.X-Forwarded-Proto
: Indicates the protocol (HTTP or HTTPS) that the client used to connect to Caddy. This is crucial if Caddy is terminating SSL, as the backend might receive plain HTTP but needs to know the original request was secure (e.g., for generating correct URLs).X-Forwarded-Host
: Contains the originalHost
header sent by the client.
Caddy manages these headers automatically and correctly by default through the header_up
and header_down
sub-directives within reverse_proxy
.
`header_up <field> <value>`: Sets or modifies headers sent to the upstream (backend).
- Caddy automatically sets `X-Forwarded-For`, `X-Forwarded-Proto`, and `X-Forwarded-Host` appropriately.
- You can add custom headers or modify existing ones:
```
reverse_proxy localhost:8000 {
    header_up Host {http.request.host}             # Send original Host header to backend
    header_up X-Real-IP {http.request.remote.host} # Another common way to send client IP
    header_up X-Custom-Header "MyValue"
    header_up -Some-Internal-Header                # Remove a header before sending to backend
}
```
`{http.request.host}` and `{http.request.remote.host}` are Caddy placeholders.
header_down <field> <value>
: Sets or modifies headers sent from the upstream back to the client.- Useful for removing internal headers or adding security headers.
WebSocket Proxying
WebSockets provide a persistent, bidirectional communication channel between a client (browser) and a server. Many modern web applications use WebSockets for real-time features like chat, notifications, or live updates.
Reverse proxying WebSockets requires special handling for the Upgrade
and Connection
headers that initiate the WebSocket handshake. Caddy handles WebSocket proxying automatically and transparently when using reverse_proxy
. No special configuration is usually needed beyond the standard reverse_proxy
directive. If the client sends the necessary WebSocket upgrade headers, Caddy will forward them, and the connection will be upgraded.
chatapp.example.com {
# This will correctly proxy both HTTP and WebSocket requests to the backend
reverse_proxy localhost:3000
}
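The handshake Caddy forwards is recognizable from two hop-by-hop headers. A sketch of the detection a proxy performs (illustrative, not Caddy's code):

```python
def is_websocket_upgrade(headers):
    """True when a request asks to upgrade the connection to a WebSocket."""
    connection = headers.get("Connection", "")
    upgrade = headers.get("Upgrade", "")
    # Connection may list several tokens, e.g. "keep-alive, Upgrade".
    return "upgrade" in connection.lower() and upgrade.lower() == "websocket"
```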
Workshop Implementing Load Balancing and Health Checks
In this workshop, we'll set up two instances of a simple backend service and use Caddy to load balance requests between them. We'll also configure health checks to ensure Caddy only routes traffic to healthy instances.
Prerequisites:
- Caddy installed.
- Python 3 installed.
- Two separate terminal windows/tabs, plus one for Caddy.
Step 1: Create Two Simple Backend Service Instances
We'll use Python's http.server
again, but this time we'll run two instances on different ports, each serving a slightly different page so we can see the load balancing in action.
-
Create a directory for this workshop:
```shell
mkdir -p ~/caddy_lb_workshop
cd ~/caddy_lb_workshop
```
-
Create files for backend instance 1. Make the `instance1` directory and put an `index.html` and a plain-text `healthz` file inside it:
```shell
mkdir instance1
echo "Hello from Backend Instance 1!" > instance1/index.html
echo "OK" > instance1/healthz
```
-
Create files for backend instance 2. Make the `instance2` directory and put its `index.html` and plain-text `healthz` file inside it:
```shell
mkdir instance2
echo "Greetings from Backend Instance 2!" > instance2/index.html
echo "OK" > instance2/healthz
```
-
In your first backend terminal: Navigate to `~/caddy_lb_workshop/instance1` and start the first Python server:
```shell
cd ~/caddy_lb_workshop/instance1
python3 -m http.server 8001
```
This server listens on port 8001. Verify it by going to `http://localhost:8001` (you'll see "Hello from Backend Instance 1!") and `http://localhost:8001/healthz` (you'll see "OK").
In your second backend terminal: Navigate to
This server listens on port 8002. Verify it by going to~/caddy_lb_workshop/instance2
and start the second Python server:http://localhost:8002
(you'll see "Greetings from Backend Instance 2!") andhttp://localhost:8002/healthz
(you'll see "OK").
You now have two backend services running.
Step 2: Create the Caddyfile for Load Balancing and Health Checks
Now, in your main terminal (for Caddy), navigate to ~/caddy_lb_workshop
and create a Caddyfile
:
# ~/caddy_lb_workshop/Caddyfile
localhost:8080 {
reverse_proxy {
# Define our two backend upstreams
to localhost:8001 localhost:8002
# Specify a load balancing policy (e.g., round_robin for easy observation)
lb_policy round_robin
# Configure active health checks
health_uri /healthz # The path Caddy will request
health_interval 5s # Check every 5 seconds
health_timeout 2s # Timeout for health check request
health_status 200 # Expect HTTP 200 for healthy (default is 2xx)
# health_body "OK" # Ensure the body contains "OK"
# Configure passive health checks (optional but good)
fail_duration 30s
max_fails 2
unhealthy_status 500 502 503 504 # Statuses considered failures
}
log {
output stdout
format console
level INFO # Or DEBUG for more verbose health check logging
}
}
to localhost:8001 localhost:8002
: Defines our two backend servers.lb_policy round_robin
: Will alternate requests between the two servers.health_uri /healthz
: Tells Caddy to check the/healthz
endpoint on each backend.health_interval 5s
: Caddy will poll/healthz
every 5 seconds.health_body "OK"
: (Commented out for now, but good to know) This would make Caddy also check that the response body from/healthz
contains the string "OK".- The passive health check settings mean if a backend returns 2 failures (e.g., HTTP 500 errors) for actual traffic within 30 seconds, Caddy will mark it as down for 30 seconds.
Step 3: Run Caddy and Test Load Balancing
-
In your Caddy terminal (in `~/caddy_lb_workshop`), validate and run Caddy:
```shell
caddy validate
caddy run
```
If you want to see more detailed logging about health checks, you can stop Caddy (`Ctrl+C`), edit the Caddyfile to set `level DEBUG` in the `log` block, and run `caddy run` again.
Open your browser and go to
http://localhost:8080
.- Refresh the page several times. You should see the content alternating between "Hello from Backend Instance 1!" and "Greetings from Backend Instance 2!". This demonstrates
round_robin
load balancing.
-
Look at the Caddy logs. You should see requests being distributed. If
level DEBUG
is on, you might also see logs related to health checks being performed. You'll also see logs in the respective Python server terminals.
Step 4: Test Health Checks
Let's simulate one backend instance failing.
-
Go to the terminal running the first backend instance (on port 8001) and stop it by pressing
Ctrl+C
. -
Wait for a few seconds (up to the
health_interval
+health_timeout
). Caddy's health checker should detect thatlocalhost:8001
is no longer responding. -
Now, go back to your browser and refresh
http://localhost:8080
multiple times.- You should only see "Greetings from Backend Instance 2!". Caddy has detected that instance 1 is down and is only sending traffic to the healthy instance 2.
-
Look at Caddy's logs (especially with `DEBUG` level). You should see messages indicating that upstream `localhost:8001` is unhealthy or failing health checks. For example:
```
DEBUG http.reverse_proxy.health_checker active health check failed {"upstream": "localhost:8001", "duration": "2.001s", "error": "dial tcp 127.0.0.1:8001: connect: connection refused"}
INFO http.reverse_proxy.health_checker upstream is unhealthy {"upstream": "localhost:8001", "duration": "5s", "error_count": 1} // or similar
```
-
Now, restart the first backend instance. Go to its terminal (in `~/caddy_lb_workshop/instance1`) and run:
```shell
python3 -m http.server 8001
```
-
Wait for a few seconds. Caddy's health checker will perform its next check, find that
localhost:8001
is responsive again, and mark it as healthy. -
Go back to your browser and refresh
http://localhost:8080
multiple times.- You should see the content alternating between instance 1 and instance 2 again. Caddy has automatically started sending traffic back to the recovered instance.
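How you restart it depends on how the instances were launched in Step 1 (not shown in this excerpt). Assuming each instance is a plain `python3 -m http.server` process serving an `index.html` from its own directory, restarting instance 1 would look like:

```shell
# Re-start backend instance 1 in the background so the terminal stays usable.
# Assumption: Step 1 used Python's built-in http.server with an index.html
# inside ~/caddy_lb_workshop/instance1.
mkdir -p ~/caddy_lb_workshop/instance1   # no-op if it already exists
cd ~/caddy_lb_workshop/instance1
python3 -m http.server 8001 &
```

Run it in the foreground (drop the `&`) if you prefer to watch its request log in that terminal, as in the original workshop setup.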
Step 5: (Optional) Testing Passive Health Checks
This is harder to simulate perfectly with Python's simple server, but imagine instance1
started returning HTTP 500 errors for every request. After max_fails
(2 in our config) such errors within fail_duration
(30s), Caddy would mark it as unhealthy due to passive health checking, even if its /healthz
endpoint was still (hypothetically) returning 200 OK.
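In Caddyfile terms, the passive options referenced in this step sit inside the `reverse_proxy` block. A sketch consistent with the numbers quoted above (the ports and the 30s / 2-failure values are taken from this workshop; the exact block isn't reproduced in this excerpt):

```
localhost:8080 {
    reverse_proxy localhost:8001 localhost:8002 {
        lb_policy round_robin
        # Passive health checking: remember failures for 30s,
        # and mark an upstream unhealthy after 2 failures
        fail_duration 30s
        max_fails 2
        # Count 5xx responses from the backend as failures
        unhealthy_status 5xx
    }
}
```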
Step 6: Cleanup
- Stop Caddy (
Ctrl+C
). - Stop both Python backend servers (
Ctrl+C
in their respective terminals).
This workshop demonstrated how to configure Caddy for load balancing across multiple backend instances and how its health checking mechanism can automatically route traffic away from failing services and back to them once they recover. These are essential features for building resilient self-hosted applications.
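For reference, the active-health-check portion exercised above boils down to something like the following sketch (the `/healthz` path, interval, and timeout values are assumptions based on the steps in this workshop):

```
localhost:8080 {
    reverse_proxy localhost:8001 localhost:8002 {
        lb_policy round_robin
        # Active health checks: probe each upstream periodically
        health_uri /healthz
        health_interval 5s
        health_timeout 2s
    }
}
```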
HTTPS and TLS Management in Detail
One of Caddy's most acclaimed features is its automated HTTPS management. While it often "just works" for public sites, understanding the underlying mechanisms, how to customize TLS settings, and how to use HTTPS in development environments is crucial for intermediate users.
Automatic HTTPS with Let's Encrypt (ACME Protocol)
When you define a site in your Caddyfile with a public domain name (e.g., myservice.example.com
), Caddy automatically enables HTTPS. It does this using the ACME (Automatic Certificate Management Environment) protocol, primarily with Let's Encrypt, a free, automated, and open Certificate Authority (CA).
How it Works (Simplified):
- Domain Qualification: Caddy first checks if the domain name in your site address appears to be public and not an internal/local name (like
localhost
or*.internal
- ACME Challenge: To prove control over the domain, Caddy needs to complete a challenge set by the ACME CA (Let's Encrypt). There are three main types:
- HTTP-01 Challenge: Caddy temporarily provisions an HTTP route on your server at
http://<your_domain>/.well-known/acme-challenge/<token>
. The CA then tries to fetch this token over HTTP from your domain (port 80). If successful, it proves you control the web server for that domain. This requires your server to be reachable from the public internet on port 80. - TLS-ALPN-01 Challenge: Caddy provisions a special self-signed TLS certificate that includes the challenge token. The CA connects to your server on port 443 and verifies this special certificate. This requires your server to be reachable from the public internet on port 443. Caddy prefers TLS-ALPN as it doesn't require port 80 to be open if you're already serving HTTPS.
- DNS-01 Challenge: You (or Caddy, via a plugin) create a specific DNS TXT record for your domain. The CA queries DNS for this record. This is the only method that supports wildcard certificates (e.g.,
*.example.com
) and is useful if your server isn't directly accessible on ports 80/443 (e.g., it's behind another firewall/NAT that you can't open ports on, but you can update DNS). Caddy needs a DNS provider plugin for this (e.g., for Cloudflare, GoDaddy, etc.).
- Certificate Issuance: If the challenge succeeds, Let's Encrypt issues an SSL/TLS certificate for your domain(s).
- Certificate Installation: Caddy installs this certificate and begins serving your site over HTTPS.
- Automatic Renewal: Let's Encrypt certificates are typically valid for 90 days. Caddy automatically renews them well before they expire (usually around 30 days before expiry), repeating the challenge process.
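As a sketch of the DNS-01 flow described above: with a Caddy build that includes a DNS provider plugin (Cloudflare is used here purely as an example), the `tls` directive delegates the challenge to your DNS provider, which also enables wildcard certificates:

```
*.example.com {
    tls {
        # Requires a Caddy build with the Cloudflare DNS plugin;
        # the API token is supplied via an environment variable
        dns cloudflare {env.CLOUDFLARE_API_TOKEN}
    }
    reverse_proxy localhost:8000
}
```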
Requirements for Automatic HTTPS:
- Publicly Resolvable Domain Name: The domain name must resolve via public DNS to the public IP address of your Caddy server.
- Server Reachability: Your Caddy server must be reachable from the public internet on port 80 (for HTTP-01) or port 443 (for TLS-ALPN-01). Firewalls and NAT/port forwarding must be configured correctly.
- No Conflicting Services: No other web server should be exclusively occupying port 80 or 443 on the same public IP.
- (Optional) Email Address: It's highly recommended to set an email address in the global options block of your Caddyfile. This email is used by Let's Encrypt for important notifications, such as if your certificate is about to expire and renewal is failing.
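The global options block sits at the very top of the Caddyfile; the address below is a placeholder:

```
{
    email admin@example.com
}
```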
Troubleshooting Automatic HTTPS:
- Check Caddy's logs carefully. They often contain detailed error messages from the ACME process.
- Ensure your DNS A/AAAA records are correct and have propagated.
- Verify port forwarding and firewall rules. Use online tools to check if ports 80/443 are open to your server from the internet.
- Check Let's Encrypt's rate limits if you've made many failed attempts.
Using Custom or Manually Obtained Certificates
Sometimes you might have certificates from another CA, or you need to use certificates obtained through a different process. Caddy allows you to specify your own certificate and private key files using the tls
directive.
myservice.example.com {
# ... other directives ...
tls /path/to/your/fullchain.pem /path/to/your/privkey.pem
}
* The first argument is the path to your certificate file (often `fullchain.pem` or `cert.pem`); it should contain the server certificate and any intermediate CA certificates.
* The second argument is the path to your private key file (often `privkey.pem` or `key.pem`).
* When you specify certificates this way, Caddy disables automatic HTTPS management for this site. You are responsible for renewing these certificates yourself.
`tls internal` for Development
For local development or internal-only services where you need HTTPS but don't have (or want) a public domain, `tls internal` is invaluable.
When `tls internal` is used:
- Caddy generates its own local Certificate Authority (CA).
- It uses this local CA to sign a certificate for
dev.app.local
. - On first run, Caddy may attempt to install its local CA root certificate into your system's trust store.
- On Linux/macOS, this often requires
sudo
privileges for the initialcaddy run
or forcaddy trust
command. - On Windows, it might pop up a security prompt.
- If the local CA is successfully trusted by your system, your browser will then trust the certificates Caddy issues for sites using
tls internal
, giving you a green padlock without warnings.
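As a concrete sketch, a minimal site block for the `dev.app.local` example mentioned above would be:

```
dev.app.local {
    tls internal
    respond "Hello over locally trusted HTTPS!"
}
```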
If Caddy cannot install its root CA (e.g., due to permissions or if you run Caddy in Docker without specific volume mounts for trust stores), your browser will show a warning for self-signed certificates. You'd have to manually add an exception in the browser or manually install Caddy's root CA certificate (usually found in Caddy's data directory, e.g., ~/.local/share/caddy/pki/authorities/local/root.crt
on Linux).
The caddy trust
command can be used to manage Caddy's local CA installation.
- `sudo caddy trust`: Installs the Caddy local CA (if it exists) into system trust stores.
- `sudo caddy untrust`: Uninstalls it.
Forcing HTTPS and HTTP Strict Transport Security (HSTS)
By default, if Caddy obtains an HTTPS certificate for a site, it will automatically redirect HTTP requests for that site to HTTPS.
HTTP Strict Transport Security (HSTS) is a security feature that tells browsers to only connect to your site using HTTPS, even if the user types http://
or follows an HTTP link. This mitigates SSL stripping attacks.
Caddy enables HSTS by default on sites for which it manages HTTPS certificates. You can customize HSTS and other security headers using the header
directive:
secure.example.com {
# Automatic HTTPS is on by default
# HSTS is on by default with a reasonable max-age
# To customize HSTS or add other security headers:
header {
# Enable HSTS with a longer max-age and include subdomains
Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
# Add other security headers
X-Content-Type-Options "nosniff"
X-Frame-Options "DENY"
Referrer-Policy "strict-origin-when-cross-origin"
# Content-Security-Policy "default-src 'self'; ..." # CSP is complex, configure carefully
}
reverse_proxy my_secure_app:8000
}
- `Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"`:
  - `max-age`: How long (in seconds) the browser should remember to only use HTTPS. 63072000 seconds is two years.
  - `includeSubDomains`: If present, HSTS applies to all subdomains of this domain as well.
  - `preload`: Indicates that you intend to submit your domain to browser HSTS preload lists. This is a stronger commitment; ensure your site and all subdomains are fully HTTPS-capable before using `preload`.
Workshop Securing a Local Development Site with tls internal
In this workshop, we will take a simple local site and secure it using Caddy's tls internal
feature. We'll observe the process, including trusting Caddy's local CA, to get a valid HTTPS connection in the browser without warnings.
Prerequisites:
- Caddy installed.
- Administrator/root (
sudo
) privileges to runcaddy trust
or potentially the initialcaddy run
for CA installation. - A text editor.
- A web browser.
- Hosts file modification capability (from previous workshop).
Step 1: Set up a Local Domain using the `hosts` File
We'll use dev.myapp.local
as our local development domain.
- Edit your system's
hosts
file (as administrator/root):- Linux/macOS:
/etc/hosts
- Windows:
C:\Windows\System32\drivers\etc\hosts
- Linux/macOS:
- Add a line mapping `dev.myapp.local` to `127.0.0.1`.
- Save the `hosts` file.
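For copy-paste convenience, the entry points this workshop's domain at the loopback address:

```
127.0.0.1    dev.myapp.local
```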
Step 2: Create a Simple Site and Caddyfile
- Create a directory for this workshop (we'll use `~/caddy_tls_internal_ws`).
- Create a simple `index.html` file in this directory.
- Create a `Caddyfile` in the `~/caddy_tls_internal_ws` directory.
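The file contents aren't reproduced in this excerpt, so here is one minimal way to do all of this step from a shell. The page heading matches what Step 4 expects to see; the `Caddyfile` contents are an assumption consistent with the rest of this workshop:

```shell
# Create the workshop directory and move into it
mkdir -p ~/caddy_tls_internal_ws
cd ~/caddy_tls_internal_ws

# A minimal page to serve
cat > index.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head><meta charset="UTF-8"><title>Secure Dev Site</title></head>
<body><h1>Hello from Secure dev.myapp.local!</h1></body>
</html>
EOF

# Serve the current directory over HTTPS using Caddy's local CA
cat > Caddyfile <<'EOF'
dev.myapp.local {
    root * .
    file_server
    tls internal
}
EOF
```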
Step 3: Trust Caddy's Local CA (if needed)
If this is your first time using tls internal
or if Caddy's local CA isn't trusted yet, your browser will show security warnings. To avoid this, we can try to have Caddy install its CA into your system's trust store.
-
Run `caddy trust` (recommended approach): open a terminal and run `sudo caddy trust`.
  - On Linux/macOS, this will prompt for your sudo password.
  - On Windows, you might need to run this from an Administrator command prompt/PowerShell. Caddy will attempt to install its root CA certificate; you should see a confirmation message if successful. The exact path will vary based on your OS.
-
Alternative: First run with
sudo caddy run
(less ideal for just trusting): Ifcaddy trust
has issues, sometimes the firstcaddy run
withsudo
for atls internal
site can trigger the CA installation. However,caddy trust
is more direct.
Step 4: Run Caddy and Test
-
In your terminal, within the `~/caddy_tls_internal_ws` directory, run Caddy. Since we are using the standard HTTPS port 443 (implied by `tls internal` for a named host), you might need `sudo` if you haven't granted Caddy the `cap_net_bind_service` capability:

```
# If 'caddy trust' was run, you might not need sudo here if ports are >1023,
# but for standard ports 80/443:
sudo caddy run

# Or, to use a non-privileged port:
# dev.myapp.local:8443 { ... tls internal ... }
# And then run `caddy run` (no sudo needed for ports > 1023)
```

Watch Caddy's startup logs. It should indicate that it's serving `https://dev.myapp.local`, and it will also perform an HTTP->HTTPS redirect by default:

```
{"level":"info","ts":...,"logger":"tls","msg":"stapling OCSP","certificates":["dev.myapp.local"]} // May see OCSP stapling messages
{"level":"info","ts":...,"logger":"http.log.access.log0","msg":"server running","name":"srv0","protocols":["h1","h2","h3"]}
{"level":"info","ts":...,"logger":"http","msg":"enabling HTTP ETag","server_name":"srv0"}
{"level":"info","ts":...,"logger":"http","msg":"enabling automatic HTTP->HTTPS redirects","server_name":"srv0"}
...
{"level":"info","ts":...,"msg":"Serving HTTPS on :443"}
{"level":"info","ts":...,"msg":"Serving HTTP on :80"}
```
-
Open your web browser and navigate to
https://dev.myapp.local
.
- If Caddy's local CA was successfully trusted by your system (via `caddy trust` or other means), you should see your page "Hello from Secure dev.myapp.local!" with a valid HTTPS padlock (e.g., green lock) in the browser's address bar, without any security warnings.
- Click the padlock icon and view the certificate details. You should see it's issued by "Caddy Local Authority - [Year] ECC Root" or similar.
-
Try navigating to
http://dev.myapp.local
(the HTTP version).- Caddy should automatically redirect you to
https://dev.myapp.local
.
- Caddy should automatically redirect you to
-
If you still get a browser warning:
- Ensure you ran
sudo caddy trust
correctly and it reported success. - Try closing and reopening your browser completely (not just the tab). Some browsers cache trust information.
- As a last resort for testing, you might need to manually import Caddy's root CA certificate (
root.crt
from the path shown bycaddy trust
or found in Caddy's data directory) into your browser's or system's certificate manager. The exact steps vary by browser and OS. - For example, on Linux, the path is often
~/.local/share/caddy/pki/authorities/local/root.crt
. You could import this into Firefox under Settings -> Privacy & Security -> Certificates -> View Certificates -> Authorities -> Import.
- Ensure you ran
Step 5: Inspect Caddy's Local CA (Optional)
If you're curious, you can find Caddy's generated local CA files.
The default path on Linux is ~/.local/share/caddy/pki/authorities/local/
. You'll find root.crt
(the public CA certificate) and root.key
(the private key for the CA - keep this safe if you care about the integrity of your local CA!).
Step 6: Cleanup
- Stop Caddy (
Ctrl+C
- Edit your `hosts` file and remove or comment out the `dev.myapp.local` line.
- (Optional) If you don't want Caddy's local CA trusted by your system anymore, run `sudo caddy untrust`.
This workshop demonstrated the convenience of tls internal
for setting up HTTPS in local development environments. By trusting Caddy's local CA, you can replicate a secure production environment more closely and avoid annoying browser warnings, making development smoother.
Path Manipulation and Rewrites
Often, the URL path requested by a client isn't exactly what your backend application or file structure expects. Caddy provides powerful directives to manipulate the URI path before it's handled by other directives like reverse_proxy
or file_server
. This includes stripping prefixes, rewriting paths, or conditionally routing requests based on paths.
Key Directives for Path Handling:
- `uri` Directive: This directive allows for various manipulations of the request URI. It's very flexible. Common sub-directives for `uri`:
  - `strip_prefix <prefix>`: Removes the given `<prefix>` from the beginning of the URI path.
  - `strip_suffix <suffix>`: Removes the given `<suffix>` from the end of the URI path.
  - `replace <find> <replace> [limit]`: Replaces occurrences of `<find>` with `<replace>` in the path. `limit` is optional.
  - `path <new_path>`: Replaces the entire path with `<new_path>`.
  - `query <key> <value>`: Adds or sets a query parameter; `query delete <key>` removes one.
- `handle_path <path_prefix> { ... }` Directive: We've seen this briefly. It's a specialized `handle` block that matches a request path prefix. Crucially, it strips that prefix from the request URI path before executing the directives within its block. This is extremely common for proxying applications that expect to be served from the root path (`/`) but that you want to expose under a subpath on your domain.

```
# Expose app1 (listening on port 9001, expects root path requests) at /app1/
# Expose app2 (listening on port 9002, expects root path requests) at /app2/
example.com {
    handle_path /app1/* {
        reverse_proxy localhost:9001
    }
    handle_path /app2/* {
        reverse_proxy localhost:9002
    }
}
```

  If a client requests `example.com/app1/some/page`:
  - `handle_path /app1/*` matches.
  - The prefix `/app1` is stripped; the URI path becomes `/some/page`.
  - `reverse_proxy localhost:9001` sends a request for `/some/page` to `localhost:9001`.
- `rewrite <to>` Directive: An internal rewrite changes the URI of the request before Caddy decides how to handle it further (e.g., which `file_server` or `reverse_proxy` to use). The client is unaware of this rewrite; their browser URL doesn't change. This is different from a `redir`, which sends a 3xx HTTP redirect response to the client.

```
# For Single Page Applications (SPAs)
# If a requested file isn't found, rewrite to /index.html
# so the SPA's router can handle it.
example.com {
    root * /srv/my-spa
    try_files {path} {path}/ /index.html  # This is often preferred for SPAs
    file_server
}

# Another rewrite example: clean URLs for a blog
# Request to /blog/my-post internally becomes /blog.php?slug=my-post
# (assuming a PHP backend that handles this)
@blogPost path_regexp ^/blog/([a-zA-Z0-9-]+)$
rewrite @blogPost /blog.php?slug={http.regexp.1}
# Then you'd typically reverse_proxy to your PHP-FPM or PHP server
# reverse_proxy ...
```

  `try_files` is often a more specialized and convenient way to achieve common rewrite patterns for file serving, especially for SPAs (as shown above, it tries the file as-is, then as a directory index, then falls back to `/index.html`). `rewrite` is more general-purpose. `{http.regexp.1}` is a placeholder for the first capture group from the `path_regexp` matcher.
- `route <matcher> { ... }` Directive: `route` blocks define a group of directives that are processed in the order they appear within the block if the matcher is satisfied. Unlike `handle` blocks, multiple `route` blocks can apply to a single request if their matchers are met (they are not mutually exclusive unless one fully handles the request and stops further processing). This allows for more complex, ordered processing pipelines.

```
example.com {
    # First, try to serve static assets from a specific path
    route /assets/* {
        root * /srv/static_assets
        file_server
        # If file_server handles it, request processing might stop here for this route
    }

    # Then, for all other requests, proxy to an application
    route {
        # No matcher means it applies if previous routes didn't fully handle the request
        reverse_proxy localhost:8000
    }
}
```

  The order of directives inside a `route` block is also significant; they are processed top-down.
Choosing the Right Directive:
- For simply exposing a root-expecting app under a subpath: `handle_path` is usually cleanest.
- For complex path transformations or adding query parameters: `uri` is powerful.
- For internal "pretty URL" to actual resource mapping, or SPA fallbacks: `rewrite` or `try_files`.
- For defining ordered processing pipelines or conditional middleware: `route`.
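`handle_path`, `rewrite`, and `route` all have examples above; for completeness, here is a small sketch of the `uri` directive (the hostname and port are placeholders):

```
api.example.com {
    # An incoming request for /api/v2/users becomes /v2/users
    # before it is proxied to the backend
    uri strip_prefix /api
    reverse_proxy localhost:9100
}
```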
Understanding these directives gives you fine-grained control over how URLs are interpreted and processed by Caddy.
Workshop Proxying an Application from a Subpath
Many self-hosted applications are designed to run as if they are at the root of a domain (e.g., they expect their assets at /css/style.css
, not /myapp/css/style.css
). In this workshop, we'll use Caddy's handle_path
to host such an application under a subpath (e.g., localhost:8080/myapp/
) without modifying the application itself.
We'll use a very simple "application": Python's HTTP server serving a site that has root-relative links.
Prerequisites:
- Caddy installed.
- Python 3 installed.
Step 1: Create the "Backend Application" with Root-Relative Links
-
Create a directory for the workshop and the app content, including a `css` subdirectory (e.g., `~/caddy_subpath_ws/app_content/css/`).
-
Inside
~/caddy_subpath_ws/app_content/
, createindex.html
:<!-- ~/caddy_subpath_ws/app_content/index.html --> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Subpath App</title> <!-- This link is root-relative --> <link rel="stylesheet" href="/css/style.css"> </head> <body> <h1>Welcome to the Subpath App!</h1> <p>This app thinks it's at the root.</p> <!-- This link is also root-relative --> <a href="/page2.html">Go to Page 2</a> <img src="/image.png" alt="Dummy Image (will be broken initially)"> </body> </html>
- Inside `~/caddy_subpath_ws/app_content/css/`, create `style.css`.
- Inside `~/caddy_subpath_ws/app_content/`, create `page2.html`.
- (Optional) Place a dummy `image.png` in `~/caddy_subpath_ws/app_content/`.
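The contents of `style.css` and `page2.html` aren't reproduced in this excerpt; the following sketch fills them in with illustrative contents (any stylesheet will do, and `page2.html` mirrors `index.html`'s root-relative linking):

```shell
# Create the workshop layout from Step 1
mkdir -p ~/caddy_subpath_ws/app_content/css

# An illustrative stylesheet
cat > ~/caddy_subpath_ws/app_content/css/style.css <<'EOF'
body { background-color: lavender; font-family: sans-serif; }
h1 { color: darkslateblue; }
EOF

# page2.html, using the same root-relative links as index.html
cat > ~/caddy_subpath_ws/app_content/page2.html <<'EOF'
<!DOCTYPE html>
<html lang="en">
<head><meta charset="UTF-8"><title>Page 2</title>
<link rel="stylesheet" href="/css/style.css"></head>
<body><h1>This is Page 2</h1>
<a href="/">Back to Home</a></body>
</html>
EOF
```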
Step 2: Run the Backend Application
- Open a new terminal and navigate to `~/caddy_subpath_ws/app_content/`.
- Start the Python HTTP server on port 9000 with `python3 -m http.server 9000`.
- Test it directly: open your browser to `http://localhost:9000`. The site should load, CSS should apply, and the link to Page 2 should work. This is because the app is at the root relative to `localhost:9000`.
Keep this Python server running.
Step 3: Create the Caddyfile
Now, back in your main Caddy terminal, in the ~/caddy_subpath_ws
directory (one level above app_content
), create a Caddyfile
:
# ~/caddy_subpath_ws/Caddyfile
localhost:8080 {
# We want to serve our app from /myapp/
# The 'handle_path' directive will match requests starting with /myapp/
# and strip that prefix before proxying.
handle_path /myapp/* {
reverse_proxy localhost:9000
}
# A simple root response for requests not to /myapp/
respond "This is the main site. App is at /myapp/"
log {
output stdout
format console
}
}
- `localhost:8080`: Caddy will listen on this address.
- `handle_path /myapp/*`: This is key.
  - It matches any request starting with `/myapp/` (e.g., `/myapp/index.html`, `/myapp/css/style.css`).
  - It then strips `/myapp` from the path.
  - So, a request to `localhost:8080/myapp/css/style.css` becomes a request for `/css/style.css` when it's proxied.
- `reverse_proxy localhost:9000`: Proxies the modified request to our Python app.
- `respond ...`: Answers requests to `localhost:8080/` that don't match `/myapp/*`.
Step 4: Run Caddy and Test
-
In your Caddy terminal (in `~/caddy_subpath_ws`), validate the config with `caddy validate`, then start it with `caddy run`.
-
Open your browser and navigate to
http://localhost:8080/myapp/
.
- You should see the "Welcome to the Subpath App!" page.
- Crucially, the CSS will not be applied. Let's trace why:
- Browser requests
http://localhost:8080/myapp/
. - Caddy's
handle_path /myapp/*
matches. Path becomes/
. - Caddy proxies
/
tolocalhost:9000
. Python servesindex.html
. - The
index.html
contains<link rel="stylesheet" href="/css/style.css">
. - The browser, seeing this on the page at
http://localhost:8080/myapp/
, resolves this root-relative link tohttp://localhost:8080/css/style.css
. This is the problem! The browser doesn't know the app is "based" at/myapp
.
Step 5: The Problem and A Common Solution (application base path awareness)
The CSS and other links (/page2.html
, /image.png
) are broken because the HTML served by the backend still uses root-relative paths (e.g., /css/style.css
). When the browser requests http://localhost:8080/css/style.css
, it doesn't match our /myapp/*
handler in Caddy, so it likely gets the "This is the main site..." response or a 404 if we didn't have that.
There are generally two ways to fix this:
-
Modify the Application (Best for many cases):
- Make the application aware of its base path. Many frameworks allow you to configure a "base URL" or "asset prefix". The application would then generate links like
/myapp/css/style.css
. - For our simple HTML, we would manually change the links (e.g., `href="/myapp/css/style.css"` instead of `href="/css/style.css"`).
- This is often the cleanest solution as the application correctly generates its own URLs.
- Make the application aware of its base path. Many frameworks allow you to configure a "base URL" or "asset prefix". The application would then generate links like
-
Rewrite Responses with Caddy (More Complex, Last Resort):
- Caddy can theoretically rewrite HTML responses to fix these paths using plugins or advanced templating, but it's significantly more complex and fragile than fixing it at the application source. For example, the
replace-response
plugin could do this. This is generally avoided if the application can be modified.
- Caddy can theoretically rewrite HTML responses to fix these paths using plugins or advanced templating, but it's significantly more complex and fragile than fixing it at the application source. For example, the
For this workshop, we can't easily "fix" the application: our Python server just serves static files, so it can't be made base-path aware without changing the files themselves. This highlights an important distinction: `handle_path` only modifies the incoming request path before it is proxied to the backend; it does not modify the response content coming back from the backend. The previous test showed `handle_path` correctly stripping the prefix; the breakage came entirely from the backend's HTML content, which `handle_path` doesn't touch.

The sweet spot for `handle_path /foo/* { reverse_proxy backend }` is when the backend expects requests at its root (`/`, `/page`, `/css/style.css`) and its responses contain only relative links (or links generated with awareness of a configured base path). Conversely, if the backend were already structured so that `localhost:9000/myapp/index.html` was its main page with links relative to it, no prefix stripping would be needed and `handle_path` would not be the right tool.

Let's revise the workshop with a backend whose HTML uses relative links, so it works cleanly behind `handle_path` without any response rewriting.
Revised Step 1: Create a Simpler Backend Application
- In `~/caddy_subpath_ws/app_content/`, create `index.html`:

```
<!-- ~/caddy_subpath_ws/app_content/index.html -->
<!DOCTYPE html><html lang="en"><head><title>Subpath App</title>
<style>body{background-color: lightgoldenrodyellow; font-family: monospace;} h1{color: darkgreen;}</style>
</head><body><h1>Backend App Served via /myapp/</h1>
<p>My actual path on backend is /index.html</p>
<p><a href="another.html">Link to another.html (relative)</a></p>
</body></html>
```

- In `~/caddy_subpath_ws/app_content/`, create `another.html`:

```
<!-- ~/caddy_subpath_ws/app_content/another.html -->
<!DOCTYPE html><html lang="en"><head><title>Another Page</title>
<style>body{background-color: lightcyan; font-family: monospace;} h1{color: steelblue;}</style>
</head><body><h1>Backend App - Another Page</h1>
<p>My actual path on backend is /another.html</p>
<p><a href="index.html">Link to index.html (relative)</a></p>
</body></html>
```

Step 2 (Backend) and Step 3 (Caddyfile) remain the same: the Python server runs in `app_content` on port 9000, and the Caddyfile in `caddy_subpath_ws` uses `handle_path /myapp/* { reverse_proxy localhost:9000 }`.
Revised Step 4: Run Caddy and Test
- Run Caddy:
caddy run
(in~/caddy_subpath_ws
). - Open
http://localhost:8080/myapp/
orhttp://localhost:8080/myapp/index.html
.- You should see "Backend App Served via /myapp/".
- Click the "Link to another.html (relative)". The URL should change to
http://localhost:8080/myapp/another.html
and the new page should load. - Click "Link to index.html (relative)". The URL should change back to
http://localhost:8080/myapp/index.html
and the first page should load.
Why this revised version works flawlessly with handle_path
:
- Client requests
http://localhost:8080/myapp/index.html
. handle_path /myapp/*
matches. Path sent to backend is/index.html
.- Backend serves
index.html
. index.html
contains<a href="another.html">
. This is a relative link.- The browser is currently at
http://localhost:8080/myapp/index.html
(orhttp://localhost:8080/myapp/
). It resolves the relative linkanother.html
by appending it to the current directory path:http://localhost:8080/myapp/another.html
. - This new request
http://localhost:8080/myapp/another.html
again goes to Caddy,handle_path
strips/myapp
, and/another.html
is sent to the backend.
Conclusion of Workshop:
handle_path
is excellent for mapping a URL subpath to a backend application that expects to be at the root, as long as the application itself generates links that are either relative or can be configured with a base path. If an application hardcodes root-relative links (e.g., /css/style.css
) in its HTML output without being aware of the proxy's subpath, handle_path
alone won't fix those broken links in the content. You'd then need to make the application "base path aware" or, in more complex scenarios, use response body rewriting (which is an advanced topic, often involving plugins).
Request and Response Header Manipulation
HTTP headers are key-value pairs exchanged between clients and servers. They convey crucial information like content type, caching policies, authentication tokens, client IP addresses, etc. Caddy allows for fine-grained manipulation of both request headers (sent from Caddy to the backend) and response headers (sent from Caddy to the client).
The header
Directive:
The primary directive for this is header
. Its behavior changes slightly based on whether it's used directly in a site block (affecting response headers) or within a reverse_proxy
block (affecting request headers to the upstream or response headers from the upstream).
1. Modifying Response Headers (Sent to Client):
When header
is used directly in a site block or within route
, handle
, etc. (but not inside reverse_proxy
's own block for header_up
/header_down
), it manipulates the final response headers sent to the client.
Syntax: `header [<matcher>] [[+|-]<field> [<value>]]`
- `header Field Value`: Sets header `Field` to `Value`, replacing any existing header with the same name.
- `header Field "Value with spaces"`: Use quotes for values with spaces.
- `header +Field Value`: Adds `Field: Value` as a new header. If `Field` already exists, this creates a second header with the same name (multi-value header).
- `header -Field`: Removes header `Field`.
- `header Field`: Removes header `Field` (same as `-Field`).

Examples:
- `header Cache-Control "no-cache, no-store, must-revalidate"`
- `header X-Powered-By "My Awesome App"` (overrides the backend's `X-Powered-By`)
- `header -Server` (removes the `Server` header, which often reveals Caddy's or the backend's name)
example.com {
# Add a custom header to all responses
header X-Served-By "Caddy Frontend"
# Remove the Server header
header -Server
# Set common security headers
header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
header X-Content-Type-Options "nosniff"
header X-Frame-Options "SAMEORIGIN" # Or DENY
header Referrer-Policy "strict-origin-when-cross-origin"
# header Content-Security-Policy "default-src 'self';" # Very powerful, configure carefully
# For assets, set a long cache time
@assets path /static/* /images/*
header @assets Cache-Control "public, max-age=31536000, immutable"
reverse_proxy localhost:8000
}
Placeholders work in header values too, e.g. `header X-Request-Path {http.request.uri.path}`.
2. Modifying Request Headers (Sent to Backend - via reverse_proxy
):
Inside a reverse_proxy
block, you use header_up
to modify headers sent to the backend.
Syntax: `header_up <field> <value_or_placeholder_or_remove>`
- `header_up Host {http.request.host}`: Passes the original client's Host header to the backend. (Caddy often does this by default, or sets it to the upstream's address if not specified.)
- `header_up X-Real-IP {http.request.remote.host}`: Sends the client's IP to the backend. (Caddy adds `X-Forwarded-For` by default, which includes this.)
- `header_up X-API-Key "mysecretkey"`: Adds an API key for the backend.
- `header_up -Cookie`: Removes the Cookie header before sending to a backend (e.g., if it's a public cache and shouldn't see user cookies).
- `header_up SomeHeader "{http.request.header.User-Agent} Modified"`
3. Modifying Response Headers (From the Backend, via `reverse_proxy`):

Inside a `reverse_proxy` block, you use `header_down` to modify headers received from the backend before they are sent to the client.

Syntax: `header_down <field> <value_or_placeholder_or_remove>`

-   `header_down -X-Powered-By`: Removes `X-Powered-By` if the backend sends it.
-   `header_down Access-Control-Allow-Origin "*"`: Adds a CORS header to the response from the backend.
-   `header_down Cache-Control "no-store"`: Overrides the backend's caching instructions.
Example combining `header_up` and `header_down`:

```
app.example.com {
    reverse_proxy localhost:5000 {
        # Headers sent TO the backend
        header_up X-Authenticated-User "{http.auth.user.id}" # If using Caddy auth
        header_up Connection "Upgrade" # For WebSockets etc. (often automatic)
        header_up Upgrade "websocket" # (often automatic)

        # Headers received FROM the backend, modified before sending to client
        header_down Server "MyApp" # Mask backend's Server header
        header_down -Set-Cookie "sensitive_cookie_name=.*" # Regex remove specific cookie
    }
}
```
The `header_down -Set-Cookie "..."` example uses a regular expression to remove a specific cookie. To use regexes for header values, you may need to ensure your Caddy build includes the necessary modules, or use specific syntax if available. Simple string removal is just `-HeaderName`.
Use Cases:
-   Security: Adding HSTS, CSP, X-Frame-Options, X-Content-Type-Options.
-   Caching: Setting `Cache-Control` and `Expires` headers.
-   CORS (Cross-Origin Resource Sharing): Adding `Access-Control-Allow-Origin` headers.
-   Debugging: Adding custom headers to track requests through systems.
-   Authentication: Passing user identity or API keys to backends.
-   Masking Information: Removing default server headers like `Server` or `X-Powered-By` from backends.
-   Client Information: Ensuring backends get a correct `X-Forwarded-For` and `X-Forwarded-Proto`.
Careful header management is essential for security, performance, and interoperability in web applications.
Workshop Implementing Security Headers and Custom Logging Headers
In this workshop, we'll configure Caddy to add common security headers to responses from a simple backend. We'll also add a custom header to requests sent to the backend and log this information.
Prerequisites:
- Caddy installed.
- Python 3 installed.
- `curl` utility (for inspecting headers easily).
Step 1: Create a Simple Backend Service
1.  Create a directory for the workshop:

    ```bash
    mkdir -p ~/caddy_headers_ws && cd ~/caddy_headers_ws
    ```

2.  Create a very simple Python Flask app that will display received headers. Name it `app.py`:

    ```python
    # ~/caddy_headers_ws/app.py
    import json

    from flask import Flask, request

    app = Flask(__name__)

    @app.route('/')
    def hello():
        # Prepare headers for display
        headers_dict = dict(request.headers)
        return f"""
        <h1>Hello from Backend!</h1>
        <p>My job is to show you the headers I received.</p>
        <h2>Request Headers Received by Backend:</h2>
        <pre>{json.dumps(headers_dict, indent=2)}</pre>
        """

    if __name__ == '__main__':
        app.run(host='0.0.0.0', port=5000)
    ```

    To run this, you'll need Flask: `pip install Flask`.
3.  Run the Flask backend: Open a new terminal, navigate to `~/caddy_headers_ws`, and run:

    ```bash
    python3 app.py
    ```

    It should say `* Running on http://0.0.0.0:5000/`. Test it by going to `http://localhost:5000` in your browser; you'll see the default headers your browser sends. Keep this Flask app running.
Step 2: Create the Caddyfile
In your main Caddy terminal, in `~/caddy_headers_ws`, create a `Caddyfile`:

```
# ~/caddy_headers_ws/Caddyfile
localhost:8080 {
    # Reverse proxy to our Flask app
    reverse_proxy localhost:5000 {
        # === Headers sent TO the backend (header_up) ===
        # Add a custom header identifying Caddy as the proxy
        header_up X-Proxied-By "Caddy Workshop Proxy"
        # Pass the original client's User-Agent
        header_up X-Original-User-Agent "{http.request.header.User-Agent}"
    }

    # === Headers added by Caddy to the RESPONSE sent to the client ===
    # Common security headers
    header Strict-Transport-Security "max-age=31536000;"
    header X-Content-Type-Options "nosniff"
    header X-Frame-Options "SAMEORIGIN"
    header Referrer-Policy "no-referrer-when-downgrade" # A common default

    # A custom response header
    header X-Workshop-Info "Caddy Headers Demo"

    # Remove the Server header. Flask's built-in server (Werkzeug) may set
    # its own Server header; this also removes Caddy's own Server header.
    header -Server

    log {
        output stdout
        format console
        # We'll customize this later to log header information
    }
}
```
Dissection:

-   `reverse_proxy localhost:5000`: Points to our Flask app.
-   `header_up X-Proxied-By ...`: Adds a custom header to the request Caddy sends to Flask.
-   `header_up X-Original-User-Agent ...`: Captures the client's User-Agent and sends it in a new header to Flask.
-   The `header` directives outside the `reverse_proxy` block modify the final response to the client: several standard security headers plus our custom `X-Workshop-Info`.
-   `header -Server`: Removes Caddy's own `Server: Caddy` header from the final response. If Flask/Werkzeug added one, `header_down -Server` within `reverse_proxy` would be more targeted for that.
Step 3: Run Caddy and Test with curl
1.  In your Caddy terminal (`~/caddy_headers_ws`), validate and run Caddy:

    ```bash
    caddy validate
    caddy run
    ```
2.  Now, use `curl -v` (verbose) to inspect headers. Open another terminal for this:

    ```bash
    curl -v http://localhost:8080/
    ```

    Look at the output.

    -   Response Headers (from Caddy to `curl`, lines starting with `<`): You should see:

        ```
        < HTTP/1.1 200 OK
        < Alt-Svc: h3=":8080"; ma=2592000
        < Content-Type: text/html; charset=utf-8
        < Date: ...
        < Referrer-Policy: no-referrer-when-downgrade
        < Strict-Transport-Security: max-age=31536000;
        < X-Content-Type-Options: nosniff
        < X-Frame-Options: SAMEORIGIN
        < X-Workshop-Info: Caddy Headers Demo
        < Content-Length: ...
        ... HTML content ...
        ```

        Notice the absence of a `Server` header, because we used `header -Server`. If you remove that line from the Caddyfile and restart Caddy, you'll see `Server: Caddy`.
    -   Request Headers (sent by Caddy to Flask): Open `http://localhost:8080/` in your web browser. The Flask page will display the headers it received. You should see among them:

        ```
        {
          "X-Proxied-By": "Caddy Workshop Proxy",
          "X-Original-User-Agent": "Mozilla/5.0 (Your Browser's User Agent String...)",
          "X-Forwarded-For": "127.0.0.1",
          "X-Forwarded-Proto": "http"
        }
        ```

        This confirms that the `header_up` directives worked.
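If you'd rather script the check than eyeball `curl`'s output, a short Python snippet can compare the response headers for you. This is an illustrative sketch: the URL and expected values are taken from this workshop's config, the function names are our own, and it requires the Caddy + Flask stack above to be running.

```python
# Sketch: comparing Caddy's response headers against expected values.
import urllib.request

EXPECTED = {
    "X-Workshop-Info": "Caddy Headers Demo",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "SAMEORIGIN",
}

def check_headers(headers, expected):
    # Return the expected headers whose actual values don't match.
    return {k: headers.get(k) for k, v in expected.items() if headers.get(k) != v}

def main():
    # Requires the Caddy + Flask stack from this workshop to be running.
    with urllib.request.urlopen("http://localhost:8080/") as resp:
        mismatches = check_headers(dict(resp.headers), EXPECTED)
    print("All headers OK" if not mismatches else f"Mismatches: {mismatches}")
```

Call `main()` while the stack is up; an empty mismatch dict means every header arrived as configured.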
Step 4: Customizing Log to Include a Specific Request Header
Let's say we want to log the value of `X-Proxied-By` that Caddy sends to the backend.

1.  Stop Caddy (`Ctrl+C`).
Modify the
log
block in yourCaddyfile
:Actually, logging headers sent to the upstream (# Caddyfile # ... (rest of the Caddyfile remains the same) ... log { output stdout # Using the default Caddy JSON log format to easily add fields # If using 'format console', adding custom fields is less direct # format json # Ensure JSON output for easy custom fields with 'include' # To log a header Caddy *sends to the upstream*, it's not directly # available as a response placeholder unless you capture it differently. # Let's log a header Caddy *receives from the client* instead, # or a header Caddy *adds to the response*. # Example: Log the 'User-Agent' Caddy received from the client # and the 'X-Workshop-Info' Caddy adds to the response. # The 'format' subdirective in 'log' has evolved. # For Caddy 2.7+, you might use a more structured approach: # encoder json { # include_keys http.request.header.User-Agent http.response.header.X-Workshop-Info # } # Simpler for older Caddy or general idea: # Caddy's default JSON log includes many common things. # For highly custom, you might use a template for the log line. # The 'console' format is less extensible for arbitrary fields. # Let's stick to 'console' for readability and accept its default fields for now. # For truly custom fields in console, one might need more advanced modules or templating. # The most straightforward way to log custom request elements # is often with the default JSON format, which is very comprehensive. # Let's switch to JSON to see more, including standard request headers. format json level INFO }
header_up
) in the main access log is tricky because the access log is typically for the client-to-Caddy leg. To achieve this, you might need more complex logging setups or a custom log format that pulls from Caddy's internal state if available as a placeholder.Let's simplify and log a header Caddy receives or adds to the response. Caddy's default JSON log format is quite rich. If we use
format json
(removeformat console
), it will log most request and response headers by default within nested objects.Let's adjust the log block to show the default JSON output, which is very informative:
3.  Save the Caddyfile. Stop Caddy if it's running, then restart it with `caddy run`.
4.  Make a request with `curl http://localhost:8080/` or your browser.
Examine Caddy's terminal output. You'll see a JSON log entry. Inside the
"request"
object, you'll find a"headers"
map which includes headers received from the client (likeUser-Agent
). Inside the"resp_headers"
object (if your Caddy version logs it like this, or similar for response headers), you'll see headers Caddy sent to the client (like ourX-Workshop-Info
).Example snippet of the JSON log:
This demonstrates that the default JSON log format is very powerful for seeing the full context, including many headers. For more specific formatting or extracting particular{ "level": "info", "ts": 1678886405.12345, // ... other common fields ... "request": { "remote_ip": "127.0.0.1", "remote_port": "54321", "proto": "HTTP/1.1", "method": "GET", "host": "localhost:8080", "uri": "/", "headers": { "User-Agent": ["curl/7.68.0"], // Header received from client "Accept": ["*/*"] // ... other client headers ... } // ... }, // ... "resp_headers": { // Headers Caddy sent to client "Content-Type": ["text/html; charset=utf-8"], "Referrer-Policy": ["no-referrer-when-downgrade"], "Strict-Transport-Security": ["max-age=31536000;"], "X-Content-Type-Options": ["nosniff"], "X-Frame-Options": ["SAMEORIGIN"], "X-Workshop-Info": ["Caddy Headers Demo"] // Our custom response header }, "status": 200 // ... }
header_up
values into the log, advanced templating or specific logging modules might be needed.
Step 5: Cleanup
- Stop Caddy (`Ctrl+C`).
- Stop the Flask backend (`Ctrl+C`).
- (Optional) Uninstall Flask: `pip uninstall Flask`.
This workshop showed how to use `header` (for responses) and `header_up` (for requests to backends) to add, modify, or remove HTTP headers. We also saw how to inspect these changes using `curl` and by having the backend display what it received. Effective header management is a key skill for intermediate Caddy users.
Understanding Matchers in Detail
Matchers are a fundamental concept in Caddy that allows you to apply directives conditionally based on various properties of an HTTP request. They provide the logic for routing, filtering, and selectively applying configurations. We've used simple matchers like path matchers (`/api/*`) already, but Caddy's matcher system is much richer.
What are Matchers?
A matcher defines a set of conditions. If an incoming request satisfies all conditions within a matcher set, the matcher "matches." Directives can then be associated with these named matchers or use inline matchers to control their execution.
Types of Matchers:
Caddy offers a variety of built-in matchers. Some of the most common and useful ones include:
-   `path <paths...>`:
    -   Matches if the request path is one of the given paths.
    -   Supports exact paths (`/foo/bar`) and prefix wildcards (`/api/*`, `/static/*`).
    -   `/` matches only the root path; `*` as a path matcher matches all paths.
    -   Example: `path /admin/* /config/*`

-   `host <hosts...>`:
    -   Matches if the request's `Host` header is one of the given hostnames.
    -   Supports exact hostnames (`example.com`) and wildcards (`*.example.com`, but not `example.*`).
    -   Example: `host api.example.com admin.example.com`

-   `method <methods...>`:
    -   Matches if the request's HTTP method is one of the given methods (e.g., `GET`, `POST`, `PUT`, `DELETE`). Case-insensitive.
    -   Example: `method POST PUT`

-   `header <field> [<value>]`:
    -   Matches if the request has the specified header field.
    -   If `<value>` is provided, it also checks the header's value. Value matching can be an exact string, a wildcard (`*`), or even a regular expression if Caddy is built with regex support for header matching (check the documentation for specific capabilities).
    -   Example: `header Content-Type application/json`
    -   Example: `header X-API-Key` (matches if the header exists, regardless of value)

-   `query <key>=<value>` or `query <key>`:
    -   Matches query parameters in the request URI.
    -   `<key>=<value>`: Matches if the query parameter `key` has the exact value `value`.
    -   `<key>`: Matches if the query parameter `key` exists, regardless of its value.
    -   Example: `query debug=true user_id=*` (matches if `debug` is `true` AND `user_id` exists)

-   `path_regexp [<name>] <pattern>`:
    -   Matches if the request path matches the given regular expression (Go syntax).
    -   If `<name>` is provided, capture groups from the regex can be accessed using placeholders like `{http.regexp.<name>.N}`, where N is the capture group index (e.g., `{http.regexp.myre.1}`).
    -   Example: `@userProfile path_regexp ^/users/([0-9]+)$`

-   `remote_ip [forwarded] <ranges...>`:
    -   Matches if the client's IP address falls within one of the specified CIDR ranges.
    -   The `forwarded` option tells Caddy to consider IPs from `X-Forwarded-For` (use with caution; ensure you trust the upstream proxies setting this header).
    -   Example: `remote_ip 192.168.1.0/24 10.0.0.0/8`
    -   Example: `@officeIP remote_ip forwarded 203.0.113.45/32`

-   `expression <cel_expression>`:
    -   A very powerful matcher that evaluates a Common Expression Language (CEL) expression. This allows complex logical combinations and access to a wide range of request properties.
    -   Example: `expression "request.method == 'POST' && request.path.startsWith('/api/')"`
    -   Example: `expression "has(request.headers['Authorization'])"`

-   `file { ... }`:
    -   Matches based on file existence or properties on the filesystem. Its block accepts `type <files|directories|not_exists>`, `paths <paths...>`, `try_files <files...>`, and `try_policy <first_exist|smallest_size|largest_size|most_recently_modified>`.
    -   `type files` checks that paths resolve to existing files; `type directories` checks for existing directories.
    -   `try_files` tries to find a file from a list, similar to the `try_files` directive but as a matcher.
    -   Example: `@staticFile file { try_files {path} {path}/index.html }`

-   `not <matcher_definition>`:
    -   Inverts the result of another single matcher definition.
    -   Example: `not path /admin/*` (matches if the path is NOT under `/admin/`)
Defining and Using Matchers:
There are two main ways to use matchers:
-   Named Matcher Sets: You define a named set of conditions using `@name` syntax. This name can then be referenced by directives. A named matcher set is true if all conditions within its block are true (logical AND). Named matchers are defined inside the site block that uses them:

    ```
    example.com {
        @myApiMatcher {
            path /api/v1/*
            method POST
            header Content-Type application/json
        }
        @assetsMatcher {
            path /static/* /images/*
        }

        # Apply directive only if @myApiMatcher matches
        handle @myApiMatcher {
            reverse_proxy localhost:8001
        }

        # Apply directive only if @assetsMatcher matches
        header @assetsMatcher Cache-Control "public, max-age=3600"

        # Fallback for other requests
        root * /var/www/html
        file_server
    }
    ```
-   Inline Matchers (Single Matcher Token): Many directives allow specifying a single matcher token directly before their arguments. This is shorthand for a simple, unnamed matcher.

    ```
    example.com {
        # 'path' matcher token before 'redir' directive
        redir /old-page /new-page 301

        # 'host' matcher token before 'reverse_proxy' directive
        # (This is less common; usually host matching is done by the site address)
        # reverse_proxy example.org localhost:9002

        # More common with directives like 'header' or 'log'
        header /confidential/* Cache-Control "no-store"
    }
    ```

    If no matcher is specified for a directive, it often defaults to matching all requests (`*`) within its current scope (e.g., its site block).
Matcher Set Logic (AND vs OR):
-   Within a single named matcher block (`@name { ... }`): conditions are implicitly ANDed. All must be true for the set `@name` to match.

-   Multiple named matcher sets on a directive: If a directive lists multiple named matcher sets, it usually means OR logic between the sets (though this can depend on the directive's specific design; consult its documentation). To achieve OR logic between different kinds of conditions, you generally define separate named matchers and use them in separate directive blocks, or use the `expression` matcher.

    ```
    # Example of OR-like behavior using multiple handle blocks
    @isImage {
        path *.jpg *.png *.gif
    }
    @isDocument {
        path *.pdf *.doc
    }

    handle @isImage {
        # Do image stuff
        header Cache-Control "public, max-age=86400"
        file_server
    }

    handle @isDocument {
        # Do document stuff
        header X-Content-Type-Options "nosniff" # For PDFs served by file_server
        file_server
    }
    ```

    `handle` blocks are mutually exclusive by default and evaluated in order.
any_of
andall_of
(withinexpression
or as separate matchers if available):
Some contexts or plugins might offer explicitany_of
(OR) orall_of
(AND) grouping for matcher conditions, but the standard way is implicit AND within a set, and structuring for OR. Theexpression
matcher is very flexible for this:
Order of Evaluation:
The order in which Caddy evaluates directives and their associated matchers is crucial. Caddy has a predefined directive order. `handle` blocks, for instance, are evaluated in the order they appear in the Caddyfile. Matchers help select which directives within that order are actually executed.
Mastering matchers is key to creating sophisticated and precise Caddy configurations. They enable fine-grained control over how different types of requests are processed.
Workshop Advanced Request Routing with Matchers
In this workshop, we'll create a Caddy configuration that routes requests to different backends or serves different content based on a combination of path, method, and header matchers.
Prerequisites:
- Caddy installed.
- Python 3 installed (for simple backend stubs).
- `curl` utility.
Scenario:
We want to set up `mymegacorp.corp.local` (using the hosts file) to do the following:
1.  Requests to `/api/v1/users` with the `POST` method go to a "User Service" backend.
2.  Requests to `/api/v1/products` with the `GET` method go to a "Product Service" backend.
3.  Requests to `/api/v1/*` with an `X-Internal-Auth: secret-token` header (regardless of method or specific subpath beyond `/api/v1/`) go to an "Internal Admin Service" backend. This should take precedence if the header is present.
4.  Requests to `/static/*` serve files from a specific static directory.
5.  All other requests get a generic "Welcome" page.
Step 1: Set up the `hosts` File

Edit your `hosts` file (as admin/root) and add:

```
127.0.0.1 mymegacorp.corp.local
```
Step 2: Create Backend Stubs and Static Content
1.  Create the workshop directory:

    ```bash
    mkdir -p ~/caddy_matcher_ws && cd ~/caddy_matcher_ws
    ```

2.  Backend Stubs (simple Python HTTP servers):

    `user_service.py` (listens on port 9001):

    ```python
    # ~/caddy_matcher_ws/user_service.py
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class handler(BaseHTTPRequestHandler):
        def do_POST(self):  # Only responds to POST
            self.send_response(200)
            self.send_header('Content-type', 'text/plain')
            self.end_headers()
            self.wfile.write(b"User Service: POST request processed.")

    with HTTPServer(('', 9001), handler) as server:
        print("User Service listening on port 9001...")
        server.serve_forever()
    ```

    `product_service.py` (listens on port 9002):

    ```python
    # ~/caddy_matcher_ws/product_service.py
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class handler(BaseHTTPRequestHandler):
        def do_GET(self):  # Only responds to GET
            self.send_response(200)
            self.send_header('Content-type', 'text/plain')
            self.end_headers()
            self.wfile.write(b"Product Service: GET request processed.")

    with HTTPServer(('', 9002), handler) as server:
        print("Product Service listening on port 9002...")
        server.serve_forever()
    ```

    `internal_admin_service.py` (listens on port 9003):

    ```python
    # ~/caddy_matcher_ws/internal_admin_service.py
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_admin_response()  # Responds to GET and POST alike

        def do_POST(self):
            self.send_admin_response()

        def send_admin_response(self):
            self.send_response(200)
            self.send_header('Content-type', 'text/plain')
            self.end_headers()
            self.wfile.write(b"Internal Admin Service: Request processed.")

    with HTTPServer(('', 9003), handler) as server:
        print("Internal Admin Service listening on port 9003...")
        server.serve_forever()
    ```
3.  Static Content: Create `~/caddy_matcher_ws/static_files/info.txt` containing:

    ```
    This is a static text file from /static_files/.
    ```

4.  Generic Welcome Page: Create `~/caddy_matcher_ws/welcome.html`, e.g.:

    ```html
    <h1>Welcome to MyMegaCorp!</h1>
    ```
Step 3: Run the Backend Services
Open three separate terminals. In each, navigate to `~/caddy_matcher_ws` and run one service:
- Terminal 1: `python3 user_service.py`
- Terminal 2: `python3 product_service.py`
- Terminal 3: `python3 internal_admin_service.py`
Ensure all three are running and listening on their respective ports.
Step 4: Create the Caddyfile
In `~/caddy_matcher_ws`, create a `Caddyfile`:

```
# ~/caddy_matcher_ws/Caddyfile
mymegacorp.corp.local {
    # Define named matchers for clarity
    @userService {
        path /api/v1/users
        method POST
    }
    @productService {
        path /api/v1/products
        method GET
    }
    @internalAdmin {
        # Matches any request under /api/v1/ that has the specific header
        path /api/v1/*
        header X-Internal-Auth "secret-token"
    }
    @staticContent {
        path /static/*
    }

    # Routing logic. Order of handle blocks matters for precedence.
    # The internal admin route should be checked first if path is /api/v1/*
    # because it's more specific due to the header requirement.
    handle @internalAdmin {
        reverse_proxy localhost:9003
    }
    handle @userService {
        reverse_proxy localhost:9001
    }
    handle @productService {
        reverse_proxy localhost:9002
    }
    handle @staticContent {
        # Strip /static so /static/info.txt maps to ./static_files/info.txt
        uri strip_prefix /static
        root * ./static_files # Serve from static_files subdir relative to Caddyfile
        file_server
    }

    # Fallback for anything not matched above.
    # file_server looks for index.html by default; since we have welcome.html
    # and want it for every unmatched path, rewrite to it explicitly.
    handle {
        rewrite * /welcome.html # Rewrite any non-matched request to /welcome.html
        root * . # Root for welcome.html is the current dir
        file_server
    }

    log {
        output stdout
        format console
    }

    # Use tls internal for local HTTPS
    tls internal
}
```
-   Named Matchers: We define clear matchers for each condition.
-   Order of `handle` blocks: The `@internalAdmin` handle block comes before `@userService` and `@productService`. Since `handle` blocks are mutually exclusive and checked in order, a request like `POST /api/v1/users` that carries the `X-Internal-Auth` header is caught by `@internalAdmin` first. If the header is missing, it won't match `@internalAdmin`, and `@userService` is checked next.
-   Static Content: `@staticContent` uses `root` to point to the `static_files` subdirectory.
-   Fallback: The final `handle` block (with no matcher, so it catches anything not handled before) rewrites the path to `/welcome.html` and serves it.
-   `tls internal`: For easy HTTPS on our `.local` domain.
Step 5: Run Caddy and Test
1.  In your Caddy terminal (`~/caddy_matcher_ws`):
    -   If you haven't already done so for `mymegacorp.corp.local` with `tls internal`, run `sudo caddy trust` once.
    -   Then run Caddy (with `sudo` if needed for ports 80/443):

        ```bash
        sudo caddy run
        ```
2.  Testing (use `curl` or a browser REST client like Postman/Insomnia):

    -   Test User Service (POST to `/api/v1/users`):

        ```bash
        curl -X POST https://mymegacorp.corp.local/api/v1/users -k
        # Output: User Service: POST request processed.
        ```

        (`-k` or `--insecure` tells `curl` to accept the self-signed cert from `tls internal` without prior trust setup for `curl` itself. Browsers will be fine if `caddy trust` worked.)
Test Product Service (GET to /api/v1/products):
-
Test Internal Admin Service (e.g., GET to /api/v1/status with header):
Also trycurl -H "X-Internal-Auth: secret-token" https://mymegacorp.corp.local/api/v1/status -k # Output: Internal Admin Service: Request processed.
POST /api/v1/users
with theX-Internal-Auth: secret-token
header. It should go to the Internal Admin Service due to the order ofhandle
blocks. -
    -   Test Static Content: Open `https://mymegacorp.corp.local/static/info.txt` in your browser. You should see: "This is a static text file from /static_files/."

    -   Test Fallback Welcome Page: Open `https://mymegacorp.corp.local/` or `https://mymegacorp.corp.local/some/other/path` in your browser. You should see the "Welcome to MyMegaCorp!" page.

    -   Test denied cases (wrong method/path for specific services):
        -   `GET /api/v1/users` (User Service only accepts POST): neither `@userService` (requires POST) nor `@internalAdmin` (requires the header) matches, so the request falls through to the fallback and returns the welcome page.
        -   `POST /api/v1/products` (Product Service only accepts GET): likewise falls through to the fallback welcome page.
Step 6: Cleanup
- Stop Caddy (`Ctrl+C`).
- Stop the three Python backend services (`Ctrl+C` in their terminals).
- Edit your `hosts` file and remove/comment out `mymegacorp.corp.local`.
- (Optional) Run `sudo caddy untrust` if you only trusted Caddy's CA for this workshop.
This workshop provided a practical example of how to use Caddy's matchers (`path`, `method`, `header`) in combination with ordered `handle` blocks to create a sophisticated request-routing configuration. The ability to precisely define how different requests are treated is essential for complex applications and microservice architectures.
This concludes the "Intermediate Caddy Usage" section. We've covered advanced reverse proxying, deeper HTTPS/TLS management, path manipulation, header manipulation, and a detailed look at matchers. You should now be well-equipped to handle a wide variety of self-hosting scenarios with Caddy.
Next, we'll cover "Advanced Caddy Usage."
3. Advanced Caddy Usage
This section targets users who are comfortable with intermediate Caddy concepts and are looking to leverage Caddy's full potential. We'll cover topics like Caddy's API for dynamic configuration, custom builds with plugins, advanced logging and metrics, and performance tuning strategies.
Caddy's Admin API and On-Demand TLS
One of Caddy's most powerful and distinguishing features is its fully programmable admin API. This API allows you to inspect, manage, and change Caddy's configuration dynamically, in real-time, without downtime or needing to restart the Caddy process. This is a significant departure from traditional web servers that rely solely on static configuration files.
The Admin API Endpoint:
- By default, Caddy's admin API listens on `localhost:2019`.
- This endpoint is not exposed externally by default, for security reasons; it's meant for local administration.
- You can change the admin endpoint address or disable it via the global options block in a Caddyfile, or through command-line flags when starting Caddy.
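For example, a global options block like the following (the port here is an arbitrary illustration) moves or disables the admin endpoint:

```
{
    admin localhost:2020  # serve the admin API on a different port
    # admin off           # or disable the admin API entirely
}
```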
Interacting with the API:
You interact with the admin API using HTTP requests (e.g., with `curl` or any HTTP client library). The API primarily uses JSON for request and response bodies.
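As a sketch of what these calls look like from code (assuming the default admin address; the helper names are our own), a small stdlib-only Python wrapper might be:

```python
# Illustrative helpers for Caddy's admin API (default endpoint assumed).
import json
import urllib.request

ADMIN = "http://localhost:2019"

def config_url(path=""):
    # Build a URL into the config tree, e.g. "apps/http/servers".
    return f"{ADMIN}/config/{path.lstrip('/')}"

def get_config(path=""):
    # GET /config/<path> returns that part of the active config as JSON.
    with urllib.request.urlopen(config_url(path)) as resp:
        return json.load(resp)

def load_config(new_config):
    # POST /load atomically replaces the entire configuration.
    req = urllib.request.Request(
        f"{ADMIN}/load",
        data=json.dumps(new_config).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

For example, `get_config("apps/http/servers")` fetches just the server definitions, and `load_config(cfg)` pushes back a modified copy.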
Common API Endpoints and Operations:
-   Getting the Current Configuration (`GET /config/`): Retrieves the currently active Caddy configuration in its native JSON format. Even if you use a Caddyfile, Caddy first converts it into this JSON structure internally, and understanding that structure is key to using the API effectively.

-   Loading a New Configuration (`POST /load`): Replaces the entire Caddy configuration with the one provided in the JSON request body. This is an atomic operation: if the new config is invalid, Caddy rejects it and keeps the old one running; if successful, Caddy applies the new configuration gracefully, without dropping active connections (for most changes).
-   Modifying Parts of the Configuration (path-based operations): You can target specific parts of the configuration tree using paths in the URL.

    -   `GET /config/<path>`: Get a specific part of the config, e.g. `curl http://localhost:2019/config/apps/http/servers/srv0/routes/0`
    -   `POST /config/<path>`: Add a new item to an array or map at `<path>`.
    -   `PUT /config/<path>`: Create or replace an item at `<path>`.
    -   `PATCH /config/<path>`: Partially update an item at `<path>` (using JSON Patch or JSON Merge Patch).
    -   `DELETE /config/<path>`: Delete the item at `<path>`.

    Example: adding a new route to the first HTTP server (`srv0` is often the default name):

    ```bash
    # new_route.json
    # {
    #   "match": [{"path": ["/new-api/*"]}],
    #   "handle": [{"handler": "reverse_proxy", "upstreams": [{"dial": "localhost:9005"}]}]
    # }
    curl -X POST -H "Content-Type: application/json" \
      -d @new_route.json \
      http://localhost:2019/config/apps/http/servers/srv0/routes/
    # Note: The exact path and structure depend on your existing config.
    # It's often easier to GET the config, modify it, then POST /load.
    ```
-   Other Endpoints:
    -   `/stop`: Gracefully stops the Caddy server (if enabled).
    -   `/adapt`: Adapts a Caddyfile to JSON without loading it:
        `curl -X POST -H "Content-Type: text/caddyfile" --data-binary @Caddyfile http://localhost:2019/adapt`
    -   `/pki/ca/<id>`: Inspect Caddy's local CAs.
    -   `/load`, `/config/`, etc. support a `?pretty` query parameter for formatted JSON output.
Caddy's JSON Configuration Structure:
The root of Caddy's JSON config can have keys like:
-   `admin`: Configures the admin API itself.
-   `logging`: Configures global logging.
-   `storage`: Configures how Caddy stores assets like certificates (e.g., `file_system`, or, with plugins, `redis` or `consul`).
-   `apps`: This is where most of the operational configuration lives. Common apps include:
    -   `http`: Configures HTTP/S servers.
        -   `servers`: A map of server definitions (e.g., `srv0`, `srv1`). Each server has:
            -   `listen`: Array of listener addresses (e.g., `":443"`).
            -   `routes`: An array of route objects. Each route has:
                -   `match`: An array of matcher sets (e.g., `[{"host": ["example.com"]}]`).
                -   `handle`: An array of handler objects (e.g., `[{"handler": "reverse_proxy", ...}]`).
                -   `terminal`: Boolean; if true, stops processing further routes in this server once this route matches and handles.
            -   `tls_connection_policies`: For customizing TLS handshakes.
            -   `automatic_https`: For disabling or configuring automatic HTTPS.
    -   `tls`: Manages TLS certificates and automation.
        -   `automation`: Configures ACME (Let's Encrypt) settings.
            -   `policies`: Defines rules for obtaining certificates (e.g., which domains, which CAs, challenge types).
            -   `on_demand`: Configures On-Demand TLS.
    -   `pki`: Manages Caddy's Public Key Infrastructure (local CAs).

Learning this JSON structure (best done by inspecting the output of `caddy adapt --config Caddyfile --pretty` or of `GET /config/`) is essential for advanced API usage.
On-Demand TLS:
This is a groundbreaking Caddy feature, particularly useful for SaaS platforms or services hosting many custom domains for users, where you don't know all the hostnames in advance.
With On-Demand TLS:
- Caddy does not obtain a certificate for a domain until the first TLS handshake for that domain is received.
- When a client attempts to connect to a hostname Caddy is configured to serve on-demand, Caddy briefly pauses the handshake.
- It then performs an ACME challenge for that hostname in real-time.
- If successful, it obtains and caches the certificate, then resumes the TLS handshake with the client using the new certificate.
- Subsequent requests for that hostname use the cached certificate. Caddy also manages its renewal.
Configuration:
On-Demand TLS is configured in the `tls` app's `automation` section of the JSON config, or via the `on_demand_tls` global option in the Caddyfile.
Caddyfile (simpler approach for basic on-demand):
```
# Global options
{
    on_demand_tls {
        ask <url_to_ask_endpoint>
        # interval 2m
        # burst 5
    }
    email your-email@example.com
}

# A site block that will use on-demand TLS for any hostname it matches
# (e.g., if this Caddy instance handles *.user-sites.com)
*:443 { # Or a specific domain like *.your-saas.com
    # Your reverse_proxy or other handling logic
    reverse_proxy my_application_router:8000

    tls {
        on_demand # Enables on-demand for this site block explicitly
    }
}
```
- `ask <url_to_ask_endpoint>`: Crucial for security. Before Caddy attempts to get a certificate for an unknown domain, it makes a GET request to this URL. Your endpoint at `<url_to_ask_endpoint>?domain=<hostname>` must return a `200 OK` status if Caddy is allowed to obtain a certificate for `<hostname>`, or any other status (e.g., 4xx) to deny. This prevents abuse of your Let's Encrypt rate limits.
- `interval` and `burst`: Optional rate limiting for how often Caddy will try to issue on-demand certificates.

The JSON configuration for `on_demand` offers more granularity within `apps.tls.automation.policies`.
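To make the `ask` check concrete, here is a minimal sketch of such an endpoint using Python's standard library. The port and allowlist contents are illustrative assumptions, not part of Caddy; in a real SaaS the lookup would hit your customer database.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical allowlist; a real deployment would query a database instead
ALLOWED_DOMAINS = {"site1.user-sites.com", "site2.user-sites.com"}

def is_allowed(domain: str) -> bool:
    """Decide whether Caddy may obtain a certificate for this hostname."""
    return domain in ALLOWED_DOMAINS

class AskHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Caddy calls e.g. GET /ask?domain=<hostname> before issuing a cert
        query = parse_qs(urlparse(self.path).query)
        domain = query.get("domain", [""])[0]
        # 200 permits issuance; any other status denies it
        self.send_response(200 if is_allowed(domain) else 403)
        self.end_headers()

def serve(port: int = 9123):
    """Run the ask endpoint (blocking call)."""
    HTTPServer(("", port), AskHandler).serve_forever()

# To run it: serve()
```

You would then point the global option at it, e.g. `ask http://localhost:9123/ask` (the path itself is arbitrary here, since this handler only reads the `domain` query parameter).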
Use Cases for Admin API & On-Demand TLS:
- SaaS Platforms: Dynamically add/remove customer domains and SSL certificates without restarting your web server fleet.
- Container Orchestration (Kubernetes, Docker Swarm): An ingress controller or sidecar can use Caddy's API to update its configuration as services are deployed or scaled. Caddy's official Docker image has a `caddy-api` variant.
- Dynamic Backends: Change reverse proxy upstreams based on external service discovery.
- CI/CD Pipelines: Automate deployment of new sites or configuration changes.
- Zero-Downtime Config Reloads: Config changes are applied atomically and don't interrupt service.
The Admin API transforms Caddy from just a web server into a programmable, adaptable part of your infrastructure. On-Demand TLS solves a major challenge for hosting numerous custom domains securely.
Workshop Dynamic Site Provisioning with Caddy's API
In this workshop, we'll simulate a scenario where new websites need to be dynamically added to a running Caddy instance using its API. We won't use On-Demand TLS here, but rather directly load new site configurations.
Prerequisites:
- Caddy installed and running (we'll start it).
- `curl` and `jq` (optional, for pretty-printing JSON) installed.
- A text editor.
- Python 3 (for a very simple backend).
Step 1: Create a Simple Backend Service
We need a backend that our dynamically added sites can proxy to.
In a new directory, e.g., `~/caddy_api_ws`, create `backend_app.py`:
```python
# ~/caddy_api_ws/backend_app.py
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        # Use the 'Host' header to know which site is being requested
        site_name = self.headers.get('Host', 'Unknown Site')
        # An optional 'message' query parameter customizes the output
        qs = parse_qs(urlparse(self.path).query)
        message = qs.get('message', ['Default Message'])[0]
        self.wfile.write(f"<html><body><h1>Hello from {site_name}!</h1>".encode())
        self.wfile.write(f"<p>Your message: {message}</p></body></html>".encode())

if __name__ == '__main__':
    server_address = ('', 8000)  # Listen on port 8000
    httpd = HTTPServer(server_address, Handler)
    print('Backend app running on port 8000...')
    httpd.serve_forever()
```

(Query parsing uses `urllib.parse` rather than the deprecated `cgi` module, which was removed in recent Python versions.)
Step 2: Start Caddy with a Minimal Initial Configuration
We'll start Caddy with a Caddyfile that essentially just enables the admin API and maybe a placeholder site, or even an empty HTTP app config. For this workshop, let's start Caddy without a Caddyfile, relying on its default empty config which has the admin API enabled.
- Open a new terminal for Caddy. Ensure you are NOT in a directory with an existing `Caddyfile` that Caddy might load automatically.
- Run Caddy with `caddy run`. Caddy will start with a default configuration; its admin API will be available at `localhost:2019`. It won't be serving any actual sites yet over HTTP/S.
Step 3: Prepare JSON Configuration Snippets for New Sites
We will create JSON files representing the configuration for each new site we want to add. Caddy's configuration is a single JSON object; we'll be modifying the `apps.http.servers.srv0.routes` array (assuming `srv0` is the default server Caddy might create, or that we'll create if it doesn't exist).
First, let's get the current (empty-ish) config to see its structure with `curl http://localhost:2019/config/ | jq`. You might see something like:

```json
{
  "admin": {
    "disabled": false,
    "listen": "localhost:2019",
    "enforce_origin": false,
    "origins": [
      "localhost:2019",
      "[::1]:2019",
      "127.0.0.1:2019"
    ]
  },
  "logging": { /* ... */ }
}
```
Notice there is no `apps` or `http` server configured yet; we need to add that structure.
Let's create a JSON file for our first dynamic site, `site1.local` (we'll use the `hosts` file to resolve these names). Create `~/caddy_api_ws/site1_config.json`:
```json
{
  "match": [{"host": ["site1.local"]}],
  "handle": [
    {
      "handler": "reverse_proxy",
      "upstreams": [{"dial": "localhost:8000"}]
    },
    {
      "handler": "headers",
      "response": {
        "set": {"X-Site-Served": ["Site1 via API"]}
      }
    }
  ],
  "terminal": true
}
```
Then create `~/caddy_api_ws/site2_config.json` with the same structure, changing the host matcher to `site2.local` and the header value to `Site2 via API`.
Step 4: Add Sites Dynamically using curl
We will use `curl` to `POST` these route configurations to Caddy. Since Caddy has no HTTP server defined yet, we first need to create one, then add routes to it.
-   Create the initial HTTP app server structure in `~/caddy_api_ws/initial_server_config.json`:

    ```json
    {
      "apps": {
        "http": {
          "servers": {
            "srv0": {
              "listen": [":80", ":443"],
              "routes": [],
              "automatic_https": {
                "disable": false,
                "skip_on_demand": true
              }
            }
          }
        },
        "tls": {
          "certificates": {},
          "automation": {
            "policies": [
              {
                "issuers": [{"module": "internal"}]
              }
            ]
          }
        }
      }
    }
    ```

    We listen on the standard ports with an empty `routes` array; the `tls` automation policy defaults to Caddy's internal issuer so the `.local` sites can get locally-trusted certificates, and `skip_on_demand` avoids on-demand issuance for these names. (Comments are omitted because strict JSON does not allow them.)

    Load this initial structure:

    ```bash
    curl -X POST -H "Content-Type: application/json" \
      -d @initial_server_config.json \
      http://localhost:2019/load
    ```

    If successful, Caddy will log that it's "serving configuration." Check the current config with `curl http://localhost:2019/config/apps/http/servers/srv0/ | jq`; it should show `srv0` with an empty `routes` array.
Add
site1.local
: The path to add a route tosrv0
is/config/apps/http/servers/srv0/routes/
. Note the trailing slash forPOST
to an array to append.You might get an ID back for the new route, e.g.,curl -X POST -H "Content-Type: application/json" \ -d @site1_config.json \ http://localhost:2019/config/apps/http/servers/srv0/routes/
{"id":"your_route_id"}
. -
Add
site2.local
: -
Verify the full configuration:
You should seesite1_config.json
andsite2_config.json
contents within theroutes
array ofsrv0
.
Step 5: Update the `hosts` File
Add `127.0.0.1 site1.local site2.local` to your `/etc/hosts` (or equivalent).
Step 6: Test the Dynamically Added Sites
Caddy should now be serving these sites. Since they are `.local` domains and we set up a basic TLS app config that defaults to internal certificates, Caddy will try to issue internal certs for them. You may need to have run `sudo caddy trust` earlier for a seamless browser experience.
-   Test `site1.local`: Open your browser to `https://site1.local/?message=DynamicSite1`. You should see "Hello from site1.local!" and "Your message: DynamicSite1". Check the response headers (browser dev tools) for `X-Site-Served: Site1 via API`.
-   Test `site2.local`: Open `https://site2.local/?message=DynamicSite2`. You should see "Hello from site2.local!" and "Your message: DynamicSite2". Check the response headers for `X-Site-Served: Site2 via API`.
Step 7: Removing a Site (Example)
To remove a site, you need its path in the config array. Routes are usually appended at the end; if `site1.local` was the first one added, it's at index `0`. You can find its exact path by inspecting `GET /config/apps/http/servers/srv0/routes/`.

Assuming the `site1.local` route is at index 0 of the `routes` array, delete it with `curl -X DELETE http://localhost:2019/config/apps/http/servers/srv0/routes/0`. Afterwards, `https://site1.local/` should fail or give a different result (e.g., Caddy's default "not found" response if no other routes match; it won't fall through to `site2.local`, since our matchers are host-specific). If you then request `https://site2.local/`, it should still work.
Important Considerations for API Use:
- Complexity: Managing deeply nested JSON by hand with `curl` is error-prone for anything non-trivial. A common workflow is to:
  1. `GET` the entire config.
  2. Modify the JSON programmatically (e.g., in Python, Go, Node.js).
  3. `POST` the entire modified config back to `/load`. This is often safer and easier.
- Idempotency: Design your API interactions to be idempotent where possible (applying the same operation multiple times has the same effect as applying it once).
- Security of the Admin API: If Caddy is on a shared machine, protect the admin API. Use a Unix socket if possible (e.g., the global option `admin unix//var/run/caddy.sock`), or configure it to listen on a specific, firewalled internal IP. You can also set `enforce_origin: true` and configure `origins` to restrict which hosts can access the API. For remote access (not generally recommended without a secure tunnel), you'd need to put another reverse proxy in front of the admin API to add authentication and TLS.
- Configuration Persistence: Changes made via the API are typically persisted by Caddy (e.g., in `autosave.json`). If you restart Caddy, it will try to load this last known good config. If you also have a `Caddyfile`, ensure your workflow for managing config (API vs. Caddyfile) is clear, to avoid conflicts. For API-driven Caddy, you often start Caddy with no Caddyfile, or a minimal one.
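The GET-modify-POST workflow from the first bullet can be sketched in Python with only the standard library. The admin address, the `srv0` server name, and the `site3.local`/`localhost:8000` values are assumptions matching this workshop, not fixed Caddy names:

```python
import json
import urllib.request

ADMIN = "http://localhost:2019"  # assumed admin API address

def add_route(cfg, host, dial):
    """Append a host-matched reverse_proxy route to srv0 in a config dict."""
    routes = (cfg.setdefault("apps", {})
                 .setdefault("http", {})
                 .setdefault("servers", {})
                 .setdefault("srv0", {})
                 .setdefault("routes", []))
    routes.append({
        "match": [{"host": [host]}],
        "handle": [{"handler": "reverse_proxy",
                    "upstreams": [{"dial": dial}]}],
        "terminal": True,
    })
    return cfg

def sync_new_site(host, dial):
    """GET the whole config, modify it locally, POST it back via /load."""
    with urllib.request.urlopen(f"{ADMIN}/config/") as resp:
        cfg = json.load(resp) or {}  # a fresh Caddy may return null
    req = urllib.request.Request(
        f"{ADMIN}/load",
        data=json.dumps(add_route(cfg, host, dial)).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on a non-2xx response

# Example (requires a running Caddy with the admin API on localhost:2019):
# sync_new_site("site3.local", "localhost:8000")
```

Because `/load` replaces the whole configuration atomically, this pattern avoids the fiddly index arithmetic of editing nested paths one `curl` at a time.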
Step 8: Cleanup
- Stop Caddy (`Ctrl+C`). Caddy will save its current API-driven configuration.
- Stop the `backend_app.py` (`Ctrl+C`).
- Remove the entries from your `hosts` file.
- (Optional) Delete Caddy's autosaved config if you want it to start fresh next time (path shown in Caddy's startup logs, e.g., `~/.config/caddy/autosave.json`).
This workshop demonstrated the basics of using Caddy's admin API to dynamically alter its configuration without restarts. This is a very powerful feature for automation and dynamic environments.
Building Caddy with Plugins
Caddy's core is deliberately kept lean and focused. Its functionality can be extended through plugins. Caddy v2 has a robust plugin system where plugins are Go modules that register themselves with Caddy. To use a plugin that isn't part of the standard Caddy distribution, you need to compile a custom Caddy binary that includes that plugin.
Why Use Plugins?
- DNS Providers for ACME: The most common reason. If you need the DNS-01 challenge for Let's Encrypt (e.g., for wildcard certificates, or when ports 80/443 aren't open) and your DNS provider's module isn't included in your build, you'll need a plugin for that provider (e.g., `caddy-dns/gandi`, `caddy-dns/route53`, `caddy-dns/godaddy`).
- Custom Authentication Methods: Plugins for OAuth, OpenID Connect, SAML, LDAP, etc. (e.g., `caddy-security`).
- Advanced Caching Strategies: Beyond simple header-based caching.
- Specialized Directives: For unique proxying needs or content transformations (e.g., `caddy-exec` for running external commands).
- Custom Log Encoders/Writers: To send logs to specific systems or in custom formats.
- Metrics Collectors: For exposing metrics to systems like Prometheus in different ways.
- Custom Storage Backends: For storing certificates and other assets in Redis, Consul, S3, etc.
Finding Plugins:
The official Caddy website has a download page (https://caddyserver.com/download) that allows you to select plugins and get a `curl` command or download a pre-built binary with them. You can also browse plugins listed on the Caddy community forums or GitHub.
Building with `xcaddy`:
The easiest and recommended way to build Caddy with custom plugins is `xcaddy`, a command-line tool that automates compiling Caddy with specified plugins.
-   Install Go: You need a recent version of the Go programming language installed on your system (https://golang.org/doc/install).
-   Install `xcaddy` with `go install github.com/caddyserver/xcaddy/cmd/xcaddy@latest`, then ensure your Go binary directory (e.g., `~/go/bin`) is in your system's `PATH`.
Using
xcaddy
to Build Caddy:
The basic syntax isxcaddy build [<caddy_version>] --with <plugin_module_path>[@<version>] ...
[<caddy_version>]
: Optional. Defaults to the latest Caddy release. You can specify a version likev2.7.6
.--with <plugin_module_path>[@<version>]
: Specifies a plugin to include.<plugin_module_path>
is the Go module path of the plugin (e.g.,github.com/caddy-dns/gandi
).[<version>]
is optional, to pin to a specific plugin version.
    Example: building Caddy with the Gandi DNS plugin: `xcaddy build --with github.com/caddy-dns/gandi`
    This command will:
    - Download the Caddy source code (if not already cached).
    - Download the Gandi DNS plugin source code.
    - Compile Caddy, linking in the Gandi plugin.
    - Produce a `caddy` binary in your current directory.

    Example: building a specific Caddy version with multiple plugins: `xcaddy build v2.7.6 --with github.com/caddy-dns/gandi --with github.com/caddy-dns/route53`
-   Replacing Your Existing Caddy Binary:
    Once `xcaddy` produces the new `caddy` binary, replace your system's existing Caddy binary with it:
    - Find where your current Caddy is: `which caddy`
    - Stop Caddy if it's running as a service: `sudo systemctl stop caddy`
    - Back up the old binary: `sudo mv $(which caddy) $(which caddy)_old`
    - Move the new binary into place: `sudo mv ./caddy /usr/local/bin/caddy` (or wherever the original was)
    - Ensure it's executable: `sudo chmod +x /usr/local/bin/caddy`
    - Verify the build: `caddy version`, and confirm the non-standard modules are included with `caddy list-modules`.
    - Start the Caddy service: `sudo systemctl start caddy`
Using Plugins in Your Caddyfile:
Once Caddy is built with the necessary plugins, you can use their specific directives or configuration options in your Caddyfile or JSON config. Refer to the plugin's documentation for how to configure it.
Example for a DNS plugin (conceptual, for Gandi):

```
# Global options
{
    acme_dns gandi {env.GANDI_API_KEY}
    # Or for some plugins:
    # acme_dns gandi your_gandi_api_key
}

*.example.com {
    # ... your site config ...
    # Caddy will now use the Gandi DNS plugin for wildcard certs
    # for example.com due to the global acme_dns setting.
    reverse_proxy myapp:8000
}
```
Considerations for Custom Builds:
- Maintenance: You are responsible for rebuilding your custom Caddy binary whenever you want to update Caddy core or any of the included plugins.
- Reproducibility: Keep track of the exact Caddy and plugin versions used for a build in case you need to recreate it; `xcaddy` can help if you use Go workspaces or pin specific versions.
- Plugin Compatibility: Ensure the plugins you choose are compatible with the Caddy version you are building.
- Security: Only use plugins from trusted sources, and review a plugin's code if you have security concerns.

Building Caddy with plugins unlocks a vast range of extended functionality, letting you tailor Caddy precisely to your infrastructure's needs; `xcaddy` makes the process manageable.
Advanced Logging and Metrics
For production systems, robust logging and monitoring are essential for understanding traffic patterns, diagnosing issues, and ensuring system health. Caddy provides flexible logging capabilities and can expose metrics for consumption by monitoring systems like Prometheus.
Advanced Logging:
Caddy's `log` directive, which we've touched upon, can be configured in much greater detail.
Key Logging Configuration Aspects:
- Loggers: Caddy can have multiple named loggers. By default, there's a default access logger for each HTTP server. You can define additional loggers.
- Outputs (`output`): Where log entries are written.
  - `stdout`, `stderr`: Standard output/error streams.
  - `file <path>`: Writes to a file; supports log rotation.
  - `net <address>`: Writes to a network address (TCP, UDP, Unix socket), e.g., `output net udp/localhost:9090`.
  - `discard`: Disables logging (discards entries).
- Encoders (`format`): How log entries are formatted.
  - `console` (default for `caddy run` with a TTY): Human-readable, single-line format. Limited customization.
  - `json` (default when there is no TTY, or when logging to files/network): Structured JSON format. Very detailed and good for machine processing.
  - `logfmt`: Key-value pair format, e.g., `level=info ts=... msg=...`
  - `single_field`: Outputs only a single field from the log entry.
  - `filter`: Wraps another encoder and selectively filters fields:

    ```
    log {
        format filter {
            wrap json  # Wrap the JSON encoder
            fields {
                # Keep only these top-level fields
                ts include
                level include
                msg include
                # For the 'request' object, drill down
                request > uri include
                request > method include
                request > host include
                request > remote_ip include
                # Exclude all other fields from 'request' by default;
                # add more specific http.request fields if needed
            }
        }
    }
    ```

  - `template`: Defines a custom log format using Go text/template syntax and Caddy placeholders:

    ```
    log {
        # Apache Common Log Format (CLF) example
        format template "{http.request.remote.host} - {http.auth.user.id || '-'} [{time.now.common_log}] \"{http.request.method} {http.request.uri.path} {http.request.proto}\" {http.response.status} {http.response.written}"
        # Note: placeholders are subject to Caddy versions and availability.
        # Consult Caddy's documentation for the full list of log placeholders.
    }
    ```
- Levels (`level`): Filters log entries by severity: `DEBUG`, `INFO`, `WARN`, `ERROR`, `PANIC`, `FATAL`. The default is usually `INFO`.
- Sampling (`sample`): Logs only a sample of requests, to reduce log volume.
- Including/Excluding Specific Logs: You can create multiple named loggers and apply them conditionally using matchers.
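Pulling several of these knobs together, a plausible Caddyfile sketch (the file path and rotation values are illustrative assumptions):

```
log {
    # Write to a rotated file
    output file /var/log/caddy/access.log {
        roll_size 10MiB   # rotate when the file reaches this size
        roll_keep 5       # keep at most this many rolled files
    }
    # Structured JSON entries at INFO and above
    format json
    level INFO
}
```

This combines an output, an encoder, and a level in one `log` block; anything more exotic (filters, templates, multiple loggers) layers on the same structure.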
Full JSON Configuration for Logging:
For the most detailed control, logging is configured under the `logging` top-level key in Caddy's JSON configuration. This allows defining multiple loggers with distinct sinks (outputs), encoders, and levels.
```
// Partial Caddy JSON config
{
    "logging": {
        "sink": { // Default sink for logs not handled by specific loggers
            "writer": {"output": "stderr"},
            "encoder": {"format": "console"}
        },
        "logs": { // Named loggers
            "default": { // Default logger for most Caddy internal messages
                "writer": {"output": "stderr"},
                "encoder": {"format": "console"},
                "level": "INFO"
            },
            "http_access_my_server": { // A custom access logger
                "writer": {
                    "output": "file",
                    "filename": "/var/log/caddy/my_server.access.log",
                    "roll_size_mb": 10,
                    "roll_keep": 5
                },
                "encoder": {
                    "format": "json"
                },
                "level": "INFO",
                "include": ["http.log.access.my_server"] // Ties to access logs emitted by a server
            }
        }
        // ...
    }
}
```
The `log` directive in the Caddyfile simplifies this for common cases.
Metrics (Prometheus Exposition):
Caddy can expose internal metrics in a format compatible with Prometheus, a popular open-source monitoring and alerting toolkit. This allows you to track things like:
- Number of requests
- Response times / latencies
- HTTP status codes (2xx, 4xx, 5xx counts)
- TLS handshake counts and failures
- Upstream health check status
- And more...
Enabling Metrics: Metrics are typically enabled in the global options block of your Caddyfile or via the Caddy JSON config.
```
# Global options
{
    # ... other global options ...
    servers {
        metrics # Enables metrics on the default metrics endpoint (usually /metrics on the admin port, e.g. :2019)
    }
}
```
Caddy then exposes the metrics on its admin endpoint (`:2019/metrics`) or a dedicated metrics port; the exact default can vary or be configured. If you enable `metrics` within the `servers` global option, Caddy configures a separate HTTP server (or uses the admin server) to serve the `/metrics` endpoint.
Prometheus Configuration:
In your Prometheus configuration file (prometheus.yml
), you would add a scrape job to collect metrics from Caddy:
```yaml
scrape_configs:
  - job_name: 'caddy'
    static_configs:
      - targets: ['your_caddy_host:admin_port']  # e.g., 'localhost:2019'
    # Or if Caddy exposes metrics on a different port/path:
    # metrics_path: /mycustommetrics
    # scheme: http
    # static_configs:
    #   - targets: ['your_caddy_host:metrics_port']
```
Prometheus will then periodically scrape Caddy's `/metrics` endpoint, and you can use PromQL to query the data and Grafana to visualize it.
Available Metrics:
Caddy exposes a range of metrics. Some common ones include:
- `caddy_http_requests_total`: Counter of total HTTP requests.
- `caddy_http_request_duration_seconds`: Histogram of request latencies.
- `caddy_http_responses_total`: Counter of HTTP responses, often labeled by status code.
- `caddy_tls_handshakes_total`: Counter of TLS handshakes.
- `caddy_reverse_proxy_upstreams_health`: Gauge indicating the health of upstreams.

You can view the full list of available metrics by accessing Caddy's `/metrics` endpoint in a browser or with `curl`.
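As a quick sanity check without a full Prometheus setup, the exposition text can be searched for a single metric. This small helper is a hypothetical sketch, not part of any Caddy tooling:

```python
import re

def metric_value(exposition: str, name: str):
    """Return the first sample value for `name` in Prometheus text format,
    ignoring any {label="..."} set attached to the metric name."""
    pattern = re.compile(rf"^{re.escape(name)}(?:{{[^}}]*}})?\s+(\S+)", re.MULTILINE)
    match = pattern.search(exposition)
    return float(match.group(1)) if match else None

sample = (
    '# HELP caddy_http_requests_total Counter of HTTP requests.\n'
    '# TYPE caddy_http_requests_total counter\n'
    'caddy_http_requests_total{server="srv0"} 42\n'
)
print(metric_value(sample, "caddy_http_requests_total"))  # → 42.0
```

In practice you would feed it the body returned by `curl http://localhost:2019/metrics`.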
Custom Metrics:
While Caddy provides many built-in metrics, if you need highly application-specific metrics, you might:
- Have your backend application expose its own Prometheus metrics endpoint, and Caddy would simply proxy to it.
- Develop a Caddy plugin that registers and exposes custom metrics (more advanced).
Robust logging helps in reactive debugging, while metrics enable proactive monitoring and trend analysis, both crucial for maintaining reliable services.
Workshop Configuring Custom Log Formats and Enabling Prometheus Metrics
In this workshop, we'll customize Caddy's access log format to be similar to Apache's Common Log Format (CLF). We'll also enable Prometheus metrics and inspect the metrics endpoint.
Prerequisites:
- Caddy installed.
- Python 3 (for a simple backend).
- `curl`.
- (Optional, for full metrics visualization) Prometheus and Grafana installed; we'll focus on Caddy's side and inspecting the endpoint.
Part 1: Custom Log Format
Step 1: Simple Backend
Use the `backend_app.py` from the "Caddy's Admin API" workshop (or any simple web server), running on `localhost:8000`.
```python
# ~/caddy_adv_logs_metrics_ws/backend_app.py (same as before)
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # ... (rest of the simple backend code) ...
    pass

if __name__ == '__main__':
    httpd = HTTPServer(('', 8000), Handler)
    httpd.serve_forever()
```
Place it in `~/caddy_adv_logs_metrics_ws` and run `python3 backend_app.py` from there.
Step 2: Caddyfile with Custom Log Format
In `~/caddy_adv_logs_metrics_ws`, create a `Caddyfile`:
```
# ~/caddy_adv_logs_metrics_ws/Caddyfile
localhost:8080 {
    reverse_proxy localhost:8000

    log {
        output file ./access_clf.log # Log to a local file
        # Attempting a Common Log Format (CLF)-like template.
        # Placeholders may vary slightly or need specific modules;
        # this is a common representation.
        format template "{http.request.remote.host} {http.request.header.Cf-Connecting-IP || '-'} {http.auth.user.id || '-'} [{time.now.format('02/Jan/2006:15:04:05 -0700')}] \"{http.request.method} {http.request.uri.path.escaped} {http.request.proto}\" {http.response.status} {http.response.size}"
        # Notes on placeholders:
        # - {http.request.header.Cf-Connecting-IP || '-'} : Cloudflare IP if present, else '-'
        # - {http.auth.user.id || '-'} : Authenticated user, else '-'
        # - {time.now.format(...)} : Go time format string for CLF.
        # - {http.request.uri.path.escaped} : Path properly escaped for logging.
        # - {http.response.size} : Bytes written in the response body.
    }
}
```
Explanation of the `format template`:
This attempts to replicate the Common Log Format: `remotehost rfc931 authuser [date] "request" status bytes`.

- `{http.request.remote.host}`: Client's IP.
- `{http.request.header.Cf-Connecting-IP || '-'}`: Logs Cloudflare's connecting IP if present, otherwise a hyphen. This is a common addition.
- `{http.auth.user.id || '-'}`: Logs the authenticated user ID if Caddy authentication is used, otherwise a hyphen.
- `[{time.now.format('02/Jan/2006:15:04:05 -0700')}]`: Formats the current time in CLF style. The format string `02/Jan/2006:15:04:05 -0700` is Go's reference-time way of defining this layout.
- `\"{http.request.method} {http.request.uri.path.escaped} {http.request.proto}\"`: The request line (method, path, protocol); `uri.path.escaped` is safer for logging.
- `{http.response.status}`: The HTTP status code.
- `{http.response.size}`: The size of the response body in bytes; if the size is zero or unknown, this might output `0` or `-`.
Important Note on Placeholders:
The exact availability and naming of placeholders like `{http.response.size}`, or specific `time.now` formatting options, can evolve across Caddy versions. Always consult the official Caddy documentation for the up-to-date list of placeholders for your version; some placeholders require specific modules to be active (e.g., auth placeholders).
Step 3: Run Caddy and Test Logging
- In the Caddy terminal (`~/caddy_adv_logs_metrics_ws`), run `caddy run`.
- Make a few requests to `http://localhost:8080/` and `http://localhost:8080/some/path?query=abc` using your browser or `curl`.
- Check the content of `./access_clf.log`; you should see entries resembling CLF (the exact byte counts and timestamps will vary).
This shows how you can use `format template` for highly customized log formats.
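To sanity-check what the template emits, a CLF-style line can be parsed back into fields with a regular expression. The sample line below is made up, and the extra Cloudflare field from our template would simply occupy the `ident` slot:

```python
import re

# Common Log Format: remotehost ident authuser [date] "request" status bytes
CLF_PATTERN = re.compile(
    r'^(?P<host>\S+) (?P<ident>\S+) (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)$'
)

line = '127.0.0.1 - - [01/Jan/2024:12:00:00 +0000] "GET /some/path HTTP/1.1" 200 123'
entry = CLF_PATTERN.match(line).groupdict()
print(entry["status"], entry["request"])  # → 200 GET /some/path HTTP/1.1
```

Running a few real log lines through a parser like this is a quick way to catch a template that quietly drops or misquotes a field.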
Part 2: Enabling Prometheus Metrics
Step 4: Modify Caddyfile to Enable Metrics
Edit your `~/caddy_adv_logs_metrics_ws/Caddyfile` and add a global `metrics` option. A common way is to add it to the `servers` block in the global options:
```
# ~/caddy_adv_logs_metrics_ws/Caddyfile
{
    # Global options block
    servers {
        metrics # Enable metrics endpoint
    }
}

localhost:8080 {
    reverse_proxy localhost:8000

    log {
        output file ./access_clf.log
        format template "{http.request.remote.host} {http.request.header.Cf-Connecting-IP || '-'} {http.auth.user.id || '-'} [{time.now.format('02/Jan/2006:15:04:05 -0700')}] \"{http.request.method} {http.request.uri.path.escaped} {http.request.proto}\" {http.response.status} {http.response.size}"
    }
}
```
When `metrics` is enabled this way, Caddy usually serves them on the admin API port (`:2019` by default) at the `/metrics` path.
Step 5: Restart Caddy and Check Metrics Endpoint
-   Stop Caddy (`Ctrl+C`) and restart it with `caddy run`. Watch the startup logs; you should see an indication that the metrics endpoint is active, often on the admin server.
-   Use `curl` to access the metrics endpoint (usually `http://localhost:2019/metrics`):

    ```bash
    curl http://localhost:2019/metrics
    ```

    You should see a large text output in Prometheus exposition format. Example lines:

    ```
    # HELP caddy_http_requests_total Counter of HTTP requests.
    # TYPE caddy_http_requests_total counter
    caddy_http_requests_total{server="srv0"} 0
    # HELP caddy_http_request_duration_seconds Histogram of HTTP request latencies.
    # TYPE caddy_http_request_duration_seconds histogram
    caddy_http_request_duration_seconds_bucket{le="0.005",server="srv0"} 0
    caddy_http_request_duration_seconds_bucket{le="0.01",server="srv0"} 0
    # ... many more buckets ...
    caddy_http_request_duration_seconds_sum{server="srv0"} 0
    caddy_http_request_duration_seconds_count{server="srv0"} 0
    # HELP caddy_http_responses_total Counter of HTTP responses.
    # TYPE caddy_http_responses_total counter
    caddy_http_responses_total{server="srv0",status_code="0"} 0
    # HELP caddy_reverse_proxy_upstreams_health Health of upstreams for reverse proxy. 1 is healthy, 0 is unhealthy.
    # TYPE caddy_reverse_proxy_upstreams_health gauge
    caddy_reverse_proxy_upstreams_health{dial="localhost:8000",server="srv0",upstream="localhost:8000"} 1
    # HELP caddy_tls_handshakes_total Counter of TLS handshakes.
    # TYPE caddy_tls_handshakes_total counter
    caddy_tls_handshakes_total{server="srv0"} 0
    # ... and many more metrics
    ```
-
Make some requests to your main site (
http://localhost:8080/
). - Fetch the metrics again:
curl http://localhost:2019/metrics
. You should see the counters (likecaddy_http_requests_total{server="srv0"}
) have incremented. Latency histograms will also start to populate.
Step 6: (Conceptual) Prometheus Integration
If you had Prometheus running, you would add a scrape config like this to your `prometheus.yml`:
```yaml
scrape_configs:
  - job_name: 'caddy-workshop'
    static_configs:
      - targets: ['localhost:2019']  # Assumes Caddy admin API is on localhost:2019
    # metrics_path: /metrics  # This is usually the default
```
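Once scraped, a couple of PromQL sketches using the metric names shown earlier (the expressions and windows are illustrative):

```
# Requests per second over the last 5 minutes
rate(caddy_http_requests_total[5m])

# 95th-percentile request latency
histogram_quantile(0.95, rate(caddy_http_request_duration_seconds_bucket[5m]))
```

Queries like these are what you would graph in Grafana or alert on in Prometheus.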
Step 7: Cleanup
- Stop Caddy (`Ctrl+C`).
- Stop the `backend_app.py` (`Ctrl+C`).
- Delete the `access_clf.log` file if desired.
This workshop demonstrated how to create custom log formats using templates and how to enable Caddy's built-in Prometheus metrics exporter. These features are vital for operating Caddy in a production environment, providing visibility into its operations and performance.
Performance Tuning and Best Practices
While Caddy is performant out-of-the-box for many use cases, understanding some tuning options and best practices can help you optimize it for high-traffic scenarios or resource-constrained environments.
1. Keep Caddy and Go Updated:
- Caddy developers continuously improve performance and fix bugs. Use the latest stable Caddy version.
- Caddy is built with Go, and newer Go versions often include performance improvements in the runtime and standard library (networking, crypto). If building from source with `xcaddy`, it will typically use a recent Go version.
2. Connection Management:
- HTTP Keep-Alives: Caddy supports HTTP keep-alives by default, which allow clients to reuse TCP connections for multiple HTTP requests, reducing latency. Ensure your clients and any intermediate proxies also support them.
- HTTP/2 and HTTP/3: Caddy enables HTTP/2 by default, and HTTP/3 (over QUIC) is also enabled and used if the client supports it. These protocols offer significant performance benefits (multiplexing, header compression, reduced head-of-line blocking for HTTP/2; 0-RTT and better congestion control for HTTP/3). No special Caddy tuning is usually needed for these, as they are on by default.
- Operating System Limits: For very high traffic, you might need to tune OS-level limits:
  - File Descriptors: Each TCP connection uses a file descriptor. The default limit might be too low (e.g., 1024 or 4096). Increase it using `ulimit -n <value>` (for the current session) or system-wide configuration (e.g., `/etc/security/limits.conf` on Linux). Caddy tries to raise this limit automatically if it can.
  - TCP Tuning: Kernel parameters related to TCP buffers, SYN backlog, and TIME_WAIT buckets (e.g., `net.core.somaxconn`, `net.ipv4.tcp_max_syn_backlog`, `net.ipv4.tcp_tw_reuse`). These are advanced and should be changed with caution and understanding.
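For a systemd-managed Caddy, the file-descriptor limit is usually raised in a unit override rather than with `ulimit`. A hedged sketch (the unit name assumes a standard package install, and the value is illustrative):

```
# /etc/systemd/system/caddy.service.d/override.conf
# (create with: sudo systemctl edit caddy, then reload with: sudo systemctl daemon-reload)
[Service]
LimitNOFILE=1048576
```

`LimitNOFILE` is the standard systemd knob for per-service open-file limits, so the change survives restarts and doesn't depend on shell sessions.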
3. Reverse Proxy Optimizations:
- Load Balancing Policies: Choose an appropriate policy; `least_conn` can be better than `round_robin` for services with varying request complexities or long-lived connections.
- Health Checks: Configure reasonable intervals and timeouts. Too frequent adds load; too infrequent delays failure detection.
- Upstream Buffering (`reverse_proxy` subdirectives):
  - `flush_interval <duration>`: How often to flush response buffers from the upstream. A negative value disables periodic flushing, buffering the entire response (good for small, fast responses; bad for large/streaming ones). The default is usually fine.
  - `buffer_requests` / `buffer_responses`: Caddy generally buffers requests and responses. For specific streaming use cases you might explore disabling buffering, but it's an advanced and rarely needed tweak.
- Connection Pooling to Upstreams: Caddy maintains pools of connections to upstreams to reduce the overhead of establishing new connections for each proxied request. This is generally handled automatically.
4. TLS Performance:
- Session Resumption: Caddy supports TLS session resumption (tickets and session IDs), which significantly speeds up subsequent connections from the same client by reusing previously negotiated cryptographic parameters. This is on by default.
- OCSP Stapling: Caddy automatically staples OCSP (Online Certificate Status Protocol) responses, which avoids clients needing to make a separate OCSP request to check certificate revocation status, improving initial connection speed and privacy.
- Modern Cipher Suites: Caddy uses strong, modern cipher suites by default.
- Hardware Acceleration: If your CPU supports AES-NI (most modern CPUs do), Go's crypto libraries will use it, speeding up TLS encryption/decryption.
5. Caching:
   - Client-Side Caching (`Cache-Control`, `Expires`, `ETag` headers): Configure Caddy to send appropriate caching headers for static assets. This offloads requests from Caddy entirely if the browser has a fresh copy.
   - Proxy Caching (Experimental/Plugins): Caddy's core doesn't have a built-in sophisticated proxy cache like Varnish or Nginx's `proxy_cache`, but there are experimental features and plugins that can add one. For simple needs, `ETag` validation at the proxy level can sometimes be achieved with `try_files` and a file store, but this is complex. For heavy caching, a dedicated caching proxy might be used in front of or alongside Caddy.
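For the client-side caching point, a minimal Caddyfile sketch might look like this (the matcher patterns and `max-age` are illustrative; tune them per asset type, and use long lifetimes only for fingerprinted assets):

```
example.com {
    # Long-lived caching for static assets (illustrative matcher)
    @static path *.css *.js *.woff2 *.png *.jpg *.svg
    header @static Cache-Control "public, max-age=31536000, immutable"

    file_server
}
```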
6. Compression (`encode` directive):
   - Adding the `encode` directive enables Gzip and Zstandard (zstd) compression for common text-based content types (compression is not applied without it). This reduces bandwidth and can improve perceived performance. Ensure your backends aren't also compressing if Caddy is already doing it (double compression is wasteful).
   - You can tune the compression level or prefer certain algorithms if needed, but the defaults are usually good:

     ```
     encode {
         zstd
         gzip 5
     }
     ```
7. Serving Static Files:
   - Caddy's `file_server` is very efficient.
   - Use `try_files` appropriately for SPAs to avoid unnecessary disk I/O for non-existent paths before falling back to `index.html`.
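The SPA fallback mentioned above is commonly written like this (the site root is a placeholder path):

```
example.com {
    root * /srv/my-spa    # placeholder path
    encode gzip

    # Serve the requested file if it exists, otherwise fall back to index.html
    try_files {path} /index.html
    file_server
}
```

Real files (JS bundles, images) are served directly; any other path returns `index.html` so the SPA's client-side router can handle it.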
8. Caddyfile vs. JSON API for Performance:
   - For static configurations, the Caddyfile is convenient. It is adapted to JSON internally; this adaptation step is very fast and usually negligible.
   - For highly dynamic configurations or very large numbers of sites, managing the config directly via the JSON API can be more efficient, as it bypasses the Caddyfile adaptation step. In practice, though, config loading is rarely what limits request-serving performance.
9. Profiling and Benchmarking:
   - Go Profiling: Caddy (being a Go application) can expose profiling data (CPU, memory) via the `/debug/pprof` endpoints of its admin API (by default on `localhost:2019`); check the documentation for your version. This is intended for developers or very advanced users diagnosing performance bottlenecks within Caddy itself.
   - Load Testing Tools: Use tools like `wrk`, `ab` (ApacheBench), `k6`, or `JMeter` to benchmark your Caddy setup under load and identify bottlenecks (which might be in Caddy, your backend, the network, or the OS). Test realistic scenarios.
10. Resource Allocation:
- Ensure Caddy has sufficient CPU and memory, especially if handling many TLS connections or complex request processing.
- Monitor resource usage.
General Best Practices:
- Keep it Simple: Don't overcomplicate your Caddyfile unless necessary. Simpler configurations are often easier to reason about and less prone to misconfiguration.
- Specific Matchers: Use the most specific matchers possible to avoid unintended directive execution.
- Understand Directive Order: Be aware of Caddy's predefined directive execution order and how `handle`, `route`, etc., influence it.
- Read the Docs: Caddy's official documentation is excellent and the ultimate source of truth for directive behavior and options.
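To illustrate the "specific matchers" advice, a named matcher can narrow a directive to exactly the requests it should handle (the domain, path, and backend below are hypothetical):

```
example.com {
    # Narrow matcher: only POST requests under /api/ are proxied
    @api {
        method POST
        path /api/*
    }
    reverse_proxy @api backend:9000

    # Everything else is served as static files
    file_server
}
```

Without the matcher, `reverse_proxy` would intercept every request, including ones the static file server should handle.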
By applying these considerations, you can ensure Caddy runs efficiently and reliably even under demanding conditions. Most of the time, Caddy's defaults provide excellent performance, and tuning is only needed for specific, identified bottlenecks.
This concludes the "Advanced Caddy Usage" section. We've covered Caddy's API, building with plugins, advanced logging/metrics, and performance considerations. You now have a comprehensive understanding of Caddy, from basic setup to sophisticated deployments.