Replacing NGINX With Caddy
Marek Semjan
Tags: Caddy, HTTP Server, NGINX, Load balancer, Application load balancer, REST API, Round Robin, IP Hash, Least connected
1177 Words • 5 Minutes, 21 Seconds
2025-02-08 13:27 +0000
Introduction
Last year I wrote guides on how to set up a load balancer and a reverse proxy using the NGINX server. As I announced in the recent post about my plans for this year, I’ve decided to try Caddy1 as a replacement for NGINX.
Caddy VS NGINX
Both Caddy and NGINX are HTTP servers with various features. Both have their advantages and disadvantages. In this section we will compare these two applications.
NGINX
Advantages:
- It’s an industry standard
- Highly efficient - Can handle thousands of concurrent connections
- Integrates well with DevOps tools like Docker and Kubernetes
- Highly customizable for various use cases - Can be used as a simple file server, reverse proxy, or load balancer
Disadvantages:
- Has a steeper learning curve
- Setting up HTTPS is a more involved process
Caddy
Advantages:
- Built-in support for automatic SSL/TLS
- Easier to set up
- Can be extended using various plugins
Disadvantages:
- Smaller community and support
- Lower raw performance than NGINX
Which One Is For You?
I would recommend Caddy for startups, smaller companies, and tech enthusiasts who want HTTPS support without any hassle. Caddy definitely beats NGINX in simplicity and ease of setup.
However, if you need raw performance for handling thousands of concurrent requests in enterprise-level applications, then NGINX is clearly the better choice. Moreover, it provides better support for microservices and containerized environments.
Installation
To install Caddy, follow the official installation guide for your OS.
Caddyfile
Caddy is configured using a Caddyfile, which can be found in /etc/caddy/Caddyfile. To test things, we can configure Caddy as a simple file server with the following code:
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    root * /var/www
    file_server
}
The first block restricts access to the admin interface to a local Unix socket whose directory is restricted to caddy:caddy. By default, the admin endpoint listens on a local TCP socket, which allows arbitrary configuration changes by any process that has access to the local interface. Don’t leave admin over TCP turned on unless you understand all the implications.
The second block that we specify is for the example.com website. You can replace this with localhost or the IP address of your computer. Everything between the curly braces is a block that contains the site’s configuration. If there is only a single site block (that is, without the global settings for admin), we can omit the curly braces.
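For instance, a Caddyfile that contains nothing but the file server site could be written without braces (a minimal sketch; the global admin block is left out here on purpose, since its presence would require the braces again):

```
example.com

root * /var/www
file_server
```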
The root directive specifies the path to the root of the site (in this case we have a wildcard * that matches everything, but we could specify a concrete path, such as /index.html), and /var/www is the path to the folder with the website files. Finally, the file_server directive2 instructs Caddy to function as a file server.
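The file_server directive also accepts options. For example, the browse option enables directory listings for folders without an index file, which is handy when sharing files from a home lab:

```
example.com {
    root * /var/www
    file_server browse
}
```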
Setting Up Reverse Proxy
Setting up a reverse proxy with Caddy is extremely simple. All you need to do is include the following block with the reverse_proxy directive3:
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    reverse_proxy localhost:8080
}
After restarting Caddy with sudo systemctl restart caddy, it should forward requests to the website running on localhost:8080.
If we want to set up the reverse proxy the same way as we did with our NGINX server (that is, we want to forward only requests to example.com/api/*, and strip the /api prefix), we only need to modify the Caddyfile slightly:
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    handle_path /api/* {
        reverse_proxy localhost:8080
    }
}
The handle_path directive4 will match requests starting with /api, and strip the /api prefix from the path.
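If we wanted to keep the /api prefix instead of stripping it, we could use the plain handle directive, which matches the same way but leaves the path untouched:

```
example.com {
    handle /api/* {
        reverse_proxy localhost:8080
    }
}
```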
We can additionally add configuration to set various headers on the forwarded requests (if required):
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    handle_path /api/* {
        reverse_proxy localhost:8080 {
            header_up Host {host}
            header_up X-Real-IP {remote_host}
            header_up X-Forwarded-For {remote_host}
            header_up X-Forwarded-Proto {scheme}
        }
    }
}
In this example, Caddy will add the Host, X-Real-IP, X-Forwarded-For, and X-Forwarded-Proto headers to the requests forwarded to our service. (Note that Caddy already passes the Host header through and sets X-Forwarded-For and X-Forwarded-Proto by default; explicit header_up lines like these are mainly useful when you need to override the defaults.)
Setting Up Load Balancer
Setting up a load balancer is just as easy as setting up the reverse proxy. In fact, it is done with the same reverse_proxy directive:
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    reverse_proxy localhost:8080 localhost:8081 localhost:8082
}
Requests to example.com will be load-balanced across localhost:8080, localhost:8081, and localhost:8082. By default, Caddy uses the random policy, which picks a random upstream for each request (unlike NGINX, which defaults to round-robin). If one of these services becomes unavailable, Caddy will automatically retry another available one.
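How long and how often Caddy retries other upstreams can be tuned with the lb_try_duration and lb_try_interval options (the values below are illustrative, not recommendations):

```
example.com {
    reverse_proxy localhost:8080 localhost:8081 localhost:8082 {
        lb_try_duration 5s    # keep retrying other upstreams for up to 5 seconds
        lb_try_interval 250ms # wait 250 ms between retries
    }
}
```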
In some cases, spreading requests evenly is undesirable. For example, our service may require sticky sessions to work properly (e.g. because of caching or server-side session state). If we need a different load-balancing method, we can specify it with the lb_policy directive. Here is an example that uses IP hashing to select the backend based on the client’s IP address:
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    reverse_proxy {
        lb_policy ip_hash
        to localhost:8080 localhost:8081 localhost:8082
    }
}
The available options for lb_policy are summarized in the following table:
| Policy | Description |
|---|---|
| round_robin | Distributes requests evenly across all servers in turn |
| random | Selects a random server for each request (the default policy) |
| least_conn | Routes traffic to the server with the fewest active requests (the equivalent of NGINX’s least_conn) |
| first | Always picks the first available server in the list (useful for failover scenarios, when we want the primary server to handle all traffic unless it fails) |
| ip_hash | Assigns a server based on the client’s IP address, ensuring sticky sessions |
| uri_hash | Assigns a server based on the request URI, ensuring consistent routing for the same resource |
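As an illustration of the first policy, a primary/backup setup could look like the sketch below. The backup on localhost:8081 only receives traffic while localhost:8080 is considered down; fail_duration is a passive health-check option that tells Caddy how long to remember a failed upstream:

```
example.com {
    reverse_proxy {
        lb_policy first
        to localhost:8080 localhost:8081
        fail_duration 10s # remember a failed upstream for 10 seconds
    }
}
```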
Moreover, we can easily set up active health checks to ensure that unhealthy servers are not used. Example:
{
    admin "unix//run/caddy/admin.socket"
}

example.com {
    reverse_proxy {
        lb_policy least_conn # The server with the fewest active requests will be used
        to localhost:8080 localhost:8081 localhost:8082
        health_uri /health # Check the `/health` endpoint
        health_interval 10s # Check every 10 seconds
        health_timeout 2s # Fail the check if there is no response within 2 seconds
    }
}
With this setup Caddy will:
- Send traffic to the server with the least active connections
- Automatically remove unhealthy servers from the pool
- Retry failed requests on another server
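Besides active health checks, Caddy also supports passive health checks, where upstreams are marked unhealthy based on the outcome of real proxied requests rather than a dedicated endpoint. A sketch (the thresholds are illustrative):

```
example.com {
    reverse_proxy localhost:8080 localhost:8081 {
        fail_duration 30s    # how long to remember a failed request
        max_fails 3          # failures within fail_duration before marking an upstream unhealthy
        unhealthy_status 5xx # count 5xx responses as failures
    }
}
```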
Conclusions
The goal of this post was to showcase the capabilities of Caddy. We looked at the basics in an attempt to reproduce the setup I have for NGINX (see my earlier posts about setting up a load balancer and a reverse proxy with NGINX).
I think I managed to demonstrate that we can configure the same functionality as with NGINX. The Caddyfile format is easier to read and write than the one NGINX uses. We haven’t gone into depth, and there are still many Caddyfile directives that you can use in your configuration.
Other interesting features worth mentioning are support for Prometheus metrics, automatic HTTPS, templates, and PHP FastCGI, but these are out of scope of this post.
If you are looking for a simple HTTPS server for your home lab, give Caddy a shot. It’s not as popular as NGINX, but it’s easy to set up. The main disadvantage is that the community is much smaller, so in case of issues it will be more difficult to find help or a guide with a fix.
Sources
Caddy - The Ultimate Server with Automatic HTTPS. Caddy Web Server. https://caddyserver.com/ ↩︎
file_server (Caddyfile directive). Caddy Documentation. https://caddyserver.com/docs/caddyfile/directives/file_server ↩︎
reverse_proxy (Caddyfile directive). Caddy Documentation. https://caddyserver.com/docs/caddyfile/directives/reverse_proxy ↩︎
handle_path (Caddyfile directive). Caddy Documentation. https://caddyserver.com/docs/caddyfile/directives/handle_path ↩︎