
WordPress behind an Nginx reverse proxy

Because apparently I can’t leave well enough alone.

In this post, I’ll dive into how I went about setting up an Nginx reverse proxy for this WordPress site, and some of the challenges I ran into along the way.

This task proved to be more challenging than I anticipated, and there were moments when I questioned my ability to get it working. It also has me wondering whether my next project should be migrating to a static site generator.

This is part of my Podman project.

I’ve had an ongoing project to convert this site over to rootless containers with Podman. If you are curious about the progression of this project, you can catch up on what I’ve already covered in my earlier Podman posts.

Not the tutorial you are looking for.

While I will provide some configuration files and some explanations, this post is more of an outline of the issues I've run into, how I've solved them, and how I plan to move forward. A complete tutorial may come a little later, once I'm finished and satisfied.

Among the many reasons I don't want to do a conventional tutorial now is that there are already a lot of great tutorials out there covering Nginx reverse proxies, especially as they relate to containers; Seth Kenlon did a good write-up on this topic for Enable Sysadmin last year. My goal with this post isn't really to cover all the how-tos, but to share some of the details of my overall design, where I ran into trouble, and how I solved those problems. This isn't to say that my solution is the best one, but it's how I solved it, and if it helps you, great. If it doesn't, and you find a better way, feel free to let me know!

Hopefully, this story still passes along some information that might come in handy if you are doing your own migration to containers and need a reverse proxy.

Remind me, why am I doing this again?

At this point, I'm mostly finishing this container project to save face... really. Getting WordPress to play nice behind a load balancer was a lot more difficult than I thought it would be.

I told all of you that I was migrating this site to containers and that I'd update periodically with progress and tips. Initially, I was flying through the migration. Learning to build simple images turned out to be much easier than I thought it would be.

After learning the basic ways that SELinux and UID mapping work to help isolate and enable rootless containers, getting the database containerized was really no sweat – should've done it a long time ago. But then, I always knew the database would be the easy part.

For one thing, I know MariaDB really well (not that I'm some kind of guru). But if there is one computer thing outside of operating systems that I really enjoy learning about and playing with, it's databases, and MariaDB is very accessible and user friendly, at least in my opinion.

Secondly, I'm only running one database for now, so all I had to do was bind tcp/3306 on the host to tcp/3306 in the container and add the host's IP address to the wp-config.php file so the web server could talk to the MariaDB instance. That work is all outlined in my MariaDB post and was pretty simple.
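
For reference, the MariaDB piece boils down to publishing the port and pointing WordPress at the host. Here's a rough sketch of what that looks like; the image tag, volume path, password, and IP address below are placeholders rather than my actual values:

    # Publish host tcp/3306 to tcp/3306 in the container (rootless Podman).
    # Volume path, password, and image tag are placeholders.
    podman run -d --name mariadb \
        -p 3306:3306 \
        -v ~/containers/mariadb/data:/var/lib/mysql:Z \
        -e MARIADB_ROOT_PASSWORD=changeme \
        docker.io/library/mariadb:latest

    # Then point WordPress at the host in wp-config.php, for example:
    # define('DB_HOST', '192.0.2.10:3306');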

So why am I whining about the web server setup?

The web server should've been just as easy, right? Just bind the host ports to the container ports and move on with your life... Well, sure, if all I was hosting was my own personal blog. Unfortunately for me, I actually host a few websites for family and friends, and each of them needs to be available on ports 80 and 443.

The problem is that once they are moved into containers, I can't bind every website to ports 80 and 443. Each web container has to be bound to a different host port, with traffic directed to it by a reverse proxy, roughly like the sketch below. Unlike virtual hosts in a single Apache instance, the containers can't all answer web requests without some intermediary directing the traffic. Enter the reverse proxy.
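
To make that concrete, here's a rough sketch of the port layout. The container and image names are placeholders (and the second site is just an example), but the idea is that every site container publishes its own high port on the host, and only the proxy will eventually own 80 and 443:

    # Each WordPress site gets its own Apache container on its own host port.
    # Host port 8080 -> container port 80 (adjust to whatever port Apache
    # listens on inside the image). Names and image tags are placeholders.
    podman run -d --name sudoedit-web   -p 8080:80 localhost/wp-apache:latest
    podman run -d --name familysite-web -p 8081:80 localhost/wp-apache:latest

    # The Nginx reverse proxy is the only thing that will answer on 80/443,
    # forwarding each request to the right container based on server_name.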

Reverse proxy testing plan

Or maybe the headline here should've been: “No battle plan survives first contact with the enemy.”

For my reverse proxy, I've decided to use Nginx, with SSL termination at the proxy using certificates from Certbot. I had evaluated HAProxy, and I'm sure it's a great product, but I think it has way more features than I need at the moment. Perhaps my final implementation will have HAProxy instead of Nginx, but for now, I'm still trying to figure out how much complexity I can live with long term, and learning another new-to-me technology isn't in the cards for the container project.

The AWS Elastic Load Balancer is also an option if you have the money to run it. If this were a serious workload that my actual job depended upon, there is no doubt that I would've chosen a dedicated appliance like ELB. But, alas, I don't have that kind of money to throw at this problem! Plus, the goal is to learn something new, so rolling my own reverse proxy it is.

The basic plan for testing was pretty simple:

  1. Build Nginx container image.
  2. Build Apache webserver image.
  3. Stop httpd and php-fpm on the cloud host.
  4. Start the Nginx and Apache containers.
  5. Profit.

I suppose I can happily report that steps 1 through 4 went off without a hitch. Step 5 is where the problems started.
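
In practice, steps 3 and 4 amounted to something like the following. This is a sketch rather than my exact commands, and the image name and volume paths are placeholders; the sysctl line only matters if the proxy container runs rootless, since unprivileged processes can't normally bind ports below 1024:

    # Step 3: stop the host's web stack so the containers can take over.
    sudo systemctl stop httpd php-fpm

    # Step 4: let a rootless container bind ports 80/443, then start the proxy.
    sudo sysctl -w net.ipv4.ip_unprivileged_port_start=80

    # (SELinux labels on the mounted paths may need adjusting.)
    podman run -d --name nginx-proxy \
        -p 80:80 -p 443:443 \
        -v ~/containers/nginx/conf.d:/etc/nginx/conf.d:ro \
        -v /etc/letsencrypt:/etc/letsencrypt:ro \
        localhost/nginx-proxy:latest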

A basic rundown of how I set up the Nginx reverse proxy:

  1. All incoming http and https connections reach the Nginx reverse proxy, which listens on tcp/80 and tcp/443.
  2. Any plain-text http connection to any website gets redirected to https and then passed on to the appropriate container.
  3. SSL/TLS termination is handled by the reverse proxy, using certificates from Certbot.
  4. Each website is restricted to its own container listening on ports 8080, 8081, 8082, etc.

The basic Nginx reverse proxy configuration looks like this, for now:

    upstream sudoeditServers {
        server <ip_address>:8080;
    }


    # HTTP server
    # Proxy with no SSL

        server {
            rewrite ^(/.well-known/acme-challenge/.*) $1 break; # managed by Certbot

            listen       80;
            server_name  sudoedit.com;
            return 301 https://$host$request_uri;
            location = /.well-known/acme-challenge { } # managed by Certbot
        }

    # HTTPS server
    # Proxy with SSL

        server {
            listen       443;
            server_name  sudoedit.com;

            location / {
                proxy_set_header Host $http_host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Forwarded-Proto https;
                proxy_pass http://sudoeditServers$request_uri;
            }

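            # NOTE: "ssl on" below (and "listen 443" without the ssl parameter)
            # still works but is deprecated; newer Nginx prefers "listen 443 ssl".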
            ssl                  on;
            ssl_certificate      /etc/letsencrypt/live/sudoedit.com/fullchain.pem;
            ssl_certificate_key  /etc/letsencrypt/live/sudoedit.com/privkey.pem;
            include /etc/letsencrypt/options-ssl-nginx.conf;

        }

For the final implementation I'll have some clean up work to do here. That will all be outlined when I do the final write up after everything is done.

Other than a couple of deprecated (but still working) parameters, what's the problem?

Too many redirects

You have to be careful how you set up redirects when you host a website. This is doubly true if that site is sitting behind a reverse proxy or a load balancer.

When I was running my first tests, I simply made a copy of my virtual host files and used a bind mount to make the site's configuration file available to the web container. At the time, it didn't occur to me that there would be a problem when both the Nginx reverse proxy AND the Apache virtual host file tried to redirect the connecting client from http to https. After all, Nginx should've already done that, so any connection to the virtual host should already be secure... right?

Wrong. I hadn't thought that all the way through. Remember, Nginx was not just redirecting to https; it was also acting as the SSL termination point and then passing traffic to a container on an alternate port. That meant all the traffic hitting the container was plain http, which triggered another redirect on every visit.

What I had initially dismissed as a redundant artifact from this site's pre-container days turned out to be the cause of an endless redirect loop that made it impossible for anyone to view the page.
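
For what it's worth, this kind of loop is easy to spot from the command line. Following the redirects with curl and printing each hop's status and Location header shows the same 301 coming back over and over until curl gives up:

    # Follow up to 5 redirects and show each hop's status and Location header.
    # In a redirect loop the same 301 -> https://sudoedit.com/... repeats until
    # curl bails out with "Maximum (5) redirects followed".
    curl -sSIL --max-redirs 5 http://sudoedit.com/ | grep -Ei '^(HTTP|location)'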

Take a look at the virtual host configuration file. At the very bottom you can see the rewrite rules that were put in place by Certbot when I initially set up the website.


    <VirtualHost *:80>
    	# The ServerName directive sets the request scheme, hostname and port that
    	# the server uses to identify itself. This is used when creating
    	# redirection URLs. In the context of virtual hosts, the ServerName
    	# specifies what hostname must appear in the request's Host: header to
    	# match this virtual host. For the default virtual host (this file) this
    	# value is not decisive as it is used as a last resort host regardless.
    	# However, you must set it for any further virtual host explicitly.
    	ServerName sudoedit.com
            Protocols h2 h2c

    	ServerAdmin luke@sudoedit.com
    	DocumentRoot /var/www/html/sudoedit/

    	# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
    	# error, crit, alert, emerg.
    	# It is also possible to configure the loglevel for particular
    	# modules, e.g.
    	#LogLevel info ssl:warn

    	ErrorLog /var/log/httpd/sudoedit_error_log
    	CustomLog /var/log/httpd/sudoedit_access_log combined

    	# For most configuration files from conf-available/, which are
    	# enabled or disabled at a global level, it is possible to
    	# include a line for only one particular virtual host. For example the
    	# following line enables the CGI configuration for this host only
    	# after it has been globally disabled with "a2disconf".
    	#Include conf-available/serve-cgi-bin.conf
    RewriteEngine on
    RewriteCond %{SERVER_NAME} =sudoedit.com
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
    </VirtualHost>

Removing the three rewrite rules at the bottom of the file allowed Nginx to be the only source of https redirects, putting a stop to the endless redirect loop.

But wait! There's more!

After the redirect loop was broken, I was able to get to the home page, but none of the images, CSS, or JavaScript would load. Further, the wp-admin page was inaccessible. To get around the CSS loading issues, it seems you have to tell WordPress that if it is receiving https traffic forwarded from a proxy, it should behave as if it were receiving that https traffic directly.

I'm not entirely certain why this is the case, but from piecing together little tidbits across the internet, it is apparently common on PHP-driven sites like WordPress. The solution I found after looking at a bunch of sites, notably https://ahenriksson.com/2020/01/27/how-to-set-up-wordpress-behind-a-reverse-proxy-when-using-nginx/ and https://techblog.jeppson.org/2017/08/fix-wordpress-sorry-not-allowed-access-page/, involves making an edit to the wp-config.php file:

    // Force https for the WordPress admin and login pages.
    define('FORCE_SSL_ADMIN', true);
    define('FORCE_SSL_LOGIN', true);
    // Trust the X-Forwarded-Proto header set by Nginx at the proxy;
    // isset() avoids a PHP notice when the header is missing.
    if (isset($_SERVER['HTTP_X_FORWARDED_PROTO']) && $_SERVER['HTTP_X_FORWARDED_PROTO'] === 'https') {
        $_SERVER['HTTPS'] = 'on';
    }

I'm still on the hunt for a solution that doesn't involve putting code directly into the wp-config file, but until then this works, and if you look into it, you'll find no shortage of people who have run into this same problem.

With the redirect issues resolved, I was ready to move on and see if there were any other issues in the way.

404 issue

With the home page finally loading correctly, I was pleased to see that SSL termination was working as intended. My site showed up with a little green lock and all was right with the world. Until I started checking the links to my posts.

Every post led to a 404 “Page not Found” error.

Why would the home page load but not the posts themselves? Nothing outside the home page was working, no posts, no archives, no categories, not the admin page... nothing.

The most likely scenario was that something was amiss with my “.htaccess” file; WordPress uses the htaccess file in Apache to handle permalinks (the stock rules are shown below). Although that is the most common culprit, it seemed a little odd that the htaccess file would be the problem here. As with the virtual host configuration file, I had just copied the webroot directory over to a new location and used a bind mount to add it to the Apache container.
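
For context, the permalink handling lives in a small block of rewrite rules that WordPress writes into the .htaccess file at the top of the webroot. Mine should have looked like the stock rules below (the path comes from the virtual host's DocumentRoot):

    cat /var/www/html/sudoedit/.htaccess

    # BEGIN WordPress
    <IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteBase /
    RewriteRule ^index\.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]
    </IfModule>
    # END WordPress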

I did A LOT of searching on this problem, verifying my reverse proxy configuration over and over and checking and rechecking file permissions for all the files and directories in my containers. All of it checked out. Even SELinux wasn't complaining about anything, which has got to be a first.

Then I stumbled upon a similar issue involving permalinks over at Server Fault. The person asking the question was kind enough to answer it themselves, and I am glad to know that there is at least one other person out there who makes little mistakes like this. My problem wasn't exactly the same, but it got me thinking about where mine might be.

> I have now resolved this. The problem was I had forgotten to enable Mod-ReWrite for Apache.

The thing is I had the rewrite module enabled:

    httpd -M | grep rewrite
     rewrite_module (shared)

So with mod_rewrite enabled, was the main Apache configuration file allowing htaccess files to make those modifications? In a word: nope.

When I built the Apache container, I didn't take into account that WordPress relies on the htaccess file to direct blog traffic to the appropriate posts. Having mod_rewrite enabled was only half of it; the other half was letting the htaccess file override the server configuration by changing AllowOverride None to AllowOverride All in the httpd.conf file.

    <Directory "/var/www">
        ....
        AllowOverride All
        ....
    </Directory>

This is something I had set up years ago when I first started the blog, and never thought about it again.
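
With AllowOverride flipped to All and the container restarted, a quick curl against one of the permalinks (the slug below is just an example) is an easy way to confirm the htaccess rules are being honored again:

    # Before the fix this returned 404; once .htaccess overrides are allowed,
    # the WordPress rewrite rules kick in and the permalink resolves normally.
    curl -s -o /dev/null -w '%{http_code}\n' https://sudoedit.com/some-post-slug/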

What's next?

Right now I have all the containers built.

  1. Nginx container for reverse proxy and ssl termination.
  2. Apache container with php-fpm for WordPress sites.
  3. MariaDB container.

The MariaDB container is up and running, so the next step will be a direct cutover to the Nginx reverse proxy, bringing the Apache containers online. Before I do that, I still have a few things to finish.

I figure at my current pace I should be able to make the final cutover sometime next weekend, hopefully with a full write-up on what went well and what didn't by the following week. All of that assumes real life doesn't get in my way too much in the meantime.

I'll be sure to take good notes and document the final product and share my results.