Using nginx to Host a Single Page Application

Deploying any single page application requires a web server to serve its static content. Traditionally, that has meant provisioning a server somewhere, then installing and configuring the web server software on it.

I have been using docker for approximately two years, and it's become one of my favorite tools for deploying applications. The main benefit is the simplicity of configuring and using different software components. Adopting a new technology is easy because a container is packaged up with all of its dependencies. That lessens the load on both developers and IT to install and configure servers.

This means that, rather than configuring a web server on a specific Linux VM, I set up a container with a web server that can be deployed to any server. This eliminates the need for lengthy software installs and provides a number of other benefits around security.

The web server I use on docker for serving HTTP is nginx. It has a large community and is pretty lightweight. The main drawback is its configuration: the configuration files and modules can be daunting at times, even for simple tasks such as this.
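
As a minimal sketch of what that container looks like, the Dockerfile below copies a built application into the official nginx image. The dist/ directory and the default.conf filename are placeholders for whatever your build tool and configuration actually produce.

# Start from the official nginx image (the alpine variant keeps it small)
FROM nginx:alpine

# Copy the built static assets into nginx's default web root
COPY dist/ /usr/share/nginx/html/

# Replace the default site configuration with our own (see the sample configuration below)
COPY default.conf /etc/nginx/conf.d/default.conf

The base image already exposes port 80 and starts nginx, so nothing else is required to get a servable image.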

There are several considerations to take into account, some of which I've had to learn the hard way.

Don't cache the index page

If an application is updated frequently and the browser caches the main index page, users won't pick up the updates. They may end up seeing cached pages from older versions of the application.

I direct the browser not to cache the index page, and let it use the If-Modified-Since HTTP header so the full page is only transferred when it has actually changed. I feel this gives me a good balance between network traffic and maintainability.

This also relies on tools such as webpack generating new hashes for their JavaScript / HTML bundles, which change whenever there is a new build. In other words, the index changes when they produce a new build. If this is not the case, then you shouldn't cache any of the application.
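
Because the bundle filenames carry content hashes, they can be cached aggressively while only the index stays uncached. The location block below is a rough sketch of that idea, assuming all .js and .css files are hashed; it is not part of the boilerplate configuration later in this post.

# Hashed bundles (e.g. main.abc123.js) can be cached for a long time,
# because a new build produces new filenames rather than new content at the same URL
location ~* \.(?:js|css)$ {
    root /usr/share/nginx/html;
    expires 1y;
    add_header Cache-Control "public, immutable";
}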

SSL

Servers directly reachable from the network should be set up for SSL. I don't have this in my nginx example, but it's one of the easier things to configure.

Using Let's Encrypt is the easiest way to secure your domain and site.
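
As a rough sketch of what the TLS side looks like once a certificate has been issued (example.com and the certificate paths are placeholders; certbot's nginx plugin can generate an equivalent block automatically):

server {
    listen 443 ssl;
    server_name example.com;

    # Certificate and key as issued by Let's Encrypt / certbot
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # ... the same location blocks as in the sample configuration below ...
}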

Reverse proxy APIs

If the application uses an API, and the API is hosted internally, it sometimes makes sense to have a reverse proxy built into the web server to avoid cross-origin issues. This allows API requests to use the same URL structure as the application.

This does raise some extra concerns, such as timeouts and the added latency of an additional network hop.

Sample configuration

Here's a sample nginx configuration that I use as a boilerplate for my projects.

# Typically I use this file as a boilerplate to configure an nginx docker container
#
# This goes in /etc/nginx/conf.d/default.conf

# If you are reverse proxying an API
upstream api {
    server API_SERVER_GOES_HERE:port;
}

server {
    listen 80;  # SSL is not configured, but would be configured here

    # If you are proxied and the proxy doesn't support URL re-writing, a rewrite rule can be added here
    # rewrite ^REWRITE_URL_GOES_HERE(.*)$ /$1;

    location / {
        root /usr/share/nginx/html;

        # Due to the try_files behavior below, nginx will always try to serve
        # the index as part of try_files.  If index is set to something that
        # doesn't resolve, we don't have to worry about index.html being cached.
        #
        # If frequent updates occur, it's important that index.html not be cached
        # in the browser.  Otherwise the software update will only occur when the
        # cached page expires.  The If-Modified-Since is a better way to handle this
        # for SPAs with frequent updates.
        index unresolvable-file-html.html;

        try_files $uri @index;
    }

    # This separate location is so the no-cache policy only applies to the index and nothing else.
    location @index {
        root /usr/share/nginx/html;
        add_header Cache-Control no-cache;
        expires 0;
        try_files /index.html =404;
    }

    location /api/ {
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
        # If you have any long running API calls, the read timeout needs to be increased here
        # proxy_read_timeout 120s;
        proxy_pass http://api/;
    }
}