Horizontally Scaling Node Containers with Nginx and Docker Compose
Learn how to horizontally scale an application made up of Node servers using Nginx and Docker Compose. Along the way we will cover Docker container replicas, load balancing, and more.
Environment Variables
The environment variables below configure the ports and names of our Node servers and Nginx load balancer. The PROJECT_NAME variable will be loaded into the docker-compose.yaml file and used to give our project a name.
INFO: Docker Compose names many of its resources after the project directory by default. Setting the project name with an environment variable gives us more control.
PROJECT_NAME=my-project
NODE_PORT=1235
NGINX_CONTAINER_NAME=my-nginx-c
NGINX_PORT=1234
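Docker Compose automatically reads a .env file that sits next to docker-compose.yaml. Once the Compose file from later in this article is in place, we can verify the substitution by printing the resolved configuration (an optional sanity check):
# Print the compose configuration with ${...} placeholders replaced by the .env values
docker compose config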
Node App
The Node app is an ES6 npm project that uses Express to serve up a JSON array. It will only be accessible from the load balancer (Nginx).
npm init es6 -y
npm i express
import express from "express";
const NODE_PORT = process.env.NODE_PORT;
const app = express();
app.get("/api/users", (req, res) => {
    return res.json(['WittCepter', 'Mike', 'WittCode', 'Spencer']);
});
app.listen(NODE_PORT, () => {
    console.log(`Server running at ${NODE_PORT}`);
});
"scripts": {
"start": "node ./src/server.js"
},
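Before containerizing anything, the server can be smoke-tested directly on the host. NODE_PORT has to be supplied because the app reads it from the environment; the values below assume the sample .env from earlier:
# Start the server locally, passing the port through the environment
NODE_PORT=1235 npm start
# In another terminal, hit the endpoint directly
curl http://localhost:1235/api/users
# Expected: ["WittCepter","Mike","WittCode","Spencer"]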
Nginx Configuration
Here we configure Nginx to act as a load balancer using the upstream directive, which defines a group of servers that requests can be proxied to. The configuration is split across two files, starting with the main nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    log_format main ' [$time_local] $remote_addr - $remote_user "$request" $status';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    #gzip on;
    include /etc/nginx/conf.d/*.conf;
}
The second file, upstream.conf, defines the backend server group and the virtual server that proxies requests to it:
upstream backend {
    server ${PROJECT_NAME}-server-1:${NODE_PORT} weight=5;
    server ${PROJECT_NAME}-server-2:${NODE_PORT};
    server ${PROJECT_NAME}-server-3:${NODE_PORT};
    server ${PROJECT_NAME}-server-4:${NODE_PORT};
    server ${PROJECT_NAME}-server-5:${NODE_PORT};
    server ${PROJECT_NAME}-server-6:${NODE_PORT};
}
server {
    listen ${NGINX_PORT};
    server_name ${NGINX_CONTAINER_NAME};
    root /usr/share/nginx/html;
    location /api {
        proxy_pass http://backend;
    }
}
INFO: Notice how we use the project name in the servers' addresses. This is because Docker Compose names the containers after the project when it creates them.
Above, we also add a weight to the first server. By default, requests are distributed evenly amongst the servers using a round-robin balancing method. With server 1 given a weight of 5 and the other five servers left at the default weight of 1, the total weight is 10, so server 1 receives 5 out of every 10 requests while each of the others receives 1. The proxy_pass directive sets the protocol and address of a proxied server; here we proxy requests for /api on to our upstream server group.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
    <h1>Making Some Requests!</h1>
    <h3>Users</h3>
    <div id="div-users"></div>
    <script>
        const divUsers = document.getElementById('div-users');
        fetch('/api/users').then(res => res.json()).then(data => {
            divUsers.innerHTML = data.map(user => `<p>${user}</p>`).join('');
        });
    </script>
</body>
</html>
This simple HTML file accesses the Node server group through the Nginx load balancer. Notice that the fetch request goes to Nginx (the page's own origin), not to the Node servers directly.
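Once the whole stack is running (covered in the sections below), both the page and the API are reachable through Nginx on its published port. The port here assumes NGINX_PORT=1234 from the sample .env:
# Fetch the static page served by Nginx itself
curl http://localhost:1234/
# Fetch the API; Nginx proxies this request to one of the Node replicas
curl http://localhost:1234/api/users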
Dockerfiles
We need two Dockerfiles: one for the Nginx load balancer and one for the Node server. First, the Nginx Dockerfile:
FROM nginx:alpine
COPY ./public /usr/share/nginx/html
COPY ./conf/nginx.conf /etc/nginx/nginx.conf
COPY ./conf/upstream.conf /etc/nginx/templates/default.conf.template
WARNING: The location /etc/nginx/templates is very important with the Nginx Docker image. Files ending in .template that are placed in that directory will have environment variables substituted with envsubst.
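Once the container is running, the rendered result can be inspected to confirm that the variables were substituted; the container name assumes NGINX_CONTAINER_NAME=my-nginx-c from the sample .env:
# The template is rendered to /etc/nginx/conf.d/default.conf inside the container
docker exec my-nginx-c cat /etc/nginx/conf.d/default.conf
# Optionally ask Nginx to validate the full configuration
docker exec my-nginx-c nginx -t
Next comes the Node server Dockerfile, which installs the production dependencies and starts the app: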
FROM node
WORKDIR /server
COPY package*.json ./
RUN npm i --omit=dev
COPY . .
CMD ["npm", "start"]
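Docker Compose will build both of these images for us in the next section, but they can also be built by hand to check that the Dockerfiles work. The directory names below match the build contexts used in the upcoming Compose file:
# Build the Node server and Nginx images manually (optional)
docker build -t my-node-i ./server
docker build -t my-nginx-i ./reverse-proxy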
Docker Compose Configuration
Here we configure Docker Compose to spin up our entire application. First we set the project name from the PROJECT_NAME environment variable, which causes Docker Compose to name its resources based on this value. The most important key, however, is replicas: setting it to 6 tells Docker Compose to create 6 copies of the Node server container, each named after the project, the service, and the replica number.
name: ${PROJECT_NAME}
services:
  server:
    pull_policy: build
    image: my-node-i
    build:
      context: server
      dockerfile: Dockerfile
    env_file: .env
    deploy:
      replicas: 6
    volumes:
      - my-node-v:/server/node_modules
  reverse-proxy:
    pull_policy: build
    image: my-nginx-i
    container_name: ${NGINX_CONTAINER_NAME}
    env_file: .env
    build:
      context: reverse-proxy
      dockerfile: Dockerfile
    ports:
      - ${NGINX_PORT}:${NGINX_PORT}
    depends_on:
      - server
volumes:
  my-node-v:
    name: my-node-v
INFO: We also set pull_policy to build. This makes it so Docker Compose will build the image as opposed to attempting to pull it from a registry.
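Once the stack is running (docker compose up -d, shown in the Testing section below), the replica naming described above can be confirmed by listing the project's containers; the names assume PROJECT_NAME=my-project from the sample .env:
# List every container in the project
docker compose ps
# List just the Node replicas; expect my-project-server-1 through my-project-server-6
docker ps --format '{{.Names}}' --filter "name=my-project-server"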
Testing
To test our application we will flood the load balancer with API requests using the script below. While it runs, the docker stats command (run in a second terminal) lets us watch the CPU usage of all the Node servers fluctuate.
#!/bin/bash
for i in {1..500}
do
    curl localhost:1234/api/users
    echo
done
chmod +x test.sh
SUCCESS: We need to make the script file executable with chmod before we can run it.
docker compose up -d
./test.sh
docker stats
WARNING: When looking at the docker stats output, notice how the CPU percentage for server 1 is the highest. This is because it was given a weight of 5 in the upstream group.
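To get a cleaner view of the imbalance, docker stats can also be narrowed to the Node replicas and formatted to show only container names and CPU; the filter assumes PROJECT_NAME=my-project from the sample .env:
# One-off snapshot of CPU usage per Node replica
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}" $(docker ps -q --filter "name=my-project-server")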