Dockerizing a React App for Development and Production

Dockerize a React application for both development and production using Docker and Docker Compose. Learn about Docker containers, Docker images, and more.

Why Containerize a Project with Docker?

Dockerizing a project is a good idea because it solves the "it only works on my machine" problem by providing a consistent and isolated environment. Docker does this with containers, which allow us to package an application into an artifact that can be distributed and run in different environments.

Using a Custom React Project

We will containerize a React application created with the module bundler webpack. Here webpack uses Babel to convert JSX into code that the browser understands. We configure webpack with three configuration files: webpack.config.js, webpack.dev.js, and webpack.prod.js.

The file webpack.prod.js is for working with the react project in production, webpack.dev.js is for working with the react project in development, and webpack.config.js contains the commonalities between the two.
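
The project's actual webpack configs aren't shown here, but as a rough sketch, the environment-specific files can pull the shared options in with the webpack-merge package. The port and options below are assumptions for illustration, not the tutorial's exact config.

// webpack.dev.js - a minimal sketch, assuming webpack-merge is installed
// and the shared loaders/plugins live in webpack.config.js
const { merge } = require('webpack-merge');
const common = require('./webpack.config.js');

module.exports = merge(common, {
  mode: 'development',
  devtool: 'inline-source-map',
  devServer: {
    port: 8080, // assumed; matches the port mapped later in docker-compose
  },
});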

Creating a Dockerfile.dev File

First we will add a Dockerfile titled Dockerfile.dev to the root of our project to assemble a Docker image we can use to develop our React application. Docker builds images by reading the instructions placed inside Dockerfiles.

On the first line of the Dockerfile.dev file, add the instruction FROM node:alpine.

FROM node:alpine

Dockerfiles must start with the FROM instruction, which initializes the base image for subsequent instructions. A base image is an image used to create container images; it can be an official Docker image, a custom image, etc. Here, we are specifying the Node.js image built on the Alpine Linux distribution. Alpine is only about 5 MB in size and has access to a very complete package repository, making it an ideal base image. We also don't specify a version of Node or Alpine, which gives us the latest available version of each.
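
If reproducible builds matter to you, you may prefer to pin a specific tag instead; the version below is just an example, not a requirement of this tutorial:

FROM node:18-alpine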

Next, we want to set the working directory of our Docker container by using the WORKDIR instruction. Any RUN, CMD, ADD, COPY, or ENTRYPOINT instruction that follows this line will be executed in the specified working directory.

FROM node:alpine
WORKDIR /client

We specify the working directory to be /client to match our host folder structure. Now let's bring our package.json file into the container and run npm i to install the required dependencies. We can bring files into a Docker container by using the COPY instruction. The COPY instruction takes the form COPY <src-path> <dst-path>, where <src-path> is the path to the file on the host machine and <dst-path> is the destination to copy the files to inside the container.

FROM node:alpine
WORKDIR /client
COPY package.json .
RUN npm i

We copy over our package.json file and install dependencies before copying over the rest of the React application so Docker can cache these layers. Then, when the source code of the React application changes but package.json hasn't, Docker won't need to recopy package.json or reinstall the same dependencies. Now let's copy over the rest of our project files with the COPY . . instruction.

FROM node:alpine
WORKDIR /client
COPY package.json .
RUN npm i
COPY . .

Finally, let's use the CMD instruction to run the start script in our package.json file. CMD specifies the default command to execute when a Docker container starts.

FROM node:alpine
WORKDIR /client
COPY package.json .
RUN npm i
COPY . .
CMD ["npm", "start"]

Creating a Dockerfile.prod File

Next we will add a Dockerfile titled Dockerfile.prod to the root of our project to assemble a production Docker image.

Our production Dockerfile will use a multi-stage build. Multi-stage builds allow for the creation of smaller container images with better caching. They achieve this by using multiple FROM statements. Each FROM statement can use a different base image and begins a new stage of the build. The first stage of our production Dockerfile will be essentially the same as the development one, except we will give it the name build by using the AS keyword.

FROM node:alpine AS build
WORKDIR /client
COPY package.json .
RUN npm i
COPY . .

Now, as this is a build for production, we will run npm run build as opposed to npm start.

FROM node:alpine AS build
WORKDIR /client
COPY package.json .
RUN npm i
COPY . .
RUN npm run build

We will use nginx to serve this production build. Nginx is a web server that is commonly used as a reverse proxy and load balancer, but it can do much more. To serve the build, we will create another build stage using the nginx image. Pull the nginx image with the instruction FROM nginx.

FROM node:alpine AS build
WORKDIR /client
COPY package.json .
RUN npm i
COPY . .
RUN npm run build

FROM nginx

Then let's copy the build folder from the previous stage and place it inside /usr/share/nginx/html in the current stage. We do this with the --from flag, which allows COPY to pull files from an earlier build stage instead of from the host machine.

FROM node:alpine AS build
WORKDIR /client
COPY package.json .
RUN npm i
COPY . .
RUN npm run build

FROM nginx
COPY --from=build /client/build /usr/share/nginx/html

We copy to the /usr/share/nginx/html directory as this is the default location nginx serves content from. Placing an index.html file inside this folder makes it immediately available on port 80, as the nginx HTTP server listens for inbound connections on port 80 by default.
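
As a quick sanity check, the production image can also be built and run directly. The tag matches the image name we give it later in docker-compose, and mapping host port 8080 to container port 80 is an arbitrary choice:

# run from inside the client folder
docker build -f Dockerfile.prod -t client-prod-i .
docker run -p 8080:80 client-prod-i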

Creating a .dockerignore File

Next we will add a .dockerignore file to exclude certain files and folders from the build context, giving us faster and lighter builds. Add this .dockerignore file to the root of the project directory.

Within it, let's place some commonly ignored files and folders. These will be ignored when building the Docker image.

.git
.vscode
.dockerignore
.gitignore
.env
config
build
node_modules
docker-compose.dev.yaml
docker-compose.prod.yaml
docker-compose.yaml
Dockerfile
Dockerfile.dev
Dockerfile.prod
Makefile
README.md

Creating a docker-compose.yaml File

Now, this project only consists of a front-end React application. However, in the future we may want to add a Node backend, a database, etc. As such, we should create a docker-compose.yaml file to allow us to deploy, combine, and configure multiple Docker containers at the same time. Create a docker-compose.yaml file at the root level of the project.

This docker-compose.yaml file will not contain much information as it is our base docker-compose file. We will also have separate development and production docker-compose files. Within docker-compose.yaml, we declare only the commonalities between the two: the name of the service and the build context.

version: "3.8"
services:
    client:
        build:
            context: ./client

Services describe each container's behavior. Each service has a name; we have called our React service client. The build key is how we build our Docker image, and the context key within build specifies the location of our Dockerfile. Both Dockerfile.dev and Dockerfile.prod live inside the client folder. This will be the only similarity between our development and production docker-compose files.

Creating a docker-compose.dev.yaml File

We will configure our development services inside a docker-compose.dev.yaml file. Create this file at the root directory.

Now let's declare our service the same way we did in our docker-compose.yaml file.

version: "3.8"
services:
    client:

We can declare the name of the image used to create our container using the image key. Let's name our image client-dev-i.

version: "3.8"
services:
    client:
        image: client-dev-i

Now, along with the path to our Dockerfile.dev file, we also need to specify its name. This can be done with the dockerfile key.

version: "3.8"
services:
    client:
        image: client-dev-i
        build:
            dockerfile: Dockerfile.dev

When this docker-compose.dev.yaml file is combined with docker-compose.yaml, the image will be built using the context ./client and the dockerfile Dockerfile.dev. Now let's name the container created from this image by using the container_name key. We will name our container client-dev-c.

version: "3.8"
services:
    client:
        image: client-dev-i
        build:
            dockerfile: Dockerfile.dev
        container_name: client-dev-c

Next let's set up some volumes. A Docker volume is an independent file system that is entirely managed by Docker. Volumes persist the data generated and used by Docker containers; without them, any data inside a container is lost when the container is removed. For development, we want to create a volume that maps the client folder on our host machine to the client folder in the Docker container. We can do this with the volumes key, supplying it a list where each item takes the form <host-directory>:<container-directory>. Here, the host directory uses a relative path.

version: "3.8"
services:
    client:
        image: client-dev-i
        build:
            dockerfile: Dockerfile.dev
        container_name: client-dev-c
        volumes:
            - ./client:/client

Now, whenever anything changes inside the client directory on the host machine, the change is replicated in the Docker container. There are different types of volumes; the volume we just made is called a host volume (also known as a bind mount) as we mapped a directory on the host machine into the container. Another type of volume is a named volume, where we refer to a container directory by a name. We will use one for node_modules because we want to prevent the node_modules folder from our host environment from overwriting the node_modules inside the container.

version: "3.8"
services:
    client:
        image: client-dev-i
        build:
            dockerfile: Dockerfile.dev
        container_name: client-dev-c
        volumes:
            - ./client:/client
            - node_modules:/client/node_modules
volumes:
    node_modules:

The location of named volumes on the host is managed by Docker. We want to do this because the node_modules folder can cause issues if the container architecture is different from the host operating system used for development. For example, running npm i node-sass on a macOS host installs different native binaries than running npm i node-sass in a container that uses Ubuntu. Finally, let's set the NODE_ENV environment variable inside the container to development by using the environment key. Let's also map host port 8080 to container port 8080 with the ports key.

version: "3.8"
services:
    client:
        image: client-dev-i
        build:
            dockerfile: Dockerfile.dev
        container_name: client-dev-c
        volumes:
            - ./client:/client
            - node_modules:/client/node_modules/
        ports:
            - "8080:8080"
        environment:
            - NODE_ENV=development
volumes:
    node_modules:

Now when requests are made to localhost:8080 they will be forwarded to the container port 8080.
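
Once the development stack is running (we create a helper script for this below), we can verify the container, the named volume, and the port mapping. Note that Docker Compose prefixes the volume name with the project name, so the exact name will vary:

docker ps --filter "name=client-dev-c"   # the dev container should be listed
docker volume ls                         # look for a volume ending in _node_modules
curl http://localhost:8080               # the dev server should respond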

Creating a docker-compose.prod.yaml File

Now let's work with our production docker-compose file. We will call this file docker-compose.prod.yaml.

This file will be similar to our development file, except we will use different ports and different container and image names, build from Dockerfile.prod, and set NODE_ENV to production.

version: "3.8"
services:
    client:
        image: client-prod-i
        build:
            dockerfile: Dockerfile.prod
        container_name: client-prod-c
        ports:
            - "8080:80"
        environment:
            - NODE_ENV=production

As this is a production container, we won't set up any volumes with the source code.

Creating a Bash Script to Run docker-compose

Now, let's create a bash script to run our docker-compose files. Create a folder at the top level called bin and place a file inside called deploy.sh.

We now need to make this file executable with the command chmod u+x deploy.sh. Navigate to the location of deploy.sh and run this command in the terminal.

chmod u+x deploy.sh

At the top of the file, add the shebang #!/bin/bash to instruct the operating system to use bash as the command interpreter.

#!/bin/bash

This script will be run as ./deploy.sh prod|dev up|down. So, let's first check that the supplied arguments are correct.

#!/bin/bash
if [[ $1 = "prod" || $1 = "dev" ]] && [[ $2 = "down" || $2 = "up" ]]; then

If they are, then we will navigate to where the docker-compose files are located, print the command we are about to run to the console, and then run it.

#!/bin/bash
if [[ $1 = "prod" || $1 = "dev" ]] && [[ $2 = "down" || $2 = "up" ]]; then
    cd ..
    fileEnv="docker-compose.${1}.yaml"
    downOrUp=$2
    echo "Running docker-compose -f docker-compose.yaml -f $fileEnv $downOrUp"
    docker-compose -f docker-compose.yaml -f "$fileEnv" "$downOrUp"

If the command is incorrect, then we will print the proper usage of it to the user.

#!/bin/bash
if [[ $1 = "prod" || $1 = "dev" ]] && [[ $2 = "down" || $2 = "up" ]]; then
    cd ..
    fileEnv="docker-compose.${1}.yaml"
    downOrUp=$2
    echo "Running docker-compose -f docker-compose.yaml -f $fileEnv $downOrUp"
    docker-compose -f docker-compose.yaml -f "$fileEnv" "$downOrUp"
else
    echo 'Need to follow format ./deploy.sh prod|dev up|down'
fi

This bash script will run the command docker-compose -f docker-compose.yaml -f docker-compose.dev.yaml up if ./deploy.sh dev up is run, docker-compose -f docker-compose.yaml -f docker-compose.prod.yaml down if ./deploy.sh prod down is run, and so on.

What is the docker-compose Command?

The docker-compose command starts up our entire app using everything defined inside the docker-compose.yaml files. The -f flag is how we provide compose configuration files to the command. The order in which we supply these files matters: files supplied later override any conflicting settings from files supplied earlier. This is why we provide docker-compose.prod.yaml and docker-compose.dev.yaml after docker-compose.yaml.
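
To inspect what the merged configuration actually looks like, docker-compose can print it with the config subcommand:

# print the merged development configuration
docker-compose -f docker-compose.yaml -f docker-compose.dev.yaml config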

The command docker-compose up builds, (re)creates, and starts the services defined inside the docker-compose.yaml files. The command docker-compose down stops and removes the containers and networks that were created with docker-compose up; images and volumes are only removed if you pass the --rmi and -v flags. The bash script we created is a shortcut for writing these long commands.
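
Two flags worth knowing when running these commands directly: -d starts the containers in the background, and --build forces the images to be rebuilt before starting:

# start the dev services detached, rebuilding the image first
docker-compose -f docker-compose.yaml -f docker-compose.dev.yaml up -d --build
# stop and remove the containers and network
docker-compose -f docker-compose.yaml -f docker-compose.dev.yaml down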

Running the Application

Finally, let's run the application in both development and production mode. To run in development mode, run the command ./deploy.sh dev up and then visit localhost:8080.

./deploy.sh dev up

To stop the development application run the command ./deploy.sh dev down.

./deploy.sh dev down

To run the application in production mode, run ./deploy.sh prod up and then visit localhost:8080.

./deploy.sh prod up

To tear down the production environment, run ./deploy.sh prod down.

./deploy.sh prod down
