How to deploy a Node.js application with Docker

Introduction

Node.js is a JavaScript runtime that has become popular in recent years for building server-side applications.

This tutorial shows how to deploy a Node.js application to a cloud server via Docker, Docker Hub, and Docker Compose.

Prerequisites
  • This tutorial assumes that you have Docker installed on your local system. If you don't have it, you can find instructions on how to install it in the official documentation.
  • You will also need a cloud server running a Linux distribution, preferably Ubuntu 24.04. If you are using another distribution, you may need to adapt the Docker installation steps in Step 4 accordingly.
  • Some steps also require a Docker Hub account (free) to upload the Docker image for the application.
  • If you have no previous Docker experience, that's okay: this tutorial is very basic and explains the core concepts behind what we're doing.
  • You need a Node.js application that you can deploy.
About Docker

In case you're just getting started with Docker, here are some terms worth reviewing to make sure we're on the same page.

  • Images: In Docker, images are "snapshots" or templates of a file system and contain everything needed to run an application.
  • Containers: These are actual running instances of the application. They are created by taking a template (an image) and turning it into something that can be started and has state.
  • Layers are the elements that make up a Docker image. Each layer is built on top of the previous one, enabling a feature called layer caching: when a layer changes, only that layer and the ones above it need to be rebuilt or re-downloaded, not the whole image.
  • Registries are a place where you upload (push) images to make them available to the world or to those with access credentials. In this tutorial, we are going to use Docker Hub, but there are also alternatives provided by GCP, AWS, Azure, GitHub, and others.
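
To make these terms concrete, here are a few everyday Docker commands that map onto them (hello-world is Docker's official test image):

docker images          # list the images (templates) stored locally
docker ps              # list the running containers (live instances)
docker run hello-world # pull an image from a registry and start a container from it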

Step 1 – Create a Dockerfile

Create a file called Dockerfile with the following content in the root directory of your Node.js project:

FROM node:20.17
ENV NODE_ENV=production
WORKDIR /app
COPY package*.json .
RUN npm ci
COPY . .
EXPOSE 8080
CMD [ "node", "src/index.js" ]

A Dockerfile is where you put the instructions that allow Docker to build an image. Each instruction creates a layer, which is a modification to the image's file system.

In this case, we build our image starting from a template, sometimes called a base image, which in this case is node:20.17. This is a Docker Official Image, and you can find more information about it on Docker Hub.

The next instruction sets the NODE_ENV environment variable to production. The main effect is that the npm ci run below skips development dependencies, and many modules also enable production optimizations when NODE_ENV is set.

With the WORKDIR instruction, we set the current directory to /app, which in turn becomes the directory in which the following instructions are executed.

The COPY package*.json . line copies the package.json and package-lock.json files to the /app folder of the Docker image file system. Note that the trailing dot is required to indicate the current directory.

Now we use the RUN instruction to install production dependencies using the npm ci command (ci stands for clean install and is designed for use in automated environments; it requires a package-lock.json file).

One thing to note is that so far we have only copied the package*.json files into the build, rather than the entire project directory. This lets us take advantage of Docker layer caching: if the dependencies are unchanged, the cached layers are reused without rebuilding.

The following line (COPY . .) copies the remaining project files into the image.

Optionally, we can declare that the container listens on a specific network port so that a web application can be reached on it. Note that the EXPOSE instruction does not actually publish the port: as the documentation says, it "functions as a type of documentation between the person who builds the image and the person who runs the container about which ports are intended to be published".
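
To actually publish the port when running a container directly, you pass the -p option to docker run (using the image name we'll build in Step 2):

docker run -p 8080:8080 myproject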

Finally, the last instruction specifies the command that Docker should use to start the application when the container starts. In this case, we assume that the application's entry point is the src/index.js file.
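
If you don't have an application at hand, here is a minimal sketch of what src/index.js could look like: just a plain Node.js HTTP server listening on the port we exposed (this file is a placeholder, not part of any specific project):

// src/index.js — minimal placeholder app (hypothetical)
const http = require('http');

const port = process.env.PORT || 8080; // matches the EXPOSE 8080 instruction

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Docker!\n');
});

server.listen(port, () => {
  console.log(`Listening on port ${port}`);
});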

It's usually a good idea to also create a file called .dockerignore alongside your Dockerfile. This ensures that when you run COPY . ., unnecessary local files aren't copied into the image:

.git
Dockerfile
node_modules

In this case, we don't want local directories like .git or node_modules to end up in the image we build.

Step 2 – Create the image

Now that we have a Dockerfile, we can tell Docker to use it to build an image.

The basic command to do this looks like the following and should be run in the main project folder:

docker build -t myproject .

If the build fails at the RUN npm ci step (shown in the output as "/bin/sh -c npm ci"), the most likely cause is that your project has no package-lock.json file, which npm ci requires. In that case, replace npm ci in the Dockerfile with npm install and try again.

The -t option specifies the image name, in this case myproject. The dot at the end of the line tells Docker to use the current directory as the build context, which is where it looks for the Dockerfile.

Note: The first time you run the build, it will take a while because Docker needs to download all the layers of the base image (in this case Node.js 20.17).

Since we plan to upload this image to the online Docker Hub registry (to access it from our server), we need to name the image using a specific naming convention.

So the above command would look like this:

docker build -t username/myproject:latest .

Where username is your Docker Hub username and latest is the image tag. An image can have multiple tags, so sometimes you'll see a workflow similar to this:

docker build -t myproject .
docker tag myproject username/myproject:latest
docker tag myproject username/myproject:20240904

These commands create an image and then tag it with both latest and 20240904 (the date this tutorial was last updated).

Docker Hub does not delete old images by default, so this lets you keep a history of all the images you have pushed to the registry. By convention, the latest tag points to the most recently built image, while older images remain reachable through their date tags.
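
You can check which tags exist locally before pushing:

docker image ls username/myproject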

Step 3 – Push the image

Now that we have the image, we need to push it to the registry. First of all, run the following command to make sure that your Docker instance is authenticated with Docker Hub:

docker login

Then run docker push with the --all-tags option to upload the image along with all of its tags:

docker push --all-tags username/myproject

If your application is small, this command should complete quickly, as it only needs to upload the layers related to the Node.js application and its JavaScript dependencies.

Whenever you build a new version of the image, run the push command again so that it is uploaded to Docker Hub.
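
As a shorthand, you can apply multiple tags in a single build and push them all at once (tag names as above):

docker build -t username/myproject:latest -t username/myproject:20240904 .
docker push --all-tags username/myproject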

Step 4 – Install Docker on Ubuntu 24.04

Now we can move on to installing Docker and Docker Compose on the server. As mentioned in the prerequisites, the assumption here is that you have an Ubuntu 24.04 server already set up.

First of all, installing Docker requires some system dependencies which can be installed with the following commands:

sudo apt-get update
sudo apt-get install ca-certificates curl

Now add the official Docker GPG key and configure a custom apt repository:

sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Finally, update the apt package index again and install Docker Community Edition:

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

The above command also installs Docker Compose, a tool that greatly simplifies the management of containers and their lifecycle.

The last useful step involves adding your Ubuntu user to the docker group so that you can run Docker commands without sudo.

This can be easily done with the following command (replace myuser with your actual username):

sudo gpasswd -a myuser docker
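
The new group membership only takes effect in new login sessions, so either log out and back in, or refresh it in the current shell:

newgrp docker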

Make sure everything went well by running the following commands:

docker --version
docker ps
docker compose version

If you don't see any errors or warnings, you're good to go.

Step 5 – Run the container with Docker Compose

Create a file called docker-compose.yml with the following content on your server:

services:
  myproject:
    container_name: 'myproject'
    image: 'username/myproject'
    restart: unless-stopped

This is a very basic Docker Compose file that configures a single container named myproject, based on the username/myproject image from Docker Hub. If you don't specify a tag, it defaults to latest, but you can also set a specific one if you want:

services:
  myproject:
    container_name: 'myproject'
    image: 'username/myproject:20240904'
    restart: unless-stopped

Finally, the restart attribute tells Docker to restart the container automatically (for example after a crash or a server reboot) unless it is manually stopped.

If you run the following Compose command now, Docker will pull the image from the registry and, hopefully, start your application:

docker compose -f docker-compose.yml up

This command creates a container and runs it. The container's output is captured by Docker and shown in your console. Press CTRL + C and wait a few seconds for the container to stop.

If everything went well, you are now ready to run the container as a daemon, so that it keeps running in the background until stopped. This is done by adding the -d (detached) option to the command:

docker compose -f docker-compose.yml up -d
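
Once the container is running in the background, a few companion commands come in handy:

docker compose -f docker-compose.yml ps      # check the container's status
docker compose -f docker-compose.yml logs -f # follow the application logs
docker compose -f docker-compose.yml down    # stop and remove the container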

Boom, done! Your Node.js app is now running on the server.

Be sure to take a quick look at the Compose file reference documentation, where you can find useful features like mapping network ports between the server and the container. Here's a quick example that maps external port 80 to internal port 8080:

services:
  myproject:
    container_name: 'myproject'
    image: 'username/myproject'
    restart: unless-stopped
    ports:
      - '80:8080'
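
After recreating the container with this mapping, you can sanity-check it from the server itself (assuming the app answers plain HTTP requests, like the sample index.js above):

curl http://localhost/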

Step 6 – Deploy a new version

Let's say you need to release a change to your application. Unless you have automated builds set up, you'll need to repeat steps 2 and 3 so that a new image appears on Docker Hub.

Then, on your server, you need to manually pull the new image, like this:

docker compose -f docker-compose.yml pull

And restart the container with the new image:

docker compose -f docker-compose.yml up -d --force-recreate
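
Old image versions accumulate on the server over time; you can optionally reclaim the disk space they use by removing dangling images:

docker image prune -f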

Conclusion

Great, you did it! That was a basic introduction to deploying a Node.js application on Ubuntu 24.04 using Docker, Docker Hub, and Docker Compose.

We saw how to write a simple Dockerfile, how to build the image, push it, and deploy it to the server.

There's more to Docker than this tutorial covers, so be sure to take a look at the Docker and Docker Compose documentation to learn more about the concepts and features.
