Distributed Systems With Node.js: Part 5 Containers

Matthew MacFarquhar
Apr 30, 2024

Introduction

In this series, I will be working through this book on Distributed Systems using Node.js. The book is quite large and in depth, but in these articles I will be distilling the big picture pieces of the book into bite size tutorials and walk throughs.

In this section, we will learn how to containerize our applications so that they run the same regardless of the machine they are deployed to. Then, we will explore how to run and coordinate multiple dependent services together using simple orchestration with docker-compose.

The code for this demo is found in this repo at the commit here

Docker-izing Our Applications

The book talks a little about containers versus virtual machines and how containers have won the battle for running distributed, isolated programs: because containers share the host's kernel rather than each booting a full guest OS, running multiple isolated programs on the same machine carries far less CPU and memory overhead.

We then go through some basic Docker commands to run images that exist on Docker Hub and execute commands directly in the container’s shell. For example:

docker run -it --rm --name ephemeral ubuntu /bin/bash

This runs an Ubuntu image and drops us into an interactive bash shell inside it. When we stop the container, it is removed automatically thanks to the --rm flag.
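A quick way to see both the isolation and the cleanup in action (the commands below are my own sketch, not from the book):

```shell
# Start the ephemeral container and poke around, then exit the shell.
docker run -it --rm --name ephemeral ubuntu /bin/bash
#   cat /etc/os-release   # inside: reports Ubuntu, regardless of the host OS
#   exit

# Back on the host, the container is already gone thanks to --rm.
docker ps -a --filter name=ephemeral
```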

Recipe API Dockerfile

We then jump into defining our own Dockerfile for the recipe-api app so that we can build an image from it.

FROM node:18.0.0-alpine3.14 AS deps

WORKDIR /srv
COPY package*.json ./
RUN npm ci --only=production

FROM alpine:3.12 AS release

ENV V 18.0.0
ENV FILE node-v$V-linux-x64-musl.tar.xz

RUN apk add --no-cache libstdc++ && apk add --no-cache --virtual .deps curl \
&& curl -fsSLO --compressed "https://unofficial-builds.nodejs.org/download/release/v$V/$FILE" \
&& tar -xJf $FILE -C /usr/local --strip-components=1 \
&& rm -f $FILE /usr/local/bin/npm /usr/local/bin/npx \
&& rm -rf /usr/local/lib/node_modules \
&& apk del .deps

WORKDIR /srv
COPY --from=deps /srv/node_modules ./node_modules
COPY . .

EXPOSE 1337
ENV HOST 0.0.0.0
ENV PORT 1337
CMD ["node", "producer-http-basic.js"]

We use a larger image (node:18.0.0-alpine3.14) to do all of the dependency installation required for our application, then use the smaller alpine:3.12 for our release image, which strips away the extra tooling the node image ships with that we do not need at runtime.

We set up environment variables for the version of Node we will use and the expected file name of the tarball of that version we will download.

Next, we have a rather large RUN command which downloads libstdc++ (a C++ library Node relies on) along with our desired version of Node, then removes a few things that come with the Node install which we do not want or need (npm, npx, and the bundled node_modules).

Finally, we copy the node_modules dependencies we installed in the deps stage into our release image and start the recipe-api on port 1337, which we also expose to callers.
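One companion file worth pairing with that final COPY . .: everything in the build context ends up in the image, so a .dockerignore file (my addition, not from the book's repo) keeps local artifacts out of it:

```
node_modules
npm-debug.log
.git
Dockerfile*
```

This matters here because we want the node_modules built inside the deps stage, not whatever happens to be on the host.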

Building & Running a Docker Image

Now that we have defined a Dockerfile, we can use it to create our Docker image.

docker build -t mattmacf98/recipe-api:v0.0.1 .  # <repository>/<name>:<version>

and run our image using

docker run --rm --name recipe-api-1 -p 8000:1337 mattmacf98/recipe-api:v0.0.1

This will spin up a container called recipe-api-1 which will be running our newly created image for our recipe-api app. We can now interact with our recipe api app the same as we always would via localhost:8000.
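For instance, assuming the recipe-api routes from the earlier parts of this series (the route below is illustrative, not confirmed from this commit):

```shell
# Host port 8000 forwards to container port 1337 inside the container.
curl http://localhost:8000/recipes/42
```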

With this process, we can build images of our applications and publish them to Docker Hub, allowing our cloud instances or other developers to create a container and run our application with the assurance that it will run the same on their machine as it did on ours.
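Publishing is just a login and a push (assuming you have a Docker Hub account matching the mattmacf98 repository prefix):

```shell
# Authenticate against Docker Hub, then push the tagged image.
docker login
docker push mattmacf98/recipe-api:v0.0.1

# Anyone can now run it without building locally:
docker run --rm -p 8000:1337 mattmacf98/recipe-api:v0.0.1
```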

Summary

  • We can create Dockerfiles for our apps to detail what software needs to be installed, what environment variables need to be set, and what command needs to be run to start our app
  • We can run images, whether built locally or pulled from an online registry, as containers that behave the same no matter what machine they are loaded onto

Orchestration

We should now have a good grasp on how to containerize applications and how it makes deployment across various types of machines safe and easy. But what if we wanted to spin up multiple apps at the same time, like our web-api and recipe-api? Would we need to run a separate docker run in another terminal for each service? This is where orchestration steps in. We will delve into advanced orchestration with Kubernetes in a later article; for now, docker-compose gives us an easy way to get most of what we want done.

Creating Docker Images

In order to orchestrate running multiple Docker images, we are going to need multiple Docker images. In this example we will create a docker-compose.yml that runs our web-api, recipe-api, and a Zipkin service.
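The compose file assumes a project layout roughly like this (sketched from the build contexts it references, so take the exact file names as an assumption):

```
.
├── docker-compose.yml
├── recipe-api/
│   ├── Dockerfile-zipkin
│   └── producer-http-zipkin.js
└── web-api/
    ├── Dockerfile-zipkin
    └── consumer-http-zipkin.js
```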

Recipe API

FROM node:18.0.0

WORKDIR /srv
COPY package*.json ./
RUN npm ci --only=production
COPY . .
CMD ["node", "producer-http-zipkin.js"]

I have greatly simplified the Dockerfile for our recipe-api (at the cost of the image being bloated with items from the node base image that we do not need).

Web API

FROM node:18.0.0
WORKDIR /srv
COPY package*.json ./
RUN npm ci --only=production
COPY . .
CMD ["node", "consumer-http-zipkin.js"]

This image is almost identical to the recipe-api one and creates our web-api.

Compose

Our docker-compose.yml will organize running our custom app images alongside a publicly available Zipkin image.

version: "3.7"
services:
  zipkin:
    image: openzipkin/zipkin-slim:2.19
    ports:
      - "127.0.0.1:9411:9411"
  recipe-api:
    build:
      context: ./recipe-api
      dockerfile: Dockerfile-zipkin
    ports:
      - "127.0.0.1:4000:4000"
    environment:
      HOST: 0.0.0.0
      ZIPKIN: zipkin:9411
    depends_on:
      - zipkin
  web-api:
    build:
      context: ./web-api
      dockerfile: Dockerfile-zipkin
    ports:
      - "127.0.0.1:3000:3000"
    environment:
      TARGET: recipe-api:4000
      ZIPKIN: zipkin:9411
      HOST: 0.0.0.0
    depends_on:
      - zipkin
      - recipe-api

We have three services here:

zipkin: runs on port 9411 using the zipkin-slim image, which we pull from Docker Hub

recipe-api: runs on port 4000 and uses our custom recipe-api Dockerfile to build and run the image. It also declares a dependency on the zipkin service, so compose starts zipkin first (note that depends_on only orders startup; it does not wait for the service to be fully ready).

web-api: runs on port 3000 and uses our custom web-api Dockerfile to build and run the image. It declares dependencies on both the zipkin and recipe-api services, so it is started last.
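With the file in place, a single command builds the two custom images and starts all three services in dependency order (the exact routes below are illustrative):

```shell
# Build and start everything defined in docker-compose.yml.
docker-compose up --build

# In another terminal: the web-api answers on 3000, and the
# Zipkin UI at http://localhost:9411 shows traces flowing through.
curl http://localhost:3000/
```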

Summary

  • We can use docker-compose to perform some basic orchestration of Docker containers
  • The images used in docker-compose can be built locally from Dockerfiles or pulled from a published registry

Conclusion

In this section, we explored how to containerize our applications, making them readily available to deploy to the cloud. We also briefly looked at orchestration, which allows us to easily run multiple services at once on the same machine. In the next section, we will make use of our newfound Docker skills to build an end-to-end deployment pipeline with tests that run on PR and a final phase that packages our app using Docker and hosts it on Heroku. We will also touch on the steps required to deploy a package to an npm registry.


Matthew MacFarquhar

I am a software engineer working for Amazon living in SF/NYC.