Distributed Systems With Node.js: Part 6 Deployments
Introduction
In this series, I will be working through this book on distributed systems using Node.js. The book is quite large and in-depth, but in these articles I will distill its big-picture pieces into bite-size tutorials and walkthroughs.
In this section, we will learn how to build an end-to-end CI/CD pipeline for a new example application we will create. We will also explore what it takes to publish an NPM package of our own, either to the public NPM registry or to one we host internally.
The code for this demo is found in this new repo, for which we will build an end-to-end deployment pipeline, and in this branch of our running repo, where we will explore the NPM registry and create our own NPM package.
CI/CD
CI/CD stands for Continuous Integration and Continuous Deployment: every pull request results in a new build, test run, and publication, so there is a constant stream of small, incremental releases. This contrasts with more manual release cadences, which bundle large groups of changes into a single, much less frequent update.
A good CI/CD system has a fully automated pipeline which triggers when a code change is published. We will be building a pipeline which will…
- Build our application
- Run unit and integration tests
- Ensure the code is properly covered by test cases
- Deploy the build artifact to a Web Server to receive production traffic
Pipeline Setup With Travis CI
The book uses Travis CI, which integrates with a GitHub repo we will create and automatically runs our deployment process. The actual application code is not the focus of this section, but the logic lives in this repo.
The first thing we need to do is set up our Travis configuration in a .travis.yml file:
language: node_js
node_js:
- "14"
install:
- npm install
script:
- PORT=0 npm test
This basic config tells Travis to use Node.js v14 and install our packages with npm install; it then runs npm test. We have not created any tests yet, so change the npm test script in package.json to trivially pass:
"scripts": {
"test": "echo \"Fake Test\" && exit 0"
}
We can configure Travis to run on each pull request made to the GitHub repo of our choice, and since our “tests” trivially pass, Travis will give us a green check at this point.
Testing
Now let’s have Travis do something useful by creating some unit and integration tests for it to run.
We will install a basic test framework called tape and update our scripts object to look like this.
"scripts": {
"test": "tape ./test/**/*.js"
}
This tells tape that all our test files live in the /test directory and end in .js; tape will run everything that matches that glob pattern.
Here is an example of some unit tests:
#!/usr/bin/env node
const test = require("tape");
const Recipe = require("../recipe");
test('Recipe#hydrate()', async (t) => {
const r = new Recipe(42);
await r.hydrate();
t.equal(r.name, "Recipe: #42", 'name equality');
});
test('Recipe#serialize()', (t) => {
const r = new Recipe(42);
t.deepLooseEqual(r, {id: 42, name: null}, 'serializes properly');
t.end();
});
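These tests require a ../recipe module. The real one lives in the demo repo, but a minimal sketch consistent with the assertions above might look like this:
// recipe.js: a minimal sketch consistent with the tests above
class Recipe {
  constructor(id) {
    this.id = id;
    this.name = null;
  }

  // Pretend to fetch the recipe's details from some upstream source
  async hydrate() {
    this.name = `Recipe: #${this.id}`;
  }
}

module.exports = Recipe;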
We will also create an integration test to exercise the end-to-end functionality of our application.
#!/usr/bin/env node
const { spawn } = require('child_process');
const test = require('tape');
const fetch = require('node-fetch');
const serverStart = () => new Promise((resolve, reject) => {
const server = spawn('node', ['../server.js'],
{
env: Object.assign({}, process.env, {PORT: 0}),
cwd: __dirname
}
);
server.stdout.once('data', async (data) => {
const message = data.toString().trim();
const url = /Server running at (.+)$/.exec(message)[1];
resolve({server, url});
});
});
test('GET /recipes/42', async (t) => {
const { server, url } = await serverStart();
const result = await fetch(`${url}/recipes/42`);
const body = await result.json();
t.looseEqual(body.id, 42);
server.kill();
});
test('GET /', async (t) => {
const { server, url } = await serverStart();
const result = await fetch(`${url}/`);
const body = await result.text();
t.equal(body, "Hello from Distributed Node.js!");
server.kill();
});
This spins up a server process, then parses the URL it prints on startup, which our tests use to send requests to the server.
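The server itself is not the focus here, but for context, here is a minimal sketch of what server.js could look like so the integration test works; the real implementation lives in the demo repo, and this version assumes only the built-in http module:
#!/usr/bin/env node
const http = require('http');
const Recipe = require('./recipe');

const HOST = process.env.HOST || '127.0.0.1';
const PORT = Number(process.env.PORT || 0);

const server = http.createServer(async (req, res) => {
  const match = /^\/recipes\/(\d+)$/.exec(req.url);
  if (req.url === '/') {
    res.end('Hello from Distributed Node.js!');
  } else if (match) {
    const recipe = new Recipe(Number(match[1]));
    await recipe.hydrate();
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify(recipe));
  } else {
    res.statusCode = 404;
    res.end();
  }
});

// The integration test parses this exact log line to discover the server's URL
server.listen(PORT, HOST, () => {
  const { address, port } = server.address();
  console.log(`Server running at http://${address}:${port}`);
});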
Now, when we make a code change, Travis will actually run our tests and ensure everything passes before giving us a green check.
Code Coverage
We are able to automatically detect and run tests when they are present, but how can we enforce that they are written in the first place? This is where code coverage comes in. With code coverage, we can make rules to fail a build if the percentage of new business logic lines covered by tests falls below a threshold, say 90% (the book says you should aim for 100%, but in my experience that is quite ambitious). With this mechanism, we can ensure our application is well tested and hopefully avoid shipping bugs to production.
We will use nyc to run code coverage and update our test script one more time.
"scripts": {
"test": "nyc tape ./test/**/*.js"
}
We must also define our coverage rules in a file named .nycrc:
{
"reporter": ["lcov", "text-summary"],
"all": true,
"check-coverage": true,
"branches": 100,
"lines": 100,
"functions": 100,
"statements": 100
}
The reporters grab the results of our tests and, in addition to a terminal summary, produce a nice HTML report we can view in the browser. We also enforce 100% coverage of all lines, branches, functions, and statements (good luck maintaining that standard in an actual production application!).
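With the lcov reporter, nyc also writes that HTML report to disk, by default under coverage/, so after a local npm test run you can open it in a browser:
# Default nyc output location; adjust if you configure a custom report directory
# (macOS `open`; use xdg-open on Linux)
open coverage/lcov-report/index.html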
Deploying
Travis can now build our app, run our tests, and generate code coverage to make sure our application is ready to go to production. There is only one step left: the actual deployment.
language: node_js
node_js:
- "14"
install:
- npm install
script:
- PORT=0 npm test
deploy:
  provider: script
  script: bash deploy-heroku.sh
  on:
    branch: main
env:
  global:
    - secure: "sFbAAeBP0ZHSfoD9Rr9Bpl6NGvOZ5EEG7myImBXIufVr+dmTIFJytXe4miWsVmzanmkGjB4n4CDd0Cpr+9LoE/xmJTCPc9iYNmV0h+tgMD4Np5jBPgF8g45Js5i0ikzdxEoUmlJXtLtkPTZdtrpUn2C/babiS8SdBCgt0JKDtl3YXm5jwdD9v94Z14Q4vicZOX8rn0a40mKQEyy0xybvUoDc+XmHUboBjjYJUmv5VXwXJVmVBTKVpbkJqh/ZxZ8AGFqe3AwHHwO1KfCMjRikHUGfoVFRXH+Bq8Z6CKOPGPXR0wKN9+RMcpK8GZAbgfE+WX3TX+oXh4aBaizF6jFbd+RXA/Xlh8FoObAF6RRywQbV8OoC928tYK9D8WJlchJVMX8ZT8ChPJIBYjJWcjeWcjsIDMd8LZn9YEtjEmYnY6E9E+6TO6GNwiZA/rDNHVWRqbhWBO9RmUVal23qUTQbPlCjOt/mYIlNvR55HDNiuZjwHVx80uzpK/CEn+cqsDX/FUSPihgHBRLKtM+ATNKU2cIAAipP646XQN0R5cEvNt1I5ElrRxmGh0uAonDFggEHJRZMdRZ9sxkcUNB7pPYmAIPzrrDxHVJHCWTe3sPfuOQ9jRPipYUVoJ7quoJ+GdHuNxOe7oGUKDotXeePyng49JeUNnnqkUZglYrkkkiYbKw="
Our updates add a deploy step that Travis runs when everything else passes; the logic lives in a separate Bash script and only runs on the main branch. We also use the travis CLI to encrypt our Heroku API key into the secure env entry; because it is encrypted against the repository's key pair, it is safe to commit to public version control if you wish.
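The secure value above was generated with the travis CLI; the command looks roughly like this, assuming you are logged in to the Heroku CLI so heroku auth:token can print a key:
# Encrypts HEROKU_API_KEY and appends the secure entry to .travis.yml
travis encrypt HEROKU_API_KEY=$(heroku auth:token) --add env.global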
Below is our deployment Bash script
#!/bin/bash
# Install the Heroku CLI and its container registry plugin
wget -qO- https://toolbelt.heroku.com/install-ubuntu.sh | sh
heroku plugins:install @heroku-cli/plugin-container-registry
# Authenticate (using the HEROKU_API_KEY env var), then build, push, and release the image
heroku container:login
heroku container:push web --app mattmacf98-distnode
heroku container:release web --app mattmacf98-distnode
This script logs in to Heroku and builds our application image using the Dockerfile shown below. It then pushes the image to Heroku's container registry under the app name we set up in Heroku's UI and releases it.
# Small production image based on Node.js 14 on Alpine
FROM node:14.0.0-alpine3.14
WORKDIR /srv
# Install only production dependencies using the lockfile
COPY package*.json ./
RUN npm ci --only=production
# Copy the application source and bind to all interfaces
COPY . .
ENV HOST=0.0.0.0
CMD ["node", "server.js"]
The Dockerfile is very simple: it pulls down a Node.js base image, installs our production dependencies, and runs our application.
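Before handing the image to Travis and Heroku, you can sanity-check it locally. A rough smoke test, assuming the server reads PORT from the environment (Heroku injects it at runtime), might be:
# Build the image and run it locally on a fixed port
docker build -t distnode .
docker run --rm -e PORT=1337 -p 1337:1337 distnode
# In another terminal, hit the root route
curl http://localhost:1337/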
Summary
- CI/CD pipelines should be automatic and run whenever code changes are pushed to source
- A good CI/CD pipeline will build, run unit and integration tests, collect coverage, and deploy a final build of the application
- Travis CI is a mature tool which makes setting up CI/CD pipelines on top of GitHub repos fast and easy
NPM Packages
Publishing an NPM package looks slightly different from deploying an application; the book goes over some of the things a good Node.js package should have.
Basic Hygiene of Packages
We talk about semantic versioning (<major>.<minor>.<patch>) and which changes require bumping each segment. Essentially, bug fixes bump the patch version, backwards-compatible features bump the minor version, and breaking changes (e.g. removing a previously exported function) bump the major version; for example, 1.4.2 becomes 1.4.3 for a bug fix, 1.5.0 for a new feature, and 2.0.0 for a breaking change.
The book also mentions using .gitignore or .npmignore to exclude files from the published tarball, keeping the final package as small and efficient as possible.
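For example, a hypothetical .npmignore for a project like this might exclude tests and CI configuration from the tarball:
test/
coverage/
.travis.yml
.nycrc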
We also touched on node_modules de-duplication: matching packages of the same version are hoisted out of subdirectories when they are shared, so the exact same code is not downloaded twice just because it appears in two separate branches of the dependency tree.
NPM Registries
We then dove into running our own internal NPM registry using Verdaccio, which we spun up in Docker:
docker run -it --rm --name verdaccio -p 4873:4873 verdaccio/verdaccio:4.8
We talked about prefixing the package name with a scope so that it does not clash with other packages (e.g. @scope/my-cool-package).
Then, we saw how to change the registry npm is using from the public one to our internal one and publish our package.
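The commands look roughly like this, assuming Verdaccio is running locally on its default port of 4873:
# Point npm at the local Verdaccio registry instead of registry.npmjs.org
npm set registry http://localhost:4873
# Create a user on the local registry, then publish the scoped package
npm adduser --registry http://localhost:4873
npm publish --registry http://localhost:4873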
We could then use the internal package in code like so
{
"name": "sample-npm-registry-app",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"@mattmacf98/leftish-padder": "^0.1.1"
}
}
// index.js: consume the internal package
console.log(require('@mattmacf98/leftish-padder')(10, 4, 0));
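For reference, the package itself could be as small as a single function. A hypothetical sketch consistent with the call above (padding 10 to a width of 4 with the character 0) might be:
// @mattmacf98/leftish-padder/index.js (hypothetical sketch)
// Pads `value` on the left with `padChar` until it reaches `width` characters
module.exports = (value, width, padChar) =>
  String(value).padStart(width, String(padChar));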
Summary
- Use Semantic Versioning for packages
- Prefix package names with a scope (@<scope>/<package-name>)
- If your package contains proprietary code, you can host your own internal npm registry (or pay for a paid npm plan that allows private packages)
Conclusion
In this section, we explored how to create an end-to-end automated deployment pipeline for our applications, including good testing practices and a publish-to-production step. We also explored what goes into creating our own npm packages, and even how to host our own internal npm registry.