Working Through Rust Web Programming pt 7: Deploying our App

Matthew MacFarquhar
13 min read · Apr 9, 2024

Introduction

In this series, I will be working through this book on Rust Web Programming. The book is quite large and in-depth, but in these articles I will distill the big-picture pieces of the book into bite-size tutorials and walkthroughs.

In this section, we will craft a build server that uses an EC2 instance to create and publish images from our front-end and back-end repos to Docker Hub. Then, we will create a separate project that pulls those images down onto an EC2 instance and runs our service in the cloud.

The backend work will be tracked in this GitHub project https://github.com/mattmacf98/web_app. This specific work is achieved in this commit.

The front-end state corresponds to this commit.

We also create new build server and deployment server repos.

Build Server

Our first step is to get our front_end and web_app repos onto Docker Hub so they can be easily pulled down when it is time for them to run on our deployment server.

Create Dockerfiles for Our Repos

To create Docker images on Docker Hub, we must first write Dockerfiles that tell Docker how to build the images.

This is the Dockerfile for the front end

FROM node:17.0.0

WORKDIR .
COPY . ./

# install the static file server, then the app dependencies, then build the production bundle
RUN npm install -g serve
RUN npm install
RUN npm run react-build

# serve the production build on port 4000
EXPOSE 4000
CMD ["serve", "-s", "build", "-l", "4000"]

It is very simple: starting from a Node base image, it copies in our front-end code, builds the production bundle, and serves it on port 4000.
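If you want to sanity-check the image before handing it to the build server, you can build and run it locally (a quick sketch; the tag name front_end is an assumption):

# from the front_end repo root
docker build -t front_end .
docker run --rm -p 4000:4000 front_end
# then open http://localhost:4000 to verify the production build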

Our web_app Dockerfile looks like this

FROM rust:1.74

# native build dependencies plus the diesel CLI for database work
RUN apt-get update -yqq && apt-get install -yqq cmake g++
RUN cargo install diesel_cli --no-default-features --features postgres

COPY . .
WORKDIR .

# compile a release build of the app
RUN cargo clean
RUN cargo build --release

# keep only the compiled binary; drop the build artifacts, sources, and baked-in config
RUN cp ./target/release/web_app ./web_app
RUN rm -rf ./target
RUN rm -rf ./src
RUN rm config.yml
RUN chmod +x ./web_app

EXPOSE 8000

CMD ["./web_app", "config.yml"]

It is also quite simple: it installs some dependencies, builds our web_app, removes some artifacts we do not need, and then runs our Rust web_app executable with config.yml as an argument. You may notice we remove config.yml during the image build; this is so it does not conflict with the updated config.yml we upload during deployment to EC2, and that newer config.yml is the one referenced in our CMD.
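As a sanity check, you can build the image locally and mount a config over the deleted one (a sketch; the /config.yml mount target matches the CMD above, and the app will still expect Postgres and Redis to be reachable per that config):

# from the web_app repo root
docker build . -t rust_app
docker run --rm -p 8000:8000 \
    -v "$(pwd)/config.yml:/config.yml" \
    rust_app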

Deploying to AWS

The book uses AWS as its preferred cloud provider, but the general deployment practice we use is cloud-provider agnostic. To use AWS, we need to generate a PEM key so that we can SSH into our instances. We must also create an IAM user with admin access so we can run aws configure and allow Terraform to use the admin user's credentials to perform infrastructure deployments.
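The one-time setup looks roughly like this (a sketch; the key name remotebuild and the key path match what the Terraform file and scripts below expect):

# enter the IAM admin user's access key, secret, and default region
aws configure

# create the PEM key we will SSH with, and lock down its permissions
aws ec2 create-key-pair --key-name remotebuild \
    --query 'KeyMaterial' --output text > ~/.ssh/keys/remotebuild.pem
chmod 400 ~/.ssh/keys/remotebuild.pem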

Terraform

Now we can create a Terraform file, which lets us describe the infrastructure we want spun up in code instead of by “pointing and clicking” in the AWS console.

terraform {
  required_version = ">= 1.1.3"
}

provider "aws" {
  version = ">= 2.28.1"
  region  = "us-west-2"
}

resource "aws_security_group" "instance_sg" {
  name        = "instance-sg"
  description = "Security group for EC2 instance"

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "build_server" {
  ami             = "ami-0a70b9d193ae8a799"
  instance_type   = "t2.medium"
  key_name        = "remotebuild"
  user_data       = file("server_build.sh")
  security_groups = [aws_security_group.instance_sg.name]

  tags = {
    Name = "to-do build server"
  }

  # root disk
  root_block_device {
    volume_size           = "15"
    volume_type           = "gp2"
    delete_on_termination = true
  }
}

output "ec2_global_ips" {
  value = [aws_instance.build_server.*.public_ip]
}

My configuration differs a little from the book's, possibly because AWS has changed its default setup since the book was written.

In Terraform, we create a security group which allows anyone with our PEM key to SSH into our EC2 (port 22) and allows traffic from our EC2 to egress anywhere, so the EC2 can make calls to the outside world. Then we create our EC2 under that security group, give it some storage, and tell it to run server_build.sh on startup. Finally, we output the public IP of the EC2 instance once it is built; we will use this IP to copy over some needed files and run commands later in our build process.

server_build.sh

#!/bin/bash

sudo yum update -y
sudo yum install git -y
sudo yum install cmake -y
sudo yum install tree -y
sudo yum install vim -y
sudo yum install tmux -y
sudo yum install make automake gcc gcc-c++ kernel-devel -y

sudo yum install -y https://download.postgresql.org/pub/repos/yum/10/redhat/rhel-7-x86_64/pgdg-centos10-10-2.noarch.rpm
sudo yum install -y postgresql10-server postgresql10-devel
sudo yum install -y epel-release


sudo yum install -y docker
sudo systemctl start docker
sudo usermod -aG docker ec2-user

sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose

echo "FINISHED" > home/ec2-user/output.txt

Our server_build.sh just installs a bunch of prerequisites we need for image building, and then writes FINISHED to a file called output.txt. We can continuously scan for this file to detect when the prerequisites have finished installing before trying to build our images.

Build Script

Now we can run a Python script to deploy our infrastructure and start building our images for Docker Hub

from subprocess import Popen
from pathlib import Path
import json
import time
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--u', action='store', help='Docker username', type=str, required=True)
parser.add_argument('--p', action='store', help='Docker password', type=str, required=True)
args = parser.parse_args()

DIRECTORY_PATH = Path(__file__).resolve().parent
print(DIRECTORY_PATH)

init_process = Popen(f'cd "{DIRECTORY_PATH}" && terraform init', shell=True)
init_process.wait()

apply_process = Popen(f'cd "{DIRECTORY_PATH}" && terraform apply', shell=True)
apply_process.wait()

produce_output = Popen(f'cd "{DIRECTORY_PATH}" && terraform output -json > "{DIRECTORY_PATH}/output.json"', shell=True)
produce_output.wait()  # make sure output.json is fully written before we read it

with open(f'{DIRECTORY_PATH}/output.json', "r") as file:
    data = json.loads(file.read())

server_ip = data["ec2_global_ips"]["value"][0][0]

print("waiting for server to be built")
time.sleep(5)
print("attempting to enter server")

build_process = Popen(f'cd "{DIRECTORY_PATH}" && sh ./run_build.sh {server_ip} {args.u} {args.p}', shell=True)
build_process.wait()

destroy_process = Popen(f'cd "{DIRECTORY_PATH}" && terraform destroy', shell=True)
destroy_process.wait()

Our script takes in our Docker Hub username and password so that we can publish our images. We run our Terraform commands to deploy our EC2, then run run_build.sh (this is where most of the actual image building happens), and finally spin down our EC2 instance.
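Running it looks like this (the script filename is an assumption; use whatever you named it):

python3 build.py --u <docker_hub_username> --p <docker_hub_password>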

NOTE: I have often noticed that run_build.sh fails to SSH into the EC2 because the instance's host key is not yet in known_hosts. I usually run the Python script, exit after the EC2 is deployed, SSH into the EC2 manually to register the host key, and then re-run the script. This is not ideal for a production build server and could probably be fixed by pre-registering the host key, or by configuring an Elastic IP so the EC2's IP is not different each time we run this.
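A lighter-weight fix (my suggestion, not from the book) is to register the host key non-interactively before run_build.sh connects:

# add the fresh instance's host key so ssh/scp will not prompt
ssh-keyscan -H "$SERVER_IP" >> ~/.ssh/known_hosts
# alternatively, pass -o StrictHostKeyChecking=accept-new to each ssh/scp call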

Below is the workhorse of our build process

#!/usr/bin/env bash

SCRIPTPATH="$( cd "$(dirname "$0")"; pwd -P)"
cd "$SCRIPTPATH"

echo "making web_app"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" "mkdir web_app"

echo "copying web_app resources"
scp -i "~/.ssh/keys/remotebuild.pem" -r ../web_app/src ec2-user@"$1":/home/ec2-user/web_app/src
scp -i "~/.ssh/keys/remotebuild.pem" -r ../web_app/Cargo.toml ec2-user@"$1":/home/ec2-user/web_app/Cargo.toml
scp -i "~/.ssh/keys/remotebuild.pem" -r ../web_app/config.yml ec2-user@"$1":/home/ec2-user/web_app/config.yml
scp -i "~/.ssh/keys/remotebuild.pem" -r ../web_app/Dockerfile ec2-user@"$1":/home/ec2-user/web_app/Dockerfile

echo "downloading rust"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
curl https://sh.rustup.rs -sSf | bash -s -- -y
until [ -f ./output.txt ]
do
    sleep 2
done
echo "File Found"
EOF
echo "Rust has been initialized"

echo "logging into Docker"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
echo $3 | docker login --username $2 --password-stdin
EOF

echo "building Rust Docker image"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
cd web_app
docker build . -t rust_app
docker tag rust_app:latest mattmacf98/to_do_actix:latest
docker push mattmacf98/to_do_actix:latest
EOF
echo "web app Docker image built"

echo "copying React app"
rm -rf ../front_end/node_modules/
rm -rf ../front_end/dist/
rm -rf ../front_end/build/
scp -i "~/.ssh/keys/remotebuild.pem" -r ../front_end/ ec2-user@"$1":/home/ec2-user/front_end

echo "installing node on build server"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.37.2/install.sh | bash
. ~/.nvm/nvm.sh
nvm install --lts
EOF

echo "building front_end on server"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
cd front_end
docker build . -t front_end
docker tag front_end:latest mattmacf98/to_do_react:latest
docker push mattmacf98/to_do_react:latest
EOF
echo "front end docker image built"

Our run_build.sh will

  1. Copy over the necessary web_app files
  2. Download Rust
  3. Wait for server_build.sh to finish by detecting output.txt
  4. Log into Docker Hub
  5. Build our web_app Docker image and push it to Dockerhub
  6. Copy our front_end app
  7. Install Node
  8. Build our front_end Docker image and push it to Dockerhub

Now we have successfully built a pipeline to build and publish our repos as Docker images that can be pulled down from anywhere.
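Anyone with access to the Docker Hub repos can now pull the published images:

docker pull mattmacf98/to_do_actix:latest
docker pull mattmacf98/to_do_react:latest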

Summary

  • Create Dockerfiles that describe how to build images of our repos
  • Use AWS to get a PEM key for SSHing and IAM credentials to deploy infrastructure
  • Use Terraform to declare what we will deploy to AWS, and run any scripts needed immediately on EC2 creation using user_data
  • Use Python to glue together the creation of the build server and the bash script that copies our project files and Dockerfiles to the EC2, then builds and publishes the Docker images

Deployment Server

Now that we have our images on Docker Hub, we can use them to spin up our web app on an EC2 instance.

Setting up our Infrastructure

terraform {
  required_version = ">= 1.1.3"
}

provider "aws" {
  version = ">= 2.28.1"
  region  = "us-west-2"
}

resource "aws_security_group" "db_instance_sg" {
  name        = "db_instance_sg-sg"
  description = "Security group for RDS instance"

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_db_instance" "main_db" {
  instance_class         = "db.t3.micro"
  allocated_storage      = 5
  engine                 = "postgres"
  username               = var.db_username
  password               = var.db_password
  vpc_security_group_ids = [aws_security_group.db_instance_sg.id]
  db_name                = "to_do"
  publicly_accessible    = true
  skip_final_snapshot    = true

  tags = {
    Name = "to-do production database"
  }
}

resource "aws_security_group" "instance_sg" {
  name        = "instance-sg"
  description = "Security group for EC2 instance"

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_instance" "deployment_server" {
  ami             = "ami-0a70b9d193ae8a799"
  instance_type   = "t2.medium"
  key_name        = "remotebuild"
  user_data       = file("server_build.sh")
  security_groups = [aws_security_group.instance_sg.name]

  tags = {
    Name = "to-do deployment server"
  }

  # root disk
  root_block_device {
    volume_size           = "15"
    volume_type           = "gp2"
    delete_on_termination = true
  }
}

output "ec2_global_ips" {
  value = [aws_instance.deployment_server.*.public_ip]
}

output "db_endpoint" {
  value = aws_db_instance.main_db.*.endpoint
}

Our Terraform file creates our EC2 instance with a security group that allows SSH access, plus ingress on port 80 so we can hit the EC2 with HTTP traffic.

In our deployed service, we extract our Postgres database out of the EC2 into its own RDS instance. That way, multiple web_app EC2s can use the same RDS instance, so our customers receive the same data no matter which web_app EC2 they interact with. We configure the Postgres instance with a provided username and password for accessing the database, and we expose port 5432 on the RDS instance since that is Postgres's default port.

We also output the Postgres DB endpoint so we can tell our web_app where it should send its queries.

Configuring our Docker Assets

As mentioned, the docker-compose file we spin up on our EC2 will no longer include a Postgres server; it now looks like this.

services:
  nginx:
    container_name: 'nginx-rust'
    image: 'nginx:latest'
    ports:
      - "80:80"
    links:
      - rust_app
      - front_end
    volumes:
      - ./nginx_config.conf:/etc/nginx/nginx.conf
  redis_production:
    container_name: 'to-do-redis-production'
    image: 'redis:5.0.5'
    ports:
      - '6379:6379'
    volumes:
      - ./data/redis:/tmp
  rust_app:
    container_name: 'rust_app'
    image: "mattmacf98/to_do_actix:latest"
    restart: always
    ports:
      - "8000:8000"
    links:
      - redis_production
    expose:
      - 8000
    volumes:
      - ./rust_config.yml:/config.yml
  front_end:
    container_name: 'front_end'
    image: "mattmacf98/to_do_react:latest"
    restart: always
    ports:
      - "4000:4000"
    expose:
      - 4000

We create an nginx entry point to receive traffic on port 80 (HTTP), and we also spin up a Redis service. We run our Rust web_app and front-end services from the Docker Hub images we published with our build server.

Our web_app has a volume which mounts rust_config.yml over the container's config.yml, allowing us to inject config into our Dockerized web_app.
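For reference, a minimal rust_config.yml might look like this (a sketch: DB_URL is the one key the deployment script below actually injects; any other keys depend on earlier chapters):

DB_URL: postgresql://<user>:<password>@<rds-endpoint>/to_do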

The new piece in this compose file is nginx, which handles routing between the front_end and the backend when a request comes in.

worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 512;
}

http {
    server {
        listen 80;

        location /v1 {
            proxy_pass http://rust_app:8000/v1;
        }
        location / {
            proxy_pass http://front_end:4000;
        }
    }
}

Here our nginx listens on port 80 and routes requests whose path starts with /v1 to our backend web_app, and everything else to the front_end. This is quite a simple nginx configuration, but it is a nice introduction to what nginx does.
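Once the stack is up, a quick smoke test from your machine shows the split (assuming SERVER_IP holds the EC2's public IP):

curl -i "http://$SERVER_IP/"               # nginx proxies this to front_end:4000
curl -i "http://$SERVER_IP/v1/user/create" # anything under /v1 goes to rust_app:8000 (this route expects a POST)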

Building the Service

Similarly to the build server, our deployment server has a Python script which glues the infrastructure deployment together with our actual application spin-up script.

from subprocess import Popen
from pathlib import Path
import json
import time
import argparse
import yaml

parser = argparse.ArgumentParser()
parser.add_argument('--u', action='store', help='Docker username', type=str, required=True)
parser.add_argument('--p', action='store', help='Docker password', type=str, required=True)
args = parser.parse_args()

DIRECTORY_PATH = Path(__file__).resolve().parent

with open("./database.json") as json_file:
    db_data = json.load(json_file)

params = f' -var="db_password={db_data["password"]}" -var="db_username={db_data["user"]}"'

init_process = Popen(f'cd "{DIRECTORY_PATH}" && terraform init', shell=True)
init_process.wait()

apply_process = Popen(f'cd "{DIRECTORY_PATH}" && terraform apply' + params, shell=True)
apply_process.wait()

produce_output = Popen(f'cd "{DIRECTORY_PATH}" && terraform output -json > "{DIRECTORY_PATH}/output.json"', shell=True)
produce_output.wait()  # make sure output.json is fully written before we read it

with open(f'{DIRECTORY_PATH}/output.json', "r") as file:
    data = json.loads(file.read())

database_url = f"postgresql://{db_data['user']}:{db_data['password']}@{data['db_endpoint']['value'][0]}/to_do"
with open("./database.txt", "w") as file:
    file.write("DATABASE_URL=" + database_url)

with open("./rust_config.yml") as file:
    config = yaml.load(file, Loader=yaml.FullLoader)

config["DB_URL"] = database_url

with open("./rust_config.yml", "w") as file:
    yaml.dump(config, file, default_flow_style=False)

server_ip = data["ec2_global_ips"]["value"][0][0]

print("waiting for server to be built")
time.sleep(5)
print("attempting to enter server")

build_process = Popen(f'cd "{DIRECTORY_PATH}" && sh ./run_build.sh {server_ip} {args.u} {args.p}', shell=True)
build_process.wait()

We create our AWS deployment and then construct our database URL using the output from spinning up our RDS instance, which we write both to a config file called rust_config.yml and to database.txt.

We run run_build.sh like we did for our build server.

The only difference in this process is that we do not tear down our deployment server, since we want it to remain alive to serve our customers.

#!/usr/bin/env bash

SCRIPTPATH="$( cd "$(dirname "$0")"; pwd -P)"
cd "$SCRIPTPATH"

scp -i "~/.ssh/keys/remotebuild.pem" -r ./deployment-compose.yml ec2-user@"$1":/home/ec2-user/docker-compose.yml
scp -i "~/.ssh/keys/remotebuild.pem" -r ./rust_config.yml ec2-user@"$1":/home/ec2-user/rust_config.yml
scp -i "~/.ssh/keys/remotebuild.pem" -r ./database.txt ec2-user@"$1":/home/ec2-user/.env
scp -i "~/.ssh/keys/remotebuild.pem" -r ./nginx_config.conf ec2-user@"$1":/home/ec2-user/nginx_config.conf
scp -i "~/.ssh/keys/remotebuild.pem" -r ../web_app/migrations ec2-user@"$1":/home/ec2-user/

echo "installing rust"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
curl https://sh.rustup.rs -sSf | bash -s -- -y
until [ -f ./output.txt ]
do
    sleep 2
done
echo "File Found"
EOF

echo "installing diesel"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
cargo install diesel_cli --no-default-features --features postgres
EOF

echo "building system"
ssh -i "~/.ssh/keys/remotebuild.pem" -t ec2-user@"$1" << EOF
echo $3 | docker login --username $2 --password-stdin
docker-compose up -d
sleep 2
diesel migration run
curl --location --request POST 'http://localhost/v1/user/create' \
    --header 'Content-Type: application/json' \
    --data-raw '{
        "name": "Matthew",
        "email": "test@gmail.com",
        "password": "test"
    }'
EOF

To prepare our EC2 instance for a deployment, we copy over our deployment docker-compose file, our rust_config.yml (to inject into our web_app container), our database URL (so diesel can be configured properly), our nginx config, and our migrations directory.

Next, we install Rust and wait for the server_build.sh setup installs to complete. Then, we install diesel.

Now, to actually deploy our images, we log into Docker so we can pull the images down and run our docker-compose file. Then we run the diesel migrations to prepare our database, and hit our own API to create a user for us to use.

At this point, we are able to navigate to the EC2's public IP over HTTP and use our app with our created user.

Summary

  • Set up deployment infrastructure using Terraform
  • Create the EC2 deployment setup using nginx and docker-compose, with volumes mapping files on the EC2 into the spun-up Docker containers
  • Use a Python script to glue the outputs from Terraform into the script that runs on the EC2 to start up your Docker images

Enable HTTPS

Use HTTPS

Currently, our system accepts plain HTTP traffic, which is generally bad for a production setup for security reasons. Switching to require HTTPS is quite simple.

We first update our nginx config to look like this

worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 512;
}

http {
    server {
        listen 80;
        return 301 https://$host$request_uri;
    }

    server {
        listen 443 ssl http2;

        ssl_certificate /etc/nginx/ssl/self.crt;
        ssl_certificate_key /etc/nginx/ssl/self.key;

        location /v1 {
            proxy_pass http://rust_app:8000/v1;
        }
        location / {
            proxy_pass http://front_end:4000;
        }
    }
}

This will now redirect any traffic on port 80 (HTTP) to port 443 (HTTPS). We then only need to create self-signed certificates, using openssl to generate a key and a crt, which we pass into our nginx Docker container.
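Generating the self-signed pair looks like this (a sketch; the filenames match the nginx config above, and the compose mounts are where I would wire them in):

# generate a private key and a self-signed cert valid for one year
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
    -keyout self.key -out self.crt \
    -subj "/CN=localhost"

# then mount both into the nginx service in docker-compose:
#   - ./self.crt:/etc/nginx/ssl/self.crt
#   - ./self.key:/etc/nginx/ssl/self.key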

Deploying an HTTPS app

I stopped implementing what the book did at this point, since any further changes for this chapter required purchasing a domain name. Essentially, the last couple of steps were…

  1. Buy a domain and register certificates using AWS Route 53 via the console, and point the domain at an Elastic IP we requested (also done via the AWS console)
  2. Configure a load balancer which uses a certificate from our Route 53 managed domain, using Terraform
  3. Update the Terraform security groups so that only HTTPS traffic can hit the load balancer
  4. Update the web_app EC2's security group and the nginx config so that the EC2 only accepts HTTP traffic from the load balancer's security group, while still allowing SSH access from anywhere (see the sketch below)

If we follow these steps, we will have a domain name which points to our load balancer and only accepts HTTPS. That load balancer is the only entity that can talk to our web_app EC2s, and it does so over HTTP. Finally, we keep a backdoor to our web_app EC2s so we can still SSH into them if we have the proper PEM key.
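As a rough sketch of step 4, the EC2's security group would stop accepting HTTP from the world and instead reference the load balancer's security group (aws_security_group.lb_sg is an assumed name, not from the repo):

resource "aws_security_group" "instance_sg" {
  name        = "instance-sg"
  description = "Security group for EC2 instance"

  # HTTP only from the load balancer's security group
  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    security_groups = [aws_security_group.lb_sg.id]
  }

  # SSH stays open to anyone holding the PEM key
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}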

Summary

  • We can use nginx with self-signed certs to configure a simple, single-node system which only allows HTTPS requests
  • If we want more nodes managed by a load balancer, we will need to set up a domain name
  • We can configure our security groups to lock our EC2s down so they can only receive HTTP traffic from our load balancer (and SSH traffic from someone with the proper keys)

Conclusion

This was the part of the book I enjoyed the most; getting our hands dirty with deployment really brings our project full circle. We now have the know-how to create a secure, scalable app for the whole world to enjoy. In the next section, we will clean up a few small things to bring our todo app up to par with conventional development structures and best practices for web development.
