Working Through Rust Web Programming pt 8: Cleaning up our App

Matthew MacFarquhar
7 min read · Apr 11, 2024

Introduction

In this series, I will be working through this book on Rust Web Programming. The book is quite large and in-depth, but in these articles I will distill the big-picture pieces of the book into bite-sized tutorials and walkthroughs.

In this section, we will be cleaning up our web_app code slightly to adhere to best practices outlined in the book.

The backend work will be tracked in this github project https://github.com/mattmacf98/clean_web_app.

Housekeeping

The first thing we are going to do is update our code and configs a little bit.

Env over Config

Up until now, we have been moving our config around using YAML files and reading from them with our Config struct. However, using environment variables for configuration is a best practice and integrates most cleanly with platforms like Kubernetes and AWS.

We will delete our Config struct and replace code like

let connection_string = Config::new().map.get("DB_URL").unwrap().as_str().unwrap().to_string();

with

let connection_string = env::var("DB_URL").unwrap();

We can now set these environment variables before running our web_app with a command like

export DB_URL="exampleurl.com"

This removes the need to copy config files to wherever we run our app, but it places a new burden on us: we must export the environment variables before running our code. We will see later that this is not as bad as it seems, with the help of some handy bash scripts.
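As a minimal sketch of the pattern (the fallback URL here is illustrative, not from the app, which simply unwraps and crashes early if the variable is missing):

```rust
use std::env;

fn main() {
    // env::var returns Err if DB_URL was never exported. In the real app we
    // unwrap so a missing variable fails fast at startup; here we fall back
    // to an illustrative default so the sketch runs standalone.
    let connection_string = env::var("DB_URL")
        .unwrap_or_else(|_| "postgres://username:password@localhost/to_do".to_string());

    println!("connecting to {}", connection_string);
}
```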

Database Docker

When we want to run a database migration, we currently have to install and run the diesel CLI ourselves or script it in bash. Instead, we will use the book author's migration management tool to run our language-agnostic migrations inside their own Docker image.

FROM postgres

RUN apt-get update && apt-get install -y wget \
    && wget -O - https://raw.githubusercontent.com/yellow-bird-consult/build_tools/develop/scripts/install.sh | bash \
    && cp ~/yb_tools/database.sh ./database.sh

WORKDIR .
ADD . .

CMD ["bash", "./database.sh", "db", "rollup"]

This Docker image installs the author's migration tool from GitHub and then runs our migrations. Now, instead of installing and configuring diesel, we can just spin up this image to run our database migrations against Postgres.

Distroless Docker

Our current Docker image for our web_app is giant (around 2.5 GB). This is because using the rust image as our starting point pulls in a lot of things our server does not actually need, like terminal access. Removing these unnecessary dependencies from the final image decreases its size and also gives us some safety: nobody can SSH into our web_app container and run amok, since there is no shell to SSH into.

We first need to find the libraries our web app actually requires. We do this by running ldd web_app inside our bloated web_app Docker image (which still has a terminal).
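For reference, this is what the ldd step looks like on a glibc-based Linux system (the path to our server binary is an assumption; here we demonstrate the output format on a binary every Linux box has):

```shell
# ldd prints the shared libraries a binary needs at runtime. Inside the
# bloated image we would run it against our server binary:
#   ldd /app/target/release/web_app
# To see the output format, try it on any system binary:
ldd /bin/ls
```

Each line of output maps a library name to the path it resolves to, which is exactly the list of files we need to copy into the distroless image.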

Using those libraries we can create our build file

FROM rust:1.74 as build

RUN apt-get update
RUN apt-get install libpq5 -y

WORKDIR /app
COPY . .

ARG ENV="PRODUCTION"
RUN echo "$ENV"

RUN if [ "$ENV" = "PRODUCTION" ]; then cargo build --release; else cargo build; fi
RUN if [ "$ENV" = "PRODUCTION" ]; then echo "no need to copy"; else mkdir /app/target/release && cp /app/target/debug/clean_app /app/target/release/clean_app; fi

FROM gcr.io/distroless/cc-debian10

# "ldd clean_app" command on the static binary to get all the needed libraries
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libpq.so.5 /lib/x86_64-linux-gnu/libpq.so.5
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 /lib/x86_64-linux-gnu/libgssapi_krb5.so.2
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libkrb5.so.3 /lib/x86_64-linux-gnu/libkrb5.so.3
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 /lib/x86_64-linux-gnu/libk5crypto.so.3
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 /lib/x86_64-linux-gnu/libkrb5support.so.0
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libgcc_s.so.1 /lib/x86_64-linux-gnu/libgcc_s.so.1
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libpthread.so.0 /lib/x86_64-linux-gnu/libpthread.so.0
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libm.so.6 /lib/x86_64-linux-gnu/libm.so.6
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libdl.so.2 /lib/x86_64-linux-gnu/libdl.so.2
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/libc.so.6

COPY --chown=1001:1001 --from=build /lib64/ld-linux-x86-64.so.2 /lib64/ld-linux-x86-64.so.2

COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libresolv.so.2 /lib/x86_64-linux-gnu/libresolv.so.2
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libsasl2.so.2 /lib/x86_64-linux-gnu/libsasl2.so.2
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libgnutls.so.30 /lib/x86_64-linux-gnu/libgnutls.so.30
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libkeyutils.so.1 /lib/x86_64-linux-gnu/libkeyutils.so.1
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 /lib/x86_64-linux-gnu/libp11-kit.so.0
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libidn2.so.0 /lib/x86_64-linux-gnu/libidn2.so.0
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libunistring.so.2 /lib/x86_64-linux-gnu/libunistring.so.2
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libtasn1.so.6 /lib/x86_64-linux-gnu/libtasn1.so.6
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libnettle.so.8 /lib/x86_64-linux-gnu/libnettle.so.8
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libhogweed.so.6 /lib/x86_64-linux-gnu/libhogweed.so.6
COPY --chown=1001:1001 --from=build /usr/lib/x86_64-linux-gnu/libgmp.so.10 /lib/x86_64-linux-gnu/libgmp.so.10
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libssl.so.3 /lib/x86_64-linux-gnu/libssl.so.3
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libcom_err.so.2 /lib/x86_64-linux-gnu/libcom_err.so.2
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libcrypto.so.3 /lib/x86_64-linux-gnu/libcrypto.so.3
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libldap-2.5.so.0 /lib/x86_64-linux-gnu/libldap-2.5.so.0
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/liblber-2.5.so.0 /lib/x86_64-linux-gnu/liblber-2.5.so.0
COPY --chown=1001:1001 --from=build /lib/x86_64-linux-gnu/libffi.so.8 /lib/x86_64-linux-gnu/libffi.so.8

COPY --from=build /app/target/release/clean_app /usr/local/bin/clean_app

EXPOSE 8000
ENTRYPOINT ["clean_app"]

You can see we still use the rust image as a build stage so we can actually compile our web_app code. However, our final image is built on top of gcr.io/distroless/cc-debian10. The size difference is immediate: the rust image is around 500 MB, while our distroless Debian image is only around 10 MB.

We copy all the libraries we found with ldd from the rust build stage into the final image, then copy the web_app executable over as well and run it.
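One detail in the build stage worth calling out: `[` is itself a command in shell, so it needs spaces around its arguments, and a conditional written as `if ["$ENV" = "PRODUCTION"]` fails with "command not found". Stripped of Docker, the logic is:

```shell
#!/usr/bin/env bash

# The Dockerfile's conditional build step as plain shell. Note the spaces
# inside [ ... ]; without them the shell looks for a command literally
# named `[PRODUCTION...` and errors out.
ENV="PRODUCTION"

if [ "$ENV" = "PRODUCTION" ]; then
  echo "release build"   # the Dockerfile runs: cargo build --release
else
  echo "debug build"     # the Dockerfile runs: cargo build
fi
```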

Summary

  • Use environment variables to store configuration our app can use at runtime
  • We can spin up language agnostic Docker images to run tasks — like database migrations — for us
  • Using distroless images with only the required libraries to run our app has made our web_app Docker image much lighter and also more secure

Bash Scripts

Remember when we removed Config and switched to environment variables? This changes how we run our app, since we must always export some environment variables first. All interactions with our app should therefore go through scripts that set the environment variables for us.

Unit Test Script

This one is very simple: we just export a few variables for our JWT tests, which use the secret key and expiration time.

#!/usr/bin/env bash

SCRIPTPATH="$( cd "$(dirname "$0")"; pwd -P)"
cd "$SCRIPTPATH"
cd ..


export SECRET_KEY="secret"
export EXPIRE_MINUTES=60
cargo test
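To see why the script must export those variables, here is a hypothetical sketch of the kind of config lookup a JWT test performs (the function name is illustrative, and the fallback defaults exist only so the sketch runs standalone; the real tests would fail loudly on a missing variable):

```rust
use std::env;

// Hypothetical sketch: the config a JWT test reads. The test script exports
// SECRET_KEY and EXPIRE_MINUTES before invoking `cargo test`.
fn jwt_config() -> (String, i64) {
    let secret = env::var("SECRET_KEY").unwrap_or_else(|_| "secret".to_string());
    let expire_minutes: i64 = env::var("EXPIRE_MINUTES")
        .unwrap_or_else(|_| "60".to_string())
        .parse()
        .expect("EXPIRE_MINUTES must be an integer");
    (secret, expire_minutes)
}

fn main() {
    let (secret, expire_minutes) = jwt_config();
    assert!(!secret.is_empty());
    assert!(expire_minutes > 0);
    println!("JWT tokens expire after {} minutes", expire_minutes);
}
```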

Run Dev Server

This one should be run to spin up our app locally, after we have run docker-compose up on our local docker-compose.yml.

#!/usr/bin/env bash

SCRIPTPATH="$( cd "$(dirname "$0")"; pwd -P)"
cd "$SCRIPTPATH"
cd ..


export SECRET_KEY="secret"
export EXPIRE_MINUTES=60
export DB_URL="postgres://username:password@localhost/to_do"
export REDIS_URL="redis://127.0.0.1/"

cargo run

Similarly to the unit test script, it just sets some environment variables and runs our app.

Integration Testing Script

To make this script work, we had to make our Postman tests self-contained. Previously, we used a script to curl for a token and inject it into the test JSON. Postman can actually set variables based on previous calls, so we can move the auth prerequisite directly into our Postman JSON tests.

We also created a new docker-compose for testing — this is similar to what we would use for a production deployment docker-compose as well.

version: "3.7"

services:
  test_server:
    container_name: test_server
    image: test_auth_server
    build:
      context: ../
      args:
        ENV: "NOT_PRODUCTION"
    restart: always
    environment:
      - 'DB_URL=postgres://username:password@test_postgres:5432/to_do'
      - 'SECRET_KEY=secret'
      - 'EXPIRE_MINUTES=60'
      - 'REDIS_URL=redis://test_redis/'
    depends_on:
      test_redis:
        condition: service_started
    ports:
      - "8000:8000"
    expose:
      - 8000

  test_postgres:
    container_name: 'test_postgres'
    image: 'postgres'
    restart: always
    ports:
      - '5432:5432'
    environment:
      - 'POSTGRES_USER=username'
      - 'POSTGRES_DB=to_do'
      - 'POSTGRES_PASSWORD=password'

  test_redis:
    container_name: 'test_redis'
    image: 'redis:5.0.5'
    ports:
      - '6379:6379'

  init_test_db:
    container_name: 'init_test_db'
    image: init_test_db
    build:
      context: ../database
    environment:
      - 'DB_URL=postgres://username:password@test_postgres:5432/to_do'
    depends_on:
      test_postgres:
        condition: service_started
    restart: on-failure

We spin up our redis and postgres services same as ever; we also use the database migration image for the init_test_db service, which will run migrations once our test_postgres service is up. Finally, we spin up our web_app, using our distroless Dockerfile for the web_app image.

#!/usr/bin/env bash

SCRIPTPATH="$( cd "$(dirname "$0")"; pwd -P)"
cd "$SCRIPTPATH"

if [ "$(uname -m)" = "arm64" ]
then
  cp ../builds/arch_build ../Dockerfile
else
  cp ../builds/x86_64_build ../Dockerfile
fi

cd ../tests

docker-compose build --no-cache
docker-compose up -d

sleep 5

newman run todo.postman_collection.json

docker-compose down
docker image rm test_server
docker image rm init_test_db
docker image rm test_postgres
rm ../Dockerfile

Our script picks the right distroless Dockerfile for our system architecture, then moves to the tests directory and brings up our docker-compose infrastructure. We run the integration tests with newman and finally tear everything down.
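The fixed sleep 5 is a weak point: if the server takes longer to come up, newman runs against nothing. A sketch of a retry helper that could replace it (the function name and the health-check URL are assumptions, not from the book):

```shell
#!/usr/bin/env bash

# wait_for: retry a command until it succeeds or the attempt budget runs out.
# A more robust alternative to a fixed sleep before running tests.
wait_for() {
  local retries=$1
  shift
  local attempt=0
  until "$@"; do
    attempt=$((attempt + 1))
    if [ "$attempt" -ge "$retries" ]; then
      echo "gave up after $retries attempts: $*" >&2
      return 1
    fi
    sleep 1
  done
}

# In the test script this might look like:
#   wait_for 30 curl -sf http://localhost:8000/
wait_for 3 true && echo "ready"
```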

Summary

  • If we use environment variables for our app config, we should rely on scripts to run our app so that the scripts can set the environment variables for us
  • We can combine specialized Docker images for tasks like database migrations with a bash script that spins up docker-compose, simplifying our infrastructure setup for both deployment and testing.

Conclusion

With a few small changes, we increased the security and decreased the size of our Docker images, and set up our code to use the best practice for configuration: environment variables. In the next chapter, we will set our app aside as we dive into async Rust programming using the tokio crate.
