Working Through Rust Web Programming pt 6: Testing our System
Introduction
In this series, I will be working through this book on Rust Web Programming. The book is quite large and in-depth, but in these articles I will distill its big-picture pieces into bite-size tutorials and walkthroughs.
In this section, we will finally start adding tests to our system. In your real-life development work, please do not wait as long as we have to write tests. Tests should be written side by side with code, but since this project is structured like a book, it makes sense to package all the tests into one chapter.
There are two types of tests we will go over:
- unit tests — test small, isolated pieces of code; they can run without setting up any infrastructure, with dependent services mocked out
- integration tests — also called functional tests; they exercise our application end-to-end and ensure our units behave properly when interacting with one another. These tests need some infrastructure spun up.
The backend work will be tracked in this github project https://github.com/mattmacf98/web_app. This specific work is achieved in this commit.
Unit Tests
Struct Testing
We start with the simplest form of tests, which is directly testing struct functionality without any mocking. In Rust, tests are written directly into the file they are testing — which I think is a very cool feature that really pushes us toward a code+test approach rather than the usual code-then-test approach we often see in projects.
#[cfg(test)]
mod base_tests {
    use super::Base;
    use super::TaskStatus;

    #[test]
    fn new() {
        let expected_title = String::from("test title");
        let expected_status = TaskStatus::DONE;

        let new_base_struct = Base {
            title: expected_title.clone(),
            status: TaskStatus::DONE
        };

        assert_eq!(expected_title, new_base_struct.title);
        assert_eq!(expected_status, new_base_struct.status);
    }
}
Above is an example of how tests are written in Rust. The #[cfg(test)] attribute tells the compiler to include the module only when we run cargo test, and each test case is marked with the #[test] attribute. In this test we simply create a base to_do item and assert it has our expected values.
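One detail worth calling out: assert_eq! only compiles if the compared types implement PartialEq (for the comparison) and Debug (so a failure can print both values). A minimal sketch of what the enum needs — the two-variant shape is an assumption for illustration, not the book's exact code:

```rust
// assert_eq! needs PartialEq to compare values and Debug to print them
// when an assertion fails, so the enum must derive both.
// (Assumed two-variant shape for illustration; not the book's exact code.)
#[derive(Debug, PartialEq)]
pub enum TaskStatus {
    PENDING,
    DONE,
}

fn main() {
    // Comparisons now work, and a failed assertion can print both sides.
    assert_eq!(TaskStatus::DONE, TaskStatus::DONE);
    assert_ne!(TaskStatus::PENDING, TaskStatus::DONE);
    println!("assertions passed");
}
```

Without the derives, the compiler rejects the assert_eq! calls in the base_tests module above.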
Mocking
Sometimes the things we want to test depend on other code, and we cannot use the production version of those dependencies because they perform network calls; pulling in real dependencies also gets away from the whole "unit" idea.
To address this, we can mock them.
use std::collections::HashMap;
use std::env;
use serde_yaml;

pub struct Config {
    pub map: HashMap<String, serde_yaml::Value>
}

impl Config {
    #[cfg(not(test))]
    pub fn new() -> Config {
        let args: Vec<String> = env::args().collect();
        let file_path = &args[args.len() - 1];

        let file = std::fs::File::open(file_path).unwrap();
        let map: HashMap<String, serde_yaml::Value> = serde_yaml::from_reader(file).unwrap();
        return Config {map};
    }

    #[cfg(test)]
    pub fn new() -> Config {
        let mut map = HashMap::new();
        map.insert(String::from("DB_URL"), serde_yaml::from_str("postgres://username:password@localhost:5433/to_do").unwrap());
        map.insert(String::from("SECRET_KEY"), serde_yaml::from_str("secret").unwrap());
        map.insert(String::from("EXPIRE_MINUTES"), serde_yaml::from_str("120").unwrap());
        map.insert(String::from("REDIS_URL"), serde_yaml::from_str("redis://127.0.0.1/").unwrap());
        return Config {map};
    }
}
In the above code, we define Config::new twice: in an actual build it pulls config data from a file, but for our unit tests we might not want to depend on that file being present (we are not testing config; it is just a dependency of the thing we want to test). So we create a #[cfg(test)] version of new that is compiled only during tests.
We can then use this mocked config in our JWT tests, like below:
#[test]
fn get_exp() {
    let config = Config::new();
    let minutes = config.map.get("EXPIRE_MINUTES").unwrap().as_i64().unwrap();
    assert_eq!(120, minutes);
}
Now, the config we create here is our mocked one and does not actually read from a file.
For code we do not own, some libraries export their own testing utilities (actix_web, for example, ships a test module and a #[actix_web::test] macro):
#[actix_web::test]
async fn test_no_token_request() {
    let app = init_service(App::new().route("/", web::get().to(test_handler))).await;

    let req = TestRequest::default()
        .insert_header(ContentType::plaintext())
        .to_request();

    let resp = call_service(&app, req).await;
    assert_eq!("401", resp.status().as_str());
}
Here we need an actix_web App to test our JWToken functionality, but we do not want to bind a real server to a port for a unit test. The #[actix_web::test] attribute sets up an async test runtime, and init_service builds an in-memory service that we can call directly with test requests instead of spinning up a real app.
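The snippet above assumes a test_handler wired up elsewhere; in the book it sits behind the JWT extractor. Stripped of the actix machinery, the behavior the test asserts boils down to a simple guard, sketched here as a hypothetical stdlib-only helper (status_for_token is my name for illustration, not the book's code):

```rust
// Hypothetical stand-in for the handler's auth guard: given the optional
// token header, return the HTTP status code the handler should produce.
fn status_for_token(token_header: Option<&str>) -> u16 {
    match token_header {
        Some(_) => 200, // token present: the request proceeds
        None => 401,    // no token: unauthorized, as the test expects
    }
}

fn main() {
    assert_eq!(401, status_for_token(None));
    assert_eq!(200, status_for_token(Some("some.jwt.token")));
    println!("guard logic matches the test's expectation");
}
```

The TestRequest in the actix test sends no token header, which is why the assertion expects a 401 status.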
Summary
- In Rust, we write tests directly in the files of the things we are testing
- We can create our own mocks of our structs using #[cfg(test)] and #[cfg(not(test))]
- Some 3rd-party crates ship testing utilities that mock their functionality for us
Integration Tests
Creating API Tests
To create a suite of API tests, we leverage Postman and end up with something that looks like this file.
This file can be used by Newman to run our API calls in sequence and assert the results using the verification scripts we wrote in Postman.
This is cool, but there are two problems with this testing approach:
- We must manually spin up and configure our Postgres server, run Diesel migrations, spin up our web app, create a user, extract the auth token, and then spin everything back down every time we want to test our code
- We must manually change the auth token in our test JSON since it expires
Creating a test pipeline
To address the issues in the last section, we are going to write a bash script that performs the server spin-up, auth token generation, test run, and spin-down.
#!/bin/bash
# move to project directory
SCRIPTPATH="$( cd "$(dirname "$0")"; pwd -P)"
cd "$SCRIPTPATH"
cd ..
# start up docker and wait until postgres accepts connections
docker-compose up -d
until pg_isready -h localhost -p 5432 -U username
do
  echo "Waiting for postgres"
  sleep 2
done
cargo build
cargo test
# run server in background
cargo run config.yml &
SERVER_PID=$!
sleep 5
export DYLD_LIBRARY_PATH=/usr/local/Cellar/postgresql@14/14.11_1/lib/postgresql@14/:/usr/local/bin/psql
diesel migration run
cd ./scripts
# create new user
curl --location 'http://localhost:8000/v1/user/create' \
--header 'Content-Type: application/json' \
--data-raw '{
"name": "matthew",
"email": "test@gmail.com",
"password": "test"
}'
#login to get a fresh token
echo $(curl --location --request GET 'http://localhost:8000/v1/auth/login' \
--header 'Content-Type: application/json' \
--data-raw '{
"username": "matthew",
"password": "test"
}') > ./fresh_token.json
TOKEN=$(jq '.token' fresh_token.json)
jq '.auth.apikey.value = '"$TOKEN"'' todo.postman_collection.json > test_newman.json
newman run test_newman.json
rm ./test_newman.json
rm ./fresh_token.json
kill "$SERVER_PID"
cd ..
docker-compose down
This script is great! It will:
- Spin up our postgres server using docker-compose
- Build and spin up our web app
- Run our diesel migration to set up our database
- Create our user & login to extract our fresh token
- Update our test json with the new token and run our integration tests
- And finally, it will spin down our server and postgres server for us
This script can be run anywhere, so we could plug it into Jenkins or GitHub Actions to run whenever new code is checked in, auto-running our integration tests and letting us sleep soundly knowing we did not push broken code to prod.
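As a sketch of what that could look like, here is a hypothetical GitHub Actions workflow that runs the script on every push. The file name, runner image, and script path are all assumptions for illustration, not part of the book's repo:

```yaml
# .github/workflows/integration-tests.yml (hypothetical)
name: integration-tests
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Run the spin-up / test / spin-down script from the previous section
      - name: Run integration tests
        run: bash ./scripts/run_tests.sh
```

Because the script manages its own infrastructure with docker-compose, the CI job needs little beyond a Docker-capable runner.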
Summary
- We can create integration tests and run them with the help of Postman and Newman
- We can create a shell script to set up infrastructure for our tests and then run them
Conclusion
We now have a pretty robust suite of tests; our app is arguably better tested than most big tech companies' code. Testing is a critical part of any development project and should be built side by side with the app. We now have a good understanding of the differences between unit tests and integration tests and why both are important. Next, we will be going through my personal favorite chapter: deploying our app to AWS EC2!