Working Through Rust Web Programming pt 5: Making our Server RESTful

Matthew MacFarquhar
6 min read · Apr 1, 2024


Introduction

In this series, I will be working through this book on Rust Web Programming. The book is quite large and in-depth, but in these articles I will distill the big-picture pieces of the book into bite-size tutorials and walkthroughs.

In this section, we will be adding some additional features to our web server and front end app to make our project RESTful. The main pieces of a RESTful API are…

  1. Uniform Interface: The interface between the client and the server should be uniform. This means that similar methods should be used for different resources. We have already set up our endpoints like this so there is not much we need to do here.
  2. Stateless Communication: Communication between the client and server should be stateless. Each request from the client to the server should contain all the necessary information for the server to understand and process the request. We don’t have any state stored on the server currently, but we will create some and store the state properly in a Redis service so that the state can be used regardless of what backend node handles the request.
  3. Cache-ability: Responses from the server can be labeled as cacheable or non-cacheable. Cacheable responses can be reused by clients or intermediary servers to reduce latency and improve performance. We have some calls that could be cached, so we will go into the front end to cache them and avoid unnecessary requests.

The backend work is tracked in this GitHub project: https://github.com/mattmacf98/web_app. This specific work is achieved in this commit. We will also make some front-end changes to add caching, in this commit.

Backend

Build File

We first talk about splitting our app into a v1 and v2 interface. This part of the chapter didn't seem to have a whole lot to do with making our app RESTful; it was more about learning how to create a Rust build file.

All our build file does is write a version number to a file, which our server will read to determine the allowed API version in our main.rs.

const ALLOWED_VERSION: &'static str = include_str!("./output_data.txt");
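To give a sense of how this constant might be used (a hypothetical sketch of my own; the exact check the book performs in main.rs may differ), the server can gate incoming request paths on the allowed version prefix:

// Hypothetical guard: e.g. if ALLOWED_VERSION is "v1", then "/v1/item/get"
// passes but "/v2/item/get" does not.
fn is_version_allowed(path: &str) -> bool {
    path.starts_with(&format!("/{}/", ALLOWED_VERSION))
}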

Below is the build.rs that creates output_data.txt:

use std::fs::File;
use std::io::Write;
use std::collections::HashMap;
use serde_yaml;

fn main() {
    // read the build-time config file
    let file = File::open("./build_config.yml").unwrap();
    let map: HashMap<String, serde_yaml::Value> = serde_yaml::from_reader(file).unwrap();
    // pull out the ALLOWED_VERSION string
    let version = map.get("ALLOWED_VERSION").unwrap().as_str().unwrap();
    // write it to the file that main.rs pulls in via include_str!
    let mut f = File::create("./src/output_data.txt").unwrap();
    write!(f, "{}", version).unwrap();
}

It reads from a build_config.yml to get the ALLOWED_VERSION value and writes it to output_data.txt (the config file needs only a single key, e.g. ALLOWED_VERSION: v1). Is this really necessary, you may ask? No, it is not, but it does show us how to use the build-file feature in Rust to run some code at build time.

Redis

Next, we are going to create a counter state to keep track of how many times our server is called. Putting the state directly in our Rust web app would work for our current single-node setup. However, once we expand to multiple servers handling user requests, we will need to extract the state out of the Rust web app (it is also good RESTful practice for our backend server to be stateless).

Let's go ahead and update our docker-compose file to spin up a Redis service alongside our Postgres database.

version: "3.7"

services:
postgres:
container_name: "to-do-postgres"
image: 'postgres:11.2'
restart: always
ports:
- '5432:5432'
environment:
- 'POSTGRES_USER=username'
- 'POSTGRES_DB=to_do'
- 'POSTGRES_PASSWORD=password'
- 'POSTGRES_HOST_AUTH_METHOD=trust'
redis:
container_name: 'to-do-redis'
image: 'redis:5.0.5'
ports:
- '6379:6379'

We can see our Redis service in our updated docker-compose.yml file. Redis is useful for storing ephemeral data that doesn't really belong in a database: it is an in-memory key-value store, great for quick gets and sets, but it lacks the complex query support of a SQL database. (The Counter code below also expects a REDIS_URL entry in our config.yml, e.g. redis://127.0.0.1:6379/.)

use serde::{Deserialize, Serialize};
use crate::config::Config;
use redis::RedisError;

#[derive(Serialize, Deserialize, Debug)]
pub struct Counter {
    pub count: i32
}

impl Counter {
    // pull the Redis URL out of our config.yml
    fn get_redis_url() -> String {
        let config = Config::new();
        config.map.get("REDIS_URL").unwrap().as_str().unwrap().to_owned()
    }

    pub fn save(self) -> Result<(), RedisError> {
        // serialize the counter into bytes
        let serialized = serde_yaml::to_vec(&self).unwrap();

        // 1. create a client with the Redis URL
        let client = match redis::Client::open(Counter::get_redis_url()) {
            Ok(client) => client,
            Err(error) => return Err(error)
        };

        // 2. use the client to get a connection
        let mut con = match client.get_connection() {
            Ok(con) => con,
            Err(error) => return Err(error)
        };

        // 3. SET the serialized counter under the COUNTER key
        match redis::cmd("SET").arg("COUNTER").arg(serialized).query::<Vec<u8>>(&mut con) {
            Ok(_) => Ok(()),
            Err(error) => Err(error)
        }
    }

    pub fn load() -> Result<Counter, RedisError> {
        // 1. create a client with the Redis URL
        let client = match redis::Client::open(Counter::get_redis_url()) {
            Ok(client) => client,
            Err(error) => return Err(error)
        };

        // 2. use the client to get a connection
        let mut con = match client.get_connection() {
            Ok(con) => con,
            Err(error) => return Err(error)
        };

        // 3. GET the raw bytes stored under the COUNTER key
        let byte_data: Vec<u8> = match redis::cmd("GET").arg("COUNTER").query(&mut con) {
            Ok(data) => data,
            Err(error) => return Err(error)
        };

        // deserialize the bytes back into a Counter
        Ok(serde_yaml::from_slice(&byte_data).unwrap())
    }
}

We encapsulate our counter, along with the ability to save it to and load it from our Redis service, in a file named counter.rs.

The basic pattern for operating on our Redis service (condensed into a single sketch after the list) is:

  1. Create a client with the Redis url stored in our config.yml
  2. Use the client to get a connection
  3. Execute the operation we want using the connection and return something (if needed)
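Condensed into a single hypothetical helper (my own illustration, not code from the book), the three steps look like this; the ? operator is a tidier alternative to the match blocks above:

// Hypothetical helper showing the three-step pattern for a plain string value.
fn redis_set(url: &str, key: &str, value: &str) -> redis::RedisResult<()> {
    let client = redis::Client::open(url)?;   // 1. create a client
    let mut con = client.get_connection()?;   // 2. get a connection
    redis::cmd("SET").arg(key).arg(value).query(&mut con)  // 3. run the command
}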

We can then use Counter in our web app code like below

let mut site_counter = counter::Counter::load().unwrap();
site_counter.count += 1;
println!("{:?}", &site_counter);
// save() consumes the counter and returns a Result we should not ignore
site_counter.save().unwrap();
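One caveat: on the very first request there is no COUNTER key in Redis yet, so the load above will not find valid data to deserialize. A sketch of one way to guard the call site (my own fallback, not necessarily how the book handles it, and assuming the failure surfaces as an Err) is to start from a fresh counter:

let mut site_counter = match counter::Counter::load() {
    // a counter was already stored: pick up where we left off
    Ok(counter) => counter,
    // no COUNTER key yet (or a Redis error): start counting from zero
    Err(_) => counter::Counter { count: 0 },
};
site_counter.count += 1;
site_counter.save().unwrap();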

Logging traffic

It is always a good idea to add logging to our code. There are a few different log levels we can use (TRACE, DEBUG, INFO, WARN and ERROR), which indicate how severe an event is and what action, if any, should be taken when it occurs.

We can set up logging in our system using middleware.

env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));

and wrap our Actix Web app with a new Logger:

App::new()
    .wrap_fn(|req, srv| {
        ...
    })
    .configure(views::views_factory)
    .wrap(cors)
    .wrap(Logger::new("%a %{User-Agent}i %r %s %D"))
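For context, below is a minimal sketch of how these pieces fit together in main.rs (assuming actix-web 3.x and env_logger as used in the book; the cors and views_factory wiring from earlier parts is omitted, and the bind address is illustrative):

use actix_web::{App, HttpServer, middleware::Logger};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // default to the info level unless RUST_LOG says otherwise
    env_logger::init_from_env(env_logger::Env::new().default_filter_or("info"));
    HttpServer::new(|| {
        App::new()
            .wrap(Logger::new("%a %{User-Agent}i %r %s %D"))
    })
    .bind("127.0.0.1:8000")?
    .run()
    .await
}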

This logging string says that every request logs:

  • %a: the IP of the client calling our web app
  • %{User-Agent}i: the User-Agent extracted from the request headers
  • %r: the first line of the request
  • %s: the response status code
  • %D: the time taken to serve the request, in milliseconds

Summary

  • We can use build.rs to run some code every time we run cargo build
  • We can set up a Redis service in Docker and then save and load values into it to extract state management from our web app and delegate it to Redis
  • We can log server traffic very easily using a middleware and we should always include logging in our code

Front End

Caching

At the end of the chapter, we take a very quick stab at front-end caching. Caching on the front end is a great way to lighten the load on our server, but we must be careful to only cache things that are OK to be a little stale.

For example, we wouldn't cache a login request, since the token has an expiration on it. We will cache our GET items request so that if the last GET items call occurred less than 2 minutes ago, we don't trouble the server with a new request and instead just use the value from the last call.

Every time we get an item list, we will save the response in local storage along with the timestamp the cache entry was made.

localStorage.setItem("item-cache-date", new Date());
localStorage.setItem("item-cache-data-pending", JSON.stringify(pending_items));
localStorage.setItem("item-cache-data-done", JSON.stringify(done_items));

Now, we wrap our getItems logic in an if-else branch

let cachedDate = Date.parse(localStorage.getItem("item-cache-date"));
let now = Date.now();
let difference = Math.round((now - cachedDate) / 1000);

Where the branch looks like:

if (difference < 120) {
    // cached copy is less than 2 minutes old: use the stored items
    ...
} else {
    // cache is stale: call the server for fresh items
    ...
}

We must also cache the returned items from our other requests (i.e. Edit and Delete) since those also return an updated state of our todo items.

Now, we will only call the server to getItems if there has been no action by the user in the last 2 minutes. One saved call might not seem like a lot, but when we scale our app up to millions of users, our server (and our wallets) will be grateful we took the extra care to do this caching.

Summary

  • Cache data on the front end when it is OK for it to not be 100% fresh, and make sure the cache has an expiration so we don't use stale data indefinitely.

Conclusion

This covers all the functionality of our app. We now have a way to store shared ephemeral state on the backend so that API requests to different nodes do not behave differently, we added some nice logs so we can debug issues in the future, and we cached our data on the front end to reduce the workload on our server. Next, we will be going over something that in a production setup we should be doing from the start: Testing!
