Ray Tracing in One Weekend, Part 2: Rendering Spheres

Matthew MacFarquhar
8 min read · Nov 2, 2024


Introduction

I have been reading through this book series on ray tracing, which walks the reader through creating a ray tracer in C++. In this series of articles, I will work through the book, implementing the lessons in Rust instead and diving deep into each piece.

In this article, we will render our first two spheres. Along the way, we will learn about casting rays, calculating ray-sphere hits, and anti-aliasing.

The following link is the commit in my GitHub repo that matches the code we will go over.

Ray and Camera

First things first, we are going to need some new structs. Our Ray struct allows us to represent a directional line, and it is probably one of the most important structs when building a ray tracer!

Ray.rs

use crate::vec3::{Point3, Vec3};

#[derive(Default)]
pub struct Ray {
    origin: Point3,
    dir: Vec3,
}

A Ray has a starting point and a direction it points in. A ray goes on forever in space, and you can compute the position along it at parameter t as origin + t * dir.

impl Ray {
    pub fn new(origin: Point3, dir: Vec3) -> Ray {
        Ray { origin, dir }
    }

    pub fn origin(&self) -> Point3 {
        self.origin
    }

    pub fn direction(&self) -> Vec3 {
        self.dir
    }

    pub fn at(&self, t: f64) -> Point3 {
        self.origin + t * self.dir
    }
}

We give our ray a basic constructor, two getters, and the parameterization function at that we discussed above.
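As a quick sanity check (a hypothetical snippet, not from the repo; it assumes the Vec3/Point3 accessors from Part 1), evaluating a ray at t = 2.0 lands exactly where the parameterization says it should:

let ray = Ray::new(Point3::new(0.0, 0.0, 0.0), Vec3::new(1.0, 2.0, 3.0));
let p = ray.at(2.0);
// p = origin + 2.0 * dir = (2.0, 4.0, 6.0)
assert_eq!((p.x(), p.y(), p.z()), (2.0, 4.0, 6.0));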

Camera.rs

Our camera will encapsulate some parameters and logic that go into creating our image.

use crate::ray::Ray;
use crate::vec3::{Point3, Vec3};

pub struct Camera {
    origin: Point3,
    lower_left_corner: Point3,
    horizontal: Vec3,
    vertical: Vec3,
}

We give our camera an origin, 3D vectors for the viewport extents in the horizontal and vertical directions, and a lower-left corner value which we will use to anchor our viewport.

impl Camera {
    pub fn new() -> Camera {
        let aspect_ratio = 16.0 / 9.0;
        let viewport_height = 2.0;
        let viewport_width = aspect_ratio * viewport_height;
        let focal_length = 1.0;

        let origin = Point3::new(0.0, 0.0, 0.0);
        let horizontal = Vec3::new(viewport_width, 0.0, 0.0);
        let vertical = Vec3::new(0.0, viewport_height, 0.0);
        let lower_left_corner =
            origin - horizontal / 2.0 - vertical / 2.0 - Vec3::new(0.0, 0.0, focal_length);

        Camera {
            origin,
            lower_left_corner,
            horizontal,
            vertical,
        }
    }

    pub fn get_ray(&self, u: f64, v: f64) -> Ray {
        Ray::new(
            self.origin,
            self.lower_left_corner + u * self.horizontal + v * self.vertical - self.origin,
        )
    }
}

In our constructor, we define some constants and use them to build our struct values. Our get_ray function creates a new ray from our camera to a position in our world given by u and v — how far right and up in our viewport to shoot the ray.
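To build intuition (a hypothetical check, not code from the repo): (u, v) = (0.0, 0.0) aims at the lower-left corner of the viewport, (1.0, 1.0) at the upper-right, and (0.5, 0.5) at the center, which for this camera points straight down the -Z axis.

let camera = Camera::new();
let center_ray = camera.get_ray(0.5, 0.5);
// lower_left_corner + 0.5 * horizontal + 0.5 * vertical - origin = (0.0, 0.0, -focal_length)
assert!(center_ray.direction().z() < 0.0);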

Hits

The main thing our ray tracer will be doing is shooting rays into our world and seeing if they hit anything in order to determine how we should paint a pixel — the color of the object if we hit something, or the ambient background color if we don’t.

Hittable.rs

We are going to define a trait called Hittable.

use crate::ray::Ray;
use crate::vec3::{self, Point3, Vec3};

pub trait Hittable {
    fn hit(&self, ray: &Ray, t_min: f64, t_max: f64, rec: &mut HitRecord) -> bool;
}

Implementers of this trait will be hit-testable by our renderer and can be used to generate a HitRecord.

#[derive(Clone, Default)]
pub struct HitRecord {
    pub p: Point3,
    pub normal: Vec3,
    pub t: f64,
    pub front_face: bool,
}

impl HitRecord {
    pub fn new() -> HitRecord {
        Default::default()
    }

    pub fn set_face_normal(&mut self, r: &Ray, outward_norm: Vec3) {
        self.front_face = vec3::dot(r.direction(), outward_norm) < 0.0;
        self.normal = if self.front_face {
            outward_norm
        } else {
            -outward_norm
        };
    }
}

The HitRecord that the hittable object sends back tells us where the hit was in world space (p), the surface normal at the hit point (normal), how far along the incoming ray the hit occurred (t), and whether we hit the front face of the object.

Our set_face_normal function computes whether we hit a front face by checking if the incoming ray direction points opposite the outward normal. If we did not hit the front face, we flip the normal stored in our HitRecord so that it always points back against the incoming ray.
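For example (again a hypothetical snippet), a ray travelling in the -Z direction that hits a surface whose outward normal points in +Z is a front-face hit, so the normal is stored unchanged:

let r = Ray::new(Point3::new(0.0, 0.0, 0.0), Vec3::new(0.0, 0.0, -1.0));
let mut rec = HitRecord::new();
rec.set_face_normal(&r, Vec3::new(0.0, 0.0, 1.0));
assert!(rec.front_face); // dot(dir, outward_norm) = -1.0 < 0.0
assert!(rec.normal.z() > 0.0); // the stored normal still faces the ray origin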

Hittable_List.rs

Our world will consist of a HittableList of objects that implement Hittable. The HittableList itself also implements Hittable.

use crate::hittable::{HitRecord, Hittable};

#[derive(Default)]
pub struct HittableList {
    objects: Vec<Box<dyn Hittable>>,
}

impl HittableList {
    pub fn new() -> HittableList {
        Default::default()
    }

    pub fn add(&mut self, object: Box<dyn Hittable>) {
        self.objects.push(object);
    }
}

It is very simple and just maintains a list of hittable objects.

impl Hittable for HittableList {
    fn hit(&self, ray: &crate::ray::Ray, t_min: f64, t_max: f64, rec: &mut crate::hittable::HitRecord) -> bool {
        let mut temp_rec = HitRecord::new();
        let mut hit_anything = false;
        let mut closest_so_far = t_max;

        for object in &self.objects {
            if object.hit(ray, t_min, closest_so_far, &mut temp_rec) {
                hit_anything = true;
                closest_so_far = temp_rec.t;
                *rec = temp_rec.clone();
            }
        }

        hit_anything
    }
}

We will also implement the Hittable trait for our HittableList. To determine if we hit something, we iterate through our list of hittable objects. Each time we find a hit, we copy that object's hit record into rec and shrink the max t of our search down to the t of the hit we just found. Essentially, we are progressively checking for hits closer and closer to the camera. Once we have gone through the whole list, rec holds the closest hit record, or we tell the caller that the ray hit nothing.

Sphere.rs

Finally, we will create our first world objects.

use crate::hittable::Hittable;
use crate::vec3::{self, Point3};

pub struct Sphere {
    center: Point3,
    radius: f64,
}

impl Sphere {
    pub fn new(center: Point3, radius: f64) -> Sphere {
        Sphere { center, radius }
    }
}

Our sphere takes in a center and a radius. Now let's make the sphere hittable.

impl Hittable for Sphere {
    fn hit(&self, ray: &crate::ray::Ray, t_min: f64, t_max: f64, rec: &mut crate::hittable::HitRecord) -> bool {
        let oc = ray.origin() - self.center;
        let a = ray.direction().length_squared();
        let half_b = vec3::dot(oc, ray.direction());
        let c = oc.length_squared() - self.radius * self.radius;
        let discriminant = half_b * half_b - a * c;
        if discriminant < 0.0 {
            return false;
        }

        let sqrt_disc = f64::sqrt(discriminant);

        // Find the nearest root that lies in the acceptable range.
        let mut root = (-half_b - sqrt_disc) / a;
        if root <= t_min || root >= t_max {
            root = (-half_b + sqrt_disc) / a;
            if root <= t_min || root >= t_max {
                return false;
            }
        }

        rec.t = root;
        rec.p = ray.at(root);
        let outward_norm = (rec.p - self.center) / self.radius;
        rec.set_face_normal(ray, outward_norm);
        true
    }
}

The math for determining whether the ray intersects the sphere comes down to solving a quadratic equation, which is explained very well here if you want a deeper read. A short sketch of the derivation follows.
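In brief: a point P lies on the sphere exactly when (P - center) · (P - center) = radius^2. Substituting P = origin + t * dir and letting oc = origin - center gives

(dir · dir) t^2 + 2 (oc · dir) t + (oc · oc - radius^2) = 0

a quadratic in t with a = dir · dir, b = 2 (oc · dir), and c = oc · oc - radius^2. Because b carries a factor of 2, the code works with half_b = oc · dir instead; the discriminant b^2 - 4ac then reduces to 4 (half_b^2 - a * c), the factor of 4 cancels in the quadratic formula, and the roots become (-half_b ± sqrt(half_b^2 - a * c)) / a. The ray misses the sphere exactly when that discriminant is negative, which is the early return in the code, and dir · dir and oc · oc are the two length_squared() calls.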

Once we have the square root of the discriminant, we can compute the two roots and check whether either lies between t_min and t_max. If one does, we populate our hit record and tell our caller that we got a hit.
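With Sphere and HittableList both in place, a quick hypothetical check (not from the repo) confirms the closest-hit behavior from the previous section: the nearer of two spheres along the ray wins, even though it was added last.

let mut world = HittableList::new();
world.add(Box::new(Sphere::new(Point3::new(0.0, 0.0, -3.0), 0.5))); // farther sphere
world.add(Box::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5))); // nearer sphere

let ray = Ray::new(Point3::new(0.0, 0.0, 0.0), Vec3::new(0.0, 0.0, -1.0));
let mut rec = HitRecord::new();
assert!(world.hit(&ray, 0.0, f64::INFINITY, &mut rec));
assert!((rec.t - 0.5).abs() < 1e-9); // the sphere at z = -1.0 is hit at t = 0.5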

Rendering

Now let's get back to our renderer in main.rs. We will need to build our HittableList of spheres — which I have called world — and then cast rays for every pixel to determine the color we will paint.

// Declare the modules we have built so far.
mod camera;
mod color;
mod common;
mod hittable;
mod hittable_list;
mod ray;
mod sphere;
mod vec3;

use std::io;

use camera::Camera;
use color::Color;
use common::random_double;
use hittable::{HitRecord, Hittable};
use hittable_list::HittableList;
use ray::Ray;
use sphere::Sphere;
use vec3::Point3;

fn main() {
    const ASPECT_RATIO: f64 = 16.0 / 9.0;
    const IMAGE_WIDTH: i32 = 400;
    const IMAGE_HEIGHT: i32 = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
    const SAMPLES_PER_PIXEL: i32 = 100;

    let mut world = HittableList::new();
    world.add(Box::new(Sphere::new(Point3::new(0.0, 0.0, -1.0), 0.5)));
    world.add(Box::new(Sphere::new(Point3::new(0.0, -100.5, -1.0), 100.0)));

    // Camera
    let camera = Camera::new();

    // Render
    print!("P3\n{} {}\n255\n", IMAGE_WIDTH, IMAGE_HEIGHT);
    for j in (0..IMAGE_HEIGHT).rev() {
        eprint!("\rScanlines remaining: {}", j);
        for i in 0..IMAGE_WIDTH {
            let mut pixel_color = Color::new(0.0, 0.0, 0.0);
            for _ in 0..SAMPLES_PER_PIXEL {
                let u = (i as f64 + random_double()) / (IMAGE_WIDTH - 1) as f64;
                let v = (j as f64 + random_double()) / (IMAGE_HEIGHT - 1) as f64;
                let r = camera.get_ray(u, v);

                pixel_color += ray_color(&r, &world);
            }
            color::write_color(&mut io::stdout(), pixel_color, SAMPLES_PER_PIXEL);
        }
    }

    eprint!("\nDone.\n");
}

Anti-Aliasing

Let’s go over this code. The first thing to talk about is the new innermost loop over SAMPLES_PER_PIXEL. In our old render we took only one sample per pixel, but that produces jagged, stair-stepped edges along object borders.

To get around this, when determining a pixel’s color we now take multiple samples at nearby positions by adding a little randomness to where each ray is aimed. Then, when painting the pixel, we average the accumulated color, which smooths out the object borders.
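The random_double helper (and the INFINITY constant that ray_color uses below) live in a small common module that this article doesn't list; here is a minimal sketch of what common.rs might contain, assuming the rand crate is declared in Cargo.toml:

// common.rs (a minimal sketch, mirroring the book's rtweekend.h helpers)

pub const INFINITY: f64 = f64::INFINITY;

// Returns a random real number in [0.0, 1.0).
pub fn random_double() -> f64 {
    rand::random::<f64>()
}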

We also need to update write_color in our color module to take in the samples per pixel and perform the averaging for us.

use std::io::Write;

use crate::vec3::Vec3;

// Color is an alias for Vec3 (set up in Part 1).
pub type Color = Vec3;

pub fn write_color(out: &mut impl Write, pixel_color: Color, samples_per_pixel: i32) {
    let mut r = pixel_color.x();
    let mut g = pixel_color.y();
    let mut b = pixel_color.z();

    // Divide the accumulated color by the number of samples.
    let scale = 1.0 / samples_per_pixel as f64;
    r *= scale;
    g *= scale;
    b *= scale;

    writeln!(
        out,
        "{} {} {}",
        (256.0 * f64::clamp(r, 0.0, 0.999)) as i32,
        (256.0 * f64::clamp(g, 0.0, 0.999)) as i32,
        (256.0 * f64::clamp(b, 0.0, 0.999)) as i32,
    )
    .expect("writing color");
}

Getting Color From Ray

The other thing we added here is a function that takes in our world and a ray, casts the ray into the world, and returns the color we should paint.

fn ray_color(ray: &Ray, world: &dyn Hittable) -> Color {
    let mut rec = HitRecord::new();
    if world.hit(ray, 0.0, common::INFINITY, &mut rec) {
        return 0.5 * (rec.normal + Color::new(1.0, 1.0, 1.0));
    }

    let unit_direction = vec3::unit_vector(ray.direction());
    let t = 0.5 * (unit_direction.y() + 1.0);
    (1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}

Our ray_color function casts a ray into the world and checks whether it hits something. If it does, we color the pixel a 50-50 mix between white and the color corresponding to the normal at the hit point (each normal component lies in [-1, 1], so 0.5 * (normal + 1) maps it into the [0, 1] color range). If the ray hits nothing, we render a gradient from white to light blue as the ray direction's y value increases. We should get a render with a small sphere sitting atop a much larger sphere that acts as our ground.

Conclusion

In this article we made a lot of progress with our renderer, adding a camera and casting rays from it to determine the colors of our world based on the objects in it. We have not really built a ray tracer yet — more so a ray caster — but we are well on our way!
