Ray Tracing in One Weekend Part 4: Camera and Render Improvements
Introduction
I have been reading through this book series on ray tracing, which walks the reader through the creation of a ray tracer in C++. In this series of articles, I will work through the book, implementing the lessons in Rust and diving deep into each piece.
In this article, we will add some more properties to our camera, such as its position, target, and depth of field. We will also spend some time parallelizing our code so that we can use all the cores of our CPU and render faster. Finally, we will start our journey into the second book in the series by adding motion blur.
The following link is the commit in my GitHub repo that matches the code we will go over.
Parallelizing The Render
Before we get into our next steps with the camera, let's speed up our render. Currently, we are doing all the render work on a single CPU core, iterating through the pixels one by one. However, all of this tracing is completely independent per pixel, which means we can definitely speed the process up with some parallelization.
We will use the rayon crate for this (cargo add rayon) and parallelize across the pixels of each scanline we render.
print!("P3\n{} {}\n255\n", self.image_width, self.image_height);
for j in (0..self.image_height).rev() {
    eprint!("\rScanlines remaining: {}", j);
    let pixel_colors: Vec<_> = (0..self.image_width)
        .into_par_iter()
        .map(|i| {
            let mut pixel_color = Color::new(0.0, 0.0, 0.0);
            for _ in 0..self.samples_per_pixel {
                let u = ((i as f64) + random_double()) / (self.image_width - 1) as f64;
                let v = ((j as f64) + random_double()) / (self.image_height - 1) as f64;
                let r = self.get_ray(u, v);
                pixel_color += Self::ray_color(&r, world, self.max_depth);
            }
            pixel_color
        })
        .collect();
    for pixel_color in pixel_colors {
        color::write_color(&mut io::stdout(), pixel_color, self.samples_per_pixel);
    }
}
eprint!("\nDone.\n");
The only thing we have added is into_par_iter (which requires use rayon::prelude::*; at the top of the file) instead of a basic for loop to iterate over the pixels in each line.
We will also need to add Send + Sync to the Hittable and Material traits so that their implementations can be shared across threads like this.
pub trait Hittable: Send + Sync {
    fn hit(&self, ray: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord>;
}
Lastly, we just need to replace all of our Rcs with Arcs (which can be safely shared between threads). For example, the material in our Sphere struct now uses an Arc instead of an Rc.
pub struct Sphere {
    center: Point3,
    radius: f64,
    mat: Arc<dyn Material>,
}
With all of this, our render should be much faster thanks to Rust's fearless concurrency.
Camera
Our Camera will now take in a few more parameters, including the camera position, orientation, and lens radius.
pub struct Camera {
    image_width: i32,
    image_height: i32,
    samples_per_pixel: i32,
    max_depth: i32,
    origin: Point3,
    lower_left_corner: Point3,
    horizontal: Vec3,
    vertical: Vec3,
    u: Vec3,
    v: Vec3,
    lens_radius: f64,
}
impl Camera {
    pub fn new(image_width: i32, image_height: i32, samples_per_pixel: i32, max_depth: i32,
               eye: Point3, lookat: Point3, up: Vec3, vfov: f64, aspect_ratio: f64, aperture: f64, focus_dist: f64) -> Camera {
        let theta = degrees_to_radians(vfov);
        let h = f64::tan(theta / 2.0);
        let viewport_height = 2.0 * h;
        let viewport_width = aspect_ratio * viewport_height;
        let w = vec3::unit_vector(eye - lookat);
        let u = vec3::unit_vector(vec3::cross(up, w));
        let v = vec3::cross(w, u);
        let origin = eye;
        let horizontal = focus_dist * viewport_width * u;
        let vertical = focus_dist * viewport_height * v;
        let lower_left_corner = origin - horizontal / 2.0 - vertical / 2.0 - focus_dist * w;
        let lens_radius = aperture / 2.0;
        Camera {
            image_width,
            image_height,
            samples_per_pixel,
            max_depth,
            origin,
            lower_left_corner,
            horizontal,
            vertical,
            u,
            v,
            lens_radius
        }
    }
We use the eye-to-target vector and the camera's up direction to create an orthonormal basis (w, u, v) for the camera, and we use those vectors to determine the viewport bounds.
We will now pull our rendering function into our camera implementation.
pub fn render(&self, world: &HittableList) {
    print!("P3\n{} {}\n255\n", self.image_width, self.image_height);
    for j in (0..self.image_height).rev() {
        eprint!("\rScanlines remaining: {}", j);
        let pixel_colors: Vec<_> = (0..self.image_width)
            .into_par_iter()
            .map(|i| {
                let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                for _ in 0..self.samples_per_pixel {
                    let u = ((i as f64) + random_double()) / (self.image_width - 1) as f64;
                    let v = ((j as f64) + random_double()) / (self.image_height - 1) as f64;
                    let r = self.get_ray(u, v);
                    pixel_color += Self::ray_color(&r, world, self.max_depth);
                }
                pixel_color
            })
            .collect();
        for pixel_color in pixel_colors {
            color::write_color(&mut io::stdout(), pixel_color, self.samples_per_pixel);
        }
    }
    eprint!("\nDone.\n");
}
fn ray_color(ray: &Ray, world: &dyn Hittable, depth: i32) -> Color {
    if depth <= 0 {
        return Color::new(0.0, 0.0, 0.0);
    }
    if let Some(hit_rec) = world.hit(ray, 0.001, common::INFINITY) {
        if let Some(scatter_rec) = hit_rec.mat.scatter(ray, &hit_rec) {
            return scatter_rec.attenuation * Self::ray_color(&scatter_rec.scattered, world, depth - 1);
        }
        return Color::new(0.0, 0.0, 0.0);
    }
    let unit_direction = vec3::unit_vector(ray.direction());
    let t = 0.5 * (unit_direction.y() + 1.0);
    (1.0 - t) * Color::new(1.0, 1.0, 1.0) + t * Color::new(0.5, 0.7, 1.0)
}
fn get_ray(&self, s: f64, t: f64) -> Ray {
    let rd = self.lens_radius * vec3::random_in_unit_disk();
    let offset = self.u * rd.x() + self.v * rd.y();
    Ray::new(self.origin + offset, self.lower_left_corner + s * self.horizontal + t * self.vertical - self.origin - offset)
}
Most of this is just refactoring: render and ray_color move into the camera. We also add a small random lens offset in get_ray, which blurs objects away from the focus distance and gives us a depth of field effect.
Render
Now our main function is pretty simple.
fn main() {
    const ASPECT_RATIO: f64 = 3.0 / 2.0;
    const IMAGE_WIDTH: i32 = 600;
    const IMAGE_HEIGHT: i32 = (IMAGE_WIDTH as f64 / ASPECT_RATIO) as i32;
    const SAMPLES_PER_PIXEL: i32 = 250;
    const MAX_DEPTH: i32 = 50;
    let world = random_scene();
    // Camera
    let eye = Point3::new(13.0, 2.0, 3.0);
    let lookat = Point3::new(0.0, 0.0, 0.0);
    let up = Vec3::new(0.0, 1.0, 0.0);
    let dist_to_focus = 10.0;
    let aperture = 0.1;
    let camera = Camera::new(IMAGE_WIDTH, IMAGE_HEIGHT, SAMPLES_PER_PIXEL, MAX_DEPTH, eye, lookat, up, 20.0, ASPECT_RATIO, aperture, dist_to_focus);
    camera.render(&world);
}
We create a camera and a world and then call render on the camera to generate our image.
Below is the random_scene function.
fn random_scene() -> HittableList {
    let mut world = HittableList::new();
    let ground_material = Arc::new(Lambertian::new(Color::new(0.5, 0.5, 0.5)));
    world.add(Box::new(Sphere::new(Point3::new(0.0, -1000.0, 0.0), ground_material, 1000.0)));
    for a in -11..11 {
        for b in -11..11 {
            let center = Point3::new(a as f64 + 0.9 * random_double(), 0.2, b as f64 + 0.9 * random_double());
            if (center - Point3::new(4.0, 0.2, 0.0)).length() > 0.9 {
                let choose_mat = random_double();
                if choose_mat < 0.8 {
                    // Diffuse
                    let albedo = Color::random() * Color::random();
                    let sphere_material = Arc::new(Lambertian::new(albedo));
                    world.add(Box::new(Sphere::new(center, sphere_material, 0.2)));
                } else if choose_mat < 0.95 {
                    // Metal
                    let albedo = Color::random_range(0.5, 1.0);
                    let fuzz = random_double_range(0.0, 0.5);
                    let sphere_material = Arc::new(Metal::new(albedo, fuzz));
                    world.add(Box::new(Sphere::new(center, sphere_material, 0.2)));
                } else {
                    // Glass
                    let sphere_material = Arc::new(Dielectric::new(1.5));
                    world.add(Box::new(Sphere::new(center, sphere_material, 0.2)));
                }
            }
        }
    }
    let material1 = Arc::new(Dielectric::new(1.5));
    world.add(Box::new(Sphere::new(
        Point3::new(0.0, 1.0, 0.0),
        material1,
        1.0,
    )));
    let material2 = Arc::new(Lambertian::new(Color::new(0.4, 0.2, 0.1)));
    world.add(Box::new(Sphere::new(
        Point3::new(-4.0, 1.0, 0.0),
        material2,
        1.0,
    )));
    let material3 = Arc::new(Metal::new(Color::new(0.7, 0.6, 0.5), 0.0));
    world.add(Box::new(Sphere::new(
        Point3::new(4.0, 1.0, 0.0),
        material3,
        1.0,
    )));
    world
}
And after all of this we should get a final render which looks something like this.
Motion Blur
The last feature we will add in this article is motion blur: our objects will "move" a bit between ray samples, and averaging those samples gives the illusion of motion.
First, we will add a time value to the Ray struct.
pub struct Ray {
    origin: Point3,
    dir: Vec3,
    time: f64,
}
Then, we will change Sphere to use a Ray instead of a Point3 for its center.
pub struct Sphere {
    center: Ray,
    radius: f64,
    mat: Arc<dyn Material>,
}
Now our Sphere's implementation of Hittable will use the ray's time to compute the current_center instead of using a static center.
impl Hittable for Sphere {
    fn hit(&self, ray: &crate::ray::Ray, t_min: f64, t_max: f64) -> Option<HitRecord> {
        let current_center = self.center.at(ray.time());
        let oc = ray.origin() - current_center;
        ...
    }
}
And now, when we send a ray from the camera, we will pick a random time for it to be cast at.
fn get_ray(&self, s: f64, t: f64) -> Ray {
    let rd = self.lens_radius * vec3::random_in_unit_disk();
    let offset = self.u * rd.x() + self.v * rd.y();
    let ray_time = random_double();
    Ray::new(self.origin + offset, self.lower_left_corner + s * self.horizontal + t * self.vertical - self.origin - offset, ray_time)
}
We can then add stationary or moving spheres to our world.
fn random_scene() -> HittableList {
    let mut world = HittableList::new();
    let ground_material = Arc::new(Lambertian::new(Color::new(0.5, 0.5, 0.5)));
    world.add(Box::new(Sphere::new(Ray::new(Point3::new(0.0, -1000.0, 0.0), Vec3::new(0.0, 0.0, 0.0), 0.0), ground_material, 1000.0)));
    for a in -11..11 {
        for b in -11..11 {
            let center = Point3::new(a as f64 + 0.9 * random_double(), 0.2, b as f64 + 0.9 * random_double());
            if (center - Point3::new(4.0, 0.2, 0.0)).length() > 0.9 {
                let choose_mat = random_double();
                if choose_mat < 0.8 {
                    // Diffuse
                    let moving_ray = Ray::new(center, Vec3::new(0.0, random_double_range(0.0, 0.5), 0.0), 0.0);
                    let albedo = Color::random() * Color::random();
                    let sphere_material = Arc::new(Lambertian::new(albedo));
                    world.add(Box::new(Sphere::new(moving_ray, sphere_material, 0.2)));
                } else if choose_mat < 0.95 {
                    // Metal
                    let stationary_ray = Ray::new(center, Vec3::new(0.0, 0.0, 0.0), 0.0);
                    let albedo = Color::random_range(0.5, 1.0);
                    let fuzz = random_double_range(0.0, 0.5);
                    let sphere_material = Arc::new(Metal::new(albedo, fuzz));
                    world.add(Box::new(Sphere::new(stationary_ray, sphere_material, 0.2)));
                } else {
                    // Glass
                    let stationary_ray = Ray::new(center, Vec3::new(0.0, 0.0, 0.0), 0.0);
                    let sphere_material = Arc::new(Dielectric::new(1.5));
                    world.add(Box::new(Sphere::new(stationary_ray, sphere_material, 0.2)));
                }
            }
        }
    }
    let material1 = Arc::new(Dielectric::new(1.5));
    world.add(Box::new(Sphere::new(
        Ray::new(Point3::new(0.0, 1.0, 0.0), Vec3::new(0.0, 0.0, 0.0), 0.0),
        material1,
        1.0,
    )));
    let material2 = Arc::new(Lambertian::new(Color::new(0.4, 0.2, 0.1)));
    world.add(Box::new(Sphere::new(
        Ray::new(Point3::new(-4.0, 1.0, 0.0), Vec3::new(0.0, 0.0, 0.0), 0.0),
        material2,
        1.0,
    )));
    let material3 = Arc::new(Metal::new(Color::new(0.7, 0.6, 0.5), 0.0));
    world.add(Box::new(Sphere::new(
        Ray::new(Point3::new(4.0, 1.0, 0.0), Vec3::new(0.0, 0.0, 0.0), 0.0),
        material3,
        1.0,
    )));
    world
}
This gives us a render which looks like this.
Conclusion
In this article, we improved our ray tracing speed, refactored our rendering functionality into the camera, and gave the camera more control over the viewport and the focus of the scene, as well as adding motion blur for our objects. At this point, we have completed the Ray Tracing in One Weekend portion of the series and will continue through the rest of it in coming articles.