Ray Tracing in One Weekend, Part 10: Monte Carlo Optimizations

Matthew MacFarquhar
6 min read · Nov 15, 2024


Introduction

I have been reading through this book series on ray tracing, which walks the reader through the creation of a ray tracer in C++. In this series of articles, I will be going through the book, implementing the lessons in Rust instead, and diving deep into each piece.

In this article we will apply some Monte Carlo techniques to make our renderer converge faster. The main idea behind our sampling approach is that there is some ground-truth image, and we get closer to it by taking many samples per pixel. However, the way we scatter rays to generate those pixel samples is uniformly random; we could converge faster if we made our scatter rays favor important objects in the scene, like lights.

The following link points to the commit in my GitHub repo that matches the code we will go over.

Stratification

The first thing we will do to improve our render is add stratification when we sample pixels. In ray tracing, stratification refers to dividing the pixel area (or any sampling domain) into smaller subregions, called strata, and ensuring that samples are spread across these regions to improve the uniformity and quality of the sampling. This technique reduces noise and improves the convergence of the rendered image toward a photorealistic result.

First, we will update our pixel sampling to introduce a 2D structure. Instead of just sampling N times per pixel, we will sample sqrt(N) * sqrt(N) times, so the total is still N, but we can inject a 2D parameterization into each sample.

let pixel_colors: Vec<_> = (0..self.image_height)
    .into_par_iter()
    .flat_map(|j| {
        (0..self.image_width).into_par_iter().map(move |i| {
            let mut pixel_color = Color::new(0.0, 0.0, 0.0);
            // Take sqrt_samples x sqrt_samples stratified samples per pixel.
            for s_i in 0..self.sqrt_samples {
                for s_j in 0..self.sqrt_samples {
                    let u = (i as f64 + random_double()) / (self.image_width - 1) as f64;
                    let v = (j as f64 + random_double()) / (self.image_height - 1) as f64;
                    let r = self.get_ray(u, v, s_i, s_j);
                    pixel_color += self.ray_color(&r, world, self.max_depth);
                }
            }
            pixel_color
        })
    })
    .collect();

Now, our get_ray function will use the stratified indices when generating the ray.

fn get_ray(&self, s: f64, t: f64, s_i: i32, s_j: i32) -> Ray {
    // Stratified jitter for this sample's (s_i, s_j) cell.
    let offset = self.sample_square_stratification(s_i, s_j);
    let pixel_sample = self.lower_left_corner + s * self.horizontal + t * self.vertical;

    // With a lens, also jitter the origin within the aperture for depth of field.
    let ray_origin = if self.lens_radius <= 0.0 {
        self.origin
    } else {
        let rd = self.lens_radius * vec3::random_in_unit_disk();
        self.origin + self.u * rd.x() + self.v * rd.y()
    };
    // Shift the origin by the stratified offset; the direction is recomputed
    // so the ray still passes through the same pixel sample point.
    let ray_direction = pixel_sample - (ray_origin + offset);
    let ray_time = random_double();

    Ray::new(ray_origin + offset, ray_direction, ray_time)
}

We use the stratification indices to place the offset on a grid-like pattern within the sampling square (before, we would just pick a point uniformly at random rather than per-stratum).

fn sample_square_stratification(&self, s_i: i32, s_j: i32) -> Vec3 {
    let recip_sqrt_samples = 1.0 / self.sqrt_samples as f64;
    // Pick a random point inside the (s_i, s_j) stratum, then recenter the
    // unit square around the origin so offsets fall in [-0.5, 0.5).
    let px = ((s_i as f64 + random_double()) * recip_sqrt_samples) - 0.5;
    let py = ((s_j as f64 + random_double()) * recip_sqrt_samples) - 0.5;

    Vec3::new(px, py, 0.0)
}
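As a quick sanity check on the stratification math (a worked example, not code from the repo): with sqrt_samples = n, the sample for index s_i can only land in its own stratum, since

p_x \in \left[ \frac{s_i}{n} - \frac{1}{2},\ \frac{s_i + 1}{n} - \frac{1}{2} \right)

so for n = 2 the four (s_i, s_j) combinations exactly tile the four quadrants of the pixel's sampling square.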

The results of this approach (at 10 samples per pixel) are pretty much unnoticeable. If you look very closely, the edges may look slightly different.

Scattering PDF and Importance Sampling

Scattering PDFs (probability density functions) describe the likelihood of light being scattered in a particular direction, allowing for physically accurate and computationally efficient light transport simulation. If we can model a good PDF so that scatter rays favor "important" objects in our render, like light sources, we can approach a good render much faster than with the uniform random scattering we have been using.
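This is the core of Monte Carlo importance sampling: if we draw scatter directions from a density p instead of uniformly, the unbiased estimator for an integral of f divides each sample by the density it was drawn with,

\int f(\omega)\, d\omega \;\approx\; \frac{1}{N} \sum_{k=1}^{N} \frac{f(\omega_k)}{p(\omega_k)}

and the closer the shape of p is to the shape of f (for example, piling probability onto directions toward lights), the lower the variance of the estimate.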

Orthonormal Basis

First, we are going to set up an ONB (orthonormal basis) class to help us encapsulate some of the concepts we need to build these scene PDFs.

pub struct Onb {
    axis: [Vec3; 3],
}

impl Onb {
    pub fn new(n: &Vec3) -> Self {
        let normal = vec3::unit_vector(n.clone());
        // Pick any vector not parallel to the normal to seed the cross products.
        let a = if f64::abs(normal.x()) > 0.9 {
            Vec3::new(0.0, 1.0, 0.0)
        } else {
            Vec3::new(1.0, 0.0, 0.0)
        };

        let u = vec3::unit_vector(vec3::cross(normal, a));
        let v = vec3::cross(normal, u);

        Onb {
            axis: [u, v, normal],
        }
    }

    pub fn u(&self) -> Vec3 {
        self.axis[0]
    }

    pub fn v(&self) -> Vec3 {
        self.axis[1]
    }

    pub fn w(&self) -> Vec3 {
        self.axis[2]
    }

    // Interpret v's components as coordinates in this basis and map them to world space.
    pub fn transform(&self, v: Vec3) -> Vec3 {
        self.axis[0] * v.x() + self.axis[1] * v.y() + self.axis[2] * v.z()
    }
}

Our ONB class holds three mutually orthogonal unit vectors, constructed by a couple of cross products with a given normal vector (the normal itself becomes the w axis).

It has a transform method which takes a Vec3 expressed in our basis space and transforms it into world space.
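As a minimal usage sketch (the vectors here are made up for illustration, and I assume Vec3 exposes the usual length() helper): a direction with z = 1 in basis space comes back out as the normal itself, since the normal is stored as the w axis.

// Hypothetical example: build a basis around a surface normal.
let normal = Vec3::new(0.0, 1.0, 0.0);
let uvw = Onb::new(&normal);

// (0, 0, 1) in basis space points along w, i.e. along the normal.
let world_dir = uvw.transform(Vec3::new(0.0, 0.0, 1.0));
assert!((world_dir - normal).length() < 1e-9);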

Random Cosine

We will also create a new way to generate a random Vec3.

pub fn random_cosine_direction() -> Vec3 {
    let r1 = random_double();
    let r2 = random_double();
    // r1 picks the azimuthal angle uniformly; r2 sets the height so that
    // directions are cosine-weighted toward the +z axis.
    let phi = 2.0 * std::f64::consts::PI * r1;
    let x = f64::cos(phi) * f64::sqrt(r2);
    let y = f64::sin(phi) * f64::sqrt(r2);
    let z = f64::sqrt(1.0 - r2);
    Vec3::new(x, y, z)
}

We draw two uniform random variables (r1, r2): r1 gives the azimuthal angle phi, while r2 controls the polar angle so that the resulting directions follow a cosine-weighted hemisphere distribution around the +z axis (density cos(theta)/pi). We then convert from spherical to Cartesian coordinates.
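A quick way to convince yourself the weighting is right: for a cosine-weighted hemisphere, the expected value of z = cos(theta) works out to 2/3. A throwaway Monte Carlo check (not part of the renderer, and assuming Vec3 has the usual z() accessor) should land near that:

// Throwaway check: average z over many cosine-weighted samples.
// E[cos(theta)] for density cos(theta)/pi over the hemisphere is 2/3.
fn check_cosine_weighting() {
    let n = 1_000_000;
    let mean_z: f64 = (0..n)
        .map(|_| random_cosine_direction().z())
        .sum::<f64>() / n as f64;
    println!("mean z = {:.4} (expected ~0.6667)", mean_z);
}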

Material

We are going to add a pdf_value field to our scatter record

pub struct ScatterRecord {
    pub attenuation: Color,
    pub scattered: Ray,
    pub pdf_value: f64,
}

and add a new method to the Material trait, with a default implementation, to query a PDF value directly

pub trait Material: Send + Sync {
    fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<ScatterRecord>;

    // Most materials emit nothing.
    fn emitted(&self, _u: f64, _v: f64, _p: &Point3) -> Color {
        Color::new(0.0, 0.0, 0.0)
    }

    // Probability density of scattering from r_in into the given direction.
    fn scatter_pdf(&self, _r_in: &Ray, _rec: &HitRecord, _scattered: &Ray) -> f64 {
        0.0
    }
}

Our Isotropic material scatters uniformly in every direction over the whole sphere, so it always returns the constant density 1/(4*PI), no matter where the incoming ray hits.

impl Material for Isotropic {
    fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<ScatterRecord> {
        // Scatter uniformly over the unit sphere.
        let scattered = Ray::new(rec.p, random_unit_vector(), r_in.time());
        let attenuation = self.albedo.get_color(rec.u, rec.v, &rec.p);

        Some(ScatterRecord {
            attenuation,
            scattered,
            pdf_value: 1.0 / (4.0 * std::f64::consts::PI),
        })
    }

    fn scatter_pdf(&self, _r_in: &Ray, _rec: &HitRecord, _scattered: &Ray) -> f64 {
        1.0 / (4.0 * std::f64::consts::PI)
    }
}
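That constant is just normalization: a PDF over directions must integrate to 1, and the full sphere of directions has solid angle 4*pi, so a uniform density must satisfy

\int_{S^2} p\, d\omega = p \cdot 4\pi = 1 \quad \Longrightarrow \quad p = \frac{1}{4\pi}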

Our Lambertian material will use a cosine-weighted sampled direction to create its scatter ray and PDF value.

impl Material for Lambertian {
    fn scatter(&self, r_in: &Ray, rec: &HitRecord) -> Option<ScatterRecord> {
        // Sample a cosine-weighted direction around the surface normal.
        let uvw = Onb::new(&rec.normal);
        let scatter_direction = uvw.transform(random_cosine_direction());

        Some(ScatterRecord {
            attenuation: self.albedo.get_color(rec.u, rec.v, &rec.p),
            scattered: Ray::new(rec.p, scatter_direction, r_in.time()),
            // scatter_direction is unit length, so this is cos(theta) / pi.
            pdf_value: vec3::dot(uvw.w(), scatter_direction) / std::f64::consts::PI,
        })
    }

    fn scatter_pdf(&self, _r_in: &Ray, rec: &HitRecord, scattered: &Ray) -> f64 {
        let cos_theta = vec3::dot(rec.normal, vec3::unit_vector(scattered.direction()));
        if cos_theta > 0.0 {
            cos_theta / std::f64::consts::PI
        } else {
            0.0
        }
    }
}
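The division by pi plays the same normalization role for the cosine density over the hemisphere:

\int_0^{2\pi} \int_0^{\pi/2} \frac{\cos\theta}{\pi} \sin\theta\, d\theta\, d\phi = \frac{2\pi}{\pi} \cdot \frac{1}{2} = 1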

Now, inside our camera’s ray_color function, we can use this scattered ray and PDF value.

Some(scatter_rec) => {
    let scatter_pdf = hit_rec.mat.scatter_pdf(ray, &hit_rec, &scatter_rec.scattered);
    // Weight the sample by the material's scattering PDF and divide by the
    // density we actually sampled with (scatter_rec.pdf_value).
    let color_from_scatter = (scatter_rec.attenuation
        * self.ray_color(&scatter_rec.scattered, world, depth - 1)
        * scatter_pdf)
        / scatter_rec.pdf_value;
    return color_from_emission + color_from_scatter;
},

This does not change our render yet: for these materials the sampling density (pdf_value) and the material’s scattering PDF are identical, so the multiply and divide cancel out. But it sets up the infrastructure for us to sample from more useful PDFs, where the two no longer cancel.
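Written out, the per-bounce estimator we have just set up is

\text{color} = \text{emitted} + \frac{\text{attenuation} \cdot s_{\text{pdf}}(\omega) \cdot L_i(\omega)}{p(\omega)}

where s_pdf is the material’s scattering PDF, p is the density the scatter direction was actually drawn from, and L_i is the recursively estimated incoming light. Importance sampling will amount to choosing p to be something smarter than s_pdf.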

This will give us the following render (with 1000 samples per pixel)

Conclusion

In this article we started to build some intuition about how Monte Carlo methods can help us converge to a good render faster with the help of probability. We sampled pixels in a more uniform manner and started developing our idea of PDFs for choosing scatter rays, which will become the basis of the importance sampling feature in the coming articles.
