Ray Tracing in One Weekend: Part 11 Importance Sampling on Lights
Introduction
I have been reading through this book series on ray tracing, which walks the reader through the creation of a ray tracer using C++. In this series of articles, I will be going through the book, implementing the lessons in Rust instead, and diving deep into each piece.
In this article we will be adding importance sampling to our raytracer with the help of PDFs (Probability Density Functions). These PDFs model the distribution of “important” objects in our scene, allowing our renders to fire off more informed scatter rays and converge to an acceptable image with fewer samples per pixel.
The following link is the commit in my GitHub repo that matches the code we will go over.
Adding PDFs to Our Renderer
The first thing we will need to do to be able to apply PDFs to our rendering process is to add PDF properties to our shapes. Let’s start with encapsulating this PDF logic into a struct.
Probability Density Function
We will create a trait for our PDF structs to implement, which exposes the necessary PDF values.
pub trait Pdf {
    fn value(&self, direction: Vec3) -> f64;
    fn generate(&self) -> Vec3;
}
These let us generate a random vector that follows the PDF's distribution, and determine the probability density (value) that a given direction was sampled from this PDF.
The first PDF we will build is the sphere PDF, which samples directions uniformly over the unit sphere. Every direction is equally likely, so our value function just returns a constant: 1/(4π), the reciprocal of the unit sphere's surface area.
pub struct SpherePdf {}

impl SpherePdf {
    pub fn new() -> SpherePdf {
        SpherePdf {}
    }
}

impl Pdf for SpherePdf {
    fn value(&self, _direction: Vec3) -> f64 {
        1.0 / (4.0 * std::f64::consts::PI)
    }

    fn generate(&self) -> Vec3 {
        random_unit_vector()
    }
}
Our cosine PDF generates a random vector by sampling a cosine-distributed direction in the basis's local space and then transforming it into world space. We determine the density by dotting the unit direction with the PDF's orthonormal basis w vector (the cosine of the angle between them), clamping it to be non-negative, and dividing by π so that the density integrates to 1 over the hemisphere.
pub struct CosinePdf {
    uvw: Onb
}

impl CosinePdf {
    pub fn new(w: Vec3) -> CosinePdf {
        CosinePdf {
            uvw: Onb::new(&w)
        }
    }
}

impl Pdf for CosinePdf {
    fn value(&self, direction: Vec3) -> f64 {
        let cos_theta = dot(unit_vector(direction), self.uvw.w());
        f64::max(0.0, cos_theta / std::f64::consts::PI)
    }

    fn generate(&self) -> Vec3 {
        self.uvw.transform(Vec3::random_cosine_direction())
    }
}
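The cosine PDF relies on an Onb (orthonormal basis) helper and Vec3::random_cosine_direction, which are not shown here. Below is a minimal sketch of what they might look like; the exact construction, accessors, and field names are assumptions, not necessarily the repo's implementation.

// Sketch of an orthonormal basis built around a vector w (assumed helper).
pub struct Onb {
    u: Vec3,
    v: Vec3,
    w: Vec3,
}

impl Onb {
    pub fn new(w: &Vec3) -> Onb {
        let w = unit_vector(*w);
        // Pick any axis that is not nearly parallel to w to complete the basis.
        let a = if f64::abs(w.x()) > 0.9 {
            Vec3::new(0.0, 1.0, 0.0)
        } else {
            Vec3::new(1.0, 0.0, 0.0)
        };
        let v = unit_vector(cross(w, a));
        let u = cross(w, v);
        Onb { u, v, w }
    }

    pub fn w(&self) -> Vec3 {
        self.w
    }

    // Express a vector given in local (u, v, w) coordinates in world space.
    pub fn transform(&self, a: Vec3) -> Vec3 {
        a.x() * self.u + a.y() * self.v + a.z() * self.w
    }
}

impl Vec3 {
    // Sample a cosine-weighted direction around the local z axis.
    pub fn random_cosine_direction() -> Vec3 {
        let r1 = random_double();
        let r2 = random_double();
        let phi = 2.0 * std::f64::consts::PI * r1;
        let x = f64::cos(phi) * f64::sqrt(r2);
        let y = f64::sin(phi) * f64::sqrt(r2);
        let z = f64::sqrt(1.0 - r2);
        Vec3::new(x, y, z)
    }
}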
Our last PDF, HittablePdf, will delegate the probability calculation and the generated vector to the underlying object it holds.
pub struct HittablePdf {
    origin: Point3,
    objects: Arc<dyn Hittable>
}

impl HittablePdf {
    pub fn new(origin: Point3, objects: Arc<dyn Hittable>) -> HittablePdf {
        HittablePdf { origin, objects }
    }
}

impl Pdf for HittablePdf {
    fn value(&self, direction: Vec3) -> f64 {
        self.objects.pdf_value(self.origin, direction)
    }

    fn generate(&self) -> Vec3 {
        self.objects.random(self.origin)
    }
}
This means our Hittable trait now needs a couple of new functions (with default implementations) to aid in this PDF calculation.
pub trait Hittable: Send + Sync {
    fn hit(&self, ray: &Ray, t_min: f64, t_max: f64) -> Option<HitRecord>;

    fn pdf_value(&self, origin: Point3, direction: Vec3) -> f64 {
        0.0
    }

    fn random(&self, origin: Point3) -> Vec3 {
        Vec3::new(1.0, 0.0, 0.0)
    }
}
Quad PDF
We will start by implementing the PDF functions for our most prominent object, the quad. To do this we will need to calculate the area of our quad.
pub fn new(q: Point3, u: Vec3, v: Vec3, mat: Arc<dyn Material>) -> Self {
    let n = cross(u, v);
    ...
    let area = n.length();

    Quad {
        q,
        u,
        v,
        w,
        mat,
        d,
        normal,
        area,
    }
}
Now, we can allow a quad to implement PDF functions.
fn pdf_value(&self, origin: Point3, direction: Vec3) -> f64 {
    let rec = self.hit(&Ray::new(origin, direction, 0.0), 0.001, common::INFINITY);
    if let None = rec {
        return 0.0;
    }
    let rec = rec.expect("Should have a hit record");

    let distance_squared = rec.t * rec.t * direction.length_squared();
    let cosine = f64::abs(dot(direction, rec.normal) / direction.length());

    distance_squared / (cosine * self.area)
}

fn random(&self, origin: Point3) -> Vec3 {
    let p = self.q + (random_double() * self.u) + (random_double() * self.v);
    unit_vector(p - origin)
}
Our random function picks a random point p on the quad's surface and returns the unit direction from our origin to that point; essentially, we generate a random vector from our origin toward some point on our quad.
The probability value calculation first determines whether the ray hits the quad; if not, we return 0.0, since there is no chance this direction would be sampled from our quad PDF.
Distance squared accounts for how the PDF changes with distance. The farther away the quad is, the smaller the solid angle it subtends from the origin, so any direction that does hit it must be assigned a density that grows proportionally to the squared distance.
Cosine accounts for the angle at which the ray hits the surface. If the ray grazes the surface (cosine close to 0), the quad's projected area shrinks and it covers even less solid angle, so the density for a direction that does hit it grows; the cosine sits in the denominator and is inversely related to the PDF value.
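To get a feel for the scale of these values, here is a rough back-of-the-envelope check. The numbers below are made up for illustration (roughly the size and distance of a Cornell box light), not taken from the renderer:

fn main() {
    // Hypothetical numbers: a 130 x 105 light seen head-on from ~550 units away.
    let area = 130.0 * 105.0;
    let distance: f64 = 550.0;
    let cosine: f64 = 1.0;

    // Solid-angle density of a direction sampled toward the light.
    let light_pdf = distance * distance / (cosine * area);

    // For comparison: uniform sampling over the whole sphere of directions.
    let uniform_pdf = 1.0 / (4.0 * std::f64::consts::PI);

    // Prints roughly: light pdf = 22.16, uniform pdf = 0.0796
    println!("light pdf = {:.2}, uniform pdf = {:.4}", light_pdf, uniform_pdf);
}

The density toward the light is a couple of orders of magnitude higher than under uniform sphere sampling, which is exactly the concentration of samples that importance sampling buys us.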
Material Clean Up
Before we can add this PDF system to our renderer, we are going to make two small additions to our materials.
First, we will add an empty material so we can attach it to the shapes we use for importance sampling (like the light quad we pass to the camera), since a Quad requires a material to be constructed.
pub struct Empty {}

impl Empty {
    pub fn new() -> Empty {
        Empty {}
    }
}

impl Material for Empty {
    fn scatter(&self, _r_in: &Ray, _rec: &HitRecord) -> Option<ScatterRecord> {
        None
    }
}
Then, we just need to make a quick bug fix so that our DiffuseLight material only emits light when we hit its front face.
impl Material for DiffuseLight {
    fn scatter(&self, _r_in: &Ray, _rec: &HitRecord) -> Option<ScatterRecord> {
        None
    }

    fn emitted(&self, hit_rec: &HitRecord, u: f64, v: f64, p: &Point3) -> Color {
        if hit_rec.front_face {
            self.albedo.get_color(u, v, p)
        } else {
            Color::new(0.0, 0.0, 0.0)
        }
    }
}
Rendering With Importance Sampling
Now, let’s update our rendering system to take in importance shapes to direct our rays and converge on a good image faster.
Adding Lights Array
We are going to update our camera's render function to take in a Hittable representing the lights (the important objects) in our scene, and pass it through to ray_color (whose parameters are also updated to accept the importance hittable).
pub fn render(&self, world: &HittableList, lights: Arc<dyn Hittable>) {
    print!("P3\n{} {}\n255\n", self.image_width, self.image_height);

    for j in (0..self.image_height).rev() {
        eprint!("\rScanlines remaining: {}", j);
        let pixel_colors: Vec<_> = (0..self.image_width)
            .into_par_iter()
            .map(|i| {
                let mut pixel_color = Color::new(0.0, 0.0, 0.0);
                for s_i in 0..self.sqrt_samples {
                    for s_j in 0..self.sqrt_samples {
                        let u = (i as f64 + random_double()) / (self.image_width - 1) as f64;
                        let v = (j as f64 + random_double()) / (self.image_height - 1) as f64;
                        let r = self.get_ray(u, v, s_i, s_j);
                        pixel_color += self.ray_color(&r, world, lights.clone(), self.max_depth);
                    }
                }
                pixel_color
            })
            .collect();

        for pixel_color in pixel_colors {
            color::write_color(&mut io::stdout(), pixel_color, self.samples_per_pixel);
        }
    }

    eprint!("\nDone.\n");
}
In our world construction, we can create a shape for the important light in the scene. Currently we have only implemented the PDF functions for quads, so our light will be a quad; eventually we will implement the PDF functions for a hittable list so we can take in many lights (a possible sketch of that is shown after the snippet below).
let lights = Arc::new(Quad::new(
    Point3::new(343.0, 554.0, 332.0),
    Vec3::new(-130.0, 0.0, 0.0),
    Vec3::new(0.0, 0.0, -105.0),
    Arc::new(Empty::new()),
));
...
camera.render(&world, lights);
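For reference, when we do add multi-light support, the HittableList versions of pdf_value and random could look something like the sketch below. This is not part of this commit; the objects field name and the random_int helper are assumptions. The idea is simply to average the member densities and to pick a random member to generate a direction from.

fn pdf_value(&self, origin: Point3, direction: Vec3) -> f64 {
    // Average the densities of every object in the list.
    let weight = 1.0 / self.objects.len() as f64;
    self.objects
        .iter()
        .map(|object| weight * object.pdf_value(origin, direction))
        .sum()
}

fn random(&self, origin: Point3) -> Vec3 {
    // Pick one object uniformly at random and sample a direction toward it.
    let index = random_int(0, self.objects.len() as i32 - 1) as usize;
    self.objects[index].random(origin)
}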
Updating Rendering
Now we can update our ray_color function. All of our changes are inside the block that runs when we hit something and decides how to scatter our ray.
fn ray_color(&self, ray: &Ray, world: &dyn Hittable, lights: Arc<dyn Hittable>, depth: i32) -> Color {
    if depth <= 0 {
        return Color::new(0.0, 0.0, 0.0);
    }

    if let Some(hit_rec) = world.hit(ray, 0.001, common::INFINITY) {
        let color_from_emission = hit_rec.mat.emitted(&hit_rec, hit_rec.u, hit_rec.v, &hit_rec.p);

        return match hit_rec.mat.scatter(ray, &hit_rec) {
            Some(scatter_rec) => {
                let light_pdf = HittablePdf::new(hit_rec.p, lights.clone());
                let scattered_ray = Ray::new(hit_rec.p, light_pdf.generate(), ray.time());
                let pdf_value = light_pdf.value(scattered_ray.direction());
                let scattered_pdf = hit_rec.mat.scatter_pdf(ray, &hit_rec, &scattered_ray);

                let sample_color = self.ray_color(&scattered_ray, world, lights.clone(), depth - 1);
                let color_from_scatter = (scatter_rec.attenuation * scattered_pdf * sample_color) / pdf_value;

                color_from_emission + color_from_scatter
            },
            None => color_from_emission
        };
    } else {
        return self.background;
    }
}
We create a light_pdf using our lights hittable and use that PDF to generate a direction for our scattered_ray. We then get the probability density that this direction would be sampled from our PDF (pdf_value), and get the scattered_pdf value from the material we hit.
We then use the scattered_ray to shoot a PDF-informed ray from our hit point, and weight the recursively sampled color by the material's attenuation and its scattered_pdf, divided by the pdf_value of the sampling PDF, before adding it to the emitted color for our final pixel color.
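The scatter_pdf function is the material's own density for the sampled direction and is not shown in this article; for a Lambertian material it is typically the same cosine density we built above. A rough sketch (an assumption, not necessarily the repo's code):

// Sketch of a Lambertian material's scattering density: cosine-weighted about the normal.
fn scatter_pdf(&self, _r_in: &Ray, rec: &HitRecord, scattered: &Ray) -> f64 {
    // Cosine of the angle between the surface normal and the scattered direction.
    let cos_theta = dot(rec.normal, unit_vector(scattered.direction()));
    if cos_theta < 0.0 {
        0.0
    } else {
        cos_theta / std::f64::consts::PI
    }
}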
By sampling from the light distribution, we converge on a good render much faster; the render below was generated with only 10 samples per pixel, since we accurately modeled where the important features in our scene were and directed our scattered rays toward them.
Conclusion
In this article, we applied PDFs to our rendering logic and showed how they can lead to dramatic speedups in converging to a good image. Importance sampling in ray tracing improves efficiency and image quality by concentrating samples on the light paths that significantly impact the scene's appearance, reducing noise and computational cost.