WebGPU Rendering: Part 5 Lighting

Matthew MacFarquhar
4 min read · Nov 26, 2024


Introduction

I have been reading through this online book on WebGPU. In this series of articles, I will be going through the book and implementing its lessons in a more structured TypeScript class approach; eventually we will build three types of WebGPU renderers: Gaussian Splatting, Ray Tracing, and Rasterization.

In this article we will largely complete our renderer, adding lighting so our model can respond to lights in the scene, and multi-sample anti-aliasing (MSAA) to reduce jagged edges in the rendered image.

The following link is the commit in my GitHub repo that matches the code we will go over.

Lighting

We have pretty much everything we need in a renderer at this point; the one thing we are still missing is lights. To use lights we need to adopt a shading model. We will be using the Phong model because it is relatively simple compared with other, more realistic shading models.

This shading model combines three types of light interaction with a material:

  • Ambient: Uniform light that affects all surfaces, regardless of orientation.
  • Diffuse: Light that reflects off a matte surface and scatters in all directions. Its intensity depends on the angle of the light relative to the surface normal.
  • Specular: Highlights or shiny spots on surfaces that depend on the viewer’s angle and the light’s direction.

Lighting and View Direction

To calculate these terms, we need two new uniforms: a light direction and a view direction.

const lightDirection = glMatrix.vec3.fromValues(-1, -1, -1);
const viewDirection = glMatrix.vec3.fromValues(-1, -1, -1);

These both point in the same direction so that the specular highlight is clearly visible.

Then, we can use the lighting and viewing data in our render function.

const lightDirectionBuffer = this._createGPUBuffer(lightDirection, GPUBufferUsage.UNIFORM);
const viewDirectionBuffer = this._createGPUBuffer(viewDirection, GPUBufferUsage.UNIFORM);

...
const lightDirectionBindGroupInput: IBindGroupInput = {
  type: "buffer",
  buffer: lightDirectionBuffer,
};
const viewDirectionBindGroupInput: IBindGroupInput = {
  type: "buffer",
  buffer: viewDirectionBuffer,
};
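
For context, _createGPUBuffer is the small helper from the earlier parts of this series. A minimal sketch of it, assuming it uploads the data at buffer creation time (the version in the repo may differ in details), looks like this:

private _createGPUBuffer(data: Float32Array, usage: GPUBufferUsageFlags): GPUBuffer {
  // Allocate the buffer mapped at creation so the data can be copied in directly.
  const buffer = this._device.createBuffer({
    size: data.byteLength,
    usage: usage | GPUBufferUsage.COPY_DST,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(data);
  buffer.unmap();
  return buffer;
}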

Shader Changes

We have passed in the constants we need for the Phong model, but most of the work happens in the shader code.

We define some global constants for our ambient, diffuse, and specular colors, and write helper functions that compute the diffuse and specular contributions for a fragment given the light direction, view direction, and surface normal.

@group(0) @binding(0)
var<uniform> modelView: mat4x4<f32>;
@group(0) @binding(1)
var<uniform> projection: mat4x4<f32>;
@group(0) @binding(2)
var<uniform> normalMatrix: mat4x4<f32>;

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) viewDir: vec3<f32>,
    @location(1) lightDir: vec3<f32>,
    @location(2) normal: vec3<f32>
};

const ambientColor: vec4<f32> = vec4<f32>(0.15, 0.0, 0.0, 1.0);
const diffuseColor: vec4<f32> = vec4<f32>(0.25, 0.25, 0.25, 1.0);
const specularColor: vec4<f32> = vec4<f32>(1.0, 1.0, 1.0, 1.0);
const shininess: f32 = 20.0;

const diffuseConstant:f32 = 1.0;
const specularConstant:f32 = 1.0;
const ambientConstant: f32 = 1.0;

fn specular(lightDir: vec3<f32>, viewDir: vec3<f32>, normal: vec3<f32>, specularColor: vec3<f32>, shininess: f32) -> vec3<f32> {
    var reflectDir: vec3<f32> = reflect(-lightDir, normal);
    var specDot: f32 = max(dot(reflectDir, viewDir), 0.0);
    return pow(specDot, shininess) * specularColor;
}

fn diffuse(lightDir: vec3<f32>, normal: vec3<f32>, diffuseColor: vec3<f32>) -> vec3<f32> {
    return max(dot(lightDir, normal), 0.0) * diffuseColor;
}

@vertex
fn vs_main(
    @location(0) inPos: vec3<f32>,
    @location(1) inNormal: vec3<f32>
) -> VertexOutput {
    var out: VertexOutput;
    out.viewDir = normalize((normalMatrix * vec4<f32>(-viewDirection, 0.0)).xyz);
    out.lightDir = normalize((normalMatrix * vec4<f32>(-lightDirection, 0.0)).xyz);
    out.normal = normalize(normalMatrix * vec4<f32>(inNormal, 0.0)).xyz;
    out.clip_position = projection * modelView * vec4<f32>(inPos, 1.0);
    return out;
}

@group(0) @binding(3)
var<uniform> lightDirection: vec3<f32>;
@group(0) @binding(4)
var<uniform> viewDirection: vec3<f32>;

@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    var lightDir: vec3<f32> = in.lightDir;
    var n: vec3<f32> = normalize(in.normal);
    var viewDir: vec3<f32> = in.viewDir;

    var radiance: vec3<f32> = ambientColor.rgb * ambientConstant +
        diffuse(lightDir, n, diffuseColor.rgb) * diffuseConstant +
        specular(lightDir, viewDir, n, specularColor.rgb, shininess) * specularConstant;

    return vec4<f32>(radiance, 1.0);
}

In vs_main, we transform the positions and normals as before, and we also transform the light and view directions by the normal matrix so that everything the fragment shader receives lives in the same space. Note that we negate lightDirection and viewDirection first: the uniforms point from the light and camera into the scene, while the shading math expects vectors pointing from the surface toward the light and the viewer.

In fs_main, we compute the radiance as a weighted sum of the ambient, diffuse, and specular contributions to produce the final color.
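
Written out, the sum the fragment shader computes is the classic Phong equation, with $L$, $V$, and $N$ being the light, view, and normal vectors from the vertex shader:

$$
\text{radiance} = k_a C_a + k_d \max(L \cdot N, 0)\, C_d + k_s \max(R \cdot V, 0)^{\alpha}\, C_s,
\qquad R = \operatorname{reflect}(-L, N)
$$

Here $k_a$, $k_d$, $k_s$ are ambientConstant, diffuseConstant, and specularConstant, $C_a$, $C_d$, $C_s$ are the corresponding colors, and $\alpha$ is shininess.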

Multi-Sample Anti-Aliasing

MSAA (multisample anti-aliasing) reduces jagged edges (aliasing) in 3D graphics by taking multiple coverage samples within each pixel and averaging their colors, concentrating the extra work on triangle edges where aliasing is most noticeable. This produces smoother, more natural-looking edges while being much cheaper than supersampling the entire image.

The GPU handles the nitty-gritty implementation of MSAA for us; all we need to do is add a parameter for the MSAA sample count (this._msaa) in our:

_createRenderTarget function

private _createRenderTarget(depthTexture?: GPUTexture): GPURenderPassDescriptor {
  ...
  if (this._msaa) {
    const msaaTexture = this._device.createTexture({
      size: { width: this._canvas.width, height: this._canvas.height },
      sampleCount: this._msaa,
      format: navigator.gpu.getPreferredCanvasFormat() as GPUTextureFormat,
      usage: GPUTextureUsage.RENDER_ATTACHMENT,
    });

    colorAttachment = {
      view: msaaTexture.createView(),
      resolveTarget: colorTextureView,
      clearValue: { r: 1, g: 0, b: 0, a: 1 },
      loadOp: "clear",
      storeOp: "store",
    }
  } else {
    colorAttachment = {
      view: colorTextureView,
      clearValue: { r: 1, g: 0, b: 0, a: 1 },
      loadOp: "clear",
      storeOp: "store",
    }
  }

  const renderPassDescriptor: GPURenderPassDescriptor = {
    colorAttachments: [colorAttachment]
  }

  ...

  return renderPassDescriptor;
}

Our _createPipeline function

private _createPipeline(shaderModule: GPUShaderModule, vertexBuffers: GPUVertexBufferLayout[], uniformBindGroups: GPUBindGroupLayout[]): GPURenderPipeline {
  ...
  const pipelineDescriptor: GPURenderPipelineDescriptor = {
    layout: layout,
    vertex: {
      module: shaderModule,
      entryPoint: WebGPUContext.VERTEX_ENTRY_POINT,
      buffers: vertexBuffers,
    },
    fragment: {
      module: shaderModule,
      entryPoint: WebGPUContext.FRAGMENT_ENTRY_POINT,
      targets: [colorState],
    },
    primitive: this._primitiveState,
    depthStencil: this._depthStencilState,
    multisample: this._msaa ? { count: this._msaa } : undefined
  }

  const pipeline = this._device.createRenderPipeline(pipelineDescriptor);
  return pipeline;
}

and our _createDepthTexture function (the depth texture's sampleCount must match the color attachment's, or the render pass will fail validation)

private _createDepthTexture(): GPUTexture {
  const depthTextureDesc: GPUTextureDescriptor = {
    size: { width: this._canvas.width, height: this._canvas.height },
    dimension: '2d',
    sampleCount: this._msaa,
    format: 'depth24plus-stencil8',
    usage: GPUTextureUsage.RENDER_ATTACHMENT
  };

  const depthTexture = this._device.createTexture(depthTextureDesc);
  return depthTexture;
}
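
One practical note: WebGPU currently only guarantees sample counts of 1 and 4 for render attachments, so 4 is the value to use for this._msaa. As an illustration of how it might be wired up (the factory and options object shown here are hypothetical, not the repo's exact API):

// Hypothetical setup: assume WebGPUContext exposes an async factory with an msaa option.
// WebGPU only guarantees sample counts of 1 and 4 for render attachments.
const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const context = await WebGPUContext.create(canvas, { msaa: 4 });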

Conclusion

With the introduction of lighting and the implementation of a simple shading model, we have built a fully usable WebGPU-based OBJ loader. There is a lot more we can (and will) do to augment this renderer in the coming articles, including controls for interactive scenes, shadows, different shading models, and animations.
