WebGPU Rendering: Part 4 Loading a Model

Matthew MacFarquhar

Introduction

I have been reading through this online book on WebGPU. In this series of articles, I will work through the book and implement its lessons using a more structured TypeScript class approach; by the end we will have built three types of WebGPU renderers: Gaussian Splatting, Ray Tracing, and Rasterization.

In this article we are no longer stuck playing around with a single triangle: we will build a 3D model loader that can display an entire OBJ. We will also discuss surface normals and how to generate them from a given OBJ’s faces.

The following link is the commit in my GitHub repo that matches the code we will go over.

Loading OBJ Data

We will be loading OBJ data from a file with the help of obj-file-parser. We will wrap the parsing logic in our own class to make accessing the vertex attributes easier.
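For reference, the raw parser usage underneath our wrapper looks roughly like the sketch below; only the fields our class actually reads are shown, and the teapot.obj path is a made-up example.

import ObjFileParser from "obj-file-parser";

// Hypothetical asset path; any OBJ text string works.
const objText = await (await fetch("/teapot.obj")).text();
const objFile = new ObjFileParser(objText).parse();

// The pieces ObjDataExtractor (below) relies on:
const model = objFile.models[0];
console.log(model.vertices[0]);        // { x, y, z }
console.log(model.faces[0].vertices);  // [{ vertexIndex, ... }, ...] with 1-based indices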

ObjDataExtractor

Our ObjDataExtractor will have getters and properties for the things we care about for our render (vertex positions, normals and indices).

import ObjFileParser from "obj-file-parser";
import * as glMatrix from "gl-matrix";

class ObjDataExtractor {
    private _vertexPositions: Float32Array;
    private _indices: Uint16Array;
    private _normals: Float32Array;

    constructor(objText: string) {
        const objFileParser = new ObjFileParser(objText);
        const objFile = objFileParser.parse();

        // Flatten the parsed vertices into a tightly packed xyz array.
        this._vertexPositions = new Float32Array(objFile.models[0].vertices.flatMap(v => [v.x, v.y, v.z]));

        const indices: number[] = [];
        // One xyz normal per vertex, accumulated from every face that touches it.
        const normals: number[] = Array(this._vertexPositions.length).fill(0);
        for (const face of objFile.models[0].faces) {
            let points = [];
            let facet_indices = [];
            for (const v of face.vertices) {
                // OBJ indices are 1-based, our buffers are 0-based.
                const index = v.vertexIndex - 1;
                indices.push(index);

                const vertex = glMatrix.vec3.fromValues(this._vertexPositions[index * 3], this._vertexPositions[index * 3 + 1], this._vertexPositions[index * 3 + 2]);
                points.push(vertex);
                facet_indices.push(index);
            }

            // The face normal is the normalized cross product of two face edges.
            const v1 = glMatrix.vec3.subtract(glMatrix.vec3.create(), points[1], points[0]);
            const v2 = glMatrix.vec3.subtract(glMatrix.vec3.create(), points[2], points[0]);
            const cross = glMatrix.vec3.cross(glMatrix.vec3.create(), v1, v2);
            const normal = glMatrix.vec3.normalize(glMatrix.vec3.create(), cross);

            // Accumulate the face normal into each vertex of this face.
            for (const i of facet_indices) {
                normals[i * 3] += normal[0];
                normals[i * 3 + 1] += normal[1];
                normals[i * 3 + 2] += normal[2];
            }
        }
        this._normals = new Float32Array(normals);
        this._indices = new Uint16Array(indices);
    }

    public get vertexPositions(): Float32Array {
        return this._vertexPositions;
    }

    public get indices(): Uint16Array {
        return this._indices;
    }

    public get normals(): Float32Array {
        return this._normals;
    }
}

All of the work is done in the constructor. We get the vertex positions by parsing the OBJ file and reading the vertices property of its first model. To get the indices and normals we need to perform our own processing.

We iterate through the OBJ faces and push each face’s vertex references (shifted from 1-based to 0-based) into our indices array.

Next, we determine the surface normal of the face by taking the normalized cross product of two of its edges, and then we add that face normal into the normal of every vertex on the face. A vertex shared by several faces therefore accumulates all of their normals, giving it an averaged direction once it is normalized in the shader.
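To make the normal computation concrete, here is a minimal sketch (the one-triangle OBJ string is a made-up example) that runs the extractor on a triangle lying in the XY plane and reproduces its face normal by hand:

import * as glMatrix from "gl-matrix";

// A made-up one-triangle OBJ: three vertices in the XY plane and one face.
const triangleObj = `
v 0 0 0
v 1 0 0
v 0 1 0
f 1 2 3
`;

const extractor = new ObjDataExtractor(triangleObj);
console.log(extractor.indices); // Uint16Array [0, 1, 2]
console.log(extractor.normals); // Float32Array [0, 0, 1, 0, 0, 1, 0, 0, 1]

// The same normal computed by hand: cross(edge1, edge2), then normalize.
const edge1 = glMatrix.vec3.fromValues(1, 0, 0); // v2 - v1
const edge2 = glMatrix.vec3.fromValues(0, 1, 0); // v3 - v1
const normal = glMatrix.vec3.normalize(glMatrix.vec3.create(), glMatrix.vec3.cross(glMatrix.vec3.create(), edge1, edge2));
console.log(normal); // [0, 0, 1]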

Normal Matrix

We need to compute a separate transformation matrix for our normals. Normals are directions (not points like our positions), so when the model-view matrix contains non-uniform scaling, applying it to a normal directly would skew the normal away from being perpendicular to its surface.

https://shi-yan.github.io/webgpuunleashed/Basics/understanding_normals.html

The second image shows what would happen if we blindly applied the transformation matrix to our normal: it is no longer perpendicular to the surface.

Below is how we construct our normal matrix: it is the inverse transpose of the model-view matrix. Inverting and then transposing cancels out the non-uniform scale while preserving rotation, so the transformed normal stays perpendicular to the transformed surface.

// Camera at (3, 3, 3) looking at the origin, with +Z as the up direction.
const modelViewMatrix = glMatrix.mat4.lookAt(glMatrix.mat4.create(), glMatrix.vec3.fromValues(3, 3, 3), glMatrix.vec3.fromValues(0, 0, 0), glMatrix.vec3.fromValues(0.0, 0.0, 1.0));
const projectionMatrix = glMatrix.mat4.perspective(glMatrix.mat4.create(), 1.4, 640.0 / 480.0, 0.1, 1000.0);
// normalMatrix = transpose(inverse(modelViewMatrix))
const modelViewMatrixInverse = glMatrix.mat4.invert(glMatrix.mat4.create(), modelViewMatrix);
const normalMatrix = glMatrix.mat4.transpose(glMatrix.mat4.create(), modelViewMatrixInverse);
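If we want to convince ourselves this matrix behaves as intended, the sketch below (using a made-up non-uniform scale instead of the lookAt matrix above) transforms a surface tangent with the model matrix and a normal with both matrices, and checks which result stays perpendicular:

import * as glMatrix from "gl-matrix";

// A made-up model matrix that scales x by 2 (no translation, so transformMat4 is safe for directions).
const model = glMatrix.mat4.fromScaling(glMatrix.mat4.create(), glMatrix.vec3.fromValues(2, 1, 1));
const modelInverse = glMatrix.mat4.invert(glMatrix.mat4.create(), model);
const normalMat = glMatrix.mat4.transpose(glMatrix.mat4.create(), modelInverse);

// A 45-degree surface: tangent (1, -1, 0) and normal (1, 1, 0) start out perpendicular.
const tangent = glMatrix.vec3.transformMat4(glMatrix.vec3.create(), glMatrix.vec3.fromValues(1, -1, 0), model);
const naiveNormal = glMatrix.vec3.transformMat4(glMatrix.vec3.create(), glMatrix.vec3.fromValues(1, 1, 0), model);
const correctNormal = glMatrix.vec3.transformMat4(glMatrix.vec3.create(), glMatrix.vec3.fromValues(1, 1, 0), normalMat);

console.log(glMatrix.vec3.dot(tangent, naiveNormal));   // 3 -> no longer perpendicular
console.log(glMatrix.vec3.dot(tangent, correctNormal)); // 0 -> still perpendicular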

Render

Now we can put this new logic into a render function.

public async render_obj_model(shaderCode: string, objFilePath: string, transformationMatrix: Float32Array, projectionMatrix: Float32Array, normalMatrix: Float32Array, primitiveState: GPUPrimitiveState, depthStencilState: GPUDepthStencilState) {
    // Fetch and parse the OBJ file.
    const objResponse = await fetch(objFilePath);
    const objBlob = await objResponse.blob();
    const objText = await objBlob.text();
    const objDataExtractor = new ObjDataExtractor(objText);

    const depthTexture = this._createDepthTexture();

    // Upload the three matrices as uniform buffers.
    const transformationMatrixBuffer = this._createGPUBuffer(transformationMatrix, GPUBufferUsage.UNIFORM);
    const projectionMatrixBuffer = this._createGPUBuffer(projectionMatrix, GPUBufferUsage.UNIFORM);
    const normalMatrixBuffer = this._createGPUBuffer(normalMatrix, GPUBufferUsage.UNIFORM);

    const transformationMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        buffer: transformationMatrixBuffer,
    };
    const projectionMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        buffer: projectionMatrixBuffer,
    };
    const normalMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        buffer: normalMatrixBuffer,
    };

    const { bindGroupLayout: uniformBindGroupLayout, bindGroup: uniformBindGroup } = this._createUniformBindGroup([transformationMatrixBindGroupInput, projectionMatrixBindGroupInput, normalMatrixBindGroupInput]);

    // Vertex buffers for positions (location 0) and normals (location 1), plus the index buffer.
    const { buffer: positionBuffer, layout: positionBufferLayout } = this._createSingleAttributeVertexBuffer(objDataExtractor.vertexPositions, { format: "float32x3", offset: 0, shaderLocation: 0 }, 3 * Float32Array.BYTES_PER_ELEMENT);
    const { buffer: normalBuffer, layout: normalBufferLayout } = this._createSingleAttributeVertexBuffer(objDataExtractor.normals, { format: "float32x3", offset: 0, shaderLocation: 1 }, 3 * Float32Array.BYTES_PER_ELEMENT);
    const indexBuffer = this._createGPUBuffer(objDataExtractor.indices, GPUBufferUsage.INDEX);

    // Record and submit the draw call.
    const commandEncoder = this._device.createCommandEncoder();

    const passEncoder = commandEncoder.beginRenderPass(this._createRenderTarget(depthTexture));
    passEncoder.setViewport(0, 0, this._canvas.width, this._canvas.height, 0, 1);
    passEncoder.setPipeline(this._createPipeline(this._createShaderModule(shaderCode), [positionBufferLayout, normalBufferLayout], [uniformBindGroupLayout], primitiveState, depthStencilState));
    passEncoder.setVertexBuffer(0, positionBuffer);
    passEncoder.setVertexBuffer(1, normalBuffer);
    passEncoder.setIndexBuffer(indexBuffer, "uint16");
    passEncoder.setBindGroup(0, uniformBindGroup);
    passEncoder.drawIndexed(objDataExtractor.indices.length, 1, 0, 0, 0);
    passEncoder.end();

    this._device.queue.submit([commandEncoder.finish()]);
}

The new highlights here are that we load our vertex positions, indices, and normals from the parsed OBJ instead of hard-coding them in our application, and that we have a new bind group entry for our normal matrix.
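Putting it together, a call could look roughly like the sketch below. The renderer variable, the teapot.obj path, and the primitive and depth-stencil settings are assumptions for illustration (in particular, the depth format has to match whatever _createDepthTexture creates); the matrices are the ones we built earlier with gl-matrix.

// Hypothetical usage, assuming `renderer` is an instance of the class exposing render_obj_model
// and `shaderCode` holds the WGSL source from the next section.
const primitiveState: GPUPrimitiveState = {
    topology: "triangle-list",
    frontFace: "ccw",
    cullMode: "none",
};
const depthStencilState: GPUDepthStencilState = {
    format: "depth24plus", // assumed to match the depth texture format
    depthWriteEnabled: true,
    depthCompare: "less",
};

await renderer.render_obj_model(
    shaderCode,
    "/teapot.obj",                        // made-up asset path
    new Float32Array(modelViewMatrix),    // from the gl-matrix snippet above
    new Float32Array(projectionMatrix),
    new Float32Array(normalMatrix),
    primitiveState,
    depthStencilState
);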

Shader

Now we need a shader which will use these surface normals that we are passing in.

@group(0) @binding(0)
var<uniform> modelView: mat4x4<f32>;
@group(0) @binding(1)
var<uniform> projection: mat4x4<f32>;
@group(0) @binding(2)
var<uniform> normalMatrix: mat4x4<f32>;

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) normal: vec3<f32>
};

@vertex
fn vs_main(
    @location(0) inPos: vec3<f32>,
    @location(1) inNormal: vec3<f32>
) -> VertexOutput {
    var out: VertexOutput;
    out.clip_position = projection * modelView * vec4<f32>(inPos, 1.0);
    out.normal = normalize(normalMatrix * vec4<f32>(inNormal, 0.0)).xyz;
    return out;
}

@fragment
fn fs_main(in: VertexOutput, @builtin(front_facing) face: bool) -> @location(0) vec4<f32> {
    if (face) {
        var normal: vec3<f32> = normalize(in.normal);
        return vec4<f32>(normal, 1.0);
    } else {
        return vec4<f32>(0.0, 1.0, 0.0, 1.0);
    }
}

In vs_main, we transform our position using our transformation and projection matrices as always, and we now also transform and normalize our normals using the input normal matrix (note the w component of 0.0, since a normal is a direction rather than a point).

In our fs_main, we use the normal direction to decide what color to paint the fragment: front faces get their normal mapped directly to RGB, and back faces are painted green.

Conclusion

In this article, we have built an OBJ loader and renderer which works with any OBJ that has positions, generating surface normals for it. At this point we have almost everything we need to build a basic renderer. We could now easily enhance our OBJ loader to extract the UVs of our OBJ and add a material to our model.
