WebGPU Rendering: Part 9 Controls
Introduction
I have been reading through this online book on WebGPU. In this series of articles, I will work through the book and implement its lessons with a more structured TypeScript class approach; eventually we will build three types of WebGPU renderers: Gaussian splatting, ray tracing, and rasterization.
In this article we will talk about how to properly re-render our scene if the canvas we are painting to changes size and we will also introduce some camera controls allowing us to see the object from lots of different angles instead of just a static one. Both of these updates rely on running our render in a loop and updating textures and uniforms to correspond to updated canvas sizes and updated transformation matrices for the camera.
The following link is the commit in my GitHub repo that matches the code we will go over.
Canvas Updating
Re-rendering when the canvas size changes is important because the size directly impacts the coordinate system and how elements are drawn. Neglecting this update can potentially lead to distorted or misaligned visuals. A size change requires recalculating transformations and scaling to maintain the correct appearance and proportions of rendered objects.
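To see concretely why a stale size distorts the image, consider the perspective projection we build later with glMatrix.mat4.perspective: its x scale is divided by the aspect ratio, so if the canvas resizes but the matrix does not, everything stretches or squashes horizontally. A minimal plain-TypeScript sketch of just those two diagonal terms (perspectiveScales is a hypothetical helper for illustration, not part of the renderer):

```typescript
// The first two diagonal entries of a perspective matrix scale x and y.
// f = cot(fovy / 2); x is additionally divided by the aspect ratio, so
// a canvas resize without a matrix update scales x incorrectly.
function perspectiveScales(fovy: number, aspect: number): [number, number] {
  const f = 1.0 / Math.tan(fovy / 2);
  return [f / aspect, f]; // [x scale, y scale]
}

// Widening the canvas from square to 2:1 halves the x scale:
const [sxSquare] = perspectiveScales(1.4, 1.0);
const [sxWide] = perspectiveScales(1.4, 2.0);
console.log(sxWide === sxSquare / 2); // true
```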
Updating Data
To achieve proper rendering on canvas resizing, we need to update our projectionMatrix and our depth texture (since we manage it manually unlike the color target texture which the GPU takes care of for us).
We will detect when our current canvas width or height does not match with the values we have in our object state and trigger an update.
const devicePixelRatio = window.devicePixelRatio || 1;
const currentCanvasWidth = this._canvas.clientWidth * devicePixelRatio;
const currentCanvasHeight = this._canvas.clientHeight * devicePixelRatio;
let projectionMatrixUpdateBuffer: GPUBuffer | null = null;
if (currentCanvasWidth !== this._canvas.width || currentCanvasHeight !== this._canvas.height) {
    this._canvas.width = currentCanvasWidth;
    this._canvas.height = currentCanvasHeight;
    depthTexture.destroy();
    depthTexture = this._createDepthTexture();
    const updateProjectionMatrix = glMatrix.mat4.perspective(glMatrix.mat4.create(), 1.4, this._canvas.width / this._canvas.height, 0.1, 1000.0);
    projectionMatrixUpdateBuffer = this._createGPUBuffer(Float32Array.from(updateProjectionMatrix), GPUBufferUsage.COPY_SRC);
}
Then, when encoding our commands, we check whether there is a projection matrix to update and, if so, issue a command to perform the buffer copy.
if (projectionMatrixUpdateBuffer !== null) {
    commandEncoder.copyBufferToBuffer(projectionMatrixUpdateBuffer, 0, projectionMatrixBuffer, 0, 16 * Float32Array.BYTES_PER_ELEMENT);
}
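The snippets above rely on a _createGPUBuffer helper from earlier parts of the series, whose body is not shown here. As a sketch of what such a helper typically looks like, the code below uses WebGPU's mappedAtCreation pattern; the device parameter is typed loosely so the snippet stands alone without WebGPU type definitions, and alignTo4 is a hypothetical helper reflecting WebGPU's requirement that buffer sizes be a multiple of 4 bytes:

```typescript
// WebGPU buffer sizes must be a multiple of 4 bytes.
function alignTo4(byteLength: number): number {
  return Math.ceil(byteLength / 4) * 4;
}

// Sketch of a _createGPUBuffer-style helper using the mappedAtCreation
// pattern: create the buffer mapped, copy the data in, then unmap.
// `device` is typed as `any` so the snippet compiles without WebGPU types.
function createGPUBuffer(device: any, data: Float32Array, usage: number) {
  const buffer = device.createBuffer({
    size: alignTo4(data.byteLength),
    usage,
    mappedAtCreation: true,
  });
  new Float32Array(buffer.getMappedRange()).set(data);
  buffer.unmap();
  return buffer;
}
```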
Adding an Observer
This updating logic lives inside our render function, but how do we trigger the function when the canvas changes? This is where the built-in ResizeObserver comes in.
We add a new ResizeObserver with a callback to render a new frame, and then attach this observer to our canvas.
const resizeObserver = new ResizeObserver(() => {
    requestAnimationFrame(render);
});
resizeObserver.observe(this._canvas);
And that's all we need to correctly resize our render to match the canvas size!
ArcBall Controls
Now let's jump into adding an ArcBall to control our camera. The update flow will be very similar to how canvas resizing works:
- Detect a change caused by user action
- Get uniform buffers corresponding to this change
- Copy them to the GPU and re-render
ArcBall
We will encapsulate our camera orientation logic inside of an ArcBall class which will keep track of the modelViewMatrix for our camera and update it when it receives actions like pitching, yawing and rolling.
class Arcball {
    private _radius: number;
    private _forward: glMatrix.vec4;
    private _up: glMatrix.vec4;
    private _currentRotation: glMatrix.mat4;

    constructor(radius: number) {
        this._radius = radius;
        // The camera starts on the x axis, looking at the origin, with z up.
        this._forward = glMatrix.vec4.fromValues(this._radius, 0, 0, 0);
        this._up = glMatrix.vec4.fromValues(0, 0, 1, 0);
        this._currentRotation = glMatrix.mat4.create();
    }

    get forward() {
        return this._forward;
    }

    public yawPitch(originalX: number, originalY: number, newX: number, newY: number): void {
        const originalPoint = glMatrix.vec3.fromValues(1.0, originalX, originalY);
        const newPoint = glMatrix.vec3.fromValues(1.0, newX, newY);
        let rotationAxisVec3 = glMatrix.vec3.cross(glMatrix.vec3.create(), originalPoint, newPoint);
        // |a x b| = |a||b|sin(theta): take the sine before normalizing the axis,
        // otherwise the cross product's length is always 1.
        const sin = glMatrix.vec3.length(rotationAxisVec3) / (glMatrix.vec3.length(originalPoint) * glMatrix.vec3.length(newPoint));
        let rotationAxisVec4 = glMatrix.vec4.fromValues(rotationAxisVec3[0], rotationAxisVec3[1], rotationAxisVec3[2], 0.0);
        rotationAxisVec4 = glMatrix.vec4.transformMat4(glMatrix.vec4.create(), rotationAxisVec4, this._currentRotation);
        rotationAxisVec3 = glMatrix.vec3.normalize(glMatrix.vec3.create(), [rotationAxisVec4[0], rotationAxisVec4[1], rotationAxisVec4[2]]);
        const rotationMatrix = glMatrix.mat4.fromRotation(glMatrix.mat4.create(), Math.asin(Math.min(sin, 1.0)) * -0.03, rotationAxisVec3);
        // fromRotation returns null when the axis is too short to normalize.
        if (rotationMatrix !== null) {
            this._currentRotation = glMatrix.mat4.multiply(glMatrix.mat4.create(), rotationMatrix, this._currentRotation);
            this._forward = glMatrix.vec4.transformMat4(glMatrix.vec4.create(), this._forward, rotationMatrix);
            this._up = glMatrix.vec4.transformMat4(glMatrix.vec4.create(), this._up, rotationMatrix);
        }
    }

    public roll(originalX: number, originalY: number, newX: number, newY: number): void {
        const originalVec = glMatrix.vec3.fromValues(originalX, originalY, 0.0);
        const newVec = glMatrix.vec3.fromValues(newX, newY, 0.0);
        // The cross product's z sign gives the roll direction; the dot product gives the angle.
        const crossProd = glMatrix.vec3.cross(glMatrix.vec3.create(), originalVec, newVec);
        const cos = glMatrix.vec3.dot(glMatrix.vec3.normalize(glMatrix.vec3.create(), originalVec), glMatrix.vec3.normalize(glMatrix.vec3.create(), newVec));
        const rad = Math.acos(Math.min(cos, 1.0)) * Math.sign(crossProd[2]);
        const rotationMatrix = glMatrix.mat4.fromRotation(glMatrix.mat4.create(), -rad, glMatrix.vec3.fromValues(this._forward[0], this._forward[1], this._forward[2]));
        this._currentRotation = glMatrix.mat4.multiply(glMatrix.mat4.create(), rotationMatrix, this._currentRotation);
        // Apply only the new rotation to up; applying _currentRotation would re-apply past rotations.
        this._up = glMatrix.vec4.transformMat4(glMatrix.vec4.create(), this._up, rotationMatrix);
    }

    public getMatrices() {
        // Look from the camera position toward the origin with the tracked up vector.
        const modelViewMatrix = glMatrix.mat4.lookAt(glMatrix.mat4.create(),
            glMatrix.vec3.fromValues(this._forward[0], this._forward[1], this._forward[2]),
            glMatrix.vec3.fromValues(0, 0, 0),
            glMatrix.vec3.fromValues(this._up[0], this._up[1], this._up[2]));
        return modelViewMatrix;
    }
}
This class has getters to retrieve the forward direction and the transformation matrix of the camera. The two ways we can update the camera are yawPitch (rotation around the Y and X axes) and roll (rotation around the Z axis).
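The angle used in yawPitch comes from a standard identity: the magnitude of the cross product of two vectors equals the product of their lengths times the sine of the angle between them. A small plain-TypeScript illustration of that identity, with minimal cross and length helpers standing in for their gl-matrix equivalents:

```typescript
type Vec3 = [number, number, number];

// Cross product of two 3D vectors.
function cross(a: Vec3, b: Vec3): Vec3 {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

function length(v: Vec3): number {
  return Math.hypot(v[0], v[1], v[2]);
}

// |a x b| = |a| * |b| * sin(theta), so the sine of the rotation angle
// falls straight out of the cross product.
const a: Vec3 = [1, 0, 0];
const b: Vec3 = [Math.cos(0.5), Math.sin(0.5), 0]; // a rotated by 0.5 rad
const sinTheta = length(cross(a, b)) / (length(a) * length(b));
console.log(Math.abs(Math.asin(sinTheta) - 0.5) < 1e-9); // true
```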
Controls
Our ArcBall holds information about the camera orientation, but it does not interface directly with the canvas to receive user actions. We will build a Controls class which will help us map canvas actions to ArcBall manipulations.
enum DragType {
    NONE,
    YAW_PITCH,
    ROLL
}
class Controls {
    private _canvas: HTMLCanvasElement;
    private _prevX: number;
    private _prevY: number;
    private _draggingType: DragType;
    private _arcball: Arcball;
    private _render: () => void;

    constructor(canvas: HTMLCanvasElement, arcBall: Arcball, render: () => void) {
        this._arcball = arcBall;
        this._canvas = canvas;
        this._prevX = 0;
        this._prevY = 0;
        this._draggingType = DragType.NONE;
        this._render = render;

        this._canvas.onmousedown = (event: MouseEvent) => {
            const rect = this._canvas.getBoundingClientRect();
            const x = event.clientX - rect.left;
            const y = event.clientY - rect.top;
            const width = rect.right - rect.left;
            const height = rect.bottom - rect.top;
            // Use half the shorter canvas dimension as the arcball radius.
            const radius = Math.min(width, height) * 0.5;
            const originX = width * 0.5;
            const originY = height * 0.5;
            this._prevX = (x - originX) / radius;
            this._prevY = (originY - y) / radius;
            // Inside a circle of radius 0.8 (0.8^2 = 0.64) we yaw/pitch; outside we roll.
            if ((this._prevX * this._prevX + this._prevY * this._prevY) <= 0.64) {
                this._draggingType = DragType.YAW_PITCH;
            } else {
                this._draggingType = DragType.ROLL;
            }
        };

        this._canvas.onmousemove = (event: MouseEvent) => {
            // Only react, and only re-render, while a drag is in progress.
            if (this._draggingType === DragType.NONE) {
                return;
            }
            const rect = this._canvas.getBoundingClientRect();
            const x = event.clientX - rect.left;
            const y = event.clientY - rect.top;
            const width = rect.right - rect.left;
            const height = rect.bottom - rect.top;
            const radius = Math.min(width, height) * 0.5;
            const originX = width * 0.5;
            const originY = height * 0.5;
            const currentX = (x - originX) / radius;
            const currentY = (originY - y) / radius;
            if (this._draggingType === DragType.YAW_PITCH) {
                this._arcball.yawPitch(this._prevX, this._prevY, currentX, currentY);
            } else {
                this._arcball.roll(this._prevX, this._prevY, currentX, currentY);
            }
            this._prevX = currentX;
            this._prevY = currentY;
            requestAnimationFrame(this._render);
        };

        this._canvas.onmouseup = () => {
            this._draggingType = DragType.NONE;
        };
    }

    get arcball() {
        return this._arcball;
    }

    public getMatrices() {
        return this._arcball.getMatrices();
    }
}
To determine the type of dragging, we add some logic on mousedown to detect whether the click landed inside or outside an imaginary circle around the object. If we are outside, we ROLL; otherwise, we YAW_PITCH.
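The circle test is easiest to see as two small pure functions: one that maps a mouse position into arcball coordinates centered on the canvas (scaled by half the shorter dimension, with y pointing up), and one that applies the 0.64 threshold (0.8 squared, i.e. 80% of the radius). This is a simplified, standalone sketch of the logic in the class above, with hypothetical helper names:

```typescript
enum DragType { NONE, YAW_PITCH, ROLL }

// Map a mouse position (in canvas pixels) to arcball coordinates:
// origin at the canvas center, scaled by half the shorter dimension,
// with y pointing up.
function toArcballCoords(
  x: number, y: number, width: number, height: number,
): [number, number] {
  const radius = Math.min(width, height) * 0.5;
  return [(x - width * 0.5) / radius, (height * 0.5 - y) / radius];
}

// Inside an imaginary circle of radius 0.8 (0.8^2 = 0.64) we yaw/pitch;
// outside it we roll.
function classifyDrag(px: number, py: number): DragType {
  return px * px + py * py <= 0.64 ? DragType.YAW_PITCH : DragType.ROLL;
}

// A click at the center of a 400x300 canvas yaws/pitches:
const [cx, cy] = toArcballCoords(200, 150, 400, 300);
console.log(classifyDrag(cx, cy) === DragType.YAW_PITCH); // true
```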
We detect mouse movement, and if we had previously clicked — meaning we are dragging — we will proxy to the appropriate ArcBall function to roll or yawPitch using the mouse position delta. At this point, we also request an update to the frame since we have made a change that would affect how the scene is rendered.
Updating Data
Now that we can detect a change and ask for a re-render, we can properly update the necessary buffers to match our ArcBall state.
const modelViewMatrix = arcBall.getMatrices();
const modelViewMatrixUpdateBuffer = this._createGPUBuffer(Float32Array.from(modelViewMatrix), GPUBufferUsage.COPY_SRC);
const modelViewMatrixInverse = glMatrix.mat4.invert(glMatrix.mat4.create(), modelViewMatrix);
const normalMatrix = glMatrix.mat4.transpose(glMatrix.mat4.create(), modelViewMatrixInverse);
const normalMatrixUpdateBuffer = this._createGPUBuffer(Float32Array.from(normalMatrix), GPUBufferUsage.COPY_SRC);
const viewDirection = glMatrix.vec3.fromValues(-arcBall.forward[0], -arcBall.forward[1], -arcBall.forward[2]);
const viewDirectionUpdateBuffer = this._createGPUBuffer(Float32Array.from(viewDirection), GPUBufferUsage.COPY_SRC);
We update our modelViewMatrix by grabbing it directly from our ArcBall; we also update our normalMatrix (since it is derived from the modelViewMatrix) and our viewDirection.
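Why the transpose of the inverse? For a pure rotation the inverse is the transpose, so the normal matrix equals the rotation itself and normals turn with the model. The check below illustrates that orthogonality property with flat column-major 4x4 arrays (the same layout gl-matrix uses); transpose and multiply here are minimal hypothetical helpers, not gl-matrix calls:

```typescript
// 4x4 matrices as flat column-major arrays, like gl-matrix.
function transpose(m: number[]): number[] {
  const out = new Array(16);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      out[row * 4 + col] = m[col * 4 + row];
  return out;
}

function multiply(a: number[], b: number[]): number[] {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

// A rotation of 0.5 rad around z, column-major.
const c = Math.cos(0.5), s = Math.sin(0.5);
const rotZ = [c, s, 0, 0, -s, c, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];

// R * R^T = I: the inverse of a rotation is its transpose, so
// transpose(invert(R)) = R and normals rotate with the model.
const shouldBeIdentity = multiply(rotZ, transpose(rotZ));
```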
We then just issue commands to copy the data to their correct buffers.
commandEncoder.copyBufferToBuffer(modelViewMatrixUpdateBuffer, 0, transformationMatrixBuffer, 0, 16 * Float32Array.BYTES_PER_ELEMENT);
commandEncoder.copyBufferToBuffer(normalMatrixUpdateBuffer, 0, normalMatrixBuffer, 0, 16 * Float32Array.BYTES_PER_ELEMENT);
commandEncoder.copyBufferToBuffer(viewDirectionUpdateBuffer, 0, viewDirectionBuffer, 0, 3 * Float32Array.BYTES_PER_ELEMENT);
commandEncoder.copyBufferToBuffer(viewDirectionUpdateBuffer, 0, lightDirectionBuffer, 0, 3 * Float32Array.BYTES_PER_ELEMENT);
And ta-da! We now have controls in our viewer!
Conclusion
In this article we looked at how to add controls to our viewer. The process of mapping user actions into the GPU world follows a common pattern: detect a UI change -> request a re-render -> update the data corresponding to the change -> copy the data to the GPU.