WebGPU Rendering: Part 1 Basic Triangle

Matthew MacFarquhar

Introduction

I have been reading through this online book on WebGPU. In this series of articles, I will work through the book and implement its lessons in a more structured TypeScript class approach. Eventually, we will build three types of WebGPU renderers: Gaussian splatting, ray tracing, and rasterization.

In this article, we will dip our toes into the world of WebGPU by creating the appropriate systems and abstractions to render a basic triangle.

The following link points to the commit in my GitHub repo that matches the code we will go over.

Build Out WebGPUContext Class

In this series, we will encapsulate a lot of the WebGPU functionality in a class to keep the interface exposed to our application simple.

Static async Context Creation

Since WebGPU initialization requires some async calls, we cannot create a context with a regular constructor. A common workaround for objects that require async calls for setup is to create a private constructor and a public async factory method.

Below are our class's private variables and our private constructor (these live inside the WebGPUContext class).

private static VERTEX_ENTRY_POINT = "vs_main";
private static FRAGMENT_ENTRY_POINT = "fs_main";
private static _instance: WebGPUContext;
private _context: GPUCanvasContext;
private _device: GPUDevice;
private _canvas: HTMLCanvasElement;

private constructor(context: GPUCanvasContext, device: GPUDevice, canvas: HTMLCanvasElement) {
  this._context = context;
  this._device = device;
  this._canvas = canvas;
}

We will have one static instance variable created by our factory method. On init, we will request all the needed contexts and components to create our WebGPUContext.

Below is our static async factory method, create. It will perform the necessary steps to obtain a reference to our GPU device and create a context to pass to our private constructor.

public static async create(canvas: HTMLCanvasElement): Promise<WebGpuContextInitResult> {
  if (WebGPUContext._instance) {
    return { instance: WebGPUContext._instance };
  }

  // make sure the GPU is supported
  if (!navigator.gpu) {
    return { error: "WebGPU not supported" };
  }

  // grab the adapter
  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    return { error: "Failed to get WebGPU adapter" };
  }

  // create the device (should be done immediately after the adapter in case the adapter is lost)
  const device = await adapter.requestDevice();
  if (!device) {
    return { error: "Failed to get WebGPU device" };
  }

  // create the context
  const context = canvas.getContext("webgpu");
  if (!context) {
    return { error: "Failed to get WebGPU context" };
  }

  const canvasConfig: GPUCanvasConfiguration = {
    device: device,
    format: navigator.gpu.getPreferredCanvasFormat() as GPUTextureFormat,
    usage: GPUTextureUsage.RENDER_ATTACHMENT,
    alphaMode: "opaque",
  };

  context.configure(canvasConfig);

  WebGPUContext._instance = new WebGPUContext(context, device, canvas);
  return { instance: WebGPUContext._instance };
}

Our application can obtain a WebGPUContext by calling create. If one has already been instantiated, we return it; otherwise, we build a new one.
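Note that create returns a WebGpuContextInitResult rather than throwing. That type is never shown in the snippet above; a minimal sketch of what it presumably looks like (the exact shape is an assumption on my part, it is not a WebGPU API type) would be:

// Result wrapper returned by WebGPUContext.create (shape assumed):
// exactly one of the two fields will be populated.
interface WebGpuContextInitResult {
  instance?: WebGPUContext;
  error?: string;
}

Returning a result object instead of throwing keeps the call site simple: the application checks error first and otherwise uses instance, exactly as we will do in the React component later.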

Building the Pipeline

Now that we have our WebGPUContext, we need to go through a few more steps to actually render something to the canvas. We will encapsulate these steps within helper functions.

Create Render Target

This helper function will define the texture we want to render output to. We will pass it into our render pipeline as the output destination for our process.

private _createRenderTarget(): GPURenderPassDescriptor {
  const colorTexture = this._context.getCurrentTexture();
  const colorTextureView = colorTexture.createView();

  const colorAttachment: GPURenderPassColorAttachment = {
    view: colorTextureView,
    clearValue: { r: 1, g: 0, b: 0, a: 1 },
    loadOp: "clear",
    storeOp: "store",
  };

  const renderPassDescriptor: GPURenderPassDescriptor = {
    colorAttachments: [colorAttachment],
  };

  return renderPassDescriptor;
}

We get the texture view using the context we created during initialization. We specify that our background color will be red, and that our render pass should first clear what is on the screen and then store the output of the render pass to the screen.

Create Vertex Attribute Buffers

Now we need a way to pass in things to render onto our target; the way to pass data into a GPU render pipeline is via vertices. We will define a function that takes an array of numbers, along with some descriptions of what they mean, and puts them on the GPU for us to use.

private _createSingleAttributeVertexBuffer(vertices: Float32Array, attributeDesc: GPUVertexAttribute, arrayStride: number): IGPUVertexBuffer {
  const layout: GPUVertexBufferLayout = {
    arrayStride,
    stepMode: "vertex",
    attributes: [attributeDesc],
  };

  const bufferDesc: GPUBufferDescriptor = {
    size: vertices.byteLength,
    usage: GPUBufferUsage.VERTEX,
    mappedAtCreation: true,
  };

  const buffer = this._device.createBuffer(bufferDesc);
  const writeArray = new Float32Array(buffer.getMappedRange());
  writeArray.set(vertices);
  buffer.unmap();

  return { buffer, layout };
}

Vertex attributes that we upload to the GPU have a layout, which we will use to construct our pipeline, and a GPU buffer, which we can pass through the pipeline to use as input data in our GPU functions.

The attributeDesc is passed in by the caller and tells us things like the format of the data we passed in (e.g. should these be read as float32x2 for something like UVs, or float32x3 for something like positions), the offset within the buffer, and the shader location it is mapped to in the GPU shader code. We use that to create the layout.
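For example (purely illustrative values, the shader locations are whatever our WGSL declares), a position attribute and a hypothetical UV attribute might be described like this:

// Positions: three 32-bit floats per vertex, bound to @location(0) in the shader
const positionAttributeDesc: GPUVertexAttribute = {
  format: "float32x3",
  offset: 0,
  shaderLocation: 0,
};

// UVs (hypothetical, not used in this article): two 32-bit floats per vertex, bound to @location(1)
const uvAttributeDesc: GPUVertexAttribute = {
  format: "float32x2",
  offset: 0,
  shaderLocation: 1,
};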

We also take in our CPU-side Float32Array and use our GPUDevice to upload it into a GPUBuffer.
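The IGPUVertexBuffer return type is our own small pairing of the two things we need later; a minimal sketch of its likely shape (the exact definition is an assumption, it is not a WebGPU type):

interface IGPUVertexBuffer {
  buffer: GPUBuffer;             // the vertex data living on the GPU
  layout: GPUVertexBufferLayout; // how the pipeline should interpret that data
}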

Create Shader Module

Shader code holds the “functions” we will call on our GPU. In order to load them onto the GPU, we need to upload them using the device.

private _createShaderModule(source: string) {
  const shaderModule = this._device.createShaderModule({ code: source });
  return shaderModule;
}

Create Pipeline

Now, we are ready to build our pipeline which will combine the vertex buffer layouts and render targets we have created.

private _createPipeline(shaderModule: GPUShaderModule, vertexBuffers: GPUVertexBufferLayout[]): GPURenderPipeline {
  // layout (no bind groups / uniforms yet)
  const pipelineLayoutDescriptor: GPUPipelineLayoutDescriptor = { bindGroupLayouts: [] };
  const layout = this._device.createPipelineLayout(pipelineLayoutDescriptor);

  // TODO: parametrize?
  const colorState = {
    format: 'bgra8unorm' as GPUTextureFormat,
  };

  const pipelineDescriptor: GPURenderPipelineDescriptor = {
    layout: layout,
    vertex: {
      module: shaderModule,
      entryPoint: WebGPUContext.VERTEX_ENTRY_POINT,
      buffers: vertexBuffers,
    },
    fragment: {
      module: shaderModule,
      entryPoint: WebGPUContext.FRAGMENT_ENTRY_POINT,
      targets: [colorState],
    },
    primitive: {
      topology: 'triangle-list' as GPUPrimitiveTopology,
      frontFace: 'cw' as GPUFrontFace,
      cullMode: 'back' as GPUCullMode,
    },
  };

  const pipeline = this._device.createRenderPipeline(pipelineDescriptor);
  return pipeline;
}

We first initialize a base pipeline layout with our uniforms; we have none right now, but you can think of them as constants that our GPU functions will use. Then, we build up our pipeline, specifying the shader entry points for the vertex and fragment stages as well as the target color state we want to output to.
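As a preview of where this is going (a sketch only, not used in this article), a pipeline layout with one uniform buffer visible to the vertex stage might be built like this:

// Hypothetical bind group layout for a single uniform buffer (e.g. a transform matrix)
const uniformBindGroupLayout = this._device.createBindGroupLayout({
  entries: [
    {
      binding: 0,                        // matches @group(0) @binding(0) in the WGSL
      visibility: GPUShaderStage.VERTEX, // only the vertex shader reads it
      buffer: { type: "uniform" },       // a uniform buffer resource
    },
  ],
});

// which would then be passed as: bindGroupLayouts: [uniformBindGroupLayout]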

There are a couple more properties we will add later as we build this class out. Currently, the only property not based on vertex buffers or shaders is the primitive section, which dictates how vertices should be understood and assembled by the GPU. Since we set frontFace to 'cw' and cullMode to 'back', triangles whose vertices appear counter-clockwise on screen will be culled, so we need to list our triangle's vertices in clockwise order.

Command Encoder

Up until this point, all we have done is set up layouts and upload the necessary data buffers to the GPU. To actually perform a render, we have to encode a set of commands that will execute on the GPU and produce a render.

public render(shaderCode: string, vertexCount: number, instanceCount: number, vertices: Float32Array, colors: Float32Array) {
  const { buffer: positionBuffer, layout: positionBufferLayout } = this._createSingleAttributeVertexBuffer(vertices, { format: "float32x3", offset: 0, shaderLocation: 0 }, 3 * Float32Array.BYTES_PER_ELEMENT);
  const { buffer: colorBuffer, layout: colorBufferLayout } = this._createSingleAttributeVertexBuffer(colors, { format: "float32x3", offset: 0, shaderLocation: 1 }, 3 * Float32Array.BYTES_PER_ELEMENT);

  const commandEncoder = this._device.createCommandEncoder();

  const passEncoder = commandEncoder.beginRenderPass(this._createRenderTarget());
  passEncoder.setViewport(0, 0, this._canvas.width, this._canvas.height, 0, 1);
  passEncoder.setPipeline(this._createPipeline(this._createShaderModule(shaderCode), [positionBufferLayout, colorBufferLayout]));
  passEncoder.setVertexBuffer(0, positionBuffer);
  passEncoder.setVertexBuffer(1, colorBuffer);
  passEncoder.draw(vertexCount, instanceCount);
  passEncoder.end();

  this._device.queue.submit([commandEncoder.finish()]);
}

We first create some vertex attributes for positions and colors using our helper. Then, we initiate our command encoder; we can read the following lines as instructions sent from our CPU to the GPU:

  1. Get ready to render, and set our render output target
  2. Set the output size to the canvas width and height
  3. Create our pipeline using our helper and the layouts from our vertex buffers
  4. Map the uploaded position and color buffers to indices so they can be referred to by our shader code
  5. Draw instanceCount instances of the first vertexCount vertices
  6. Finish encoding and submit the commands to the GPU queue

Shaders

We have set up a very nice class to abstract the pipeline setup and CPU -> GPU data passing. Now we will discuss the actual GPU code to be run, called a shader. A shader (for rendering purposes) is made up of two functions: a vertex function, which runs on each of the input vertices, and a fragment function, which runs for every pixel we will output to our render target.

Triangle Shader

In our Triangle shader, we define a VertexOutput struct that our vertex shader will pass through the GPU render pipeline to the rasterization step.

struct VertexOutput {
  @builtin(position) position: vec4<f32>,
  @location(0) color: vec3<f32>,
}

@vertex
fn vs_main(@location(0) inPos: vec3<f32>, @location(1) inColor: vec3<f32>) -> VertexOutput {
  var out: VertexOutput;
  out.position = vec4<f32>(inPos, 1.0);
  out.color = inColor;
  return out;
}

@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
  return vec4<f32>(in.color, 1.0);
}

Our vs_main function takes in one entry from each of our two vertex attribute buffers and essentially just passes them through to the rasterizer.

The rasterizer (which is not programmable) interpolates between the vertices and their associated VertexOutput values to produce a VertexOutput for each pixel within each triangle. This is why, in our triangle, you will see a nice blend between the three vertex colors.
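To make the interpolation concrete, here is a tiny CPU-side illustration (plain TypeScript, not part of the renderer) of the weighted blend the rasterizer effectively performs for every @location output, using barycentric weights:

// Conceptual illustration only: the GPU does this in fixed-function hardware.
type Vec3 = [number, number, number];

// Blend three vertex colors with barycentric weights (w0 + w1 + w2 === 1 inside the triangle).
function interpolateColor(c0: Vec3, c1: Vec3, c2: Vec3, w0: number, w1: number, w2: number): Vec3 {
  return [
    w0 * c0[0] + w1 * c1[0] + w2 * c2[0],
    w0 * c0[1] + w1 * c1[1] + w2 * c2[1],
    w0 * c0[2] + w1 * c1[2] + w2 * c2[2],
  ];
}

// A pixel near the center of our triangle (weights of roughly 1/3 each) blends
// red, green, and blue into roughly [0.33, 0.33, 0.33].
const centerColor = interpolateColor([1, 0, 0], [0, 1, 0], [0, 0, 1], 1 / 3, 1 / 3, 1 / 3);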

Our fs_main function gets a VertexOutput, and then just paints the pixel with that color.

Application

Since we have done a good job encapsulating our WebGPU context, our actual application interaction is very simple and intuitive.

const App = () => {
  const canvasRef = useRef<HTMLCanvasElement>(null);

  const render = async () => {
    const webGpuContext = await WebGPUContext.create(canvasRef.current!);
    if (webGpuContext.error) {
      console.error(webGpuContext.error);
      return;
    }

    const positions = new Float32Array([
      1.0, -1.0, 0.0, -1.0, -1.0, 0.0, 0.0, 1.0, 0.0
    ]);
    const colors = new Float32Array([
      1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0
    ]);
    webGpuContext.instance!.render(triangleWgsl, 3, 1, positions, colors);
  };

  useEffect(() => {
    if (canvasRef.current) {
      render();
    }
  }, []);

  return (
    <div>
      <canvas ref={canvasRef} width={640} height={480}></canvas>
    </div>
  );
};

When we render on component mount, we get a WebGPUContext bound to our canvas. Then we create two Float32Arrays for our 3 vertices: positions (3 float32s per vertex) and colors (3 float32s per vertex). Finally, we just have to call render, passing in our shader code string, the number of vertices and instances to render, and our data for the vertex attributes.

Conclusion

In this article, we built a lot of scaffolding to encapsulate functionality we will need for more complex rendering. This will significantly reduce the complexity of our code going forward and keep our actual application interface nice and simple, hiding the GPU/CPU interop inside our WebGPUContext class.
