WebGPU Rendering: Part 2 Textured Triangle

Matthew MacFarquhar

Introduction

I have been reading through this online book on WebGPU. In this series of articles, I will be going through the book and implementing its lessons in a more structured TypeScript class approach, and eventually we will build three types of WebGPU renderers: Gaussian Splatting, Ray Tracing, and Rasterization.

In this article we will add to our basic WebGPU renderer, introducing textures and uniforms. Textures will allow us to color our geometry based on an image, and uniforms will enable us to pass in a transformation matrix for our vertices and a camera projection matrix.

The following link is the commit in my GitHub repo that matches the code we will go over.

Updates to Current Setup

The first thing we will need to do is update our current pipeline code to abstract it a bit. It turns out that we will need to create GPU buffers for more than just vertex attributes, so we extract that functionality out of our _createSingleAttributeVertexBuffer function.

private _createGPUBuffer(data: Float32Array | Uint16Array, usage: GPUBufferUsageFlags): GPUBuffer {
    const bufferDesc: GPUBufferDescriptor = {
        size: data.byteLength,
        usage: usage,
        mappedAtCreation: true
    }

    const buffer = this._device.createBuffer(bufferDesc);
    if (data instanceof Float32Array) {
        const writeArray = new Float32Array(buffer.getMappedRange());
        writeArray.set(data);
    } else if (data instanceof Uint16Array) {
        const writeArray = new Uint16Array(buffer.getMappedRange());
        writeArray.set(data);
    }

    buffer.unmap();
    return buffer;
}

private _createSingleAttributeVertexBuffer(vertexAttributeData: Float32Array, attributeDesc: GPUVertexAttribute, arrayStride: number): IGPUVertexBuffer {
    const layout: GPUVertexBufferLayout = {
        arrayStride,
        stepMode: "vertex",
        attributes: [attributeDesc],
    }

    const buffer = this._createGPUBuffer(vertexAttributeData, GPUBufferUsage.VERTEX);

    return { buffer, layout };
}

Now we can use the _createGPUBuffer function for multiple types of data and buffer usages.
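
For example, here is a minimal sketch (using illustrative data, not values from the renderer) of how the same helper could back a uniform buffer and an index buffer:

// Hypothetical usage of the shared helper inside the renderer class.
// A 4x4 identity matrix as uniform data...
const transformData = new Float32Array([
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    0, 0, 0, 1
]);
const uniformBuffer = this._createGPUBuffer(transformData, GPUBufferUsage.UNIFORM);

// ...and a Uint16Array of indices as an index buffer. Note that with
// mappedAtCreation, WebGPU requires the buffer size to be a multiple of 4,
// which holds here (6 indices * 2 bytes = 12 bytes).
const indexData = new Uint16Array([0, 1, 2, 2, 3, 0]);
const indexBuffer = this._createGPUBuffer(indexData, GPUBufferUsage.INDEX);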

Loading Texture Data

To use our texture in our render pipeline, we will need two things on the GPU: the texture and a sampler. The texture holds the actual image data on the GPU, and the sampler tells the GPU how to wrap the texture and how to magnify or minify it when sampling colors for our fragments.

Sampler

private _createSampler(): GPUSampler {
    const samplerDescriptor: GPUSamplerDescriptor = {
        addressModeU: "repeat",
        addressModeV: "repeat",
        magFilter: "linear",
        minFilter: "linear",
        mipmapFilter: "linear",
    }

    const sampler = this._device.createSampler(samplerDescriptor);
    return sampler;
}

For the sampler, we just fill in a sampler descriptor with some presets (repeat wrapping and linear filtering) and then use the device to create it.
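
If we ever wanted different behavior (for example, crisp pixel sampling with no tiling), we would just swap the presets. A sketch of such an alternative descriptor, not used in this renderer:

// Illustrative alternative settings: clamp at the edges, no smoothing.
const pixelArtSamplerDescriptor: GPUSamplerDescriptor = {
    addressModeU: "clamp-to-edge", // do not repeat the texture outside [0, 1]
    addressModeV: "clamp-to-edge",
    magFilter: "nearest",          // pick the closest texel instead of blending
    minFilter: "nearest",
    mipmapFilter: "nearest",
}
const pixelArtSampler = this._device.createSampler(pixelArtSamplerDescriptor);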

Texture

private _createTexture(imageBitmap: ImageBitmap): GPUTexture {
    const textureDescriptor: GPUTextureDescriptor = {
        size: { width: imageBitmap.width, height: imageBitmap.height },
        format: "rgba8unorm",
        usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST | GPUTextureUsage.RENDER_ATTACHMENT,
    }

    const texture = this._device.createTexture(textureDescriptor);

    this._device.queue.copyExternalImageToTexture({ source: imageBitmap }, { texture }, textureDescriptor.size);

    return texture;
}

Our _createTexture function takes in an ImageBitmap, creates the texture on the GPU, and then copies the data from the ImageBitmap into that GPU texture.
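
If we ever have raw RGBA pixel data instead of an ImageBitmap (say, a procedurally generated pattern), the same kind of texture can be filled with writeTexture instead of copyExternalImageToTexture. A minimal sketch, assuming a tiny hand-written 2x2 RGBA image:

// Illustrative alternative: upload raw RGBA pixels (4 bytes per texel) directly.
const width = 2, height = 2;
const pixels = new Uint8Array([
    255, 0, 0, 255,     0, 255, 0, 255,      // red,  green
    0, 0, 255, 255,     255, 255, 255, 255   // blue, white
]);

const rawTexture = this._device.createTexture({
    size: { width, height },
    format: "rgba8unorm",
    usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST,
});

this._device.queue.writeTexture(
    { texture: rawTexture },
    pixels,
    { bytesPerRow: width * 4 }, // tightly packed rows
    { width, height }
);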

Uniform BindGroup

Our uniforms are shared data applied to all vertices and/or fragments. We define an input type to make it easy to construct different uniform bind groups.

interface IBindGroupInput {
    type: "buffer" | "texture" | "sampler";
    buffer?: GPUBuffer;
    texture?: GPUTexture;
    sampler?: GPUSampler;
}

We have three bind group input types, each with an associated buffer, texture, or sampler (depending on the type).

private _createUniformBindGroup(bindGroupInputs: IBindGroupInput[]): IUniformBindGroup {
    const layoutEntries = [];
    const bindGroupEntries = [];
    for (let i = 0; i < bindGroupInputs.length; i++) {
        const input = bindGroupInputs[i];
        switch (input.type) {
            case "buffer":
                layoutEntries.push({ binding: i, visibility: GPUShaderStage.VERTEX, buffer: {} });
                bindGroupEntries.push({ binding: i, resource: { buffer: input.buffer! } });
                break;
            case "texture":
                layoutEntries.push({ binding: i, visibility: GPUShaderStage.FRAGMENT, texture: {} });
                bindGroupEntries.push({ binding: i, resource: input.texture!.createView() });
                break;
            case "sampler":
                layoutEntries.push({ binding: i, visibility: GPUShaderStage.FRAGMENT, sampler: {} });
                bindGroupEntries.push({ binding: i, resource: input.sampler! });
                break;
        }
    }
    const uniformBindGroupLayout = this._device.createBindGroupLayout({
        entries: layoutEntries
    });

    const uniformBindGroup = this._device.createBindGroup({
        layout: uniformBindGroupLayout,
        entries: bindGroupEntries
    });

    return { bindGroupLayout: uniformBindGroupLayout, bindGroup: uniformBindGroup };
}

In our _createUniformBindGroup function, we take in an array of bind group inputs and build arrays of layout entries and bind group entries with the appropriate visibility, resource, and binding index.

We use these arrays of layout entries and bind group entries to create the bind group layout and bind group on the device, and we return both to the caller for use in our render pipeline.

Render

We can now use all these to construct and execute our pipeline.

public async render_textured_shape(shaderCode: string, vertexCount: number, instanceCount: number, vertices: Float32Array, texCoords: Float32Array,
    transformationMatrix: Float32Array, projectionMatrix: Float32Array, imgUri: string) {
    const response = await fetch(imgUri);
    const blob = await response.blob();
    const imageBitmap = await createImageBitmap(blob);

    // CREATE UNIFORMS
    const transformationMatrixBuffer = this._createGPUBuffer(transformationMatrix, GPUBufferUsage.UNIFORM);
    const projectionMatrixBuffer = this._createGPUBuffer(projectionMatrix, GPUBufferUsage.UNIFORM);
    const texture = this._createTexture(imageBitmap);
    const sampler = this._createSampler();

    const transformationMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        buffer: transformationMatrixBuffer,
    }
    const projectionMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        buffer: projectionMatrixBuffer,
    }
    const textureBindGroupInput: IBindGroupInput = {
        type: "texture",
        texture: texture,
    }
    const samplerBindGroupInput: IBindGroupInput = {
        type: "sampler",
        sampler: sampler,
    }
    const { bindGroupLayout: uniformBindGroupLayout, bindGroup: uniformBindGroup } = this._createUniformBindGroup([transformationMatrixBindGroupInput, projectionMatrixBindGroupInput, textureBindGroupInput, samplerBindGroupInput]);

    // CREATE VERTEX BUFFERS
    const { buffer: positionBuffer, layout: positionBufferLayout } = this._createSingleAttributeVertexBuffer(vertices, { format: "float32x3", offset: 0, shaderLocation: 0 }, 3 * Float32Array.BYTES_PER_ELEMENT);
    const { buffer: texCoordBuffer, layout: texCoordBufferLayout } = this._createSingleAttributeVertexBuffer(texCoords, { format: "float32x2", offset: 0, shaderLocation: 1 }, 2 * Float32Array.BYTES_PER_ELEMENT);

    // CREATE COMMAND ENCODER
    const commandEncoder = this._device.createCommandEncoder();

    const passEncoder = commandEncoder.beginRenderPass(this._createRenderTarget());
    passEncoder.setViewport(0, 0, this._canvas.width, this._canvas.height, 0, 1);
    passEncoder.setPipeline(this._createPipeline(this._createShaderModule(shaderCode), [positionBufferLayout, texCoordBufferLayout], [uniformBindGroupLayout]));
    passEncoder.setVertexBuffer(0, positionBuffer);
    passEncoder.setVertexBuffer(1, texCoordBuffer);
    passEncoder.setBindGroup(0, uniformBindGroup);
    passEncoder.draw(vertexCount, instanceCount);
    passEncoder.end();

    this._device.queue.submit([commandEncoder.finish()]);
}

We load our image data from a URL and create an ImageBitmap, which we pass to the GPU to create our texture; we also create our sampler.

We create bind group inputs for the transformationMatrix, projectionMatrix, texture, and sampler and use them to create the necessary layout and bind group.

We create our vertex attributes for positions and UVs (image coordinates), and then we can execute our render pass, setting the vertex buffers and bind group appropriately.

Shader

We can now use our texture, sampler and other uniforms in our shader.

@group(0) @binding(0)
var<uniform> transform: mat4x4<f32>;
@group(0) @binding(1)
var<uniform> projection: mat4x4<f32>;

struct VertexOutput {
    @builtin(position) position: vec4<f32>,
    @location(0) tex_coords: vec2<f32>,
};

@vertex
fn vs_main(
    @location(0) inPos: vec3<f32>,
    @location(1) inTexCoords: vec2<f32>
) -> VertexOutput {
    var out: VertexOutput;
    out.position = projection * transform * vec4<f32>(inPos, 1.0);
    out.tex_coords = inTexCoords;
    return out;
}

@group(0) @binding(2)
var t_diffuse: texture_2d<f32>;
@group(0) @binding(3)
var s_diffuse: sampler;

@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    return textureSample(t_diffuse, s_diffuse, in.tex_coords);
}

Our vs_main function transforms our input positions using the uniform projection and transformation matrices and passes the tex_coords through along with the transformed position.

Our fs_main function uses the built-in textureSample function with the texture and sampler bindings along with our tex_coords to sample the final color. Note that the @binding indices in the shader match the order in which we passed the inputs to _createUniformBindGroup: the transformation matrix at binding 0, the projection matrix at 1, the texture at 2, and the sampler at 3.

Application

Our application code will use the glMatrix package to create transformation and projection matrices to pass in as uniforms. We will also create a basic triangle with positions and UVs (texCoords). We can then just call our exposed function with all those parameters and an image of our choice.

import { useEffect, useRef } from "react";
import * as glMatrix from "gl-matrix";
// Assumed imports: the WebGPUContext class from this series and the WGSL source above,
// loaded however your bundler imports raw text (the paths here are illustrative).
import { WebGPUContext } from "./WebGPUContext";
import textureWgsl from "./shaders/texture.wgsl?raw";

const App = () => {
    const canvasRef = useRef<HTMLCanvasElement>(null);

    const render = async () => {
        const webGpuContext = await WebGPUContext.create(canvasRef.current!);
        if (webGpuContext.error) {
            console.error(webGpuContext.error);
            return;
        }

        // TEXTURED SHAPE
        const transformationMatrix = glMatrix.mat4.lookAt(glMatrix.mat4.create(),
            glMatrix.vec3.fromValues(100, 100, 100),
            glMatrix.vec3.fromValues(0, 0, 0),
            glMatrix.vec3.fromValues(0.0, 0.0, 1.0));
        const projectionMatrix = glMatrix.mat4.perspective(glMatrix.mat4.create(), 1.4, 640.0 / 480.0, 0.1, 1000.0);
        const positions = new Float32Array([
            100.0, -100.0, 0.0,
            0.0, 100.0, 0.0,
            -100.0, -100.0, 0.0
        ]);
        const texCoords = new Float32Array([
            1.0, 0.0,
            0.0, 0.0,
            0.5, 1.0
        ]);
        webGpuContext.instance!.render_textured_shape(textureWgsl, 3, 1, positions, texCoords, Float32Array.from(transformationMatrix), Float32Array.from(projectionMatrix), "baboon.png");
    }

    useEffect(() => {
        if (canvasRef.current) {
            render();
        }
    }, []);

    return (
        <div>
            <canvas ref={canvasRef} width={640} height={480}></canvas>
        </div>
    )
};

Conclusion

In this article we greatly enhanced our renderer, adding in concepts like cameras, transformation matrices, and textures. We learned how uniforms are applied to all vertices and/or fragments, and that we can pass in custom uniforms (which are just numerical buffers) as well as special texture and sampler uniforms.
