WebGPU Rendering: Part 8 Text Rendering

Matthew MacFarquhar
3 min read · Dec 5, 2024

--

Introduction

I have been reading through this online book on WebGPU. In this series of articles, I will be going through this book and implementing the lessons in a more structured TypeScript class-based approach, and eventually we will build three types of WebGPU renderers: Gaussian Splatting, Ray Tracing and Rasterization.

In this article, we will talk about how to render text to the screen using WebGPU. The book outlines a couple of approaches, and their complexity made my head spin; in the end, we will simply leverage the Web API's canvas context to draw text on an offscreen canvas and use that as our texture.

The following link points to the commit in my GitHub repo that matches the code we will go over.

Render

All the important stuff is in our new render function for text textures.

public async render_text(shaderCode: string, transformationMatrix: Float32Array, projectionMatrix: Float32Array,
    text: string, width: number, height: number, alpha: number, fontWeight: string, fontFamily: string, fillStyle: string, fontSize: number, textLength: number) {
    // Draw the text into an offscreen 2D canvas that will become our texture source.
    const canvas = new OffscreenCanvas(width, height);
    const ctx = canvas.getContext("2d")!;
    ctx.clearRect(0, 0, width, height);
    ctx.globalAlpha = alpha;
    ctx.font = `${fontWeight} ${fontSize}px ${fontFamily}`;
    ctx.fillStyle = fillStyle;
    const textMeasure = ctx.measureText(text);
    ctx.fillText(text, 0, textLength);

    // Round the measured text width up to a power of two for the texture size.
    const nearestPowerOf2 = 1 << (32 - Math.clz32(Math.ceil(textMeasure.width)));
    const texture = this._createTexture(nearestPowerOf2, fontSize);
    this._device.queue.copyExternalImageToTexture({ source: canvas, origin: { x: 0, y: 0 } }, { texture: texture }, { width: nearestPowerOf2, height: fontSize });

    const transformationMatrixBuffer = this._createGPUBuffer(transformationMatrix, GPUBufferUsage.UNIFORM);
    const projectionMatrixBuffer = this._createGPUBuffer(projectionMatrix, GPUBufferUsage.UNIFORM);
    const sampler = this._createSampler();

    const transformationMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        visibility: GPUShaderStage.VERTEX,
        buffer: transformationMatrixBuffer,
    };
    const projectionMatrixBindGroupInput: IBindGroupInput = {
        type: "buffer",
        visibility: GPUShaderStage.VERTEX,
        buffer: projectionMatrixBuffer,
    };
    const textureBindGroupInput: IBindGroupInput = {
        type: "texture",
        visibility: GPUShaderStage.FRAGMENT,
        texture: texture,
    };
    const samplerBindGroupInput: IBindGroupInput = {
        type: "sampler",
        visibility: GPUShaderStage.FRAGMENT,
        sampler: sampler,
    };
    const { bindGroupLayout: uniformBindGroupLayout, bindGroup: uniformBindGroup } = this._createUniformBindGroup([transformationMatrixBindGroupInput, projectionMatrixBindGroupInput, textureBindGroupInput, samplerBindGroupInput]);

    // A quad sized to the measured text width, drawn as a 4-vertex triangle strip.
    const positions = new Float32Array([
        textMeasure.width * 0.5, -16.0, 0.0,
        textMeasure.width * 0.5, 16.0, 0.0,
        -textMeasure.width * 0.5, -16.0, 0.0,
        -textMeasure.width * 0.5, 16.0, 0.0
    ]);
    // Only sample the portion of the texture the text actually covers.
    const w = textMeasure.width / nearestPowerOf2;
    const texCoords = new Float32Array([
        w, 1.0,
        w, 0.0,
        0.0, 1.0,
        0.0, 0.0
    ]);
    const { buffer: positionBuffer, layout: positionBufferLayout } = this._createSingleAttributeVertexBuffer(positions, { format: "float32x3", offset: 0, shaderLocation: 0 }, 3 * Float32Array.BYTES_PER_ELEMENT);
    const { buffer: texCoordBuffer, layout: texCoordBufferLayout } = this._createSingleAttributeVertexBuffer(texCoords, { format: "float32x2", offset: 0, shaderLocation: 1 }, 2 * Float32Array.BYTES_PER_ELEMENT);

    // Blend the text over the scene: out = src + dst * (1 - src).
    const blend: GPUBlendState = {
        color: {
            srcFactor: "one",
            dstFactor: "one-minus-src",
            operation: "add",
        },
        alpha: {
            srcFactor: "one",
            dstFactor: "one-minus-src",
            operation: "add",
        }
    };

    const commandEncoder = this._device.createCommandEncoder();
    const passEncoder = commandEncoder.beginRenderPass(this._createRenderTarget(this._context.getCurrentTexture(), { r: 1.0, g: 0.0, b: 0.0, a: 1.0 }, this._msaa));
    passEncoder.setViewport(0, 0, this._canvas.width, this._canvas.height, 0, 1);
    passEncoder.setPipeline(this._createPipeline(this._createShaderModule(shaderCode), [positionBufferLayout, texCoordBufferLayout], [uniformBindGroupLayout], "bgra8unorm", blend));
    passEncoder.setVertexBuffer(0, positionBuffer);
    passEncoder.setVertexBuffer(1, texCoordBuffer);
    passEncoder.setBindGroup(0, uniformBindGroup);
    passEncoder.draw(4, 1);
    passEncoder.end();
    this._device.queue.submit([commandEncoder.finish()]);
}
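The texture sizing relies on a bit trick worth calling out. As a hypothetical standalone helper (not a function in the repo), the rounding looks like this; note that an exact power of two gets doubled, because `1 << (32 - Math.clz32(n))` yields the smallest power of two strictly greater than `n`:

```typescript
// Round a measured text width up to a GPU-friendly power-of-two texture size.
// Math.clz32 counts the leading zero bits of the 32-bit integer, so
// 32 - clz32(n) is the bit length of n, and shifting 1 by that many
// places gives the next power of two above n.
function nearestPowerOf2(width: number): number {
  return 1 << (32 - Math.clz32(Math.ceil(width)));
}
```

For example, a measured width of 100px produces a 128px-wide texture, while a width of exactly 128px produces 256px.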

There are two key points here that enable text rendering:

  1. Setting up and writing to the offscreen canvas, which is then used as our source texture.
  2. Adding a blend state to our pipeline so that we render just the white text from the canvas image (and not the black background).
[Figure: render without blend]
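The article never shows the `shaderCode` passed into `render_text`, so here is a hypothetical WGSL shader sketch of what it could look like, assuming the binding order matches the bind group inputs above (transformation matrix, projection matrix, texture, sampler) and the two vertex attribute locations (position at 0, texture coordinates at 1). This is an illustration, not the repo's actual shader:

```typescript
// Hypothetical WGSL for render_text, embedded as a TypeScript string.
const textShader = /* wgsl */ `
struct VSOut {
  @builtin(position) position: vec4f,
  @location(0) texCoord: vec2f,
};

@group(0) @binding(0) var<uniform> modelTransform: mat4x4f;
@group(0) @binding(1) var<uniform> projection: mat4x4f;
@group(0) @binding(2) var glyphTexture: texture_2d<f32>;
@group(0) @binding(3) var glyphSampler: sampler;

@vertex
fn vs_main(@location(0) pos: vec3f, @location(1) uv: vec2f) -> VSOut {
  var out: VSOut;
  out.position = projection * modelTransform * vec4f(pos, 1.0);
  out.texCoord = uv;
  return out;
}

@fragment
fn fs_main(in: VSOut) -> @location(0) vec4f {
  // The sampled canvas color feeds straight into the blend state.
  return textureSample(glyphTexture, glyphSampler, in.texCoord);
}
`;
```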

Conclusion

In this article, we saw how to render text to the screen in WebGPU: we first draw the text into an offscreen canvas, then use that canvas as our source texture, with some blending configuration so that only the text, and not the background, is rendered.
