WebGPU Rendering: Part 19 Loading Materials

Matthew MacFarquhar
13 min read · Jan 30, 2025


Introduction

In the next couple of articles, we will use the foundations of WebGPU programming we have learned to build a GLB loader. A lot of these upcoming articles will be based on a combination of the "0 to glTF" article series and this WebGPU + glTF case study. For my version, I will use TypeScript instead of the original JavaScript and focus more on encapsulating logic within classes, like we were doing in our previous articles.

In this article, we will talk about loading materials and using them to paint our model. Currently we disregard materials entirely and just paint our models using surface normals.

The state of the viewer in this article can be found here.

What We Will Build

glTF uses a complex Physically Based Rendering (PBR) approach. Materials can contain textures for base color, normals, metallic-roughness, occlusion and emission.

To keep things very simple, we will just load in the baseColor for now and do a simple paint of the pixels using the base color. However, we will set up our loading in a way that we can expand to the full PBR pipeline at some point.

We will need four new GLTF classes to help us:

GLTFImage — will hold the ImageBitmap data for our textures and handle copying the image up to the GPU

GLTFSampler — will hold metadata about how an image should be sampled when used for a texture

GLTFTexture — will point to the sampler and image to use for a texture

GLTFMaterial — will use the textures that we set up to build up a final material which we can give to our meshes (and primitives) to use when rendering

Loading GLB Model

When we load the GLB, we now need to add some extra steps to load in our material items as well.
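For reference, here is how the GLB container lays out its bytes (per the glTF 2.0 spec); this is where the magic numbers and the 20-byte and 8-byte offsets in the code below come from:

// GLB container layout (glTF 2.0):
// bytes 0..11  : file header (magic "glTF" = 0x46546C67, version, total byte length)
// bytes 12..19 : chunk 0 header (chunk byte length, chunk type "JSON" = 0x4E4F534A)
// bytes 20..   : JSON chunk data (jsonContentLength bytes)
// next 8 bytes : chunk 1 header (chunk byte length, chunk type "BIN" = 0x004E4942)
// remainder    : binary chunk data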

export async function uploadGLB(buffer: ArrayBuffer, device: GPUDevice) {
    const header = new Uint32Array(buffer, 0, 5);

    if (header[0] != 0x46546C67) {
        throw Error("Invalid GLB magic");
    }
    if (header[1] != 2) {
        throw Error("Unsupported glb version (only glTF 2.0 is supported)");
    }
    if (header[4] != 0x4E4F534A) {
        throw Error("Invalid glB: The first chunk of the glB file should be JSON");
    }

    const jsonContentLength = header[3];
    const jsonChunk = JSON.parse(new TextDecoder("utf-8").decode(new Uint8Array(buffer, 20, jsonContentLength)));

    const binaryHeader = new Uint32Array(buffer, 20 + jsonContentLength, 2);
    if (binaryHeader[1] != 0x004E4942) {
        throw Error("Invalid glB: The second chunk of the glB file should be binary");
    }
    const binaryContentLength = binaryHeader[0];
    const binaryChunk = new GLTFBuffer(buffer, 20 + jsonContentLength + 8, binaryContentLength);

    const bufferViews: GLTFBufferView[] = loadBufferViews(jsonChunk, binaryChunk);
    const accessors: GLTFAccessor[] = loadAccessors(jsonChunk, bufferViews);
    const samplers: GLTFSampler[] = loadSamplers(jsonChunk);
    const images: GLTFImage[] = await loadImages(jsonChunk, bufferViews);
    const textures: GLTFTexture[] = loadTextures(jsonChunk, images, samplers);
    const materials: GLTFMaterial[] = loadMaterials(jsonChunk, textures);
    const meshes: GLTFMesh[] = loadMesh(jsonChunk, accessors, materials);

    bufferViews.forEach((bufferView: GLTFBufferView) => {
        if (bufferView.needsUpload) {
            bufferView.upload(device);
        }
    });

    images.forEach((img: GLTFImage) => {
        img.upload(device);
    });
    samplers.forEach((sampler: GLTFSampler) => {
        sampler.create(device);
    });
    materials.forEach((material: GLTFMaterial) => {
        material.upload(device);
    });

    const sceneNodesJson = jsonChunk["scenes"][0]["nodes"];
    const sceneNodes: GLTFNode[] = loadNodes(jsonChunk, sceneNodesJson, meshes);

    return new GLTFScene(sceneNodes);
}

GLTFImage

Our GLTFImage is like a fancy accessor; the image JSON looks like this:

{
    "bufferView": 3,
    "mimeType": "image/png"
}

It points to a bufferView to read as an image and gives us the mime type. We use this data to create a Blob and then an ImageBitmap, which we use to construct our GLTFImage.

Our image needs to be uploaded to the GPU, and it must tell us which texture format to use (which differs depending on the image's usage). We upload the ImageBitmap data to the GPU and create a view to the image, which we store in the image's view property.

export async function loadImages(jsonChunk: any, bufferViews: GLTFBufferView[]) {
    const images: GLTFImage[] = [];
    if (!jsonChunk.images) {
        return images;
    }

    for (const image of jsonChunk.images) {
        const bufferView = bufferViews[image["bufferView"]];
        const blob = new Blob([bufferView.view], {type: image["mimeType"]});
        const bitmap = await createImageBitmap(blob);
        images.push(new GLTFImage(bitmap));
    }

    return images;
}

export class GLTFImage {
    bitmap: ImageBitmap;
    usage: ImageUsage = ImageUsage.BASE_COLOR;
    image?: GPUTexture;
    view?: GPUTextureView;

    constructor(bitmap: ImageBitmap) {
        this.bitmap = bitmap;
    }

    setUsage(usage: ImageUsage) {
        this.usage = usage;
    }

    upload(device: GPUDevice) {
        let format: GPUTextureFormat = "rgba8unorm-srgb";
        switch (this.usage) {
            case ImageUsage.BASE_COLOR:
                format = "rgba8unorm-srgb";
                break;
            case ImageUsage.METALLIC_ROUGHNESS:
                format = "rgba8unorm";
                break;
            case ImageUsage.NORMAL:
            case ImageUsage.OCCLUSION:
            case ImageUsage.EMISSION:
                throw new Error("Unhandled image format for now, TODO!");
        }

        const imageSize = [this.bitmap.width, this.bitmap.height, 1];
        this.image = device.createTexture({
            size: imageSize,
            format: format,
            usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST | GPUTextureUsage.RENDER_ATTACHMENT
        });
        device.queue.copyExternalImageToTexture(
            {source: this.bitmap},
            {texture: this.image},
            imageSize
        );

        this.view = this.image.createView();
    }
}

Once we have loaded all our images, we return them to the loader for the next step.
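The ImageUsage enum referenced by GLTFImage is not shown above; a minimal sketch, with members inferred from the switch statement in upload, might look like this:

export enum ImageUsage {
    BASE_COLOR,
    METALLIC_ROUGHNESS,
    NORMAL,
    OCCLUSION,
    EMISSION
}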

GLTFSampler

Our sampler holds information about how to read the image data, like the accessor does for the buffer view. A sampler tells us how to filter the texture and how to wrap it around a mesh primitive.

{
    "magFilter": 9729,
    "minFilter": 9986,
    "wrapS": 10497,
    "wrapT": 10497
}

The numbers map to enum constant values.

export enum GLTFTextureFilter {
    NEAREST = 9728,
    LINEAR = 9729,
    NEAREST_MIPMAP_NEAREST = 9984,
    LINEAR_MIPMAP_NEAREST = 9985,
    NEAREST_MIPMAP_LINEAR = 9986,
    LINEAR_MIPMAP_LINEAR = 9987,
}

export enum GLTFTextureWrap {
    REPEAT = 10497,
    CLAMP_TO_EDGE = 33071,
    MIRRORED_REPEAT = 33648
}

When we load a sampler, we parse its filter and wrap modes into their WebGPU equivalents; the GPU sampler object itself is created later when the loader calls create. The list of GLTFSampler objects is returned to the loader for the next step.

function gltfTextureFilterMode(filter: GLTFTextureFilter) {
    switch (filter) {
        case GLTFTextureFilter.NEAREST_MIPMAP_NEAREST:
        case GLTFTextureFilter.NEAREST_MIPMAP_LINEAR:
        case GLTFTextureFilter.NEAREST:
            return "nearest" as GPUFilterMode;
        case GLTFTextureFilter.LINEAR_MIPMAP_NEAREST:
        case GLTFTextureFilter.LINEAR_MIPMAP_LINEAR:
        case GLTFTextureFilter.LINEAR:
            return "linear" as GPUFilterMode;
    }
}

function gltfAddressMode(mode: GLTFTextureWrap) {
    switch (mode) {
        case GLTFTextureWrap.REPEAT:
            return "repeat" as GPUAddressMode;
        case GLTFTextureWrap.CLAMP_TO_EDGE:
            return "clamp-to-edge" as GPUAddressMode;
        case GLTFTextureWrap.MIRRORED_REPEAT:
            return "mirror-repeat" as GPUAddressMode;
    }
}

export function loadSamplers(jsonChunk: any) {
    const samplers: GLTFSampler[] = [];
    if (!jsonChunk.samplers) {
        return samplers;
    }

    for (const sampler of jsonChunk.samplers) {
        samplers.push(new GLTFSampler(
            sampler["magFilter"] as GLTFTextureFilter,
            sampler["minFilter"] as GLTFTextureFilter,
            sampler["wrapS"] as GLTFTextureWrap,
            sampler["wrapT"] as GLTFTextureWrap
        ));
    }
    return samplers;
}

export class GLTFSampler {
    magFilter: GPUFilterMode;
    minFilter: GPUFilterMode;
    wrapU: GPUAddressMode;
    wrapV: GPUAddressMode;
    sampler?: GPUSampler;

    constructor(magFilter: GLTFTextureFilter, minFilter: GLTFTextureFilter, wrapU: GLTFTextureWrap, wrapV: GLTFTextureWrap) {
        this.magFilter = gltfTextureFilterMode(magFilter);
        this.minFilter = gltfTextureFilterMode(minFilter);
        this.wrapU = gltfAddressMode(wrapU);
        this.wrapV = gltfAddressMode(wrapV);
    }

    create(device: GPUDevice) {
        this.sampler = device.createSampler({
            magFilter: this.magFilter,
            minFilter: this.minFilter,
            addressModeU: this.wrapU,
            addressModeV: this.wrapV,
            mipmapFilter: "nearest"
        });
    }
}
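Note that mipmapFilter is hard-coded to "nearest" in create. In glTF, the *_MIPMAP_* min filter values actually encode both the minification filter and the mipmap filter; if we wanted to honor that, a small helper along these lines (a sketch, not used above) could derive the mipmap mode:

function gltfMipmapFilterMode(filter: GLTFTextureFilter): GPUMipmapFilterMode {
    switch (filter) {
        // the *_MIPMAP_LINEAR filters blend between mip levels
        case GLTFTextureFilter.NEAREST_MIPMAP_LINEAR:
        case GLTFTextureFilter.LINEAR_MIPMAP_LINEAR:
            return "linear";
        // everything else snaps to the nearest mip level
        default:
            return "nearest";
    }
}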

GLTFTexture

Our textures will use our arrays of images and samplers to construct references which our materials will use. Textures are like the accessors of the glTF material world.

{
    "sampler": 0,
    "source": 0
}

Loading a texture is then quite easy and just needs to maintain pointers to the sampler and image source objects.

export function loadTextures(jsonChunk: any, images: GLTFImage[], samplers: GLTFSampler[]) {
    const textures: GLTFTexture[] = [];
    if (!jsonChunk.textures) {
        return textures;
    }

    const defaultSampler = new GLTFSampler(
        GLTFTextureFilter.LINEAR,
        GLTFTextureFilter.LINEAR,
        GLTFTextureWrap.REPEAT,
        GLTFTextureWrap.REPEAT
    );
    let usedDefaultSampler = false;

    for (const texture of jsonChunk.textures) {
        let sampler = null;
        if ("sampler" in texture) {
            sampler = samplers[texture["sampler"]];
        } else {
            sampler = defaultSampler;
            usedDefaultSampler = true;
        }

        textures.push(new GLTFTexture(sampler, images[texture["source"]]));
    }

    if (usedDefaultSampler) {
        samplers.push(defaultSampler);
    }
    return textures;
}

export class GLTFTexture {
    sampler: GLTFSampler;
    image: GLTFImage;

    constructor(sampler: GLTFSampler, image: GLTFImage) {
        this.sampler = sampler;
        this.image = image;
    }

    setUsage(usage: ImageUsage) {
        this.image.setUsage(usage);
    }
}

We can then return our constructed textures to the loader, which will use them to craft our materials.

GLTFMaterial

A material's JSON looks like this:

{
    "pbrMetallicRoughness": {
        "baseColorTexture": {
            "index": 0
        },
        "metallicFactor": 0.0
    },
    "emissiveFactor": [
        0.0,
        0.0,
        0.0
    ],
    "name": "blinn3-fx"
}

A material contains factors which the shader can multiply against the sampled texture values for a fragment. It also contains indices of the textures to use for its texture properties (currently we only care about baseColorTexture).

When we load a material, we parse the JSON for our factors and texture indices which we simply pass to our constructor.

Our constructor just sets the properties and usages for the textures.

function createSolidColorTexture(device: GPUDevice, r: number, g: number, b: number, a: number) {
    const data = new Uint8Array([r * 255, g * 255, b * 255, a * 255]);
    const texture = device.createTexture({
        size: { width: 1, height: 1 },
        format: 'rgba8unorm',
        usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_DST | GPUTextureUsage.RENDER_ATTACHMENT
    });
    device.queue.writeTexture({ texture }, data, {}, { width: 1, height: 1 });
    return texture;
}

export function loadMaterials(jsonChunk: any, textures: GLTFTexture[]) {
    const materials: GLTFMaterial[] = [];
    // materials are optional in glTF, so guard like the other loaders
    if (!jsonChunk.materials) {
        return materials;
    }

    for (const material of jsonChunk.materials) {
        const pbrMaterial = material["pbrMetallicRoughness"];
        const baseColorFactor = pbrMaterial["baseColorFactor"] ?? [1, 1, 1, 1];
        const metallicFactor = pbrMaterial["metallicFactor"] ?? 1;
        const roughnessFactor = pbrMaterial["roughnessFactor"] ?? 1;

        let baseColorTexture: GLTFTexture | null = null;
        if ("baseColorTexture" in pbrMaterial) {
            baseColorTexture = textures[pbrMaterial["baseColorTexture"]["index"]];
        }

        let metallicRoughnessTexture: GLTFTexture | null = null;
        if ("metallicRoughnessTexture" in pbrMaterial) {
            metallicRoughnessTexture = textures[pbrMaterial["metallicRoughnessTexture"]["index"]];
        }

        materials.push(new GLTFMaterial(baseColorFactor, baseColorTexture, metallicFactor, roughnessFactor, metallicRoughnessTexture));
    }

    return materials;
}

export class GLTFMaterial {
    baseColorFactor: vec4 = [1, 1, 1, 1];
    baseColorTexture: GLTFTexture | null = null;

    metallicFactor: number = 1;
    roughnessFactor: number = 1;
    metallicRoughnessTexture: GLTFTexture | null = null;

    paramBuffer: GPUBuffer | null = null;

    bindGroupLayout: GPUBindGroupLayout | null = null;
    bindGroup: GPUBindGroup | null = null;

    constructor(baseColorFactor: vec4, baseColorTexture: GLTFTexture | null, metallicFactor: number, roughnessFactor: number, metallicRoughnessTexture: GLTFTexture | null) {
        this.baseColorFactor = baseColorFactor;
        this.baseColorTexture = baseColorTexture;
        if (this.baseColorTexture) {
            this.baseColorTexture.setUsage(ImageUsage.BASE_COLOR);
        }

        this.metallicFactor = metallicFactor;
        this.roughnessFactor = roughnessFactor;
        this.metallicRoughnessTexture = metallicRoughnessTexture;
        if (this.metallicRoughnessTexture) {
            this.metallicRoughnessTexture.setUsage(ImageUsage.METALLIC_ROUGHNESS);
        }
    }

    upload(device: GPUDevice) {
        this.paramBuffer = device.createBuffer({
            // 6 floats of data, padded to 8 floats (32 bytes) for uniform buffer alignment
            size: 8 * Float32Array.BYTES_PER_ELEMENT,
            usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.UNIFORM,
            mappedAtCreation: true
        });

        const params = new Float32Array(this.paramBuffer.getMappedRange());
        params.set(this.baseColorFactor, 0);
        params.set([this.metallicFactor, this.roughnessFactor], 4);

        this.paramBuffer.unmap();

        const bindGroupLayoutEntries: GPUBindGroupLayoutEntry[] = [
            {
                binding: 0,
                visibility: GPUShaderStage.FRAGMENT,
                buffer: {
                    type: "uniform"
                }
            }
        ];

        const bindGroupEntries: GPUBindGroupEntry[] = [
            {
                binding: 0,
                resource: {
                    buffer: this.paramBuffer,
                    size: 8 * Float32Array.BYTES_PER_ELEMENT
                }
            }
        ];

        // BASE COLOR
        bindGroupLayoutEntries.push({
            binding: 1,
            visibility: GPUShaderStage.FRAGMENT,
            sampler: {}
        });
        bindGroupLayoutEntries.push({
            binding: 2,
            visibility: GPUShaderStage.FRAGMENT,
            texture: {}
        });

        if (this.baseColorTexture) {
            bindGroupEntries.push({
                binding: 1,
                resource: this.baseColorTexture.sampler!.sampler!
            });
            bindGroupEntries.push({
                binding: 2,
                resource: this.baseColorTexture.image!.view!
            });
        } else {
            const opaqueWhiteTexture = createSolidColorTexture(device, 1, 1, 1, 1);
            const defaultSampler = new GLTFSampler(
                GLTFTextureFilter.LINEAR,
                GLTFTextureFilter.LINEAR,
                GLTFTextureWrap.REPEAT,
                GLTFTextureWrap.REPEAT
            );
            defaultSampler.create(device);

            bindGroupEntries.push({
                binding: 1,
                resource: defaultSampler.sampler!
            });
            bindGroupEntries.push({
                binding: 2,
                resource: opaqueWhiteTexture.createView()
            });
        }

        // METALLIC ROUGHNESS
        bindGroupLayoutEntries.push({
            binding: 3,
            visibility: GPUShaderStage.FRAGMENT,
            sampler: {}
        });
        bindGroupLayoutEntries.push({
            binding: 4,
            visibility: GPUShaderStage.FRAGMENT,
            texture: {}
        });
        if (this.metallicRoughnessTexture) {
            bindGroupEntries.push({
                binding: 3,
                resource: this.metallicRoughnessTexture.sampler!.sampler!
            });
            bindGroupEntries.push({
                binding: 4,
                resource: this.metallicRoughnessTexture.image!.view!
            });
        } else {
            const transparentBlackTexture = createSolidColorTexture(device, 0, 0, 0, 0);
            const defaultSampler = new GLTFSampler(
                GLTFTextureFilter.LINEAR,
                GLTFTextureFilter.LINEAR,
                GLTFTextureWrap.REPEAT,
                GLTFTextureWrap.REPEAT
            );
            defaultSampler.create(device);

            bindGroupEntries.push({
                binding: 3,
                resource: defaultSampler.sampler!
            });
            bindGroupEntries.push({
                binding: 4,
                resource: transparentBlackTexture.createView()
            });
        }

        this.bindGroupLayout = device.createBindGroupLayout({
            entries: bindGroupLayoutEntries
        });

        this.bindGroup = device.createBindGroup({
            layout: this.bindGroupLayout,
            entries: bindGroupEntries
        });
    }
}

When we upload the material to the GPU, we need to handle uploading the material's float factors as well as its texture samplers and images.

We will build a single bind group to hold our factors and textures. For now, we will only upload the base color factor (a float4), the metallic factor (a float) and the roughness factor (a float). That is six floats (24 bytes) of data, but we pad the buffer to eight floats (32 bytes) because WGSL rounds a uniform struct's size up to a multiple of its 16-byte alignment. These factors occupy the first entry in our bind group.
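Concretely, here is how those eight floats line up with the MaterialParams struct in the shader (a standalone sketch with example factor values, not viewer code):

// MaterialParams uniform layout (WGSL uniform address space rules):
//   floats 0-3 (bytes  0-15): base_color_factor : vec4<f32>
//   float  4   (bytes 16-19): metallic_factor   : f32
//   float  5   (bytes 20-23): roughness_factor  : f32
//   floats 6-7 (bytes 24-31): padding, since the struct size rounds up to 32
const params = new Float32Array(8);
params.set([1, 1, 1, 1], 0); // baseColorFactor
params.set([1.0, 1.0], 4);   // metallicFactor, roughnessFactor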

Our duck will only have a base color texture, but we will also build out our pipeline to support metallicRoughness textures.

To support a texture, we need to add bind group layout entries for the texture’s sampler and texture items and then add the bindGroup entries for the actual data.

If we don’t actually have an image for one of the textures in our material, we will make a fake default one: for base color, a 1x1 opaque white texture; for metallicRoughness, a 1x1 transparent black texture.

After all of this factor and texture uploading is done we set the material’s bindGroupLayout and bindGroup properties and return an array of materials to be used by our mesh and primitives.

GLTFPrimitive

Our meshes will just pass the materials array down to the primitives (so I have omitted showing that update).
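For completeness, a minimal sketch of what that omitted update might look like (the GLTFMesh constructor and the loadMesh signature are assumed from earlier articles):

export function loadMesh(jsonChunk: any, accessors: GLTFAccessor[], materials: GLTFMaterial[]) {
    const meshes: GLTFMesh[] = [];
    for (const mesh of jsonChunk.meshes) {
        // thread the materials array through to the primitive loader
        const primitives = loadPrimitives(jsonChunk, mesh, accessors, materials);
        meshes.push(new GLTFMesh(mesh["name"], primitives));
    }
    return meshes;
}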

{
    "attributes": {
        "TEXCOORD_0": 0,
        "NORMAL": 1,
        "TANGENT": 2,
        "POSITION": 3
    },
    "indices": 4,
    "material": 0
}

The material index is passed into our primitive via the material property in the primitive's JSON.

export function loadPrimitives(jsonChunk: any, meshJson: any, accessors: GLTFAccessor[], materials: GLTFMaterial[]) {
    const meshPrimitives = [];
    for (const meshPrimitive of meshJson.primitives) {
        const topology = meshPrimitive["mode"] || GLTFRenderMode.TRIANGLES;

        let indices = null;
        if (jsonChunk["accessors"][meshPrimitive["indices"]] !== undefined) {
            indices = accessors[meshPrimitive["indices"]];
        }

        let positions = null;
        let texcoords = null;
        let normals = null;
        for (const attribute of Object.keys(meshPrimitive["attributes"])) {
            const accessor = accessors[meshPrimitive["attributes"][attribute]];
            if (attribute === "POSITION") {
                positions = accessor;
            } else if (attribute === "TEXCOORD_0") {
                texcoords = accessor;
            } else if (attribute === "NORMAL") {
                normals = accessor;
            }
        }

        if (positions == null) {
            throw new Error("No positions found");
        }

        if (texcoords == null) {
            console.log("No texcoords found");
            const fakeTexCoordBufferByteLength = Float32Array.BYTES_PER_ELEMENT * 2 * positions.count;
            const buffer = new ArrayBuffer(fakeTexCoordBufferByteLength);
            const fakeBufferView = new GLTFBufferView(new GLTFBuffer(buffer, 0, fakeTexCoordBufferByteLength), fakeTexCoordBufferByteLength, 0, 2 * Float32Array.BYTES_PER_ELEMENT);
            fakeBufferView.addUsage(GPUBufferUsage.VERTEX);
            texcoords = new GLTFAccessor(fakeBufferView, positions.count, GLTFComponentType.FLOAT, GLTFType.VEC2, 0);
        }

        if (normals == null) {
            console.log("No normals found");
            const fakeNormalBufferByteLength = Float32Array.BYTES_PER_ELEMENT * 3 * positions.count;
            const buffer = new ArrayBuffer(fakeNormalBufferByteLength);
            const fakeBufferView = new GLTFBufferView(new GLTFBuffer(buffer, 0, fakeNormalBufferByteLength), fakeNormalBufferByteLength, 0, 3 * Float32Array.BYTES_PER_ELEMENT);
            fakeBufferView.addUsage(GPUBufferUsage.VERTEX);
            normals = new GLTFAccessor(fakeBufferView, positions.count, GLTFComponentType.FLOAT, GLTFType.VEC3, 0);
        }

        const material = materials[meshPrimitive["material"]];
        meshPrimitives.push(new GLTFPrimitive(material, positions, indices || undefined, texcoords, normals, topology));
    }

    return meshPrimitives;
}

Nothing changes here except that we now look up the material associated with this primitive and pass it into the constructor so it can be stored on the primitive instance.
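As a sketch (field names inferred from the buildRenderPipeline and render methods shown later in this article), the updated GLTFPrimitive constructor might simply store the material alongside the existing fields:

export class GLTFPrimitive {
    material: GLTFMaterial;
    positions: GLTFAccessor;
    indices?: GLTFAccessor;
    texcoords: GLTFAccessor;
    normals: GLTFAccessor;
    topology: GLTFRenderMode;
    renderPipeline?: GPURenderPipeline;

    constructor(material: GLTFMaterial, positions: GLTFAccessor, indices: GLTFAccessor | undefined,
        texcoords: GLTFAccessor, normals: GLTFAccessor, topology: GLTFRenderMode) {
        this.material = material;
        this.positions = positions;
        this.indices = indices;
        this.texcoords = texcoords;
        this.normals = normals;
        this.topology = topology;
    }
}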

Rendering Process

Now let's look at how adding materials changes how our pipeline is built and how our models are rendered.

Shader

First, let's take a look at our shader updates.

alias float4 = vec4<f32>;
alias float3 = vec3<f32>;
alias float2 = vec2<f32>;

struct VertexInput {
    @location(0) position: float3,
    @location(1) texcoords: float2
};

struct VertexOutput {
    @builtin(position) position: float4,
    @location(0) world_pos: float3,
    @location(1) texcoords: float2
};

struct ViewParams {
    view_proj: mat4x4<f32>,
};

struct NodeParams {
    model: mat4x4<f32>,
};

@group(0) @binding(0)
var<uniform> view_params: ViewParams;

@group(1) @binding(0)
var<uniform> node_params: NodeParams;

@vertex
fn vertex_main(vert: VertexInput) -> VertexOutput {
    var out: VertexOutput;
    out.position = view_params.view_proj * node_params.model * float4(vert.position, 1.0);
    out.world_pos = vert.position.xyz;
    out.texcoords = vert.texcoords;
    return out;
}

struct MaterialParams {
    base_color_factor: float4,
    metallic_factor: f32,
    roughness_factor: f32,
};

@group(2) @binding(0)
var<uniform> material_params: MaterialParams;

@group(2) @binding(1)
var base_color_sampler: sampler;

@group(2) @binding(2)
var base_color_texture: texture_2d<f32>;

fn linear_to_srgb(x: f32) -> f32 {
    if (x <= 0.0031308) {
        return 12.92 * x;
    }
    return 1.055 * pow(x, 1.0 / 2.4) - 0.055;
}

@fragment
fn fragment_main(in: VertexOutput) -> @location(0) float4 {
    let base_color = textureSample(base_color_texture, base_color_sampler, in.texcoords);
    var color = material_params.base_color_factor * base_color;

    color.x = linear_to_srgb(color.x);
    color.y = linear_to_srgb(color.y);
    color.z = linear_to_srgb(color.z);
    color.w = 1.0;
    return color;
}

We will receive texture coordinates with our vertices, which we will just pass out of the vertex shader.

We now have a new bind group containing our material factors, our base color sampler, and our base color texture.

To get the color in our fragment shader, we now perform a textureSample operation with our base color texture, base color sampler and our UV coords. We multiply that color by our base color factor.

We have to do a slight bit of color correction so the rendered texture looks correct on the screen. Lighting and shading calculations are performed in linear space for accuracy, but monitors expect sRGB (gamma-corrected) colors, so directly displaying linear colors would make them look too dark. To fix this, we must convert linear color to sRGB.
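As a quick sanity check, here is the same transfer function as a standalone TypeScript snippet (not part of the viewer); it shows how much brighter the sRGB-encoded value is than its linear input:

function linearToSrgb(x: number): number {
    return x <= 0.0031308 ? 12.92 * x : 1.055 * Math.pow(x, 1 / 2.4) - 0.055;
}

console.log(linearToSrgb(0.5)); // ~0.735: linear mid-gray encodes well above 0.5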

Building renderPipeline

Let’s see how building the render pipeline will now change.

We will need one more vertex attribute to achieve texture rendering: the texcoords. Our primitive's build step now needs to add those to the vertex buffer layout as well. The code to handle normals is provided too, but commented out since normals are currently unused.

buildRenderPipeline(device: GPUDevice, shaderModule: GPUShaderModule, colorFormat: GPUTextureFormat,
    depthFormat: GPUTextureFormat, uniformsBGLayout: GPUBindGroupLayout, nodeParamsBindGroupLayout: GPUBindGroupLayout) {
    const vertexBuffers: GPUVertexBufferLayout[] = [
        {
            arrayStride: this.positions.byteStride,
            attributes: [
                {
                    format: this.positions.elementType as GPUVertexFormat,
                    offset: 0,
                    shaderLocation: 0
                }
            ]
        }
    ];

    if (this.texcoords) {
        if (this.texcoords.view.gpuBuffer === null) {
            this.texcoords.view.upload(device);
        }
        vertexBuffers.push({
            arrayStride: this.texcoords.byteStride,
            attributes: [
                {
                    format: this.texcoords.elementType as GPUVertexFormat,
                    offset: 0,
                    shaderLocation: 1
                }
            ]
        });
    }

    // TODO: normals
    // if (this.normals) {
    //     if (this.normals.view.gpuBuffer === null) {
    //         this.normals.view.upload(device);
    //     }
    //     vertexBuffers.push({
    //         arrayStride: this.normals.byteStride,
    //         attributes: [
    //             {
    //                 format: this.normals.elementType as GPUVertexFormat,
    //                 offset: 0,
    //                 shaderLocation: 2
    //             }
    //         ]
    //     });
    // }

    const primitive = this.topology == GLTFRenderMode.TRIANGLE_STRIP
        ? {topology: "triangle-strip" as GPUPrimitiveTopology, stripIndexFormat: this.indices!.elementType as GPUIndexFormat}
        : {topology: "triangle-list" as GPUPrimitiveTopology};

    this.renderPipeline = getPipelineForArgs(vertexBuffers, primitive, colorFormat, depthFormat, this.material,
        uniformsBGLayout, nodeParamsBindGroupLayout, device, shaderModule);
}

When building our render pipelines, we explicitly change the bind group layouts to include the material's layout. We are also implicitly changing the vertex buffer layouts since we now use texcoords. Note that pipelines are cached by their vertex buffer layouts and primitive state, so primitives that share a layout reuse the same pipeline.

const pipelineGPUData = new Map();
let numPipelines = 0;

export function getPipelineForArgs(vertexBufferLayouts: GPUVertexBufferLayout[], primitive: GPUPrimitiveState, colorFormat: GPUTextureFormat, depthFormat: GPUTextureFormat,
    material: GLTFMaterial, uniformsBGLayout: GPUBindGroupLayout, nodeParamsBindGroupLayout: GPUBindGroupLayout, device: GPUDevice, shaderModule: GPUShaderModule) {

    const key = JSON.stringify({vertexBufferLayouts, primitive});
    let pipeline = pipelineGPUData.get(key);
    if (pipeline) {
        return pipeline;
    }

    numPipelines++;
    console.log(`Pipeline #${numPipelines}`);

    const layout = device.createPipelineLayout({
        bindGroupLayouts: [uniformsBGLayout, nodeParamsBindGroupLayout, material.bindGroupLayout]
    });

    pipeline = device.createRenderPipeline({
        vertex: {
            entryPoint: "vertex_main",
            module: shaderModule,
            buffers: vertexBufferLayouts
        },
        fragment: {
            module: shaderModule,
            entryPoint: "fragment_main",
            targets: [
                {
                    format: colorFormat,
                }
            ],
        },
        primitive: {
            ...primitive,
            cullMode: "back",
        },
        depthStencil: {
            format: depthFormat,
            depthWriteEnabled: true,
            depthCompare: "less"
        },
        layout: layout
    });

    pipelineGPUData.set(key, pipeline);

    return pipeline;
}

Rendering

Our render updates are entirely contained within our primitive object.

render(renderPassEncoder: GPURenderPassEncoder) {
    renderPassEncoder.setPipeline(this.renderPipeline!);
    renderPassEncoder.setBindGroup(2, this.material.bindGroup);

    renderPassEncoder.setVertexBuffer(0,
        this.positions.view.gpuBuffer,
        this.positions.byteOffset,
        this.positions.byteLength
    );

    if (this.texcoords) {
        renderPassEncoder.setVertexBuffer(1,
            this.texcoords.view.gpuBuffer,
            this.texcoords.byteOffset,
            this.texcoords.byteLength
        );
    }

    // TODO: normals
    // if (this.normals) {
    //     renderPassEncoder.setVertexBuffer(2,
    //         this.normals.view.gpuBuffer,
    //         this.normals.byteOffset,
    //         this.normals.byteLength
    //     );
    // }

    if (this.indices) {
        renderPassEncoder.setIndexBuffer(this.indices.view.gpuBuffer!, this.indices.elementType as GPUIndexFormat, this.indices.byteOffset, this.indices.byteLength);
        renderPassEncoder.drawIndexed(this.indices.count);
    } else {
        renderPassEncoder.draw(this.positions.count);
    }
}

We set bind group 2 to the rendered primitive's material bind group and bind our texcoords in vertex buffer slot 1; then we just draw as usual.

Our application code remains unchanged thanks to our encapsulation of loading and rendering within our glTF classes!
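For instance, the application entry point can stay something like this (the file name is a placeholder, and the device setup is assumed from earlier articles):

const response = await fetch("duck.glb");
const scene = await uploadGLB(await response.arrayBuffer(), device);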

Conclusion

In this article, we built out the material loading process for our glTF models. We started with a very simple shading process that just uses the base color texture, but we built our pipeline and classes in a way that will let us easily extend our materials to use more textures in the future.
