WebGPU Rendering: Part 15 Transparency with Depth Peeling
Introduction
I have been reading through this online book on WebGPU. In this series of articles, I will work through the book and implement its lessons with a more structured TypeScript class approach, and eventually we will build three types of WebGPU renderers: Gaussian Splatting, Ray Tracing, and Rasterization.
In this article we will cover depth peeling, a rendering technique used to accurately display overlapping transparent objects by "peeling" layers of depth in a scene. Each pass captures the closest remaining transparent fragments, blends them, and then moves on to the next layer behind, until all layers are rendered. This ensures correct blending and avoids the artifacts of incorrect depth sorting, though it can be computationally expensive because of the multiple render passes.
The following link is the commit in my GitHub repo that matches the code we will go over.
Depth Peeling
We will be rendering a semi-transparent teapot by peeling and blending multiple fragment layers to build up our final image.
First Pass:
- Render the scene and capture the frontmost surface of the teapot (e.g., the spout or lid).
- Store its depth and blend its color into the destination texture.
Second Pass:
- Capture the next visible layer (e.g., the body of the teapot behind the spout) using the depth buffer from the previous pass to pick which fragments are shown.
- Blend it with the previous layer.
Repeat:
- Continue rendering and blending deeper layers until we hit our max pass count (in our case, six); the overall loop is sketched below.
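Before diving into the shaders, here is a rough TypeScript sketch of that per-frame loop. The helper names (clearDestination, peelLayer, and so on) are placeholders for the classes we build in the rest of the article, not real APIs.
// Hypothetical high-level sketch of one depth-peeled frame; the helpers are placeholders
// for the passes built later in this article.
type Pass = (encoder: GPUCommandEncoder, readDepthIndex?: number) => void;
function renderDepthPeeledFrame(
    encoder: GPUCommandEncoder,
    clearDestination: Pass,   // clears the accumulation texture to opaque black
    peelLayer: Pass,          // renders the next-nearest layer, testing against the previous pass's depth
    blendLayer: Pass,         // blends the peeled layer into the accumulation texture
    compositeToScreen: Pass,  // paints the accumulated result onto the canvas
    maxPasses = 6
) {
    clearDestination(encoder);
    for (let p = 0; p < maxPasses; p++) {
        peelLayer(encoder, p % 2); // p % 2 selects which depth texture to read (ping-pong)
        blendLayer(encoder);
    }
    compositeToScreen(encoder);
}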
Our first shader is the Blend shader; it samples a source texture and paints it to the output texture. The blend pipeline loads a given destination texture and paints onto it, so we gradually accumulate color in that texture and finally paint the result to the screen.
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) tex_coords: vec2<f32>
};
@vertex
fn vs_main(
@location(0) inPos: vec4<f32>
) -> VertexOutput {
var out: VertexOutput;
out.clip_position = vec4<f32>(inPos.xy, 0.0, 1.0);
out.tex_coords = inPos.zw;
return out;
}
// Fragment shader
@group(0) @binding(0)
var t_src: texture_2d<f32>;
@group(0) @binding(1)
var s: sampler;
@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
var color:vec4<f32> = textureSample(t_src, s, in.tex_coords);
return color;
}
Our OBJ shader will be very similar to what we have in our other OBJ shaders.
@group(0) @binding(0)
var<uniform> modelView: mat4x4<f32>;
@group(0) @binding(1)
var<uniform> projection: mat4x4<f32>;
@group(0) @binding(2)
var<uniform> normalMatrix: mat4x4<f32>;
@group(0) @binding(3)
var<uniform> lightDirection: vec3<f32>;
@group(0) @binding(4)
var<uniform> viewDirection: vec3<f32>;
@group(1) @binding(0)
var<uniform> offset: vec3<f32>;
@group(1) @binding(1)
var<uniform> ambientColor:vec4<f32>;// = vec4<f32>(0.15, 0.10, 0.10, 1.0);
@group(1) @binding(2)
var<uniform> diffuseColor:vec4<f32>;// = vec4<f32>(0.55, 0.55, 0.55, 1.0);
@group(1) @binding(3)
var<uniform> specularColor:vec4<f32>;// = vec4<f32>(1.0, 1.0, 1.0, 1.0);
@group(1) @binding(4)
var<uniform> shininess:f32;// = 20.0;
const diffuseConstant:f32 = 1.0;
const specularConstant:f32 = 1.0;
const ambientConstant: f32 = 1.0;
fn specular(lightDir:vec3<f32>, viewDir:vec3<f32>, normal:vec3<f32>, specularColor:vec3<f32>,
shininess:f32) -> vec3<f32> {
let reflectDir:vec3<f32> = reflect(-lightDir, normal);
let specDot:f32 = max(dot(reflectDir, viewDir), 0.0);
return pow(specDot, shininess) * specularColor;
}
fn diffuse(lightDir:vec3<f32>, normal:vec3<f32>, diffuseColor:vec3<f32>) -> vec3<f32>{
return max(dot(lightDir, normal), 0.0) * diffuseColor;
}
struct VertexOutput {
@builtin(position) clip_position: vec4<f32>,
@location(0) viewDir: vec3<f32>,
@location(1) normal: vec3<f32>,
@location(2) lightDir: vec3<f32>,
@location(3) inPos: vec4<f32>,
};
@vertex
fn vs_main(
@location(0) inPos: vec3<f32>,
@location(1) inNormal: vec3<f32>
) -> VertexOutput {
var out: VertexOutput;
out.viewDir = normalize((normalMatrix * vec4<f32>(-viewDirection, 0.0)).xyz);
out.lightDir = normalize((normalMatrix * vec4<f32>(-lightDirection, 0.0)).xyz);
out.normal = normalize(normalMatrix * vec4<f32>(inNormal, 0.0)).xyz;
var wldLoc:vec4<f32> = modelView * vec4<f32>(inPos+offset, 1.0);
out.clip_position = projection * wldLoc;
out.inPos = projection * wldLoc;
return out;
}
@group(2) @binding(0)
var t_depth: texture_depth_2d;
@group(2) @binding(1)
var s_depth: sampler_comparison;
@fragment
fn fs_main(in: VertexOutput, @builtin(front_facing) face: bool) -> @location(0) vec4<f32> {
var uv:vec2<f32> = 0.5*(in.inPos.xy/in.inPos.w + vec2(1.0,1.0));
var visibility:f32 = textureSampleCompare(
t_depth, s_depth,
vec2(uv.x, 1.0-uv.y), in.clip_position.z - 0.0001
);
if (visibility < 0.5) {
discard;
}
var lightDir:vec3<f32> = normalize(in.lightDir);
var n:vec3<f32> = normalize(in.normal);
var color:vec3<f32> = diffuseColor.rgb;
if (!face) {
n = -n; // flip the normal for back-facing fragments
}
var viewDir: vec3<f32> = in.viewDir;
var radiance:vec3<f32> = ambientColor.rgb * ambientConstant +
diffuse(-lightDir, n, color)* diffuseConstant +
specular(-lightDir, viewDir, n, specularColor.rgb, shininess) * specularConstant;
return vec4<f32>(radiance * diffuseColor.w, diffuseColor.w);
}
Our visibility value checks whether the fragment lies behind the depth stored in the depth buffer from the previous pass (minus a small offset so we do not re-capture the same surface), leaving us with just the fragments that belong to the next layer; anything in front is discarded.
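Conceptually, the comparison sampler (created with compare: 'greater' in the Pipeline class below) is doing something like this per fragment. This is a plain TypeScript restatement for intuition, not shader code:
// Plain-TypeScript restatement of the textureSampleCompare test above (conceptual only).
// previousLayerDepth is the value stored in the depth texture from the previous peel.
function peelVisibility(fragmentDepth: number, previousLayerDepth: number): number {
    const bias = 0.0001; // small offset so we do not re-capture the surface we just peeled
    // 'greater' comparison: passes (1.0) only if the biased fragment depth is behind the previous layer
    return fragmentDepth - bias > previousLayerDepth ? 1.0 : 0.0;
}
// Fragments with visibility < 0.5 are discarded, so only the next layer back survives.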
We then go through the usual steps of checking which way the face points and computing a radiance value for the lighting. One key difference is that we pre-multiply the output RGB by our alpha.
We pre-multiply by alpha to make blending mathematically correct and efficient, especially for semi-transparent objects. It avoids artifacts by ensuring the color channels are already scaled by their opacity, which simplifies interpolation and compositing.
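As a quick illustration (plain numbers, independent of this renderer), pre-multiplying just folds the alpha into the color channels before blending, which is what the shader does when it returns radiance * diffuseColor.w:
// Straight RGBA -> premultiplied RGBA (illustrative helper, not part of the renderer).
function premultiply([r, g, b, a]: [number, number, number, number]): [number, number, number, number] {
    return [r * a, g * a, b * a, a];
}
premultiply([1.0, 0.0, 0.0, 0.5]); // a 50%-opaque red becomes [0.5, 0.0, 0.0, 0.5]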
Depth Peeling Classes
We will have a class to manage our pipeline infrastructure, a class for rendering the teapot, and classes to perform the intermediate blends and the final "blend" that paints to the screen.
Pipeline
Our pipeline setup is fairly involved and needs to be shared with the teapot OBJ, so we will encapsulate it in its own class.
export class Pipeline {
private _uniformBindGroupLayoutPeeling: GPUBindGroupLayout;
private _uniformBindGroupLayoutObject: GPUBindGroupLayout;
private _uniformBindGroupGlobal: GPUBindGroup;
private _renderPipeline: GPURenderPipeline;
private _sampler: GPUSampler;
private _uniformBindGroupPeeling0?: GPUBindGroup;
private _uniformBindGroupPeeling1?: GPUBindGroup;
public get uniformBindGroupLayoutObject(): GPUBindGroupLayout {
return this._uniformBindGroupLayoutObject;
}
public get uniformBindGroupGlobal(): GPUBindGroup {
return this._uniformBindGroupGlobal;
}
public get uniformBindGroupPeeling0(): GPUBindGroup | undefined {
return this._uniformBindGroupPeeling0;
}
public get uniformBindGroupPeeling1(): GPUBindGroup | undefined {
return this._uniformBindGroupPeeling1;
}
public get renderPipeline(): GPURenderPipeline {
return this._renderPipeline;
}
public static async init(device: GPUDevice, modelViewMatrixUniformBuffer: GPUBuffer,
projectionMatrixUniformBuffer: GPUBuffer, normalMatrixUniformBuffer: GPUBuffer,
viewDirectionUniformBuffer: GPUBuffer, lightDirectionUniformBuffer: GPUBuffer, shaderCode: string): Promise<Pipeline> {
const shaderModule: GPUShaderModule = device.createShaderModule({ code: shaderCode });
const sampler: GPUSampler = device.createSampler({
addressModeU: 'clamp-to-edge',
addressModeV: 'clamp-to-edge',
magFilter: 'nearest',
minFilter: 'nearest',
mipmapFilter: 'nearest',
compare: 'greater'
});
const uniformBindGroupLayoutGlobal: GPUBindGroupLayout = device.createBindGroupLayout({
entries: [
{
binding: 0,
visibility: GPUShaderStage.VERTEX,
buffer: {}
},
{
binding: 1,
visibility: GPUShaderStage.VERTEX,
buffer: {}
},
{
binding: 2,
visibility: GPUShaderStage.VERTEX,
buffer: {}
},
{
binding: 3,
visibility: GPUShaderStage.VERTEX,
buffer: {}
},
{
binding: 4,
visibility: GPUShaderStage.VERTEX,
buffer: {}
}
]
});
const uniformBindGroupLayoutObject: GPUBindGroupLayout = device.createBindGroupLayout({
entries: [
{
binding: 0,
visibility: GPUShaderStage.VERTEX,
buffer: {}
},
{
binding: 1,
visibility: GPUShaderStage.FRAGMENT,
buffer: {}
},
{
binding: 2,
visibility: GPUShaderStage.FRAGMENT,
buffer: {}
},
{
binding: 3,
visibility: GPUShaderStage.FRAGMENT,
buffer: {}
},
{
binding: 4,
visibility: GPUShaderStage.FRAGMENT,
buffer: {}
}
]
});
const uniformBindGroupLayoutPeeling: GPUBindGroupLayout = device.createBindGroupLayout({
entries: [
{
binding: 0,
visibility: GPUShaderStage.FRAGMENT,
texture: {
sampleType: "depth"
}
},
{
binding: 1,
visibility: GPUShaderStage.FRAGMENT,
sampler: {
type: 'comparison',
},
}
]
});
const uniformBindGroupGlobal: GPUBindGroup = device.createBindGroup({
layout: uniformBindGroupLayoutGlobal,
entries: [
{
binding: 0,
resource: {
buffer: modelViewMatrixUniformBuffer
}
},
{
binding: 1,
resource: {
buffer: projectionMatrixUniformBuffer
}
},
{
binding: 2,
resource: {
buffer: normalMatrixUniformBuffer
}
},
{
binding: 3,
resource: {
buffer: lightDirectionUniformBuffer
}
},
{
binding: 4,
resource: {
buffer: viewDirectionUniformBuffer
}
}
]
});
const positionAttribDesc: GPUVertexAttribute = {
shaderLocation: 0,
offset: 0,
format: 'float32x3'
};
const positionBufferLayoutDesc: GPUVertexBufferLayout = {
attributes: [positionAttribDesc],
arrayStride: Float32Array.BYTES_PER_ELEMENT * 3,
stepMode: 'vertex'
};
const normalAttribDesc: GPUVertexAttribute = {
shaderLocation: 1,
offset: 0,
format: 'float32x3'
};
const normalBufferLayoutDesc: GPUVertexBufferLayout = {
attributes: [normalAttribDesc],
arrayStride: Float32Array.BYTES_PER_ELEMENT * 3,
stepMode: 'vertex'
};
const layout: GPUPipelineLayout = device.createPipelineLayout(
{
bindGroupLayouts: [uniformBindGroupLayoutGlobal, uniformBindGroupLayoutObject, uniformBindGroupLayoutPeeling]
}
);
const colorState: GPUColorTargetState = {
format: 'bgra8unorm'
};
const pipelineDesc: GPURenderPipelineDescriptor = {
layout: layout,
vertex: {
module: shaderModule,
entryPoint: 'vs_main',
buffers: [positionBufferLayoutDesc, normalBufferLayoutDesc]
},
fragment: {
module: shaderModule,
entryPoint: 'fs_main',
targets: [colorState]
},
primitive: {
topology: 'triangle-list',
frontFace: 'ccw',
cullMode: 'none'
},
depthStencil: {
depthWriteEnabled: true,
depthCompare: 'less',
format: 'depth32float'
}
}
const pipeline: GPURenderPipeline = device.createRenderPipeline(pipelineDesc);
return new Pipeline(uniformBindGroupLayoutPeeling, uniformBindGroupLayoutObject, uniformBindGroupGlobal, sampler, pipeline);
}
public updateDepthPeelingUniformGroup(device: GPUDevice, depthTexture0: GPUTexture, depthTexture1: GPUTexture) {
this._uniformBindGroupPeeling0 = device.createBindGroup({
layout:this._uniformBindGroupLayoutPeeling,
entries: [
{
binding: 0,
resource: depthTexture0.createView()
},
{
binding: 1,
resource:
this._sampler
}
]
});
this._uniformBindGroupPeeling1 = device.createBindGroup({
layout:this._uniformBindGroupLayoutPeeling,
entries: [
{
binding: 0,
resource: depthTexture1.createView()
},
{
binding: 1,
resource:
this._sampler
}
]
});
}
private constructor(uniformBindGroupLayoutPeeling: GPUBindGroupLayout, uniformBindGroupLayoutObject: GPUBindGroupLayout, uniformBindGroupGlobal: GPUBindGroup, sampler: GPUSampler, renderPipeline: GPURenderPipeline) {
this._uniformBindGroupLayoutPeeling = uniformBindGroupLayoutPeeling;
this._uniformBindGroupGlobal = uniformBindGroupGlobal;
this._uniformBindGroupLayoutObject = uniformBindGroupLayoutObject;
this._sampler = sampler;
this._renderPipeline = renderPipeline;
}
}
We construct a pipeline with three bind groups: one for the global lighting and projection uniforms, one for the OBJ's material colors, and one holding the depth texture from the previous render pass.
We also expose a function that lets the application update the two depth textures we alternate between while rendering.
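Usage ends up looking roughly like the snippet below (inside the async setup code shown later; the names match the variables from that setup, so treat this as an abbreviated sketch rather than new code):
// Abbreviated usage sketch of the Pipeline class (names match the render setup shown later).
const pipeline = await Pipeline.init(
    device,
    modelViewMatrixUniformBuffer,
    projectionMatrixUniformBuffer,
    normalMatrixUniformBuffer,
    viewDirectionUniformBuffer,
    lightDirectionBuffer,
    transparencyObjModelWgsl
);
// Whenever the two depth textures are (re)created, rebuild the peeling bind groups:
pipeline.updateDepthPeelingUniformGroup(device, depthTexture0, depthTexture1);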
Teapot
Our Teapot will load in some vertex data from our teapot OBJ and assign some colors to use for our teapot fragment shader. Finally, we will set up our render pipeline.
export class Teapot {
private _positionBuffer: GPUBuffer;
private _normalBuffer: GPUBuffer;
private _uniformBindGroup: GPUBindGroup;
private _indexBuffer?: GPUBuffer;
private _indexSize?: number;
public static async init(device: GPUDevice, pipeline: Pipeline): Promise<Teapot> {
const objResponse = await fetch("./objs/teapot.obj");
const objBlob = await objResponse.blob();
const objText = await objBlob.text();
const objDataExtractor = new ObjDataExtractor(objText);
const positions = objDataExtractor.vertexPositions;
const positionBuffer = createGPUBuffer(device, positions, GPUBufferUsage.VERTEX);
const normals = objDataExtractor.normals;
const normalBuffer = createGPUBuffer(device, normals, GPUBufferUsage.VERTEX);
const indices = objDataExtractor.indices;
const indexBuffer = createGPUBuffer(device, indices, GPUBufferUsage.INDEX);
const indexSize = indices.length;
const ambientUniformBuffer = createGPUBuffer(device, new Float32Array([0.05, 0.01, 0.01, 1.0]), GPUBufferUsage.UNIFORM);
const diffuseUniformBuffer = createGPUBuffer(device, new Float32Array([0.85, 0.05, 0.05, 0.5]), GPUBufferUsage.UNIFORM);
const specularUniformBuffer = createGPUBuffer(device, new Float32Array([1.0, 1.0, 1.0, 1.0]), GPUBufferUsage.UNIFORM);
const shininessUniformBuffer = createGPUBuffer(device, new Float32Array([80.0]), GPUBufferUsage.UNIFORM);
const offsetUniformBuffer = createGPUBuffer(device, new Float32Array([0.0, 0.0, 0.0]), GPUBufferUsage.UNIFORM);
const uniformBindGroup = device.createBindGroup({
layout: pipeline.uniformBindGroupLayoutObject,
entries: [
{
binding: 0,
resource: {
buffer: offsetUniformBuffer
}
},
{
binding: 1,
resource: {
buffer: ambientUniformBuffer
}
},
{
binding: 2,
resource: {
buffer: diffuseUniformBuffer
}
},
{
binding: 3,
resource: {
buffer: specularUniformBuffer
}
},
{
binding: 4,
resource: {
buffer: shininessUniformBuffer
}
}
]
});
return new Teapot(positionBuffer, normalBuffer, uniformBindGroup, indexBuffer, indexSize);
}
public encodeRenderPass(renderPassEncoder: GPURenderPassEncoder, pipeline: Pipeline, peelingTextureIndex: number) {
renderPassEncoder.setPipeline(pipeline.renderPipeline);
renderPassEncoder.setBindGroup(0, pipeline.uniformBindGroupGlobal);
renderPassEncoder.setBindGroup(1, this._uniformBindGroup);
if (peelingTextureIndex % 2 == 0) {
renderPassEncoder.setBindGroup(2, pipeline.uniformBindGroupPeeling0);
} else {
renderPassEncoder.setBindGroup(2, pipeline.uniformBindGroupPeeling1);
}
renderPassEncoder.setVertexBuffer(0, this._positionBuffer);
renderPassEncoder.setVertexBuffer(1, this._normalBuffer);
renderPassEncoder.setIndexBuffer(this._indexBuffer!, 'uint16');
renderPassEncoder.drawIndexed(this._indexSize!);
}
private constructor(positionBuffer: GPUBuffer, normalBuffer: GPUBuffer, uniformBindGroup: GPUBindGroup, indexBuffer: GPUBuffer, indexSize: number) {
this._positionBuffer = positionBuffer;
this._normalBuffer = normalBuffer;
this._uniformBindGroup = uniformBindGroup;
this._indexBuffer = indexBuffer;
this._indexSize = indexSize;
}
}
Our typical encodeRenderPass function now takes an additional peelingTextureIndex parameter that tells us which depth texture we should be reading from. In our pipeline, we take turns reading from one depth texture while writing depth to the other, swapping them each pass.
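In other words, the pass index maps to the two depth textures like this (assuming the attachment setup shown later, where even passes write depth into depthTexture1 and odd passes write into depthTexture0):
// Which depth texture a given peeling pass samples from and writes to (ping-pong).
function depthTexturesForPass(p: number): { read: number; write: number } {
    return p % 2 === 0
        ? { read: 0, write: 1 }  // even pass: sample depthTexture0, write depth to depthTexture1
        : { read: 1, write: 0 }; // odd pass:  sample depthTexture1, write depth to depthTexture0
}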
Blend
This Blend class is responsible for taking in a texture and drawing it to our output texture.
export class Blend {
private _uniformBindGroupLayout: GPUBindGroupLayout;
private _sampler: GPUSampler;
private _positionBuffer: GPUBuffer;
private _pipeline: GPURenderPipeline;
private _uniformBindGroup?: GPUBindGroup;
public static async init(device: GPUDevice, shaderCode: string): Promise<Blend> {
const shaderModule: GPUShaderModule = device.createShaderModule({ code: shaderCode });
const sampler: GPUSampler = device.createSampler({
addressModeU: 'clamp-to-edge',
addressModeV: 'clamp-to-edge',
magFilter: 'linear',
minFilter: 'linear',
mipmapFilter: 'linear'
});
const uniformBindGroupLayout: GPUBindGroupLayout = device.createBindGroupLayout({
entries: [
{
binding: 0,
visibility: GPUShaderStage.FRAGMENT,
texture: {}
},
{
binding: 1,
visibility: GPUShaderStage.FRAGMENT,
sampler: {}
}
]
});
const positions: Float32Array = new Float32Array([
-1, -1, 0, 1,
1, -1, 1, 1,
-1, 1, 0, 0,
1, 1, 1, 0
]);
const positionBuffer: GPUBuffer = createGPUBuffer(device, positions, GPUBufferUsage.VERTEX);
const positionAttribDesc: GPUVertexAttribute = {
shaderLocation: 0,
offset: 0,
format: 'float32x4'
};
const positionBufferLayoutDesc: GPUVertexBufferLayout = {
attributes: [positionAttribDesc],
arrayStride: Float32Array.BYTES_PER_ELEMENT * 4,
stepMode: 'vertex'
};
const layout: GPUPipelineLayout = device.createPipelineLayout({
bindGroupLayouts: [uniformBindGroupLayout]
});
const colorState: GPUColorTargetState = {
format: 'bgra8unorm',
blend: {
color: {
operation: "add",
srcFactor: 'dst-alpha',
dstFactor: 'one',
},
alpha: {
operation: "add",
srcFactor: 'zero',
dstFactor: 'one-minus-src-alpha',
}
}
};
const pipelineDesc: GPURenderPipelineDescriptor = {
layout: layout,
vertex: {
module: shaderModule,
entryPoint: 'vs_main',
buffers: [positionBufferLayoutDesc]
},
fragment: {
module: shaderModule,
entryPoint: 'fs_main',
targets: [colorState]
},
primitive: {
topology: 'triangle-strip',
frontFace: 'ccw',
cullMode: 'none'
}
}
const pipeline = device.createRenderPipeline(pipelineDesc);
return new Blend(uniformBindGroupLayout, sampler, positionBuffer, pipeline);
}
public updateTexture(device: GPUDevice, srcTexture: GPUTexture) {
this._uniformBindGroup = device.createBindGroup({
layout: this._uniformBindGroupLayout,
entries: [
{
binding: 0,
resource: srcTexture.createView()
},
{
binding: 1,
resource: this._sampler
}
]
})
}
public encodeRenderPass(renderPassEncoder: GPURenderPassEncoder) {
renderPassEncoder.setPipeline(this._pipeline);
renderPassEncoder.setBindGroup(0, this._uniformBindGroup);
renderPassEncoder.setVertexBuffer(0, this._positionBuffer);
renderPassEncoder.draw(4, 1);
}
constructor(uniformBindGroupLayout: GPUBindGroupLayout, sampler: GPUSampler, positionBuffer: GPUBuffer, pipeline: GPURenderPipeline) {
this._sampler = sampler;
this._uniformBindGroupLayout = uniformBindGroupLayout;
this._positionBuffer = positionBuffer;
this._pipeline = pipeline;
}
}
When we paint our given texture to our target texture, we use a special blend configuration.
When we paint color, the destination color will be weighted by 1 and the source color will be weighted by the current destination alpha.
outputColor = (srcColor × dstAlpha) + (dstColor × 1)
When we pick the final alpha, the source alpha is weighted by 0 and the destination alpha by (1 − srcAlpha):
outputAlpha = (srcAlpha × 0) + (dstAlpha × (1 − srcAlpha))
= dstAlpha − dstAlpha × srcAlpha
The color blend lets the source color mix in more strongly while the destination is still more transparent, and the alpha blend means the more transparent a source layer is, the less it reduces the destination alpha and the less it influences the transparency of the final result.
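To see how this accumulates front-to-back, here is the same blend applied on the CPU to two peeled layers, starting from the cleared destination (black, alpha 1). The layer colors are made up for illustration:
// CPU restatement of the intermediate blend; src is a premultiplied layer, dst is the accumulation texture.
type RGBA = [number, number, number, number];
function blendUnder(src: RGBA, dst: RGBA): RGBA {
    const [sr, sg, sb, sa] = src;
    const [dr, dg, db, da] = dst;
    return [
        sr * da + dr,  // color: source weighted by destination alpha, destination weighted by 1
        sg * da + dg,
        sb * da + db,
        da * (1 - sa), // alpha: destination alpha shrinks by the layer's opacity
    ];
}
let dst: RGBA = [0, 0, 0, 1];                 // cleared destination: opaque black
dst = blendUnder([0.4, 0.0, 0.0, 0.5], dst);  // nearest layer (premultiplied 50% red) -> [0.4, 0, 0, 0.5]
dst = blendUnder([0.0, 0.3, 0.0, 0.5], dst);  // next layer back                       -> [0.4, 0.15, 0, 0.25]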
Final
The final blend class will paint our semi-transparent texture onto the final background.
export class Final {
private _uniformBindGroupLayout: GPUBindGroupLayout;
private _sampler: GPUSampler;
private _positionBuffer: GPUBuffer;
private _pipeline: GPURenderPipeline;
private _uniformBindGroup?: GPUBindGroup;
public static async init(device: GPUDevice, shaderCode: string): Promise<Final> {
const shaderModule: GPUShaderModule = device.createShaderModule({ code: shaderCode });
const sampler: GPUSampler = device.createSampler({
addressModeU: 'clamp-to-edge',
addressModeV: 'clamp-to-edge',
magFilter: 'linear',
minFilter: 'linear',
mipmapFilter: 'linear'
});
const uniformBindGroupLayout: GPUBindGroupLayout = device.createBindGroupLayout({
entries: [
{
binding: 0,
visibility: GPUShaderStage.FRAGMENT,
texture: {}
},
{
binding: 1,
visibility: GPUShaderStage.FRAGMENT,
sampler: {}
}
]
});
const positions: Float32Array = new Float32Array([
-1, -1, 0, 1,
1, -1, 1, 1,
-1, 1, 0, 0,
1, 1, 1, 0
]);
const positionBuffer: GPUBuffer = createGPUBuffer(device, positions, GPUBufferUsage.VERTEX);
const positionAttribDesc: GPUVertexAttribute = {
shaderLocation: 0,
offset: 0,
format: 'float32x4'
};
const positionBufferLayoutDesc: GPUVertexBufferLayout = {
attributes: [positionAttribDesc],
arrayStride: Float32Array.BYTES_PER_ELEMENT * 4,
stepMode: 'vertex'
};
const layout: GPUPipelineLayout = device.createPipelineLayout({
bindGroupLayouts: [uniformBindGroupLayout]
});
const colorState: GPUColorTargetState = {
format: 'bgra8unorm',
blend: {
color: {
operation: "add",
srcFactor: 'one',
dstFactor: 'one-minus-src-alpha',
},
alpha: {
operation: "add",
srcFactor: 'one',
dstFactor: 'one-minus-src-alpha',
}
}
};
const pipelineDesc: GPURenderPipelineDescriptor = {
layout: layout,
vertex: {
module: shaderModule,
entryPoint: 'vs_main',
buffers: [positionBufferLayoutDesc]
},
fragment: {
module: shaderModule,
entryPoint: 'fs_main',
targets: [colorState]
},
primitive: {
topology: 'triangle-strip',
frontFace: 'ccw',
cullMode: 'none'
}
}
const pipeline = device.createRenderPipeline(pipelineDesc);
return new Final(uniformBindGroupLayout, sampler, positionBuffer, pipeline);
}
public updateTexture(device: GPUDevice, dstTexture: GPUTexture) {
this._uniformBindGroup = device.createBindGroup({
layout: this._uniformBindGroupLayout,
entries: [
{
binding: 0,
resource: dstTexture.createView()
},
{
binding: 1,
resource: this._sampler
}
]
})
}
public encodeRenderPass(renderPassEncoder: GPURenderPassEncoder) {
renderPassEncoder.setPipeline(this._pipeline);
renderPassEncoder.setBindGroup(0, this._uniformBindGroup);
renderPassEncoder.setVertexBuffer(0, this._positionBuffer);
renderPassEncoder.draw(4, 1);
}
constructor(uniformBindGroupLayout: GPUBindGroupLayout, sampler: GPUSampler, positionBuffer: GPUBuffer, pipeline: GPURenderPipeline) {
this._sampler = sampler;
this._uniformBindGroupLayout = uniformBindGroupLayout;
this._positionBuffer = positionBuffer;
this._pipeline = pipeline;
}
}
The color comes entirely from our texture when it is fully opaque (alpha = 1). Otherwise, part of the color comes from the background texture, weighted by exactly 1 − alpha.
The output alpha is fully opaque whenever dst_alpha is 1, since the equation is
final_alpha = src_alpha + (1 − src_alpha) × dst_alpha
With a more transparent background (say dst_alpha = 0.9) we instead get
final_alpha = 0.1 × src_alpha + 0.9
So we only get a semi-transparent final result if both the blended texture and the background texture have some transparency.
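Continuing with plain numbers (made up for illustration), the final composite is just this weighted sum:
// The final blend as arithmetic: source weighted by 1, destination weighted by (1 - srcAlpha).
type RGBA = [number, number, number, number];
function finalBlend(src: RGBA, dst: RGBA): RGBA {
    const k = 1 - src[3];
    return [src[0] + dst[0] * k, src[1] + dst[1] * k, src[2] + dst[2] * k, src[3] + dst[3] * k];
}
// Fully opaque source: the background contributes nothing.
finalBlend([0.4, 0.15, 0.0, 1.0], [0.2, 0.2, 0.2, 1.0]); // -> [0.4, 0.15, 0, 1]
// Semi-transparent source over a background with alpha 0.9:
// final_alpha = src_alpha + (1 - src_alpha) * 0.9 = 0.1 * src_alpha + 0.9
finalBlend([0.4, 0.15, 0.0, 0.5], [0.2, 0.2, 0.2, 0.9]); // -> [0.5, 0.25, 0.1, 0.95]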
Render
Our render function sets up the pipeline, teapot, blend, and final blend objects, and instantiates two depth textures, a destination texture (dstTexture), and a colorTextureForCleanup to use during the blend process.
The blend object samples the colorTextureForCleanup texture, while the final blend object samples the destination texture.
const renderTransparencyExample = async () => {
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter!.requestDevice();
const canvas = document.getElementById("canvas") as HTMLCanvasElement;
const context = canvas.getContext("webgpu");
const canvasConfig: GPUCanvasConfiguration = {
device: device!,
format: navigator.gpu.getPreferredCanvasFormat() as GPUTextureFormat,
usage: GPUTextureUsage.RENDER_ATTACHMENT,
alphaMode: "opaque",
}
context!.configure(canvasConfig);
let angle = 0.0;
const arcball = new Arcball(5.0);
const modelViewMatrix = arcball.getMatrices();
const modelViewMatrixUniformBuffer = createGPUBuffer(device!, new Float32Array(modelViewMatrix), GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);
const viewDir = glMatrix.vec3.fromValues(-10.0, -10.0, -10.0);
const viewDirectionUniformBuffer = createGPUBuffer(device!, new Float32Array(viewDir), GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);
const lightDirectionBuffer = createGPUBuffer(device!, new Float32Array(viewDir), GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);
const modelViewMatrixInverse = glMatrix.mat4.invert(glMatrix.mat4.create(), modelViewMatrix)!;
const normalMatrix = glMatrix.mat4.transpose(glMatrix.mat4.create(), modelViewMatrixInverse);
const normalMatrixUniformBuffer = createGPUBuffer(device!, new Float32Array(normalMatrix), GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);
const projectionMatrix = glMatrix.mat4.perspective(glMatrix.mat4.create(), 1.4, canvas.width / canvas.height, 0.1, 1000.0);
const projectionMatrixUniformBuffer = createGPUBuffer(device!, new Float32Array(projectionMatrix), GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST);
const pipeline = await Pipeline.init(device!, modelViewMatrixUniformBuffer, projectionMatrixUniformBuffer, normalMatrixUniformBuffer, viewDirectionUniformBuffer, lightDirectionBuffer, transparencyObjModelWgsl);
const teapot = await Teapot.init(device!, pipeline);
const final = await Final.init(device!, finalBlendShader);
const blend = await Blend.init(device!, blendShader);
let depthTexture0: GPUTexture | null = null;
let depthStencilAttachment0: GPURenderPassDepthStencilAttachment | undefined = undefined;
let depthTexture1: GPUTexture | null = null;
let depthStencilAttachment1: GPURenderPassDepthStencilAttachment | undefined = undefined;
let dstTexture: GPUTexture | null = null;
let colorTextureForCleanup: GPUTexture | null = null;
async function render() {
const devicePixelRatio = window.devicePixelRatio || 1;
let currentCanvasWidth = canvas.clientWidth * devicePixelRatio;
let currentCanvasHeight = canvas.clientHeight * devicePixelRatio;
let projectionMatrixUniformBufferUpdate = null;
let colorTextureForDebugging: GPUTexture | null = null;
if (currentCanvasWidth != canvas.width || currentCanvasHeight != canvas.height || colorTextureForDebugging == null || dstTexture == null || depthTexture0 == null || depthTexture1 == null) {
canvas.width = currentCanvasWidth;
canvas.height = currentCanvasHeight;
if (depthTexture0 !== null) {
depthTexture0.destroy();
}
if (depthTexture1 !== null) {
depthTexture1.destroy();
}
const depthTextureDesc: GPUTextureDescriptor = {
size: [canvas.width, canvas.height, 1],
dimension: '2d',
format: 'depth32float',
usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.COPY_SRC | GPUTextureUsage.TEXTURE_BINDING
};
depthTexture0 = device!.createTexture(depthTextureDesc);
depthTexture0.label = "DEPTH_0"
depthTexture1 = device!.createTexture(depthTextureDesc);
depthTexture1.label = "DEPTH_1"
pipeline.updateDepthPeelingUniformGroup(device!, depthTexture0, depthTexture1);
depthStencilAttachment0 = {
view: depthTexture1.createView(),
depthClearValue: 1,
depthLoadOp: 'clear',
depthStoreOp: 'store'
};
depthStencilAttachment1 = {
view: depthTexture0.createView(),
depthClearValue: 1,
depthLoadOp: 'clear',
depthStoreOp: 'store'
};
let projectionMatrix = glMatrix.mat4.perspective(glMatrix.mat4.create(),
1.4, canvas.width / canvas.height, 0.1, 1000.0);
projectionMatrixUniformBufferUpdate = createGPUBuffer(device!, new Float32Array(projectionMatrix), GPUBufferUsage.COPY_SRC);
const colorTextureForDstDesc: GPUTextureDescriptor = {
size: [canvas.width, canvas.height, 1],
dimension: '2d',
format: 'bgra8unorm',
usage: GPUTextureUsage.RENDER_ATTACHMENT | GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.COPY_SRC
};
if (colorTextureForCleanup !== null) {
colorTextureForCleanup.destroy();
}
colorTextureForDebugging = device!.createTexture(colorTextureForDstDesc);
colorTextureForDebugging.label ="DEBUG_TEXTURE";
colorTextureForCleanup = colorTextureForDebugging;
if (dstTexture !== null) {
dstTexture.destroy();
}
dstTexture = device!.createTexture(colorTextureForDstDesc);
dstTexture.label ="DEST_TEXTURE";
blend.updateTexture(device!, colorTextureForDebugging);
final.updateTexture(device!, dstTexture);
}
const modelViewMatrix = arcball.getMatrices();
const modelViewMatrixUniformBufferUpdate = createGPUBuffer(device!, new Float32Array(modelViewMatrix), GPUBufferUsage.COPY_SRC);
const modelViewMatrixInverse = glMatrix.mat4.invert(glMatrix.mat4.create(), modelViewMatrix);
const normalMatrix = glMatrix.mat4.transpose(glMatrix.mat4.create(), modelViewMatrixInverse);
const normalMatrixUniformBufferUpdate = createGPUBuffer(device!, new Float32Array(normalMatrix), GPUBufferUsage.COPY_SRC);
const viewDir = glMatrix.vec3.fromValues(-arcball.forward[0], -arcball.forward[1], -arcball.forward[2]);
const viewDirectionUniformBufferUpdate = createGPUBuffer(device!, new Float32Array(viewDir), GPUBufferUsage.COPY_SRC);
const lightDir = glMatrix.vec3.fromValues(Math.cos(angle) * 8.0, Math.sin(angle) * 8.0, 10);
const lightDirectionBufferUpdate = createGPUBuffer(device!, new Float32Array(lightDir), GPUBufferUsage.COPY_SRC);
const colorTexture = context!.getCurrentTexture();
const colorTextureView = colorTexture.createView();
const colorAttachment0: GPURenderPassColorAttachment = {
view: colorTextureForDebugging!.createView(),
clearValue: { r: 0, g: 0, b: 0, a: 0 },
loadOp: 'clear',
storeOp: 'store'
};
const colorAttachment1: GPURenderPassColorAttachment = {
view: colorTextureForDebugging!.createView(),
clearValue: { r: 0, g: 0, b: 0, a: 0 },
loadOp: 'clear',
storeOp: 'store'
};
const cleanUpColorAttachment: GPURenderPassColorAttachment ={
view: dstTexture!.createView(),
clearValue: {r: 0, g: 0, b: 0, a: 1},
loadOp: 'clear',
storeOp: 'store'
}
const blendColorAttachment: GPURenderPassColorAttachment ={
view: dstTexture!.createView(),
clearValue: {r: 0, g: 0, b: 0, a: 0},
loadOp: 'load',
storeOp: 'store'
}
const finalColorAttachment: GPURenderPassColorAttachment ={
view: colorTextureView,
clearValue: {r: 0, g: 0, b: 0, a: 1},
loadOp: 'load',
storeOp: 'store'
}
const renderPassCleanupDesc: GPURenderPassDescriptor = {
colorAttachments: [cleanUpColorAttachment]
};
const renderPassDesc0: GPURenderPassDescriptor = {
colorAttachments: [colorAttachment0],
depthStencilAttachment: depthStencilAttachment0
}
const renderPassDesc1: GPURenderPassDescriptor = {
colorAttachments: [colorAttachment1],
depthStencilAttachment: depthStencilAttachment1
}
const renderPassBlend: GPURenderPassDescriptor = {
colorAttachments: [blendColorAttachment]
}
const renderPassFinal: GPURenderPassDescriptor = {
colorAttachments: [finalColorAttachment]
}
const commandEncoder = device!.createCommandEncoder();
if (projectionMatrixUniformBufferUpdate != null) {
commandEncoder.copyBufferToBuffer(projectionMatrixUniformBufferUpdate, 0, projectionMatrixUniformBuffer, 0, 16 * Float32Array.BYTES_PER_ELEMENT);
}
commandEncoder.copyBufferToBuffer(modelViewMatrixUniformBufferUpdate, 0, modelViewMatrixUniformBuffer, 0, 16 * Float32Array.BYTES_PER_ELEMENT);
commandEncoder.copyBufferToBuffer(normalMatrixUniformBufferUpdate, 0, normalMatrixUniformBuffer, 0, 16 * Float32Array.BYTES_PER_ELEMENT);
commandEncoder.copyBufferToBuffer(viewDirectionUniformBufferUpdate, 0, viewDirectionUniformBuffer, 0, 3 * Float32Array.BYTES_PER_ELEMENT);
commandEncoder.copyBufferToBuffer(lightDirectionBufferUpdate, 0, lightDirectionBuffer, 0, 3 * Float32Array.BYTES_PER_ELEMENT);
const passEncoderCleanup = commandEncoder.beginRenderPass(renderPassCleanupDesc);
passEncoderCleanup.setViewport(0, 0, canvas.width, canvas.height, 0, 1);
passEncoderCleanup.end();
for (let p = 0; p < 6; p++) {
const passEncoder0 = p % 2 == 0 ? commandEncoder.beginRenderPass(renderPassDesc0) : commandEncoder.beginRenderPass(renderPassDesc1);
passEncoder0.setViewport(0, 0, canvas.width, canvas.height, 0, 1);
teapot.encodeRenderPass(passEncoder0, pipeline, p);
passEncoder0.end()
const passEncoder1 = commandEncoder.beginRenderPass(renderPassBlend);
passEncoder1.setViewport(0, 0, canvas.width, canvas.height, 0, 1);
blend.encodeRenderPass(passEncoder1);
passEncoder1.end()
}
const finalEncoder = commandEncoder.beginRenderPass(renderPassFinal);
final.encodeRenderPass(finalEncoder);
finalEncoder.end();
device!.queue.submit([commandEncoder.finish()]);
await device!.queue.onSubmittedWorkDone();
if (projectionMatrixUniformBufferUpdate) {
projectionMatrixUniformBufferUpdate.destroy();
}
modelViewMatrixUniformBufferUpdate.destroy();
normalMatrixUniformBufferUpdate.destroy();
viewDirectionUniformBufferUpdate.destroy();
lightDirectionBufferUpdate.destroy();
angle += 0.01;
requestAnimationFrame(render);
}
new Controls(canvas, arcball, render);
requestAnimationFrame(render);
}
We have five types of render passes: cleanup, rendering against depth texture 0, rendering against depth texture 1, the blending pass, and the final pass; each has an appropriate destination texture.
Our first render pass clears our destination texture, giving it an alpha of 1 and a completely black RGB color.
Then we go through six iterations of depth peeling. We first render the teapot with one of two render pass descriptors, which alternate between iterations; the output of this pass is stored in colorTextureForDebugging. Then the blend pass reads what is stored in colorTextureForDebugging and writes into our destination texture.
Once we are done with our six iterations, we will use our final blend class to take the data accumulated in the destination texture and write it onto the screen.
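To summarize how the textures are routed across these five pass types (a recap of the descriptors above, not additional code):
// Per-frame pass targets, as configured by the render pass descriptors above.
const passTargets = {
    cleanup:  { color: "dstTexture (cleared to opaque black)" },
    peelEven: { color: "colorTextureForDebugging", depthRead: "depthTexture0", depthWrite: "depthTexture1" },
    peelOdd:  { color: "colorTextureForDebugging", depthRead: "depthTexture1", depthWrite: "depthTexture0" },
    blend:    { color: "dstTexture (loadOp: 'load', accumulating)" },
    final:    { color: "canvas (context.getCurrentTexture())" },
} as const;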
Conclusion
In this article, we saw how transparency can be rendered using depth peeling. We performed multiple passes, rendering successive layers into a texture to accumulate color while storing each pass's depth values to select which fragments belong to the next layer. Finally, we took the accumulated destination texture and rendered it to the screen.