Does the current Nodes system support writing to depth buffer in WebGPU?

In WebGL, we can set the fragment depth from within the fragment shader, which writes to the depth buffer:

gl_FragDepthEXT = 0.123;

In WGSL, it can be done as follows:

struct OutputStruct {
	@location(0) color: vec4<f32>,
	@builtin(frag_depth) depth: f32
}

@fragment
fn main() -> OutputStruct {
	var output: OutputStruct;
	output.color = vec4<f32>(0.1, 0.2, 0.3, 0.4);
	output.depth = 0.123;
	return output;
}
This cannot be achieved with wgslFn() because it requires changing the main function and the output struct, which are generated outside the user’s control. I am familiar with the output nodes, with which I can write to multiple render targets, but not (it seems) to the built-in frag_depth location.

Now, I am able to make this work in Three.js by manually manipulating the shader code produced by the node system before it gets executed. However, I cannot figure out a way to do this depth writing with the current node system, even using wgslFn(). I was wondering whether it has been implemented, and whether it is possible to do it at all with the current nodes?

This has not yet been implemented; we would certainly need a Node for this, perhaps something like depthPixel. It could then be passed to wgslFn() via its includes.

EDIT: Added here: WebGPURenderer: Depth Pixel & Logarithmic Depth Buffer by sunag · Pull Request #27243 · mrdoob/three.js · GitHub
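For readers landing here later, usage might end up looking roughly like this (a hypothetical pseudocode sketch based on the PR title; the `depthPixel` / `material.depthNode` names are assumptions, check the PR itself for the actual API):

```js
// Hypothetical sketch -- names assumed, not a confirmed API.
import { MeshBasicNodeMaterial, float } from 'three/nodes';

const material = new MeshBasicNodeMaterial();

// Write a constant fragment depth, analogous to gl_FragDepth = 0.123 in WebGL.
material.depthNode = float( 0.123 );
```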


One can render a depthTexture. I will use this later to do scattering.
Maybe a depthTexture can also help you. What would you like to do?
In itself, it would be practical if we could also influence the depth in wgslFn. In WebGL I use a logarithmic depth buffer and have to take that into account in my shaders.


const logDepthBufFC = 2.0 / ( Math.log( camera.far + 1.0 ) / Math.LN2 );
//vertex shader
out float vFragDepth;
// in main
vFragDepth = 1.0 + gl_Position.w;
//fragment shader
in float vFragDepth;
//in main
gl_FragDepth = log2(vFragDepth) * logDepthBufFC * 0.5;

Thanks @sunag for confirming. Is this something worth making a feature request for? (I’m not familiar with the practices around the node system development.)

I imagined it could be part of the NodeMaterial.outputNode and OutputStruct, rather than wgslFn. At least that was the first place I looked, as a user. I am already able to write to two different render targets using those, just not to the special-case builtin frag_depth location. So it feels like we are very close already :slight_smile:

Thanks for your answer @Attila_Schroeder. Yes, that’s quite a similar use case to mine, but how to do it in WebGPU? :slight_smile: My particular case is just a mess of rendering passes and post processing, and I sometimes need to record the depth for the next pass, and sometimes not. On the WebGL side we just write to gl_FragDepth or gl_FragDepthEXT (extension), like you say.

So do you have a solution for your logarithmic depth for WebGPU?

No, I haven’t dealt with the depth buffer in WebGPU yet. It will also be a very important element for me when I port my WebGL planet generator to WebGPU.
So far I have focused heavily on texture generation in WebGPU, because the compute shaders and the ability to store textures directly have added enormous possibilities. With r159 it could then be enough to create a v1 of my ocean generator, which I will then upload to GitHub.

With r159 we will be able to use varyings to communicate from the vertex stage (positionNode) to the fragment stage (colorNode), like in WebGL. That possibility was missing until now.

I’ve had the topic of the depth buffer in the back of my mind for some time, but since I don’t need it in my current project, I haven’t brought it up as an extension suggestion yet. In addition, the way I use it above also requires varyings, and we will have them with r159, so an important step toward its usability is coming.
It would be a nice thing to be able to write the fragment depth from within the shader. It is absolutely important for dealing with large differences in scale.


I’m already doing this :slight_smile: … I’ll post the PR link here soon.


After the many times I’ve asked you about extensions in the past few months, I hardly had the courage to ask :sweat_smile:
But then I have twice as much reason to be happy: the varyings you have included in r159, and the frag_depth extension coming in the near future.


I tried to create a depthTexture like in the webgpu example “webgpu_depth_texture”. The example is very clear and short.

As soon as I add the material to the composer, as I do with other materials to control textures, I get error messages.

Have you tried creating a depthTexture in webgpu?

Is it already Christmas? :christmas_tree::smile:
Now I can hardly think of anything I miss from the WebGL world. You are really switching on the feature turbo as the Christmas season approaches.


Wow, that speed! Thank you @sunag, it’s Christmas indeed! :christmas_tree:
This looks perfect, can’t wait for r159 if this will be included in the release.


Sorry @Attila_Schroeder, I forgot about this question. Yes, I am using materials in WebGPU with depth textures, and it is working so far. I remember I had to do some workaround hacks when using them with wgslFn, but what are the issues you are getting?
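Roughly, the setup I mean looks like this (a simplified sketch along the lines of the webgpu_depth_texture example; variable names are illustrative and the exact API may differ between releases):

```js
// Sketch: render into a target that carries a depth texture (names illustrative).
const renderTarget = new THREE.RenderTarget( width, height );
renderTarget.depthTexture = new THREE.DepthTexture( width, height );

renderer.setRenderTarget( renderTarget );
renderer.render( scene, camera );
renderer.setRenderTarget( null );

// renderTarget.depthTexture can now be sampled in a later pass.
```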

Hi Berthur,

do you have a short code example? I can’t get the depthTexture into the wgslFn shader. I made a mini CodePen, and line 40 caused the shader to stop working.

I almost forgot that sunag has integrated the varyings. I have integrated them extensively into my project.

@Attila_Schroeder Oh, that was a rabbit hole :smiley:
I don’t fully understand what goes on under the surface on every step, and also I’m still on r157 so take this with a pinch of salt, but this is what I found:

  1. You probably need to render it to a render target if you provide a depth texture. In your example, that backend-error goes away if you uncomment your render target code and render only to that, not to the screen at all. This is also what I do in my own project. Then, I copy the final render target over to the screen using a simple copy pass.

  2. After that, you still run into an error in WGSL compilation, which I assume is a bug in the current node system. This bug I also ran into myself, which I solved using the mentioned workaround.

Basically, when you pass a depth texture as a parameter to wgslFn(), it interprets it in an unexpected way: the generated code samples the depth texture at the available UVs and passes the resulting float in place of the buffer itself. This gives you a compilation error like ‘expected texture_depth_2d but got f32’. My workaround is as follows:

const colorNodeParameters = {
	uv: Nodes.uv(),
	// NB: This parameter is required by the Three.js node system to pass the actual
	// depth buffer to the shader. The parameter gets converted to a depth sample (float).
	// The label, however, allows us to access the entire depth buffer by that identifier.
	// TODO: Do this without hacks when supported by Three.js.
	hack: Nodes.texture(this.depthTexture).label('depthBuffer'),
	hack_sampler: Nodes.texture(this.depthTexture)
};

const colorNode = Nodes.wgslFn(`
	fn mainColor(
		uv: vec2<f32>,
		hack: f32
	) -> vec4<f32> {
		var correctedUv: vec2<f32> = vec2(uv.x, 1.0 - uv.y);
		var depthSample: f32 = textureSample(depthBuffer, depthBuffer_sampler, correctedUv);
		return vec4(...);
	}
`)(colorNodeParameters);
Notice that I declare my hack parameter as f32, and never use it. Instead, I magically access depthBuffer, because it becomes a global uniform, named by my label. If you inspect the generated WGSL, that parameter is passed as the actual depth value at the fragment, but since I need to read depth values at other coordinates, I had to implement this workaround.

I should probably make a separate bug report about this behaviour unless it has already been fixed in newer versions, but maybe it might be of some help to you meanwhile :slight_smile:


Oh dear, that’s a rather scary solution. I think I’ll wait for r160; then “texture_depth_2d” will work. With r159 there are some new things that are worth it, for me the varyings to send values from the vertex-stage shader (positionNode) to the fragment-stage shader (colorNode), as you know from varyings or out/in in WebGL. In addition, other things have been fixed that were not yet possible with r157.

You don’t need to create an issue. Here is the link where you can see the PR below that sunag created for it.

But I’m glad that someone besides me is already so enthusiastic about WebGPU.
You put a lot of effort into the workaround, and I have to add it to my reference archive.


Thanks, good to know this is already in hand and no further effort is required from me than to wait and get all the goodies later :slight_smile:. Meanwhile my scary solution is working great for its purpose.

Just thought I’d still mention that I think your application would also work if you just define the depth texture parameter as an f32 in the WGSL code and use it directly in your calculations. As I explained, it gets passed as the depth value for that fragment, so if that’s all you need to access, it would work fine already in current versions, without the need for further hacks.

Also good to know about the varyings. I have been content with the implicit varyings available through the node system for now, but full custom support is very welcome. Unfortunately, we use a somewhat customized version of Three.js to fulfil our needs, so upgrading the three.js version is a major project :confused:. But it looks like r159 or r160 will be worth it again.

And yes, glad to give some attention to the WebGPU implementation even if not (yet) able to actually contribute :smile:. I have also found your previous posts very useful in the absence of documentation (your ocean looks great).