This cannot be achieved with wgslFn() because it requires changing the main function and the output struct, which are generated outside the user's control. I am familiar with the output nodes, with which I can write to multiple render targets, but not (it seems) to the built-in frag_depth location.
Now, I am able to make this work in Three.js by manually manipulating the shader code produced by the node system before it gets executed. However, I cannot figure out a way to do this depth writing with the current node system, even using wgslFn(). I was wondering whether it has been implemented, and whether it is possible to do it at all with the current nodes?
This has not yet been implemented; we would certainly need a Node for this, perhaps something like depthPixel. This could be included in a wgslFn().
One can render a depthTexture. I will use this later to do scattering.
Maybe a depthTexture can also help you. What would you like to do?
In itself, it would be practical if we could also influence the depth in wgslFn. In WebGL I use a logarithmic depth buffer, and I have to take that into account in the shaders.
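For reference, the logarithmic mapping itself is simple to reproduce on the CPU. This is a minimal sketch of the formula three.js uses in its WebGL logdepthbuf shader chunks (the function names here are mine, not three.js API):

```javascript
// Sketch of the logarithmic depth mapping used by three.js's WebGL
// logarithmicDepthBuffer option: depth = log2(1 + w) * logDepthBufFC * 0.5,
// where w is the clip-space w (roughly the view-space distance) of the fragment.
function logDepthBufFC(far) {
  // Uniform that three.js derives from the camera far plane.
  return 2.0 / (Math.log(far + 1.0) / Math.LN2);
}

function logarithmicFragDepth(w, far) {
  // This is the value the fragment shader writes to gl_FragDepthEXT.
  return Math.log2(1.0 + w) * logDepthBufFC(far) * 0.5;
}

// A fragment at the far plane lands at depth 1.0; one at w = 0 at depth 0.0.
console.log(logarithmicFragDepth(1.0e6, 1.0e6)); // ≈ 1
console.log(logarithmicFragDepth(0.0, 1.0e6)); // → 0
```

This is exactly why writable frag_depth matters for large scale differences: the shader has to output this remapped value instead of the fixed-function depth.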
Thanks @sunag for confirming. Is this something worth making a feature request for? (I'm not familiar with the practices around node system development.)
I imagined it could be part of the NodeMaterial.outputNode and OutputStruct, rather than wgslFn. At least that was the first place I looked, as a user. I am already able to write to two different render targets using those, just not to the special-case builtin frag_depth location. So it feels like we are very close already
Thanks for your answer @Attila_Schroeder. Yes, that's quite a similar use case to mine, but how to do it in WebGPU? My particular case is just a mess of rendering passes and post-processing, and I sometimes need to record the depth for the next pass, and sometimes not. On the WebGL side we just write to gl_FragDepth or gl_FragDepthEXT (extension), like you say.
So do you have a solution for your logarithmic depth for WebGPU?
No, I haven't dealt with the depth buffer in WebGPU yet. This will also be a very important element for me if I want to port my WebGL planet generator to WebGPU.
So far I have focused heavily on texture generation in WebGPU, because the compute shaders and the ability to store textures directly have added enormous possibilities. With r159 it should then be enough to create a v1 of my ocean generator, which I will then upload to GitHub.
With r159 we will be able to use varyings to communicate from the vertexStage (positionNode) to the fragmentStage (colorNode), like in WebGL. That possibility was missing until now.
I've had the topic of the depth buffer in the back of my mind for some time, but since I don't need it in the current project, I haven't brought it up as an extension suggestion yet. In addition, the way I use it above also requires varyings, and we will have them with r159. So an important step for their usability is coming.
It would be a nice thing to be able to write the fragment depth into the shader. It is absolutely important for dealing with large differences in scale.
After the many times I've asked you about extensions in the past few months, I hardly had the courage to ask.
But then I have twice as much reason to be happy. Because of the varyings you have included in r159 and the frag_depth extension in the near future.
Is it already Christmas?
Now I can hardly think of anything that I miss from the WebGL world. You are currently switching on the expansion turbo as the Christmas season approaches.
Sorry @Attila_Schroeder, I forgot about this question. Yes, I am using materials in WebGPU with depth textures, and it is working so far. I remember I had to do some workaround hacks when using them with wgslFn, but what are the issues you are getting?
Do you have a short code example? I can't get the depthTexture into the wgslFn shader. I made a mini CodePen, and line 40 caused the shader to stop working.
I almost forgot that, sunag has integrated the varyings. I integrated these extensively into my project
@Attila_Schroeder Oh, that was a rabbit hole
I don't fully understand what goes on under the surface at every step, and also I'm still on r157, so take this with a pinch of salt, but this is what I found:
You probably need to render it to a render target if you provide a depth texture. In your example, that backend-error goes away if you uncomment your render target code and render only to that, not to the screen at all. This is also what I do in my own project. Then, I copy the final render target over to the screen using a simple copy pass.
After that, you still run into an error in WGSL compilation, which I assume is a bug in the current node system. This bug I also ran into myself, which I solved using the mentioned workaround.
Basically, when you pass a depth texture as a parameter to wgslFn(), it interprets it in a weird way: the generated code samples the depth texture at the available UVs and passes the resulting float in place of the buffer itself. This gives you a compilation error like "expected texture_depth_2d but got f32". My workaround is as follows:
const colorNodeParameters = {
  uv: Nodes.uv(),
  // NB: This parameter is required by the Three.js node system to pass the
  // actual depth buffer to the shader. The parameter gets converted to a
  // depth sample (float). The label, however, allows us to access the entire
  // depth buffer by that identifier.
  // TODO: Do this without hacks when supported by Three.js.
  hack: Nodes.texture(this.depthTexture).label('depthBuffer'),
  hack_sampler: Nodes.texture(this.depthTexture),
};

const colorNode = Nodes.wgslFn(`
  fn mainColor(
    uv: vec2<f32>,
    hack: f32
  ) -> vec4<f32> {
    var correctedUv: vec2<f32> = vec2(uv.x, 1.0 - uv.y);
    var depthSample: f32 = textureSample(depthBuffer, depthBuffer_sampler, correctedUv);
    return vec4(...);
  }
`);
Notice that I declare my hack parameter as f32, and never use it. Instead, I magically access depthBuffer, because it becomes a global uniform, named by my label. If you inspect the generated WGSL, that parameter is passed as the actual depth value at the fragment, but since I need to read depth values at other coordinates, I had to implement this workaround.
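Side note on using the sampled value: with a standard (non-reversed) perspective projection, the non-linear [0, 1] depth sample can be converted back to view-space Z. The formula below mirrors the perspectiveDepthToViewZ helper found in three.js's shader chunks; the JavaScript version here is just a CPU-side illustration, not something you would call in a shader:

```javascript
// Convert a [0, 1] depth-buffer sample back to view-space Z for a standard
// perspective projection (view-space Z is negative, looking down -Z).
// Mirrors three.js's perspectiveDepthToViewZ shader helper.
function perspectiveDepthToViewZ(depth, near, far) {
  return (near * far) / ((far - near) * depth - far);
}

// A sample of 0 sits on the near plane, 1 on the far plane:
console.log(perspectiveDepthToViewZ(0.0, 0.1, 100.0)); // → -0.1
console.log(perspectiveDepthToViewZ(1.0, 0.1, 100.0)); // ≈ -100
```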
I should probably make a separate bug report about this behaviour unless it has already been fixed in newer versions, but maybe it might be of some help to you meanwhile
Oh dear, that's a rather scary solution. I think I'll wait for r160; then "texture_depth_2d" will work. With r159 there are some new things that are worth it, for me the varyings to send values from the vertexStage shader (positionNode) to the fragmentStage shader (colorNode), as you know from varyings or out/in in WebGL. In addition, other things have been fixed that were not yet possible with r157.
You don't need to create an issue. Here is the link where you can see, below, the PR that sunag created for it.
But I'm glad that someone else besides me is already so enthusiastic about WebGPU.
You put a lot of effort into the workaround and I have to add it to my reference archive.
Thanks, good to know this is already in hand and no further effort is required from me other than to wait and get all the goodies later. Meanwhile my scary solution is working great for its purpose.
Just thought I'd still mention that I think your application would also work if you just define the depth texture parameter as an f32 in the WGSL code and use it directly for your calculations. As I explained, it gets passed as the depth value for that fragment, so if that's all you need to access, it would work fine already in current versions, without the need for further hacks.
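For completeness, that simpler route would look roughly like this (same parameter setup as in my snippet above, minus the sampler; the names are illustrative):

```wgsl
fn mainColor(
  uv: vec2<f32>,
  depth: f32  // the node system passes the depth sample at this fragment's UV
) -> vec4<f32> {
  // Use the pre-sampled value directly, e.g. to visualize it;
  // no manual textureSample call or labeled buffer needed.
  return vec4(vec3(depth), 1.0);
}
```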
Also good to know about the varyings. I have been content with the implicit varyings available through the node system for now, but full custom support is very welcome. Unfortunately, we use a somewhat customized version of Three.js to fulfill our needs, so upgrading the Three.js version is a major project. But it looks like r159 or r160 will be worth it again.
And yes, glad to give some attention to the WebGPU implementation even if not (yet) able to actually contribute. I have also found your previous posts very useful in the absence of documentation (your ocean looks great).