I’m trying to create a 512 x 512 depthTexture with mipmaps.
```javascript
this.renderTarget = new THREE.WebGLRenderTarget( 512, 512 );
this.renderTarget.stencilBuffer = false;

// attach a depth texture so the depth buffer can be sampled later
this.renderTarget.depthTexture = new THREE.DepthTexture();
this.renderTarget.depthTexture.type = THREE.FloatType;
this.renderTarget.depthTexture.format = THREE.DepthFormat;
this.renderTarget.depthTexture.minFilter = THREE.NearestFilter;
this.renderTarget.depthTexture.magFilter = THREE.NearestFilter;

// this is what I want: a full mip chain on the depth texture
this.renderTarget.depthTexture.generateMipmaps = true;
this.renderTarget.depthTexture.needsUpdate = true;
```
However, the mipmaps lead to the following error when I try to use this depthTexture in a shader with three.webgpu.js. Is there something I’m missing when configuring the depthTexture?
The mip level count (10) of [TextureView of Texture (unlabeled 512x512 px, TextureFormat::Depth32Float)] used as attachment is greater than 1.
- While validating depthStencilAttachment.
- While encoding [CommandEncoder renderContext_0].BeginRenderPass([null]).
I deactivated the stencil buffer right at the beginning and thought that would solve the problem.
I thought I’d ask here in the forum and save myself some time and effort in error analysis.
Mipmaps aren’t classically associated with depth maps, since filtering/interpolating depth values is a destructive process.
If I downsample a depth of 1 and a depth of 2, an averaged depth of 1.5 isn’t super useful for most operations involving depth.
To downsample depth maps you would instead want to apply some kind of filter that picks the max or min sample, depending on your needs.
I did recently see a legitimate use of downsampled depth maps for removing shadow acne in shadow mapping, so it’s not necessarily bad… it’s just enough of a niche operation that there isn’t really automatic/hardware support for it.
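For illustration, one reduction step of that kind could look roughly like this in WGSL (just a sketch, not tested: it assumes the depth has already been copied into a plain r32float texture, since you can’t write to a depth format from a compute shader, and all the names are made up):

```javascript
// Hypothetical sketch: one step of a depth-pyramid reduction, with the WGSL
// embedded as a JS string the way WebGPU shader sources usually are.
// Each invocation reads a 2x2 block of the previous level and keeps the
// maximum; swap max() for min() if your use case needs the nearest depth.
const depthReduceWGSL = /* wgsl */ `
  @group(0) @binding(0) var srcMip : texture_2d<f32>;                        // previous level
  @group(0) @binding(1) var dstMip : texture_storage_2d<r32float, write>;    // level being built

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let dstSize = textureDimensions(dstMip);
    if (id.x >= dstSize.x || id.y >= dstSize.y) { return; }

    let base = id.xy * 2u;
    let d00 = textureLoad(srcMip, base,                0).r;
    let d10 = textureLoad(srcMip, base + vec2(1u, 0u), 0).r;
    let d01 = textureLoad(srcMip, base + vec2(0u, 1u), 0).r;
    let d11 = textureLoad(srcMip, base + vec2(1u, 1u), 0).r;

    // "farthest of the four": conservative for occlusion culling when a
    // larger depth value means farther away.
    let reduced = max(max(d00, d10), max(d01, d11));
    textureStore(dstMip, id.xy, vec4(reduced, 0.0, 0.0, 0.0));
  }
`;
```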
edit: here’s the video where I saw downsampled depth maps used to solve shadow acne: https://youtu.be/jusWW2pPnA0?si=xUE1IQwppJpE5yRr&t=998
Hi @manthrax, nice to be writing to each other again.
The reason I want a depth pyramid is occlusion culling on the GPU with a compute shader. The lower depth mips are used for occlusion culling of objects that are further away. You know, I have high expectations of myself.
It’s not super important and I can do just fine without it. If someone had already done this, I would have been able to implement it quickly, because the compute shader already has everything it needs. For the moment I’ve hardcoded the mip level to 0.
It is for this:
A 1:1 comparison between the visibility frustum check on the CPU side and on the GPU side. In total there are over 1 billion triangles.
CPU side
GPU side
In total there are so many dragons that I hit the buffer size limit. Nothing more is possible unless I modify the device settings of three.js.
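(For reference, at the raw WebGPU level those buffer ceilings are device limits that have to be requested when the device is created. This is the plain WebGPU call rather than a three.js API, and whether three.js accepts a pre-configured device like this may depend on the version, so treat it as a sketch.)

```javascript
// Sketch: asking the adapter for larger buffer limits than the defaults.
// The limit names are part of the WebGPU spec; the values are just examples.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice({
  requiredLimits: {
    maxBufferSize: adapter.limits.maxBufferSize,
    maxStorageBufferBindingSize: adapter.limits.maxStorageBufferBindingSize,
  },
});
```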
The GPU-side frustum check is already so good that culling is more of a luxury than a necessity. Nevertheless, I also implemented culling.
Right! Yeah, that would definitely be cool… The depth pyramid isn’t just a downsampling/average of the full depth buffer though, yeah? It’s more like the “closest” or “furthest” of the 4 samples or something? So you won’t get automatic hardware-generated mipmaps like you do with regular textures just by setting .generateMipmaps = true… (I’m only guessing, since I have only a vague idea of how the depth pyramid is used.) You’ll probably have to generate the mips yourself with a shader, I’m thinking? (I googled “threejs depth pyramid” and didn’t get any hits.)
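Something in this direction is what I’d imagine for generating them yourself at the raw WebGPU level (again just a sketch I haven’t run; `reducePipeline` would be a compute pipeline built from a reduction shader like the one I sketched above, and the r32float format and max reduction are assumptions). Note that a 512×512 texture gets 10 levels, which is exactly the mip level count your error message complains about:

```javascript
// Create an r32float texture with a full mip chain, then run the reduction
// pass once per level, reading level i - 1 and writing level i.
const mipLevelCount = Math.floor(Math.log2(512)) + 1; // 10 levels for 512x512
const pyramid = device.createTexture({
  size: [512, 512],
  mipLevelCount,
  format: 'r32float',
  usage: GPUTextureUsage.TEXTURE_BINDING | GPUTextureUsage.STORAGE_BINDING,
});

const encoder = device.createCommandEncoder();
for (let level = 1; level < mipLevelCount; level++) {
  const bindGroup = device.createBindGroup({
    layout: reducePipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: pyramid.createView({ baseMipLevel: level - 1, mipLevelCount: 1 }) },
      { binding: 1, resource: pyramid.createView({ baseMipLevel: level, mipLevelCount: 1 }) },
    ],
  });
  const size = 512 >> level;
  const pass = encoder.beginComputePass();
  pass.setPipeline(reducePipeline);
  pass.setBindGroup(0, bindGroup);
  pass.dispatchWorkgroups(Math.ceil(size / 8), Math.ceil(size / 8));
  pass.end();
}
device.queue.submit([encoder.finish()]);
```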
I like the look of what you are cooking!!!
It really is about mipmaps, there’s just no interpolation between them.
You select them in the compute shader depending on the conditions. In WebGPU, in addition to textureSample for filterable textures, there is also the textureSampleLevel command, which lets you set the mip level explicitly. The sampler then does exactly what you suspect and picks the nearest sample because of the nearest filter.
There is no need for linear filtering for occlusion; that would be a waste of resources. I read in the Unreal documentation about the Nanite system that they do it that way, so I thought I would do the same.
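To make that concrete, such a test in the culling compute shader could look roughly like this (a simplified sketch rather than my actual shader; the binding names, the UV-space bounds and the max-reduction convention are just for illustration, and with r32float the bind group layout has to declare the texture as unfilterable-float and the sampler as non-filtering, i.e. nearest):

```javascript
// Sketch of a Hi-Z style visibility test inside a culling compute shader.
// Assumes an r32float depth pyramid built with a max reduction and a nearest
// sampler; all binding and variable names are made up here.
const occlusionTestWGSL = /* wgsl */ `
  @group(0) @binding(0) var depthPyramid : texture_2d<f32>;
  @group(0) @binding(1) var depthSampler : sampler; // nearest / non-filtering

  // minUV/maxUV: screen-space bounds of the object's bounding box,
  // nearestDepth: depth of its closest point (larger = farther convention).
  fn isOccluded(minUV : vec2<f32>, maxUV : vec2<f32>, nearestDepth : f32) -> bool {
    // Pick the mip whose texels roughly cover the box, so that four
    // corner samples are enough to bound the stored occluder depth.
    let baseSize = vec2<f32>(textureDimensions(depthPyramid, 0));
    let sizePx   = (maxUV - minUV) * baseSize;
    let level    = ceil(log2(max(max(sizePx.x, sizePx.y), 1.0)));

    let d0 = textureSampleLevel(depthPyramid, depthSampler, minUV, level).r;
    let d1 = textureSampleLevel(depthPyramid, depthSampler, vec2(maxUV.x, minUV.y), level).r;
    let d2 = textureSampleLevel(depthPyramid, depthSampler, vec2(minUV.x, maxUV.y), level).r;
    let d3 = textureSampleLevel(depthPyramid, depthSampler, maxUV, level).r;

    // The pyramid stores the farthest depth per texel, so if the object's
    // nearest point is behind even that, nothing of it can be visible.
    let farthest = max(max(d0, d1), max(d2, d3));
    return nearestDepth > farthest;
  }
`;
```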
But it’s more of a luxury, because even without mipmaps things are already so far in the green that the fps counter doesn’t even twitch anymore and stays stuck at 120 fps. It does that even without any culling.
With this I could now build a SpaceX Starship in Blender where each heat shield tile is a 3D object instead of being faked with a texture. Or huge forests.
three.webgpu.js is extremely powerful.
Are you using metis to create your meshlets?
Yes, exactly, metis is ideal for this.
And some extensions in three.js, not all of which are available in r171 yet. But I’ve already made PRs so that they will come.
@manthrax I also thought about loading the depthTexture into a compute shader and simply creating the lower LODs there. The compute shader could then store them in a mip texture.
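That loading step would be something like this (sketch only, names are placeholders): since WebGPU doesn’t allow writing to a depth format from a compute shader or copying Depth32Float into r32float, a small compute pass reads the depth attachment and writes it into level 0 of the float pyramid.

```javascript
const copyDepthWGSL = /* wgsl */ `
  @group(0) @binding(0) var sceneDepth  : texture_depth_2d;                      // the render target's depthTexture
  @group(0) @binding(1) var pyramidMip0 : texture_storage_2d<r32float, write>;   // level 0 of the float pyramid

  @compute @workgroup_size(8, 8)
  fn main(@builtin(global_invocation_id) id : vec3<u32>) {
    let size = textureDimensions(pyramidMip0);
    if (id.x >= size.x || id.y >= size.y) { return; }
    let depth = textureLoad(sceneDepth, id.xy, 0); // textureLoad on a depth texture returns f32
    textureStore(pyramidMip0, id.xy, vec4(depth, 0.0, 0.0, 0.0));
  }
`;
```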
But I have decided not to do that for now, because the gain would be very small relative to the additional effort of yet another compute shader.
I have now added backface culling, which gains more than the depthTexture LODs would.
But I don’t notice a difference because I’m already constantly at 120 fps.
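For meshlets, the backface test is usually a normal cone check per meshlet rather than per triangle; something in this direction (the Meshlet struct and field names are only illustrative, not my exact code):

```javascript
const coneCullWGSL = /* wgsl */ `
  struct Meshlet {
    coneApex   : vec3<f32>,  // point the cone is anchored at (world space)
    coneAxis   : vec3<f32>,  // average triangle normal of the meshlet
    coneCutoff : f32,        // cos of the cone half angle (sign-flipped for the test)
  };

  // A meshlet is entirely backfacing when its whole normal cone points away
  // from the camera, so none of its triangles can be front-facing.
  fn isBackfacing(m : Meshlet, cameraPos : vec3<f32>) -> bool {
    let toMeshlet = normalize(m.coneApex - cameraPos);
    return dot(toMeshlet, m.coneAxis) >= m.coneCutoff;
  }
`;
```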
The struct PR I made for it could take until next year. It is in review, and the developers will probably use it as an opportunity to make other additions. As the saying goes, good things take time.
In order to see the surface better, I used the normals. It looks like it was cast from a single piece.
I’m going to combine this with my virtual texture system