Beware of the logarithmic depth buffer: it can degrade scene performance!

I’ve been using the logarithmic depth buffer in three.js for both WebGL and WebGPU for a while, and it works great for eliminating z-fighting at large distances.

But beware: if you have logarithmicDepthBuffer enabled and you are rendering many sprites in your scene (for example, thousands of grass/foliage sprites on a terrain), you will incur a noticeable decrease in performance!
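
For reference, the flag in question is a constructor option on the renderer. A minimal setup sketch (logarithmicDepthBuffer is the actual option name; the rest is just scaffolding):

```js
import * as THREE from 'three';

// Enabling the logarithmic depth buffer is a one-line renderer option,
// but it changes how every material in the scene writes depth (see below).
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  logarithmicDepthBuffer: true,
});
```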

I did some testing with and without logarithmicDepthBuffer on my terrain scene (with thousands of foliage sprites in InstancedMesh objects), and having it enabled reduces the FPS by 20 percent!

This is because logarithmic depth must be written to gl_FragDepth (GLSL) or frag_depth (WGSL) in the fragment shader, which disables early-Z testing. With early-Z disabled, every grass sprite fragment runs its shader regardless of whether it is occluded, resulting in massive overdraw!
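
To make the mechanism concrete, here is a minimal sketch of a material that writes logarithmic depth by hand. This is illustrative only, not three.js's actual built-in logdepthbuf shader chunk; the formula and the uFar uniform are my own. The key point is the gl_FragDepth assignment, which means the depth is unknown until the shader has run, so the GPU cannot reject occluded fragments early:

```js
import * as THREE from 'three';

const material = new THREE.ShaderMaterial({
  uniforms: {
    uFar: { value: 100000.0 }, // camera far plane
  },
  vertexShader: /* glsl */ `
    varying float vW; // clip-space w, i.e. view-space depth
    void main() {
      vec4 clipPos = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      vW = clipPos.w;
      gl_Position = clipPos;
    }
  `,
  fragmentShader: /* glsl */ `
    uniform float uFar;
    varying float vW;
    void main() {
      gl_FragColor = vec4(1.0);
      // Any write to gl_FragDepth disables early-Z: the fixed-function
      // depth test can no longer reject this fragment before the shader runs.
      gl_FragDepth = log2(max(vW, 1e-6) + 1.0) / log2(uFar + 1.0);
    }
  `,
});
```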

Be sure to switch to the reversed z-buffer once it is implemented in three.js! Reversed z gets comparable far-distance precision from the way floating-point depth values are distributed, with no gl_FragDepth write in the shader, so early-Z testing stays enabled.


Just a side note that doesn’t invalidate your general observations:

A 20% fps reduction doesn’t mean much on its own. It can mean that the scene went from 500 fps to 400 fps, which is a nothingburger. It can also mean that it went from 60 fps to 48 fps, which would be a significant reduction: 500 fps down to 400 fps is a difference of 0.5 ms per frame, while 60 fps down to 48 fps is a difference of 4.2 ms per frame, 8x as much.

It’s best to measure performance in terms of time per frame, never frames per unit of time. fps is a lousy measure when you want to compare performance.
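
For example, here is a minimal sketch of logging average milliseconds per frame in a render loop (the renderer, scene, and camera variables are assumed to already exist):

```js
// Average frame time over a window of frames to smooth out noise.
let lastTime = performance.now();
let elapsed = 0;
let frames = 0;

function animate() {
  requestAnimationFrame(animate);

  const now = performance.now();
  elapsed += now - lastTime;
  lastTime = now;
  frames++;

  if (frames === 100) {
    console.log(`avg frame time: ${(elapsed / frames).toFixed(2)} ms`);
    elapsed = 0;
    frames = 0;
  }

  renderer.render(scene, camera);
}
animate();
```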

How do we avoid the logarithmic depth performance issues? I cannot remove logarithmic depth, because it is important for a large-scale open-world scene.

For moderate view distances (~500 meters), you can increase camera.near instead of enabling logarithmicDepthBuffer.
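
As a sketch, that adjustment looks like this in three.js (the specific near/far values are just examples):

```js
// Raising camera.near improves standard (non-logarithmic) depth precision,
// since most of the depth buffer's precision sits just past the near plane.
camera.near = 1.0; // instead of something tiny like 0.01
camera.far = 500;  // ~500 meters of view distance
camera.updateProjectionMatrix(); // required after changing near/far
```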