If I set the shadow bias to -0.0005, I can mitigate some of the sawtooth artifacts, but this does not resolve them completely (as can be seen on the left), and it also introduces an unwanted offset to the shadows.
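For reference, this is roughly how the bias is applied in my setup (a sketch with illustrative values; I'm assuming a standard shadow-casting DirectionalLight here):

```javascript
import * as THREE from 'three';

// Sketch of the relevant setup (light type and values are illustrative).
const light = new THREE.DirectionalLight(0xffffff, 1);
light.castShadow = true;
light.shadow.mapSize.width = 2048;
light.shadow.mapSize.height = 2048;
light.shadow.bias = -0.0005; // mitigates the acne, but offsets the shadows
```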
Any chance you could demonstrate the issue with a small live example?
We have observed multiple reports of shadow acne issues with WebGPURenderer. Compared to WebGLRenderer, it is more often necessary to set a shadow bias, or a larger one, to mitigate the artifacts. However, a larger bias leads to “Peter Panning” artifacts, meaning the shadow shifts away from the geometry. We should probably open an issue for this on GitHub. A live example would still be useful, though.
I ran into similar issues when testing WebGPURenderer. In the end, I reverted to WebGLRenderer for production. WebGPU is promising, but it still has some way to go before it’s fully reliable for real-world applications, especially around shadows and stability. For now, I’m keeping WebGPU strictly experimental until it’s mature enough to be a true replacement for WebGL.
Thanks for the input. Yes, the “Peter Panning” artifact is exactly what I experienced when increasing the shadow bias value. However, this should not even be necessary, as the WebGLRenderer result demonstrates.
Luckily, I was able to put together a minimal example that reproduces the same issues:
For me, this points towards a bug in the WebGPU shadow mapping implementation.
Yeah, in general I’ve noticed various rendering differences between WebGLRenderer and WebGPURenderer that previously prevented me from migrating. If some algos are supposed to be identical, then yeah, this shows they aren’t. I do imagine WebGPURenderer is the main focus now, and that the algos will start to diverge as WebGLRenderer becomes outdated (and eventually deprecated/removed?).
It is true that normalBias and/or shadow bias can mitigate some of the artifacts.
But, as the WebGL case shows, it should not even be necessary to increase them here. In general, they should be kept as low as possible, since they introduce unwanted artifacts of their own. In my opinion, the only viable explanations for why they need to be increased in the WebGPU case are that either
WebGPU uses lower precision for its shadow depth buffers, or
there is a bug in the WebGPU implementation that adds imprecision.
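To illustrate the first hypothesis, here is a small standalone sketch (plain JavaScript, not three.js code; the quantization model is deliberately simplified) showing why a lower-precision depth buffer forces a larger bias:

```javascript
// Simplified model of shadow-map depth comparison. A lit surface point
// should never shadow itself, but quantizing the stored depth can make
// the comparison fail ("shadow acne").
function quantize(depth, bits) {
  const levels = Math.pow(2, bits) - 1;
  return Math.round(depth * levels) / levels;
}

// Counts how many of `samples` surface points wrongly self-shadow for a
// given depth-buffer bit depth and shadow bias.
function selfShadowCount(bits, bias, samples = 1000) {
  let acne = 0;
  for (let i = 0; i < samples; i++) {
    const depth = i / samples;            // true depth of the surface point
    const stored = quantize(depth, bits); // depth as stored in the shadow map
    // The fragment counts as shadowed if it lies behind the stored depth
    // by more than the bias.
    if (depth - bias > stored) acne += 1;
  }
  return acne;
}

// With the same tiny bias, an 8-bit depth buffer produces lots of acne while
// a 24-bit one produces none; raising the bias past the 8-bit quantization
// error also removes the acne, at the cost of offset shadows.
console.log(selfShadowCount(8, 1e-6));    // many false self-shadows
console.log(selfShadowCount(24, 1e-6));   // none
console.log(selfShadowCount(8, 1 / 255)); // none, thanks to the larger bias
```

The same trade-off applies in the renderer: the bias has to cover the depth error, so lower precision (or extra imprecision from a bug) directly translates into needing a larger bias.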