Hi,
Recently I’ve started digging into soft particle techniques to implement with three.js, and so far all my research has led me to using a WebGLRenderTarget combined with a DepthTexture in order to map the scene depth into a texture and sample it in my particle shader. This technique is by far the most popular: it’s used in a lot of tutorials and examples, in this fiddle related to a similar question, and even in three.js’ own webgl_depth_texture and SAOPass examples.
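To make sure we’re talking about the same thing, here’s roughly the setup I have in mind (the parameter choices are just placeholders, loosely based on what the webgl_depth_texture example does):

```js
import * as THREE from 'three';

// render target whose depth attachment will later be sampled in the particle shader
const depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
depthTarget.texture.minFilter = THREE.NearestFilter;
depthTarget.texture.magFilter = THREE.NearestFilter;
depthTarget.depthTexture = new THREE.DepthTexture( window.innerWidth, window.innerHeight );
depthTarget.depthTexture.format = THREE.DepthFormat;
depthTarget.depthTexture.type = THREE.UnsignedShortType;
```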
It works as expected, but I have one concern about performance. As far as I understand, the whole scene has to be rendered twice: once into the depth render target and then as usual with WebGLRenderer (roughly as in the sketch after the quote below). Of course there might be some optimizations for the render target pass, but in any case, as @Mugen87 mentioned here, there seems to be another approach:
Two solutions are implemented: one uses the ability of DirectX10 to read the depth buffer as a texture; the other uses a more conventional second render target to store depth values.
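For context, the two passes I’m referring to look roughly like this in my current attempt (tDepth and particleMaterial are my own names, nothing official):

```js
function render() {

	// pass 1: render the scene into the target so its depth attachment gets filled
	// (the particles themselves would be excluded from this pass, e.g. via layers)
	renderer.setRenderTarget( depthTarget );
	renderer.render( scene, camera );

	// the particle shader samples the resulting depth texture
	particleMaterial.uniforms.tDepth.value = depthTarget.depthTexture;

	// pass 2: the normal render to the screen
	renderer.setRenderTarget( null );
	renderer.render( scene, camera );

}
```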
I’m curious whether the first solution is more or less standardized by now, whether it’s supported by three.js/WebGL2, and if so, whether it actually makes sense to prefer it over the “conventional” render target in large scenes with 50+ moving Meshes (or maybe the scale of the scene and the number of elements in it don’t make any difference, because when the .setRenderTarget method is used, WebGLRenderer.render won’t take that into account anyway).
If not, is there another way to get view-space depth without using a WebGLRenderTarget? Maybe with MeshDepthMaterial, or maybe there’s a cheap color buffer with depth information already available in three.js shaders?
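For reference, what I have in mind with MeshDepthMaterial is something like the override-material pattern below, but as far as I can tell it still writes into a render target, so it wouldn’t actually avoid the second pass:

```js
const depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking;

// render depth by overriding every material in the scene for one pass
scene.overrideMaterial = depthMaterial;
renderer.setRenderTarget( depthTarget );
renderer.render( scene, camera );
scene.overrideMaterial = null;
renderer.setRenderTarget( null );
```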
And lastly, there’s also this reply which suggests a significant performance boost, but it doesn’t seem to be implemented in the fiddle above. I don’t quite understand whether it’s still relevant, or whether it relies on three.js < r100 and the old renderTarget argument that could be passed into the WebGLRenderer.render method.
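In case it matters, the API difference I mean is between the older render() signature that accepted a render target directly and the current setRenderTarget approach (if I’m reading the older docs correctly):

```js
// older three.js: render target passed directly to render()
// renderer.render( scene, camera, depthTarget );

// current three.js: set the target first, then render
renderer.setRenderTarget( depthTarget );
renderer.render( scene, camera );
```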
Sorry if that was too long; any clarification on this would be much appreciated, thanks!