Soft Particles rendering performance

Hi,

Recently I’ve started digging into soft particle techniques to implement with three.js, but so far all my research has led me to the same approach: use a WebGLRenderTarget combined with a DepthTexture to render the scene depth into a texture and sample it in the particle shader. This technique is by far the most popular; it’s used in a lot of tutorials and examples, in this fiddle related to a similar question, and even in three.js’ webgl_depth_texture and SAOPass examples.
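
For reference, here’s a minimal sketch of what that setup boils down to in my understanding (a toy scene; the actual soft-particle vertex/fragment shaders are omitted, and names like `renderTarget` and `particles` are just illustrative, not a definitive implementation):

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

// Render target with a DepthTexture attachment: rendering the scene into
// it fills depthTexture with the scene depth, ready to be sampled.
const depthTexture = new THREE.DepthTexture(window.innerWidth, window.innerHeight);
const renderTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
  depthTexture: depthTexture,
  depthBuffer: true
});

// Particle material sampling the depth texture. The shaders themselves are
// omitted here; the fragment shader would linearize tDepth and fade the
// particle's alpha by its distance to the opaque geometry behind it.
const particleMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tDepth: { value: depthTexture },
    cameraNear: { value: camera.near },
    cameraFar: { value: camera.far }
  },
  transparent: true,
  depthWrite: false
});

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute([0, 0, 0], 3));
const particles = new THREE.Points(geometry, particleMaterial);
scene.add(particles);

function render() {
  // Pass 1: depth only. The particles are hidden so they don't write
  // their own depth and end up fading against themselves.
  particles.visible = false;
  renderer.setRenderTarget(renderTarget);
  renderer.render(scene, camera);

  // Pass 2: the usual beauty pass to the canvas, particles included.
  particles.visible = true;
  renderer.setRenderTarget(null);
  renderer.render(scene, camera);
  requestAnimationFrame(render);
}
render();
```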

It works as expected, however I have one concern about performance: as far as I understand, we need to render the whole scene twice, once into the depth render target and then as usual with WebGLRenderer (the render() loop in the sketch above)… Of course there might be some optimizations for the render target pass, but as @Mugen87 mentioned here, there seems to be another approach:

Two solutions are implemented: one uses the ability of DirectX10 to read the depth buffer as a texture; the other uses a more conventional second render target to store depth values.

I’m curious whether the first solution is more or less standardized now, whether it’s supported by three.js/WebGL2, and if so, whether it actually makes sense to prefer it over the “conventional” render target in large scenes with 50+ moving meshes (or maybe the scale of the scene and the number of elements in it make no difference, since when the .setRenderTarget method is used, WebGLRenderer.render won’t take this into account).

If not, is there another way to get view-space depth without using a WebGLRenderTarget? Maybe with MeshDepthMaterial, or is there some cheap color buffer with depth information already available in three.js shaders?

And lastly, there’s also this reply which suggests a significant performance boost, but it doesn’t seem to be implemented in the fiddle above. I don’t quite understand whether it’s still relevant, or whether it relies on three.js < r100 and the renderTarget argument that could be passed into the WebGLRenderer.render method back then.

Sorry if that was too long; any clarification on this would be much appreciated, thanks!

Depth textures are supported by WebGL 2 by default. However, if you just need the depth of a scene as a texture, it does not matter whether you use a depth texture or render the scene with MeshDepthMaterial as an override material. In both cases you need a render target and one render pass.
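
A sketch of the override-material variant, just to illustrate the equivalence (the RGBA depth packing and the target setup are illustrative choices, not requirements):

```js
// One pass with MeshDepthMaterial as the scene's override material:
// every object is drawn with it, so the color attachment of the render
// target ends up holding (packed) depth instead of the beauty image.
const depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking;

const depthTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

scene.overrideMaterial = depthMaterial;
renderer.setRenderTarget(depthTarget);
renderer.render(scene, camera);
scene.overrideMaterial = null;
renderer.setRenderTarget(null);
```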

Using a depth texture makes sense if you need the depth AND the beauty pass (the rendered scene) for further post-processing. SSAO would be a typical use case.
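
In that case the combination comes out of a single pass, e.g. with the render target with the DepthTexture attachment from the first sketch in this thread:

```js
// One render fills both attachments: renderTarget.texture holds the
// beauty pass, renderTarget.depthTexture holds the depth. Both can then
// be bound as uniforms of a full-screen post-processing pass.
renderer.setRenderTarget(renderTarget);
renderer.render(scene, camera);
renderer.setRenderTarget(null);
```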

Hi @Mugen87, thanks for the quick response and for the tip! After checking the SSAO source code, I’ve noticed that it also uses a render target to store the scene depth as a texture and then reuses it for further post-processing, as you’ve explained. So I guess this is the proper way to do it, and there is no way to get the same information (the scene depth buffer) in the shader without an extra render pass into a WebGLRenderTarget…? I was hoping to optimize the rendering a little bit by using only one render step, but since a lot of three.js post-processing effects use this technique, I guess I’ll stick with it as well. Good to know, thanks again.

You definitely need an additional pass.
