Hello
I’ve been playing around with a fog effect, using THREE.Points with some transparent smoke textures. The first problem came up when the fully transparent edges of the images would clip weirdly:
After switching to depthWrite: false I got more correct depth behaviour. The clipping is gone and walls obstruct the images. However, now you can see these ugly hard edges where a sprite goes through a mesh:
I realize those are natural and should happen, but it nonetheless looks unpleasant to the eye. I’m just trying to create a nice-looking fog effect. Is there any smart way of avoiding these ugly edges in the depthWrite: false scenario? Maybe some clever custom shader material trick or post-processing effect?
The point cloud with textures looks really good and I’d love to make it work!
Thank you for any suggestions!
I think you are looking for a technique called soft particles. From an Nvidia resource:
Particle sprites are widely used in most games. They are used for a variety of semitransparent effects such as fog, smoke, dust, magic spells and explosions. However, sprites commonly produce artifacts – unnaturally sharp edges – where they intersect the rest of the scene.
We present a simple way to fade out flat sprites against a 3D scene, softening the artificial edges. Two solutions are implemented: one uses the ability of DirectX10 to read the depth buffer as a texture; the other uses a more conventional second render target to store depth values.
Thank you a lot for these resources. http://blog.wolfire.com/2010/04/Soft-Particles explains this well, however I’m having trouble implementing it in three.js, specifically with getting the depth value of the particles and passing the depth buffer to its shader. Is it possible to use a modified version of the existing PointsMaterial, or is a custom shader a must?
I would implement this in a custom shader since it’s better than hacking PointsMaterial. Besides, the shader code of PointsMaterial is straightforward so it’s no problem to copy the essential parts into your own code.
I got the shader up and running, but what would be the proper way to get the depth value of the particles, in the same scale/way as MeshDepthMaterial does for my scene? I tried to analyze its shader code, but it uses many #include statements and I get a little lost :s
I’m not sure you can use the following approach, but the normal way is to render the whole scene with MeshDepthMaterial into a render target. You can then use the resulting texture as a depth map in your shader. However, I suggest you use THREE.DepthTexture, since the respective extension is widely supported. That way, the depth buffer is automatically stored into a texture when rendering your scene. A typical setup looks like this:
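(Roughly like the snippet below; the variable names and the render target size are just placeholders, and on newer three.js versions you would call renderer.setRenderTarget() instead of passing the target to render().)

// render target whose depth buffer is additionally stored in a texture
var target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;
target.texture.generateMipmaps = false;
target.stencilBuffer = false;
target.depthBuffer = true;
target.depthTexture = new THREE.DepthTexture( window.innerWidth, window.innerHeight );
target.depthTexture.type = THREE.UnsignedShortType;

// rendering the solid objects into the target fills target.depthTexture with the scene depth
renderer.render( scene, camera, target );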
Hmm, thank you! The approach with THREE.DepthTexture is a lot simpler than my way of playing around with MeshDepthMaterial. I have the depth texture of the solid objects ready to be passed to my soft particles shader, but I don’t know how to properly read the depth of my particle in the shader. If I understand the soft particles algorithm correctly, I need it in order to compare both depths and check how close the particle is to the scene meshes (detect edges and smooth out the alpha).
It’s gl_FragCoord.z. I think it makes sense to linearize this value and the value from the depth map if you are going to compare them. You can do this by including the packing chunk into your shader and then doing something like this:
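(Something along these lines; the uniform names tDepth, cameraNear, cameraFar, screenSize and fadeScale are only examples and have to match whatever you define in your soft particles material.)

#include <packing>

uniform sampler2D tDepth;
uniform float cameraNear;
uniform float cameraFar;
uniform vec2 screenSize;
uniform float fadeScale;

float readSceneDepth( const in vec2 screenPosition ) {

	// depth of the solid geometry, converted to a linear [ 0, 1 ] range
	float fragCoordZ = texture2D( tDepth, screenPosition ).x;
	float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
	return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );

}

float readParticleDepth() {

	// depth of the particle fragment itself, linearized the same way
	float viewZ = perspectiveDepthToViewZ( gl_FragCoord.z, cameraNear, cameraFar );
	return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );

}

The actual softening is then just the clamped, scaled difference of both values, e.g. at the end of main():

float sceneDepth = readSceneDepth( gl_FragCoord.xy / screenSize );
float particleDepth = readParticleDepth();
float fade = clamp( ( sceneDepth - particleDepth ) * fadeScale, 0.0, 1.0 );
gl_FragColor.a *= fade;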
Thank you, wouldn’t have done it without your help!
Also btw I think DepthTexture, just like WebGLRenderTarget, uses min and mag filters (unless the docs have it wrong)
Would you like to share your code as a new three.js example? I think it would be good to have a soft particles example in the repo. It could be named as webgl_points_soft.
Hmm should I paste the whole code here or just a .zip? softParticles.zip (1.5 MB)
I have to say, I’m not sure what should go into the uvTransform uniform of the shader material.
I’ve seen that you have copied the entire PointsMaterial shader. I personally would keep the example simpler and remove all shader chunks which are not directly related to soft particles.
If you are writing a custom shader, it’s important to structure the file like so. Make sure to define all uniforms in the uniforms section. Specifying a type for each uniform is not necessary.
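(A rough sketch of how such a file can be structured; the name CustomShader, the uniforms and the trivial shader code are only placeholders for your actual soft particles shader.)

THREE.CustomShader = {

	defines: {},

	uniforms: {

		// every uniform the shader uses is declared here, without a 'type' property
		'map': { value: null },
		'tDepth': { value: null },
		'cameraNear': { value: 1 },
		'cameraFar': { value: 1000 }

	},

	vertexShader: [

		'void main() {',

		'	gl_PointSize = 10.0;',
		'	gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );',

		'}'

	].join( '\n' ),

	fragmentShader: [

		'uniform sampler2D map;',

		'void main() {',

		'	// the soft particles logic goes here',
		'	gl_FragColor = texture2D( map, gl_PointCoord );',

		'}'

	].join( '\n' )

};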
When you create a shader material based on your custom shader, always do it as shown in the following listing. It’s very important to clone your uniforms, otherwise you can get very strange side effects if more than one custom material is created from this shader.
this.customMaterial = new THREE.ShaderMaterial( {
defines: Object.assign( {}, THREE.CustomShader.defines ),
uniforms: THREE.UniformsUtils.clone( THREE.CustomShader.uniforms ),
vertexShader: THREE.CustomShader.vertexShader,
fragmentShader: THREE.CustomShader.fragmentShader,
// now set common material properties
transparent: true,
depthTest: false,
depthWrite: false
} );
The code of your example does not respect the three.js code style yet. Make sure to lint your code with the mdcs setting, don’t use ES6 statements like let, and structure your demo like the official ones. Notice how functions are defined and at what place init() and animate() are executed. Also avoid using flags that control optional features of the demo (like your shadows variable).
You are using THREE.DepthTexture correctly but you perform an unnecessary render pass. After configuring your render target, you can render the diffuse and depth information in a single pass. Right now, you use renderer.render() to save the depth information and then use an instance of RenderPass for the beauty pass. The latter is not necessary. Consider managing your code in a SoftParticlesPass similar to SAOPass.
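(In other words, with a render target configured as shown earlier, something like this single call is enough:)

// one render call fills both the color texture and the depth texture of the target
renderer.render( scene, camera, renderTarget );

// renderTarget.texture now holds the beauty pass, renderTarget.depthTexture the scene depth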
Hello, I’ve got this far today, will do more tomorrow. softParticlesExample.zip (1.5 MB)
I’ve narrowed the whole scene down to the objects necessary for the example and reformatted most of the code. Also, could you please expand on your last point? I’m not sure how to implement that.
I mean you create a new file, SoftParticlesPass, and implement the rendering logic related to soft particles there. You don’t use EffectComposer inside SoftParticlesPass but manage the render targets yourself, similar to SSAOPass. This should give you the flexibility you need in order to save the mentioned render pass. In your app/example, you could use the pass like so:
var softParticlesPass = new THREE.SoftParticlesPass( scene, camera );
softParticlesPass.renderToScreen = true;
var effectComposer = new THREE.EffectComposer( renderer );
effectComposer.addPass( softParticlesPass );
I’m not sure this is the best approach, but you need to find a way to get rid of the additional render pass. Setting the depth texture uniform of your soft particles material is a bit tricky in SoftParticlesPass. Consider using a method like SoftParticlesPass.addPoints() in order to tell the pass which point objects should be used.
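(A very rough skeleton of what such a pass could look like. SoftParticlesPass is not an existing three.js class, so everything below, including the tDepth uniform name, is only an illustration; this sketch also still renders the scene twice, which is exactly the remaining problem discussed further down.)

THREE.SoftParticlesPass = function ( scene, camera ) {

	THREE.Pass.call( this );

	this.scene = scene;
	this.camera = camera;

	// point objects registered via addPoints()
	this.points = [];

	// render target with an attached depth texture (same setup as earlier in this thread)
	this.renderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
	this.renderTarget.texture.minFilter = THREE.NearestFilter;
	this.renderTarget.texture.magFilter = THREE.NearestFilter;
	this.renderTarget.depthTexture = new THREE.DepthTexture( window.innerWidth, window.innerHeight );

};

THREE.SoftParticlesPass.prototype = Object.assign( Object.create( THREE.Pass.prototype ), {

	constructor: THREE.SoftParticlesPass,

	addPoints: function ( points ) {

		// tell the pass which point objects should receive the depth texture
		this.points.push( points );

	},

	setSize: function ( width, height ) {

		this.renderTarget.setSize( width, height );

	},

	render: function ( renderer, writeBuffer, readBuffer, deltaTime, maskActive ) {

		var i;

		// hide the particles and render the solid scene into the target,
		// which fills this.renderTarget.depthTexture with the scene depth
		for ( i = 0; i < this.points.length; i ++ ) this.points[ i ].visible = false;

		renderer.render( this.scene, this.camera, this.renderTarget, true );

		// feed the depth texture to every registered soft particles material
		for ( i = 0; i < this.points.length; i ++ ) {

			this.points[ i ].visible = true;
			this.points[ i ].material.uniforms.tDepth.value = this.renderTarget.depthTexture;

		}

		// render the scene again, now including the particles
		renderer.render( this.scene, this.camera, this.renderToScreen ? null : writeBuffer, true );

	}

} );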
Thank you!
Unfortunately this seems a little too much for me. I’ve never made a custom pass and I have no idea how to handle rendering just the particles. Also, I’m not sure why in the DepthTarget.texture (the main render target that supplies the depth texture to the softParticlesMaterial) the clouds are actually showing, but only those within 2 or 3 units of distance from the camera:
Okay, depending on how much time I have this week, I’ll try to make the changes and host a fiddle with the code. The current progress is already great but let’s see if we can push things a bit further^^.
Unfortunately, I have not yet managed to avoid drawing the scene twice. You can see the problem in how the render function currently works. It’s necessary to render the scene once in order to save the depth information. Then you have to render it again together with the soft particles. I think it should be possible to draw the scene once, then the particles, and composite both results into a final image. But like I said before, I was not able to do so since the blending did not work so far. At least we now have a live example so others can play around with the code.