The first problem is that I do not know how to correctly compute the UV coordinates for sampling tDepth.
Here’s what I got:
As you can see, the UV coordinates are wrong. Another problem is that the depth pass also renders the object that uses my shader; how can I exclude my object from the depth render?
Furthermore, as I understand it, I need to get the distance from each pixel to the camera and compare it with the distance read from the zdepth texture.
But honestly, I do not understand what formula to use for this.
I would be grateful if you could point me in the right direction.
This “soft particle” shader would be a great asset to include in the THREE.js library.
I won’t be able to answer all of your questions, but I might be able to help you move forward.
Just for clarification: from what I can see, the soft particles are sprite particles. I am assuming you are using a THREE.Points object to render them, which has 0-1 UV mapping by default. If you are not using sprites, the shader just passes the UV coordinates through without modification, which means the UVs it receives are not correct.
You can set the object3D.material.depthWrite property to false (by default it is set to true).
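For example, assuming particles is your THREE.Points object:

particles.material.depthWrite = false; // still depth-tested, but no longer writes its own depth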
Can’t you pass the camera position to the shader to calculate the distance?
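If it helps, here is an untested sketch of the usual soft-particle fade. It uses view-space depth rather than the raw camera position, and all the names (tDepth, uResolution, uSoftness, vViewZ) are illustrative, not from your shader. Vertex shader:

varying float vViewZ;

void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vViewZ = mvPosition.z; // negative view-space depth of the particle
    gl_PointSize = 64.0; // fixed size, just for the sketch
    gl_Position = projectionMatrix * mvPosition;
}

Fragment shader, using the helpers from three.js's built-in packing shader chunk:

#include <packing>

uniform sampler2D map; // the sprite texture
uniform sampler2D tDepth; // packed scene depth from the pre-pass
uniform vec2 uResolution; // render target size in pixels
uniform float uCameraNear;
uniform float uCameraFar;
uniform float uSoftness; // fade distance in world units
varying float vViewZ;

void main() {
    // sample the depth texture with screen-space UVs, not the sprite UVs
    vec2 screenUV = gl_FragCoord.xy / uResolution;
    float packedDepth = unpackRGBAToDepth( texture2D( tDepth, screenUV ) );
    float sceneViewZ = perspectiveDepthToViewZ( packedDepth, uCameraNear, uCameraFar );
    // both view-space z values are negative; their difference is the gap
    // between the particle and the scene behind it, in world units
    float fade = clamp( ( vViewZ - sceneViewZ ) / uSoftness, 0.0, 1.0 );
    vec4 color = texture2D( map, gl_PointCoord );
    gl_FragColor = vec4( color.rgb, color.a * fade );
}

The screen-space UV is also the answer to your first question: the depth texture is screen-aligned, so it has to be sampled with gl_FragCoord divided by the resolution rather than with the particle's own UVs.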
Maybe the THREE.js contributors/authors like @westlangley or @mrdoob will be able to help you further; it seems your scenario involves in-depth knowledge of the library.
I had to search for a long time and try things out, but I finally managed to do it, and now I want to share what I have. There are a few rough edges, but in general everything works fine.
First you need to create a depth render target, which is pretty easy:
// depth material that will be used to render the whole scene's depth
var depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking; // pack depth across all four RGBA channels for precision
depthMaterial.blending = THREE.NoBlending;
depthMaterial.side = THREE.DoubleSide;

// render target that receives the packed scene depth
var depthRenderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
} );
Now, in the render loop, you need to do this before rendering the main scene:
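Something like this, using the old renderer.render( scene, camera, target ) signature (in recent three.js versions, call renderer.setRenderTarget( depthRenderTarget ) before the pre-pass and renderer.setRenderTarget( null ) after, instead):

scene.overrideMaterial = depthMaterial; // every object renders with the depth material
renderer.render( scene, camera, depthRenderTarget ); // depth pre-pass into the target
scene.overrideMaterial = null; // restore the normal materials

// then render the main scene as usual; the soft-particle shader reads
// depthRenderTarget.texture through its tDepth uniform
renderer.render( scene, camera );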
But sometimes, at certain angles, you can see that the UV is determined incorrectly, although it is barely noticeable.
Next, I did not know how to turn off depth rendering for transparent objects, so I took a cruder approach: I found the relevant line in the library and disabled it.
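A less invasive alternative is to hide the transparent objects during the depth pre-pass instead of patching the library; a sketch, assuming particles is the THREE.Points with the soft-particle material:

particles.visible = false; // keep the soft particles out of the depth pre-pass
scene.overrideMaterial = depthMaterial;
renderer.render( scene, camera, depthRenderTarget );
scene.overrideMaterial = null;
particles.visible = true;
renderer.render( scene, camera );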
I’ve also had a look at this over the past few days. Here’s the result:
Main points:

- adaptive distance for softening, based on particle size (sketched just below)
- GL_POINTS (billboards)
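Roughly, the adaptive part can look like this in the fragment shader (a sketch with illustrative names, not the actual code; uSoftScale and vParticleSize are made up, and vViewZ/sceneViewZ are the particle and scene view-space depths):

// scale the fade range by the particle's world-space size, so large and
// small particles soften proportionally instead of over one fixed distance
float softDistance = uSoftScale * vParticleSize;
float fade = clamp( ( vViewZ - sceneViewZ ) / softDistance, 0.0, 1.0 );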
For my own purposes, I added:

- sorting and support for multiple emitters per group, meaning you can have things like smoke, embers and ash particles all rendered smoothly together as part of the same draw call
- an automated texture atlas; packing and reference counting are done automatically, and you just get an AtlasPatch instance which will notify you if it has been relocated in the texture
- time-varying parameter packing into a texture, for things like color or size changing over time (currently I have only those two, as that's all I needed so far)
- emission sources: Box, Sphere and Point, plus Shell/Volume sampling
- bounding box calculation (even for moving emitters where particles linger)