Soft particles rendering

Hello everyone, I’m trying to make soft particles and found a couple of articles on how to do it.

http://blog.wolfire.com/2010/04/Soft-Particles
http://developer.download.nvidia.com/whitepapers/2007/SDK10/SoftParticles_hi.pdf

But honestly, it's hard for me to understand how to do this)

I created my own shader:

var Soft = {
	uniforms: {
		time:       { type: 'f', value: 0.0 },
		resolution: { type: 'v2', value: new THREE.Vector2( 1, 1 ) },
		opacity:    { type: 'f', value: 1.0 },
		color:      { value: new THREE.Color( 0x0077ff ) },

		tDiffuse: { type: 't', value: null },
		tDepth:   { type: 't', value: null },

		cameraNear: { type: 'f', value: 1 },
		cameraFar:  { type: 'f', value: 50 }
	},
	vertexShader: [
		'varying vec2 vUv;',

		'void main(){',
			'vUv = uv;',
			'vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',
			'gl_Position = projectionMatrix * mvPosition;',
		'}'
	].join("\n"),
	fragmentShader: [
		'uniform vec2 resolution;',
		'uniform float time;',
		'uniform vec3 color;',
		'uniform float opacity;',

		'uniform sampler2D tDiffuse;',
		'uniform sampler2D tDepth;',

		'uniform float cameraNear;',
		'uniform float cameraFar;',

		'varying vec2 vUv;',

		'#include <packing>',

		'void main(){',
			'vec2 uv = vUv.xy / resolution.xy;',

			'vec4 col = texture2D( tDiffuse, uv );',

			'col.a *= opacity;',

			// this is the part I cannot get right: vUv is the plane's own UV,
			// not a screen-space coordinate into the depth texture
			'float z = unpackRGBAToDepth( texture2D( tDepth, vUv.xy ) );',

			'col.a *= z;',

			'gl_FragColor = col;',
		'}'
	].join("\n")
};
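
For completeness, this is how I turn it into a material (just a standard THREE.ShaderMaterial):

var softMaterial = new THREE.ShaderMaterial({
    uniforms       : Soft.uniforms,
    vertexShader   : Soft.vertexShader,
    fragmentShader : Soft.fragmentShader,
    transparent    : true
});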

The first problem is that I do not know how to compute the correct UV coordinates for sampling tDepth.

Here’s what I got:

As you can see, the UV coordinates are wrong. Another problem is that my own object is also rendered into the depth texture; how can I exclude it from the depth render?

Further, as I understand it, I need to get the distance from the pixel to the camera and compare it with the distance read from the depth texture.

But honestly, I do not understand what formula I should use for this.
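
If it helps, my current (probably wrong) understanding of the idea is something like this; `linearize`, `screenUV` and `scale` are placeholders I have not figured out yet:

// rough sketch of the idea, not working code
float sceneDepth    = linearize( unpackRGBAToDepth( texture2D( tDepth, screenUV ) ) ); // geometry behind the particle
float particleDepth = linearize( gl_FragCoord.z );                                     // this particle fragment

// fade out as the particle approaches the geometry behind it
gl_FragColor.a *= clamp( ( sceneDepth - particleDepth ) * scale, 0.0, 1.0 );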

I would be grateful if you could point me in the right direction)


Hello korner,

This “soft particle” shader would be a great asset to include in the THREE.js library.

I won’t be able to help you to the full extent of your questions, but I might be able to help you move forward.

Just for clarification: from what I see, the soft particles are sprite particles, so I assume you are using a THREE.Points object to render them, which has 0-1 UV mapping by default. If you are not using sprites, the shader is only passing the UV coordinates through without modification, which means the UVs it receives are not the right ones.

You can set the object3D.material.depthWrite property to false (by default it is true).
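
For example (`particleMaterial` is a placeholder for your own material):

particleMaterial.depthWrite = false; // fragments no longer write into the depth buffer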

Can’t you pass the camera position to the shader to calculate the distance?
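
For example, THREE.js already injects `uniform vec3 cameraPosition;` into every ShaderMaterial, so in the vertex shader something like this should work (untested sketch):

varying float vCamDist;

void main() {
    vec4 worldPos = modelMatrix * vec4( position, 1.0 );
    vCamDist = distance( worldPos.xyz, cameraPosition ); // per-vertex distance to the camera
    gl_Position = projectionMatrix * viewMatrix * worldPos;
}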

Maybe the THREE.js contributors/authors like @westlangley or @mrdoob will be able to help you further; it seems your scenario requires in-depth knowledge of the library.

Please, let us know how your project turns out.

HF & GL

Hello, I actually got this working just yesterday)

It took a long time of searching and experimenting, but I managed to pull it off, and now I want to share what I have. There are a few rough edges, but in general everything works fine.

First you need to create a depth render target, which is pretty easy:

var depthMaterial              = new THREE.MeshDepthMaterial();
    depthMaterial.depthPacking = THREE.RGBADepthPacking; // pack depth into all four RGBA channels
    depthMaterial.blending     = THREE.NoBlending;
    depthMaterial.side         = THREE.DoubleSide;

var depthRenderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter
});
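
If the window can be resized, the depth target will not follow automatically; a small sketch:

window.addEventListener( 'resize', function () {
    depthRenderTarget.setSize( window.innerWidth, window.innerHeight );
} );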

Now, in the render loop, you need to do this before rendering the main scene:

scene.add( depthBox );

depthBox.position.copy( camera.position ); // keep the box centered on the camera

scene.overrideMaterial = depthMaterial; // draw everything with the depth material

renderer.render( scene, camera, depthRenderTarget, true ); // depth pass into the render target

scene.overrideMaterial = null;

scene.remove( depthBox );

.....

renderer.render( scene, camera ); // main pass
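
Put together, the whole loop looks roughly like this (a sketch assuming the variables above):

function render() {
    requestAnimationFrame( render );

    // depth pass: everything is drawn with depthMaterial into the render target
    scene.add( depthBox );
    depthBox.position.copy( camera.position );
    scene.overrideMaterial = depthMaterial;
    renderer.render( scene, camera, depthRenderTarget, true );
    scene.overrideMaterial = null;
    scene.remove( depthBox );

    // main pass: normal materials; the soft particles sample depthRenderTarget.texture
    renderer.render( scene, camera );
}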

But there was a problem: if the background is empty (nothing writes into the depth texture), the soft particles fade out completely.

I did not know how to fix that properly, so I kept it simple and added a box that is attached to the camera:

var depthBox = new THREE.Mesh( new THREE.CubeGeometry( 30000, 30000, 30000 ), new THREE.MeshBasicMaterial() ); // a huge box so the depth pass always has something to write

Now the shader itself. It took many attempts and tests, but it finally worked))
For convenience, and so it can be used by other shaders, I did it like this:

THREE.ShaderChunk['soft_pars_fragment'] = [
    'varying vec3 fmvPosition;',

    'uniform sampler2D tDepth;',

    'uniform float fCamNear;',
    'uniform float fCamFar;',

    'uniform mat4 fPjMatrix;',
    'uniform mat4 fMvMatrix;',

    'uniform float fDistance;',
    'uniform float fContrast;',

    // Transform a worldspace coordinate to a clipspace coordinate
    // Note that `mvpMatrix` is: `projectionMatrix * modelViewMatrix`
    'vec4 worldToClip( vec3 v, mat4 mvpMatrix ) {',
        'return ( mvpMatrix * vec4( v, 1.0 ) );',
    '}',

    // Transform a clipspace coordinate to a screenspace one.
    'vec3 clipToScreen( vec4 v ) {',
        'return ( vec3( v.xyz ) / ( v.w * 2.0 ) );',
    '}',

    // Transform a screenspace coordinate to a 2d vector for
    // use as a texture UV lookup.
    'vec2 screenToUV( vec2 v ) {',
        'return 0.5 - vec2( v.xy ) * -1.0;',
    '}',

    // Linearize the depth-buffer value z: the result is proportional
    // to view-space depth and reaches 1.0 at the far plane.
    'float readDepth( float z ) {',
        'float cameraFarPlusNear  = fCamFar + fCamNear;',
        'float cameraFarMinusNear = fCamFar - fCamNear;',
        'float cameraCoef         = 2.0 * fCamNear;',

        'return cameraCoef / ( cameraFarPlusNear - z * cameraFarMinusNear );',
    '}',

    // Compute the fade factor from the difference between the scene
    // depth and the particle fragment's depth
    'float calculateFade( vec2 pixelPosition, float particleDepth ) {',
        'float zFade = 1.0;',

        'float sceneDepth = readDepth( unpackRGBAToDepth( texture2D( tDepth, pixelPosition ) ) );',

        'float inputDepth = ( ( sceneDepth - particleDepth ) * fDistance );',

        'if ( ( inputDepth < 1.0 ) && ( inputDepth > 0.0 ) ) {',
            // Make it fade smoothly between 0 and 1; I think I grabbed this curve from some NVIDIA paper
            'zFade = 0.5 * pow( saturate( 2.0 * ( ( inputDepth > 0.5 ) ? ( 1.0 - inputDepth ) : inputDepth ) ), fContrast );',
            'zFade = ( inputDepth > 0.5 ) ? ( 1.0 - zFade ) : zFade;',
        '}',
        'else {',
            'zFade = saturate( inputDepth );',
        '}',

        'return zFade;',
    '}',
].join("\n\n");
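
To get a feel for what fDistance and fContrast do, here is the same fade function ported to plain JavaScript (just a sketch for playing with the numbers on the CPU):

function saturate( x ) { return Math.min( Math.max( x, 0.0 ), 1.0 ); }

function calculateFade( sceneDepth, particleDepth, fDistance, fContrast ) {
    var inputDepth = ( sceneDepth - particleDepth ) * fDistance;
    var zFade;
    if ( inputDepth < 1.0 && inputDepth > 0.0 ) {
        zFade = 0.5 * Math.pow( saturate( 2.0 * ( inputDepth > 0.5 ? 1.0 - inputDepth : inputDepth ) ), fContrast );
        zFade = inputDepth > 0.5 ? 1.0 - zFade : zFade;
    } else {
        zFade = saturate( inputDepth );
    }
    return zFade;
}

// e.g. with fDistance = 50 a depth gap of 0.01 sits exactly in the middle of the fade:
// calculateFade( 0.50, 0.49, 50.0, 1.0 ) ≈ 0.5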

THREE.ShaderChunk['soft_fragment'] = [
    // project the (view-space) position to clip space, then to screen-space UVs
    'vec4 csp = worldToClip( fmvPosition, fPjMatrix * fMvMatrix );',
    'vec3 scp = clipToScreen( csp );',

    'gl_FragColor.a *= calculateFade( screenToUV( scp.xy ), readDepth( gl_FragCoord.z ) );',
].join("\n\n");

THREE.ShaderChunk['soft_pars_vertex'] = 'varying vec3 fmvPosition;';
THREE.ShaderChunk['soft_vertex'] = 'fmvPosition = mvPosition.xyz;';

THREE.UniformsLib['soft'] = {
    tDepth: { type: 't', value: null },

    fCamNear: { type: 'f', value: 1 },
    fCamFar:  { type: 'f', value: 1000 },

    fPjMatrix: { type: 'm4', value: null },
    fMvMatrix: { type: 'm4', value: null },

    fContrast: { type: 'f', value: 1.0 },
    fDistance: { type: 'f', value: 50.0 }
};

After that it's simpler: we create the shader and the material.

var shaderSoft = {
    uniforms: { },
    vertexShader: [
        'varying vec2 vUv;',

        '#include <soft_pars_vertex>',

        'void main(){',
            'vUv = uv;',

            'vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );',

            '#include <soft_vertex>',

            'gl_Position = projectionMatrix * mvPosition;',
        '}',
    ].join("\n"),
    fragmentShader: [
        // tDiffuse, opacity and vUv are not declared by the chunks, so declare them here
        'uniform sampler2D tDiffuse;',
        'uniform float opacity;',
        'varying vec2 vUv;',

        '#include <common>',  // defines saturate()
        '#include <packing>', // defines unpackRGBAToDepth()
        '#include <soft_pars_fragment>',

        'void main(){',
            'vec4 col = texture2D( tDiffuse, vUv.xy );',

            'col.a *= opacity;',

            'gl_FragColor = col;',

            '#include <soft_fragment>',
        '}'
    ].join("\n")
};


var uniforms = THREE.UniformsUtils.merge( [
    THREE.UniformsLib['soft'],
    {
        opacity:  { type: 'f', value: 1.0 },
        tDiffuse: { type: 't', value: null }
    }
] );


uniforms.tDiffuse.value = new THREE.TextureLoader().load( 'texture.png' );
uniforms.tDepth.value   = depthRenderTarget.texture;

uniforms.fPjMatrix.value = camera.projectionMatrix;
// note: matrixWorldInverse * matrixWorld cancels out to the identity matrix,
// so the shader effectively applies the projection to the view-space position
uniforms.fMvMatrix.value = (new THREE.Matrix4()).multiplyMatrices( camera.matrixWorldInverse, camera.matrixWorld );
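
The camera planes also need to reach the shader; otherwise the defaults from THREE.UniformsLib['soft'] (1 and 1000) are used:

uniforms.fCamNear.value = camera.near;
uniforms.fCamFar.value  = camera.far;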



var material = new THREE.ShaderMaterial({
    uniforms        : uniforms,
    vertexShader    : shaderSoft.vertexShader,
    fragmentShader  : shaderSoft.fragmentShader,
});

material.depthTest   = false;
material.transparent = true;

Be sure to turn off the depth test on the particle material (depthTest = false).
Also assign the depth texture obtained from depthRenderTarget:

uniforms.tDepth.value = depthRenderTarget.texture;

uniforms.fDistance adjusts how strong the fade is: the smaller the number, the larger the fade range. For example, with fDistance = 50 the fade spans a difference of 1/50 = 0.02 in normalized depth.

fPjMatrix and fMvMatrix I do not fully understand; they are needed to project the position into the correct screen-space UVs.

Here is the source: "Projecting FBO value to screen-space to read from depth texture" on Stack Overflow.

But sometimes, at certain angles, you can notice that the UVs come out slightly wrong, though it is barely visible.

Next, I did not know how to exclude transparent objects from the depth pass, so I took the crude route: I found the relevant line in the library source and commented it out.

//if ( transparentObjects.length ) renderObjects( transparentObjects, scene, camera, overrideMaterial );
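
If you'd rather not patch the library, a less invasive alternative (untested sketch; `particles` stands for whatever object holds your soft particles) is to hide them during the depth pass instead:

particles.visible = false; // excluded from the depth render
renderer.render( scene, camera, depthRenderTarget, true );
particles.visible = true;  // back on for the main pass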

Now I'm happy with the result. For particles this is exactly what you need, and it will also do for simulating ambient light.

Yes, it would be great. I like the result and I would like to see such a plugin in the library)

I posted a suggestion on GitHub:


Great, I will give it a shot and let you know…

I’ve also had a look at this over the past few days. Here’s the result:

Main points:

  • Adaptive distance for softening, based on particle size.
  • GL_POINTS (billboards).

For my own purposes:

  • I added sorting and support for multiple emitters per group, meaning you can have things like smoke, embers and ash particles all rendered smoothly together as part of the same draw call (a rough sketch of the sorting idea is at the end of this post).
  • Automated texture atlas; packing and reference counting are done automatically, you just get an AtlasPatch instance which will notify you if it has been relocated in the texture.
  • Time-varying parameters packed into a texture, such as color or size changing over time. Currently I have only those two, as that's all I needed so far.
  • Emission sources: box, sphere and point, plus shell/volume sampling.
  • Bounding box calculation (even for moving emitters where particles linger).
  • JSON serialization.

Demo doesn't really show this :)
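
For the curious, the sorting part boils down to ordering particles far-to-near along the view direction before filling the vertex buffer. A minimal sketch of the idea (not my actual implementation; `positions` is assumed to be a flat xyz array):

function sortFarToNear( positions, camera ) {
    var viewDir = new THREE.Vector3();
    camera.getWorldDirection( viewDir );

    var count = positions.length / 3;
    var order = [];
    for ( var i = 0; i < count; i ++ ) {
        // signed distance of the particle along the view direction
        var d = ( positions[ i * 3 ]     - camera.position.x ) * viewDir.x
              + ( positions[ i * 3 + 1 ] - camera.position.y ) * viewDir.y
              + ( positions[ i * 3 + 2 ] - camera.position.z ) * viewDir.z;
        order.push( { index: i, depth: d } );
    }

    // far-to-near so alpha blending composites correctly
    order.sort( function ( a, b ) { return b.depth - a.depth; } );
    return order.map( function ( o ) { return o.index; } );
}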

[edit]
here’s another one