Transparent ShaderMesh Reordering

I have a custom shader applied to photos on plane meshes, which all rotate towards the camera. Sometimes, when you rotate the camera, one of the meshes will move in front of another, causing a change in the depth/rendering order. So I initially had two related problems:

  1. The change is abrupt: not just part of the plane, but the entire mesh suddenly switches rendering order with the other.

  2. At the moment of this change, the shape of the entire mesh is visible for a moment. So even though there is a vignette effect around the image, you briefly see the corners of the plane.

Problem 2) is solved by passing depth: false to the WebGLRenderer constructor. Why is this?

And do you have any suggestions for Problem 1)? A technique that would make these order changes less abrupt or more ambiguous?

Here’s a video of both problems 1) and 2):
2024-07-09 21-33-15.mkv (6.0 MB)

here’s the vertex shader:

varying vec2 vUv;

void main()
{
	vUv = uv;
	vec4 localPosition = vec4(position, 1.0);
	gl_Position = projectionMatrix * modelViewMatrix * localPosition;
}

the fragment shader:


precision highp float;  
uniform float u_time;
uniform float u_timeMult;
uniform sampler2D u_tex;
uniform float u_alpha;
uniform float u_blurStrength;
uniform float u_vignetteWidth;
uniform float u_vignetteStart;
uniform bool u_isTextureGrad;

varying vec2 vUv; 

float circle(vec2 st, vec2 resolution, float radius)
{
    // Normalize st for aspect ratio
    vec2 aspectCorrect = st * resolution;
    aspectCorrect.x *= resolution.x / resolution.y;

    // Calculate distance from the center
    vec2 dist = aspectCorrect - vec2(0.5 * resolution.x, 0.5 * resolution.y);

    // Return the vignette effect based on the distance
    return 1.0 - smoothstep(radius - radius * u_vignetteWidth,
    radius + radius * u_vignetteWidth,
    dot(dist, dist * u_vignetteStart));
}

vec3 gradientBlur(vec2 uv, float strength)
{
    return textureGrad(u_tex, uv, strength*dFdx(uv), strength*dFdy(uv)).rgb;
}

vec3 simpleGaussian(vec2 uv, float strength)
{
    float a = 0.000229;
    float b = 0.005977;
    float c = 0.060598;
    float d = 0.241732;
    float e = 0.382928;
    float f = 0.241732;
    float g = 0.060598;
    float h = 0.005977;
    float i = 0.000229;

    vec2 blurX = vec2(strength * 0.01, 0.0) / 2.0;
    vec2 blurY = vec2(0.0, strength * 0.01) / 2.0;
    vec3 color = vec3(0);

    color += a * texture2D(u_tex, uv + blurX * -4.0).rgb;
    color += b * texture2D(u_tex, uv + blurX * -3.0).rgb;
    color += c * texture2D(u_tex, uv + blurX * -2.0).rgb;
    color += d * texture2D(u_tex, uv + blurX * -1.0).rgb;
    
    color += e * texture2D(u_tex, uv + blurX * 0.0).rgb;    
    color += f * texture2D(u_tex, uv + blurX * 1.0).rgb;
    color += g * texture2D(u_tex, uv + blurX * 2.0).rgb;
    color += h * texture2D(u_tex, uv + blurX * 3.0).rgb;
    color += i * texture2D(u_tex, uv + blurX * 4.0).rgb;

    color += a * texture2D(u_tex, uv + blurY * -4.0).rgb;
    color += b * texture2D(u_tex, uv + blurY * -3.0).rgb;
    color += c * texture2D(u_tex, uv + blurY * -2.0).rgb;
    color += d * texture2D(u_tex, uv + blurY * -1.0).rgb;
    // Skip the center sample since it's already added
    color += f * texture2D(u_tex, uv + blurY * 1.0).rgb;    
    color += g * texture2D(u_tex, uv + blurY * 2.0).rgb;    
    color += h * texture2D(u_tex, uv + blurY * 3.0).rgb;    
    color += i * texture2D(u_tex, uv + blurY * 4.0).rgb;    
    // Weights used: the full 9-tap row (sum = 2*(a+b+c+d) + e) plus the
    // 8-tap column without its center, so the total is 4*(a+b+c+d) + e.
    color /= 4.0 * (a + b + c + d) + e;
    return color;
}

void uvWave(inout vec2 uv)
{
    float amp = 0.05;
    float freq = 0.5;    
    uv.x = 0.5 * (1.0 + amp * sin(freq * uv.y + u_time * u_timeMult)) + (1.0 - amp) * (uv.x - 0.5);
}

void main() 
{  
    vec2 uv = vUv;

    float radius = 1.0;
    float vignette = circle(uv, vec2(1.0, 1.0), radius);
    
    uvWave(uv);    
    
    vec3 clrA = gradientBlur(uv, u_blurStrength * 1.0);
    vec3 clrB = simpleGaussian(uv, u_blurStrength * 0.2);
    vec3 clr = u_isTextureGrad ? clrA : clrB;
    gl_FragColor = vec4(clr, vignette * u_alpha);
}

and the mesh construction:

const uniforms =
    {
        u_tex: { value: tex},
        u_vignetteWidth: { value: 1.0 }, //3.0
        u_vignetteStart: { value: 7.0 }, //15.0
        u_vignetteSize: { value: 1.0 },
        u_alpha: { value: 1.0 },
        u_time: { get value() {return 0.001 * performance.now() }},
        u_timeMult: { value: Math.random() * 0.5 + 0.5 },
        u_blurStrength: { value: 7.0 },
        u_isTextureGrad: { value: Math.random() > 0.66 }
    }

    const mat = new Three.ShaderMaterial
    ({
        uniforms,
        vertexShader: vert,
        fragmentShader: frag,
        transparent: true
    });
 
    const geo = new Three.PlaneGeometry(faceScale * aspect, faceScale);
    const plane = new Three.Mesh(geo, mat);

By setting depth: false, you’re disabling the z-buffer entirely, so there is no per-fragment occlusion at all; objects render in exactly the order they are drawn.
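For reference, that renderer option looks like this (using the same Three alias as your snippet; just a sketch, the other options are assumptions):

```javascript
import * as Three from "three";

// depth: false creates the drawing buffer without a depth buffer at all,
// so there is no per-fragment occlusion; whatever is drawn later simply
// paints over whatever was drawn earlier.
const renderer = new Three.WebGLRenderer({
  antialias: true, // assumption, not from the original snippet
  depth: false,    // disable the depth buffer entirely
});
```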

I can’t think of an easy solution to the problem you describe… except for something complex in a shader…

You could try to manually detect the situation in software… and then change opacity or something… but… sounds like a lot of work.
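Rough idea of what that detection could look like, in plain JS (all names are made up, untested against your scene):

```javascript
// Sketch: when two billboards are about to swap depth order, fade them
// down so the swap is less abrupt. depthA/depthB would be the planes'
// camera-space distances, recomputed every frame.
function crossfadeAlpha(depthA, depthB, fadeRange, minAlpha) {
  const separation = Math.abs(depthA - depthB);
  if (separation >= fadeRange) return 1.0; // far apart: no fading
  // Linear ramp down to minAlpha at the exact crossing point.
  const t = separation / fadeRange;
  return minAlpha + (1.0 - minAlpha) * t;
}

// Per frame, something like:
//   mat.uniforms.u_alpha.value = crossfadeAlpha(dA, dB, 0.5, 0.4);
```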

edit: Just watched your video… cool concept/presentation! And yeah… I see the issue… it’s not super noticeable, but I see the flash when the order changes…
I think it’s made worse by the depth sorting.

Perhaps there is something that could be done using alphaHash + TAAPass (see the three.js docs)… but I don’t know much about it.
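If you go that route, the material side would be roughly this (alphaHash needs three r154+ and only applies to the built-in materials; with the custom ShaderMaterial above you’d have to implement the hashed-alpha discard in the fragment shader yourself):

```javascript
import * as Three from "three";

// Hashed alpha testing: instead of blending, fragments are stochastically
// discarded in proportion to their alpha, so draw order stops mattering.
// The resulting noise is what TAAPass would then average out over frames.
const mat = new Three.MeshBasicMaterial({
  map: tex,        // tex as in the original snippet
  alphaHash: true,
});
```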

Also, perhaps something can be done by forcing a specific .renderOrder:
https://threejs.org/docs/#api/en/core/Object3D.renderOrder
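A sketch of what forcing renderOrder by distance could look like; the helper name is made up, and with three.js you’d pass your plane meshes plus camera.position:

```javascript
// Assign renderOrder so the farthest plane is drawn first (lower
// renderOrder values render earlier). Works on anything that has a
// position {x, y, z} and a renderOrder field, e.g. three.js Meshes.
function assignRenderOrder(objects, cameraPosition) {
  const dist2 = (p) => {
    const dx = p.x - cameraPosition.x;
    const dy = p.y - cameraPosition.y;
    const dz = p.z - cameraPosition.z;
    return dx * dx + dy * dy + dz * dz;
  };
  [...objects]
    .sort((a, b) => dist2(b.position) - dist2(a.position)) // farthest first
    .forEach((obj, i) => { obj.renderOrder = i; });
}

// Called once per frame, after the camera has moved.
```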


Hey, thanks for your reply.

Yeah, it’s much better with depth: false. The problem of briefly seeing the shape of the mesh is completely gone. Here’s a video of that version:
2024-07-11 17-33-39.mkv (7.9 MB)

I guess the depth buffer records the shape of the entire plane, transparent corners included, so planes behind it get masked out before their shader result shows…? I still don’t quite get it, but I also haven’t really studied the basics of 3D depth rendering.

Thanks for the suggestions! Yeah, I figured it would be involved and the current version with depth:false is not so bad. Since I have a deadline (end of this month), I’ll put this aside to work on some other aspects of the project for now, but I’ll keep your hints in mind.

Actually, now I’m having the 2nd problem again since I’ve started using PostProcessing. Whatever depth: false on the Renderer does, it seems to get undone by the EffectComposer and its passes.

EDIT: A fix for this is to set depthWrite: false when declaring the material. That works even with EffectComposer.
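In other words, the same ShaderMaterial as above with one extra flag (sketch; depthWrite: false keeps the plane depth-tested against other objects but stops it from stamping its full rectangle into the depth buffer):

```javascript
const mat = new Three.ShaderMaterial({
  uniforms,
  vertexShader: vert,
  fragmentShader: frag,
  transparent: true,
  depthWrite: false, // don't write the quad's footprint into the depth buffer
});
```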
