Need help making transparency show correctly in point cloud

I have the following code in three.js, which takes the color data from a loaded point cloud (a collection of colored vertices in a single mesh) and makes vertices transparent according to how bright their color is:

const uniforms = { // defaults for the brightness-threshold options
    color: { value: new THREE.Color(0xffffff) },
    brightnessThreshold: { value: 0.5 }, // e.g. `value: 0.5` for a 50% threshold
    size: { value: 0.2 }, // size of the points in the point cloud
    invertAlpha: { value: 1 } // whether the alpha channel is inverted in the translucent material
};
basicMaterial = new THREE.PointsMaterial({ size: 0.2, vertexColors: true }); // `size: 0.2` to match the `size` uniform above
thresholdMaterial = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: `
        uniform float size;
        varying vec3 vColor;
        
        void main() {
            vColor = color;
            vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
            gl_PointSize = size * ( 300.0 / -mvPosition.z );
            gl_Position = projectionMatrix * mvPosition;
        }
    `,
    fragmentShader: `
        uniform vec3 color;
        uniform float brightnessThreshold;
        varying vec3 vColor;
        
        void main() {
            float brightness = dot(vColor, vec3(0.299, 0.587, 0.114)); // perceived brightness (Rec. 601 luma weights)
            if (brightness < brightnessThreshold) {
                discard;
            } else {
                gl_FragColor = vec4( vColor * color, 1.0 );
            }
        }
    `,
    transparent: true,
    depthTest: true,
    depthWrite: true, // originally set to false
    vertexColors: true, // ensures that the colors from geometry.attributes.color are used
    blending: THREE.NormalBlending
});
translucentMaterial = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: `
        uniform float size;
        varying vec3 vColor;
        
        void main() {
            vColor = color;
            vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
            gl_PointSize = size * ( 300.0 / -mvPosition.z );
            gl_Position = projectionMatrix * mvPosition;
        }
    `,
    fragmentShader: `
        uniform vec3 color;
        uniform float brightnessThreshold;
        uniform int invertAlpha;
        varying vec3 vColor;
        
        void main() {
            float brightness = dot(vColor, vec3(0.299, 0.587, 0.114));
            float alpha = brightness;
            if (invertAlpha == 1) {
                alpha = 1.0 - alpha;
            }
            gl_FragColor = vec4( vColor * color, alpha );
        }
    `,
    transparent: true,
    depthTest: true, // enable depth testing
    depthWrite: false, // disable depth writing
    vertexColors: true,
    blending: THREE.NormalBlending
});

For the above code, I notice the translucency shading works well when looking from one side of the mesh (where, if multiple “voxels” are in front of each other in the point cloud, the frontmost voxels obscure the backmost ones), which is what the mesh should look like.

However, when I rotate the camera to look at the “back” of the mesh, only the backmost vertices are displayed when two or more colors are in front of each other.

I’ve already tried setting depthTest and depthWrite to every combination of true and false, and that doesn’t seem to make a difference. Note again that everything works perfectly fine while the camera is on one half of the scene; the problem only appears once you rotate past a certain point, and the transition can be abrupt, with the left half of the view rendering correctly and the right half showing only the “back” of the point cloud.

My guess would be that there should be some way to locally flip the z-buffer or something if a ray lands on the camera from the “wrong” angle, but I have no idea how I’d do that. If anyone here could help me fix this issue, I’d be incredibly grateful!

The usual problem with overlaying transparent point geometry is that, for it to work properly, the individual points need to be manually sorted by their distance from the camera (on every frame where the camera changes). three.js doesn’t do this for individual points (at least that was the case a couple of years ago).

Depth write should be disabled so that occluded points farther away still get rendered and blended.

Depth test should stay enabled, and the points should be rendered in “back to front” order, starting with those farthest from the camera.

Also, if you blend two pixels, the result depends on which pixel is blended with the background first and which goes on top of it. For that reason there is a family of methods called “order-independent transparency” (OIT).
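
For example, with normal (“over”) blending the two draw orders give different final colors:

// "over" blending: out = src.rgb * src.a + dst.rgb * (1.0 - src.a)
// 50% red then 50% blue over black: (0,0,0) -> (0.5, 0, 0) -> (0.25, 0, 0.5)
// 50% blue then 50% red over black: (0,0,0) -> (0, 0, 0.5) -> (0.5, 0, 0.25)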

@tfoller So you’re saying that using order-independent transparency will solve the problem here? When I look it up, I’m having trouble finding anything useful about it. Would you mind sharing example code using it? (preferably a variation on the code above, but linking to someone else’s three.js script would also be fine if you don’t want to bother)

I’ve never done it myself, so I don’t have code to share. OIT is not a perfect solution either; it’s a better approach, but as far as I know there is no perfect solution to this problem.

The easiest thing you can do is add a point-sorting routine to your code and see whether that’s enough to fix your problem.

I’m really new to 3D rendering and honestly have no idea how I’d even start to go about that. Are there any concrete implementations of this I could use as a starting point?

I guess you can just sort the coordinates and colors in the buffer attributes and then update them.

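Something like this, in outline (untested; assuming the THREE.Points object is called points and camera is the active camera):

function sortPoints(points, camera) {
    const pos = points.geometry.attributes.position;
    const col = points.geometry.attributes.color;
    const count = pos.count;

    // Precompute each point's distance to the camera.
    const v = new THREE.Vector3();
    const dist = [];
    for (let i = 0; i < count; i++) {
        dist.push(camera.position.distanceTo(v.fromBufferAttribute(pos, i)));
    }

    // Sort an index array by that distance.
    const order = [...Array(count).keys()].sort((a, b) => {
        const distA = dist[a];
        const distB = dist[b];
        return distA - distB;
    });

    // Rewrite both attributes in the sorted order and flag them for upload.
    const newPos = new Float32Array(count * 3);
    const newCol = new Float32Array(count * 3);
    for (let i = 0; i < count; i++) {
        const j = order[i];
        newPos[i * 3] = pos.getX(j); newPos[i * 3 + 1] = pos.getY(j); newPos[i * 3 + 2] = pos.getZ(j);
        newCol[i * 3] = col.getX(j); newCol[i * 3 + 1] = col.getY(j); newCol[i * 3 + 2] = col.getZ(j);
    }
    pos.array.set(newPos);
    col.array.set(newCol);
    pos.needsUpdate = true;
    col.needsUpdate = true;
}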

It should work, unless I made a mistake somewhere; this code needs to be tested.

I’m afraid this doesn’t work, unfortunately. In the jsfiddle example you provided, the colors of the points farther away (the smaller squares) overwrite the colors of the squares that are closer.

After messing around with GPT-4, I got it to write the following function, which makes transparency render correctly when called, but which is horrifically slow and can’t be used in real time:

function depthSortGeometry(geometry, camera) {
    const positionAttribute = geometry.attributes.position;
    const colorAttribute = geometry.attributes.color;
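    // (Note: positions here are read in the geometry's local space, so this
    // assumes the points object itself has no world transform applied.)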

    // Calculate the depth of each point relative to the camera
    const depthArray = Array.from({ length: positionAttribute.count }, (_, i) => {
        const position = new THREE.Vector3().fromBufferAttribute(positionAttribute, i);
        return camera.position.distanceTo(position);
    });

    // Get indices sorted by distance, in descending order (back to front)
    const indices = depthArray.map((depth, i) => i).sort((a, b) => depthArray[b] - depthArray[a]);

    // Create new position and color attributes based on sorted indices
    const newPositionAttribute = new THREE.BufferAttribute(new Float32Array(positionAttribute.count * 3), 3);
    const newColorAttribute = new THREE.BufferAttribute(new Float32Array(colorAttribute.count * 3), 3);
    for (let i = 0; i < indices.length; i++) {
        newPositionAttribute.setXYZ(i, positionAttribute.getX(indices[i]), positionAttribute.getY(indices[i]), positionAttribute.getZ(indices[i]));
        newColorAttribute.setXYZ(i, colorAttribute.getX(indices[i]), colorAttribute.getY(indices[i]), colorAttribute.getZ(indices[i]));
    }

    // Create a new geometry with the sorted attributes
    const sortedGeometry = new THREE.BufferGeometry();
    sortedGeometry.setAttribute('position', newPositionAttribute);
    sortedGeometry.setAttribute('color', newColorAttribute);

    return sortedGeometry;
}

Is there a way to do something like this, but sped up by an order of magnitude or so? The above code works great if you never need to move the camera in your scene, but is otherwise rendered useless by how slow it is.

Here is an attempt at the WBOIT technique mentioned by @tfoller. Sadly I can only forward you to it because, to be honest, this stuff is above my league and fries my brain. There was some discussion about including something similar in three’s core as well (transparency depth is always a hot topic), but the number of people knowledgeable about this is quite scarce :sweat_smile:

To summarize, the goal is to avoid classic sorting altogether and approach the problem from a more abstract angle.
More theory here: Casual Effects: Weighted, Blended Order-Independent Transparency

Since this appears to be a regular 3D grid, a much faster sort option would be to divide the points into vertical planes, and sort the planes. This would mean rearranging large chunks of the point cloud together, rather than individual points.
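
A rough sketch of the idea (assuming the planes are perpendicular to the x axis and spaced step apart; both are assumptions to adjust for your data):

// Bucket points into vertical slabs once, at load time.
function buildSlabs(geometry, step) {
    const pos = geometry.attributes.position;
    const col = geometry.attributes.color;
    const slabs = new Map(); // slab index -> { x, positions, colors }
    for (let i = 0; i < pos.count; i++) {
        const key = Math.round(pos.getX(i) / step);
        if (!slabs.has(key)) slabs.set(key, { x: key * step, positions: [], colors: [] });
        const s = slabs.get(key);
        s.positions.push(pos.getX(i), pos.getY(i), pos.getZ(i));
        s.colors.push(col.getX(i), col.getY(i), col.getZ(i));
    }
    return [...slabs.values()];
}

// Per frame: sort whole slabs back to front and copy them into the buffers as chunks.
function sortSlabs(slabs, geometry, camera) {
    slabs.sort((a, b) => Math.abs(camera.position.x - b.x) - Math.abs(camera.position.x - a.x));
    let offset = 0;
    for (const s of slabs) {
        geometry.attributes.position.array.set(s.positions, offset);
        geometry.attributes.color.array.set(s.colors, offset);
        offset += s.positions.length;
    }
    geometry.attributes.position.needsUpdate = true;
    geometry.attributes.color.needsUpdate = true;
}

Sorting a few hundred slabs is far cheaper than sorting millions of individual points, at the cost of points within a slab staying unsorted relative to each other.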

In the fiddle, try to change this line

 return distA - distB;

to

 return distB - distA;

that should reverse the sort order, see if that helps.

Otherwise, as I said before, code like this requires thorough testing, which I didn’t do, since it’s meant as an example of a general approach to solving this problem in a relatively fast and simple way.

As for the GPT-generated code, you can ask GPT to find the bottleneck in the snippet and then ask it to rewrite the code so it runs faster.

Just by looking at the GPT code, I’d say it’s slow because it recreates the buffers every frame. You need to create the buffer attributes once, store your vertex coordinates in a static array, sort that array every frame, and then overwrite the existing buffer data. That’s what I did, and it’s much faster.
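
Something along these lines (a sketch with assumed names points and camera; everything is allocated once and reused each frame):

// One-time setup, outside the render loop.
const posAttr = points.geometry.attributes.position;
const colAttr = points.geometry.attributes.color;
const count = posAttr.count;
const order = new Uint32Array(count);
const dist = new Float32Array(count);
const tmpPos = new Float32Array(count * 3);
const tmpCol = new Float32Array(count * 3);

function resortPoints(camera) {
    const p = posAttr.array, c = colAttr.array;
    const cam = camera.position;
    // Squared distances are enough for ordering and avoid the sqrt.
    for (let i = 0; i < count; i++) {
        const dx = p[i * 3] - cam.x, dy = p[i * 3 + 1] - cam.y, dz = p[i * 3 + 2] - cam.z;
        dist[i] = dx * dx + dy * dy + dz * dz;
        order[i] = i;
    }
    order.sort((a, b) => dist[b] - dist[a]); // back to front
    // Overwrite the existing buffers in sorted order instead of recreating them.
    for (let i = 0; i < count; i++) {
        const j = order[i] * 3, k = i * 3;
        tmpPos[k] = p[j]; tmpPos[k + 1] = p[j + 1]; tmpPos[k + 2] = p[j + 2];
        tmpCol[k] = c[j]; tmpCol[k + 1] = c[j + 1]; tmpCol[k + 2] = c[j + 2];
    }
    p.set(tmpPos);
    c.set(tmpCol);
    posAttr.needsUpdate = true;
    colAttr.needsUpdate = true;
}

Call resortPoints(camera) from the render loop, or only when the camera has actually moved.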