The point size is 1.0 with sizeAttenuation = false, and the threshold that seems to work best is around 0.25: it covers the point ‘surface’ well at various zoom levels and does not overlap neighboring points.

I guess this is most likely a coincidence and that it will differ for other setups. How can the right threshold be determined more precisely for a given point size? What are the factors in the equation?

Conceptually, lines and points are infinitely thin entities. Although it is possible to render points with a size greater than one pixel, that is not possible for lines. And with WebGPU, rendering point clouds will only be possible at a one-pixel size, too.

Hence, the respective raycasting implementations are just approximations, and the threshold values for lines and points are mechanisms to fine-tune the test. If you study the raycast() methods of THREE.Line and THREE.Points, you will see that they are not real ray-intersection tests like the ones for meshes.

If you need a more precise test, it’s probably best to render your point cloud as plane meshes via InstancedMesh.
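The InstancedMesh approach could look roughly like this. It is only a sketch: `positions` is assumed to be a flat `[x0, y0, z0, x1, ...]` array of your point coordinates, and the quad size and material are placeholders you would tune for your scene.

```javascript
import * as THREE from 'three';

// Render one small quad per point so the raycaster can run a real
// triangle intersection test instead of the threshold approximation.
function pointsAsInstancedMesh(positions, size = 1.0) {
  const count = positions.length / 3;
  const geometry = new THREE.PlaneGeometry(size, size);
  const material = new THREE.MeshBasicMaterial({ side: THREE.DoubleSide });
  const mesh = new THREE.InstancedMesh(geometry, material, count);

  const matrix = new THREE.Matrix4();
  for (let i = 0; i < count; i++) {
    matrix.setPosition(positions[3 * i], positions[3 * i + 1], positions[3 * i + 2]);
    mesh.setMatrixAt(i, matrix);
  }
  mesh.instanceMatrix.needsUpdate = true;
  return mesh;
}
```

Raycasting against the InstancedMesh then reports which quad was hit via instanceId in the intersection result. Note the quads in this sketch are axis-aligned; for billboarded (camera-facing) quads you would have to orient each instance toward the camera.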

Ok, so after spending some time reading the code in THREE.Points, here is what seems to happen (for simplicity I describe the indexed-buffer-geometry case):

boundingSphere of the Points geometry is calculated

the boundingSphere is converted to world coordinates and the threshold value is added to its radius (1)

if the ray does not intersect the sphere, undefined is returned (no intersects, early return)

if the ray does intersect, then localThreshold is calculated by dividing the threshold parameter by the mean of the xyz scale components (?) of the Points (2)

localThresholdSq is calculated

the index bounds of the geometry (first and last index of the attribute components) are determined

looping through all of the indices within the bounds:
- reading the current vertex position from the position attribute
- testing the vertex of the Points mesh via testPoint, passing:
  - the current vertex position,
  - the index,
  - localThresholdSq,
  - the world coordinates matrix,
  - the raycaster object,
  - the output intersects array, and
  - the current instance of Points

In the testPoint function we:

calculate rayPointDistanceSq

if that distanceSq is less than localThresholdSq (if I understand correctly: with no scaling on the Points and the default threshold value, this will be equal to 1?), so if rayPointDistanceSq is less than 1:
- we find the intersectPoint by calling ray.closestPointToPoint,
- if the distance from the ray origin to the intersectPoint is within the near/far bounds of the raycaster, we push the intersect data to the intersects array.

so questions arise:

(1) is using the threshold for both the mesh-level and the vertex-level intersection check a good idea?
It seems it would be more precise to test against the bounding box, not the bounding sphere, of the Points mesh, without any threshold, to early-return when there is no intersect. Does the bounding sphere perform better?

(2) what is achieved by using such a ‘mean scale’ value of the Points mesh to calculate the local threshold?
With scale.x = 2, scale.y = 0.5 and scale.z = 0.5 a scale is applied, but the mean is 1, so the threshold stays the same. I don’t understand this.

I assume that with THREE.Points we end up in a gl.drawArrays or gl.drawElements call with gl.POINTS somewhere, along with gl_PointSize set in the vertex shader. This will basically always render a kind of ‘square sprite’ (as someone mentioned here). The square’s side equals gl_PointSize (which can vary per vertex if a size attribute is used).

Could the threshold be derived from the max point size (if a size attribute is passed) or just from gl_PointSize (is that what PointsMaterial.size defines?)
So instead of if ( rayPointDistanceSq < localThresholdSq ) we could have if ( rayPointDistanceSq < maxPointSizeSq ) ?
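As a rough sketch of that idea (this is my own assumption, not anything three.js does): with sizeAttenuation = false the point size is in pixels, so a pixel size only translates into a world-space threshold once you fix a camera and a distance. For a perspective camera, a hypothetical conversion could look like:

```javascript
// Hypothetical: world-space radius covered by a point that is sizePx pixels
// wide on screen, at distance `dist` from a perspective camera with vertical
// field of view fovDeg, rendered into a viewport viewportHeightPx tall.
function worldThresholdForPointSize(sizePx, dist, fovDeg, viewportHeightPx) {
  // height of the view frustum, in world units, at that distance
  const worldScreenHeight = 2 * dist * Math.tan((fovDeg * Math.PI / 180) / 2);
  const worldPerPixel = worldScreenHeight / viewportHeightPx;
  // half the square's side, since the threshold acts as a radius around the vertex
  return (sizePx * worldPerPixel) / 2;
}
```

For example, a 10-pixel point seen from 10 units away with a 90° fov in a 1000-pixel-tall viewport would cover a radius of about 0.1 world units. A per-distance threshold like this would of course no longer be the single constant that raycaster.params.Points.threshold expects.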