# What is the fastest way to check if a vertex is visible?

I need to check if vertices are visible to the camera. I have many vertices, and I've found some lag when using the raycaster to check whether any intersection occurs in front of them.
Is there another way that I'm overlooking that I can use without raycasting?

A line-of-sight test for specific points is usually implemented via raycasting. Occlusion queries sometimes come up in this context, but they are normally done on a per-mesh (3D object) basis.

Yes, color coding. Render a small box in white where your vertex is and the rest of the scene in black, then check the pixel: if it is white, your vertex is visible. See the three.js examples for pointers.


Since I check multiple vertices at the same time, how would I render multiple boxes without adding more processing? I'd have to create hundreds of boxes and then render, which doesn't seem fast.

Time it. It might be faster, OR it might be slower, but the point is - there IS another way.

It won't work if vertices are behind other vertices, though. A vertex behind another vertex would be reported as visible, because the check sees the white box of the vertex in front of it.

Obviously, in the case of multiple vertices you have to use multiple colors.
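To make the multi-color idea concrete, here is a minimal sketch of the usual "GPU picking" encoding: each vertex index is packed into a unique 24-bit RGB color, and decoded again after reading the pixel back. The function names are illustrative, not three.js API:

```javascript
// Pack a vertex index into r, g, b bytes (supports up to 2^24 - 1 vertices;
// index 0 can be reserved for the black background).
function indexToColor(i) {
  return {
    r: (i >> 16) & 0xff,
    g: (i >> 8) & 0xff,
    b: i & 0xff,
  };
}

// Reverse the packing after reading the pixel back from the render target.
function colorToIndex(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// Round trip for an arbitrary vertex index:
const c = indexToColor(123456);
console.log(colorToIndex(c.r, c.g, c.b)); // → 123456
```

With this, one render pass plus one `readPixels`-style readback can report many vertices at once, rather than one box per readback.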

Project your vertex into screen space and check its z coordinate against the z-buffer at that (x, y) position. You will need to compare with an epsilon to avoid the vertex being occluded by itself.

Depending on the number of vertices you have to check, this may offset the time spent retrieving the z-buffer into CPU memory.
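To make the depth comparison concrete, here is a sketch in plain math (no renderer), assuming a standard perspective projection and the usual [-1, 1] → [0, 1] depth-range mapping; `perspectiveDepth` and `isVisible` are illustrative names, not three.js API:

```javascript
// Non-linear depth in [0, 1] for a view-space z (negative, in front of the
// camera), matching a standard perspective projection matrix.
function perspectiveDepth(zView, near, far) {
  const ndcZ = (far + near) / (far - near) + (2 * far * near) / ((far - near) * zView);
  return ndcZ * 0.5 + 0.5; // map NDC [-1, 1] to depth [0, 1]
}

// The vertex counts as visible when nothing in the buffer is closer than it,
// with an epsilon so the vertex does not occlude itself.
function isVisible(vertexDepth, bufferDepth, epsilon = 1e-4) {
  return vertexDepth <= bufferDepth + epsilon;
}

// Depth of a vertex at view-space z = -10 with near = 1, far = 100:
const d = perspectiveDepth(-10, 1, 100);
console.log(isVisible(d, d));        // → true: the buffer holds the vertex itself
console.log(isVisible(d, d - 0.01)); // → false: something closer occludes it
```

In a real scene the projection would come from the camera's matrices (e.g. `Vector3.project` in three.js) and `bufferDepth` from a depth readback, but the comparison itself is just these two lines.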

I have tried rendering the depth and retrieving z from the (x, y) screen position, as you can see here: Get orthographic 3d point from mouse position and depth map
But the z isn't completely accurate (it gets more inaccurate towards the edges of the screen), and the found depth has to be converted back into world space (which makes it accurate) and compared to the original vertex position to see whether they are near each other.
In the end, it is a slower process than skipping all of this…
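One detail about the conversion back from depth: for an orthographic camera (as in the linked topic), the stored depth is linear in view-space z, so the round trip is just a lerp. A sketch with illustrative names, assuming near = 0 for exact arithmetic:

```javascript
// For an orthographic projection, depth in [0, 1] maps linearly from
// -near to -far in view space.
function orthoViewZFromDepth(depth, near, far) {
  return -(near + depth * (far - near));
}

// The forward mapping, useful for comparing against the original vertex.
function orthoDepthFromViewZ(zView, near, far) {
  return (-zView - near) / (far - near);
}

// Round trip for a point 25 units in front of the camera (near = 0, far = 100):
console.log(orthoViewZFromDepth(orthoDepthFromViewZ(-25, 0, 100), 0, 100)); // → -25
```

For a perspective camera the mapping is non-linear, which is consistent with the precision loss you observed: depth resolution drops as distance from the near plane grows.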
Is this what you are talking about?

I forgot that in WebGL you can't read back the depth buffer of the default framebuffer, so you have to perform an additional depth-only render of the scene… So, it may indeed be slower than other methods, unfortunately.

@Popov72 you could have MRT with one of the rejected PRs to some older three.js version, I think. But it's probably not worth the effort for this case.