Hello,
I have a scene with multiple meshes. I would like to find out which vertices in each mesh would be visible, and which would be obstructed, from the perspective of an orthographic camera placed at a given location.
Here is an illustration of what I’d like to achieve.
Each vertex in the scene gets projected onto the camera; every vertex that is seen by the camera gets a 1 at its index, and every hidden vertex gets a 0. This is then used to colour them.
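For the colouring step, a minimal sketch of what I mean (assuming BufferGeometry with vertex colours; colourByVisibility and visibility are made-up names):

// Hypothetical sketch: colour visible vertices green and hidden ones red,
// given a 0/1 visibility array with one entry per vertex
function colourByVisibility(mesh, visibility) {
  const count = mesh.geometry.attributes.position.count
  const colors = new Float32Array(count * 3)
  for (let i = 0; i < count; i++) {
    const seen = visibility[i] === 1
    colors[i * 3] = seen ? 0 : 1     // red channel for hidden vertices
    colors[i * 3 + 1] = seen ? 1 : 0 // green channel for visible vertices
    colors[i * 3 + 2] = 0
  }
  mesh.geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3))
  mesh.material.vertexColors = true
  mesh.material.needsUpdate = true
}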
This is the code that I’ve got so far:
https://codepen.io/erikforsberg/pen/gOEoGzG
There’s a function called getCameraView() that creates a separate renderer and a THREE.WebGLRenderTarget to render the depth. This is the main function that I need to get working. I want it to output an array of arrays of shape numberMeshes × numberVertices, where a 0 means the vertex is NOT seen by the camera and a 1 means it IS seen.
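The depth pass inside getCameraView() looks roughly like this (a simplified sketch, not the exact CodePen code; size, depthRenderer, renderTarget and the depth-packing material are assumptions):

// Sketch of the depth pass: render the scene's depth into a render target
// using a material that packs depth into the four RGBA bytes
const size = 512 // render-target resolution (assumed)
const depthRenderer = new THREE.WebGLRenderer()
depthRenderer.setSize(size, size)
const renderTarget = new THREE.WebGLRenderTarget(size, size)

const depthMaterial = new THREE.MeshDepthMaterial()
depthMaterial.depthPacking = THREE.RGBADepthPacking

scene.overrideMaterial = depthMaterial
depthRenderer.setRenderTarget(renderTarget)
depthRenderer.render(scene, camera)
depthRenderer.setRenderTarget(null)
scene.overrideMaterial = null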
I appreciate any and all help with this!
Best,
Erik
A first solution is to project the vertices to the camera and check whether they fall within the UV/NDC range, something like the sketch below. Maybe somebody else can help further.
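Roughly (a sketch; vertex is assumed to be a world-space THREE.Vector3):

// Project the vertex and test whether it lands inside the camera's view
// volume, i.e. the NDC range [-1, 1] on every axis
const p = vertex.clone().project(camera)
const inFrustum =
  p.x >= -1 && p.x <= 1 &&
  p.y >= -1 && p.y <= 1 &&
  p.z >= -1 && p.z <= 1

This only tells you the vertex is inside the view volume; occlusion still needs a depth comparison on top.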
Hi, thanks! Do you have an idea of how to get the vertices into UV space?
My attempt in the example is
const vertex = new THREE.Vector3(vertexArray[j], vertexArray[j + 1], vertexArray[j + 2])
// If vertexArray holds local coordinates, move the vertex into world space
// first (e.g. vertex.applyMatrix4(mesh.matrixWorld)) before projecting
const projectedVertex = vertex.clone().project(camera)
// Convert normalized device coordinates to render-target pixel coordinates.
// readRenderTargetPixels reads with a bottom-left origin, so y is not flipped
projectedVertex.x = Math.round(((projectedVertex.x + 1) / 2) * size)
projectedVertex.y = Math.round(((projectedVertex.y + 1) / 2) * size)
// For an orthographic camera depth is linear in NDC z, so the vertex depth
// in the same [0, 1] range as the depth buffer is:
const vertexDepth = (projectedVertex.z + 1) / 2
// Read the RGBA texel under the vertex; with RGBADepthPacking the depth is
// packed across all four bytes, alpha holding the most significant bits
const buffer = new Uint8Array(4)
depthRenderer.readRenderTargetPixels(renderTarget, projectedVertex.x, projectedVertex.y, 1, 1, buffer)
const depth = unpackRGBAToDepth(buffer) // helper shown below
but I’m not sure if this is correct.
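If the render target is filled with THREE.RGBADepthPacking, the depth is spread across all four bytes (alpha holds the most significant bits), so reading only buffer[3] gives roughly 8-bit precision. A fuller unpack, mirroring the UnpackFactors in three.js’s packing shader, would be something like:

// Unpack an RGBA-packed depth texel (THREE.RGBADepthPacking) back into a
// [0, 1] depth value, mirroring UnpackFactors in three.js's packing.glsl
function unpackRGBAToDepth(buffer) {
  const UnpackDownscale = 255 / 256
  return UnpackDownscale * (
    buffer[0] / 255 / (256 * 256 * 256) +
    buffer[1] / 255 / (256 * 256) +
    buffer[2] / 255 / 256 +
    buffer[3] / 255
  )
}

// The vertex counts as seen when its own depth is not behind the depth
// rendered at its pixel; a small epsilon absorbs precision error
const epsilon = 0.005
const visible = vertexDepth <= depth + epsilon ? 1 : 0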