I have a scene and a PerspectiveCamera, and I want to render the depth of the scene, perfectly normalized in the [0, 1] range. My thinking is that I should find the nearest and farthest projected vertex distances in the scene and use those for the camera’s near and far planes.
Does that make sense? How would I go about computing/retrieving those distances in three.js? My complicated idea is to iterate over all geometry buffers and transform all vertices manually to find the min/max values.
Or would it be better to render the scene in a large enough frustum as usual, and normalize the range of the fragments afterwards? It seems like this approach would give less accurate results.
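For reference, the straightforward way I know to render depth in three.js is with a depth override material, roughly like below (a minimal sketch; `renderer` is assumed to be an existing WebGLRenderer). The catch is that MeshDepthMaterial encodes depth relative to camera.near and camera.far, which is exactly why the choice of those planes matters:

```js
// Render depth instead of color by overriding every material in the scene.
// The encoded depth depends on camera.near/camera.far, hence the question.
const depthMaterial = new THREE.MeshDepthMaterial();
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera);
scene.overrideMaterial = null;
```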
Still trying to find optimal near and far clipping distances in a static scene.
I wrote this function to find the minimum and maximum distances from visible geometry’s vertices to the camera position. But using the returned near distance as the camera’s near plane results in clipping. The far plane is probably too far out as well, but I haven’t checked for that yet.
```js
const findZClippingPlanes = (scene, camera) => {
  let near = Infinity, far = -Infinity;
  const vertex = new THREE.Vector3();
  camera.updateMatrixWorld();
  // Note: assumes the camera is not nested under a transformed parent.
  const camPos = camera.position;
  // The frustum has to be built in world space: the projection matrix alone
  // is not enough, it must be combined with the camera's inverse world matrix.
  const frustum = new THREE.Frustum();
  frustum.setFromProjectionMatrix(
    new THREE.Matrix4().multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse)
  );
  scene.traverseVisible(obj => {
    if (!obj.isMesh) return;
    obj.updateMatrixWorld();
    // Skip meshes that would be culled anyway.
    if (!frustum.intersectsObject(obj)) return;
    const pos = obj.geometry.getAttribute('position');
    for (let i = 0; i < pos.count; ++i) {
      // Transform each vertex into world space and track the min/max
      // squared distance to the camera (sqrt deferred until the end).
      vertex.fromBufferAttribute(pos, i).applyMatrix4(obj.matrixWorld);
      const dist = camPos.distanceToSquared(vertex);
      near = Math.min(near, dist);
      far = Math.max(far, dist);
    }
  });
  near = Math.sqrt(near) - 0.1;
  far = Math.sqrt(far) + 0.1;
  return [near, far];
};
```
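I apply the result like this; updateProjectionMatrix is needed after changing near/far:

```js
const [near, far] = findZClippingPlanes(scene, camera);
camera.near = near;
camera.far = far;
camera.updateProjectionMatrix(); // re-bake near/far into the projection matrix
```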
```js
vertex.fromBufferAttribute(pos, i).applyMatrix4(obj.matrixWorld);
const dist = camPos.distanceToSquared(vertex);
near = Math.min(near, dist);
far = Math.max(far, dist);
```
Just a quick note: this calculates Euclidean distances to the camera position and picks the smallest and the largest. Maybe you have to use distances to the camera plane instead? For a vertex off to the side of the view axis, the distance to the camera position is larger than its depth along the view direction, which would explain the clipping at the near plane.
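For instance, something along these lines (just a sketch; viewDir would be computed once before the traversal):

```js
// Signed depth along the viewing direction ("distance to the camera plane")
// rather than the straight-line distance to the camera position.
const viewDir = camera.getWorldDirection(new THREE.Vector3()); // before the loop
const depth = vertex.clone().sub(camPos).dot(viewDir);         // inside the loop
near = Math.min(near, depth);
far = Math.max(far, depth);
```

With that change the Math.sqrt calls at the end would go away, since depth is already linear rather than squared. You'd also want to clamp near to a small positive value, since a perspective camera can't have near <= 0.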
It makes sense for the point-to-point *Squared methods to exist, since skipping the square root is cheaper than computing the true distance. For a plane, though, you don't need a square root at all: the distance is just a dot product and a subtraction, which is probably why there's no squared variant as a formal method. And if you ever do need the square, `** 2` is smaller and faster than a function call.
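That's what THREE.Plane.distanceToPoint computes. Reusing the viewDir/camPos names from the snippet above, a camera-plane version could look like this (a sketch, not tested):

```js
// A plane through the camera position, facing the viewing direction.
const cameraPlane = new THREE.Plane().setFromNormalAndCoplanarPoint(viewDir, camPos);
// Signed distance: a dot product plus a constant, no square root involved.
const depth = cameraPlane.distanceToPoint(vertex);
```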
I think this feature should be built into the three.js engine. The engine already computes the intersection of every object with the frustum each frame, so it would be easy to get the nearest and farthest distances of visible objects to the camera, and from those a more accurate depth buffer.
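In the meantime it can be approximated in userland from the same data the renderer's culling uses, i.e. world-space bounding spheres instead of per-vertex loops. A sketch:

```js
const sphere = new THREE.Sphere();
const viewDir = camera.getWorldDirection(new THREE.Vector3());
let near = Infinity, far = -Infinity;
scene.traverseVisible(obj => {
  if (!obj.isMesh) return;
  // The same bounding sphere that frustum culling relies on.
  if (obj.geometry.boundingSphere === null) obj.geometry.computeBoundingSphere();
  sphere.copy(obj.geometry.boundingSphere).applyMatrix4(obj.matrixWorld);
  const d = sphere.center.clone().sub(camera.position).dot(viewDir);
  near = Math.min(near, d - sphere.radius);
  far = Math.max(far, d + sphere.radius);
});
```

This is O(objects) rather than O(vertices), at the cost of slightly loose bounds, since spheres overestimate the extent.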