I’m trying to use an OrthographicCamera to render a depth map of a scene (a room). This is meant to replace the project’s current approach of performing a raycast for each object in the scene (for instance, chairs in a room) to adjust its elevation so that objects can stack on top of one another; that approach was really slow.
I’ve managed to place the camera (across many different rooms) and have it render, but I seemingly can’t extract the pixels and make sense of the output. This is roughly what I’m using:
this.camera = new THREE.OrthographicCamera();
threeController.scene.add(this.camera);

// Override material that writes depth packed across all four RGBA channels.
this.depthMaterial = new THREE.MeshDepthMaterial();
this.depthMaterial.depthPacking = THREE.RGBADepthPacking;
this.depthMaterial.blending = THREE.NoBlending;

this.renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
this.renderTarget = new THREE.WebGLRenderTarget(this.width, this.height, {
  format: THREE.RGBAFormat,
});
this.renderer.setSize(this.width, this.height);
this.renderer.setRenderTarget(this.renderTarget);

// RGBAFormat with the default UnsignedByteType gives 4 bytes per pixel.
this.bytesPerPixel = 4;
this.pixels = new Uint8Array(this.width * this.height * this.bytesPerPixel);
// ...
render() {
  this.threeController.scene.overrideMaterial = this.depthMaterial;
  this.renderer.render(this.threeController.scene, this.camera);
  this.threeController.scene.overrideMaterial = null;

  this.renderer.readRenderTargetPixels(
    this.renderTarget,
    0,
    0,
    this.width,
    this.height,
    this.pixels,
  );
  console.debug(this.pixels);
}
However, this produces output like [0, 228, 193, 126, 0, 228, 193, 126, 0, 228, 193, 126, …], which doesn’t look like the grayscale values I expected.
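I suspect these bytes might be the packed depth itself rather than garbage: as far as I can tell from three.js’ packing shader chunk, RGBADepthPacking spreads each depth value across the four channels, with alpha holding the most significant byte and red the least. A quick CPU-side unpacking sketch under that assumption (the helper names are mine):

```javascript
// Sketch: unpack one RGBA-packed depth value, assuming the layout used by
// three.js' packDepthToRGBA (alpha = most significant byte, red = least).
// Each argument is a raw byte in 0..255; the result is a depth in [0, 1].
function unpackRGBAToDepth(r, g, b, a) {
  return (
    a / 256 +
    b / (256 * 256) +
    g / (256 * 256 * 256) +
    r / (256 * 256 * 256 * 256)
  );
}

// Convert the whole readback buffer into one float depth per pixel.
// Note: WebGL's readPixels returns rows bottom-to-top.
function pixelsToDepths(pixels, width, height) {
  const depths = new Float32Array(width * height);
  for (let i = 0; i < depths.length; i++) {
    const o = i * 4;
    depths[i] = unpackRGBAToDepth(
      pixels[o], pixels[o + 1], pixels[o + 2], pixels[o + 3],
    );
  }
  return depths;
}
```

Applying this to the sample above, [0, 228, 193, 126] unpacks to roughly 0.495, i.e. a plausible mid-range depth, so perhaps the readback itself is fine and only my interpretation of the bytes was off.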
So, how can I properly read the pixels from a render target? Additionally, and perhaps even better, how could I read the pixels of the camera’s depth map directly? I couldn’t figure out a way to read those out as pixels.
I created the following pen, but I couldn’t yet get it to render from the orthographic camera.