[SOLVED] Use an OrthographicCamera to render a depth map

I’m trying to use an OrthographicCamera to render a depth map of a scene (a room). This is meant to replace the project’s current implementation, which performs a raycast for each object in the scene (for instance, each chair in a room) to adjust its elevation so that objects can stack on top of each other. That approach was really slow.

I’ve managed to place the camera correctly (across many different rooms) and have it render, but I can’t seem to extract the pixels and make sense of the output. This is roughly what I’m using:

this.camera = new THREE.OrthographicCamera();
threeController.scene.add(this.camera);

this.depthMaterial = new THREE.MeshDepthMaterial();
this.depthMaterial.depthPacking = THREE.RGBADepthPacking;
this.depthMaterial.blending = THREE.NoBlending;

this.renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });

this.renderTarget = new THREE.WebGLRenderTarget(
    this.width,
    this.height,
    {
        format: THREE.RGBAFormat,
    },
);

this.renderer.setSize(this.width, this.height);
this.renderer.setRenderTarget(this.renderTarget);

this.pixels = new Uint8Array(this.width * this.height * this.bytesPerPixel);

// ...

render() {
    this.threeController.scene.overrideMaterial = this.depthMaterial;
    this.renderer.render(this.threeController.scene, this.camera);
    this.threeController.scene.overrideMaterial = null;

    this.renderer.readRenderTargetPixels(
      this.renderTarget,
      0,
      0,
      this.width,
      this.height,
      this.pixels,
    );

    console.debug(this.pixels);
}

However, this produces output like [0, 228, 193, 126, 0, 228, 193, 126, 0, 228, 193, 126, …], which doesn’t look like the grayscale pixels I expected.

So, how can I properly read the pixels from a render target? Alternatively (which may even be better), how do I read the pixels from the camera’s depth map directly? I couldn’t figure out a way to read those out as pixels.

I created the following pen, but couldn’t yet get it to render from the orthographic camera:

Your pen code doesn’t match the snippet you provided.
Based on your pen code, here is an example of rendering depth to a render target (I added a quad to display the render target texture; you can read the pixels from rt.texture):

You need to use the same WebGLRenderer for both the normal and depth renders (and a single canvas).

Your ortho camera setup also required some fixing.
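For future readers, the single-renderer pattern looks roughly like this sketch (scene, depthMaterial, depthCamera, rt and pixels are placeholder names, not the pen’s actual code; this needs a browser/WebGL context to run):

```javascript
// One WebGLRenderer drives both passes; only its render target changes.
function renderDepth(renderer, scene, depthCamera, depthMaterial, rt, pixels) {
  scene.overrideMaterial = depthMaterial; // draw every mesh with the depth material
  renderer.setRenderTarget(rt);           // draw into the offscreen target...
  renderer.render(scene, depthCamera);
  renderer.setRenderTarget(null);         // ...then restore the canvas as the target
  scene.overrideMaterial = null;

  // rt.texture can now be sampled on the GPU, or read back to CPU memory:
  renderer.readRenderTargetPixels(rt, 0, 0, rt.width, rt.height, pixels);
}
```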

Okay, I didn’t know one needed to use the same renderer. Why is that? But still, the depth texture doesn’t look right; I’m getting this:

I had seen this in the past with some other setup. Why isn’t it grey? I checked the camera bounds, but that doesn’t seem to be the problem.

For reference, this is the room:

Do you see the grey texture in the example provided?

Can you extract grey pixels from it?

If yes, try adding your rooms to that example and see if it works.

I can’t tell by looking at your images what might be wrong with your code.

AFAIK, different WebGL renderers can’t exchange information directly in GPU memory.

I’m not trying to read from one renderer directly into another; I want the data in CPU memory.

Fair, it wasn’t really reasonable to expect you to know what might be wrong.

I can’t easily use the models from the project in the fiddle; the performance is really poor. But yes, it works fine in your example, so I’ll keep trying to spot differences between the two. I’ve tried a bunch of things already and can’t tell what’s wrong.

EDIT: So, the cause of that green was this.depthMaterial.depthPacking = THREE.RGBADepthPacking;. With that packing, the depth value is encoded across all four RGBA channels rather than written as grayscale, which is why the texture isn’t grey and the raw bytes look like noise.
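If you do want to keep RGBADepthPacking (it gives much better precision than a single 8-bit channel), the packed bytes can be decoded on the CPU. Here is a sketch mirroring three.js’s unpackRGBAToDepth shader chunk; the helper name is mine, and the example bytes are taken from the output earlier in the thread:

```javascript
// Decode one RGBA-packed depth value from the Uint8Array filled by
// readRenderTargetPixels. three.js's RGBADepthPacking stores the least
// significant bits in R and the most significant in A.
function unpackRGBAToDepth(pixels, offset = 0) {
  return (
    pixels[offset + 0] / 256 ** 4 + // R: least significant
    pixels[offset + 1] / 256 ** 3 + // G
    pixels[offset + 2] / 256 ** 2 + // B
    pixels[offset + 3] / 256        // A: most significant
  );
}

// The first pixel of the output above:
const depth = unpackRGBAToDepth(new Uint8Array([0, 228, 193, 126]));
// depth ≈ 0.4951, a normalized value in [0, 1)
```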

Okay, for anyone who comes across this in the future, the main takeaways are:

  • Make sure your OrthographicCamera has appropriate bounds and isn’t ‘inside out’
  • Use only one WebGLRenderer; switch its render target with renderer.setRenderTarget(rt) and restore it with renderer.setRenderTarget(null) after the render
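On the first point, one way to keep the frustum from ending up ‘inside out’ is to derive the bounds from the room’s size; this is only an illustration with made-up names, not the project’s code:

```javascript
// Compute OrthographicCamera frustum bounds for a straight-down depth render
// of a room with the given footprint (width x depth) and height. Keeping
// left < right and bottom < top is what avoids an "inside out" camera.
function orthoBoundsForRoom(width, depth, height) {
  return {
    left: -width / 2,
    right: width / 2,
    top: depth / 2,
    bottom: -depth / 2,
    near: 0,
    far: height,
  };
}

// Applied to a camera hovering at ceiling height, looking down:
// Object.assign(camera, orthoBoundsForRoom(10, 6, 3));
// camera.updateProjectionMatrix(); // required after changing the bounds
```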

If you come across this and need assistance, leave a comment.