Value of depth in a ShaderPass with a depth texture

Hi,

I’m trying to use the depth (distance from camera) value in a fragment shader used in post-processing. I based my code on this example.

I have an orthographic camera and a scene that’s 300x300x300, with a sphere of diameter 200 right in the middle, and the camera looking at it. After tweaking things a bit, I can render the depth to the screen.
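
For context, the setup is roughly along these lines (just a sketch, the exact values and materials are in the codepen):

        // Rough sketch of the scene described above (approximate values).
        const scene = new THREE.Scene();

        // Orthographic camera framing the 300x300x300 volume, looking at its centre.
        const camera = new THREE.OrthographicCamera( -150, 150, 150, -150, 1, 300 );
        camera.position.set( 0, 0, 150 );
        camera.lookAt( 0, 0, 0 );

        // Sphere of diameter 200 in the middle of the scene.
        const sphere = new THREE.Mesh(
            new THREE.SphereGeometry( 100, 64, 32 ),
            new THREE.MeshBasicMaterial( { color: 0xffffff } )
        );
        scene.add( sphere );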

Here’s the link: codepen link

While it works, there are a few things I don’t understand:

  • What does “depth” represent exactly? I initially assumed it was the distance to the camera plane, but I have to multiply the value by 1000000 to get anything meaningful. Moreover, if I set camera.near to 0, the depth doesn’t work at all, which I don’t understand.
  • What are the dimensions of the depth texture? Should it be the same size (in pixels) as the canvas the scene is rendered to? (I’ve sketched my setup below.)
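
The setup mentioned above looks roughly like this with a recent three.js. It follows the official depth texture example rather than my exact codepen code, and postScene, postCamera and postMaterial stand in for the usual full-screen quad and the ShaderMaterial that runs the fragment shader:

        // Render target that also writes depth into a texture. It is created at
        // the canvas size, and the attached depth texture has the same
        // dimensions, so the post shader gets one depth sample per canvas pixel.
        const width = window.innerWidth;
        const height = window.innerHeight;

        const target = new THREE.WebGLRenderTarget( width, height );
        target.texture.minFilter = THREE.NearestFilter;
        target.texture.magFilter = THREE.NearestFilter;
        target.depthBuffer = true;
        target.depthTexture = new THREE.DepthTexture( width, height );
        target.depthTexture.format = THREE.DepthFormat;
        target.depthTexture.type = THREE.UnsignedShortType;

        function animate() {
            requestAnimationFrame( animate );

            // 1. Render the scene into the target, which fills target.depthTexture.
            renderer.setRenderTarget( target );
            renderer.render( scene, camera );

            // 2. Render the full-screen quad that samples the depth texture.
            postMaterial.uniforms.tDepth.value = target.depthTexture;
            renderer.setRenderTarget( null );
            renderer.render( postScene, postCamera );
        }
        animate();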

Thanks in advance!

I’ve just realized the fragment shader can be simplified to something like this:

        varying vec2 vUv;
        uniform sampler2D tDepth;

        void main() {
            // The * 15.0 is just a fudge factor to bring the values into a visible range.
            float depth = texture2D( tDepth, vUv ).x * 15.0;
            gl_FragColor.rgb = 1.0 - vec3( depth );
            gl_FragColor.a = 1.0;
        }

OK, I’ve figured it out. The depth value ranges from 0 (at the near plane) to 1 (at the far plane). I had a typo in my code, and the far plane was 10× further away than intended, which is why the values needed scaling.
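
In case it helps anyone else: with an orthographic camera the stored depth is linear, so converting it back to a distance from the camera plane is just a lerp between the two planes. A sketch of that conversion, assuming cameraNear and cameraFar are extra uniforms set from camera.near and camera.far:

        const depthDisplayShader = /* glsl */ `
            varying vec2 vUv;
            uniform sampler2D tDepth;
            uniform float cameraNear; // set from camera.near
            uniform float cameraFar;  // set from camera.far

            void main() {
                // Orthographic camera: depth is linear, 0 at the near plane
                // and 1 at the far plane.
                float depth = texture2D( tDepth, vUv ).x;

                // Distance from the camera plane, in world units.
                float dist = mix( cameraNear, cameraFar, depth );

                // Display: near objects white, far objects black.
                gl_FragColor = vec4( vec3( 1.0 - dist / cameraFar ), 1.0 );
            }
        `;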

(codepen updated with the working version)