Hi,
I’m trying to use the depth (distance from camera) value in a fragment shader used for post-processing. I based my code on this example.
I have an orthographic camera and a scene that’s 300x300x300, with a sphere of diameter 200 right in the middle, and the camera looking at it. After tweaking things a bit, I can render the depth to the screen.
Here’s the link: codepen link
While it works, there are a few things I don’t understand:
- What does “depth” represent exactly? I first assumed it was the distance to the camera plane, but I have to multiply that value by 1000000 to get something meaningful. Moreover, if I set `camera.near` to `0`, the depth won’t work at all, which I don’t understand.
- What are the dimensions of the depth texture? Should it be the same size (in pixels) as the canvas rendering the scene?
Thanks in advance!