Logarithmic Depth value

I have been using the logarithmic depth buffer for a while, and I have written a shader that reads the buffer per pixel. I need to regenerate a valid depth value from the depth pixel value, but I can't seem to get anything that is either accurate or makes much sense.

There are a couple of discussions on Stack Overflow that I tried, but they didn't work. I read the log depth buffer shader and even tried reversing it, but it seems like something else is happening that I'm not sure of.

Could someone explain how I can take a log depth pixel value and convert it back into an accurate Z float value (distance from the camera)? Thanks for reading.

Does it help: […]?

No. I tried this, but it is for Outerra, which uses a similar but different method.
It should be close to

z = (exp(depth_value*log(C*far+1)) - 1)/C

But like I said, I can't seem to get it close enough to be usable.
Sorry, I should note that I specifically need a solution for three.js. I suspect it may not be fully reversible, but it would be good to know if that is the case.
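
For reference, here is that generic inversion written out in plain JavaScript. This is just an illustrative sketch of the formula above, not three.js code; C is the tunable constant from the Outerra scheme, and three.js effectively corresponds to C = 1:

// Illustrative sketch of the generic inversion above.
// depthValue: raw [0, 1] sample from the depth texture.
// C: tunable log-depth constant (three.js effectively uses C = 1).
function logDepthToEyeZ(depthValue, C, far) {
  return (Math.exp(depthValue * Math.log(C * far + 1.0)) - 1.0) / C;
}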

For anyone looking for the answer to this:

Depending on the WebGL version and/or available extensions, there are different ways to calculate the view space z value from the depth texture. This is because Three.js has different ways of interpreting and storing the depth values.

To better understand this, have a look at the four logdepthbuf_*.glsl.js shader chunks in the ShaderChunk folder.

These are included in many of the materials, including the MeshDepthMaterial (shader at src/renderers/shaders/ShaderLib/depth.glsl.js).

The solutions below follow the implementation of the depth texture example.

Assuming the renderer’s logarithmicDepthBuffer is true, there are two scenarios:

Scenario 1: Your browser has WebGL2 support

// Inside fragment shader (assumes: uniform sampler2D tDepth; uniform float cameraFar; varying vec2 vUv)
vec4 fragCoord = texture2D( tDepth, vUv );
// Same constant the renderer computes: 2.0 / log2( cameraFar + 1.0 )
float logDepthBufFC = 2.0 / ( log( cameraFar + 1.0 ) / log(2.0) );
// Inverts gl_FragDepth = log2( 1.0 - viewZ ) * logDepthBufFC * 0.5 (viewZ is negative in front of the camera)
float viewZ = -1.0 * (exp2(fragCoord.x / (logDepthBufFC * 0.5)) - 1.0);

Scenario 2: Your browser has WebGL1 and extension EXT_frag_depth / WEBGL_depth_texture support

// Inside fragment shader (same uniforms and varying as above)
vec4 fragCoord = texture2D( tDepth, vUv );
float logDepthBufFC = 2.0 / ( log( cameraFar + 1.0 ) / log(2.0) );
// The WebGL1 path stores the value differently, hence the + 1.0 and the missing * 0.5
float viewZ = -1.0 * (exp2((fragCoord.x + 1.0) / logDepthBufFC) - 1.0);

(Note: I could not test this one, so maybe it’s incorrect.)
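
For completeness, here is a minimal sketch of how the JavaScript side could feed those uniforms, using the Scenario 1 snippet. The names renderer, camera, width, and height are assumed to come from your app, and the renderer must be created with logarithmicDepthBuffer: true:

import * as THREE from 'three';

// Sketch only: a render target with a depth texture attached.
const target = new THREE.WebGLRenderTarget(width, height, {
  depthTexture: new THREE.DepthTexture(width, height)
});

// Full-screen material that runs the Scenario 1 fragment snippet.
const readDepthMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tDepth: { value: target.depthTexture },
    cameraFar: { value: camera.far }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
  `,
  fragmentShader: `
    uniform sampler2D tDepth;
    uniform float cameraFar;
    varying vec2 vUv;
    void main() {
      vec4 fragCoord = texture2D( tDepth, vUv );
      float logDepthBufFC = 2.0 / ( log( cameraFar + 1.0 ) / log( 2.0 ) );
      float viewZ = -1.0 * ( exp2( fragCoord.x / ( logDepthBufFC * 0.5 ) ) - 1.0 );
      // Visualize: viewZ is negative in front of the camera, map it to a gray value.
      gl_FragColor = vec4( vec3( -viewZ / cameraFar ), 1.0 );
    }
  `
});

Render the scene into target first, then draw a full-screen quad with this material to read the depth back.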

You can now use viewZToOrthographicDepth() or viewZToPerspectiveDepth() from the packing.glsl.js shader chunk with viewZ to get the value corresponding to the current camera’s projection.
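
For reference, those two helpers boil down to the following (CPU-side JavaScript mirrors of the GLSL chunk, handy for checking values; remember viewZ is negative in front of the camera):

// JavaScript mirrors of the packing.glsl.js helpers.
function viewZToOrthographicDepth(viewZ, near, far) {
  return (viewZ + near) / (near - far);
}
function viewZToPerspectiveDepth(viewZ, near, far) {
  return ((near + viewZ) * far) / ((far - near) * viewZ);
}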

Example for comparison
Have a look at the JSFiddle I made:
Reading depth buffers normal vs logarithmic

Thanks for this! It fixed issues with extreme camera near and far values under a logarithmic depth buffer in a WebGL2 environment, while I was using the formulas here to get the world position from UV and depth in a post-processing depth shader. For the record, one can find the formula for logDepthBufFC (aka the logarithmic depth buffer float constant, I suppose) in the source code of the WebGLRenderer on GitHub here (permalink here).
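
In plain JavaScript, that constant works out to the following (matching the permalinked renderer source):

// The constant as WebGLRenderer computes it, i.e. 2 / log2(camera.far + 1).
// Math.LN2 is the natural logarithm of 2.
const logDepthBufFC = 2.0 / (Math.log(camera.far + 1.0) / Math.LN2);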

A simplified way of writing the WebGL2 variant above after reduction would be:

return 1.0 - exp2(texture2D(tDepth, vUv).x * log(cameraFar + 1.0) / log(2.0));

even though the way you wrote it is more explicit and flexible in case some of the involved values change in the future.
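
A quick numeric check (plain JavaScript, arbitrary far value) confirms the explicit and reduced forms agree:

// Sanity check: both forms return the same viewZ.
const far = 1e6;
const logDepthBufFC = 2.0 / (Math.log(far + 1.0) / Math.LN2);
const explicit = d => -1.0 * (Math.pow(2.0, d / (logDepthBufFC * 0.5)) - 1.0);
const reduced = d => 1.0 - Math.pow(2.0, d * Math.log(far + 1.0) / Math.LN2);
console.log(explicit(0.5), reduced(0.5)); // identical (up to float rounding)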

We can get viewZ with this formula, then use viewZToPerspectiveDepth to get the depth. Does that depth mean the linear depth? And can I then use ndc.xy, the depth, and the camera matrices to get the world position? Am I right?

It’s been a while since I completed my project related to this, so things aren’t that fresh in my head anymore. Plus, I suppose there have been various changes and improvements to ThreeJS since I last worked with revision 149, so I’m not sure this is 100% accurate or up to date, but…

Here are the relevant portions of my depth fragment shader code, used with a composer:

...
uniform sampler2D depthSampler;
uniform mat4 proj;
uniform mat4 view;
uniform vec3 eyePos;
uniform float eyeNear;
uniform float eyeFar;
varying vec2 UV;
...
float remap(float value, float getmin, float getmax, float setmin, float setmax)
{
  return setmin + (value - getmin) * (setmax - setmin) / (getmax - getmin);
}
// Same reduced inversion as discussed above; `near` is unused but kept for a symmetric signature
float logarithmicDepthToViewZ(float logdepth, float near, float far)
{
  return 1.0 - exp2(logdepth * log(far + 1.0) / log(2.0));
}
// The next two functions are taken from the packing.glsl.js shader chunk
float perspectiveDepthToViewZ(float invClipZ, float near, float far)
{
  return (near * far) / ((far - near) * invClipZ - far);
}
float viewZToOrthographicDepth(float viewZ, float near, float far)
{
  return (viewZ + near) / (near - far);
}
float orthographicDepth(float rawdepth, float near, float far)
{
  #if defined(USE_LOGDEPTHBUF) && defined(USE_LOGDEPTHBUF_EXT)
    rawdepth = logarithmicDepthToViewZ(rawdepth, near, far);
  #else
    rawdepth = perspectiveDepthToViewZ(rawdepth, near, far);
  #endif
  return viewZToOrthographicDepth(rawdepth, near, far);
}
vec3 wPosition(vec2 UV, float ortdepth, float near, float far)
{
  // Unproject the NDC xy at z = 0; for a standard perspective matrix this yields
  // a view space direction with z = -1, so scaling by the linear eye depth places
  // the point at the right distance without needing a w-divide.
  vec4 ndpos = vec4(UV * 2.0 - 1.0, 0.0, 1.0);
  vec4 vspos = inverse(proj) * ndpos;
  vspos.xyz *= remap(ortdepth, 0.0, 1.0, near, far);
  vec4 wspos = inverse(view) * vec4(vspos.xyz, 1.0);
  return wspos.xyz;
}
...
void main()
{
  ...
  float rawDepth = texture2D(depthSampler, UV).x;
  float ortDepth = orthographicDepth(rawDepth, eyeNear, eyeFar);
  vec3  geoWPosition = wPosition(UV, ortDepth, eyeNear, eyeFar);
  ...
}

with the uniforms coming from the following (just a copy-pasted part of my code for setting up the effect composer):

  ...
  depthrender = new THREE.WebGLRenderTarget(Width, Height, {depthTexture: new THREE.DepthTexture(Width, Height, THREE.FloatType), samples: 0});
  composer = new EffectComposer(renderer, depthrender);
  composer.setSize(Width, Height);
  var renderpass = new RenderPass(scene, camera);
  composer.addPass(renderpass);
  var depthpass = new ShaderPass(depthShader);
  if (camera) {camera.updateMatrix(); camera.updateMatrixWorld(); camera.updateProjectionMatrix();};
  try {depthpass.uniforms.depthSampler.value = depthrender.depthTexture;} catch {};
  try {depthpass.uniforms.proj.value = camera.projectionMatrix;} catch {};
  try {depthpass.uniforms.view.value = camera.matrixWorldInverse;} catch {};
  try {depthpass.uniforms.eyePos.value = camera.position;} catch {};
  try {depthpass.uniforms.eyeNear.value = camera.near;} catch {};
  try {depthpass.uniforms.eyeFar.value  = camera.far;} catch {};
  ...
  composer.addPass(depthpass);
  ...

So, as far as I recall, getting the perspective depth from view Z wasn’t what I needed, as the conversion formulas are a bit different from those for the orthographic (aka linear) depth; see here too. Anyway, it was plenty of guesswork for me as well until I got to the desired result. Hope the above helps in answering your questions; if not, maybe someone more knowledgeable can correct me if I’m wrong, since, as I said, it’s been a while since I worked with these.
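
To see that difference in numbers (plain JavaScript, arbitrary near/far values):

// The same viewZ maps to very different depth values depending on the convention:
// orthographic depth is linear in distance, perspective depth is hyperbolic.
const near = 0.1, far = 1000.0;
const viewZ = -500.0; // roughly halfway to the far plane, in view space
const orthoDepth = (viewZ + near) / (near - far);                   // ~0.5
const perspDepth = ((near + viewZ) * far) / ((far - near) * viewZ); // ~0.9999
console.log(orthoDepth, perspDepth);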

Thanks a lot! I have tested my code just like yours. It seems your camera is an orthographic camera, while mine is perspective, and I can get a fairly correct result. So I think it’s right!

Actually, it’s not; it’s a perspective one as well. But I guess that when looking from the point of view of a very large sun located very far away towards a very small planet like ours, things might seem more orthographic in nature. Anyway, I’m glad you managed to make it look correct in your case. :+1:
