Reading from depth texture

I’m trying to do a basic depth texture read in a custom shader (soft particles), but I’m not having much success. I think I might be a little lost on some of these concepts and how they fit together.

I’m using R3F Drei’s useDepthBuffer() and passing the result in as a texture uniform to a custom shader, like this:

    const { camera } = useThree()
    const depthTexture = useDepthBuffer()
    const { uniforms, onBeforeCompile } = useShader({
        uniforms: {
            depthTexture: {
                value: depthTexture
            },
            cameraFar: {
                value: camera.far
            },
            cameraNear: {
                value: camera.near
            }
        },
        vertex: {
            head: glsl`
                varying vec2 vUv;
                varying float vZ;
            `,
            main: glsl`
                vUv = uv;
                vZ = position.z;
            `
        },
        fragment: {
            head: glsl`
                #include <packing>
                uniform sampler2D depthTexture;
                uniform float cameraNear;
                uniform float cameraFar;
                varying vec2 vUv;
                varying float vZ;

                float readDepth( sampler2D depthSampler, vec2 coord ) {
                    float fragCoordZ = texture2D( depthSampler, coord ).x;
                    float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
                    return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
                }
            `,
            main: glsl`
                float z = readDepth(depthTexture, vUv);

                gl_FragColor.a = clamp((vZ - z) / .25, 0., 1.);
            `
        }
    })

and attaching it to a mesh:

    <mesh position={[0, 0, 4]}>
        <planeGeometry args={[10, 10, 1, 1]} />
        {/* transparent material patched via the onBeforeCompile from useShader */}
    </mesh>

This is based off three.js’s own depth texture example, trying to fade out the texture the closer it gets to an intersection. When I attach the depthTexture to a simple plane mesh to debug, I can see that the texture is generated as expected, but I’m not sure what the correct way of reading its values in a shader would be. Am I on the right track? Can the vZ value be compared directly with the value returned from readDepth? Should I compare against the world-space z of the pixel instead? Does the transform/rotation of the mesh matter? Does it matter that I’m using an orthographic camera?
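For context on that comparison question: the two helpers used in readDepth come from three.js’s `#include <packing>` chunk, and porting their math to plain JS (shown below, just to inspect the output range) suggests readDepth returns a *linear* 0–1 depth between the near and far planes, which is not the same space as a raw position.z:

```javascript
// JS port of the two <packing> helpers used by readDepth() above,
// just to see what range the function actually returns.
function perspectiveDepthToViewZ(invClipZ, near, far) {
  // non-linear 0..1 depth-buffer value -> view-space Z (negative, in world units)
  return (near * far) / ((far - near) * invClipZ - far);
}

function viewZToOrthographicDepth(viewZ, near, far) {
  // view-space Z -> linear 0..1 depth between the near and far planes
  return (viewZ + near) / (near - far);
}

const near = 0.1, far = 100;
const readDepthJS = (d) =>
  viewZToOrthographicDepth(perspectiveDepthToViewZ(d, near, far), near, far);

console.log(readDepthJS(0)); // ~0 (fragment on the near plane)
console.log(readDepthJS(1)); // ~1 (fragment on the far plane)
```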

I’m also getting a lot of errors in the console: `GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture`. Not sure if that matters or not.

Any help greatly appreciated 🙂

So after a lot of trial and error I think I’ve got something very basic working. It feels crazy simple, so I’m not convinced it’s very robust, lol. It simply fades out the fragment the closer it gets to an intersection with the rest of the scene:

    vec2 coords = gl_FragCoord.xy / resolution;
    float sceneDepth = texture2D(tDepth, coords).r;  // raw depth of the scene behind this fragment
    float currentDepth = gl_FragCoord.z;             // raw depth of this fragment
    float cameraRange = cameraFar - cameraNear;
    float falloff = .75;                             // fade distance in world units
    // easeInOutCubic is a custom easing helper (not shown here)
    gl_FragColor.a = easeInOutCubic(clamp((sceneDepth - currentDepth) / (falloff / cameraRange), 0., 1.));
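For reference, easeInOutCubic isn’t a GLSL built-in; what I mean is the standard cubic easing curve. A typical implementation, shown in JS (assumed — the original helper isn’t in the snippet above; the GLSL body is nearly identical apart from pow()):

```javascript
// A common easeInOutCubic: maps 0..1 to 0..1 with a smooth S-curve,
// so the alpha fade ramps in and out instead of changing linearly.
function easeInOutCubic(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

console.log(easeInOutCubic(0));   // 0
console.log(easeInOutCubic(0.5)); // 0.5
console.log(easeInOutCubic(1));   // 1
```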

I’m also not doing any kind of conversion of the depth value, so I guess I’m always comparing it in the raw 0–1 range. Is there a reason the other examples might not want to do this? Also, the depth buffer texture is pretty low-res (for performance reasons, I assume), which causes some boxy ugliness. Any way to get rid of that without increasing the resolution of the depth texture?
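Partially answering my own question about the conversion: with a perspective camera the raw depth-buffer value is non-linear, so a fixed falloff in raw 0–1 units covers very different world-space distances depending on how far the fragment is from the camera (with an orthographic camera the buffer is already linear, which may be why my version gets away with it). A quick sketch of the math, assuming near = 0.1 and far = 100:

```javascript
// Raw perspective depth-buffer value for a fragment at view-space distance z
// (the inverse of three.js's perspectiveDepthToViewZ).
function perspectiveDepth(z, near, far) {
  return (far * (z - near)) / (z * (far - near));
}

const near = 0.1, far = 100;
// The same 1-world-unit gap, measured close to the camera vs. far away:
const nearGap = perspectiveDepth(2, near, far) - perspectiveDepth(1, near, far);
const farGap  = perspectiveDepth(51, near, far) - perspectiveDepth(50, near, far);
console.log(nearGap, farGap); // the gap shrinks drastically in raw depth-buffer units
```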


For better quality, this is what I use (plain three.js, not Drei):

    let rtt_lake_terrain = new THREE.WebGLRenderTarget(width, height);
    rtt_lake_terrain.depthTexture = new THREE.DepthTexture();
    // rtt_lake_terrain.depthTexture.type = THREE.UnsignedShortType; // lower quality
    // rtt_lake_terrain.depthTexture.type = THREE.UnsignedIntType;   // better quality
    rtt_lake_terrain.depthTexture.type = THREE.UnsignedInt248Type;   // best quality

So as far as I can tell, Drei only wraps a standard render target and a depth texture anyway. Also, the blockiness seems to come from the fairly low resolution of the texture itself; my camera near/far work fine with the default depthTexture.type.
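That said, Drei’s useDepthBuffer does take a size option (and a frames option for static scenes), so one trade-off is simply to request a larger buffer. A sketch based on my reading of the drei docs (option names assumed from there):

```jsx
// Request a higher-resolution depth FBO from drei (default size is small).
const depthTexture = useDepthBuffer({
  size: 1024,       // resolution of the depth FBO: higher = less blocky, more cost
  frames: Infinity, // re-render every frame; use 1 for fully static scenes
})
```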

Is the only way to get rid of the blockiness to render the depth texture at the same resolution as my render target? I also tried to get smoother linear filtering of the depth texture, but switching to LinearFilter stops the texture from rendering altogether. Is NearestFilter the only filter supported for depth textures?