Reconstruct world position in screen-space from depth buffer

Hey guys,

I’m trying to reconstruct the world position in a post-processing pass. I have access to the depth buffer (in a separate render target) as well as the original camera and its matrices.

The geometry is a screen-space quad. I visualize the coordinates that my shader computes as a grid pattern, but the coordinates are completely unstable and move with the camera.

[screenshot: original scene]
[screenshot: shader output]
[screenshot: camera moved a bit; notice how the grid has moved as well]

That’s what I’m trying to fix: the grid should stay fixed in world space.
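For reference, tDepth isn’t a hardware depth attachment: the depth is packed into the RGBA channels of a regular render target, which is why the fragment shader below calls unpackRGBAToDepth. The exact setup isn’t important for the question, but it’s more or less the usual override-material approach, something like this (a sketch, not my exact code; width and height are placeholders):

// Sketch of how the packed depth target is produced (not the exact code, names are placeholders)
const depthTarget = new WebGLRenderTarget(width, height, {
    minFilter: NearestFilter,
    magFilter: NearestFilter
});

const depthMaterial = new MeshDepthMaterial();
depthMaterial.depthPacking = RGBADepthPacking;
depthMaterial.blending = NoBlending;

// render the scene once with the depth override material into the target
scene.overrideMaterial = depthMaterial;
renderer.render(scene, camera, depthTarget, true);
scene.overrideMaterial = null;

// the packed depth then feeds the post-processing pass
uniforms.tDepth.value = depthTarget.texture;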

Here’s my material:
Vertex

varying vec2 vUv;

void main() {

    vUv = uv;

    // We don't care about the actual position: the geometry is a screen-space quad, so we can skip the usual matrix multiplications
    gl_Position = vec4( (uv - 0.5) * 2.0, 0.0, 1.0 );

}

Fragment

    #include <packing>
    uniform sampler2D tDepth;
    
    uniform mat4 uProjectionInverse;
    uniform mat4 uViewInverse;

    varying vec2 vUv;
    	     
    vec3 computeWorldPosition4(){
        float d = unpackRGBAToDepth( texture2D( tDepth, vUv ) );

        vec2 uvClip = vUv * 2.0 - 1.0;

        vec4 clipPos = vec4( uvClip, d, 1.0 );

        // undo the projection (inverse projection matrix applied to the clip-space position)
        vec4 viewPos = uProjectionInverse * clipPos;

        // perspective division
        viewPos /= viewPos.w;

        vec3 worldPos = ( uViewInverse * viewPos ).xyz;

        return worldPos;
    }
    
    vec3 visualizePosition(in vec3 pos){
        float grid = 5.0;
        float width = 3.0;
        
        pos *= grid;
        
        // Detect grid borders using screen-space derivatives.
        vec3 fw = fwidth(pos);
        vec3 bc = clamp(width - abs(1.0 - 2.0 * fract(pos)) / fw, 0.0, 1.0);
        
        // Frequency filter
        vec3 f1 = smoothstep(1.0 / grid, 2.0 / grid, fw);
        vec3 f2 = smoothstep(2.0 / grid, 4.0 / grid, fw);
        
        bc = mix(mix(bc, vec3(0.5), f1), vec3(0.0), f2);
        
        return bc;
    }
        
    void main(){
        // get the world-space fragment position
        vec3 worldPosition = computeWorldPosition4();

        gl_FragColor = vec4( visualizePosition( worldPosition ), 1.0 );
    }

Material

const uniforms = {
    tDepth: {
        type: 't',
        /**
         * @type {Texture}
         */
        value: null
    },
    uViewInverse: { type: 'm4', value: new Matrix4() },
    uProjectionInverse: { type: 'm4', value: new Matrix4() }
};

const material = new ShaderMaterial({
    uniforms,
    vertexShader: vertexShader(),
    fragmentShader: fragmentShader(),
    blending: NormalBlending,
    lights: false,
    depthTest: false,
    depthWrite: false,
    transparent: true,
    vertexColors: false,
    extensions: {
        derivatives: true
    }
});

Render method

    //set up uniforms
    const uniforms = this.material.uniforms;

    const modelViewMatrix = new Matrix4().multiplyMatrices(camera.matrixWorldInverse, camera.matrixWorld);

    modelViewMatrix.getInverse(modelViewMatrix);

    uniforms.uProjectionInverse.value.copy(camera.projectionMatrixInverse);
    uniforms.uViewInverse.value.copy(modelViewMatrix);

    // Scene contains screen-space quad and camera is orthographic
    renderer.render(this.scene, this.camera, target, true);
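
The screen-space scene mentioned in the comment above is just the usual full-screen quad setup, roughly like this (a sketch; the this.scene / this.camera / this.material names mirror the render method, the rest is illustrative):

    this.scene = new Scene();
    this.camera = new OrthographicCamera(-1, 1, 1, -1, 0, 1);
    // a 2x2 quad paired with a unit orthographic camera covers the whole viewport;
    // the vertex shader above bypasses the camera matrices anyway and derives clip positions straight from the UVs
    this.scene.add(new Mesh(new PlaneBufferGeometry(2, 2), this.material));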

Any clues are welcome.

Okay, I managed to solve it. For posterity, here are the relevant parts of the shader:

vec3 computeWorldPosition(){
    // Convert screen coordinates to normalized device coordinates (NDC)
    float normalizedDepth = unpackRGBAToDepth( texture2D( tDepth, vUv ) );

    vec4 ndc = vec4(
        (vUv.x - 0.5) * 2.0,
        (vUv.y - 0.5) * 2.0,
        // remap depth from [0,1] to NDC [-1,1]; this is the part that was missing before
        (normalizedDepth - 0.5) * 2.0,
        1.0);

    // unproject to view space, apply the perspective division,
    // then go view -> world with the camera's world matrix (the inverse of the view matrix)
    vec4 viewPos = uProjectionInverse * ndc;
    vec4 worldPos = uViewInverse * (viewPos / viewPos.w);

    return worldPos.xyz;
}

Here are the relevant matrices:

uniforms.uProjectionInverse.value.copy(camera.projectionMatrixInverse);
uniforms.uViewInverse.value.copy(camera.matrixWorld);

Note that the “camera” here is the original camera used to render the scene over which the screen-space effect is drawn, not the orthographic camera facing the quad of the effect itself.
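
Putting the pieces together, the per-frame update for the pass boils down to the two copies above plus the render call from earlier. A condensed sketch (the updateMatrixWorld call and the commented-out getInverse fallback are extra safety nets for older three.js revisions, not strictly part of the fix):

const uniforms = this.material.uniforms;

camera.updateMatrixWorld();

uniforms.uProjectionInverse.value.copy(camera.projectionMatrixInverse);
// uniforms.uProjectionInverse.value.getInverse(camera.projectionMatrix); // if projectionMatrixInverse is unavailable

uniforms.uViewInverse.value.copy(camera.matrixWorld);

renderer.render(this.scene, this.camera, target, true);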
