World position in a shader

To compute world-space positions from depth in a postprocessing pass, you need two things:

  1. A way to reconstruct the view-space position from the screen UV and the sampled depth.
  2. A multiply by the camera's world matrix (view → world) to move that result into world space.

Below is a common pattern in Three.js (for a perspective camera):

// Uniforms supplied from JavaScript:
uniform sampler2D tDepth;                    // depth texture
uniform mat4 cameraProjectionMatrixInverse;  // camera.projectionMatrixInverse
uniform mat4 cameraWorldMatrix;              // camera.matrixWorld (view -> world)
uniform float cameraNear;                    // for perspectiveDepthToViewZ, if needed
uniform float cameraFar;

varying vec2 vUv;

// Reconstructs the view-space position from screen UV + the raw depth-buffer value.
// 'depth' is the nonlinear [0..1] value sampled from the depth texture,
// NOT a linear or view-space depth.
vec4 getViewPosition(vec2 uv, float depth) {
  // 1) Convert uv (0..1) and depth (0..1) to clip-space / NDC coordinates (-1..1)
  vec4 clipPos = vec4(vec3(uv, depth) * 2.0 - 1.0, 1.0);

  // 2) Unproject with the inverse projection matrix. Pass
  //    camera.projectionMatrixInverse in as a uniform: GLSL's inverse() is
  //    only available in WebGL2 / GLSL ES 3.00 and is wasteful per fragment.
  vec4 viewPos = cameraProjectionMatrixInverse * clipPos;

  // 3) Perspective divide; force w back to 1 so the next matrix multiply
  //    treats the result as a point.
  return vec4(viewPos.xyz / viewPos.w, 1.0);
}

void main() {
  float fragDepth = texture2D(tDepth, vUv).r;

  // Reconstruct the view-space position from the raw depth value.
  // (If you only need the view-space Z, Three.js's perspectiveDepthToViewZ
  // converts it directly: perspectiveDepthToViewZ(fragDepth, cameraNear,
  // cameraFar) equals viewPos.z for a perspective camera.)
  vec4 viewPos = getViewPosition(vUv, fragDepth);

  // Finally, go to world space
  vec4 worldPos = cameraWorldMatrix * viewPos;

  gl_FragColor = vec4(worldPos.xyz, 1.0);
}
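
As a sanity check, the same unprojection can be run on the CPU. The sketch below is plain JavaScript with no Three.js dependency; the projection terms follow the standard WebGL perspective convention (the same one THREE.PerspectiveCamera uses), and the near/far/fov values are made up for the example. It projects a known view-space point into a [0..1] depth value, then reconstructs it exactly the way getViewPosition does:

```javascript
// Perspective projection terms (WebGL convention):
// clip.x = a*x, clip.y = b*y, clip.z = c*z + d*w, clip.w = -z
const near = 0.1, far = 100;
const fovY = 60 * Math.PI / 180, aspect = 1;
const b = 1 / Math.tan(fovY / 2);  // y scale
const a = b / aspect;              // x scale
const c = (far + near) / (near - far);
const d = (2 * far * near) / (near - far);

// A known view-space point (negative z = in front of the camera)
const view = [1, 2, -5];

// Project: view -> clip -> NDC -> [0..1] screen UV and depth-buffer value
const clipW = -view[2];
const ndc = [
  (a * view[0]) / clipW,
  (b * view[1]) / clipW,
  (c * view[2] + d) / clipW,
];
const uv = [ndc[0] * 0.5 + 0.5, ndc[1] * 0.5 + 0.5]; // what vUv would hold
const depth = ndc[2] * 0.5 + 0.5;                    // what tDepth would hold

// Reconstruct, mirroring the shader: NDC -> inverse projection -> divide by w.
// The analytic inverse of the projection above gives, for clip = (nx, ny, nz, 1):
//   x = nx / a,  y = ny / b,  z = -1,  w = (nz + c) / d
function getViewPosition(uv, depth) {
  const nx = uv[0] * 2 - 1, ny = uv[1] * 2 - 1, nz = depth * 2 - 1;
  const w = (nz + c) / d;
  return [nx / a / w, ny / b / w, -1 / w];
}

const rebuilt = getViewPosition(uv, depth);
console.log(rebuilt.map(v => +v.toFixed(6))); // should print [ 1, 2, -5 ]
```

If the round trip does not recover the original point, the depth you are feeding in is not the raw buffer value the reconstruction expects.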

  • cameraWorldMatrix is camera.matrixWorld in Three.js; it transforms from view space to world space (it is the inverse of the view matrix).
  • If the results look wrong, check that the value you pass to getViewPosition is the raw [0..1] depth-buffer value — not a depth you have already linearized with perspectiveDepthToViewZ.
  • Three.js's built-in perspectiveDepthToViewZ (available via #include <packing>) converts a [0..1] depth to view-space Z; it is handy on its own, but the projection and unprojection paths must use consistent conventions.
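
On that last point, the formula behind perspectiveDepthToViewZ (from Three.js's packing shader chunk) is easy to port to JavaScript to check your numbers. A minimal sketch, assuming the standard WebGL [0..1] depth mapping and example near/far values:

```javascript
// JS port of three.js's perspectiveDepthToViewZ (packing shader chunk):
// raw [0..1] depth-buffer value -> (negative) view-space Z.
function perspectiveDepthToViewZ(invClipZ, near, far) {
  return (near * far) / ((far - near) * invClipZ - far);
}

// The forward mapping (viewZToPerspectiveDepth in the same chunk):
// view-space Z -> raw [0..1] depth-buffer value.
function viewZToPerspectiveDepth(viewZ, near, far) {
  return ((near + viewZ) * far) / ((far - near) * viewZ);
}

const near = 0.1, far = 100;
const depth = viewZToPerspectiveDepth(-5, near, far);
console.log(depth);                                  // ≈ 0.98098
console.log(perspectiveDepthToViewZ(depth, near, far)); // ≈ -5
```

Note how non-linear the depth buffer is: a point 5 units away already maps to ≈0.98, which is why reconstructing positions from raw depth needs the full unprojection rather than a linear remap.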