// Reconstruct the fragment's world-space position from the depth texture
float depthTx = texture2D( tDepth, vUv ).r;
float viewZ = getViewZ( depthTx );
float clipW = cameraProjectionMatrix[2][3] * viewZ + cameraProjectionMatrix[3][3];
vec4 e = getViewPosition( vUv, depthTx, clipW ); // view-space position
vec4 wPos = CameraMatrixWorld * e;               // view space -> world space
gl_FragColor = wPos;
I am trying to render the world position, but the output still looks like the eye-space (view-space) position.
OK, I see it now: I forgot to reset camera.matrixWorld when I moved the camera position.
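For reference, a minimal sketch of that fix (the material variable and the per-frame placement are my assumptions, not code from this thread):

// Per frame, after moving the camera and before rendering the full-screen pass:
camera.updateMatrixWorld();

// 'material' is a hypothetical ShaderMaterial holding the fragment shader above.
material.uniforms.CameraMatrixWorld.value.copy( camera.matrixWorld );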
New question: how should I handle this precision problem?
How do you configure your depth texture?
function setupRenderTargetLight() {

	var target = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
	target.texture.minFilter = THREE.NearestFilter;
	target.texture.magFilter = THREE.NearestFilter;
	target.stencilBuffer = ( THREE.DepthFormat === THREE.DepthStencilFormat ) ? true : false;
	target.depthTexture = new THREE.DepthTexture();
	target.depthTexture.format = THREE.DepthFormat;
	target.depthTexture.type = THREE.UnsignedShortType;
	target.setSize( window.innerWidth, window.innerHeight );

	return target;

}
Your reminder helped me find a solution, thank you!
target.depthTexture.type = THREE.FloatType;
Correct! THREE.FloatType is the highest possible precision for a depth texture and usually mitigates such issues.
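For completeness, a minimal sketch of the adjusted depth texture setup (reusing the window dimensions from the function above):

// FloatType gives the highest available depth precision; UnsignedIntType or
// UnsignedShortType are lower-precision alternatives.
target.depthTexture = new THREE.DepthTexture( window.innerWidth, window.innerHeight );
target.depthTexture.format = THREE.DepthFormat;
target.depthTexture.type = THREE.FloatType;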
Could you share the full code, especially getViewPosition()?
linsanda:
getViewPosition
Is it a built-in function? I don't have it.
The function usually looks like this:

vec3 getViewPosition( const in vec2 screenPosition, const in float depth ) {

	// uv + depth in [0, 1] -> NDC in [-1, 1] -> view space (after the perspective divide)
	vec4 clipSpacePosition = vec4( vec3( screenPosition, depth ) * 2.0 - 1.0, 1.0 );
	vec4 viewSpacePosition = cameraProjectionMatrixInverse * clipSpacePosition;
	return viewSpacePosition.xyz / viewSpacePosition.w;

}
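On the JavaScript side, the matrix it needs can be copied straight from the camera. A minimal sketch (the material variable is hypothetical; note that both the screen position and the sampled depth are expected in [0, 1], and the function converts them to NDC internally):

// 'material' is a hypothetical ShaderMaterial whose fragment shader contains getViewPosition().
// Update whenever the camera or its projection changes:
material.uniforms.cameraProjectionMatrixInverse.value.copy( camera.projectionMatrixInverse );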
In this file, you can find related code like getViewZ().
I am just starting to work with all the built-in functions around spaces and perspectives. Is there documentation specific to three.js shaders / passes?
No, there isn't. But with WebGPURenderer and NodeMaterial we have started to implement such functions only once with TSL and to provide proper documentation:
/**
 * Computes a position in view space based on a fragment's screen position expressed as
 * uv coordinates, the fragment's depth value and the camera's inverse projection matrix.
 *
 * @method
 * @param {Node<vec2>} screenPosition - The fragment's screen position expressed as uv coordinates.
 * @param {Node<float>} depth - The fragment's depth value.
 * @param {Node<mat4>} projectionMatrixInverse - The camera's inverse projection matrix.
 * @return {Node<vec3>} The fragment's position in view space.
 */
export const getViewPosition = /*@__PURE__*/ Fn( ( [ screenPosition, depth, projectionMatrixInverse ], builder ) => {
If you really start from scratch, consider using WebGPURenderer with its new post-processing system right from the beginning. It is more performant and flexible than the previous composer. A collection of demos is available here: three.js examples
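For orientation, a minimal sketch of that setup (treat the import paths and exact API as assumptions, since they can vary between releases):

import * as THREE from 'three/webgpu';
import { pass } from 'three/tsl';

const renderer = new THREE.WebGPURenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 50, window.innerWidth / window.innerHeight, 0.1, 100 );

// Node-based post processing: render the scene into a pass node and
// feed its color output into the post-processing graph.
const postProcessing = new THREE.PostProcessing( renderer );
const scenePass = pass( scene, camera );
postProcessing.outputNode = scenePass.getTextureNode();

renderer.setAnimationLoop( () => postProcessing.render() );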