More of an API question. Is there a way with the node system to use depth values in a fragment shader without the MRT system or defining a depth texture, or is MRT the way to go?
Hi cmhhelgeson,
If you want the depth texture of the scene in a fragment shader with the node system, without an MRT and without defining a depth texture yourself, this is how it works:
import { depthPass, viewportTopLeft, attribute, uv, wgslFn } from "./three/nodes";

const sceneDepthPass = depthPass( scene, camera );
const sceneDepthPassColor = sceneDepthPass.getTextureDepthNode();

const fragmentShader = wgslFn(`
    fn main_fragment(
        uv: vec2<f32>,
        viewportDepthTexture: texture_depth_2d,
        viewportDepthSampler: sampler
    ) -> vec4<f32> {
        // textureSample on a texture_depth_2d returns a single f32 depth value,
        // which WGSL broadcasts across the vec4 before adding the red tint
        return textureSample( viewportDepthTexture, viewportDepthSampler, uv ) + vec4<f32>( 1, 0, 0, 1 );
    }
`);

const shaderParams = {
    uv: viewportTopLeft, // or attribute( "uv" ) or uv()
    viewportDepthTexture: sceneDepthPassColor,
    viewportDepthSampler: sceneDepthPassColor
};
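A minimal sketch of wiring this into a material, assuming MeshBasicNodeMaterial from the same nodes entry point; whether you use fragmentNode or colorNode may depend on the three.js revision:

import { MeshBasicNodeMaterial } from "./three/nodes";

// calling the wgslFn node with the parameter object yields a node
// that can be plugged into a node material
const depthViewMaterial = new MeshBasicNodeMaterial();
depthViewMaterial.fragmentNode = fragmentShader( shaderParams );

// assign depthViewMaterial to a fullscreen quad or any mesh that should display the scene depth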
sceneDepthPassColor is then the depthTexture of the scene. Here is a code pen:
The advantage over render targets is that there is no frame delay.
I use this in postprocessing to reconstruct the world coordinates from the depth values and create atmosphere. It works better than with a render-target depth texture in WebGL. The terrain doesn’t have any color yet, but this is about the effect with the depth texture:
I don’t yet know how FXAA works in WebGPU, but the node for the depth texture is very practical compared to the render-target method that I used first, and it is also more efficient.
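For reference, the reconstruction step mentioned above is the usual unprojection: build an NDC position from the screen UV and the sampled depth, then multiply by the inverse projection matrix and the camera world matrix. A rough wgslFn sketch; the matrix parameters are placeholders that you would have to feed in yourself (e.g. from camera uniforms), and I have not tested this exact wiring:

const worldPosFromDepth = wgslFn(`
    fn main_fragment(
        uv: vec2<f32>,
        depthTexture: texture_depth_2d,
        depthSampler: sampler,
        cameraProjectionMatrixInverse: mat4x4<f32>,
        cameraWorldMatrix: mat4x4<f32>
    ) -> vec4<f32> {
        let depth = textureSample( depthTexture, depthSampler, uv );
        // screen UV (top-left origin) -> NDC; in WebGPU the NDC depth range is [0, 1]
        let ndc = vec4<f32>( uv.x * 2.0 - 1.0, 1.0 - 2.0 * uv.y, depth, 1.0 );
        // unproject to view space, then transform to world space
        var viewPos = cameraProjectionMatrixInverse * ndc;
        viewPos = viewPos / viewPos.w;
        let worldPos = cameraWorldMatrix * viewPos;
        return worldPos;
    }
`);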
I think the question was whether it is possible to somehow obtain the depth value without rendering to any depth targets at all. I was wondering that too, whether it could be possible beyond WebGL 1.
gl_FragCoord.z ?
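(For the WebGPU side, the counterpart of gl_FragCoord is the position builtin in the fragment stage; outside the node system that would look roughly like this:)

@fragment
fn main( @builtin( position ) fragCoord: vec4<f32> ) -> @location( 0 ) vec4<f32> {
    // fragCoord.z is the fragment depth in [0, 1], the WGSL equivalent of gl_FragCoord.z
    return vec4<f32>( vec3<f32>( fragCoord.z ), 1.0 );
}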
I think maybe I need to be more specific in what I’m trying to do. I’m trying to replicate MeshDepthMaterial within the node system ( i.e a MeshDepthNodeMaterial class ), which I think would necessitate applying the depth values to the diffuseColor of a material without necessarily using the MRT.
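Roughly what I have in mind, as a sketch in node terms (assuming exports like positionView, cameraNear, cameraFar and viewZToOrthographicDepth exist in the nodes entry point of the revision in use; this is not an exact replica of MeshDepthMaterial’s depth packing):

import { MeshBasicNodeMaterial, positionView, cameraNear, cameraFar, viewZToOrthographicDepth, vec3, vec4 } from "./three/nodes";

// use the fragment's view-space depth, remapped to [0, 1], directly as the color
const depthNodeMaterial = new MeshBasicNodeMaterial();
const linearDepth = viewZToOrthographicDepth( positionView.z, cameraNear, cameraFar );
depthNodeMaterial.colorNode = vec4( vec3( linearDepth ), 1 );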
You can also choose an object instead of the scene. This means that only the depth of that object is written into the texture. But maybe I’m misunderstanding you:
import { MeshBasicNodeMaterial, vec4, depthPass } from "./three/nodes";

const material = new MeshBasicNodeMaterial();
material.colorNode = vec4( 0, 0.25, 0.75, 1 );

const geometry = new THREE.BoxGeometry( 1, 1, 1 );
const box = new THREE.Mesh( geometry, material );
scene.add( box );

// pass the single object instead of the whole scene:
// only the box's depth ends up in the texture
const boxDepth = depthPass( box, camera );
const depthPassTexture = boxDepth.getTextureDepthNode();
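Untested, but the resulting node should then be usable like any other texture node, e.g. to display the box’s depth on a debug plane:

import { vec3, vec4 } from "./three/nodes";

const debugMaterial = new MeshBasicNodeMaterial();
debugMaterial.colorNode = vec4( vec3( depthPassTexture ), 1 );

const debugPlane = new THREE.Mesh( new THREE.PlaneGeometry( 1, 1 ), debugMaterial );
scene.add( debugPlane );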
Otherwise I can only think of
var fragDepth = position.z / position.w;
in the vertex shader and pass it to the fragment shader via a varying, but that has nothing to do with the node system; it is purely a shader thing. I also know that you are pretty smart, and if you had meant that, you wouldn’t have asked in the first place.
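As a plain-shader illustration of that varying route (an ordinary ShaderMaterial, nothing node-related):

const depthVaryingMaterial = new THREE.ShaderMaterial( {
    vertexShader: `
        varying float vFragDepth;
        void main() {
            vec4 clipPosition = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
            // clip-space z / w is the NDC depth of the vertex
            vFragDepth = clipPosition.z / clipPosition.w;
            gl_Position = clipPosition;
        }
    `,
    fragmentShader: `
        varying float vFragDepth;
        void main() {
            gl_FragColor = vec4( vec3( vFragDepth ), 1.0 );
        }
    `
} );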
