WebGLRenderTarget: null texture in shader texture uniform

While using THREE.WebGLRenderTarget to pass a rendered scene as a texture to a second shader, the value received in the shader's texture uniform is always null.

Flow of the code:
1. Shader1 is used for rendering the depth map (this works fine when rendered directly to the screen).
2. The rendered depth map is passed as a texture uniform to Shader2, which is finally rendered on screen.

Relevant Code Snippet:
// Initialize the render target

var options = {
	minFilter: THREE.LinearFilter,
	magFilter: THREE.LinearFilter,
	format: THREE.RGBAFormat,
	type: /(iPad|iPhone|iPod)/g.test(navigator.userAgent) ? THREE.HalfFloatType : THREE.FloatType
};

renderTarget1 = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, options);
renderTarget1.stencilBuffer = false;
renderTarget1.texture.needsUpdate = true;
renderTarget1.texture.generateMipmaps = false;

//Define the Shader2 uniforms
    uniforms = {
		depthMap : { type: "t", value: renderTarget1.texture},
		O_SIZE : { type: "float", value: 0.000375},
	   };

    //Creating Material for Shader2
    materialShader2 = new THREE.ShaderMaterial({
			uniforms: uniforms,
			vertexShader : document.getElementById('vertex2_Shader').textContent,
			fragmentShader : document.getElementById('fragment2_Shader').textContent
		});


materialShader2.needsUpdate = true;


// Animation loop
function animate()
{
	// render the depth map into renderTarget1
	renderer.setRenderTarget(renderTarget1); // set the render target
	renderer.render(sceneDepthMap, shadowCamera); // render the depth map to the framebuffer
	renderer.setRenderTarget(null);
	renderer.render(scene2, shadowCamera);
	requestAnimationFrame( animate );
}

How do you initialize the WebGL renderer? Could renderer.autoClear = false; be missing? You may also need to adjust the animation loop after turning off auto-clear:

function animate()
{
	renderer.clear();

	// render the depth map into renderTarget1
	renderer.setRenderTarget(renderTarget1); // set the render target
	renderer.render(sceneDepthMap, shadowCamera); // render the depth map to the framebuffer
	renderer.setRenderTarget(null);
	renderer.render(scene2, shadowCamera);
	requestAnimationFrame( animate );
}
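
For reference, a typical renderer setup with manual clearing looks like this (just a sketch; the constructor options and DOM handling are assumptions, not taken from your code):

renderer = new THREE.WebGLRenderer( { antialias: true } ); // antialias is an assumption
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.autoClear = false; // take manual control of clearing
document.body.appendChild( renderer.domElement );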

Any chance of demonstrating the issue with a live example? https://jsfiddle.net/f2Lommf5/

Sharing your code via GitHub is also a good idea.

BTW: When defining uniforms, it’s not necessary to configure the type property anymore. Just setting value is sufficient. Besides, when creating a material, it’s not necessary to immediately set needsUpdate to true.
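
In other words, the uniform definition from above can be simplified to:

uniforms = {
	depthMap: { value: renderTarget1.texture },
	O_SIZE: { value: 0.000375 }
};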

I am attaching a live example: https://jsfiddle.net/MrOrangeSky/gyf4pdv6/2/
Kindly take a look.

I am receiving the following warnings too (I did not mention them earlier because I think they are unlikely to be the cause of the mentioned issue):
1. WebGL warning: drawArraysInstanced: Tex image TEXTURE_2D level 0 is incurring lazy initialization.
2. THREE.WebGLRenderer: Texture marked for update but image is undefined.
However, these warnings are slowing down jsfiddle, unlike the browser run (where the delay was not as noticeable).

I am also attaching a visualisation of the expected shader uniform value (attached image: expected-result).
@NicolasRannou: Adding renderer.autoClear = false; and clearing before each render call is not affecting the output. The same issue still persists.

@Mugen87 Thank you for sharing the updates on uniform definition and on needsUpdate.

controls.addEventListener( 'change', animate );
This line starts a new animation loop on each change of the controls, which leads to a significant drop in performance very quickly.


This warning goes away if you remove renderTarget1.texture.needsUpdate = true;.

In general, it seems to me that what you are trying to do is not correct. You render the scene with an orthographic camera into a render target and then use the result as a texture for the same cube. This can't work, since you have to render to a full-screen quad, as demonstrated in this example:

https://threejs.org/examples/webgl_shader

Keep in mind that the render target represents the image of the entire screen, not just of the cube. Mapping this texture to your cube can’t be right.
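
The basic pattern from that example looks roughly like this (a sketch; quadScene and quadCamera are placeholder names, not variables from your code):

// full-screen quad that receives the render target texture
var quadCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var quadScene = new THREE.Scene();
quadScene.add( new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), materialShader2 ) );

// in the animation loop: first the depth pass, then the quad
renderer.setRenderTarget( renderTarget1 );
renderer.render( sceneDepthMap, shadowCamera );
renderer.setRenderTarget( null );
renderer.render( quadScene, quadCamera );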

@Mugen87
Intention of the code: render an opacity map (to be used later), which needs the depth values of an object for its calculations.
Step-wise process:
1. Render the depth map: place an orthographic camera at the light source and record depth values (I wrote fragmentShadow_Shader for this purpose; see the sketch after this list).
2. Render the opacity map: fragment2_shader was written for this purpose. The depth map is passed as a texture, so that depth values can be read to generate the output of this shader.
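
For step 1, the camera setup looks roughly like this (a simplified sketch; the frustum bounds and light position are placeholders, not my exact values):

// orthographic camera placed at the light source, looking at the object
shadowCamera = new THREE.OrthographicCamera( -10, 10, 10, -10, 0.1, 100 );
shadowCamera.position.copy( light.position );
shadowCamera.lookAt( scene.position );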
Full code of fragment2_shader:

uniform sampler2D depthMap;
uniform float O_SIZE;

varying vec4 fragCoord;
varying vec2 fUv;

vec4 opacityMap = vec4(0.0);

#define A 0.01

// unpack an RGBA-encoded depth value (same scheme as the packing used in the depth pass)
const float UnpackDownscale = 255. / 256.;
const vec3 PackFactors = vec3(256. * 256. * 256., 256. * 256., 256.);
const vec4 UnpackFactors = UnpackDownscale / vec4(PackFactors, 1.);

float unpackRGBAtoDepth(const in vec4 v)
{
	return dot(v, UnpackFactors);
}

void main()
{
	float currentDepth = fragCoord.z; // check whether it is more efficient to pass fragCoord.z
	vec4 shadowMapDepthRGBA = texture2D(depthMap, fUv);
	float shadowMapDepth = unpackRGBAtoDepth(shadowMapDepthRGBA); // depth value from the depth map

	// accumulate opacity layers depending on how far the fragment lies behind the stored depth
	if (currentDepth < shadowMapDepth + (1. * O_SIZE))
	{
		opacityMap.r = A;
		opacityMap.g = A;
		opacityMap.b = A;
	}
	else if (currentDepth < shadowMapDepth + ((1. + 3.) * O_SIZE))
	{
		opacityMap.g = A;
		opacityMap.b = A;
	}
	else if (currentDepth < shadowMapDepth + ((1. + 3. + 16.) * O_SIZE))
		opacityMap.b = A;
	else
		opacityMap.b = A;

	opacityMap.a = shadowMapDepth;

	gl_FragColor = opacityMap;
	// debug outputs:
	//gl_FragColor = shadowMapDepthRGBA;
	//gl_FragColor = vec4(shadowMapDepth, 1.0, 1.0, 1.0);
	//gl_FragColor = vec4(shadowMapDepthRGBA.rgb, 1.0);
	//gl_FragColor = vec4(currentDepth, currentDepth, currentDepth, 1.0);
}

Since I was having issues with not receiving the right output from fragment2_shader, I visualised different variables for debugging. This makes it look like I am using the first render output as a texture for the same cube (but I am not); I just intended to find out, via visualisation, why I am not receiving the depthMap uniform value in fragment2_shader.

Do you still think I am doing it wrong? If so, what is the correct way of receiving the depth map (as a uniform) for rendering the desired opacity map?

Also, since a render target represents the entire screen, is the use of a render target to obtain the depth map of a single object wrong (even if the scene has just that object in it)? If yes, what is the alternative?
Correct me if I am wrong, but do we not use framebuffer objects similarly in OpenGL to store depth buffer values for generating the depth map of an object? Since we do not have access to depth buffer values in our custom shader, is rendering it like shaderDepthMap wrong?
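
For reference, the raw WebGL pattern I have in mind is roughly this (a sketch, not actual code from my project):

// FBO with an RGBA texture as its color attachment, used to store packed depth
var fbo = gl.createFramebuffer();
gl.bindFramebuffer( gl.FRAMEBUFFER, fbo );
var depthTex = gl.createTexture();
gl.bindTexture( gl.TEXTURE_2D, depthTex );
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null );
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, depthTex, 0 );
// render the depth pass into the FBO, then bind depthTex as a sampler uniform in the second pass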

@prisoner849
Does the performance become better if we place all the render calls in a render function and call
controls.addEventListener( 'change', render );
and change the animate function as follows?

function animate() {
	requestAnimationFrame( animate );
	//update();
	render();
}
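
with render() containing the render calls from before:

function render() {
	renderer.setRenderTarget( renderTarget1 );
	renderer.render( sceneDepthMap, shadowCamera );
	renderer.setRenderTarget( null );
	renderer.render( scene2, shadowCamera );
}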