Pass renderer output as input texture to next frame iteration


I’m trying to implement an example that iterates on a modified version of an input texture over time.

I start with the following shader material:

	// Create the plane geometry
	var geometry = new THREE.PlaneBufferGeometry(2, 2);

	// Define the shader uniforms
	uniforms = {
		u_time : {
			type : "f",
			value : 0.0
		},
		u_resolution : {
			type : "v2",
			value : new THREE.Vector2(window.innerWidth, window.innerHeight)
		},
		u_mouse : {
			type : "v2",
			value : new THREE.Vector2()
		},
		u_texture : {
			type : "t",
			value : null
		}
	};

	// Create the shader material
	var material = new THREE.ShaderMaterial({
		uniforms : uniforms,
		vertexShader : document.getElementById("vertexShader").textContent,
		fragmentShader : document.getElementById("fragmentShader").textContent
	});

My idea is then to pass the rendered screen from the previous frame to u_texture.

		renderer.render(scene, camera, renderTarget);
		renderer.render(scene, camera);
		uniforms.u_texture.value = renderTarget.texture;

This produces the following error:

GL ERROR :GL_INVALID_OPERATION : glDrawElements: Source and destination textures of the draw are the same.

I also don’t like it because I need to render the same scene twice. My question is: is there a way to put the rendered scene into a texture that I can pass to the next frame iteration?

Something like:

		renderer.render(scene, camera);
		uniforms.u_texture.value = renderer.getScreenTexture();

Thank you for your help!

The problem is that you use the same scene and camera for both calls of WebGLRenderer.render(). You normally render the actual scene to a render target (this is the so-called “beauty pass”) and then use it as an input for post-processing. In this context, you normally have a separate scene with a fullscreen quad and an instance of OrthographicCamera. The material of the quad is usually an instance of ShaderMaterial that contains the actual post-processing code (for example a blur effect). Have a look at THREE.ShaderPass to see this setup in action.
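For illustration, here is a minimal sketch of that setup. This is not three.js API code: `createPostScene` and `renderFrame` are helper names I made up, `THREE` is assumed to be a global from a script tag, and the three-argument `render(scene, camera, target)` signature matches the (pre-r102) code in the question.

```javascript
// Hypothetical helper: builds the separate post-processing scene
// (fullscreen quad + orthographic camera) described above.
// Assumes THREE is available as a global (script-tag build).
function createPostScene(postMaterial) {
	// A separate scene containing only a fullscreen quad.
	var postScene = new THREE.Scene();
	// An orthographic camera spanning normalized device coordinates [-1, 1].
	var postCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
	postScene.add(new THREE.Mesh(new THREE.PlaneBufferGeometry(2, 2), postMaterial));
	return { scene : postScene, camera : postCamera };
}

// Hypothetical per-frame routine: beauty pass into a render target,
// then the post pass reads that target's texture and draws to screen.
function renderFrame(renderer, beauty, post, renderTarget, uniforms) {
	renderer.render(beauty.scene, beauty.camera, renderTarget); // beauty pass
	uniforms.u_texture.value = renderTarget.texture;            // input for post pass
	renderer.render(post.scene, post.camera);                   // post pass to screen
}
```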

Thank you for your reply, @Mugen87

What I want to do is a bit different from a post-processing effect. It’s more of an evolution effect, where the input of the current frame is based on the output from the previous frame.

I finally managed to make it work using two scenes and two render targets. You can see the example here:

And this is the js code:

This example from alteredqualia helped me converge on the final solution:
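In case it helps others, the core of the two-render-target (“ping-pong”) approach can be sketched like this. It is only a sketch under my assumptions: `createPingPong` and `makeAnimate` are names I invented, `THREE` is assumed to be a global, and the three-argument `render()` signature matches the older API used earlier in this thread.

```javascript
// Ping-pong bookkeeping: which target we read from and which we write to.
// Pure logic, independent of three.js.
function createPingPong() {
	var state = { read : 0, write : 1 };
	return {
		get read() { return state.read; },
		get write() { return state.write; },
		swap : function () {
			var t = state.read;
			state.read = state.write;
			state.write = t;
		}
	};
}

// Hypothetical render loop using two scenes and two render targets.
// `targets` is an array of two THREE.WebGLRenderTarget instances.
function makeAnimate(renderer, simScene, simCamera, displayScene, displayCamera, uniforms, targets, pp) {
	return function animate() {
		requestAnimationFrame(animate);
		// Read last frame's output (never the target we are about to write,
		// which is what caused the GL_INVALID_OPERATION error).
		uniforms.u_texture.value = targets[pp.read].texture;
		// Write the new frame into the other target (pre-r102 signature;
		// newer three.js versions use renderer.setRenderTarget() instead).
		renderer.render(simScene, simCamera, targets[pp.write]);
		// Draw the result to the screen with the second scene.
		renderer.render(displayScene, displayCamera);
		// Swap read/write roles for the next iteration.
		pp.swap();
	};
}
```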