Need to drawImage on 2D canvas the image data from THREE.WebGLRenderTarget

Guys, I have found all kinds of examples of using render target for maps, but not taking THREE.WebGLRenderTarget and turning it into an image that I can use on 2D canvas to ctx.drawImage(rtt, 0,0,300,300) etc. I have extensive threejs experience and knowhow, but just never needed to touch render target stuff before this… Can anyone provide me these (presumably) couple lines of code?

Just the relevant parts (no animations - only render as needed):

bufferTexture = new THREE.WebGLRenderTarget( wid, hei, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter});

function render(){
	renderer.render(scene, camera);
	renderer.render(scene2, camera2, bufferTexture, true);
	rtt = bufferTexture.texture;
}

function drawRTT(){
	ctx.drawImage(rtt, 0, 0, 300, 300);
}

I have managed to do what I need with two of everything (including a second THREE.WebGLRenderer – and putting its domElement in a hidden div)… but this obviously is a pain, and I imagine twice as taxing on systems… So really guys, if there is a way to do what I am saying with a render target and just popping that imageData (whatever) into a new Image(), or something along those lines, and being able to draw that into my 2d canvas, that would rock. Please advise. Thanks in advance.

Edit: To be clear, scenes 1 & 2 have somewhat different content. And I’m aware that one can drawImage(renderer.domElement,x,y,xpos,ypos), which I am doing in my workaround. I just want to be able to do the same from a render target.

AFAIK, it’s not possible to directly use a render target in order to draw its framebuffer onto a canvas element. Apart from your proposed solution, you might want to check out WebGLRenderer.readRenderTargetPixels() and try that approach: read the pixels back to the CPU and then put them onto the 2D canvas.
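A minimal sketch of that readback approach, assuming the `renderer`, `bufferTexture`, and `ctx` variables from the snippets above. The `flipRowsY` helper is hypothetical, but some flip is needed because readPixels returns rows bottom-up while ImageData expects them top-down:

```javascript
// WebGL readPixels returns the bottom row first; ImageData wants the
// top row first, so flip the RGBA buffer vertically.
function flipRowsY(pixels, width, height) {
	const bytesPerRow = width * 4; // 4 bytes per RGBA pixel
	const flipped = new Uint8ClampedArray(pixels.length);
	for (let row = 0; row < height; row++) {
		const src = row * bytesPerRow;
		const dst = (height - row - 1) * bytesPerRow;
		flipped.set(pixels.subarray(src, src + bytesPerRow), dst);
	}
	return flipped;
}

// Read the render target back to the CPU and draw it into a 2D context.
function drawRenderTargetToCanvas(renderer, renderTarget, ctx) {
	const { width, height } = renderTarget;
	const pixels = new Uint8Array(width * height * 4);
	renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, pixels);
	const flipped = flipRowsY(new Uint8ClampedArray(pixels.buffer), width, height);
	ctx.putImageData(new ImageData(flipped, width, height), 0, 0);
}
```

One caveat: putImageData() ignores canvas transforms and doesn’t scale, so to draw at an arbitrary size (your 300×300) you’d put the ImageData onto an intermediate canvas first and then drawImage() that canvas.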


Hey, someone finally spoke up! Thanks Mugen!

Hmmm… I’m likely to just keep the workaround with 2nd renderer (et al), since that is real-time… It’s been my experience that things like toDataURL(“image/jpeg”) take time to go through each pixel…

But as far as system resources are concerned, I am curious what you (and others) think would be less taxing: my workaround, or the readRenderTargetPixels way?

And do you know if having multiple three.js renderer instances (even stacked above one another, like Photoshop layers) is problematic? Does the system take a huge hit, or is each renderer instance fairly light by itself, only hitting hard with high-poly meshes, lots of PBR maps, and/or hundreds of thousands of instances of stuff?

I have not done any performance comparisons between the two approaches. The only thing I know is that WebGLRenderer.readRenderTargetPixels() internally uses WebGLRenderingContext.readPixels(), which tends to be a slow operation. Since it’s a synchronous call, it potentially blocks your rendering loop. It would actually be interesting if you could try out both approaches and report some performance data in this topic :innocent:
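To gather that kind of data, one could wrap the readback in a timer. A rough sketch, where `timedReadback` is a hypothetical helper and `performance.now()` supplies the measurement:

```javascript
// Hypothetical helper: read a render target back to the CPU and
// measure how long the synchronous readPixels round trip takes.
function timedReadback(renderer, renderTarget) {
	const { width, height } = renderTarget;
	const pixels = new Uint8Array(width * height * 4); // RGBA
	const t0 = performance.now();
	renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, pixels);
	const ms = performance.now() - t0;
	return { pixels, ms };
}

// Usage inside the render loop, e.g.:
// const { ms } = timedReadback(renderer, bufferTexture);
// console.log(`readback took ${ms.toFixed(2)} ms`);
```

Comparing that number against a frame budget of ~16 ms would show quickly whether the readback is viable in real time at your render-target size.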

In general it’s possible to have multiple renderers, as demonstrated in this example. However, you should only use this approach in special use cases, since the renderers are completely independent of each other: each has its own WebGL context, state, and caching.