Copying canvas buffer to texture?

I’ve always used WebGLRenderTarget to render my scene to a texture to perform post-processing effects:

// 1. Draw scene to texture
renderer.render(scene, cam);

// 2. Perform post-effects on renderTarget.texture
// ...

// 3. Draw texture to canvas
renderer.render(postScene, postCam);

However, what I’m trying to attempt is in reverse order:

// 1. Draw scene to canvas
renderer.render(scene, cam);

// 2. Copy finished canvas buffer to texture
// How do I copy canvas buffer texture to renderTarget.texture?

Is this possible? I’m hoping to maintain the anti-alias properties of drawing directly to canvas, but also want to capture that color data to add some texture effects on the following frame. I’m trying to avoid rendering the whole scene twice, so I’m looking for a copy method.

In short, I don’t think this is possible, though I’d like to hear an explanation of why. As far as I know, you don’t manage buffers manually the way you would in OpenGL.


You could do a CPU roundtrip via readPixels (and then THREE.DataTexture or something), but that is a high price to pay for the anti-aliasing: it will decimate your FPS.


Yeah, I looked into that option, and the performance cost would be too big to justify its use. :frowning_face:


I guess the cheapest option is temporal AA, but it only works for static or mostly static scenes.

My scene is moving, so I’m using an FXAA pass for now. It’s pretty good, but still doesn’t compare to the default renderer.antialias = true result.


Try it with WebGLRenderer.copyFramebufferToTexture(). This method allows you to transfer the contents of the current framebuffer to the given DataTexture. There is also an example that shows how the method is used:


ha, now this is interesting, I did not know such a method existed! Someone @ me if you try it and it works!

Thanks for the suggestion, @Mugen87!

@makc3d Sadly it looks like .copyFramebufferToTexture() is also a CPU-intensive operation. I rendered to canvas, then copied to a texture of varying sizes, and I got noticeably slower framerates as the texture got larger :frowning_face:.

  • 60FPS when rendering direct to WebGLRenderTarget
  • ~50FPS when rendering to canvas, then copying to 512² texture
  • 50FPS when copying to a 1024² texture
  • 30FPS with 2048² texture
  • 14FPS with 4096² texture

I guess that’s why the example Mugen suggested only captures a 128x128 area, where the framerate wouldn’t be greatly affected.


What you could try is drawing the WebGL canvas to a regular canvas (drawImage) and use that canvas as texture.

I just tried it @Fyrestar , and performance declined even further:

  • 12FPS with 2048² canvas
  • 6FPS with 4096² canvas

// Create canvas
const canv = document.createElement("canvas");
canv.width = 1024;
canv.height = 1024;
const cContext = canv.getContext("2d");

// Make texture with the 2D canvas as source
const texCopy = new THREE.Texture(canv);

update() {
	// Pass 1: Render scene to WebGL canvas
	this.rend.render(this.scene, this.cam);

	// Draw WebGL canvas onto 2D canvas
	cContext.drawImage(this.rend.domElement, 0, 0, canv.width, canv.height);
	texCopy.needsUpdate = true;

	// Pass 2: Use updated texture for post-processing
	this.extractShader.uniforms.uBaseTexture.value = texCopy;
	this.rend.render(this.postScene, this.postCam);
}

It looks like all these methods are CPU-intensive. I guess I’ll stick with rendering to texture, and adding an FXAA pass as usual.

That’s strange for that resolution; what are your specs? I have no issues with 1080p on multiple machines, even doing it a couple of times per frame; I’m using it in a hybrid drawing/animation app. How is the performance when only drawing to the canvas, without uploading the canvas as a texture?

I would recommend this anyway for anti-aliasing; there is no guarantee that anti-aliasing is always supported.

I’m using a 15-inch MacBook Pro from mid-2015, and I have no issue running most WebGL sites smoothly.

I created a demo with the ctx.drawImage setup you suggested, using a 2048x2048 canvas: the top canvas is WebGL, and the lower canvas is 2D. I get 30FPS with drawImage (on lines 64 & 65 of the demo), but if I comment those two lines out, I get a smooth 60FPS.

I get a solid 60. What browser are you using, and what are the specs of your GPU?

It’s running an AMD Radeon R9 M370X 2048 MB
Interestingly enough, I get the following:

  • 30FPS with Firefox
  • 60FPS with Chrome
  • 30FPS with Safari

cough superior cough browser

I get 30 on my iPad as well. I would still recommend using a GPU-side method like FXAA instead of going this route; if it’s just about anti-aliasing, this method is too costly for weak devices, and native anti-aliasing isn’t even guaranteed. In my case it doesn’t even go both ways like in yours, but rather just one direction depending on the composition complexity; since it’s about editing and isn’t required to always preview on the fly in realtime (prerender), it’s no bottleneck for my case.

You could go with FXAA, which gives a really satisfying result for the low cost, MSAA with WebGL2, or supersampling.


OK, but did you turn mipmapping off? Or is it off for DataTexture by default?

(this is re: copyFramebufferToTexture)

@makc3d I was using THREE.LinearFilter for minFilter and magFilter, which disables mipmapping if I remember correctly.
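For what it’s worth, the filters only control sampling; the flag that actually skips mipmap generation is `generateMipmaps` (which, if I recall correctly, DataTexture already defaults to `false`). To rule it out explicitly:

```javascript
// Ensure no mipmaps are generated or sampled for the copy texture
texCopy.minFilter = THREE.LinearFilter; // a non-mipmap min filter
texCopy.magFilter = THREE.LinearFilter;
texCopy.generateMipmaps = false;        // skip mipmap generation on upload
texCopy.needsUpdate = true;
```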

Try it out, I made a second demo that uses copyFramebufferToTexture:

It looks like by setting needsUpdate = true you erase the result of copying from the framebuffer and slow it down; see