I’ve always used WebGLRenderTarget to render my scene to a texture to perform post-processing effects:
// 1. Draw scene to texture
renderer.setRenderTarget(renderTarget);
renderer.render(scene, cam);
// 2. Perform post-effects on renderTarget.texture
// ...
// 3. Draw texture to canvas
renderer.setRenderTarget(null);
renderer.render(postScene, postCam);
However, what I’m attempting is the reverse order:
// 1. Draw scene to canvas
renderer.setRenderTarget(null);
renderer.render(scene, cam);
// 2. Copy finished canvas buffer to texture
renderer.setRenderTarget(renderTarget);
// How do I copy the canvas framebuffer into renderTarget.texture?
Is this possible? I’m hoping to keep the anti-aliasing I get from drawing directly to the canvas, but I also want to capture that color data to add some texture effects on the following frame. I’m trying to avoid rendering the whole scene twice, so I’m looking for a copy method.
You could do a CPU roundtrip via readPixels (and then a THREE.DataTexture or something), but that is kind of a high price to pay for the anti-aliasing - it will decimate your fps.
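A minimal sketch of that roundtrip, assuming width and height match the drawing buffer size and that the readback happens right after rendering (otherwise you’d need preserveDrawingBuffer: true):

// Read the default framebuffer back to the CPU - this stalls the GPU pipeline.
const gl = renderer.getContext();
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

// Upload it back to the GPU as a texture.
const dataTexture = new THREE.DataTexture(pixels, width, height, THREE.RGBAFormat);
dataTexture.needsUpdate = true;
// Note: readPixels returns rows bottom-to-top, so account for the
// vertical flip in your shader or by reordering the rows.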
Try it with WebGLRenderer.copyFramebufferToTexture(). This method transfers the contents of the current framebuffer to the given DataTexture, entirely on the GPU. There is also an official example (webgl_framebuffer_texture) that shows how the method is used.
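A rough sketch of the idea, assuming the (position, texture) argument order the method had at the time (recent three.js versions take a FramebufferTexture and reverse the arguments):

// Allocate a texture of the right size; its contents come from the GPU-side copy.
const copyTexture = new THREE.DataTexture(
  new Uint8Array(width * height * 4),
  width, height, THREE.RGBAFormat
);
copyTexture.minFilter = THREE.LinearFilter; // no mipmaps for a full-frame copy

// Render to the canvas, then copy the canvas framebuffer into the texture.
renderer.setRenderTarget(null);
renderer.render(scene, cam);
renderer.copyFramebufferToTexture(new THREE.Vector2(0, 0), copyTexture);
// copyTexture can now be sampled by a material on the next frame.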
@makc3d Sadly, it looks like .copyFramebufferToTexture() is also an expensive operation in practice. I rendered to the canvas, then copied to textures of varying sizes, and I got noticeably slower framerates as the texture got larger.
That’s strange for that resolution - what are your specs? I have no issues with 1080p on multiple machines, even doing it a couple of times per frame; I’m using it in a hybrid drawing/animation app. How is the performance when you only draw to the canvas, without uploading the canvas as a texture?
I would recommend this anyway for anti-aliasing, since there is no guarantee that native anti-aliasing is always supported.
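As an aside, you can check whether the browser actually granted anti-aliasing:

// getContextAttributes() reports what the browser actually gave you,
// not what was requested at context creation.
const attrs = renderer.getContext().getContextAttributes();
console.log('antialias granted:', attrs.antialias);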
I’m using a 15-inch MacBook Pro from mid-2015. I have no issue running most WebGL sites; for example, lusion.co runs smoothly.
I created a demo with the ctx.drawImage setup you suggested, using a 2048x2048 canvas: https://jsfiddle.net/marquizzo/znb9q4hp/ The top canvas is WebGL, and the lower canvas is 2D. I get 30 FPS with drawImage (on lines 64 & 65), but if I comment those two lines out, I get a smooth 60 FPS.
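For reference, the copy step being benchmarked looks roughly like this (an illustrative reconstruction, not the exact fiddle code):

// Copy the finished WebGL canvas into a 2D canvas every frame,
// then upload that 2D canvas as a texture.
const copyCanvas = document.createElement('canvas');
copyCanvas.width = copyCanvas.height = 2048;
const ctx = copyCanvas.getContext('2d');
const canvasTexture = new THREE.CanvasTexture(copyCanvas);

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, cam);
  ctx.drawImage(renderer.domElement, 0, 0); // these two lines are the ones
  canvasTexture.needsUpdate = true;         // that drop 60 FPS to 30 FPS
}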
I get 30 on my iPad as well. I would still recommend a GPU-side method like FXAA instead of going this route - if it’s just about anti-aliasing, this method is too costly for weak devices, and native anti-aliasing isn’t even guaranteed. In my case it doesn’t even go both ways like in yours, just one direction depending on the composition complexity, but since my app is about editing and isn’t required to always preview in realtime (it prerenders), it’s no bottleneck for my case.
You could go with FXAA, which gives a really satisfying result for the low cost, with MSAA via WebGL2, or with supersampling.
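For example, the standard FXAA setup from the three.js examples looks roughly like this (import paths assume the jsm modules; width and height are your canvas size):

import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
import { FXAAShader } from 'three/examples/jsm/shaders/FXAAShader.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, cam));

// FXAA runs as a screen-space pass and needs the inverse resolution.
const fxaaPass = new ShaderPass(FXAAShader);
const pixelRatio = renderer.getPixelRatio();
fxaaPass.material.uniforms['resolution'].value.set(
  1 / (width * pixelRatio),
  1 / (height * pixelRatio)
);
composer.addPass(fxaaPass);

// In the render loop, replace renderer.render(scene, cam) with:
composer.render();

For MSAA with WebGL2, three.js offered WebGLMultisampleRenderTarget at the time (newer releases expose a samples property on WebGLRenderTarget instead).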