I appreciate this is already somewhat covered in other posts, but I haven’t found a definitive answer.
I have a pretty involved workflow with a lot of shaders and steps in a render pipeline. I want a way to see the real-time outputs in separate canvases, which maybe I can put into a debug UI with 5 or 6 smaller canvases showing each step (GPGPU texture, an exclusion matte, etc.). Imagine I have a final output canvas and some kind of visibility toggle on my lil-ui controls to toggle the creation/DOM visibility of small canvases on the side.
I’ve read that simply drawing one output to a 2D context is expensive, as is having multiple renderers in my code.
Question: what’s the most performance-efficient way to draw out render steps (render targets? computational renders?) to other canvases? Are there any simple examples of this that someone can share?
It’s true that using multiple canvases is expensive, and there’s a hard limit to how many WebGL canvases you can create. Especially if they’re all going to update in real time, it’s better to create a single large (even full-screen) canvas and draw into rectangular windows as if they were separate canvases. You can see this approach in the official three.js multiple-views examples.
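Roughly, the pattern looks like this (a minimal sketch rather than one of those examples; it assumes you already have a `renderer`, a `scene`, your main camera, and a second `debugCamera`):

```js
import * as THREE from 'three';

// One full-screen WebGL canvas; each "window" is drawn via viewport + scissor.
renderer.setScissorTest( true );

const size = new THREE.Vector2();

function render() {

	renderer.getSize( size );

	// main view fills the whole canvas
	renderer.setViewport( 0, 0, size.x, size.y );
	renderer.setScissor( 0, 0, size.x, size.y );
	renderer.render( scene, mainCamera );

	// small inset "window" in the bottom-left corner, same scene, second camera
	const w = size.x / 4, h = size.y / 4;
	renderer.setViewport( 0, 0, w, h );
	renderer.setScissor( 0, 0, w, h );
	renderer.render( scene, debugCamera );

	requestAnimationFrame( render );

}

requestAnimationFrame( render );
```

With the scissor test enabled, each render call only clears and draws inside its own rectangle, so the inset behaves like an independent mini-canvas.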
For rendering just one “render step”, I would suggest what @manthrax just said: if you want to avoid rendering to an actual on-page canvas for this, you can always just create it without adding the element to the DOM (updating the DOM element is really the bottleneck) and pass it to a simple three.js plane as a regular texture. This is what we do here for debugging textures, following the same pattern as ShadowMapViewer.js. Being a regular plane, you can later attach it to the camera or use other tricks to stick it to the view.
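A rough sketch of that debug-plane idea, assuming a WebGLRenderTarget named `gpgpuTarget` produced earlier in the pipeline (the name is just a placeholder):

```js
import * as THREE from 'three';

// Show a render target's texture on a small plane parked in front of the
// camera, ShadowMapViewer-style, so it stays glued to the view.
const debugMaterial = new THREE.MeshBasicMaterial( { map: gpgpuTarget.texture } );
debugMaterial.depthTest = false; // draw on top of the scene

const debugPlane = new THREE.Mesh( new THREE.PlaneGeometry( 0.4, 0.4 ), debugMaterial );
debugPlane.renderOrder = 999;
debugPlane.position.set( - 0.6, - 0.45, - 2 ); // bottom-left, 2 units in front of the camera

camera.add( debugPlane );
scene.add( camera ); // the camera must be in the scene graph for its children to render
```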
Now, regarding rendering multiple mini “render steps” simultaneously, I think the approach of @donmccurdy is actually what I’ve observed on other projects (deferred renderers, multipass compositing, etc.), as it is the most performant choice. We once tried using multiple views for quad rendering and it worked just fine.
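For reference, a sketch of what drawing the mini steps into one canvas can look like; it assumes the pipeline exposes an array of named render targets (the names, `renderDebugTiles`, and `tileSize` are placeholders):

```js
import * as THREE from 'three';

// A shared full-screen quad used to "blit" any texture into a scissored tile.
const quadScene = new THREE.Scene();
const quadCamera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0, 1 );
const quadMaterial = new THREE.MeshBasicMaterial();
quadScene.add( new THREE.Mesh( new THREE.PlaneGeometry( 2, 2 ), quadMaterial ) );

const fullSize = new THREE.Vector2();

// steps: e.g. [ { name: 'gpgpu', target: gpgpuTarget }, { name: 'matte', target: matteTarget } ]
function renderDebugTiles( steps, tileSize = 160 ) {

	renderer.setScissorTest( true );

	steps.forEach( ( step, i ) => {

		quadMaterial.map = step.target.texture;

		// each step gets its own tile along the bottom edge of the canvas
		renderer.setViewport( i * tileSize, 0, tileSize, tileSize );
		renderer.setScissor( i * tileSize, 0, tileSize, tileSize );
		renderer.render( quadScene, quadCamera );

	} );

	// restore full-canvas state for the next main pass
	renderer.setScissorTest( false );
	renderer.getSize( fullSize );
	renderer.setViewport( 0, 0, fullSize.x, fullSize.y );

}
```

Call it at the end of the render loop, after the main pass; since it restores the full-canvas viewport before returning, the next main pass is unaffected.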
Thanks, this seems to be the conventional wisdom, despite it being counterintuitive that this is the more performant route. In my case I have thousands of GPGPU particles which I’m moving around and manipulating in multiple steps, then adding additional textured geometry planes (which also move with the scene controls); logically that seems like it would slow things down more than dumping out an updated 2D image to another DOM element. But you seem to be right: the browser bottlenecks on transferring bitmaps in real time with the read/writes. I lose about 5% performance with the ‘inline scene’ method, but about 25% when writing to secondary 2D canvases.