I have an application with a requirement for multiple graphics pipelines (e.g., scene+camera+renderer+canvas). One byproduct of this requirement is that it becomes necessary, as part of the workflow, to adjust the rendering target (e.g., the canvas) on the fly. It appears this cannot be done once a renderer has been instantiated? Looking at the WebGLRenderer model, the canvas is primarily used to resolve a GL context from which most internal references are derived. I can see three solutions to this case:
1. Adjust internal calls to rederive the GL context references (initGLContext() would need to be reinvoked) once a new domElement is assigned
2. Possibly extend WebGLRenderer to provide more consistent state management/updates upon target reassignment (likely accessor-based, though that’s not entirely consistent with the THREE.js pattern)
3. Try to capture the renderer configuration (it’s theoretically stateless behavior, but the references are obviously an issue); there’s no easy way to ‘extract’ parameters from instantiation without modifying it to serialize/cache them, at which point the renderer could be reinitialized, though possibly at the expense of losing any external references
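To illustrate the third option, something like this minimal sketch could capture and replay the configuration. `makeRendererFactory` and the injected `createRenderer` callback are assumptions for illustration, not three.js APIs; in a browser the callback would wrap `new THREE.WebGLRenderer(params)`:

```javascript
// Hypothetical helper: snapshot the constructor parameters at creation time
// so the renderer can be reinstantiated against a new canvas later.
// `createRenderer` is an injected factory, not a real three.js API.
function makeRendererFactory(baseParams, createRenderer) {
  const cached = { ...baseParams }; // serialized configuration snapshot
  return function reinstantiate(canvas) {
    // Any external references to the old renderer are lost, as noted above.
    return createRenderer({ ...cached, canvas });
  };
}
```

The downside the option already names still applies: every object holding the old renderer has to be handed the new instance.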
I’m comfortable developing MRs for any of the above (to the degree necessary; #2 wouldn’t involve much), but it’s not clear to me which approach would be most consistent with the THREE.js “pattern” and the manner in which the community expects the graphics pipeline to behave.
If your sole purpose is assigning a different canvas, then I suggest rendering to an intermediate canvas first, then drawing that into your (swappable) final canvas. Basically a front/back-buffer principle. This approach gives some additional nifty side effects, such as render scaling, or canvas filters for offloaded post-processing effects. The 2D canvas API is fast enough these days that you shouldn’t notice a negative performance impact.
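A minimal sketch of that buffering idea, with the per-frame blit shown in comments since it is browser-only; `bufferSize` is a hypothetical helper for the render-scaling side effect, not part of any library:

```javascript
// Hypothetical render-scaling helper: size the offscreen (back) buffer
// relative to the visible (front) canvas.
function bufferSize(displayW, displayH, scale) {
  return {
    w: Math.max(1, Math.round(displayW * scale)),
    h: Math.max(1, Math.round(displayH * scale)),
  };
}

// Per-frame blit (browser-only; renderer.domElement is the intermediate
// canvas the WebGLRenderer draws into):
//   renderer.render(scene, camera);
//   frontCtx.drawImage(renderer.domElement,
//                      0, 0, frontCanvas.width, frontCanvas.height);
// Swapping targets then just means grabbing a 2D context on the new front
// canvas; the renderer itself never changes.
```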
Honestly, having dug into the existing implementation more, I’m not sure I’ll have the choices outlined by either of us. I had previously been under the impression that rendering was (aside from configuration properties) largely a stateless activity (w.r.t. the other pipeline models), but clearly that’s not the case.
Copying image data remains a sticky proposition, though, because it’s difficult to assert what else might already be dependent on that DOM element (controls, for example: OrbitControls binds to its events). Since the canvas is resolved upon instantiation, I think the best approach going forward is probably just to bind rendering to the UI model so they share the same scope. Multiple UI elements will each have their own canvas and renderer (it really would be nice to tease those apart, oh well), but they could still rasterize the same scene from the same (or slightly different) cameras.
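That “renderer per UI element, shared scene” arrangement might be sketched as below. `createPipelines` and the injected `makeRenderer` factory are assumptions of mine; in a browser the factory would wrap `new THREE.WebGLRenderer({ canvas })`:

```javascript
// Hypothetical: one shared scene, one renderer+canvas per UI element.
function createPipelines(scene, canvases, makeRenderer) {
  return canvases.map((canvas) => ({
    scene,                        // the same scene object in every pipeline
    renderer: makeRenderer(canvas),
    render(camera) {              // each element may use its own camera
      this.renderer.render(this.scene, camera);
    },
  }));
}
```

Nothing here removes the renderer/canvas coupling; it just scopes each coupled pair to its owning UI element.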
The other approach that occurred to me was to arrange a separate WebGLRenderTarget representation that could update the canvas content, but that class is largely meant to go the “other way” (i.e., rasterize into texture data) and not necessarily to provide access (including read/copy) to the image contents themselves. Nor does drawImage() play particularly nicely once the context has been attached to a renderer anyway.
If I were to submit an MR, it would probably look something like exposing the context lost/restored handlers so you could use them to reconstruct references from a “new” (swapped) context.
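By way of illustration, those hooks could be surfaced along these lines. The event names are the standard `webglcontextlost`/`webglcontextrestored` DOM events; `attachContextHooks` and the `rebuild` callback are assumptions, not an existing three.js API:

```javascript
// Hypothetical wrapper: surface the context lifecycle so callers can rebuild
// GL-derived references against a "new" (swapped or restored) context.
function attachContextHooks(canvas, rebuild) {
  // Per the WebGL spec, preventDefault() on the lost event signals that
  // restoration is wanted.
  canvas.addEventListener('webglcontextlost', (e) => e.preventDefault());
  canvas.addEventListener('webglcontextrestored', () => rebuild());
}
```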
Thanks for the ideas, though.