Use one renderer for two canvases

Hi,

I have one renderer that renders to one canvas, which shows my model. Is it possible to have another canvas and have the renderer render once to that canvas and then go back to rendering to the original canvas?

The idea is to take a ‘screenshot’ of the viewer with changed camera settings, without showing those changed settings in the original viewer. We want to do this by adding a second canvas, having the original renderer render to it with the new camera settings, and calling toDataURL on that canvas (so we get a URL for the image), while the original canvas stays unchanged (still the original camera position).


A canvas is bound to its GL context, so the renderer cannot draw into two canvases directly. As you suggested though, you can copy the data out of one canvas and display it in another. You might also consider taking the single-canvas approach shown in these examples:

Sorry, not exactly what I mean. We want to render our data to a second offscreen canvas while not rendering to the original canvas. An important note is that we don’t have to render to both at the same time, just one at a time.

i might be wrong but i think you can use your one canvas to render with custom settings, get the snapshot, then render again right afterwards in the same frame with your normal settings, and the snapshotting won’t be visible to the end user.
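A minimal sketch of that idea, assuming a three.js `renderer`, `scene`, the normal `camera`, and a separate `snapshotCamera` holding the changed settings (the names are made up for illustration):

```js
function takeSnapshot(renderer, scene, camera, snapshotCamera) {
  // Render the frame we want to capture.
  renderer.render(scene, snapshotCamera);

  // Read the canvas right away; reading synchronously after render
  // should work even without preserveDrawingBuffer, since the buffer
  // isn't cleared until the browser composites the page.
  const dataUrl = renderer.domElement.toDataURL('image/png');

  // Restore the normal view in the same frame so the user never sees the jump.
  renderer.render(scene, camera);

  return dataUrl;
}
```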

This works! But what happens if the user has low FPS? Will the user see the transition between the different frames?

i don’t think so, or at least i would suggest you try first. rendering it here or there, eventually you must render anyway, and using an offscreen canvas (imo) won’t “readily” make things faster. i could imagine two webgl contexts operating on doubled assets and materials would create overhead compared to a single context where all assets and materials are known and precompiled, but this is just an assumption.

if you’ll be rendering and taking snapshots every frame at 60 FPS, there is certainly a cost there. The snapshot is not free. But occasional snapshots should be OK, and I agree that’s worth trying.


If you decide to go this route, I highly recommend rendering to an (offscreen) canvas. Once a frame has been rendered, you can simply use the 2D context of the visible canvas to draw the output of the offscreen canvas. This operation is really fast and shouldn’t add much overhead.

You can even throw in some “post-processing” using canvas filters.
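A rough sketch of this setup, assuming `scene` and `camera` already exist and the visible canvas has the id `view` (both assumptions for illustration):

```js
// The renderer draws into a canvas that is never added to the DOM.
const offscreen = document.createElement('canvas');
const renderer = new THREE.WebGLRenderer({ canvas: offscreen });
renderer.setSize(800, 600, false);

// The visible canvas only ever uses a 2D context.
const viewCanvas = document.getElementById('view');
viewCanvas.width = 800;
viewCanvas.height = 600;
const ctx = viewCanvas.getContext('2d');

function frame() {
  renderer.render(scene, camera);     // render offscreen
  // ctx.filter = 'grayscale(100%)';  // optional "post-processing" via canvas filters
  ctx.drawImage(offscreen, 0, 0);     // blit the result to the visible canvas
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```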


What’s stopping you from using WebGLRenderTarget as your “offscreen canvas” (which it kind of is)?
You can render to it and read data from it via readRenderTargetPixels.

You can have as many render targets as you need, to store multiple “screenshots”, or organize them in an atlas and pull them across the CPU/GPU memory barrier less frequently.
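A short sketch of that approach; the target size and the `snapshotCamera` are assumptions:

```js
const width = 1024, height = 1024;
const target = new THREE.WebGLRenderTarget(width, height);

// Render the snapshot into the render target instead of the canvas.
renderer.setRenderTarget(target);
renderer.render(scene, snapshotCamera);
renderer.setRenderTarget(null);   // back to the default framebuffer

// Read the pixels back: RGBA, one byte per channel.
const pixels = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(target, 0, 0, width, height, pixels);
```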


@drcmda’s approach of changing the renderer settings (camera position, size, pixel ratio, …) between frames works, but only under the assumption that the renderer is running at 60 FPS. Sadly we’re never guaranteed that it is - the user might have a bunch of tabs open, be running other software in the background, or the scene might be very VRAM-heavy. All of this can cause the FPS to drop.
At, for instance, 20 FPS you will definitely see the camera jump between frames while it takes the snapshot.

Temporarily rendering to an offscreen canvas combined with render-on-demand circumvents this in part, as that one frame with the different camera position will then not be shown in the visible canvas and the user will be none the wiser.
As a caveat though: if the visible canvas has e.g. autoRotate enabled, you will see the animation skip for a single frame. Again not an issue at 60 FPS, but very much so at 20.

@tfoller’s approach seems to be the right one. I think you would still have to convert the output of readRenderTargetPixels to actual bitmap data (and subsequently to base64 if you want a proper image), which may or may not be slower, but it won’t affect the output of the original canvas.
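For illustration, one way that conversion could look, using a temporary 2D canvas (a sketch; `pixels`, `width`, and `height` are assumed to come from a readRenderTargetPixels call):

```js
function pixelsToDataURL(pixels, width, height) {
  // WebGL reads rows bottom-up, so flip them for a correctly oriented image.
  const flipped = new Uint8ClampedArray(pixels.length);
  const rowBytes = width * 4;
  for (let y = 0; y < height; y++) {
    flipped.set(
      pixels.subarray(y * rowBytes, (y + 1) * rowBytes),
      (height - 1 - y) * rowBytes
    );
  }

  // Draw the pixel data into a temporary canvas and export it as base64.
  const canvas = document.createElement('canvas');
  canvas.width = width;
  canvas.height = height;
  canvas.getContext('2d').putImageData(new ImageData(flipped, width, height), 0, 0);
  return canvas.toDataURL('image/png');
}
```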

Keep in mind that the maximum base64 string size differs per browser and even per device. So while all this might work internally, you might run into trouble if you decide to make a snapshot at 8K resolution with a 3x pixel ratio. As far as I know there’s no way of polling the client’s limitation beforehand.

so you want to snapshot at runtime? i assumed you only wanted to take a snapshot once, store it as an image, and that’s all.

if this is supposed to happen at runtime, then i don’t know why you’d want two canvases and toDataURL. you can indeed use a WebGLRenderTarget; you do not have to read out the pixels because that target can be projected and act like an image.

i still think this must be faster than an offscreen canvas because you are re-using geometries and materials; switching stuff on the gpu is expensive. plus, you can decrease the render target’s resolution.
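For illustration, “projecting” the target could look something like this sketch, where the render target’s texture is used directly as a material map (the names are assumptions):

```js
const snapshotTarget = new THREE.WebGLRenderTarget(512, 512);

// Render the snapshot view into the target (before the preview mesh is added,
// to avoid a feedback loop of the target sampling itself).
renderer.setRenderTarget(snapshotTarget);
renderer.render(scene, snapshotCamera);
renderer.setRenderTarget(null);

// The target's texture can be used like any other texture, no CPU readback needed.
const previewMaterial = new THREE.MeshBasicMaterial({ map: snapshotTarget.texture });
const preview = new THREE.Mesh(new THREE.PlaneGeometry(1, 1), previewMaterial);
scene.add(preview);
```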

this example has a scene that is then shown in three extra settings with different cameras and view coordinates:

I would say your assumption is correct; it’s about storing a single frame as an image, not necessarily to show onscreen but, for example, to send to a backend.
However, based on the camera settings requirement, it would not be a snapshot of the current state, but rather one of a predetermined state.

Say the user is allowed to rotate the camera and change the color of the loaded model. Creating the ‘snapshot’ would then result in the changed color but without the new camera position.

Easiest solution would be to store the first frame initially and just call that when needed, though any other changes to the model would then not be reflected.

A pen to illustrate. Change the color of the cube and move the cam around. Clicking the button will show an image of the original cam position but with the new colors.

  1. “move the cam” approach. Edit: I changed this to having a second camera with a fixed position:
    https://codepen.io/emkajeej/pen/abRPjYx

  2. WebGLRenderTarget approach:
    https://codepen.io/emkajeej/pen/WNaPrJL

The performance hit seems to be about the same. Bonus of Approach 2 is that on slower FPS, the user won’t see the camera jump positions between frames.

Interesting finds:

  • Using a very high resolution (e.g. 6000) results in a cut-off image in approach 1, but works in approach 2. It also takes quite a while to complete, but that’s to be expected.
  • Using a background with an opacity other than 0 or 1 results in an incorrect background color in approach 2, but works correctly in approach 1.
    • Try a background of #ffffff and an opacity of 0.5. readRenderTargetPixels returns grey (188,188,188,128) rather than (255,255,255,128). Not sure if this is related to THREE.

Conclusion for now is that both approaches have similar performance. Approach 1 is probably easier to implement. Just realise that there are limitations to the saved image size.

For now I’ll go for approach 1 since it’s easy to implement and I can work around the limitations. Thanks all for the responses and insights!

Using one renderer to draw to two canvases costs more than having one renderer per canvas.