Performance considerations for partial scene render onto canvas

I have a scene where a couple of objects are continuously animated, while the rest of the scene remains (mostly) static. It would seem that a good strategy to improve performance is to animate the static parts of the scene on demand, while continuously rendering the small group of animated objects.

I have a couple of questions:

  1. Is the WebGLRenderer intelligent enough to only rerender parts of the scenes that have been changed? Or is there no such optimization of the sort, and it is a straightforward renderer that rerenders the canvas completely on each render pass? There is probably some nuance here — any details of its internals would be appreciated, as it may be possible that the WebGLRenderer is intelligent enough out of the box to not need a specialized render approach.

  2. Is there a recommended approach for this? I know that there are a couple of possibilities, including multiple canvases, and a single canvas but multiple rendering layers. However, I don’t know how each one performs empirically.

Thank you!

Set `visible = false` for the bones in your mesh:

```js
if (child.isBone) { child.visible = false; }
```

It gives roughly a 10% performance improvement.
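For context, that one-liner would normally run inside a scene-graph traversal. Below is a minimal sketch of the pattern; the `traverse` helper and the object graph are mocks standing in for a real `THREE.SkinnedMesh` (where you would call `mesh.traverse(...)` on the loaded model), so the snippet runs without three.js:

```javascript
// Mock of a three.js-style object graph: bones carry an `isBone` flag.
// In real three.js you would call mesh.traverse(...) on a loaded model.
function traverse(node, callback) {
  callback(node);
  for (const child of node.children || []) traverse(child, callback);
}

const mesh = {
  name: 'skinnedMesh',
  children: [
    {
      name: 'rootBone', isBone: true, visible: true,
      children: [
        { name: 'spineBone', isBone: true, visible: true, children: [] },
      ],
    },
  ],
};

// Hide every bone so the renderer skips them.
traverse(mesh, (child) => {
  if (child.isBone) child.visible = false;
});
```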

This is not what is being asked and not helpful at all.

As for your question, I’ve done some experimenting with this recently and found that detecting which parts of the output texture (canvas) have changed is usually heavier than simply rerendering the entire thing — in my case the camera moves freely, which makes change detection especially costly.


If you’re only dealing with a static camera, it might be beneficial to render the “background” scene once onto one canvas, render the moving objects onto a second canvas, and let the second canvas overlap the first. Just remember to set `alpha: true` in the overlay renderer’s options so the background shows through.
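A rough sketch of that on-demand strategy. The counters below are mocks standing in for the two renderers; in real three.js these would be two `WebGLRenderer` instances on stacked canvases, the top one created with `{ alpha: true }`, and the `backgroundDirty` flag is a hypothetical name for whatever change-tracking you use:

```javascript
// Mock of the on-demand background + per-frame overlay pattern.
let backgroundRenders = 0;
let overlayRenders = 0;
let backgroundDirty = true; // set this whenever the static scene changes

function renderFrame() {
  if (backgroundDirty) {
    backgroundRenders += 1; // stands in for renderer.render(staticScene, camera)
    backgroundDirty = false;
  }
  overlayRenders += 1;      // stands in for overlayRenderer.render(dynamicScene, camera)
}

for (let frame = 0; frame < 60; frame++) renderFrame();
backgroundDirty = true;     // e.g. something in the static scene moved
renderFrame();
```

The background is drawn on the first frame and again only after it is flagged dirty, while the overlay redraws every frame.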

This comes with some obvious caveats, though. For example, if anything in your “background” scene uses reflections, they won’t show the dynamic parts, nor would any reflections on the dynamic parts show anything that would otherwise be visible in the background scene.

So, it completely depends on your use case. The main problem remains calculating reflections. If everything were just a plain color/texture without reflections, you could chop the viewport into smaller chunks, but depending on how many objects are in your scene, detecting which objects are visible from which frustum will also become (very) demanding eventually.

Thanks @Harold. I was thinking about this more for my use case — while animation-wise one layer is static and the other is dynamic, there is occlusion between the two layers. I don’t know whether it’s possible to handle occlusion natively in three.js without adding both layers to the same scene, although it might also work by not clearing the Z-buffer between render passes.
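On the Z-buffer idea, here is a sketch of why it can work, assuming standard depth-test semantics. The one-pixel framebuffer below is a mock; in real three.js the equivalent would be setting `renderer.autoClear = false`, calling `renderer.clear()` once per frame, then rendering the static scene followed by the dynamic scene, so the second pass is depth-tested against the depth values the first pass wrote:

```javascript
// Mock framebuffer with a single pixel: a color and a depth value.
const fb = { color: 'none', depth: Infinity };

function renderObject(obj) {
  // Standard depth test: nearer fragments win, farther ones are occluded.
  if (obj.depth < fb.depth) {
    fb.depth = obj.depth;
    fb.color = obj.color;
  }
}

// Pass 1: static scene (a wall at depth 5) fills the depth buffer.
renderObject({ color: 'wall', depth: 5 });
// Pass 2: dynamic scene, WITHOUT clearing the depth buffer in between.
renderObject({ color: 'behindWall', depth: 9 }); // occluded by the wall
renderObject({ color: 'inFront', depth: 2 });    // passes the depth test
```

Because the depth buffer survives between the two passes, dynamic objects are correctly hidden behind static geometry and vice versa, even though the two layers live in separate scenes.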