Nesting scenes is possible. What is it good for?

It’s possible to nest a scene in a scene, because a Scene is an Object3D, which means it can be added as a child node of another scene.

For example:

What are the practical use cases for adding one scene to another scene?

Some formats can contain entire scenes (for example the three.js JSON format), so when loading one it can be simpler to just add it to the current scene, especially if you are just testing a lot of files rather than building a production ready app.

Other than that, I’m not sure. Does it have to be good for anything though?


Thanks, this is interesting. I’d like to try this setup for my case.

I have a scene with object + lights and 2 cameras displaying the scene simultaneously (with multi view ports renderer).
One camera (“preview camera” in my terms) should ignore panning and zooming. Right now I’m rotating the object (object.rotation.y = …), but zooming (camera.zoom = …) and panning (camera.position.y = …) only one camera (“design camera” in my terms). There are some reflection issues with this setup that I don’t like.
I think I can try nested scenes here. The “design camera” will be used for the outer scene, the “preview camera” for the inner scene. I will rotate the object (as before) and use zoom on one camera (as before), but Y-pan the inner scene. This should do the trick.

Interesting point.

It doesn’t have to be, I’m just curious if it is.

In my own project (I’m working on WebGL rendering with Three.js in the threejs branch), Scenes have a size, and that size determines how big the scene is in CSS pixels (i.e. the DOM size on the screen).

After I figure out how to map Three space to CSS3D DOM space, I’d like to be able to nest scenes inside scenes: a scene inside a scene would be a “PlaneGeometry” in Three terms, and the inner scene would be drawn inside that plane (f.e. like a TV display inside a 3D world, where everything on the TV is an inner scene). This is unlike how Three currently treats a Scene, which is simply an Object3D; rendering it in a plane takes extra effort.

So I’m just pondering ideas on how to treat nested scenes in my project, thinking about making a nested scene default to being rendered in a plane. :slight_smile:

@fifonik That’s interesting! So, f.e., we could have a scene that’s a whole game, then if we want to have selectable items that can be previewed alone in a separate viewport when selected, we can just render that selected item as its own scene with another camera. But this can also be done by placing a camera somewhere in the scene (though then we get the whole rest of the scene in the background).

Interesting to note that in Pixi we can render using any object as the root, it doesn’t have to be a specific sort of Scene node.

@trusktr why don’t you use a Group instead of a scene? Then you can have one default scene (a single Scene) and add as many sub-scenes in the form of Groups as you like.

I’ve created a demo with nested scenes that reflects my case:

Two cameras and one renderer with 2 views.
One object.
The object can be rotated in both views (LMB and move horizontally), but zoomed (wheel) and Y-panned in the left view only (LMB and move vertically).
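For context, a side-by-side two-view setup like this is usually done with scissored viewports on a single renderer. A rough sketch (pure bookkeeping; the renderer, both scenes, and both cameras are assumed to exist already, and the names are mine):

```javascript
// Draw two views side by side with one WebGLRenderer.
function renderViews(renderer, leftScene, designCamera, rightScene, previewCamera) {
  const w = renderer.domElement.width / 2;
  const h = renderer.domElement.height;

  renderer.setScissorTest(true);

  // Left view: "design" camera (zoom + Y-pan applied here)
  renderer.setViewport(0, 0, w, h);
  renderer.setScissor(0, 0, w, h);
  renderer.render(leftScene, designCamera);

  // Right view: "preview" camera (ignores pan/zoom)
  renderer.setViewport(w, 0, w, h);
  renderer.setScissor(w, 0, w, h);
  renderer.render(rightScene, previewCamera);

  renderer.setScissorTest(false);
}
```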

There are some issues, however.

  1. Light helpers do not work as expected (I expected them to be static in the right view).
  2. Something is not right with reflections in the left view. When I Y-pan, the reflections should change position on the spheres.

It would be nice if someone could point out what I’m doing wrong.


@fifonik using Layers instead here might give better lighting results.


Do you mean that I should use different lights for different cameras?

P.S. I’m using layers to display on one camera wireframed objects and hide them on another camera (in the demo as well).

Perhaps, although objects can belong to multiple layers so this shouldn’t be necessary.
On the other hand since there are problems with the lighting this might be a way of fixing it - and there may be bugs here, since what you are doing is not common so it probably hasn’t been tested as much as other techniques.

I meant that you can probably do this using just one scene with subgroups belonging to different layers.

Although looking at it again this might cause problems with the controls since you are controlling each scene differently.


The layers allow for filtered lighting, some lights will affect some groups/objects, and the renderer will completely ignore the light for others.

One use case I can imagine for nesting scenes is to render subsets of the main scene:

parentScene.add(dayScene, nightScene, debugScene)

Either you render it all or only one of the sub-scenes (maybe using different cameras too?):

if (bla)
  render(nightScene, camera)
else
  render(dayScene, camera)

Do you have to duplicate trees, so that f.e. nightScene just has different lighting than dayScene? So if you have one object in the dayScene, you also need to clone it into the nightScene? I think that could lead to unnecessary duplication of memory usage.

I wasn’t thinking about lighting (light and shadows could change in other ways). It’s a case where you have subsets of meshes that you need to render at different moments. They shouldn’t need to be cloned (I think an Object3D is added by reference, but I haven’t actually tried).