Multiple scenes slow down renderer

I have a bunch of scenes that I preload with objects on startup and then store in an array, e.g.:

const sceneArray = [];

for (let index = 0; index < 100; index++) {
  sceneArray.push(new THREE.Scene());
  // add some objects to the scene
  sceneArray[index].add(/* … */);
}

I then switch scenes into the renderer so I can swap them quickly without transitions.

renderScene.scene = sceneArray[0]
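Roughly, the loop looks like this (simplified sketch; the renderer and camera are assumed to be set up elsewhere):

let currentScene = sceneArray[0];

function animate() {
  requestAnimationFrame(animate);
  renderer.render(currentScene, camera);
}
animate();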

I've noticed that the more scenes I add, the slower my currently rendered scene gets. I've found that setting visible = false on all the scenes speeds it up a bit, but I still see a huge performance loss.

My assumption was that the only objects in play on the GPU are the ones IN the currently playing scene.

Any ideas on what might be happening?

Some example code here: https://github.com/leonyuhanov/ThreeVJ (ThreeVJ: realtime multi-scene, interactive VJ suite based on three.js)

Objects in non-rendered scenes are not automatically “free”.
Even if a scene isn’t rendered:

  • All geometries

  • All materials

  • All textures

…are still uploaded to the GPU.

100 scenes = potentially 100× GPU memory usage

What’s happening is that even if only one scene is being rendered, creating a lot of scenes and objects upfront still uses memory and CPU. Three.js doesn’t automatically freeze GPU data for unused scenes, so geometries, textures, and materials for all the other scenes are still in memory and add overhead when switching scenes.

Setting visible = false helps a bit because it skips rendering objects, but all the objects still exist, and things like matrix updates, bounding spheres, and internal bookkeeping still run each frame. That’s why performance drops as you preload more scenes.

A better approach is to only keep the active scene in memory and lazily create or load other scenes when needed. If preloading is important, you can dispose of geometries, materials, or textures from inactive scenes and reload them later. Another option is to keep everything in a single scene and just toggle groups on or off instead of making 100 separate scenes. Avoid updating matrices on objects in inactive scenes if possible.
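A minimal disposal sketch for an inactive scene could look like this (only the base color texture in material.map is handled here; other texture slots such as normalMap would need the same treatment):

function disposeScene(scene) {
  scene.traverse(obj => {
    if (obj.geometry) obj.geometry.dispose(); // frees vertex buffers
    if (obj.material) {
      const materials = Array.isArray(obj.material) ? obj.material : [obj.material];
      for (const m of materials) {
        if (m.map) m.map.dispose(); // frees the texture
        m.dispose();                // frees the compiled shader program
      }
    }
  });
}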

Basically, the slowdown isn’t about what’s being drawn, it’s about the overhead of having so many objects and scenes in memory at once.

“potentially 100x gpu memory usage”

Is this documented anywhere? Is there any documentation that 100% states this?

I suspect you are seeing the matrix recalculation hit for each of them.
If the scene is mostly static, you can disable matrix recalculation like this:

scene.updateMatrixWorld(true); // force-update all world matrices once
scene.traverse(e => e.matrixAutoUpdate = e.matrixWorldAutoUpdate = false);

This will prevent the matrices from being recomputed, but from then on you won't be able to just change .position/.rotation/.scale without also calling .updateMatrix() and .updateMatrixWorld().
(Or you can disable the matrix updates only on things you know aren't going to be moved dynamically.)
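For example, moving a frozen object later means updating its matrices by hand:

mesh.position.x += 1;
mesh.updateMatrix();          // recompute the local matrix
mesh.updateMatrixWorld(true); // push the change into the world matrices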


The only scene in play is the current scene; all the others are not doing anything. Nothing in them is being changed, updated, or read whilst the main scene plays.

Are you saying I need to do this for the currently playing scene?

I'm not 100% certain that all the scenes ARE getting loaded into the GPU. Looking at the task managers in Windows, specifically the performance monitor (not sure how accurate that is), the GPU's memory is barely being used while 1 scene is in play. I can definitely see a huge chunk of RAM (6-7 GB) being used by the whole app, but that's fine, I have plenty.

The GPU never goes over 30% 3D utilization and sits at about 10% memory utilization. I have tested this with just 1 scene vs having 100 scenes with only 1 playing.

@wow_elec_tron VRAM available in the browser is not the same as the hardware RAM available on your machine; browsers limit the amount of VRAM they can access, typically 4 GB on mobile and 8 GB on desktop and laptop… I'm curious what you see reported if you open this pen in Chrome…

https://codepen.io/editor/forerunrun/pen/019d2e97-6dc4-715c-9f7e-6c673f66c000?console=true&file=%2Findex.html&orientation=top&panel=blocks&show=preview

Having 6-7 GB of resources stored in memory is enormous for a web app (essentially entering browser-crashing territory on lower-spec devices), especially if 99% of it is not being rendered at all (100 scenes, 1 being rendered?). You'll want to create a system that unloads and disposes the current scene before running a function that initializes and renders the next one dynamically.

So first, this isn't a web app; it's a visual performance tool that's designed to run on a dedicated desktop/laptop, but I get what you're saying :slight_smile:

Chrome's task manager reports that the GPU memory sits at about 600 MB when I load about 100 identical scenes (with only 1 playing), and the same amount of usage when I preload just 1 scene. Using an Nvidia command-line utility I can see the app uses 8% of its total memory (8 GB GPU), which is roughly 640 MB, in both tests.

Chrome's task manager and the Windows task manager show a huge RAM usage difference (not VRAM) between 1 preloaded scene and 100 preloaded scenes (which is fine, as it's not in the VRAM), so I'm still not convinced that ALL the scenes are being loaded into VRAM.

Oh, and thanks for that link. On my Android it said I have 8 GB, and on my laptop with an Nvidia GPU it also said 8 GB.

Maybe your convictions have the better of you here, but these two statements directly conflict with each other…

I see a difference in RAM usage, not VRAM usage. I'll get some profile screenshots and post them here so there's no confusion. Just repeating that the GPU's memory usage sits at around 600 MB no matter how many scenes I preload (if a scene has more complex objects, or more of them, obviously this goes up a little).

So this is Chrome's performance analyser snapshot of about 2 seconds with 100 scenes (with only 1 playing). You can see it plays fine, then there is stuttering (this only happens when there are lots of preloaded scenes).

With 1 scene only:

[Screenshot: Chrome Task Manager]
[Screenshot: Windows Task Manager]
[Screenshot: nvidia dmon (mem column is VRAM usage)]

With 100 preloaded scenes and 1 playing:

[Screenshot: Chrome Task Manager]
[Screenshot: Windows Task Manager]
[Screenshot: nvidia dmon (mem column is VRAM usage)]

Yes, it looks like overloading your browser's VRAM limit leads to resources being sent to system RAM to be stored (ready to be swapped) once that VRAM limit is hit… here's what Gemini has to say…

1. The Setup: Resource Allocation vs. Scene Graph

In Three.js, a Scene is just a “Scene Graph”—a hierarchical list of objects. Creating 100 scenes in JavaScript (RAM) is relatively cheap. However, the moment those scenes are passed to the WebGLRenderer, Three.js performs Resource Allocation.

  • Geometries and Textures: These are large data blobs. To render them, the browser must “upload” them from System RAM to VRAM (Video RAM).

  • Buffer Objects: On the GPU, these become Vertex Buffer Objects (VBOs) and Texture Units.

  • The Trap: The user in the thread is “preloading” all 100 scenes. Even if they only call renderer.render() on one scene, the renderer has already initialized and cached the GPU buffers for all the objects in all 100 scenes to ensure they are ready for instant switching.

2. The Bottleneck: VRAM Over-subscription

GPU memory (VRAM) is a finite physical resource. Unlike System RAM, which can easily “swap” to a hard drive (Virtual Memory) with a manageable hit to performance, VRAM behaves differently in a browser context:

  • The Memory Ceiling: When the 100 scenes exceed the available VRAM, the GPU driver and Chrome’s Gallium/Angle layer must start Memory Paging.

  • Bus Latency: Because the VRAM is full, the GPU has to constantly delete some data to make room for the current scene’s data, then fetch that data back from the System RAM over the PCIe Bus. The PCIe bus is significantly slower than the internal GPU memory bandwidth.

  • Stuttering (Jank): This constant “swapping” of buffers between RAM and VRAM creates massive frame-time spikes.

3. The “Visibility” Misconception

The user noted that setting object.visible = false didn’t solve the problem. Technically, this is because:

  • Visibility only tells the renderer to skip the Draw Call (the command to actually paint pixels).

  • Texture Residency remains unchanged. The heavy textures and vertex data are still “Resident” in VRAM. As long as the objects exist in an active WebGL context, the GPU keeps that memory reserved.

4. The Lifecycle Solution: Explicit Disposal

In standard JavaScript, the Garbage Collector (GC) automatically clears RAM when a variable is no longer used. However, the GC cannot see into the GPU. To fix the slowdown described in the thread, the developer must manually manage the GPU Lifecycle using the .dispose() method:

  1. geometry.dispose(): Frees the Vertex Buffer Objects from VRAM.

  2. texture.dispose(): Frees the Image Bitmaps from VRAM (usually the biggest memory savers).

  3. material.dispose(): Frees the compiled Shader Programs.

Summary of the Conflict

The “slowdown” isn’t caused by the CPU struggling to calculate the scenes; it is caused by VRAM Pressure. By keeping 100 scenes “live,” the user forced Chrome into a state of constant memory swapping, where the GPU spends more time moving data across the motherboard than it does actually rendering the 3D images.

That makes sense, except that it all relies on part 1.

They are never passed into the renderer. Only the current scene is.

I want what you replied with to be valid, but it's an AI summary with 0 references to prove it's valid. Some of it definitely describes what I'm seeing, but again it relies on the 1st part, which never happens.

Appreciate your input.

Hm, if you’ve created 100 scenes, never rendered 99 of them, and rendering the remaining scene is slower than if the other 99 didn’t exist, then something is wrong, and that’s probably outside the scope of guesswork… If that’s the case then I would take a hard look at exactly what’s in the Chrome performance snapshot during FPS stalls, and perhaps the total vertex counts and texture resolution in those scenes. three.js does not upload scene resources to the GPU until the first time the scene is rendered, unless you explicitly instruct it to do so.
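For reference, that explicit instruction is renderer.compile(); a minimal sketch, assuming the sceneArray and camera from the earlier posts (nextIndex is a hypothetical index):

// Uploads geometry/textures and compiles shader programs for a scene
// without drawing it. Only worth calling on a scene you intend to show soon.
renderer.compile(sceneArray[nextIndex], camera);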

I’m (very) skeptical of the LLM’s “GPU has to constantly delete some data to make room for the current scene’s data, then fetch that data back from the System RAM over the PCIe Bus” claim… but regardless, I wouldn’t fixate on memory unless you are seeing explicit “Major GC” items in the performance snapshot, or something similar.


I'm also skeptical of the LLM response; it makes little sense. I've been looking through the Chrome performance traces and I just can't work out what's happening.

Thanks mate

You need to zoom in on that spike region of the graph and see the names of the functions taking the time.

(And possibly run an unobfuscated build, because it looks like the function names might be stripped.)

The whitespace in between the slices in your graph indicates that, apart from the spikes, you're running well within CPU budget, but your GPU usage is high (the solid green bar at the bottom).

But the spike is something happening on the CPU.

For instance, here's a region zoomed in on one of my randomly selected apps:

[Screenshot: zoomed-in performance trace]


Thank you, I'll look into this!!


Yeah, that’s not normal. Unused scenes shouldn’t slow things down.