Summary (TL;DR):
I’m trying to understand when it really makes sense to use THREE.LOD to improve performance in large industrial-style 3D scenes.
Context:
In my application, users can create an industrial plant layout — beams, pipes (L, U, C shapes, reducers, bends, etc.) — each with varying dimensions.
So, every element is a unique geometry, created dynamically using Three.js primitives (e.g., `CylinderGeometry` for pipes).
Users can add hundreds or thousands of such elements and modify them at any time (change length, diameter, etc.).
My goal is to improve performance when the scene grows large.
In which scenarios should LOD be used:
- Pre-created LOD models:
Have multiple versions of each model (e.g., `.glb` or `.obj`) with different polygon counts and assign them as LOD levels via the `THREE.LOD` API.
- Dynamic simplification at runtime:
When a new element is created, generate 2–3 simplified versions of it using `SimplifyModifier`, or, when using primitives, by reducing geometry parameters (e.g., fewer radial segments for a cylinder). Then assign these versions to the LOD system.
Whenever a user modifies the base geometry, the LOD meshes also need to be kept consistent.
Observation:
In my tests, LOD didn’t improve performance — in fact, it slightly worsened it.
From what I’ve learned, Three.js keeps all LOD meshes (for all levels) in GPU memory, even if they aren’t visible.
My Question:
Given this scenario, where many objects are created and modified dynamically, is it practical to use LOD at all?
Or is the `THREE.LOD` approach meant more for predefined, static models (like buildings or large assets), rather than for scenes with thousands of dynamic elements?
I can share more details or test data if needed — I mainly want to know if LOD is conceptually the right fit for this type of use case, or if a different optimization approach would make more sense.