We are using three.js to load 2D/3D models in the BIM domain, such as DXF (converted from DWG) and glTF/GLB. When there are too many entities, the browser crashes. Of course, it also depends on which devices/GPUs we are using.
Now I'm thinking of reducing the memory three.js uses. For one of my DXF files (1M+ entities, 20,000+ draw calls), the log shows:
- totalJSHeapSize: 1630.15M
- usedJSHeapSize: 1564.05M
We already merge entities of the same type and material together. Since we don't use the objects'/materials' uuids at all, can I simply skip generating UUIDs for them? In my tests everything works fine. Do you know of any side effects? Please let me know if you have other ideas. Thanks!
Our example and sample code repo:
Reproduction steps
- Open online sample page
- Upload a dxf file (this one is sort of large, but there are others much larger)
It is more likely to happen on mobile devices.
Do you have the option of optimizing these models offline? Or do they need to be loaded in the browser as-is, and any optimization happens clientside?
Right, merging mainly reduces draw calls. It also reduces memory cost, because each object takes some memory, which becomes a problem when there are too many entities. And true, it prevents frustum culling, so we don't merge everything together.
The impostor idea sounds interesting, I'll consider it. Many thanks!
We actually do have the option to optimize them offline, if there is a good idea… E.g., we could do the merging offline rather than in the browser.
Use the Draco loader to load compressed glTF, and reduce the footprint of large glTF models by taking the buffer geometries from the models, merging them into one mesh, and deduplicating identical materials into a single shared material. This will drastically reduce your model's size in RAM.
See the demo below: a simple small island city with large glTF models compressed into buffer geometries.
To make your project available offline, use a service worker so users or players can keep exploring your three.js project without a connection.
I would suggest not worrying about Draco compression until the model renders efficiently in the browser first. Draco compresses the model over the network, but the whole thing must be decompressed in the browser memory before rendering, so your FPS isn’t going to get any better. The strongly preferable approach would be to reduce the complexity of the model (offline), compress it (offline), and then the loading and rendering will perform much better.
There is no one-size-fits-all method to optimize every possible file, you especially need to think about how much of the cost is draw calls, vertex count, texture memory, etc. But a good starting point for glTF models would be:
npm install --global @gltf-transform/cli
gltf-transform optimize input.glb output.glb --compress draco --texture-compress webp
There are a number of options (see `--help`) if you need to tweak anything.
Thank you, donmccurdy! If gltf-transform can reduce the complexity of the model and its textures, that may be helpful. I'll give it a try later.