I want to render a glb model that is nearly 400MB in three.js. What considerations should I keep in mind for optimal performance?
If you load a large glb file without any specific settings, you may encounter issues with loading speed and performance after loading. The glb file consists of simple geometries (e.g., external panels and internal pipes of a building). The file size is 400MB mainly due to the large number of these geometries.
I do not need detailed geometries, but loading speed and handling performance are important to me. Please let me know what considerations I should keep in mind.
Primarily the fact that a 400 MB file shouldn't go anywhere near the browser. Besides the fact that some devices may run out of memory, requests on slower networks will time out, and the overall user experience will be questionable - if you say that the GLB consists of simple geometries, why not partition it beforehand? It wouldn't even have to be any smart splitting - just 3D tiles, loaded sequentially based on distance from the camera.
Which part of the GLB takes up the 400 MB? Geometries alone, unless there are really lots of them, shouldn't be that heavy - splitting geometries from materials and textures may also improve the loading experience (and allow for texture optimisations on the side).
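For illustration, here is a minimal sketch of that distance-ordered, sequential tile loading - assuming the GLB has already been split offline, and with a hypothetical tile manifest (file names and centers made up):

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();

// Hypothetical manifest: one entry per pre-split tile GLB.
const tiles = [
  { url: 'tiles/tile_0.glb', center: new THREE.Vector3(0, 0, 0) },
  { url: 'tiles/tile_1.glb', center: new THREE.Vector3(20, 0, 0) },
  // …
];

function loadTilesSequentially(scene, camera) {
  // Nearest tiles first, so geometry close to the camera appears early.
  const queue = [...tiles].sort(
    (a, b) => a.center.distanceTo(camera.position) - b.center.distanceTo(camera.position)
  );
  // Chain the loads so only one request is in flight at a time.
  return queue.reduce(
    (chain, tile) =>
      chain
        .then(() => loader.loadAsync(tile.url))
        .then((gltf) => scene.add(gltf.scene)),
    Promise.resolve()
  );
}
```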
"When I say detailed geometries are not necessary, I mean, for example, it doesn’t matter if a cylindrical shape has 32 sides or 8 sides. They will all look like long pipe-shaped cylinders.
The geometries inside the glb file are actually made with BufferGeometry, not PrimitiveGeometry. Do you know how to adjust the details of BufferGeometry?
When you mention loading sequentially based on distance, are you referring to the LOD (Level of Detail) feature? For LOD functionality, don't we need several glb files with different levels of detail?
In reality, the glb file is a model of a 5-story building. It contains many objects such as walls, stairs, doors, pipes, ventilation ducts, and so on."
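A note on the LOD question above: THREE.LOD does not require separate glb files - each level is just an Object3D, so levels can be simplified copies generated at runtime or loaded from one file. A minimal sketch, with made-up cylinder dimensions and switch distances:

```js
import * as THREE from 'three';

function makeLodPipe(material) {
  const lod = new THREE.LOD();
  // Near the camera: 32-sided cylinder.
  lod.addLevel(new THREE.Mesh(new THREE.CylinderGeometry(1, 1, 10, 32), material), 0);
  // Beyond 50 units: 8 sides reads the same as 32 at that distance.
  lod.addLevel(new THREE.Mesh(new THREE.CylinderGeometry(1, 1, 10, 8), material), 50);
  return lod; // the renderer switches levels automatically each frame
}
```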
Hopefully that can compress your 400 megs into something much smaller.
Then, in addition to what @mjurczyk alluded to… you may also have performance problems rendering many, many small objects (> 1000), in which case you may need to look into instancing or something… but meshoptimizer/gltfpack by itself can do some amazing things, so it's good to get it into your pipeline early. It can do things like converting multiple identical meshes into instances via the glTF instancing extension (EXT_mesh_gpu_instancing).
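As a rough sketch of that pipeline: pack the file with something like `npx gltfpack -i building.glb -o building.packed.glb -cc` (file names are placeholders; `-cc` enables meshopt compression), then wire up the decoder when loading:

```js
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { MeshoptDecoder } from 'three/examples/jsm/libs/meshopt_decoder.module.js';

const loader = new GLTFLoader();
loader.setMeshoptDecoder(MeshoptDecoder); // required for -c/-cc packed files
loader.load('building.packed.glb', (gltf) => scene.add(gltf.scene)); // scene: your THREE.Scene
```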
Another thing you can try is gltf-transform, which imo is the latest innovation in that space. It has lots of helpful transforms. There's a one-trick-pony CLI command (gltfjsx, which uses gltf-transform under the hood) that runs 10 or so transforms:
npx gltfjsx yourmodel.glb --transform
Try that and see how much it reduces the model. I've seen 100 MB models go down to a few KB with this.
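If you want to stay with gltf-transform directly, its CLI also has an optimize command that bundles most of those transforms - something along these lines (flag names are from memory and may differ between versions):

npx @gltf-transform/cli optimize building.glb building.opt.glb --compress meshopt --texture-compress ktx2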
Most of the time it is 90% un-optimized textures. Please avoid images larger than 2048 px per side, and apply compression. See this thread:
Update: My former .glb file was 570,4 MB with textures. My new .glb file without textures is 1,4 MB. WOW
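If the textures end up KTX2-compressed, note that loading then needs a transcoder on the three.js side - a minimal sketch (paths and file names are examples):

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/examples/jsm/loaders/KTX2Loader.js';

const renderer = new THREE.WebGLRenderer();

const ktx2 = new KTX2Loader()
  .setTranscoderPath('jsm/libs/basis/') // transcoder ships with three.js; adjust path
  .detectSupport(renderer);

const loader = new GLTFLoader().setKTX2Loader(ktx2);
loader.load('building.ktx2.glb', (gltf) => scene.add(gltf.scene)); // scene: your THREE.Scene
```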
If optimizing textures doesn't give you a smaller file, then you are most definitely using 'high-poly' meshes. Keep in mind that the three.js GLTFLoader has practical limits on vertex counts; you can't just expect it to work beyond some boundaries. If that is the case, embrace low-poly modelling (not just converting high-poly to low-poly). Read about it, search on YouTube, etc.
Also, web browsers have their own maximums for loading files. That is the current bottleneck for wasm et al.: beyond ~250 MB, browsers can't allocate that amount of memory and simply crash.
P.S.
Now you are probably thinking that this is somehow going to compromise the quality, right? Think about Google Maps: every visualization challenge has its scale, zoom, and level of detail.
Is there any chance that a 5-story building would be shown in full detail all at once? Enter spatial partitioning systems and occlusion culling techniques.
My original glb file (a 5MB model used for testing) had about 2000 meshes. After transformation, the glb file size was significantly reduced, and the number of meshes decreased to 44.
I need to pick each mesh individually with a raycaster when clicking on the model on screen, so the meshes should not be merged. I just need to reduce the detail of each mesh.
For example, a cylinder with 64 sides, which looks almost perfectly round, should be converted to a cylinder with 8 sides.
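One way to do that without merging anything is to simplify each mesh's BufferGeometry in place. A sketch using the SimplifyModifier from the three.js examples - note it can be slow on big scenes, it discards normals (and may discard UVs), and the 0.75 ratio is just a starting guess to tune:

```js
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js';

const modifier = new SimplifyModifier();

function simplifyScene(root, ratio = 0.75) {
  root.traverse((obj) => {
    if (obj.isMesh) {
      const count = obj.geometry.attributes.position.count;
      // modify() takes the number of vertices to REMOVE, not to keep.
      obj.geometry = modifier.modify(obj.geometry, Math.floor(count * ratio));
      obj.geometry.computeVertexNormals(); // simplification discards normals
    }
  });
}

// usage: simplifyScene(gltf.scene);
```

Alternatively, gltf-transform ships a meshoptimizer-based simplify transform that does the same offline.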
Two things I do to reduce model size from Blender:
Always export with textures set to .jpg, even normal maps - though check that the quality is still acceptable.
Use geometry instances. I often export models with lots of screws, etc. By creating one low-poly screw (or door, etc.) and copying with ALT+D instead of SHIFT+D, Blender will use the same geometry in all instances, so only their orientation and position are unique. As an added bonus, editing one edits them all. It takes time, but it decreases size and increases performance. (The runtime analogue in three.js is shown in the sketch after this post.)
You can of course use compression, and I know I will look at some of the suggestions named above, but be aware that once the model is unpacked on the client side, it may not use less memory than the uncompressed one.
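On the three.js side, the runtime analogue of those linked duplicates is InstancedMesh - a sketch that collapses meshes already sharing one geometry and material into a single draw call (the helper name is made up):

```js
import * as THREE from 'three';

// `meshes`: an array of Meshes that share the same geometry and material.
function collapseToInstances(meshes) {
  const { geometry, material } = meshes[0];
  const instanced = new THREE.InstancedMesh(geometry, material, meshes.length);
  meshes.forEach((mesh, i) => {
    mesh.updateWorldMatrix(true, false); // make sure matrixWorld is current
    instanced.setMatrixAt(i, mesh.matrixWorld);
    mesh.removeFromParent();
  });
  instanced.instanceMatrix.needsUpdate = true;
  return instanced;
}
```

Raycasting an InstancedMesh still tells you which instance was hit via intersection.instanceId, so the per-mesh picking mentioned earlier in the thread keeps working.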