My .glb object is 3 MB. How can I compress or optimize it down to the smallest size? Maybe with an online tool?
You can use Draco, a library for compressing and decompressing 3D data. There are different ways to compress an existing .glb file, for example via gltf-pipeline. More information about this topic right here:
Also note that this type of compression affects only geometry. To reduce the size of textures you’ll need to use image optimizers instead.
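One way to check whether geometry is actually the bulk of your file: a GLB is a 12-byte header followed by length-prefixed chunks (a JSON chunk with the scene description, and a BIN chunk holding geometry buffers and any embedded textures). Here's a minimal Node sketch, assuming a valid glTF 2.0 binary, that splits the byte count into JSON, embedded-texture, and remaining binary data. The function name is mine, not part of any library:

```javascript
// Breaks a .glb's size down into JSON bytes, embedded texture bytes,
// and the rest of the binary chunk (mostly geometry). Embedded images
// live inside the BIN chunk, referenced via bufferViews in the JSON,
// so we sum those bufferViews to separate textures from geometry.
function glbByteBreakdown(arrayBuffer) {
  const view = new DataView(arrayBuffer);
  if (view.getUint32(0, true) !== 0x46546c67) throw new Error('Not a GLB'); // ASCII "glTF"
  const total = view.getUint32(8, true); // total length from the header
  let offset = 12, jsonBytes = 0, binBytes = 0, json = null;
  while (offset < total) {
    const length = view.getUint32(offset, true);
    const type = view.getUint32(offset + 4, true);
    if (type === 0x4e4f534a) { // 'JSON' chunk
      jsonBytes += length;
      json = JSON.parse(new TextDecoder().decode(new Uint8Array(arrayBuffer, offset + 8, length)));
    } else if (type === 0x004e4942) { // 'BIN\0' chunk
      binBytes += length;
    }
    offset += 8 + length; // chunk header + payload (payload is 4-byte aligned)
  }
  // embedded images reference bufferViews; sum their byteLengths
  let textureBytes = 0;
  for (const img of json?.images ?? []) {
    if (img.bufferView !== undefined) {
      textureBytes += json.bufferViews[img.bufferView].byteLength;
    }
  }
  return { jsonBytes, textureBytes, otherBinBytes: binBytes - textureBytes };
}
```

With `fs.readFileSync('model.glb')` you'd pass `buf.buffer.slice(buf.byteOffset, buf.byteOffset + buf.byteLength)`. If `textureBytes` dominates, image optimization will do far more for you than Draco.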
@donmccurdy What image optimizers are out there?
There are many, but https://squoosh.app/ is an easy one to use for example.
I’m using Blender to export the model as a glTF. I’m assuming I compress the texture image before I apply it to the model in Blender? Also, in your opinion, what is an acceptable file size for a 3D model that is to load in a browser, something that the general public wouldn’t have too much trouble loading? For reference, my files usually average 15,000 KB (about 15 MB).
An image imported and then exported by Blender won’t necessarily keep its compression. A good way to optimize textures is to unpack the model into separate files (.bin + textures), optimize the images in place, then repack them into a binary .glb file. https://github.com/AnalyticalGraphicsInc/gltf-pipeline can help with the packing/unpacking.
Acceptable size depends a lot on what you’re doing. For desktop users with good internet connections and a professional reason to use your app, some apps are OK with >100 MB of assets. For best loading and runtime performance on mobile you should stay much lower than that. This is just my opinion, but keeping the whole scene under 5 MB is a good goal, although it’s not too uncommon to see 20–25 MB. If you open the Chrome or Firefox developer tools and use the Network tab, you can see how much other websites are loading. Check http://christmasexperiments.com/ for examples.
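As a back-of-the-envelope check on those budgets, you can estimate load time from asset size and an assumed throughput. The names and the 1.5 MB/s figure below are illustrative assumptions, not measurements:

```javascript
// Rough load-time estimate: size divided by effective throughput.
function downloadSeconds(sizeMB, mbPerSecond) {
  return sizeMB / mbPerSecond;
}

const MOBILE_MB_PER_S = 1.5; // assumed effective mobile throughput
for (const sizeMB of [5, 15, 25]) {
  console.log(`${sizeMB} MB ≈ ${downloadSeconds(sizeMB, MOBILE_MB_PER_S).toFixed(1)} s`);
}
```

So a 5 MB scene is a few seconds on a decent mobile connection, while 15–25 MB can easily exceed ten seconds before anything renders.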
So I would take the glTF I exported from Blender > use the `gltf-pipeline -i model.gltf -t` command to unpack the textures > use something like Squoosh to compress a texture > resave the unpacked texture as the new compressed image > repack into a .glb… but I don’t know which command accomplishes this?
Also, is there a difference in load time between .glb and .gltf? Which one loads faster? Thank you!
Supposing you start with a GLB:
```
# unpack the model into .gltf, .bin, and texture files in the tmp/ directory
mkdir tmp
gltf-pipeline -i input.glb -o tmp/tmp.gltf --separate --json

# optimize the textures in tmp/
# ...

# re-pack the model into a GLB, adding Draco compression
gltf-pipeline -i tmp/tmp.gltf -o optimized.glb --binary --draco
```
I’d suggest using GLB in general; it means fewer network requests.
I use it, but the size is the same: 3 MB.

I also get an error when the model loads. I converted with:

```
gltf-pipeline -i myfile.glb -o tmp/tmp.gltf --draco.compressionLevel
```

and the console shows:

```
index.js:1 Error: THREE.GLTFLoader: No DRACOLoader instance provided. at new GLTFDracoMeshCompressionExtension
```
> I use it, but the size is the same. 3 MB
This could mean (a) that your model size is mostly things other than geometry, which Draco doesn’t affect, or (b) that the model is non-indexed, which gltf-pipeline can’t compress. See https://github.com/AnalyticalGraphicsInc/gltf-pipeline/issues/420.
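On the non-indexed case: indexed geometry stores each unique vertex once plus an index buffer, which is what mesh compressors work on. Here is a standalone sketch (the function name is mine) that deduplicates the vertices of a non-indexed triangle list; in a real project, `BufferGeometryUtils.mergeVertices` in three.js does this properly across all attributes, or you can re-export from Blender with indexed geometry:

```javascript
// Collapse duplicated vertices in a flat non-indexed position array
// [x,y,z, x,y,z, ...] into unique vertices plus an index buffer.
// Only handles positions; real tools also merge normals, UVs, etc.
function indexPositions(positions) {
  const seen = new Map();   // vertex key -> index of first occurrence
  const unique = [];        // deduplicated positions
  const indices = [];       // one entry per original vertex
  for (let i = 0; i < positions.length; i += 3) {
    const key = positions.slice(i, i + 3).join(',');
    if (!seen.has(key)) {
      seen.set(key, unique.length / 3);
      unique.push(positions[i], positions[i + 1], positions[i + 2]);
    }
    indices.push(seen.get(key));
  }
  return { positions: unique, indices };
}
```

Two triangles sharing an edge go from 6 stored vertices down to 4 plus 6 small indices, and the savings grow with mesh size.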
How can I import this loader from npm? I load THREE from npm and import it as:

```
import * as THREE from 'three';
```

If I call `new THREE.DRACOLoader()`, DRACOLoader is not found. I also tried:

```
window.THREE = THREE;
```

but it didn’t help.
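With module builds of three.js, GLTFLoader and DRACOLoader are not on the `THREE` namespace; they are imported from the `examples/jsm` directory of the package. A minimal sketch of the wiring, where the decoder path and model URL are assumptions about your setup:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const dracoLoader = new DRACOLoader();
// folder where you serve the Draco decoder files; note the trailing slash
dracoLoader.setDecoderPath('/draco/');

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);

loader.load('model.glb', (gltf) => {
  // 'scene' here stands for your existing THREE.Scene
  scene.add(gltf.scene);
});
```

You also need to copy the decoder files (from `three/examples/jsm/libs/draco/` in the npm package) into whatever folder you point `setDecoderPath` at.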
My model doesn’t load. I converted it with Draco:

```
gltf-pipeline -i main-main.glb -o tmp/tmp.gltf --draco.compressionLevel
```

and then packed it into a .glb:

```
gltf-pipeline -i tmp/tmp.gltf -o optimized.glb --binary --draco
```

I got 1.5 MB.
```
import * as THREE from 'three-full';
const loader = new THREE.GLTFLoader();
THREE.DRACOLoader.setDecoderPath( '/examples/js/libs/draco' );
loader.setDRACOLoader( new THREE.DRACOLoader() );
```

But I get an error:

```
Refused to execute script from 'http://localhost:3000/examples/js/libs/dracodraco_wasm_wrapper.js' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.
```
Have you looked at the Network tab in developer tools for errors? You will likely see that draco_wasm_wrapper.js is red and the URL is wrong. I think your decoder path should end with a trailing slash: '/examples/js/libs/draco/'.
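The mangled filename in the error ('…dracodraco_wasm_wrapper.js') is the clue: the loader builds the decoder URL by concatenating the decoder path and the file name, so a path without a trailing '/' corrupts the URL and the server answers with an HTML 404 page, hence the MIME-type error. A tiny standalone sketch of that failure mode (the helper name is mine, it just mimics the concatenation):

```javascript
// Decoder URLs are formed roughly as decoderPath + fileName,
// so the configured path must end with '/'.
function decoderUrl(decoderPath, fileName) {
  return decoderPath + fileName;
}

console.log(decoderUrl('/examples/js/libs/draco', 'draco_wasm_wrapper.js'));
// -> /examples/js/libs/dracodraco_wasm_wrapper.js
console.log(decoderUrl('/examples/js/libs/draco/', 'draco_wasm_wrapper.js'));
// -> /examples/js/libs/draco/draco_wasm_wrapper.js
```

The first URL matches the broken one in the error message exactly, which is why adding the trailing slash fixes it.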
No, that didn’t help me. What other ways are there to optimize models?