Incredible Compression on a GLB

TL;DR: I’m trying to figure out how these GLB files are so small, and secondly, why, when I attempt to import the optimized versions into Blender, I receive the error: “Extension EXT_texture_webp is not available on this addon version.”

I’m working on an established project for a client, replacing the prior dev, who I don’t have access to for questioning. In this project there is an incredible amount of compression applied to the GLBs being used.
The picture shows the original models and then their optimized versions. They carry the “-transformed” tag from gltfjsx, but that alone isn’t responsible for their low file size. I’ve attempted to use the glTF Transform CLI, but the lowest I could get was from 13,000 KB to 2,000 KB, which is still fantastic, but falls far short of 11,700 KB down to 283 KB! And despite using the flag for webp compression when I ran the CLI, the models I generated could be imported into Blender without issue. Any help with these issues is much appreciated!

it’s quite an elaborate config; the following is the gist, minus the conditions:

const resolution = config.resolution ?? 1024
const normalResolution = Math.max(resolution, 2048)

palette({ min: 5 }),
reorder({ encoder: MeshoptEncoder }),
instance({ min: 5 }),
resample({ ready: resampleReady, resample: resampleWASM }),
prune({ keepAttributes: false, keepLeaves: false }),
textureCompress({
  slots: /^(?!normalTexture).*$/, // exclude normal maps
  encoder: sharp,
  targetFormat: config.format,
  resize: [resolution, resolution],
}),
textureCompress({
  slots: /^(?=normalTexture).*$/, // include normal maps
  encoder: sharp,
  targetFormat: 'jpeg',
  resize: [normalResolution, normalResolution],
}),
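For context, transform lists like the one above are applied through gltf-transform’s Node API. A minimal sketch of that pipeline shape — `dedup`, `prune`, and `textureCompress` are real `@gltf-transform/functions` exports, but the file names and option values here are illustrative, not gltfjsx’s exact setup:

```javascript
// Sketch: applying a gltf-transform pipeline with the Node API.
// File names and option values are illustrative placeholders.
import sharp from 'sharp';
import { NodeIO } from '@gltf-transform/core';
import { dedup, prune, textureCompress } from '@gltf-transform/functions';

const io = new NodeIO();
const document = await io.read('model.glb');

await document.transform(
  dedup(), // merge duplicate accessors/textures
  prune({ keepAttributes: false, keepLeaves: false }), // drop unused data
  textureCompress({ encoder: sharp, targetFormat: 'webp', resize: [1024, 1024] }),
);

await io.write('model-transformed.glb', document);
```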

as for “EXT_texture_webp is not available”: blender just doesn’t support the extension, so it can’t open the file. it only supports basic gltf; a short while back it didn’t even import draco/meshopt. use a gltf inspector to profile models. as for why your CLI runs with webp worked, my guess is you had a typo or misconfig and it didn’t actually transform to webp.

ps you use gltfjsx like this (in a shell):

npx gltfjsx yourfile.glb --transform

and the biggest compression ratio (which may mangle meshes):

npx gltfjsx yourfile.glb --transform --simplify

It looks like some sort of protection, which is why you can’t simply open it.
That ain’t the whole file. The rest is probably missing along with the ex-developer :innocent:
Or it is somewhere locked.

it most likely is the whole file. 13mb to 300kb is not unusual. gltfjsx (which uses gltf-transform ofc, with the config from above) has a compression ratio up to 95% or higher. it can potentially turn a hundred MB into kilobytes.


I’m pretty sure this depends almost entirely (totally?) on gltf-transform

I just run

npm i gltfjsx


npx gltfjsx models/city-2.glb --transform

which compressed the file from 67.4mb to 8.28mb - that’s 8.1 times smaller, very far from the 43 times of 13mb to 0.3mb - obviously it depends on the mesh complexity.

If the original files have little information (detail), but lots of polygons, and thus a lot of redundancy, that might happen. But that’s the case with Draco too.

There’s also zeux/meshoptimizer on GitHub: a mesh optimization library that makes meshes smaller and faster to render.

My main takeaways on how these things get good compression:

  1. Texture compression… reducing bit depth… setting a max size on textures, using formats that are inherently compressed like JPG or KTX2… Using texture formats suited to color vs data.

  2. Mesh quantization… Instead of storing 3 floats per vertex… instead scale the model to fit into an integer range like 0 to 255 (byte), or 0 to 65535 (short), and then use the .scale/.position of the output mesh to bring the model back into the original size. This usually necessitates inserting an extra node as parent to hold the fix-up transform.
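The quantization round trip in (2) can be sketched in plain JavaScript — the function names here are hypothetical helpers for illustration, not gltf-transform APIs:

```javascript
// Sketch of unsigned-short (0..65535) position quantization.
// The parent node would get .scale = scale and .position = offset
// so the mesh renders at its original size.
function quantizePositions(positions) {
  const min = Math.min(...positions);
  const max = Math.max(...positions);
  const scale = (max - min) / 65535 || 1; // guard against flat ranges
  const quantized = positions.map((p) => Math.round((p - min) / scale));
  return { quantized, scale, offset: min };
}

function dequantizePositions({ quantized, scale, offset }) {
  // "Decompression" is just the inverse affine transform.
  return quantized.map((q) => q * scale + offset);
}

const original = [-1.25, 0.0, 0.5, 2.75];
const packed = quantizePositions(original);
const restored = dequantizePositions(packed);
// Round-trip error is bounded by half a quantization step (scale / 2).
```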

KTX2 is great because the textures stay compressed even on the GPU… making them take up less memory on the CPU and GPU, and they end up making your models render faster due to the reduced texture bandwidth required to move the pixel data around.

Mesh quantization is the same… the meshes don’t need to be “uncompressed”… since GL can handle native datatypes like float/short, and “decompression” is just a matter of scaling it back to the original size and adjusting its position.

The tradeoff for mesh quantization is that it works best for individual meshes… not an entire scene merged into a single mesh. 3D objects inherently have different scales, so a small vase merged with the model of a football stadium might get trashed by quantizing compression… whereas a separate vase mesh and stadium mesh will each be quantized to their individual bounds.
So there are tradeoffs with drawcalls, and pipeline complexity.


Aside – Blender will get support for reading/writing glTF files with WebP textures beginning in v4.0:

I believe there are some technical issues making addition of KTX2 support in Blender difficult, but I agree with @manthrax about the benefits there.


it depends on the model. if it’s very texture heavy then you get a higher compression ratio. 88% is still not bad, though 8mb is not acceptable for the web; i would decimate the model first. if you add --simplify it uses gltf-transform’s inbuilt decimation, though that one can mangle meshes.
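For reference, a sketch of roughly what `--simplify` maps to in gltf-transform’s Node API — `weld` and `simplify` are real `@gltf-transform/functions` exports and `MeshoptSimplifier` comes from meshoptimizer, but the ratio/error values and file names are illustrative, not gltfjsx’s exact defaults:

```javascript
// Sketch: mesh decimation with gltf-transform's simplify().
// Values for ratio/error are illustrative, not gltfjsx's defaults.
import { NodeIO } from '@gltf-transform/core';
import { simplify, weld } from '@gltf-transform/functions';
import { MeshoptSimplifier } from 'meshoptimizer';

const io = new NodeIO();
const document = await io.read('yourfile.glb');

await document.transform(
  weld(), // merge duplicate vertices so the simplifier sees connected topology
  simplify({ simplifier: MeshoptSimplifier, ratio: 0.5, error: 0.001 }),
);

await io.write('yourfile-simplified.glb', document);
```

Aggressive `ratio`/`error` settings are what can “mangle” meshes, so it’s worth inspecting the result visually before shipping.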

you don’t need npm i btw, npx can execute packages without installing it.

The purpose of gltfjsx is to be used with React and provide some benefits, but compression is exactly the same as Draco - as mentioned on GitHub.

Mesh simplification is very hard and too critical to leave to a secondary feature of a utility, unless mesh quality comes last for the current project for some reason.

(Personally I only use vanilla JS + Three.js + gltf2.0 + custom everything I can, and I try to avoid unnecessary libraries and the dependencies that come with them)


compression is not the same as draco. draco is a part of it, yes, but it’s using gltf-transform with the long config i posted above. if you wanted you could npx gltfjsx your own models — it will give you a no-dependencies glb. you can of course also use gltf-transform directly.

I try to avoid unnecessary libraries and the dependencies that come with them)

it’s up to you and i get this sentiment often around here, though without libraries (like draco) your models will be hundreds of megabytes. and without gltf-transform/gltfjsx a large model will never go down to 1-3mb with blender alone.

tbh, i wouldn’t even use threejs. this is one of the biggest libraries/frameworks on the entire web. if threejs is your cut-off that is a high threshold that nothing you ever use will reach. i would use plain webgl.

I’ll give it a try.

Which is why I wrote that “I try to avoid unnecessary libraries”; Draco is not one, like the Khronos products.

There are two awesome alternatives for far better results:

  1. QuadRemesher by Exoside

  2. Manual simplification, which includes retopo and utilities like QuadRemesher, and even Blender’s flawed decimation, which works correctly in some cases when applied to parts of the mesh.

I’m very close to that, as I don’t use the effect composer, I’m using custom shaders for post-processing, and I recently started to make my own custom shader materials. But I don’t know how long it would take me to switch to pure WebGL; if it were possible in a few days, I would do it.

Being a big library is not a problem, it just provides more options, as the bundler will only include what you’re using. The problem (in my case) is that it’s not optimized for speed, i.e. speed is not the first priority.

If you know that the optimization your model requires is remeshing + simplification, then yes – there are better options than gltf-transform or gltfpack for that purpose. I do expect the next meshoptimizer release (used in both) will improve simplification results in both tools.

In practice the most common issues I see are high texture sizes and high draw call counts. Both tools will help with those issues where remeshing will not.
