Incredible Compression on a GLB

TL;DR: I’m trying to figure out how these GLB files got so small, and secondly, why importing the optimized versions into Blender fails with the error: Extension EXT_texture_webp is not available on this addon version.

I’m working on an established project for a client, replacing the prior dev, who I don’t have access to for questioning. In this project there is an incredible amount of compression applied to the GLBs being used.

[screenshot: file sizes of the original GLBs next to their optimized versions]

The picture shows the original models and then their optimized versions. They carry the “-transformed” tag from gltfjsx, but that alone isn’t responsible for their low file size. I’ve attempted to use the glTF Transform CLI, but the lowest I could get was from 13,000 KB down to 2,000 KB, which is still fantastic but falls far short of 11,700 KB down to 283 KB! And despite using the flag for WebP compression when I used the CLI, the models I generated could be imported into Blender without issue. Any help with these issues is much appreciated!

it’s quite an elaborate config, the following is minus the conditions

const resolution = config.resolution ?? 1024
const normalResolution = Math.max(resolution, 2048)
...
unpartition(),
palette({ min: 5 }),
reorder({ encoder: MeshoptEncoder }),
dedup(),
instance({ min: 5 }),
flatten(),
dequantize(),
join(),
resample({ ready: resampleReady, resample: resampleWASM }),
prune({ keepAttributes: false, keepLeaves: false }),
sparse(),
textureCompress({
  slots: /^(?!normalTexture).*$/, // exclude normal maps
  encoder: sharp,
  targetFormat: config.format,
  resize: [resolution, resolution],
}),
textureCompress({
  slots: /^(?=normalTexture).*$/, // include normal maps
  encoder: sharp,
  targetFormat: 'jpeg',
  resize: [normalResolution, normalResolution],
}),
draco())
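
if you want to reproduce something comparable outside gltfjsx you can drive gltf-transform from a small node script. rough sketch only, the file names and the exact set of transforms are just an example, not the full gltfjsx pipeline:

import { NodeIO } from '@gltf-transform/core'
import { ALL_EXTENSIONS } from '@gltf-transform/extensions'
import { dedup, prune, textureCompress, draco } from '@gltf-transform/functions'
import draco3d from 'draco3dgltf'
import sharp from 'sharp'

const io = new NodeIO()
  .registerExtensions(ALL_EXTENSIONS)
  .registerDependencies({ 'draco3d.encoder': await draco3d.createEncoderModule() })

const document = await io.read('model.glb') // placeholder path
await document.transform(
  dedup(),
  prune({ keepAttributes: false, keepLeaves: false }),
  textureCompress({ encoder: sharp, targetFormat: 'webp', resize: [1024, 1024] }),
  draco(),
)
await io.write('model-transformed.glb', document)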

as for EXT_texture_webp is not available: blender just doesn’t support the extension, so it can’t open the file. it only supports basic gltf; a short while back it didn’t even import draco/meshopt. use gltf.report to profile models. as for why your CLI runs with the webp flag still opened in blender, my guess is you had a typo or misconfig and it didn’t actually transform to webp.
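
to check what a glb actually ended up with, gltf.report will show you, or a quick node script like this (rough sketch, assumes a standard binary gltf where the first chunk is the json chunk):

import { readFileSync } from 'node:fs'

const buf = readFileSync('model.glb') // placeholder path
const jsonLength = buf.readUInt32LE(12) // byte length of the json chunk
const json = JSON.parse(buf.toString('utf8', 20, 20 + jsonLength))
console.log(json.extensionsUsed) // EXT_texture_webp shows up here if webp was really used
console.log((json.images || []).map((image) => image.mimeType)) // e.g. 'image/webp' vs 'image/jpeg'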

ps you use gltfjsx like this (open shell):

npx gltfjsx yourfile.glb --transform

and for the biggest compression ratio (which may mangle meshes):

npx gltfjsx yourfile.glb --transform --simplify
1 Like

It looks like some sort of protection, which is why you can’t simply open it.
That ain’t the whole file. The rest is probably missing along with the ex-developer :innocent:
Or it is locked away somewhere.

it most likely is the whole file. 13mb to 300kb is not unusual. gltfjsx (which uses gltf-transform ofc, with the config from above) has a compression ratio up to 95% or higher. it can potentially turn a hundred MB into kilobytes.

2 Likes

I’m pretty sure this also almost all (totally?) depends on gltf-transform

I just run

npm i gltfjsx

then:

npx gltfjsx models/city-2.glb --transform

which compressed the file from 67.4 MB to 8.28 MB - that’s 8.1 times smaller, very far from the 43 times reported above (13 MB to 0.3 MB) - obviously it depends on the mesh complexity.

If the original files have little information (detail), but lots of polygons, and thus a lot of redundancy, that might happen. But that’s the case with Draco too.

There’s also zeux/meshoptimizer on GitHub, a mesh optimization library that makes meshes smaller and faster to render.

My main takeaways on how these things get good compression:

  1. Texture compression… reducing bit depth, setting a max size on textures, using formats that are inherently compressed like JPG or KTX2, and using texture formats suited to color vs. data. (rough sketch after this list)

  2. Mesh quantization… Instead of storing 3 floats per vertex, scale the model to fit into an integer range like 0 to 255 (byte) or 0 to 65535 (short), and then use the .scale/.position of the output mesh to bring the model back to its original size. This usually necessitates inserting an extra node as parent to hold the fix-up transform.
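
To make item 1 concrete, here’s a rough sketch using the sharp library (the file names, sizes, and quality values are just examples, not from any particular project):

import sharp from 'sharp'

// Cap the texture at 1024x1024 and re-encode to an inherently compressed format.
// Fine for color maps… normal/data maps usually want gentler settings.
await sharp('baseColor.png')
  .resize(1024, 1024, { fit: 'inside' })
  .webp({ quality: 80 })
  .toFile('baseColor-1k.webp')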

KTX2 is great because the textures stay compressed even on the GPU, making them take up less memory on both the CPU and GPU, and your models end up rendering faster due to the reduced texture bandwidth needed to move the pixel data around.

Mesh quantization is the same… the meshes don’t need to be “uncompressed”… since GL can handle native datatypes like float/short, and “decompression” is just a matter of scaling it back to the original size and adjusting its position.
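
A rough numeric sketch of that idea (the bounds are made up):

// Made-up bounding box for a single mesh.
const min = [-2, 0, -2]
const max = [2, 3, 2]

// Store each position as an integer in 0..65535 (16-bit) instead of 3 floats.
const quantize = (p) => p.map((v, i) => Math.round(((v - min[i]) / (max[i] - min[i])) * 65535))

// The parent node's transform undoes it at render time: original = quantized * scale + translation.
const scale = max.map((v, i) => (v - min[i]) / 65535)
const translation = min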

The tradeoff with mesh quantization is that it works best for individual meshes… not an entire scene merged into a single mesh. 3D objects inherently have different scales, so a small vase merged with the model of a football stadium might get trashed by quantization… whereas a separate vase mesh and stadium mesh will each be quantized to their individual bounds.
So there are tradeoffs with draw calls and pipeline complexity.

4 Likes

Aside – Blender will get support for reading/writing glTF files with WebP textures beginning in v4.0:

I believe there are some technical issues making addition of KTX2 support in Blender difficult, but I agree with @manthrax about the benefits there.

5 Likes

it depends on the model. if it’s very texture heavy you get a higher compression ratio. 88% is still not bad, though 8mb is not acceptable for the web, i would decimate the model first. if you add --simplify it uses gltf-transform’s built-in simplifier, though that one can mangle meshes.
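
fwiw you can also drive the simplifier through gltf-transform directly if you want control over how aggressive it is. a rough sketch, the ratio/error values are just examples (assumes the input isn’t already draco/meshopt compressed, otherwise register the extensions and decoders first):

import { NodeIO } from '@gltf-transform/core'
import { simplify, weld } from '@gltf-transform/functions'
import { MeshoptSimplifier } from 'meshoptimizer'

const io = new NodeIO()
const document = await io.read('models/city-2.glb')
await document.transform(
  weld(),
  simplify({ simplifier: MeshoptSimplifier, ratio: 0.5, error: 0.001 }),
)
await io.write('models/city-2-simplified.glb', document)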

you don’t need npm i btw, npx can execute packages without installing them.

The purpose of gltfjsx is to be used with React and provide some benefits, but the compression is exactly the same as Draco, as mentioned on GitHub.

Mesh simplification is too hard and too critical to leave to a secondary feature of a utility, unless mesh quality comes last for the current project for some reason.

(Personally I only use vanilla JS + Three.js + gltf2.0 + custom everything I can, and I try to avoid unnecessary libraries and the dependencies that come with them)

1 Like

compression is not the same as draco. draco is a part of it, yes, but it’s using gltf-transform with the long config i posted above. if you wanted you could npx gltfjsx your own models — it will give you a no-dependencies glb. you can of course also use gltf-transform directly.

I try to avoid unnecessary libraries and the dependencies that come with them

it’s up to you and i get this sentiment often around here, though without libraries (like draco) your models will be hundreds of megabytes. and without gltf-transform/gltfjsx a large model will never go down to 1-3mb with blender alone.

tbh, i wouldn’t even use threejs. this is one of the biggest libraries/frameworks on the entire web. if threejs is your cut-off that is a high threshold that nothing you ever use will reach. i would use plain webgl.

I’ll give it a try.

Which is why I wrote that “I try to avoid unnecessary libraries”; Draco is not one, just like the Khronos products.

There are two awesome alternatives for far better results:

  1. QuadRemesher by Exoside

  2. Manual simplification, which includes retopo and utilities like QuadRemesher, and even Blender’s flawed decimation (which works correctly in some cases) applied to parts of the mesh.

I’m very close to that as I don’t use the effect composer, I’m using custom shaders for post-processing, and I recently started to make my own custom shader materials. But I don’t know how long it would take me to switch to pure WebGL; if it were possible in a few days, I would do it.

Being a big library is not a problem; it just provides more options, as the bundler will only include what you’re using. The problem (in my case) is that it’s not optimized for speed, i.e. speed is not the first priority.

If you know that the optimization your model requires is remeshing + simplification, then yes – there are better options than gltf-transform or gltfpack for that purpose. I do expect the next meshoptimizer release (used in both) will improve simplification results in both tools.

In practice the most common issues I see are high texture sizes and high draw call counts. Both tools will help with those issues where remeshing will not.

1 Like

Hey Don,

For a web application where users upload glTF files to display (fast load times are important, but you don’t know in advance what optimization each model needs), what general method would you recommend for simplifying glTF geometries? Also, would it be better to use Draco or Meshopt compression?

It really depends:

  • Are you OK with changing the internal structure of the model (merging meshes and materials, flattening hierarchy, simplifying geometry, changing names…) as long as the visual result is the same?
  • What’s your tolerance for optimization causing a problem in the user’s model? Lossy optimizations aim to work well “most of the time”, and improve performance the most, but for some models they won’t work out of the box and you’d need to tune the parameters. No one choice of lossy parameters will work for every model, in my experience.

If you’re willing to have less compression in exchange for minimizing the number of cases where optimization will break a model, or you need to preserve the internal structure of the scene — then you’ll want to stick with lossless compression methods. If you’re willing to tune the parameters occasionally when something breaks, you can be more aggressive with the optimization.

I usually start with something like:

gltf-transform optimize in.glb out.glb --texture-compress webp --compress meshopt --no-instance

Meshopt decodes much faster, and the size is pretty similar to Draco if you are also doing Gzip on the files, which requires Gzip support in whatever webserver you’re using to serve the files. GitHub Pages, for example, doesn’t support Gzip on GLBs. Draco is also a totally fine choice here.
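
On the Gzip point: a small Node/Express server can enable it with the compression middleware, something like the sketch below. The filter is just to make sure binary glTF responses get compressed too; any static server with Gzip or Brotli enabled works the same way.

import express from 'express'
import compression from 'compression'

const app = express()
app.use(compression({ filter: () => true })) // gzip all responses, including model/gltf-binary
app.use(express.static('public')) // serve your .glb files from ./public
app.listen(8080)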

If I’m worried about texture memory, or texture upload stalls, or doing anything WebXR-related, I’d use KTX2 instead of WebP. See Choosing texture formats for WebGL and WebGPU applications for more on texture formats.

Finally, it’s pretty important to know what you’re optimizing. If your model’s size is mostly textures, worrying about Draco vs. Meshopt is useless. If it’s too many draw calls, then neither geometry nor texture compression can help you. Use something like gltf-transform inspect scene.glb to get a better idea here, and choose your tools accordingly. :slight_smile:

3 Likes

@donmccurdy To answer your questions:

  1. OK with changing the internal structure as long as the visual result is the same.

  2. Low tolerance for causing visual problems with users’ models. Would rather avoid having to tune parameters case by case.

With that in mind, what would be the best lossless vs. lossy compression method? Currently I’m using JSZip after Draco-compressing glTF files; is that similar to Gzip? Thanks for the help!

JSZip, Gzip, Brotli, or any other lossless compression method is fine, yes.

Lossless compression won’t usually do very much on top of Draco. Meshopt, on the other hand, is designed to prepare data so it benefits from those lossless compression methods, and is almost always used alongside them. So to compare Meshopt and Draco correctly, you would need to factor in the lossless compression.
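
For example, to compare the two fairly you could gzip both outputs and look at the compressed sizes (a quick sketch; the file names are placeholders):

import { readFileSync } from 'node:fs'
import { gzipSync } from 'node:zlib'

for (const file of ['out-draco.glb', 'out-meshopt.glb']) {
  const raw = readFileSync(file)
  console.log(file, 'raw:', raw.length, 'gzipped:', gzipSync(raw).length)
}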

Draco itself is always lossy, and will break for some small percentage of models, particularly those with small details or very large scales in the geometry or UVs. The same goes for default Meshopt implementations, or anything based on quantization. It might be possible to do Meshopt compression losslessly — but I’m not personally sure how.

Something like this would be closer to visually lossless:

gltf-transform optimize in.glb out.glb --no-simplify --no-compress --no-instance --texture-compress webp

I’d have a look at the gltf-transform optimize --help output; there’s a fair bit of choice there.

2 Likes

Yup. Getting things to compress well with meshopt is an art.

For instance… breaking the scene into separate meshes helps it… so it can compress each model differently according to detail.

Often when I see things losing quality, it’s because it’s a small island of geometry in a much larger mesh, so meshopt doesn’t have a sense of what scale the important features are at.

1 Like

Yeah, unfortunately that’s true of Draco as well; they both rely on quantization. You can increase the quantization bits for the ‘position’ attribute to reduce the issue, but splitting meshes so the geometric detail is more evenly distributed across the extent of each mesh is better.
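
With the gltf-transform scripting API that looks roughly like the sketch below. The quantizePosition option name is from memory, so treat it as an assumption and check the draco() documentation or gltf-transform draco --help:

import { NodeIO } from '@gltf-transform/core'
import { ALL_EXTENSIONS } from '@gltf-transform/extensions'
import { draco } from '@gltf-transform/functions'
import draco3d from 'draco3dgltf'

const io = new NodeIO()
  .registerExtensions(ALL_EXTENSIONS)
  .registerDependencies({ 'draco3d.encoder': await draco3d.createEncoderModule() })

const document = await io.read('scene.glb')
// More bits for 'position' = less aggressive quantization, larger files, fewer artifacts (assumed option name).
await document.transform(draco({ quantizePosition: 16 }))
await io.write('scene-draco.glb', document)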

1 Like