GLB model generated from glTF looks bad

[I wanted to share my glTF file and its .bin file, as well as the resulting Draco file, but Discourse has an 8 MB limit; I don't know how I can share them.]

I have a pretty big model, and (not just for this model) I want to learn how to reduce the file size of 3D models in general.

The first thing I tried was something I learned from sbcode: using Draco-compressed glTF.

https://sbcode.net/threejs/loaders-draco/

I compressed my 50 MB model with gltf-pipeline's Draco option, and this is what my 3D scene looks like when I load the Draco model:

The model is degraded very badly.

A closer look:

The original model in glTF looked like this:

I did this to create the Draco file:

gltf-pipeline -i dist/client/models/drawing_room/scene.gltf -o modelDraco2.gltf -d

The gltf-pipeline I'm using is from this:

which I installed by doing

npm install -g gltf-pipeline
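
For reference, I load the compressed file with GLTFLoader plus DRACOLoader, roughly like the sbcode example (the decoder path and file name here are just placeholders):

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js'
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js'

const dracoLoader = new DRACOLoader()
dracoLoader.setDecoderPath('/js/libs/draco/') // wherever the Draco decoder files are served from
const loader = new GLTFLoader()
loader.setDRACOLoader(dracoLoader)
loader.load('models/drawing_room/modelDraco2.gltf', (gltf) => {
    scene.add(gltf.scene)
})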

Can anyone help me, please?

This is my glTF file:

and its .bin file:


@seanwasere, on your website, can I ask how you created the Draco-compressed file for that monkey model?

Is it that, since my model is a scanned model, I can't compress it with Draco?

I think I have tried and checked most of the code and examples, and I don't think you have talked about a way to reduce the number of triangles in a mesh.

In three.js I found SimplifyModifier, but that only works on a model that is already loaded, and it is not working in my case.
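
To show what I mean, this is roughly the kind of usage I found for SimplifyModifier (the mesh variable here is just a placeholder for an already loaded mesh):

import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier.js'

const modifier = new SimplifyModifier()
// count = number of vertices to remove, here roughly half of them
const count = Math.floor(mesh.geometry.attributes.position.count * 0.5)
mesh.geometry = modifier.modify(mesh.geometry, count)

so it only helps after the model is already loaded in the scene, not with the file itself.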

Can you please help me learn what ways there are to compress my model without degrading it the way Draco degraded the original?

Is there any approach you prefer?

I want to focus my learning on scanned 3D models, and with scanned models the big problem is file size. I want to reduce it, but I haven't been able to find out what the options are.

Can you please help me with this?

I used Blender. The monkey is a very simple model compared to yours.

Open Blender,
click into the main scene so that keyboard events are sent to it,
press A to select all,
press Delete to delete everything,
press Shift+A to add a new Mesh → Monkey,
File → Export → glTF 2.0,
open the Geometry options → tick Compression → Export.

Hi, thank you.

Generating the compressed file from Blender gives a better model than the one I got from the gltf-pipeline module, but there are still some artifacts, which are not as bad as in the previous Draco file I mentioned in the post.

There are still some, like:

I did not change any of the default values in the compression settings. Do you think I should change something?

I don’t know.

@Pravin_Poudel try including the --draco.unifiedQuantization flag when compressing with gltf-pipeline. Draco compression involves rounding vertices to a grid (quantization). The default is to choose that grid independently for each mesh, which improves compression, but this may introduce seams between two adjacent meshes. The option above forces Draco to use the same grid for the whole scene, and should avoid the seams.

Hi, thank you so much.

This solved the issue of the artifacts, but now another issue has come up.

Now the model looks like this:

Very, very low quality.

Can I do something about this?

And one more thing: these are the files I ran gltf-pipeline on:

scene.gltf – 24 KB
scene.bin – 42.246 MB

and this is the command line I ran:

gltf-pipeline -i dist/client/models/drawing_room/scene.gltf -o roomDraco.gltf --draco.unifiedQuantization

and my roomDraco.gltf model is 40.482 MB.

There are a lot of documented options in the CLI — I think you'll want to try a few more of them. With unified quantization you may also need higher precision on the vertex positions. I'd recommend outputting to .glb rather than .gltf as well.

Hi, thank you for the insight.

I tried with other arguments as well, and this is still my output.

This scene is from the file compressed with:

gltf-pipeline -i dist/client/models/drawing_room/scene.gltf -o roomDraco.glb --draco.unifiedQuantization --draco.quantizePositionBits 16 --draco.quantizeNormalBits 16

And this is the output for this command:

gltf-pipeline -i dist/client/models/drawing_room/scene.gltf -o roomDraco.glb --draco.unifiedQuantization --draco.quantizePositionBits 24 --draco.quantizeNormalBits 16

I tried reducing the compression level as well:

gltf-pipeline -i dist/client/models/drawing_room/scene.gltf -o roomDraco.glb --draco.unifiedQuantization --draco.quantizePositionBits 24 --draco.quantizeNormalBits 16 --draco.compressionLevel 4

This is the output; still low quality.

Can you please give me feedback on this?

One important thing would be that the texture used by this model is almost 30 MB alone. Draco only compresses geometry, so you’ll need to do something else about that. Here’s a set of steps I’ve tried:

  1. compress the image separately in https://squoosh.app/
  2. merge all meshes in the model with gltfpack:
gltfpack -i models/scene.gltf -o ~/Desktop/scene-merged.gltf -noq
  3. apply Draco compression (no need for unified quantization now, the meshes are merged):
npm install --global @gltf-transform/cli

gltf-transform draco ~/Desktop/scene-merged.gltf ~/Desktop/scene-merged-draco.glb --quantize-texcoord=14

The result (attached) comes down to 11 MB with the steps above, and looks reasonably good to me. Further trial and error may improve things more. I'd also consider reducing the texture from 8K to 4K if mobile devices are a concern.

Hi,

thank you so, so much.

I am just curious about one thing.

I did what you did, but I skipped the first step because I was over-excited to reduce my file size, and the final .glb file I got without that compression is 67 MB.

I am not able to understand why this file is bigger than the original glTF, which is 42 MB itself.

I’m not sure which steps you did, but if you’ve converted the .gltf to .glb then the texture has probably been embedded in the .glb file.

Oh no, sorry for being dumb.

I misread your last command as gltf-pipeline, so I was running gltf-pipeline instead of gltf-transform, which was the root of the issue.

I used gltf-transform and boom, the problem is solved!
Now my model is compressed from 42 MB to 11 MB.

It was eating my brain. Thank you!

Since this model is Draco-compressed, does that mean I don't need that texture file anymore? Because even when I delete the texture there is no error.

Is this intended behavior?

And one more thing: is there any way I can run this command programmatically? I want to reduce the size of the files users upload to my system.

Again, thank you so, so much.


No worries at all! Lots of moving pieces to optimize in a 3D model.

Since this model is Draco-compressed, does that mean I don't need that texture file anymore? Because even when I delete the texture there is no error.

You don't need the texture anymore, but it isn't because of Draco — in most tools, exporting to .glb (rather than .gltf) will embed textures and the binary .bin data into the self-contained .glb. A .gltf usually keeps references to external files instead. And if you ever see a .gltf that doesn't have external files (sometimes called "glTF Embedded"), that's very inefficient (Base64 strings) and it's better to switch to .glb (binary).

… is there any way I can run this command programmatically? I want to reduce the size of the files users upload to my system.

I guess it depends on whether you can execute CLI commands from your system? Or, if not, what language your server is implemented in? glTF Transform is the most flexible of these tools in that area; it's JavaScript and can do Draco compression in a web application or a node.js server without needing the CLI:
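
Roughly, the node.js equivalent of the CLI step above looks something like this (just a sketch; the exact class names, e.g. DracoMeshCompression, may differ between glTF Transform versions, so check the docs):

const { NodeIO } = require('@gltf-transform/core')
const { DracoMeshCompression } = require('@gltf-transform/extensions')
const draco3d = require('draco3dgltf')

async function compress (inputPath, outputPath) {
  // register the Draco encoder/decoder so NodeIO can read and write compressed files
  const io = new NodeIO()
    .registerExtensions([DracoMeshCompression])
    .registerDependencies({
      'draco3d.decoder': await draco3d.createDecoderModule(),
      'draco3d.encoder': await draco3d.createEncoderModule()
    })

  const document = await io.read(inputPath)

  // mark the document so meshes are written with KHR_draco_mesh_compression
  document.createExtension(DracoMeshCompression).setRequired(true)

  await io.write(outputPath, document)
}

compress('scene-merged.gltf', 'scene-merged-draco.glb')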

But the first step with gltfpack is important here, it joins the meshes together so they’ll compress better, and that part is only a CLI tool at this point. I have work in progress for supporting that feature in glTF Transform but it isn’t done yet.
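
In the meantime, one option (untested sketch; it assumes gltfpack is installed and on the server's PATH) is to shell out to the gltfpack CLI from node.js:

const { execFile } = require('child_process')

// run the same merge step as the gltfpack command above on an uploaded file
function mergeMeshes (inputPath, outputPath) {
  return new Promise((resolve, reject) => {
    execFile('gltfpack', ['-i', inputPath, '-o', outputPath, '-noq'], (error) => {
      error ? reject(error) : resolve(outputPath)
    })
  })
}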


Oh wow, the level of knowledge you are sharing with me is insane. Wow!

I am not using a backend right now; I was trying to do it on the front end.

I don't use any framework on the front end, just vanilla JS, TS, and webpack.

I guess I can use Express as the backend if needed, as I am familiar with it.

If I use JS on the backend, can I execute this command?

Can I also do this from the front end, or does it make more sense to do it on the backend side?

And one more thing: I also want to learn how to do LOD in a glTF model with the MSFT_lod extension.

I was reading an article that referenced glTF Transform as a tool that can do this. After checking it, I found this repo:

https://github.com/takahirox/glTF-Transform-lod-script

but it looks like, to do this, I have to clone that repo, install its dependencies, and run its script, giving it the input and output file names or locations.

Is there any way I can do this in my own project and have LOD in my model?


Either way is fine! Up to you. Getting the Draco encoder installed on the frontend is a little more complicated than in node.js; the Stack Overflow link above has some details on that.


I’m not sure if LODs will help for this particular model — that project is designed around the case where you have a large world with many objects, and want to make those objects more/less detailed as you get closer/farther away. For a single static room, it may not help much.

As far as the setup goes, yeah, I think it's what the project readme describes. You'll also need to install extra plugins for THREE.GLTFLoader, which doesn't support that extension out of the box. See GitHub - takahirox/three-gltf-extensions: Unofficial Three.js glTF loader/exporter plugins for that part. I'm not sure it's complete, and using custom extensions is a more complicated topic.