Draco's compression ratio is impressive; the decompression time is very long, but acceptable for a file this size. Personally, though, I don't see a use for it outside of enormous models such as 3D scans. Decoding took ~7 seconds on my powerful desktop machine, and on weak mobile devices it will take much longer. So I don't see a use for games or typical apps, but certainly for 3D scans.
At a certain triangle count an average device starts to struggle rendering it anyway. This César model, for example, could be optimized so that it looks basically the same at only around 1-2 MB of uncompressed binary. Besides the many unnecessary faces, the flat details on the fabric or armor would look even better/smoother baked into normal maps. The distortion from scanning creates a lot of wrong details.
Yes, and I agree that Draco does a great job; I just don't see a reason to use it in general, since it's really only relevant for big files. For the web, one should always optimize the model first before trying to compress it as much as possible, since triangle count matters more for rendering when it comes to games and the like.
I really like Draco and will integrate it into my own format as well.
By optimizing scan models, baking details, etc., mobile devices won't struggle or die rendering them at the same visual quality. But that only applies to rendering the model in its entirety; with an LOD rendering technique the hardware limit can be avoided, though that requires storing additional data, which I don't think Draco handles.
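A minimal sketch of the LOD idea mentioned above, in plain JavaScript. The level names and distance thresholds are made up for illustration; three.js itself provides a `THREE.LOD` object that works along these lines, switching meshes by camera distance.

```javascript
// Illustrative LOD table: each level pairs a (hypothetical) mesh variant
// with the minimum camera distance at which it becomes active.
// Levels must be sorted by minDistance ascending.
const levels = [
  { name: 'high',   minDistance: 0 },    // full-detail scan
  { name: 'medium', minDistance: 50 },   // decimated mesh + baked normal map
  { name: 'low',    minDistance: 200 },  // silhouette-only proxy
];

// Pick the last level whose distance threshold the camera has passed.
function selectLevel(levels, distance) {
  let selected = levels[0];
  for (const level of levels) {
    if (distance >= level.minDistance) selected = level;
    else break;
  }
  return selected.name;
}

console.log(selectLevel(levels, 10));  // 'high'
console.log(selectLevel(levels, 75));  // 'medium'
console.log(selectLevel(levels, 500)); // 'low'
```

The extra data this implies (several decimated variants of the same mesh) is exactly what a pure geometry codec like Draco does not manage for you; each variant would just be compressed independently.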
I've got an i7-5930K (6 cores at 3501 MHz), 32 GB DDR4 RAM, and a Titan X, with the latest Chrome.
Compressing media data is standard in many areas, such as pictures, video, and audio. Nobody would think of embedding uncompressed image data in a website (established formats like JPEG, PNG, and GIF all use compression by default). From my point of view, there is no reason why 3D model data should be a special case. Even small files benefit from compression, since the total delay (load + parse/decode) is generally smaller than loading uncompressed data.
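That "total delay" argument can be put into a toy formula: total time = size / bandwidth + decode time. A quick sketch, with all numbers purely illustrative assumptions rather than benchmarks:

```javascript
// Toy model of the load + decode tradeoff. Every number here is an
// illustrative assumption, not a measurement of Draco or any real device.
function totalDelayMs({ sizeMB, bandwidthMBps, decodeMsPerMB = 0 }) {
  const downloadMs = (sizeMB / bandwidthMBps) * 1000; // time on the wire
  const decodeMs = sizeMB * decodeMsPerMB;            // time spent decompressing
  return downloadMs + decodeMs;
}

// A hypothetical 10 MB raw mesh on a 2 MB/s connection:
const raw = totalDelayMs({ sizeMB: 10, bandwidthMBps: 2 });
// The same mesh compressed to 1 MB, decoding at an assumed 100 ms/MB:
const compressed = totalDelayMs({ sizeMB: 1, bandwidthMBps: 2, decodeMsPerMB: 100 });

console.log(raw);        // 5000
console.log(compressed); // 600
```

Under these assumptions compression wins easily; the balance shifts only when bandwidth is high or the decoder is very slow relative to the connection.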
Of course compression is relevant, but for average-sized buffers other, faster compression methods suffice as well. And as I said, optimizing the actual content is the most important part, and it will reduce the file size as well.
How does it generally compare to gzipped JSON? Why does JSON gzip better than a binary mesh? At a high level, is it just a different compression algorithm, while Draco is specifically aimed at spatial data?
Theoretically, if the Draco decoder were part of the browser, there would be no question which format to use. But currently, since browsers can unzip natively, might zipped JSON win when both size and decoding time are considered?
Draco is spatially-aware and configurably lossy, so the compression ratio will be better than gzip for geometry-heavy models. Decompression does take longer, though, so that is a tradeoff. Related: https://github.com/mrdoob/three.js/pull/15249.
Right now, I am loading my scene as a zip file using THREE.ZipLoadingManager, and it cuts the file size down by about half, which works great. The only thing I noticed is a second or two of really intense CPU activity during which everything freezes. Is Draco any better in that regard? If I understand correctly, Draco won't handle the texture data, only the mesh.
@Hyacinth Draco should give you significantly better compression on the geometry than ZIP compression. The decoding time is still nontrivial, though I don't know whether it will be more or less in your case. You may also want to keep your textures out of the ZIP archive; they usually don't benefit much from that sort of compression.
We have a work-in-progress version of THREE.DRACOLoader that runs in a Web Worker, so (1) it can decode multiple models, or parts of a single model, in parallel, and (2) it doesn’t block the main thread: https://github.com/mrdoob/three.js/pull/15249
You could also do your ZIP decompression in a web worker, but I don’t have an example of that handy.
Finally, you may want to consider the new Basis texture compression library, which will significantly decrease the amount of time your application freezes during texture GPU upload.
Wow, thank you. I will look into Draco then, and I look forward to any new developments you guys make. Right now, the scenes I am loading are not very large. But I am modelling this world after OpenSim and Second Life, where a region full of stuff can be up to a GB of data. So when scenes start getting that large, I really need to think through loading them in stages: the base scene loads first so you can walk around, then details are overlaid on top of it without everything freezing up. Though I think pressuring people to make better design choices will help too.
Thanks for your suggestion. I am new to the glTF loader and am also trying to reduce the size of my model using gltf-pipeline.
My original file is 43.4 MB, but when I run: gltf-pipeline -i MyModel.glb -o modelDraco.gltf -d
the resulting file is 51.3 MB.
The other option is: gltf-pipeline -i MyModel.glb -o model.gltf
But that file is 57.9 MB.
That is with Draco's new default quantization settings.
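As for the second command (without -d), one plausible explanation, assuming the buffer ends up embedded in the JSON, is base64 inflation: a standalone .gltf stores its binary buffer as a base64 data URI, and base64 encodes every 3 bytes as 4 characters. The arithmetic lines up suspiciously well with the sizes reported above:

```javascript
// Base64 represents every 3 input bytes as 4 output characters,
// so an embedded buffer grows by a factor of 4/3.
// (Whether this particular .gltf embeds its buffer is an assumption.)
function base64InflatedMB(sizeMB) {
  return sizeMB * (4 / 3);
}

console.log(base64InflatedMB(43.4).toFixed(1)); // '57.9' — matches the observed 57.9 MB
```

If that is what is happening, exporting to .glb instead of .gltf (so the buffer stays binary) should avoid the inflation entirely.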
If you still don't get favorable compression, try loading your model into Blender and exporting it as a .glb file with compression enabled, using Blender's default compression settings.