🐉 Anyone using Draco?

Draco: GitHub - google/draco
Have you had a good/bad experience you could share with the community?

Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.

Draco was designed and built for compression efficiency and speed. The code supports compressing points, connectivity information, texture coordinates, color information, normals, and any other generic attributes associated with geometry. With Draco, applications using 3D graphics can be significantly smaller without compromising visual fidelity. For users, this means apps can now be downloaded faster, 3D graphics in the browser can load quicker, and VR and AR scenes can now be transmitted with a fraction of the bandwidth and rendered quickly.

Draco is released as C++ source code that can be used to compress 3D graphics, as well as C++ and JavaScript decoders for the encoded data.

Out of curiosity: I see WebAssembly being used here.

I’m using it and I think it’s great. Just one example to illustrate this:

The following model at Sketchfab is very complex. The original OBJ file is 186 MB, the glTF version about 41 MB. Although using glTF clearly reduces the file size, it’s still painful to load this amount of data over a potentially slow internet connection. After applying Draco with the default compression level to the asset via gltf-pipeline, the file size is reduced to approx. 5 MB. Of course, you now have higher client-side parsing overhead because of the decoding process, but the overall delay is much smaller than before. Nevertheless, for such big models I would recommend performing the decompression in a worker. Besides, using a mesh compression algorithm like Draco is more effective than using a general-purpose, lossless compression like zip.
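For reference, a minimal sketch of that compression step using gltf-pipeline’s Node API (the file names are placeholders; check the option names against the gltf-pipeline version you have installed):

```js
// Sketch: apply Draco compression to a glTF asset with gltf-pipeline.
// Assumes gltf-pipeline has been installed via `npm install gltf-pipeline`.
const fs = require('fs');
const gltfPipeline = require('gltf-pipeline');

const gltf = JSON.parse(fs.readFileSync('model.gltf', 'utf8'));

gltfPipeline
  .processGltf(gltf, {
    dracoOptions: {
      compressionLevel: 7 // 0 (fastest) to 10 (smallest output)
    }
  })
  .then((results) => {
    fs.writeFileSync('model-draco.gltf', JSON.stringify(results.gltf));
  });
```

If I remember correctly, the same thing is available on the command line via the `-d` / `--draco.compressMeshes` flag.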

Decompression can be quite a computationally intensive task. The official Draco libs allow decompression via JavaScript or WebAssembly. The latter is the faster approach.
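For example, with three.js you can attach a DRACOLoader to GLTFLoader and choose between the two decoders. This is just a sketch: the decoder path is a placeholder, and the exact API differs a bit between three.js releases, so check the docs of your version.

```js
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

// Point the loader at the folder containing draco_decoder.js / draco_decoder.wasm
// (they can be copied from the three.js examples or the Draco repository).
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/');

// WebAssembly is preferred automatically when the browser supports it;
// uncomment the next line to force the (slower) JavaScript decoder instead.
// dracoLoader.setDecoderConfig({ type: 'js' });

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load('model-draco.gltf', (gltf) => {
  scene.add(gltf.scene); // assumes an existing `scene`
});
```

Depending on the three.js version, DRACOLoader also moves the actual decoding into Web Workers for you, which covers the worker recommendation above.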

4 Likes

For the interested reader or student: one of the algorithms Draco uses is the so-called Edgebreaker. The following paper provides the theoretical background behind the implementation:

4 Likes

This is impressive indeed. I had not realized glTF had this important feature for reducing file size.

  • Are there other pros and cons of glTF?
  • Since we are in 2018, would you recommend using glTF all the time, or sparingly?
  • Does glTF have any known limitations?

I thought this thread was about Draco? :wink: Maybe it’s better if we don’t mix topics.

BTW: Draco can also be used without glTF. According to the project’s website, it accepts OBJ and PLY as input. But apart from that, it’s very elegant to use it in combination with glTF.
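As an illustration, DRACOLoader can also load such a standalone Draco file directly and hands you a plain BufferGeometry (a sketch; 'model.drc' and the decoder path are placeholders, and the file could be produced with the draco_encoder tool from an OBJ or PLY input):

```js
import * as THREE from 'three';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const loader = new DRACOLoader();
loader.setDecoderPath('/draco/'); // folder with the Draco decoder files

loader.load('model.drc', (geometry) => {
  // Compute normals only if they were not part of the encoded attributes.
  if (geometry.attributes.normal === undefined) geometry.computeVertexNormals();

  const mesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial());
  scene.add(mesh); // assumes an existing `scene`
});
```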

1 Like

you’re right, my bad
a topic for another time

:scream: :scream:

2 Likes

Yeah, it’s crazy :laughing:

2 Likes

Isn’t it a base64 string for the buffers? I mean, I’ve got some models in text format, for example ~5 MB, turning into ~120 KB models in binary. The algorithm looks interesting. A lot of 3D scan models could be optimized/reduced in advance too, since a lot of faces don’t contribute to any detail; I also noticed this in a lot of scans on Sketchfab.

1 Like

I’m not sure I understand. What are you referring to?

I meant glTF (without Draco) resulting in 41 MB: looking into the loader, it’s using a base64 string for the buffers. That’s a huge difference from having a binary buffer, but still quite a bit better than a text format, of course. But maybe I missed something and you already used binary buffers.

glTF has two variants: .gltf and .glb. Typically .gltf will reference separate .bin and texture files, whereas .glb has everything embedded as binary (not base64) data. It’s possible to embed everything into .gltf too, and that’s when you end up with base64 strings in the JSON that increase filesize by ~30%, so that’s unusual and inefficient.
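To make that concrete, this is roughly what the buffers section of the JSON looks like in the two cases (shortened, illustrative values only):

```js
// External binary (typical .gltf): the JSON only references a .bin file.
const externalBuffers = {
  buffers: [{ uri: 'scene.bin', byteLength: 1024 }]
};

// Embedded .gltf: the same bytes as a base64 data URI,
// which inflates the payload by roughly a third.
const embeddedBuffers = {
  buffers: [{ uri: 'data:application/octet-stream;base64,AAAA...', byteLength: 1024 }]
};

// A .glb packs the JSON and the binary chunk into a single file,
// so no base64 step is involved at all.
```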

Generally, just use the .glb variant (or pack it with https://glb-packer.glitch.me/) and you’ll avoid all of that confusion. :slight_smile:

Using Draco alone is a good option if you just need geometry. If you need materials, textures, and animation, then glTF+Draco gives the best of both worlds.

6 Likes

BTW: Here is the mentioned César model as a three.js demo:

https://cdn.rawgit.com/mugen87/draco-showcase/2ad92db87f5f12d6cf2af8ea58574e2f37e308a0/index.html

It’s interesting to have a look at the network tab of your browser’s dev tools.

1 Like

The compression ratio of Draco is impressive; the decompression time is very long, but OK for this file size. Personally, I don’t see a use for it outside of enormous models like those from 3D scans. It took ~7 seconds on my powerful desktop machine; on weak mobile devices it will take much longer. At least I don’t see a use for games or the usual apps, but certainly for 3D scans.

At a certain triangle count an average device starts to struggle rendering it anyway. This César model, for example, could be optimized so that it looks basically the same at only around 1-2 MB in uncompressed binary. Besides the many unnecessary faces, the flat details on the fabric or armor will look even better/smoother baked into normal maps. The distortion of scans creates a lot of wrong details.

1 Like

You can always optimize meshes, independently of compression. The demo is just meant to show the difference between the original OBJ file (more or less raw geometry data) and a compressed glTF asset.

Interesting. The pure decompression takes only ca. 2 seconds on my four-year-old iMac with the latest Chrome.

1 Like

Yes, and I agree that Draco does a great job. I just don’t see a reason to use it in general, since it’s really only relevant for big files. For the web, one should always optimize the models first before trying to compress them as much as possible, since triangle count is the more important point for rendering when it comes to games and such.

I really like Draco and will integrate it into my own format as well.

By optimizing scan models, baking details etc., mobile devices won’t struggle or die rendering them at the same quality. But that only applies to rendering the model in its entirety; with a LOD rendering technique the hardware limit can be avoided, though it would require storing additional data, which I think Draco doesn’t handle.

I’ve got an i7 5930K with 6 cores at 3.5 GHz, 32 GB DDR4 RAM and a Titan X, with the latest Chrome.

1 Like

Compressing media data is standard in many areas, like pictures, video or audio. Nobody would come up with the idea of embedding uncompressed image data in a website (established formats like JPEG, PNG or GIF all use compression by default). From my point of view, there is no reason why 3D model data should be a special case. Even small files benefit from compression, since the total delay (load + parse/decode) is in general smaller than when loading uncompressed data.

2 Likes

Of course compression is relevant, but for average-sized buffers other, faster compression methods suffice as well. And as said, optimizing the actual content is the most important part, and it reduces the file size as well.

1 Like

How does it generally compare to gzipped JSON? Why does JSON gzip better than a binary mesh? On a high level, is it just a different compression algorithm, while Draco is specifically aimed at spatial data?

Theoretically, if the Draco decoder were part of the browser, there would be no question about which format to use. But currently, since browsers can unzip natively, zipped JSON might win when both size and decoding time are considered?

I literally just started testing with Draco today for my non-skinned meshes. The file size reduction is absolutely nuts: 171 KB => 9 KB, for example.

What I still need to benchmark is how much overhead the decompression adds, to see if the faster download times are worth it. Will do that sometime this week.
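A quick way to get a first number (a sketch; it lumps network and decode time together, so for a fair comparison you’d want to measure the uncompressed asset the same way):

```js
// Rough timing: measure fetch + Draco decode together for one asset.
// Assumes a GLTFLoader with a DRACOLoader attached and an existing `scene`.
const t0 = performance.now();

gltfLoader.load('model-draco.glb', (gltf) => {
  const elapsedMs = performance.now() - t0;
  console.log(`load + decode: ${elapsedMs.toFixed(1)} ms`);
  scene.add(gltf.scene);
});
```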

3 Likes