Hello,
For me, the main issue with web 3D is high memory consumption. High-quality models have many polygons, so they consume a lot of RAM.
glTF is promoted as “JPEG for 3D”, but is it really a solution to this issue? Smaller file sizes help with traffic, but in real usage, unpacked HQ models will still be hungry for memory.
Am I wrong? Is it possible to reduce memory consumption with the new glTF format?
Let me add another question to this… I just did my first test of the glTF exporter the other day, using embedded textures… it worked great, and I was impressed. But I haven’t had a chance to look into disposing of glTF stuff… Does anyone know if we would dispose of glTFs the same way as three.js meshes? Like the following:
scene.remove(glTF_Mesh);
glTF_Mesh.geometry.dispose();
glTF_Mesh.material.dispose();
glTF_Mesh.material.map.dispose();
glTF_Mesh.material.bumpMap.dispose();
glTF_Mesh.material.envMap.dispose();
(and any other maps we have embedded, obviously)
@FrankSilvaHM — disposing works the same as with other THREE.Mesh instances. If you’re using the KHR_materials_pbrSpecularGlossiness extension, your materials may have an additional texture (material.glossinessMap) that is not normally there on a three.js PBR material, but otherwise it’s the same.
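Since GLTFLoader returns standard three.js meshes, the per-map dispose pattern can be wrapped in a small generic helper. This is just a sketch, not an official three.js API: the `disposeMesh` name is made up here, and textures are detected via the `isTexture` flag that three.js sets on texture objects.

```javascript
// Illustrative dispose helper for a mesh loaded via GLTFLoader
// (the helper itself is not part of three.js).
function disposeMesh(mesh) {
  if (mesh.geometry) mesh.geometry.dispose();
  // mesh.material can be a single material or an array of materials.
  const materials = Array.isArray(mesh.material) ? mesh.material : [mesh.material];
  for (const material of materials) {
    if (!material) continue;
    // Dispose every texture the material references (map, bumpMap,
    // envMap, glossinessMap, ...) by checking the isTexture flag.
    for (const value of Object.values(material)) {
      if (value && value.isTexture) value.dispose();
    }
    material.dispose();
  }
}
```

You would still call `scene.remove(mesh)` first, as in the snippet above; removal and disposal are separate steps.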
@EtagiBI typically the RAM available to your CPU is a relatively small aspect of a WebGL scene’s performance. Mostly you’ll use that memory while loading the model, and yes — glTF is designed to be very efficient to load. The mesh data is already in a binary buffer that can be uploaded directly to the GPU without copying it anywhere. The implementation of THREE.GLTFLoader is not entirely “in place”, but should still be pretty good. Compare that to text-based formats like OBJ, where every character in the file needs to be parsed and converted to an array.
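To illustrate the parsing difference (a toy sketch, not GLTFLoader’s actual code): a text format like OBJ needs per-value string parsing, while a glTF binary chunk can simply be wrapped in a typed-array view.

```javascript
// Text format: every value must be split out of the string and parsed.
const objLine = 'v 1.5 2.0 3.25';
const parsedFromText = objLine.slice(2).trim().split(/\s+/).map(Number);

// Binary buffer (as in a glTF .bin chunk): the same three floats are
// read back by creating a typed-array view over the buffer, no parsing.
const buffer = new Float32Array([1.5, 2.0, 3.25]).buffer;
const parsedFromBinary = new Float32Array(buffer, 0, 3);

console.log(parsedFromText);               // [ 1.5, 2, 3.25 ]
console.log(Array.from(parsedFromBinary)); // [ 1.5, 2, 3.25 ]
```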
Beyond memory usage, a few other things to keep in mind:
- Draw calls are costly. With any format you use, try to merge meshes and materials before export.
- Mesh compression (see glTF’s Draco extension) can reduce the download size of your model.
- Quantized attributes can reduce both download size and GPU memory usage. See https://cesium.com/blog/2016/08/08/cesium-web3d-quantized-attributes/ (although that post is a bit out of date).
- Texture compression can dramatically reduce the amount of GPU memory needed for textures, and frames dropped when uploading them. A glTF extension for cross-platform compressed textures is in progress but incomplete.
- More advanced techniques (LODs, impostors, …) can be used regardless of your file format.
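As a rough sketch of the quantized-attributes idea from the list above (the values and the [min, max] range here are made up for illustration): positions remapped into 16-bit integers take half the memory of 32-bit floats, at the cost of a small, bounded precision loss.

```javascript
// Full-precision positions: 4 bytes per component.
const positions = new Float32Array([-9.5, 0.0, 7.25]);

// Quantize each component from a known bounding range [min, max]
// into an unsigned 16-bit integer: 2 bytes per component.
const min = -10, max = 10;
const quantized = new Uint16Array(positions.length);
for (let i = 0; i < positions.length; i++) {
  quantized[i] = Math.round(((positions[i] - min) / (max - min)) * 65535);
}

// Dequantize (on the GPU in practice; here just to check the error bound).
const restored = Array.from(quantized, (q) => (q / 65535) * (max - min) + min);

console.log(positions.byteLength, quantized.byteLength); // 12 6
```

The maximum error is on the order of (max − min) / 65535, which is usually far below what’s visible for typical model scales.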
glTF is not the only format designed with performance optimizations, but IMO it is the first to get really widespread support. An advantage of this is that there are beginning to be more tools, like Compressonator and glTF-Pipeline, providing convenient ways to optimize a model. Other common formats like OBJ, FBX, and COLLADA are designed for interchange between tools and do not focus on performance much. The new USDZ format may also perform well, but (1) details are limited at this point, and (2) it’s dependent on a very large client library, so not currently a good choice for use with WebGL.
glTF doesn’t change anything about GPU memory, but it avoids the large allocations required by non-binary loaders, or by any loader that performs a conversion. And even for loaders that allocate a lot of memory during loading, that memory is released afterwards and isn’t required permanently; the end result is the same as loading a glTF file.
Regarding “JPEG for 3D”: this only refers to file compression, so the file downloads faster but takes a little time to decompress, depending on how strongly it’s compressed. Once loaded, it remains in memory uncompressed.
glTF doesn’t change anything on the GPU memory…
This isn’t strictly true — there are techniques to reduce GPU memory usage, like quantized attributes and compressed textures. But neither is an official glTF feature quite yet — texture compression support is actively being developed, and quantized attributes may come in the future.
Regarding “JPEG for 3D”, this only is meant for file compression…
We use the phrase more broadly than that — think of PSD vs JPEG, rather than BMP vs JPEG. A Photoshop file preserves lots of data that’s useful while editing a file, so it’s a “good” format, but you wouldn’t write a JS parser and embed Photoshop files on your website. The format has much more complexity than is needed just to view an image, so we export to JPEG or PNG. The same is true of COLLADA and FBX — they have many features, and many ways of representing the same thing, which is all great for content pipelines but makes it hard to render consistently. By exporting to glTF you strip this down to the subset needed to render. For a concrete example, a glTF file cannot contain n-gons — the model must be triangulated at export time.
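To make the triangulation point concrete, here’s the simplest possible conversion, a triangle fan over a polygon’s indices. This is only a sketch of the idea: it works for convex polygons, and real exporters use more robust algorithms that also handle concave cases.

```javascript
// Fan-triangulate a convex polygon given its vertex indices.
// glTF stores triangles only, so an exporter must do something like this
// for any n-gon in the source model (illustrative, not an exporter's code).
function triangulateFan(indices) {
  const triangles = [];
  for (let i = 1; i < indices.length - 1; i++) {
    triangles.push([indices[0], indices[i], indices[i + 1]]);
  }
  return triangles;
}

console.log(triangulateFan([0, 1, 2, 3])); // [ [ 0, 1, 2 ], [ 0, 2, 3 ] ]
```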
But of course, glTF does also offer (optional) compression, which gives faster download at the cost of decompression time.
There are binary loaders already, and writing your own isn’t very difficult, but support for this format is of course far more widespread, and it spreads quickly when it comes from Khronos.
Compression is great, but it’s mostly just about the transfer; the OP asked about memory consumption. Very big point data, for example, would rather be streamed instead of trying to compress an elephant into a smartphone at a high client-side decompression cost.
Yeah, that’s a good point. If an application really is memory-constrained then (mesh) compression is likely to just make things worse.
Hey thanks Don! You really seem to know your stuff! Are you one of the dudes writing the glTF spec or something?
No problem! Yes, I’m a member of the 3D Formats Working Group at Khronos and one of the authors of THREE.GLTFLoader.
@donmccurdy, @Fyrestar thank you for your thorough replies/comments!
The practical side of my question is that in our three.js-based project we’re thinking of moving from FBX to glTF. Oddly enough, our comparison tests showed that glTF is on par with FBX in terms of file size and decompression times. So I was wondering if there are any advantages to glTF that can boost performance / reduce memory consumption.
Could you say what numbers you’re seeing, more specifically? Unless you have explicitly chosen to compress the model, there is no decompression step. Compression will decrease file size but increase both loading time and memory usage… So I’m not quite sure which thing you’re trying to improve.
Just to pick the first example at hand, here’s the Samba Dancing.fbx file in the three.js examples folder:

| file | filesize | parse time |
| --- | --- | --- |
| FBX | 3.7 MB | 900ms |
| glTF | 2.8 MB | 50ms |
| glTF + Draco | 600 KB | 750ms |
In all three cases the model loads and animates correctly. I have not included file download in the measurement of parsing time.
Don, it sure would be wonderful if the Blender glTF 2 Exporter had an Enable Precision feature that allowed us to set float lengths, just like the three exporter does. Any chance this is coming to glTF 2?
Even though glTF uses JSON, all vertex data is in one or more binary chunks. By default that’s much more efficient:
var a = [];
for (var i = 0; i < 100; i++) { a.push(Math.random()) }
new TextEncoder().encode(JSON.stringify(a)).byteLength // 1930 bytes (JSON)
new Float32Array(a).byteLength // 400 bytes (binary)
See the sections of the spec on accessors and binary storage. Since this data isn’t converted to a string, changing precision won’t directly improve file size — quantization is probably what you’d want, or compression, per my earlier post.
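To make that concrete (a pure illustration, not anything GLTFLoader does): rounding values to 3 decimals shrinks a JSON string, but leaves a Float32 buffer exactly the same size, since every float still occupies 4 bytes.

```javascript
const values = [0.123456, 1.987654, 2.5];
// Round each value to 3 decimal places.
const rounded = values.map((v) => Math.round(v * 1000) / 1000);

// In JSON, fewer digits means fewer bytes...
console.log(JSON.stringify(values).length > JSON.stringify(rounded).length); // true

// ...but in a binary buffer the size is fixed at 4 bytes per value.
console.log(new Float32Array(values).byteLength);  // 12
console.log(new Float32Array(rounded).byteLength); // 12
```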
The one big exception is that metadata like material values and object position/rotation/scale are stored as JSON in glTF, but I haven’t seen any examples yet where that’s been a real factor in overall file size.
Ahh. Gotcha. Binary is different. Makes sense. I’ve just always liked that three could be knocked down like that, and for many things, 3 decimal places are perfectly adequate; often you can’t tell the difference from 6. Thanks again for everything. And boy am I glad we have a glTF team member around here!
Well, this kind of depends on your scale: you could have a very detailed model in a normalized space, in which case 3 decimals might not really cut it…