Our team is currently working on an online 3D web editor, allowing users to save and upload directly through the browser.
While evaluating server efficiency and user experience, we’re considering two options:
1. Users can upload a GLB file to the server and download assets as needed. If a small change is made, a new GLB file would be uploaded. For instance, if a user downloads “tree.glb” from the server, makes edits, and saves, we would then have a “newTree.glb” on the server.
2. When a user wants to save their project, we could break down all GLB files, checking if the “material of the tree” has already been uploaded to the server. Subsequently, the entire project’s JSON, transformed from GLB but excluding any duplicated materials, would be sent to the server.
We’re facing a bit of debate between these two options. The first might lead to extensive DB storage usage. The second could make the system more complex and, by transferring non-binary formats like glTF JSON or the output of JSON.stringify on a three.js object, it might also increase network load.
Would breaking down .glb files to reuse materials be efficient, or could it worsen network performance and negatively impact the user experience? Any insights or suggestions for a saving logic in an online 3D web editor would be greatly appreciated.
I think the heaviest part of a material is the actual textures it uses… and most often, those are unique per model for user-generated content. If users are limited to the assets in your app, then you may have some luck de-duping those texture references to allow better re-use. My instinct would be to focus on your editor and either ignore the storage issues until your userbase size makes it a (good) problem… Or… look into using the newer local file access APIs to allow users to edit local content without roundtripping to the server… https://developer.mozilla.org/en-US/docs/Web/API/File_System_API
I’m thinking along the lines of a webapp like https://www.photopea.com/ … they let you drag a jpg from your desktop onto their page… without uploading it to their servers… edit it… and hit Ctrl-S to save back to the original local file… skipping the downloads folder etc.
You’ll need to decide if your business model is storing users content, in which case you’re going to have to factor in storage cost as part of your business, or whether it’s simply providing an editor that people will use.
Wow… thank you for your insight!! My friends and I were just debating what we should focus on!!
However, we reached a consensus that our primary focus is server-side management of assets and geometry information. While the idea of managing these assets locally, as a typical 3D editor does, sounds intriguing, implementing such a system comes with its own set of challenges.
You’ve pointed out the benefits of handling assets on the web editor, and I appreciate that. As we delve deeper into this project, we find ourselves at a crossroads: should we maintain our current approach with .glb or explore a new logic to accommodate a large user base?
While we may not have a vast user base now, I’m curious about the frequency of asset overlap in general 3D projects. Being a developer and only a casual user of Blender, I’m not entirely familiar with how often assets are reused across different projects. If there’s a significant chance of asset repetition, some of our proposed logic might become redundant.
Plus, if the advantages of .glb outweigh the potential increase in DB storage, or… if there’s any customized GLTF loader that handles this logic easily, I’d love to learn more about it.
What about this approach: each asset stored on your side has a unique id that is itself a tiny asset of the same type. For example, a 10MB texture’s id is a unique mini-texture, let’s say 32x32 pixels; a geometry with 100k vertices has as its id a unique small geometry of 20 vertices. The mini-id does not need to resemble the big asset, but it should be unique (like a texture-UUID, or geometry-UUID).
- you have stored individual assets and their ids
- models are stored as is, but embedded assets are replaced with their ids (this makes the file size much smaller)
- user downloads a model from the server (with the mini-ids)
- your client software identifies these ids, and if these resources are not already downloaded, they are downloaded from the server
- then the software replaces the mini-ids in the model with the original big assets
- the user works with the model and modifies it
- before uploading, all unmodified assets in the model are replaced by their mini-ids; all new assets or modified assets are kept unchanged
- the model is then uploaded (and the file is much smaller)
- when a model is received, it is already minified to some extent. If there are new assets, they are extracted and stored individually, their mini-ids are generated and put back into the model, and the completely minified model is stored
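The round trip described in the list above can be sketched with plain objects standing in for GLB assets; the store and function names here are made up for illustration:

```javascript
// Asset store: id -> full asset (server-side in practice; in-memory here).
const assetStore = new Map();

// Before transfer: swap each embedded asset for a small id reference,
// registering any asset the store has not seen yet.
function minifyModel(model) {
  return {
    ...model,
    assets: model.assets.map((asset) => {
      if (!assetStore.has(asset.id)) assetStore.set(asset.id, asset);
      return { isMiniId: true, id: asset.id };
    }),
  };
}

// After transfer: replace every mini-id reference with the original big asset.
function expandModel(minified) {
  return {
    ...minified,
    assets: minified.assets.map((ref) =>
      ref.isMiniId ? assetStore.get(ref.id) : ref
    ),
  };
}
```

Only new or modified assets would travel with their full payload; everything else crosses the wire as an id.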
Note: the reason to replace big textures with small id-textures, big geometries with small id-geometries, and so on, is to keep the transfer files valid (e.g. minified GLBs will still be valid GLBs). For example, with textures, the pixel colors may actually encode some UUID.
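To illustrate the “UUID in the pixels” idea: a 16-byte UUID fits in the first four pixels of a 32x32 RGBA placeholder, and the remaining pixels can be filler. A sketch under that assumption, not tied to any particular GLB tooling:

```javascript
// Encode a UUID into the first 16 bytes of a 32x32 RGBA pixel buffer
// (4096 bytes total), producing a tiny but structurally valid texture payload.
function encodeUuidTexture(uuid) {
  const pixels = Buffer.alloc(32 * 32 * 4, 0xff); // opaque white filler
  Buffer.from(uuid.replace(/-/g, ""), "hex").copy(pixels, 0);
  return pixels;
}

// Read the UUID back out of the placeholder's first 16 bytes.
function decodeUuidTexture(pixels) {
  const hex = pixels.subarray(0, 16).toString("hex");
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join("-");
}
```

The client never displays this texture; it only reads the UUID back out and fetches the real asset.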
Now something off topic, about decision making in projects:
It feels like dealing with LOD, but in my case the problem is that so many materials would be stored in my S3 bucket, which holds lots of .glb files.
So my idea is to break down the .glb files so their textures don’t overlap; that is, not to store the same “.png” files in the S3 bucket more than once.
We are at the stage of “deciding whether to implement A or B”.