Hello! I’m one of the devs of IFC.js.
The problem
How to represent big geometric environments efficiently? That’s the question.
I’m opening this thread as an open discussion because I assume that people wanting to represent environments for visualization/videogames might be in a similar situation. Hopefully, our findings will be useful for others and we’ll be able to learn other points of view.
My goal is:
- Big scenes that are efficient both in terms of memory and speed/fps.
- Being able to retrieve each individual object of those scenes fast. By “retrieve” I mean getting its ID and being able to create a Mesh with its geometry.
Our findings
As far as we know, there are 2 ways to optimize Three.js scenes:
Mix everything in a BufferGeometry
This reduces the draw calls to a minimum. This is what we have done so far. Here is an example of this. Selecting individual items of a merged BufferGeometry efficiently is not simple, but we already solved that by creating a map that stores the positions of each object in the buffers. It works and it’s fast.
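The lookup map idea can be sketched like this (the names and buffer layout are illustrative, not the actual IFC.js code): each object’s vertices occupy a contiguous range of the merged buffer, so retrieving an object is a map lookup plus a typed-array slice rather than a scan.

```javascript
// Merged position buffer: two triangles, one per "object".
const mergedPositions = new Float32Array([
  0, 0, 0,  1, 0, 0,  0, 1, 0,   // object 101
  2, 0, 0,  3, 0, 0,  2, 1, 0,   // object 102
]);

// objectID -> { start, count }, expressed in vertices.
const ranges = new Map([
  [101, { start: 0, count: 3 }],
  [102, { start: 3, count: 3 }],
]);

// Extract one object's vertices without scanning the whole buffer.
function getObjectPositions(id) {
  const range = ranges.get(id);
  if (!range) return null;
  // subarray is a view, so this doesn't copy the data.
  return mergedPositions.subarray(range.start * 3, (range.start + range.count) * 3);
}

console.log(getObjectPositions(102)); // the 9 floats of object 102 only
```

The same ranges can then feed a `BufferGeometry` for the selected object, or drive highlighting by rewriting only that slice of a color attribute.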
The downside is that this is very memory intensive. In this system, a model with two identical chairs occupies twice as much RAM as a model with one chair. This is problematic because, when it comes to models of large buildings, we are talking about thousands or millions of objects.
Use InstancedMesh
This has the advantage that it consumes very little memory. That is, an InstancedMesh of a chair with 2 materials and 1000 instances occupies the memory of a single chair and costs 2 draw calls. In this case, it is also easy to retrieve individual instances. You can check this out here.
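A rough sketch of the accounting behind that claim (the numbers and the cost model are illustrative, not measured data): the geometry is paid for once, each extra instance only adds a 4x4 transform matrix, and draw calls depend on the material count, not the instance count.

```javascript
// Back-of-the-envelope cost model for the InstancedMesh approach.
function instancedCost(vertexCount, materialCount, instanceCount) {
  const bytesPerVertex = 3 * 4;                        // positions only, Float32
  const geometryBytes = vertexCount * bytesPerVertex;  // paid once, shared by all instances
  const matrixBytes = instanceCount * 16 * 4;          // one 4x4 float matrix per instance
  return {
    memoryBytes: geometryBytes + matrixBytes,
    drawCalls: materialCount, // one call per material, regardless of instance count
  };
}

// A 10k-vertex chair, 2 materials, 1000 instances:
const cost = instancedCost(10_000, 2, 1000);
console.log(cost.drawCalls); // 2
```

The matrices grow linearly with the instance count, but at 64 bytes per instance they are negligible next to duplicating the full geometry 1000 times.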
The disadvantage of this system is that each InstancedMesh will have as many draw calls as materials, which means that if we apply this to all objects in a building, we will end up with thousands of draw calls since there are thousands of unique objects.
Other findings
- We have been told that maybe we should look into multi-draw rendering (e.g. the WEBGL_multi_draw extension).
- We could perhaps combine instances with LODs.
- Could occlusion culling also help?
Our solution
We are designing a hybrid system called Fragment. The idea is to use the best of both worlds behind the same interface. That is:
- Low poly objects: merge them into one/many BufferGeometries.
- High poly objects: create one InstancedMesh for each one.
In our use case, all the walls/floors are low-poly and unique, while all the doors/furniture/windows are high-poly and repeated, so it’s quite straightforward. Maybe there is a formula to generalize this for any use case.
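As a starting point for such a formula, the split could be a simple heuristic like the one below. This is a hypothetical classifier, not the actual Fragment implementation, and the threshold value is an assumption:

```javascript
// Hypothetical rule for the hybrid split: instance anything repeated or heavy,
// merge anything unique and cheap. polyThreshold is an assumed tuning knob.
function chooseStrategy(vertexCount, instanceCount, polyThreshold = 1000) {
  // Repeated geometry: instancing pays the geometry cost only once.
  if (instanceCount > 1) return "instance";
  // Unique but low poly: merging it removes a draw call at little memory cost.
  if (vertexCount <= polyThreshold) return "merge";
  // Unique and high poly: keep it as its own mesh so it doesn't bloat a merged buffer.
  return "instance";
}

console.log(chooseStrategy(200, 1));   // "merge"    (e.g. a unique wall)
console.log(chooseStrategy(5000, 40)); // "instance" (e.g. a repeated door model)
```

A real formula would probably also weigh material count and expected visibility, but even a two-branch rule like this covers the walls-vs-furniture split described above.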
In our proposal, a Fragment can be a BufferGeometry composed of several objects, but it can also be instanced. This means that 1,000 identical chairs would be a fragment, but also that all the walls of the project could be a single fragment. At the persistence level, we are thinking of storing one GLB per fragment.
The idea is that all fragments have approximately the same number of vertices and draw calls. We hope that this will strike a good balance between memory consumed and draw calls needed!
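One way to get fragments of roughly equal size is a greedy packer over the unique low-poly objects. This is only a sketch of the balancing idea, and the vertex budget is an assumed parameter:

```javascript
// Greedily pack objects into merged fragments of roughly equal vertex count.
function packIntoFragments(objects, vertexBudget = 10000) {
  const fragments = [];
  let current = { ids: [], vertexCount: 0 };
  for (const obj of objects) {
    // Flush the current fragment when the next object would exceed the budget.
    if (current.vertexCount + obj.vertexCount > vertexBudget && current.ids.length > 0) {
      fragments.push(current);
      current = { ids: [], vertexCount: 0 };
    }
    current.ids.push(obj.id);
    current.vertexCount += obj.vertexCount;
  }
  if (current.ids.length > 0) fragments.push(current);
  return fragments;
}

const walls = [
  { id: "wall-1", vertexCount: 4000 },
  { id: "wall-2", vertexCount: 4000 },
  { id: "wall-3", vertexCount: 4000 },
];
console.log(packIntoFragments(walls).length); // 2 fragments: 8000 + 4000 vertices
```

Sorting objects by spatial proximity before packing would additionally keep each fragment localized, which helps frustum culling of whole fragments.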
We started this a short time ago, but we hope to have promising results to share very soon.