The most efficient way to display heavy environments

Hi! I’m one of the devs of IFC.js.

The problem

How can we represent big geometric environments efficiently? That’s the question.

I’m opening this thread as an open discussion because I assume that people wanting to represent environments for visualization/videogames might be in a similar situation. Hopefully, our findings will be useful for others and we’ll be able to learn other points of view. :slightly_smiling_face:

My goal is:

  • Big scenes that are efficient both in terms of memory :brain: and speed/fps :rabbit2:.
  • Being able to retrieve each individual object of those scenes fast. By “retrieve” I mean getting its ID and being able to create a Mesh with its geometry.

Our findings

As far as we know, there are 2 ways to optimize Three.js scenes:

Merge everything into one BufferGeometry

:rabbit2::white_check_mark: This reduces the draw calls to a minimum. This is what we have done so far. Here is an example of this. Selecting individual items of a merged BufferGeometry efficiently is not simple, but we already solved that by creating a map that stores the positions of each object in the buffers. It works and it’s fast.
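
For context, here is a minimal sketch of the kind of merging + lookup map we mean (not our exact implementation; the helper is `mergeGeometries` in recent three.js releases and `mergeBufferGeometries` in older ones):

```ts
import * as THREE from 'three';
import { mergeGeometries } from 'three/examples/jsm/utils/BufferGeometryUtils.js';

// Map from object ID to its index range inside the merged geometry,
// so individual items can be found (or rebuilt as a Mesh) later.
const ranges = new Map<number, { start: number; count: number }>();

function mergeWithMap(items: { id: number; geometry: THREE.BufferGeometry }[]): THREE.Mesh {
  let offset = 0;
  const geometries = items.map(({ id, geometry }) => {
    const count = geometry.index ? geometry.index.count : geometry.attributes.position.count;
    ranges.set(id, { start: offset, count });
    offset += count;
    return geometry;
  });
  // One geometry + one material = one draw call for the whole set
  const merged = mergeGeometries(geometries, false);
  return new THREE.Mesh(merged, new THREE.MeshLambertMaterial());
}
```

Retrieving one item is then just a matter of slicing the merged buffers between `start` and `start + count` for its ID.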

:brain::x: This is very memory intensive. In this system, a model containing two identical chairs occupies twice as much RAM as a model with one chair. This is problematic because, when it comes to models of large buildings, we are talking about thousands or millions of objects.

Use InstancedMesh

:brain::white_check_mark: This has the advantage that it consumes very little memory. That is, an InstancedMesh of a chair with 2 materials and 1000 instances occupies the memory of 1 chair and consumes 2 draw calls. In this case, it is easy to retrieve individual instances. You can check this out here.
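
A rough sketch of that (the chair geometry, materials, scene and raycaster are placeholders for things you would already have):

```ts
import * as THREE from 'three';

// Assumed to exist already (loaded model, scene setup, picking)
declare const chairGeometry: THREE.BufferGeometry; // has 2 material groups
declare const woodMaterial: THREE.Material;
declare const fabricMaterial: THREE.Material;
declare const scene: THREE.Scene;
declare const raycaster: THREE.Raycaster;

// 1000 chairs: the geometry lives in memory once, 2 draw calls (one per group)
const chairs = new THREE.InstancedMesh(chairGeometry, [woodMaterial, fabricMaterial], 1000);

const matrix = new THREE.Matrix4();
for (let i = 0; i < 1000; i++) {
  matrix.setPosition(i % 40, 0, Math.floor(i / 40));
  chairs.setMatrixAt(i, matrix);
}
chairs.instanceMatrix.needsUpdate = true;
scene.add(chairs);

// Retrieving an individual instance is easy: a raycast hit carries instanceId
const hit = raycaster.intersectObject(chairs)[0];
if (hit && hit.instanceId !== undefined) chairs.getMatrixAt(hit.instanceId, matrix);
```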

:rabbit2::x: The disadvantage of this system is that each InstancedMesh will have as many draw calls as materials, which means that if we apply this to all objects in a building, we will end up with thousands of draw calls since there are thousands of unique objects.

Other findings

  • We have been told that maybe we should look into multidrawing.
  • We could perhaps combine instances with LODs (see the sketch after this list).
  • Occlusion culling could also help?
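
On the LOD point, the built-in THREE.LOD already handles distance switching per object; this sketch just shows the mechanism (the chair geometries are placeholders, and combining it with InstancedMesh would need extra bookkeeping, since all instances in one mesh share the same level):

```ts
import * as THREE from 'three';

declare const scene: THREE.Scene;                  // assumed to exist
declare const highPolyChair: THREE.BufferGeometry; // placeholders for real assets
declare const lowPolyChair: THREE.BufferGeometry;

const material = new THREE.MeshLambertMaterial();
const lod = new THREE.LOD();

lod.addLevel(new THREE.Mesh(highPolyChair, material), 0);              // up close
lod.addLevel(new THREE.Mesh(lowPolyChair, material), 20);              // mid distance
lod.addLevel(new THREE.Mesh(new THREE.BoxGeometry(), material), 100);  // far away

scene.add(lod); // the renderer switches levels automatically while rendering
```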

Our solution

We are designing a hybrid system called Fragment. The idea is to use the best of both worlds behind the same interface. That is:

  • Low poly objects: merge them into one/many BufferGeometries.
  • High poly objects: create one InstancedMesh for each one.

In our use case, all the walls/floors are low-poly and unique, while all the doors/furniture/windows are high-poly and repeated, so it’s quite straightforward. Maybe there is a formula to generalize this for any use case.
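
One naive formula we could imagine (purely hypothetical, the threshold is made up): merging duplicates a geometry once per occurrence, so instance it when that duplicated memory would cost more than the extra draw calls are worth.

```ts
// Hypothetical heuristic, not IFC.js code. The threshold would need tuning.
function shouldInstance(vertexCount: number, occurrences: number): boolean {
  // Extra vertices we would pay by baking every occurrence into a merged buffer
  const duplicatedVertices = vertexCount * (occurrences - 1);
  // How much duplicated memory we tolerate before an extra draw call is cheaper
  const VERTEX_BUDGET = 10_000;
  return duplicatedVertices > VERTEX_BUDGET;
}
```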

In our proposal, a Fragment can be a BufferGeometry composed of several objects, but it can also be instantiated. This means that 1,000 identical chairs would be a fragment, but also that all the walls of the project could be a single fragment. At the persistence level, we are thinking of storing one GLB per fragment.
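
To make that concrete, this is roughly the shape we have in mind, sketched only (the name Fragment matches our proposal, but the fields and the getInstance helper here are illustrative, not the final API):

```ts
import * as THREE from 'three';

// One Fragment = one InstancedMesh: 1,000 chairs → 1,000 instances,
// all the walls → 1 instance whose geometry is a merged BufferGeometry.
class Fragment {
  mesh: THREE.InstancedMesh;
  // Which object IDs live in which instance (and, for merged geometry,
  // in which index range of the shared buffer)
  ids = new Map<number, { instance: number; start?: number; count?: number }>();

  constructor(
    geometry: THREE.BufferGeometry,
    material: THREE.Material | THREE.Material[],
    count: number
  ) {
    this.mesh = new THREE.InstancedMesh(geometry, material, count);
  }

  // Rebuild a standalone Mesh for one item, e.g. for highlighting it
  getInstance(id: number): THREE.Mesh | undefined {
    const entry = this.ids.get(id);
    if (!entry) return undefined;
    const matrix = new THREE.Matrix4();
    this.mesh.getMatrixAt(entry.instance, matrix);
    const clone = new THREE.Mesh(this.mesh.geometry, this.mesh.material);
    clone.applyMatrix4(matrix);
    return clone;
  }
}
```

The merged case is then just the degenerate fragment with count = 1, which is what lets both paths hide behind the same interface.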

The idea is that all fragments have approximately the same number of vertices and draw calls. We hope that, this way, we can achieve a good balance between memory consumption and draw calls!

We started this a short time ago, but we hope to have promising results very soon :slightly_smiling_face:


I will be following your project on the most efficient way to display heavy environments to help my project load more assets and players.


Grouping things broadly feels nice but has limits; grouping things into dissected chunks and LOD steps helps. Here are some ideas:

  • All the objects in a room could be grouped together; this way you could show/hide groups based on whether you enter/leave the room (or hide them all when looking at the building from outside). A minimal sketch of this is after the list.

  • Every room could be rendered as an Interior Cubemap when looking at the building from outside (or peeking at another room from a doorway or window).

  • Far away objects could be rendered with billboards based on an automatically generated texture atlas of different points of view of the objects (this article explains it very well).

  • Extremely far away objects could be rendered as colored points instead of meshes lol
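
For the first point, the cheapest version is just one THREE.Group per room and toggling visible (a sketch only; how you decide which room the camera is in is up to you, e.g. via bounding boxes):

```ts
import * as THREE from 'three';

declare const scene: THREE.Scene; // assumed to exist

// One group per room; visibility is inherited, so hiding a group
// skips all of its children during rendering.
const rooms = new Map<string, THREE.Group>();

function addToRoom(roomId: string, object: THREE.Object3D): void {
  let group = rooms.get(roomId);
  if (!group) {
    group = new THREE.Group();
    group.name = roomId;
    rooms.set(roomId, group);
    scene.add(group);
  }
  group.add(object);
}

function showOnlyRoom(roomId: string): void {
  for (const [id, group] of rooms) group.visible = id === roomId;
}
```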


That’s nice, Matrix-like interior cubemaps. Is there a tutorial on how to implement them with three.js?

Maybe ask someone from the Minecraft team how they’re going about it. It follows from the idea of grouping things so that you’re not thinking of everything. Spatial grids and whatnot.

Game dev tends to be a lot of UI trickery. Like restricting the camera to simpler scenes. In the case of buildings, viewing just one at a time or even floor by floor, or room by room. The trick would be handling the transition between floors and rooms and ignoring everything else.
