The most efficient way to display heavy environments

Hola! I’m one of the devs of IFC.js.

The problem

How to represent big geometric environments efficiently? That’s the question.

I’m opening this thread as an open discussion because I assume that people wanting to represent environments for visualization/videogames might be in a similar situation. Hopefully, our findings will be useful for others and we’ll be able to learn other points of view. :slightly_smiling_face:

My goal is:

  • Big scenes that are efficient both in terms of memory :brain: and speed/fps :rabbit2: .
  • Being able to retrieve each individual object of those scenes fast. By “retrieve” I mean getting its ID and being able to create a Mesh with its geometry.

Our findings

As far as we know, there are 2 ways to optimize Three.js scenes:

Mix everything in a BufferGeometry

:rabbit2::white_check_mark: This reduces draw calls to a minimum. This is what we have done so far. Here is an example of this. Selecting individual items of a merged BufferGeometry efficiently is not simple, but we already solved that by creating a map that stores the position of each object in the buffers. It works and it’s fast.
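To illustrate the idea (this is a sketch, not the actual IFC.js code — the names `mergeWithIndex`, `expressID`, etc. are made up): while merging geometries into one big buffer, record where each item landed, so it can be retrieved later by ID as a cheap subarray view.

```javascript
// Merge several items' vertex positions into one flat buffer, recording
// the { start, count } range of each item so it can be found again by ID.
function mergeWithIndex(items) {
  const positions = [];
  const index = new Map(); // expressID -> { start, count } in the merged buffer
  for (const item of items) {
    index.set(item.expressID, { start: positions.length, count: item.positions.length });
    positions.push(...item.positions);
  }
  return { positions: new Float32Array(positions), index };
}

// Retrieving one item back out of the merged buffer is just a subarray view,
// no copying and no scene traversal.
function getItem(merged, expressID) {
  const { start, count } = merged.index.get(expressID);
  return merged.positions.subarray(start, start + count);
}
```

In three.js the same range could then feed a highlight mesh or a `BufferGeometry` group; the point is that the lookup cost is independent of how many objects were merged.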

:brain::x: This is very memory intensive. In this system, a model containing two identical chairs occupies twice as much RAM as a model with one chair. This is problematic because models of large buildings contain thousands or millions of objects.

Use InstancedMesh

:brain::white_check_mark: This has the advantage of consuming very little memory: an InstancedMesh of a chair with 2 materials and 1000 instances occupies the memory of 1 chair and consumes 2 draw calls. In this case, it is easy to retrieve individual instances. You can check this out here.

:rabbit2::x: The disadvantage of this system is that each InstancedMesh will have as many draw calls as materials, which means that if we apply this to all objects in a building, we will end up with thousands of draw calls since there are thousands of unique objects.

Other findings

  • We have been told that maybe we should look into multi-draw.
  • We could perhaps combine instances with LODs.
  • Occlusion culling could also help?

Our solution

We are designing a hybrid system called Fragment. The idea is to use the best of both worlds behind the same interface. That is:

  • Low poly objects: merge them into one/many BufferGeometries.
  • High poly objects: create one InstancedMesh for each one.

In our use case, all the walls/floors are low-poly and unique, while all the doors/furniture/windows are high-poly and repeated, so it’s quite straightforward. Maybe there is a formula to generalize this for any use case.
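One possible formula (a made-up heuristic, not something IFC.js has settled on): instance geometries that are both heavy and repeated, and merge everything else. The thresholds below are illustrative tuning knobs.

```javascript
// Split geometries into "merge" and "instance" buckets based on how many
// times each one repeats and how many vertices it has.
function classify(geometries, { minInstances = 10, minVertices = 500 } = {}) {
  const result = { merge: [], instance: [] };
  for (const g of geometries) {
    if (g.instanceCount >= minInstances && g.vertexCount >= minVertices) {
      result.instance.push(g); // e.g. 1000 identical high-poly chairs
    } else {
      result.merge.push(g); // e.g. unique low-poly walls and floors
    }
  }
  return result;
}
```

The two thresholds would need profiling per platform: the instancing cutoff trades draw calls against memory, exactly the tension described above.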

In our proposal, a Fragment can be a BufferGeometry composed of several objects, but it can also be instantiated. This means that 1,000 equal chairs would be a fragment, but also that all the walls of the project could be a single fragment. At the persistence level, we are thinking of storing one GLB per fragment.

The idea is that all fragments have approximately the same number of vertices and draw calls. We hope that, in this way, a good balance between memory consumed and draw calls needed can be achieved!
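One simple way to get fragments of roughly equal size (again a sketch with an invented vertex budget, not the actual Fragment implementation) is to greedily pack merged objects into buckets:

```javascript
// Greedily pack objects into fragments so that each fragment stays under
// a fixed vertex budget; the result is a set of similarly-sized fragments.
function packIntoFragments(objects, vertexBudget = 100000) {
  const fragments = [];
  let current = { objects: [], vertexCount: 0 };
  for (const obj of objects) {
    if (current.vertexCount + obj.vertexCount > vertexBudget && current.objects.length > 0) {
      fragments.push(current); // budget reached: close this fragment
      current = { objects: [], vertexCount: 0 };
    }
    current.objects.push(obj);
    current.vertexCount += obj.vertexCount;
  }
  if (current.objects.length > 0) fragments.push(current);
  return fragments;
}
```

Sorting objects by material or spatial proximity before packing would likely give better batches, but even the naive version keeps draw calls proportional to total vertex count rather than object count.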

We started this a short time ago, but we hope to reach promising results very soon :slightly_smiling_face:


I will be following your project on the most efficient way to display heavy environments, to help my three.js + cannon GTA load more assets and players.


Grouping things broadly feels nice but has limits; grouping things in dissected chunks and LOD steps helps. Here are some ideas:

  • Every object in a room could be grouped together; this way you could show/hide groups based on whether you enter/leave the room (or hide them all when looking at the building from outside).

  • Every room could be rendered as an Interior Cubemap when looking at the building from outside (or peeking at another room from a doorway or window).

  • Far away objects could be rendered with billboards based on an automatically generated texture atlas of different points of view of the objects (this article explains it very well).

  • Extremely far away objects could be rendered as colored points instead of meshes lol
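The first idea above could be sketched like this (plain objects standing in for scene groups; `connectedTo` is a hypothetical adjacency list of rooms visible through doorways):

```javascript
// Toggle per-room visibility: show the room the camera is in plus its
// directly connected rooms; when outside (no current room), hide all.
function updateVisibility(rooms, cameraRoomId) {
  const current = rooms.find(r => r.id === cameraRoomId);
  const visibleIds = new Set(current ? [current.id, ...current.connectedTo] : []);
  for (const room of rooms) {
    room.visible = visibleIds.has(room.id);
  }
}
```

In a real scene, `room.visible` would map to `Group.visible` on the room's three.js group, so a single flag skips every object inside it.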


That’s nice, Matrix-like interior cubemaps! Is there a tutorial on how to implement them with three.js?

Maybe ask someone from the Minecraft team how they’re going about it. It follows from the idea of grouping things so that you’re not thinking about everything. Spatial grids and whatnot.

Game dev tends to be a lot of UI trickery. Like restricting the camera to simpler scenes. In the case of buildings, viewing just one at a time or even floor by floor, or room by room. The trick would be handling the transition between floors and rooms and ignoring everything else.


Hola Antonio, nice name :upside_down_face:
I think occlusion culling comes first, and the earlier you plan for it, the better/easier it is to incorporate additional methods on top of it.

One aspect of architectural worlds that can’t be counted on in other scenarios is their regularity and repetition rules, so there is room here for taking advantage of general guidelines to organize data. The baseline would be wisely adopting a BVH schema that provides efficient space partitioning/indexing, because if well designed it can drive optimization for a lot of things (actual rendering, lighting/shadowing, collisions, even download/RAM loading/disposal of objects, you name it…)

Visibility determination is crucial for load balancing, so things like a PVS can save you several headaches in a row. For instance, cell-portal approaches can be very straightforward in the architectural domain (disclaimer: I am an architect), leading to room-cell and door-portal associations that can be used to toggle/swap envmap textures feeding reflection shaders.
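A minimal cell-and-portal sketch of that idea: rooms are cells, doors are portals, and the potentially visible set is found by walking portals outward from the camera’s cell up to a depth limit. The data layout here is purely illustrative (a real PVS would also clip portal frustums).

```javascript
// Breadth-first walk over the cell/portal graph: everything reachable
// within maxDepth portal crossings is considered potentially visible.
function potentiallyVisible(cells, startCell, maxDepth = 2) {
  const visible = new Set([startCell]);
  let frontier = [startCell];
  for (let depth = 0; depth < maxDepth; depth++) {
    const next = [];
    for (const cell of frontier) {
      for (const neighbor of cells[cell].portals) {
        if (!visible.has(neighbor)) {
          visible.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next;
  }
  return visible;
}
```

Anything outside the returned set can be skipped for rendering, shadows, and even streaming.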

When this is already set-up you have a solid ground to start stacking things like geometry instancing on top.

Nevertheless, the idea behind Fragment is actually very interesting; I would love to hear how it goes.
My two cents :coin:


I’ve been working on systems for large environments for a couple of years now; a lot of it is mentioned in my project Tesseract, a real-scale planetary engine for open-world games.

More specifically, for my MMO game that uses the engine, where you can also build houses like in The Sims, and from there assemble towns and cities.

Batch, Merge, Instance, Index, Cull, LOD

Aside from the terrain/world engine, which is a system in itself, the major boosters are the IndexedVolume (a mass-update-optimized spatial index with hierarchical culling and chunked world-content streaming), auto-instancing, auto-batching (merging geometries, textures and materials while preserving UV repeats), and the volume hull impostor (super-low-memory-cost volumetric universal impostors) as one of the key components. There is also a system for road, wall and fence networks, which likewise uses the impostor and auto-instancing systems, and a performance-oriented particle engine with a static GPU-driven component. Auto-instancing even goes as far as appending parameters: for instance, some items in my game can be recolored, like furniture, and the auto-instancer turns instanced uniforms into an attribute.
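A rough sketch of that auto-instancing idea (all names invented, not the Tesseract API): drawables sharing the same geometry/material key collapse into one instanced batch, and a per-item parameter such as color becomes a per-instance attribute entry instead of a per-mesh uniform.

```javascript
// Group drawables by a geometry|material key; each resulting batch would
// become one InstancedMesh, with transforms and colors as instance attributes.
function autoInstance(drawables) {
  const batches = new Map();
  for (const d of drawables) {
    const key = `${d.geometryId}|${d.materialId}`;
    if (!batches.has(key)) batches.set(key, { key, transforms: [], colors: [] });
    const batch = batches.get(key);
    batch.transforms.push(d.transform);
    batch.colors.push(d.color); // was a uniform per mesh, now one attribute slot per instance
  }
  return [...batches.values()];
}
```

In three.js the `colors` array would end up in an `InstancedBufferAttribute` read by the material’s shader.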

The only thing whose benefit I’m not sure of yet is occlusion culling, as it comes with its own cost, but I will probably look into using it in a compound manner rather than on individual items. Portals for interiors, for instance, are a smarter (but more specific) approach than the pure brute-force ones.

2D or 3D Impostor?

Any type of sprite impostor obviously can help a lot, but all 2D approaches suffer from either heavy popping or memory cost. That’s why I solved it with a universal approach (the VHI component) with fixed low cost and a volumetric 3D appearance, which also makes higher-res volumetric impostors cheap in memory. Aside from the memory concerns, the visual appearance was most important: if you have a dense forest of trees, the popping becomes extremely noticeable with regular impostors.

Cardboard impostors can also be very good though; since they consist of multiple planes they don’t pop, and they are commonly used for bushes, grass, etc. anyway. However, they are very asset-specific, and the transition is not as good as with a volumetric impostor.

On the right, when the origin line is visible, the volume hull impostor is rendered; the transition from impostor to original and the visual appearance from different angles is what bothered me most with sprite or cardboard impostors.

Here used together with the IndexedVolume and auto instancing.


Something that can also be done without an index, but benefits the most with one, is throttling everything possible in the scene. For instance, I heavily reduce animations with distance, including skipping parts of the skeleton hierarchy (no longer updating every finger, etc.), as well as particle systems and anything that can be visually reduced or skipped entirely.

Furthermore, I also throttle CSM shadows: every cascade after the first only renders every nth frame, with a higher delay for higher cascades. Visually it’s not noticeable, as those cascades are far away and the next frame has usually been rendered by the time the next pixel at that distance is filled. It also means that no longer are all 3 cascades rendered every frame, but only 1 to 2. Depending on game mechanics I enable updating all of them, e.g. for the rare case of fast-forwarding daytime/sun.
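The throttling rule as I read it could be sketched like this (the exact schedule in the engine may differ; this is my interpretation):

```javascript
// Cascade 0 renders every frame, cascade 1 every 2nd frame,
// cascade 2 every 3rd frame, and so on.
function shouldRenderCascade(frame, cascadeIndex) {
  const interval = cascadeIndex + 1;
  return frame % interval === 0;
}
```

With three cascades this renders all of them only every 6th frame; most frames touch just one or two shadow maps.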

Shadow rendering also goes through the IndexedVolume with all its regular-rendering features, like instancing, but a predefined shadow geometry can also be provided by the asset to use instead of the original; many games do this, as the shadow can often get by with much lower-detail geometry. Regular billboard impostors for trees can also work very well, even as a single LOD. Shadows mostly just need silhouettes.

Batching instancing

A main concept of the spatial index is to touch and compute as little as necessary per frame. While the auto-impostor actually culls and works with LOD versions and impostors, on high-level nodes of the index a chunked baked compound is maintained, in order not to touch any single object anymore. While this concept ensures maxed-out performance for rendering rich landscapes, it is also more of a fit for impostors, as these can work with a single original quad or cube geometry and atlases for the different assets; otherwise the memory and rendering cost become a bottleneck again.

Separating logic

The index also has layers of roots: the GPU-accelerated procedural content decorator of Tesseract, which controls the vast foliage, rocks, plants and grass, is separated from regular dynamic content, as the decorator nodes only ever cover area cells that need to insert (and, especially, later detach) several tens to hundreds of thousands of assets, which is more efficient when kept apart from dynamic content.

Generally it makes sense to use multiple and different kinds of indexes instead of trying to have one for all kinds of content; the road network system, for instance, has a significantly different shape of elements than typical assets in an index. I also make use of subtrees: every house lot maintains its own node index tree, while the lot bounding box is both the root of that subtree and a leaf object in the main index.

Better performance without complex systems

For those working without any optimization systems, the StaticMesh component I made a while ago might help a bit. It works out of the box, reducing computations massively and adding a concept of hierarchical compound culling for the children of a scene root object; for instance, all interior objects such as the furniture of a house are skipped entirely if the house itself was already culled.
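The hierarchical compound culling idea boils down to this (a sketch with a hypothetical node shape, not the actual StaticMesh API):

```javascript
// Walk the scene tree and collect renderable nodes; if a parent fails the
// visibility test, none of its children are even examined.
function collectVisible(node, isVisible, out = []) {
  if (!isVisible(node)) return out; // house culled -> all its furniture skipped
  out.push(node);
  for (const child of node.children || []) {
    collectVisible(child, isVisible, out);
  }
  return out;
}
```

With deep hierarchies this turns per-object frustum tests into a handful of coarse tests on compound bounds.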

Aside from that, using InstancedMesh will obviously help a lot in many suitable cases. However, many assume it works like regular meshes in the scene, just faster since it’s 1 draw call; in fact it is static and has no per-instance culling. You will still likely always get better performance with it for many copies of the same asset, given how costly draw calls and the internal processing for each one are. And generally, ideally have a single mesh and texture set per asset, unless parts of it need to move, like a door and its frame being separate.


Hey, thanks everyone for all the responses! :yellow_heart: I’m looking into them in detail and see what I can try out. In the meantime, I have made some progress with Fragment:

Here you see a scene with 1k chairs and 4 walls. It only has 3 draw calls (plus one for the selection) and consumes less than 10 MB. The key is that the chairs are an InstancedMesh, while the walls are merged. I’ve designed a common Fragment interface that allows instantiation and/or merging while still retrieving/highlighting individual items super efficiently.
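A guess at what such a common interface could look like (not the real IFC.js API — just a sketch of the shape): both the merged and the instanced variants answer the same two questions, how many draw calls they cost and where a given item lives.

```javascript
// A Fragment hides whether its items were merged or instanced behind
// one interface for draw-call accounting and per-item retrieval.
class Fragment {
  constructor(items, { instanced = false } = {}) {
    this.instanced = instanced;
    this.items = new Map(items.map((item, i) => [item.id, i]));
  }
  // One merged geometry = 1 draw call; one InstancedMesh = 1 per material.
  drawCalls(materialCount = 1) {
    return this.instanced ? materialCount : 1;
  }
  // Retrieval works the same either way: look up the item's slot by ID
  // (an instance index, or a range into the merged buffers).
  getItemIndex(id) {
    return this.items.get(id);
  }
}
```

The scene above would then be two fragments: the chairs (instanced) and the walls (merged), giving the 3 draw calls mentioned.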

I will make some tests with big guys and post the results here. I’ll also try to implement some of the hints stated above. Cheers! :blush:


So, I’ve got the first version up and working. I’m quite happy: a 100+ MB IFC model that loads quite fast (even on mobile). I believe the speed is already quite close to Autodesk Forge. :rocket:

:point_right: You can try it out here: IFC.js