Virtually Geometric


That’s the case, sorry for the inconvenience :slight_smile:

:frowning: Anyway, good work mate, at least it’s good to know it IS possible.

Also, massive thanks for the virtual texturing explanation. I’ve tried to have a go before with no success, so that’s really helpful… I may have questions at some point! I’m just not happy with the quality of KTX (even at the highest settings), and we can’t use it for our lightmaps anyway, so I’m hoping this will be a better experience.


It’s actually really easy. It’s very similar to the dynamic terrain worker, but instead of serially crunching the whole model, this method uses an algorithm to selectively crunch down the triangles in the most efficient way possible. It’s essentially the same approach as Unreal’s Nanite tech.
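To make the idea concrete, here is a minimal sketch of a cluster-based LOD build loop. Everything here is illustrative: real implementations partition with a graph partitioner such as METIS and simplify with error-driven edge collapse (e.g. meshoptimizer), not the stand-ins below.

```javascript
// Hypothetical sketch of building cluster LODs: partition triangles
// into fixed-size clusters, then simplify and repeat until one
// cluster remains. Names and the crude "keep every other triangle"
// simplifier are stand-ins, not any real project's code.
function buildClusterLods(triangles, clusterSize = 128) {
  const lods = [];
  let current = triangles;
  while (current.length > clusterSize) {
    // 1. Partition into clusters of ~clusterSize triangles
    //    (a real build uses a graph partitioner like METIS).
    const clusters = [];
    for (let i = 0; i < current.length; i += clusterSize) {
      clusters.push(current.slice(i, i + clusterSize));
    }
    lods.push(clusters);
    // 2. Simplify to roughly half the triangles for the next,
    //    coarser level (stand-in for error-driven edge collapse).
    current = current.filter((_, i) => i % 2 === 0);
  }
  lods.push([current]); // coarsest level: a single cluster
  return lods;
}
```

The key property is that every level is made of same-sized clusters, so the renderer can later mix clusters from different levels in a single scene.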

Another one:


Looks pretty cool, I’m 99% sure he’s using the same tricks :grinning:


@Usnul That’s really cool! However, I don’t really understand the two links you posted in November 2022; there seems to be a single instanced draw that draws all instances at the same time (so all with the same geometry data)? I would have expected instances close to the camera to be drawn with more triangles than instances further away.

That is the case. Instances of the angel model that are closer to the camera are drawn with more triangles than those further away.

You are correct that there’s one instanced draw call, but the instance is not what you think it is.

An instance in this draw call is actually a 128 triangle cluster. I don’t want to go too much into detail about how or why, it’s not particularly interesting.

Those clusters pull attribute data from textures during that instanced draw in turn.
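As a rough illustration (this is not the demo's actual code), when each "instance" is a fixed-size cluster and vertex data lives in a texture, the vertex shader can derive a texel from `gl_InstanceID` (which cluster) and `gl_VertexID` (which vertex within it). The same index math in plain JS, with an assumed layout:

```javascript
// Hypothetical index math for fetching per-vertex attributes from a
// data texture during an instanced draw. Layout is an assumption:
// clusters stored contiguously, 3 vertices per triangle.
const TRIANGLES_PER_CLUSTER = 128;
const VERTS_PER_TRIANGLE = 3;

function vertexTexel(instanceId, vertexId, textureWidth) {
  // flat index into the attribute texture, then wrap into rows
  const flat =
    instanceId * TRIANGLES_PER_CLUSTER * VERTS_PER_TRIANGLE + vertexId;
  return { x: flat % textureWidth, y: Math.floor(flat / textureWidth) };
}
```

In the shader this becomes a `texelFetch` per attribute; the draw call itself carries no vertex buffers beyond the implicit IDs.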

By the way, as much as deconstructing this demo might seem useful - it isn’t, really. Most of the work actually goes into preparing the topology, not into drawing, and that work is all about graph theory and not so much about graphics.

I highly recommend the nanite talk from SIGGRAPH if you want to understand the technique in more detail.

Back to the original question - if you use the second link, you’ll see each geometry cluster drawn in a different color. By moving the camera around it will be obvious that angel statues closer to the camera have more clusters and those further away have fewer, down to just 1 cluster at the extreme, or 128 triangles in other words.
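The selection idea can be sketched in a few lines: walk the cluster hierarchy from coarse to fine and stop wherever a node's simplification error projects below some pixel threshold. The node shape and the `error / distance` metric below are illustrative assumptions, not the demo's code.

```javascript
// Minimal sketch of picking which clusters to draw from an LOD
// hierarchy. Closer objects (small distance) descend deeper and get
// more, finer clusters; distant ones stop at a single coarse cluster.
function selectClusters(node, distance, threshold, out = []) {
  // screen-space error ~ geometric error / distance (illustrative)
  const projectedError = node.error / distance;
  if (projectedError <= threshold || node.children.length === 0) {
    out.push(node); // coarse enough (or a leaf): draw this cluster
  } else {
    for (const child of node.children) {
      selectClusters(child, distance, threshold, out);
    }
  }
  return out;
}
```

Real implementations also have to guarantee that neighbouring clusters from different levels stitch together without cracks, which is where most of the graph-theory work mentioned above comes in.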


Is it open source now?

Hi @wenrenqiang,

Thanks for asking, but no, the tech has been sold and I suspect this particular version will not be open-sourced at any point.

Came across this: GitHub - AIFanatic/three-nanite: An attempt at reproducing Unreal Nanite in three.js. Is it similar to what you’ve done?


I don’t want to speak for @Usnul but… I think so yes.

@Telepathic & @manthrax

Yes, it’s very similar. The hard problem is generally solved here. Rendering is not really attempted as far as I can see, but that’s a simpler problem to solve.

I can see that code from @zeux ( Arseny Kapoulkine ), specifically the meshoptimizer library is used extensively here, which makes sense too.

I think for anyone trying to implement something like this in JS, this repo might be a good starting point. Though, based on the quality of the code, I’d say it reads more like a learning exercise, so if you are aiming for a production-level solution, you’re wasting your time.

If you just want to understand the concept better and get some ideas on where to start - it looks quite good.

I like the references too; they are quite close to what I would recommend.

I also like the visualisation. I’m pretty good at thinking in terms of graphs, so I didn’t find something like this useful when I was working on this, but I can see how it would be useful for most people. Though, again, for real meshes with a few thousand triangles it will become useless.

For those that want to know why I don’t think it’s a good starting point for a production-level solution:

  1. The mesh simplification is a mess; there are a few key constraints being violated here. You’ll really need to build something a lot more custom.
  2. The code uses a lot of serial promises, which is going to be very, very slow. For real use cases, Nanite is useful for meshes with hundreds of thousands of triangles; this looks like it would run at ~10k triangles/second at best on good hardware, which is a no-go.
  3. Memory usage is going to blow pretty much any budget here. I’ve already experienced memory issues when working on 1M+ poly models (typically 10M+ in my case), but the algorithm by itself is a memory hog, and this implementation does nothing to help. For every 1 MB of mesh data, you’ll need around 100 MB of RAM just for processing.
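On the serial-promises point, a hedged sketch of the usual fix: process work items in concurrent batches with `Promise.all` instead of awaiting one promise at a time. `processItem` here is a stand-in for whatever per-cluster work (simplification, partitioning) the pipeline does.

```javascript
// Illustrative pattern: batched concurrency instead of one awaited
// promise per item. Within each batch, items run concurrently;
// batchSize caps memory pressure and worker saturation.
async function processAll(items, processItem, batchSize = 8) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(processItem))));
  }
  return results;
}
```

In practice the heavy lifting would also move into Web Workers, but even this pattern alone avoids the per-item round trip through the event loop.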

Finally, it takes a bit of effort to support more than just position attribute. That is, if your geometry has UVs, normals, tangents etc - you’ll have to make some pretty significant changes in the core.


I am experimenting with the three.js Nanite, but the METIS worker doesn’t seem to be working as I hoped it would; there is lag. Can you suggest the best approach to avoid the lag?

Testing with a massive number of high-poly assets.


The topic of virtual geometry makes me curious.
How should I imagine such a system working? A model is a finished geometry; how do I picture software that manipulates that geometry?
I really enjoyed your chat about virtual textures. This helped me understand how such a system works.
With a virtual geometry, do you have thousands of geometry fragments that you load dynamically depending on the camera position, like with virtual textures?
But I think I’m making a false analogy. Modeling a dozen different LOD fragments in Blender is definitely not the way to go.
I would be interested in the principle behind it :thinking:
I can then implement it myself :blush:
I’ll take a look at this meshoptimizer

If both Nanite and mipmaps work together in this project, it would be a big leap, because we wouldn’t have to worry about loading massive high-quality 3D assets in three.js. One of my issues in the threeverse project is the memory limitation, because none of the models have LODs.

Open world threeverse
race track
toy car

I am always pleased to see the creative energy with which so many people develop great things here.
I live in Germany and companies here complain a lot about the declining work morale in society. If companies don’t offer creative perspectives, people’s creative energy simply flows somewhere else.
As in so many beautiful projects here in three.js.
The idea of virtual geometry appeals to me because I work a lot with procedural LOD geometry. But I haven’t had anything to do with virtual geometries for models, and I would like to change that. Therefore I would like to understand the principle behind it. I’ll have to wise up about that.


The three.js r165 update seems to have some issues with the METIS LOD: the virtual geometries seem to go all the way down to the lowest LOD. Only the avatars are affected; the building and vehicle LODs are all working perfectly.

Can anyone tell me why the meshlet algorithm for the avatars is messed up?

Press key [ ? ] to toggle wireframe

The avatar meshlet algorithm bug is fixed. It turns out the avatars’ scale is 100, so I fixed it by offsetting the meshlet resolution scale by 100 instead of 1.
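For anyone hitting the same thing, a minimal sketch of why object scale matters, assuming a distance-based error metric (names and formula are illustrative, not the project's actual code):

```javascript
// Simplification error is stored in the object's local space; if the
// object is scaled by 100, the world-space error is 100x larger too.
// Forgetting the scale factor makes LOD selection behave as if the
// object were 100x smaller, snapping it to the coarsest level.
function projectedError(geometricError, objectScale, distance) {
  return (geometricError * objectScale) / distance;
}
```

Usage: with `objectScale = 100`, an error of 1 at distance 200 projects to 0.5 and keeps fine clusters; with the scale wrongly taken as 1, it projects to 0.005 and the selector drops straight to the lowest LOD.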


Thanks to the author @Usnul’s contribution, the METIS LOD feature really helps massive three.js environments run seamlessly fast, without worrying about highly detailed assets.