Hey guys, here's a live link
Here's another version of the same scene, this time with each patch given a random color to help visualize the technique.
What you see there is a 100x100 grid of angel statues, each at 100,000 polygons, for a total of exactly 1 billion triangles. I thought that should be enough to prove my point.
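In case it's useful, the usual trick for that kind of patch coloring is to hash each cluster id to a stable color rather than rolling Math.random() every frame. A minimal sketch in three.js; `clusterId` is a hypothetical integer id, since Micron's internals aren't public:

```ts
import * as THREE from 'three';

// Deterministically map a cluster/patch id to a debug color. Hashing keeps
// each patch the same color across frames and hierarchy rebuilds.
function patchDebugColor(clusterId: number): THREE.Color {
  // Knuth's multiplicative hash scatters nearby ids across the hue wheel.
  const hash = Math.imul(clusterId, 2654435761) >>> 0;
  const hue = (hash % 360) / 360;
  return new THREE.Color().setHSL(hue, 0.7, 0.55);
}
```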
Main things that have changed since the last time I talked about this:
- much better performance
- removed a few limitations, such as the maximum supported source geometry size
- full support for all three.js materials
There are some goodies under the hood as well, such as:
- accelerated raycasting essentially for free, because there's already a spatially partitioned hierarchy. You get the same performance as with a high-quality BVH, but without having to build one separately (see the traversal sketch after this list).
- export of a mesh simplification LOD at any level. You input the number of triangles for the desired output mesh, and you get back a hierarchy cut that satisfies this constraint. Super fast too (see the cut sketch after this list).
- using the above to build discrete LOD proxies for a mesh. Say you don't want to actually use the entire Micron (the name of my virtual geometry solution) rendering stack - you can quickly build any number of LODs from the Micron representation. And when I say quickly, I mean linear time with respect to output size; it's about as fast as you can write to memory.
- support for custom geometry attributes
- automatic normal, UV and tangent compression. This is something Unreal's Nanite does as well; for the vast majority of meshes it can reduce normal storage from 12 bytes per vertex down to just 3 bytes, and the savings for UVs and tangents are similar (see the encoding sketch after this list). Not that Micron meshes were heavy to begin with, but with this they are around ~30% lighter, take less GPU RAM (~40-50% less, typically), and render faster.
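To illustrate the raycasting point: any cluster hierarchy with bounding volumes can be walked exactly like a BVH. A minimal sketch with three.js math types; the `ClusterNode` shape is a hypothetical stand-in, since Micron's actual data layout isn't public:

```ts
import * as THREE from 'three';

// Hypothetical node shape: a cluster hierarchy with bounding spheres.
interface ClusterNode {
  bounds: THREE.Sphere;
  children?: ClusterNode[];     // inner node
  triangles?: THREE.Triangle[]; // leaf node with actual geometry
}

// BVH-style traversal: skip whole subtrees whose bounds the ray misses,
// and only test triangles in the leaves that survive. The hierarchy the
// LOD system already maintains does double duty here.
function raycastHierarchy(ray: THREE.Ray, node: ClusterNode): THREE.Vector3 | null {
  if (!ray.intersectsSphere(node.bounds)) return null;

  const target = new THREE.Vector3();
  let best: THREE.Vector3 | null = null;
  let bestDist = Infinity;

  const consider = (p: THREE.Vector3 | null) => {
    if (!p) return;
    const d = p.distanceToSquared(ray.origin);
    if (d < bestDist) { bestDist = d; best = p.clone(); }
  };

  if (node.triangles) {
    for (const tri of node.triangles) {
      consider(ray.intersectTriangle(tri.a, tri.b, tri.c, false, target));
    }
  } else {
    for (const child of node.children ?? []) {
      consider(raycastHierarchy(ray, child));
    }
  }
  return best;
}
```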
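The LOD export bullet is basically a greedy cut through the simplification hierarchy. Here's a sketch of the idea under assumed node fields; real Nanite-style hierarchies are DAGs with cluster-group constraints, so a plain tree version like this is simplified:

```ts
// Hypothetical node fields: every node stores a simplified stand-in for its
// children, so any "frontier" cut through the tree is a complete mesh.
interface LodNode {
  triangleCount: number; // triangles in this node's own simplified geometry
  error: number;         // simplification error; coarser nodes have more
  children?: LodNode[];
}

// Greedy cut: keep splitting the highest-error node on the frontier while
// the swap (parent out, children in) stays within the triangle budget.
function cutForBudget(root: LodNode, maxTriangles: number): LodNode[] {
  const frontier: LodNode[] = [root];
  let total = root.triangleCount;
  for (;;) {
    // Find the coarsest node that can still be refined.
    let pick = -1;
    for (let i = 0; i < frontier.length; i++) {
      const n = frontier[i];
      if (n.children && (pick < 0 || n.error > frontier[pick].error)) pick = i;
    }
    if (pick < 0) break; // fully refined: the cut is the original mesh
    const node = frontier[pick];
    const childTris = node.children!.reduce((s, c) => s + c.triangleCount, 0);
    if (total - node.triangleCount + childTris > maxTriangles) break;
    total += childTris - node.triangleCount;
    frontier.splice(pick, 1, ...node.children!);
  }
  return frontier; // export the union of these nodes' geometry
}
```

Calling `cutForBudget(root, 10_000)` returns the set of clusters whose combined geometry stays under 10,000 triangles, which is exactly the "hierarchy cut" a discrete LOD proxy is built from.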
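On the normal compression: 12 bytes down to 3 strongly suggests octahedral encoding, which is the technique Nanite is documented to use; I'm assuming Micron does something similar. The idea: project the unit normal onto an octahedron, unfold it into a 2D square, and quantize the two coordinates to 12 bits each, i.e. 24 bits, exactly 3 bytes:

```ts
// Octahedral encoding sketch: project the unit normal onto the L1 unit
// octahedron, fold the lower hemisphere into the square, then quantize
// the (u, v) pair in [-1, 1] to 12 bits each. 2 x 12 bits = 3 bytes.
function encodeNormalOct12(nx: number, ny: number, nz: number): Uint8Array {
  const sgn = (x: number) => (x >= 0 ? 1 : -1); // avoid Math.sign(0) === 0
  const invL1 = 1 / (Math.abs(nx) + Math.abs(ny) + Math.abs(nz));
  let u = nx * invL1;
  let v = ny * invL1;
  if (nz < 0) {
    // Fold the lower hemisphere over the square's diagonals.
    const [pu, pv] = [u, v];
    u = (1 - Math.abs(pv)) * sgn(pu);
    v = (1 - Math.abs(pu)) * sgn(pv);
  }
  // Quantize [-1, 1] to [0, 4095].
  const qu = Math.round((u * 0.5 + 0.5) * 4095);
  const qv = Math.round((v * 0.5 + 0.5) * 4095);
  // Pack two 12-bit values into 3 bytes.
  return new Uint8Array([
    qu & 0xff,
    ((qu >> 8) & 0x0f) | ((qv & 0x0f) << 4),
    (qv >> 4) & 0xff,
  ]);
}
```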
Overall, it's hard to compete with C++, threads, and a native graphics API with bindless resources and compute shaders. But you can get pretty far even with what we have in the browser today.
PS:
please note that this is still not an open-source project