Virtually Geometric

au contraire

It’s an interesting thing to think about. The Nanite presentations talk about the asset size angle a lot actually.

Say you have a 2GB asset and you “Nanitize” it; it’s probably still around 2GB, maybe a bit more, because you add some redundancy and more information in general. How do you fit all of that into GPU memory? Well, you don’t. You keep the metadata on the GPU and split the data into fixed-size packets, so you can upload them to the GPU on demand. In fact, that’s how it can run so fast.
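
To make that concrete, here’s a minimal sketch of what the split could look like. Every name and field below is an assumption for illustration, not the actual Nanite (or any engine’s) format:

```ts
// Per-cluster metadata stays resident on the GPU (or at least in RAM):
// it is tiny compared to the vertex/index data itself.
interface ClusterMeta {
  lodLevel: number;      // 0 = coarsest representation
  packetIndex: number;   // which fixed-size packet holds this cluster's data
  byteOffset: number;    // offset of the cluster inside that packet
  byteLength: number;    // size of the cluster's vertex + index data
  boundsCenter: [number, number, number];
  boundsRadius: number;  // used for culling / LOD selection
}

// The heavy data lives in fixed-size packets that are fetched and uploaded
// on demand. 64 KiB is an arbitrary choice here, not a prescribed value.
const PACKET_SIZE = 64 * 1024;

interface AssetHeader {
  clusterCount: number;
  packetCount: number;
  clusters: ClusterMeta[]; // the "metadata" part that is always loaded
}
```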

Where does that bring us on the web? Imagine loading a 2GB mesh in milliseconds. Well, you can do that by downloading just the metadata plus the packets that contain clusters at your target resolution.

Better yet, you can download those clusters in progressive order, starting from the coarsest LOD level, and the mesh will appear more defined over time.
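
Continuing the sketch above (same hypothetical types, plus a made-up `uploadToGpu` callback), coarse-to-fine streaming is roughly: group clusters by LOD level, then fetch the packets for each level from coarsest to finest and start drawing as soon as the first level lands.

```ts
// fetchPacket() is assumed to map a packet index to an HTTP request,
// e.g. one Range request into a single big blob (one file per packet
// would work just as well).
async function fetchPacket(assetUrl: string, packetIndex: number): Promise<ArrayBuffer> {
  const res = await fetch(assetUrl, {
    headers: { Range: `bytes=${packetIndex * PACKET_SIZE}-${(packetIndex + 1) * PACKET_SIZE - 1}` },
  });
  return res.arrayBuffer();
}

async function streamProgressively(
  assetUrl: string,
  header: AssetHeader,
  uploadToGpu: (packetIndex: number, data: ArrayBuffer) => void,
) {
  // Which packets does each LOD level need? Coarser levels go first.
  const packetsPerLevel = new Map<number, Set<number>>();
  for (const c of header.clusters) {
    if (!packetsPerLevel.has(c.lodLevel)) packetsPerLevel.set(c.lodLevel, new Set());
    packetsPerLevel.get(c.lodLevel)!.add(c.packetIndex);
  }

  const fetched = new Set<number>();
  const levels = [...packetsPerLevel.keys()].sort((a, b) => a - b);

  for (const level of levels) {
    const packets = [...packetsPerLevel.get(level)!].filter(p => !fetched.has(p));
    // After the first level resolves you can already draw the coarsest LOD;
    // later levels only refine what is on screen.
    await Promise.all(packets.map(async p => {
      uploadToGpu(p, await fetchPacket(assetUrl, p));
      fetched.add(p);
    }));
  }
}
```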

From my experience doing some unscientific residency testing, I found that unless you’re doing a demo that zooms in on every part of an asset and spins it around, you’re typically below 20% utilization; that is, you only ever interact with up to 20% of all the clusters. And it scales really well: the larger the asset, the lower the utilization.
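
By “residency testing” I just mean counting which clusters ever get requested during a session. A trivial, purely illustrative way to measure that:

```ts
// Records which clusters were ever touched during a session,
// so utilization = touched / total.
class ResidencyTracker {
  private touched = new Set<number>();
  constructor(private totalClusters: number) {}

  markUsed(clusterIndex: number) {
    this.touched.add(clusterIndex);
  }

  utilization(): number {
    return this.touched.size / this.totalClusters;
  }
}
```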

Just like virtual textures, I think this tech is very well suited for the web.

The problem is the complexity that comes with Nanite itself, and then the data structure design and the whole memory management system. I’m quite comfortable in that area, but even I haven’t gone for it yet, as it’s just a massive amount of work.

I’ve seen a lot of people use Draco on the web to save on traffic, even when the meshes are not that big and were explicitly designed for the application; the argument goes that it saves precious time-to-first-contentful-paint or whatever you may call it.

Personally, I hate Draco, with a passion. I think it’s one of the worst things to happen to 3D on the web. Its implementation is trash: it spins up 4 worker threads by default and hogs that resource forever. It’s incredibly slow to decode assets, making it a coin-toss whether, on a good connection, you’d just download the unpacked asset faster. You add a ton of complexity to your pipeline for dubious gain.

Virtual geometry, on the other hand, is naturally streamable, and lets you offer meaningful content to the user for as little as one cluster’s worth of data transfer: something like 128 triangles at 19 bytes per vertex, or around 2.47 kB per cluster. That’s it; no matter how large your actual dataset is, you can draw a very, very coarse 128-triangle representation of it for the user with just 2.47 kB of data transferred.
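
Rough arithmetic behind that figure, with one assumption on my part: for a small cluster with a lot of boundary, the vertex count ends up close to the triangle count.

```ts
// Back-of-the-envelope cost of one coarse cluster.
const trianglesPerCluster = 128;
const bytesPerVertex = 19;
const approxVertices = trianglesPerCluster;            // ~128, assumed
const clusterBytes = approxVertices * bytesPerVertex;  // 2432 bytes ≈ 2.4 kB
// ...which is in the same ballpark as the ~2.47 kB above; the exact number
// depends on how many vertices the cluster actually ends up with.
```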

You can then apply your favourite type of compression on top, or just configure the webserver to gzip it.