Virtually Geometric

au contraire

It’s an interesting thing to think about. The Nanite presentations actually talk about the asset-size angle a lot.

Say you have a 2GB asset and you “Nanitize” it. It’s probably still around 2GB, maybe a bit more, because you have some redundancy and more information in general. How do you fit all that in GPU memory? Well, you don’t. You fit the metadata onto the GPU and split the data into fixed-size packets, so you can upload them to the GPU on demand. In fact, that’s how it can run so fast.
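To make that concrete, here’s a minimal sketch of the on-demand residency idea. The `GPUDevice` (`device`), the packet and pool sizes, and the `asset.geo` file are all assumptions for illustration, not anything from a real implementation:

```ts
// Sketch: only metadata stays resident; fixed-size geometry packets are
// fetched over HTTP and uploaded into a pre-allocated GPU pool on demand.
const PACKET_BYTES = 64 * 1024;  // assumed packet size
const POOL_PACKETS = 1024;       // residency budget: 64 MB total

const pool: GPUBuffer = device.createBuffer({
  size: PACKET_BYTES * POOL_PACKETS,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
});

const residentSlot = new Map<number, number>(); // packetId -> pool slot
const slotOwner: (number | null)[] = new Array(POOL_PACKETS).fill(null);
let nextSlot = 0; // round-robin for brevity; real code would evict LRU

async function ensureResident(packetId: number): Promise<number> {
  const cached = residentSlot.get(packetId);
  if (cached !== undefined) return cached;

  // Fetch exactly one fixed-size packet via an HTTP range request.
  const start = packetId * PACKET_BYTES;
  const res = await fetch('asset.geo', {
    headers: { Range: `bytes=${start}-${start + PACKET_BYTES - 1}` },
  });
  const bytes = new Uint8Array(await res.arrayBuffer());

  // Claim a slot, evicting whatever lived there before.
  const slot = nextSlot++ % POOL_PACKETS;
  const evicted = slotOwner[slot];
  if (evicted !== null) residentSlot.delete(evicted);
  slotOwner[slot] = packetId;

  device.queue.writeBuffer(pool, slot * PACKET_BYTES, bytes);
  return slot;
}
```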

Where does that bring us on the web? Imagine loading a 2GB mesh in milliseconds. Well, you can do that by downloading only the metadata and just the packets that contain clusters at your target resolution.

Better yet, you can download those clusters in a progressive order, starting from the coarsest LOD level, and the mesh will appear more defined over time.
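A coarse-to-fine download order is then just a sort over the cluster metadata. A sketch reusing the hypothetical `ensureResident` from above; `ClusterMeta` and `markDrawable` are likewise invented for illustration:

```ts
// Assumed metadata layout: each cluster knows its LOD level and which
// fixed-size packet holds its geometry.
interface ClusterMeta { id: number; lod: number; packetId: number; }

declare function markDrawable(id: number): void; // hypothetical renderer hook

// Stream coarsest-first so a usable (if blobby) mesh shows up immediately
// and refines as more packets arrive. Here, a higher lod value = coarser.
async function streamProgressively(clusters: ClusterMeta[]) {
  const order = [...clusters].sort((a, b) => b.lod - a.lod);
  for (const cluster of order) {
    await ensureResident(cluster.packetId);
    markDrawable(cluster.id); // flag the cluster as renderable
    // Redrawing at any point shows the mesh at its current best resolution.
  }
}
```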

From my experience doing some unscientific residency testing, I found that unless you’re doing a demo where you zoom in on every part of an asset and spin it around, you’re typically below 20% utilization; that is, you only ever interact with up to 20% of all the clusters. And it scales really well: the larger the asset, the lower the utilization.

Just like virtual textures, I think this tech is very well suited for the web.

The problem is the complexity that comes with Nanite itself, and then the data structure design and the whole memory management system. I’m quite comfortable in that area, but even I haven’t gone for it yet, as it’s just a massive amount of work.

I’ve seen a lot of people use Draco on the web to save on traffic, even when the meshes are not that big and were explicitly designed for the application; the argument goes that it saves precious time-to-first-contentful-paint or whatever you may call it.

Personally, I hate Draco with a passion. I think it’s one of the worst things to happen to 3D on the web. Its implementation is trash: it spins up 4 worker threads by default and hogs that resource forever. And it’s so slow to decode assets that, on a good connection, it’s a coin toss whether you’d have downloaded the unpacked asset faster. You’ve added a ton of complexity to your pipeline for dubious gain.

Virtual geometry, on the other hand, is naturally streamable, and it lets you offer meaningful content to the user with as little as one cluster’s worth of data transfer: something like 128 triangles, at 19 bytes per vertex, or around 2.47 KB per cluster. That’s it. No matter how large your actual dataset is, you can draw a very, very coarse 128-triangle representation of it for the user with just 2.47 KB of transferred data.
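For what it’s worth, the arithmetic works out if you assume roughly 133 vertices per 128-triangle cluster (an assumption on my part; the real vertex count depends on the cluster’s topology):

```ts
const bytesPerVertex = 19;
const verticesPerCluster = 133; // assumed; varies with cluster topology
const clusterBytes = bytesPerVertex * verticesPerCluster;        // 2527 bytes
console.log((clusterBytes / 1024).toFixed(2), 'KB per cluster'); // ~2.47 KB
```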

You can then apply whatever your favourite type of compression is on top, or just configure the web server to gzip it.
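As a sketch of that last option, assuming a plain Node/Express static server (an nginx or CDN setting achieves the same thing):

```ts
import express from 'express';
import compression from 'compression';

const app = express();
app.use(compression());            // gzip responses when the client accepts it
app.use(express.static('assets')); // hypothetical directory holding the packets
app.listen(8080);

// Caveat: if packets are fetched with Range headers, pre-compress each
// packet file instead; on-the-fly gzip and byte ranges don't mix well.
```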


Interesting how you use custom metadata to define geometry. The physical model’s extensibility is the nature of the request. It’s like a logarithmic scale from a universe (geography) to an atom (biology). And your fluid LOD uses superpixel patches. With discretion, select deterministic simulations could run outside of screen space… for instance a tracer bullet trajectory. ROAM could further implement dynamic systems to account for complexity, tortuosity, continuum. This is the antithesis of a super-resolution upsample.

The core asset library is vital to utilizing memory. This may elicit even deeper relationships in an abstract layer. For example, MetaHumans use poses and Intelligent Scissors use targets. A modern node system may use generative breakpoints to affect a hot loop.

Or some such multivariate ethical consensus to progress.

Cheers,
LAN Sk8ps

I did :blush:

It is not yet as efficient as I would like it to be, but I will solve that. Since I only work with three.webgpu.js, I’m limited by what’s implemented there: WebGPU offers exactly the right techniques, but they still need to be implemented in three.js. Things could already be done much more efficiently, but not yet in the modern way that WebGPU allows. I already have an issue open in the developer forum about one of these points. In WebGPU you can let the GPU decide for itself whether something should be rendered or not: drawIndirect + compute shader. But drawIndirect and the associated peripherals are a major construction site.
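For anyone curious what that looks like in raw WebGPU (not the three.js API), here’s a minimal sketch: a compute shader writes the draw arguments, so the GPU itself decides whether anything gets drawn. The `device`, the 36-vertex count, and the trivial visibility test are all illustrative assumptions:

```ts
// Indirect draw arguments: vertexCount, instanceCount, firstVertex,
// firstInstance (4 x u32 = 16 bytes). A compute pass fills them in.
const indirectBuffer = device.createBuffer({
  size: 16,
  usage: GPUBufferUsage.INDIRECT | GPUBufferUsage.STORAGE,
});

const cullModule = device.createShaderModule({
  code: /* wgsl */ `
    struct DrawArgs {
      vertexCount: u32, instanceCount: u32,
      firstVertex: u32, firstInstance: u32,
    }
    @group(0) @binding(0) var<storage, read_write> args: DrawArgs;

    fn isVisible() -> bool { return true; } // placeholder visibility test

    @compute @workgroup_size(1)
    fn main() {
      args.vertexCount = 36u; // e.g. one cube's worth of vertices
      // instanceCount = 0 skips the draw with no CPU round trip.
      args.instanceCount = select(0u, 1u, isVisible());
      args.firstVertex = 0u;
      args.firstInstance = 0u;
    }
  `,
});

const cullPipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module: cullModule, entryPoint: 'main' },
});
const cullBindGroup = device.createBindGroup({
  layout: cullPipeline.getBindGroupLayout(0),
  entries: [{ binding: 0, resource: { buffer: indirectBuffer } }],
});

// Per frame: the compute pass decides, the render pass obeys.
const encoder = device.createCommandEncoder();
const cullPass = encoder.beginComputePass();
cullPass.setPipeline(cullPipeline);
cullPass.setBindGroup(0, cullBindGroup);
cullPass.dispatchWorkgroups(1);
cullPass.end();
// ...then, inside a render pass with a render pipeline already set:
// renderPass.drawIndirect(indirectBuffer, 0);
device.queue.submit([encoder.finish()]);
```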
