Blend geometry between LODs

I would like to avoid popping between LODs without rendering the large meshes twice. I am using the GPU renderer. I suspect this may entail a ShaderMaterial; are there any examples out there?

I believe virtual geometry might be what you're looking for. It allows gradual LOD and smooth reduction of geometry instead of the step-wise traditional LOD.

Thank you, I'll look into this.

I saw a YouTube livestream where Hugo Martin mentioned this in the idTech8 pipeline. It was further described in an interview featuring Billy Khan, which details lessons from overhauling Indiana Jones.

Specifically, they don’t “do” (layman’s term) monolithic dynamic meshlets. They do “do” vertex warping between LODs to smooth the transition. Maybe “dark magic” or “trade secret” is the appropriate word? Everything is one monolithic shader.

Also, they improved streaming asset loading… which was brought up because overall fidelity is bound by resources. In terms of content creation/generation, quality is a product of multiple live feedback loops. Path-tracing effects, for example, may opt in to reduced feature sets.

I know that’s not a bleeding-edge WASM port, but perhaps you’ll find it inspiring.

~ ProState “Pass-phrase” Potree

Why not use the three.js simplifier and mipmapping examples? It’s straightforward, with no extra LOD meshes adding to the file size.
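
Something along these lines, using SimplifyModifier and THREE.LOD from the three.js addons. A minimal sketch, assuming an existing geometry, material, and scene; the ratios and switch distances are made-up placeholders:

```js
import * as THREE from 'three';
import { SimplifyModifier } from 'three/addons/modifiers/SimplifyModifier.js';

const modifier = new SimplifyModifier();
const lod = new THREE.LOD();

for (const { ratio, distance } of [
  { ratio: 0.0, distance: 0 },    // full detail up close
  { ratio: 0.5, distance: 50 },   // drop half the vertices at mid range
  { ratio: 0.85, distance: 150 }, // heavily simplified far away
]) {
  // SimplifyModifier takes the number of vertices to *remove*.
  const removeCount = Math.floor(geometry.attributes.position.count * ratio);
  const level = ratio === 0 ? geometry : modifier.modify(geometry, removeCount);
  lod.addLevel(new THREE.Mesh(level, material), distance);
}

scene.add(lod); // the renderer calls lod.update(camera) automatically by default
```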

Demo

I think virtual geometry is amazing as a piece of tech, but I agree that 9 times out of 10 a good set of discrete LoDs will do the job just as well, if not better. They have close to zero overhead, unlike virtual geometry, and if done well the user won’t notice any popping whatsoever. Virtual geometry shines when you have very large objects that are intended to be viewed partially, for example large terrain or a massive building. The user can focus on a small portion of the overall model, and that’s where virtual geometry will be head and shoulders better than discrete LoDs could ever be.

There is a reason why LoDs haven’t really been a massive issue for big studios in the past: they work. They take effort to build, but they work well. That effort is much lower today, because we don’t need to think about individual triangles so much; 100 triangles more or fewer is not going to make a massive difference. And we’re not aiming for anything like 50-triangle meshes. Well, not usually.

Most graphics cards will happily rasterize 200,000,000 triangles per frame at 60 FPS, even ones from 10 years ago. So you can be a lot more sloppy with your LoDs, and you don’t have to be very daring with distances. If you see a pop, just make that LoD transition happen at a larger distance; who cares if the overall triangle count per frame goes up a little.

Virtual geometry is amazing, and as someone who spent a lot of effort implementing the thing from scratch, even back before zeux released his wonderful clusterization library, I do believe it’s not a panacea. It has its place and its use cases, but it’s not a solution for everything. Beyond that, virtual geometry has a cost, both in integration/engineering and in frame-time overhead. If your use case fits well, those overheads are a good trade, but if your use case is a poor fit for virtual geometry, you’re better off just going with traditional tech. There is a reason why it’s still around.

Actually, a shader smoothing the geometry transition is what I had in mind, but I haven’t yet found anything relatively easy to implement. It’s all R&D; I’m very interested in all solutions, thanks!

That video chapter dovetailed with destructible physics. So presumably LOD on rock vertex groups would be easy, but tweening an active demon arm breaking a board would be hard.

There was a paper, I forget the name of it; I think it was written by Hugues Hoppe in the ’90s or early 2000s. The idea is based around parametric mesh simplification. That is, we use edge collapse to simplify a mesh - this is pretty standard - but we also record these transforms. When we collapse an edge, we take its two vertices, remove the triangles the collapse degenerates, and place a new merged vertex somewhere along the old edge.
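
To make the operation concrete, here is a hypothetical sketch of one recorded collapse on an indexed mesh. This is illustrative, not the three.js simplifier’s actual internals; positions is assumed to be a Float32Array of xyz triples:

```js
// Collapse edge (a, b) into one vertex placed along the old edge (lerp by t).
function collapseEdge(positions, indices, a, b, t = 0.5) {
  const merged = [
    positions[a * 3 + 0] * (1 - t) + positions[b * 3 + 0] * t,
    positions[a * 3 + 1] * (1 - t) + positions[b * 3 + 1] * t,
    positions[a * 3 + 2] * (1 - t) + positions[b * 3 + 2] * t,
  ];
  positions.set(merged, a * 3); // reuse slot a for the merged vertex

  const newIndices = [];
  for (let i = 0; i < indices.length; i += 3) {
    // Redirect references to b onto a, then drop degenerate triangles
    // (the ones that used the collapsed edge).
    const tri = [indices[i], indices[i + 1], indices[i + 2]].map(v => (v === b ? a : v));
    if (tri[0] !== tri[1] && tri[1] !== tri[2] && tri[0] !== tri[2]) newIndices.push(...tri);
  }

  // The record is the part a simplifier normally throws away - keep it for morphing.
  return { indices: newIndices, record: { removed: b, kept: a, position: merged } };
}
```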

For LoD morphing, I imagine it’s based on that. In fact, the original paper(s) that I read proposed this exact use case of allowing… sort-of continuous LoD by replaying edge collapses forward and in reverse.

So, say we have LoD(N) and LoD(N+1). We also have EC(N), the set of edge collapse operations that led us from N to N+1, where N+1 is the coarser version of N. When we transition from N to N+1, we can keep rendering N and, over time, morph its vertices to where they would be in N+1, since we know which pairs in N were used to produce vertices in N+1. I hope this is clear, and the reverse case is obvious.
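
A minimal sketch of the forward direction with a ShaderMaterial. The names targetPosition and uMorph are made up here: targetPosition holds, per vertex of LoD(N), the position that vertex collapses to in LoD(N+1).

```js
const morphMaterial = new THREE.ShaderMaterial({
  uniforms: { uMorph: { value: 0.0 } },
  vertexShader: /* glsl */ `
    attribute vec3 targetPosition; // where this vertex ends up in the coarser LoD
    uniform float uMorph;          // 0 = pure LoD(N), 1 = vertices sit at LoD(N+1) positions
    void main() {
      vec3 p = mix(position, targetPosition, uMorph);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: /* glsl */ `
    void main() { gl_FragColor = vec4(1.0); } // flat white, just for the sketch
  `,
});

// Drive uMorph from camera distance (or time) each frame; once it reaches 1,
// swap in the real LoD(N+1) mesh and reset.
```

In practice you’d morph normals the same way, or the lighting will pop even if the silhouette doesn’t.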

The small sticking point is that you need to keep that edge-collapse metadata; if you don’t know what the transform from N to N+1 was, you’re not in a good position. You can always guess, but there is a good reason why tech like image morphing from the ’90s didn’t survive until today - it’s not great.

You can build something like this from the three.js simplifier, but you’d need to modify it to output the collapse decisions.
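
If you did patch it, the glue to feed the morph shader could be small. A hypothetical sketch, assuming collapse records shaped like the earlier one ({ removed, kept, position }) and a single pass of collapses (chained collapses would need the removed-to-kept chains resolved first):

```js
import * as THREE from 'three';

function buildTargetPositions(fineGeometry, records) {
  // Vertices that survive simplification default to staying put.
  const target = fineGeometry.attributes.position.array.slice();
  for (const { removed, kept, position } of records) {
    target.set(position, removed * 3); // the removed vertex slides onto the merge point
    target.set(position, kept * 3);    // ...and so does the vertex it merged with
  }
  fineGeometry.setAttribute('targetPosition', new THREE.BufferAttribute(target, 3));
}
```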

Personally, I don’t think it’s worth it. You need more code for the vertex morphing to work, and you need to push the edge-collapse data to the GPU for each LoD transition to play the animation. You will also be forced to deal with a LOT of edge cases: for example, what happens when we are in the middle of a transition from N to N+1 and we need to go to N+2 now? What about going back? To my knowledge, having read a lot of presentations from actual graphics engine teams in the industry, it’s not a popular technique. You can do it, and some cases surely make it worth the effort, but it’s not a popular choice.

I reckon that, for what it costs, virtual geometry will be a more appealing choice from a performance perspective.
