THREE.SimplifyModifier vs. Progressive Mesh Streaming

I’m using the nexus project to load models into three.js.

I want to load a model step by step and render it at the same time: the object appears blurry at first and becomes clearer and clearer as loading goes on. This is very important for large models and gives a much better experience. The feature is essentially a load-time LOD.

I found some topics about THREE.SimplifyModifier on this forum today, so I’m wondering if anyone knows how to convert an .obj model and re-save it ordered by some logic like THREE.SimplifyModifier — maybe as one file or an array of files. Then I could load the model progressively and show it step by step.

The nexus project is very useful, but it has some unfixed bugs. I think using THREE.SimplifyModifier might be more efficient and better. Maybe three.js could offer a built-in loader like this.

Can anybody help me with that?

THREE.SimplifyModifier implements a very basic Mesh Decimation and Simplification algorithm which tries to reduce the vertex and face count of a 3D model with minimal shape changes. If you really want to apply this kind of algorithm to an existing OBJ model, I would use a content creation tool for this. For example Blender provides a Decimate Modifier which does exactly what you are looking for.
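For reference, in three.js the modifier is used as `new SimplifyModifier().modify(geometry, count)`, where `count` is the number of vertices to remove. To illustrate the basic idea behind decimation, here is a toy, standalone sketch using vertex clustering (snapping vertices to a coarse grid and merging those that land in the same cell) — note this is a much cruder technique than the edge-collapse approach SimplifyModifier actually uses, and the function and cell size are made up for illustration:

```javascript
// Toy mesh simplification by vertex clustering: snap each vertex to a grid
// cell and merge all vertices that fall into the same cell. Fewer vertices,
// roughly the same overall shape — the essence of decimation.
function clusterVertices(positions, cellSize) {
  const cellToIndex = new Map(); // grid cell key -> index of merged vertex
  const merged = [];             // merged vertex positions (flat xyz array)
  const remap = [];              // old vertex index -> merged vertex index
  for (let i = 0; i < positions.length; i += 3) {
    const key = [
      Math.round(positions[i] / cellSize),
      Math.round(positions[i + 1] / cellSize),
      Math.round(positions[i + 2] / cellSize),
    ].join(',');
    if (!cellToIndex.has(key)) {
      cellToIndex.set(key, merged.length / 3);
      merged.push(positions[i], positions[i + 1], positions[i + 2]);
    }
    remap.push(cellToIndex.get(key));
  }
  return { positions: merged, remap };
}
```

With a flat position array of three vertices where two are within the same 0.5-unit cell, the two nearby vertices collapse into one, and `remap` tells you how to rewrite the index buffer.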

In general, I am not a big fan of such algorithms. The biggest problem is that the simplification is generic and might change important parts of your asset (like the face of a character model) in an unnatural way. So in many cases it’s actually better when a designer does the simplification and provides for instance three models with different complexity. You can load and render the simplest model first, then the others. This kind of LOD is also known as discrete LOD and easy to implement.
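three.js already ships this selection logic as `THREE.LOD` (via `lod.addLevel(mesh, distance)`). As a standalone illustration of the rule it applies, here is a minimal sketch that picks a level from hand-authored models based on camera distance — the thresholds are made-up values for illustration:

```javascript
// Discrete LOD selection: given distance thresholds sorted ascending,
// return which model to render. Level 0 is the most detailed model;
// each threshold crossed switches to the next, simpler model.
function selectLOD(distance, thresholds) {
  let level = 0;
  for (const t of thresholds) {
    if (distance >= t) level++;
  }
  return level;
}

// e.g. with thresholds [20, 60]:
// distance  5 -> level 0 (high detail)
// distance 30 -> level 1 (medium)
// distance 90 -> level 2 (low)
```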

I’m not familiar with the linked nexus project, but I guess it is some sort of toolkit for Progressive Mesh Streaming. Progressive Mesh Streaming typically uses continuous LOD, which means the complexity of the model is continuously increased from a base version up to the highest resolution. Normally, data are compressed, managed in an interleaved buffer, and transferred/processed chunk by chunk. The buffer contains not only geometric but also texture data. The increasing complexity of the model is realized by (advanced) topological operations and might be view-dependent. As you can see, the whole approach is very sophisticated and not easy to implement. It also has a high decompression/parsing overhead on the client side. Besides, in my experience designers are not always happy with continuous LOD, since they can’t control how an object is rendered during the streaming process.
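To make the chunk-by-chunk idea concrete, here is a highly simplified sketch, assuming the server stores faces pre-sorted from coarse to fine and streams index data in chunks; after each chunk arrives, the range of faces that gets drawn is extended. In three.js the redraw step would map to something like `geometry.setDrawRange(0, faceCount * 3)` plus a render call — real systems like nexus do far more (compression, view-dependent refinement, texture streaming):

```javascript
// Sketch of "render while streaming": each arriving chunk of indices
// extends the visible portion of the mesh, so the model sharpens as
// more data comes in. onDraw is a callback that would trigger a render.
function createProgressiveMesh(onDraw) {
  const indices = [];
  return {
    addChunk(chunk) {
      indices.push(...chunk);     // append newly arrived face indices
      onDraw(indices.length / 3); // redraw with more faces visible
    },
    faceCount() {
      return indices.length / 3;
    },
  };
}
```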

Because of this, I would start with Draco-compressed glTF assets at different levels of complexity and see how well this approach works. The following extension might be interesting in this context, too:
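A minimal setup sketch for loading Draco-compressed glTF in three.js, swapping a coarse model for a detailed one once it arrives. This is browser/bundler code; the file names, decoder path, and the `scene` object are placeholders from your own app, and the Draco decoder files (shipped with three.js under `examples/jsm/libs/draco/`) must be served as static assets:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

// Point the decoder at wherever you copied the Draco decoder files.
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/'); // placeholder path

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);

// Load the coarse model first, then replace it with the detailed one.
// (In a robust version you would guard against out-of-order completion.)
let current = null;
function show(gltf) {
  if (current) scene.remove(current);
  current = gltf.scene;
  scene.add(current);
}
loader.load('model-low.glb', show);  // placeholder file names
loader.load('model-high.glb', show);
```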


Thank you for your reply.

I’m really interested in this topic. I’m trying to use Nexus too, but I’m facing a server error when using this library on “large” models (see:

Did you have the same problem?
Is there an explanation/solution that you know of?

Thanks in advance.