Optimization for large-scale 3D models

Independently of compression, what are the ways to optimize a 3D model so that it is feasible to use on the web?

I see that 3D-scanned or photogrammetry models are very large in size; is there anything we can do for that kind of model?

I tried SimplifyModifier on one of my photo-scanned models, and it destroyed the shape entirely; it now looks like a box, with nothing left of the room that was there before.

It is also very, very slow, almost crashing every time!

Is there anything you would like to share in the comments?

Thank you, I am just trying to learn from others’ experience and knowledge!

This is the code I wrote:

import * as THREE from 'three'
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader'
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier'

// scene, sceneObjects, mainscreen and modifier (a SimplifyModifier instance)
// are defined elsewhere in the module
async function load_model() {
    const gltfLoader = new GLTFLoader()
    gltfLoader.setPath('./models/drawing_room/')
    // load() takes a callback and returns void, so it cannot be awaited;
    // loadAsync() returns a promise instead
    const gltf = await gltfLoader.loadAsync('scene.gltf')
    console.log(gltf.scene)
    gltf.scene.traverse(function (child) {
        if (child instanceof THREE.Mesh) {
            const _child = child as THREE.Mesh
            // compute the bounding box (AABB) for its geometry once,
            // which is enough because this is a static mesh
            _child.geometry.computeBoundingBox()
            _child.castShadow = true
            _child.receiveShadow = true
            _child.scale.set(100, 100, 100)
            sceneObjects.push(child)
            // remove 10% of the vertices with SimplifyModifier
            const verticesToRemove = Math.floor(
                _child.geometry.attributes.position.count * 0.1
            )
            _child.geometry = modifier.modify(_child.geometry, verticesToRemove)
        }
        if (child instanceof THREE.Light) {
            const _light = child as THREE.Light
            _light.castShadow = true
            _light.shadow.bias = 0.0008 // to reduce artifacts in the shadow
            _light.shadow.mapSize.width = 1024
            _light.shadow.mapSize.height = 1024
        }
    })
    scene.add(gltf.scene)

    ;(document.getElementById('loader') as HTMLDivElement).style.display = 'none'
    ;(mainscreen as HTMLElement).style.display = 'block'
}

Before simplification:

After simplification:

That is a contradiction in itself. It will never be feasible if you load a big, bloated glTF, probably a ton of MB, and then try to fix it at runtime.

You compress it before you use it on the web; after that it may become feasible. It won’t be a glTF any longer, it will be a GLB. It won’t be 100 MB, but somewhere between 1 and 2 MB. You can easily crunch down vertices and surfaces in Blender with the Decimate modifier.
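For loading the compressed result on the frontend, here is a minimal sketch of how a Draco-compressed GLB can be loaded with GLTFLoader plus DRACOLoader. The decoder path, the file name scene.glb and the scene parameter are assumptions; adjust them to your own setup.

import * as THREE from 'three'
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader'
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader'

// the decoder files can be copied from three/examples/jsm/libs/draco/
// into a folder served by the app ('/draco/' here is an assumption)
const dracoLoader = new DRACOLoader()
dracoLoader.setDecoderPath('/draco/')

const gltfLoader = new GLTFLoader()
gltfLoader.setDRACOLoader(dracoLoader)

async function loadCompressedModel(scene: THREE.Scene) {
    // scene.glb is the hypothetical compressed output of the Blender/CLI step
    const gltf = await gltfLoader.loadAsync('./models/drawing_room/scene.glb')
    scene.add(gltf.scene)
}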


Wait, you mean the compression from glTF to GLB is that significant? Just to make sure I understood right: you mean from 100 MB down to around 1–2 MB?

And can we do that in our frontend code, or somewhere on the backend, instead of in a 3D modelling tool like Blender?

Yes, you can get compression rates like that. I would not load a model larger than 5 MB, and even then your mobile users are waiting a minute. You can do some of it at the build stage with command-line tools like gltf-transform, gltf-pipeline and gltfpack; these will help you with texture resizing and compression, deduplication, and compressing vertices (Draco or meshopt). But for remeshing and decimating geometry, or baking surface detail into normal maps, you can’t circumvent Blender.
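To illustrate the build-stage route, here is a rough sketch of a Node script using the gltf-transform API. The package names are real, but the exact options should be checked against the gltf-transform documentation for your version, and the file names are assumptions; the CLI can do similar things in a single command.

import { NodeIO } from '@gltf-transform/core'
import { ALL_EXTENSIONS } from '@gltf-transform/extensions'
import { dedup, prune, draco } from '@gltf-transform/functions'
import draco3d from 'draco3dgltf'

async function optimize() {
    // register extensions and the Draco encoder/decoder so the IO
    // object can read and write KHR_draco_mesh_compression data
    const io = new NodeIO()
        .registerExtensions(ALL_EXTENSIONS)
        .registerDependencies({
            'draco3d.encoder': await draco3d.createEncoderModule(),
            'draco3d.decoder': await draco3d.createDecoderModule(),
        })

    // scene.gltf / scene.glb are assumed file names
    const document = await io.read('scene.gltf')
    await document.transform(
        dedup(),   // merge duplicate meshes, materials and textures
        prune(),   // drop unused nodes, materials, textures, etc.
        draco()    // compress vertex data with Draco
    )
    await io.write('scene.glb', document)
}

optimize()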