Independently of compression, what are the ways to optimize a 3D model so that it is feasible to use on the Web?
I see that 3D-scanned / photogrammetry models tend to be very large in size; is there anything we can do for that kind of model?
I tried SimplifyModifier on one of my photo-scanned models and it destroyed the shape: what used to be a room now looks like an empty box.
It is also very, very slow, almost crashing every time!
Is there anything you would like to share in the comments?
Thank you, I am just trying to learn from others' experiences and knowledge!
This is the code I used:
import * as THREE from 'three'
// depending on the three.js version, these paths may be 'three/addons/...' instead
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader'
import { SimplifyModifier } from 'three/examples/jsm/modifiers/SimplifyModifier'

const modifier = new SimplifyModifier()

// scene, sceneObjects and mainscreen are defined elsewhere in the app
async function load_model() {
  const gltfLoader = new GLTFLoader()
  // load() takes a callback and does not return a promise, so loadAsync() is used with await
  const gltf = await gltfLoader.setPath('./models/drawing_room/').loadAsync('scene.gltf')
  console.log(gltf.scene)
  gltf.scene.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      const _child = child as THREE.Mesh
      // compute the bounding box (AABB) once; the mesh is static, so it never needs updating
      _child.geometry.computeBoundingBox()
      _child.castShadow = true
      _child.receiveShadow = true
      _child.scale.set(100, 100, 100)
      sceneObjects.push(child)
      // ask SimplifyModifier to remove 10% of the vertices
      const verticesToRemove = Math.floor(
        _child.geometry.attributes.position.count * 0.1
      )
      _child.geometry = modifier.modify(_child.geometry, verticesToRemove)
    }
    if (child instanceof THREE.Light) {
      const _light = child as THREE.Light
      _light.castShadow = true
      _light.shadow.bias = 0.0008 // to reduce shadow artifacts
      _light.shadow.mapSize.width = 1024
      _light.shadow.mapSize.height = 1024
    }
  })
  scene.add(gltf.scene)
  // hide the loading indicator and show the main screen
  ;(document.getElementById('loader') as HTMLDivElement).style.display = 'none'
  ;(mainscreen as HTMLElement).style.display = 'block'
}
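For context, the function is invoked once at startup, roughly like this (simplified; the scene and renderer are already set up at that point):

load_model().catch((err) => console.error('Failed to load the model:', err))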
Before simplification:
After simplification: