Positioning camera according to size of the object

Hi,
I am loading glTF models with GLTFLoader. Currently I position the camera by multiplying the max vector of the model's bounding box, but with this implementation the camera is not positioned properly when the model is very small (say < 0.1). How can I change the implementation so that the camera is positioned according to the size of the model's bounding box?
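Roughly what I am doing now (a simplified sketch, not my exact code; the model path and variable names are placeholders):

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const loader = new GLTFLoader();
loader.load('model.gltf', (gltf) => {
  scene.add(gltf.scene);

  // World-space bounding box of the whole model
  const box = new THREE.Box3().setFromObject(gltf.scene);

  // Current approach: scale the box's max corner to get a camera position.
  // This breaks down when the model is very small, because box.max ends up
  // very close to the origin.
  camera.position.copy(box.max).multiplyScalar(2);
  camera.lookAt(box.getCenter(new THREE.Vector3()));
});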

Thanks,
Binoy

Your problem is seriously underspecified:

  • too many variables (camera position, orientation, FOV and type; geometry position, orientation, etc.)
  • too few boundary conditions

You need

  • The bounding sphere radius.
  • The center of the geometry in world space.
  • The camera FOV.

var radius = geometry.boundingSphere.radius; // call geometry.computeBoundingSphere() first if boundingSphere is null
var cog = mesh.localToWorld(geometry.boundingSphere.center.clone()); // center of geometry in world space
var fov = camera.fov; // vertical field of view in degrees
camera.position.set( cog.x, cog.y, cog.z + 1.1 * radius / Math.tan( fov * Math.PI / 360 ) );

I use this formula to position the perspective camera in my app. Object sizes vary from a coin to a city block.
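If you only have the loaded gltf.scene rather than a single mesh with a computed bounding sphere, you can derive the sphere from a world-space bounding box. A sketch along those lines (the function and variable names are just illustrative, and the near/far adjustment is an extra suggestion on top of the formula above):

// Frame any object with a perspective camera using the formula above.
function frameObject(camera, object, margin = 1.1) {
  // Bounding sphere of the whole object in world space
  const box = new THREE.Box3().setFromObject(object);
  const sphere = box.getBoundingSphere(new THREE.Sphere());

  const fov = camera.fov; // vertical FOV in degrees
  const dist = margin * sphere.radius / Math.tan(fov * Math.PI / 360);

  camera.position.set(sphere.center.x, sphere.center.y, sphere.center.z + dist);
  camera.lookAt(sphere.center);

  // Keep the clip planes proportional to the model size, so very small
  // (< 0.1) or very large models are not clipped away.
  camera.near = dist / 100;
  camera.far = dist * 100;
  camera.updateProjectionMatrix();
}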

i was hunting down bits and pieces for weeks but just couldn’t find something that is truly complete and will handle both orthographic and perspective cameras, has damping, margins, etc. i did find snippets for both cameras on stackoverflow, and there’s also a bigger thread here on discourse, but i always ran into weird edge cases.

i’ve collected everything i could find and with help from grabcad we more or less made it work: GitHub - pmndrs/drei: 🥉 useful helpers for react-three-fiber (demo Bounds and makeDefault - CodeSandbox)

the component is react but the code is just threejs.
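the perspective case is basically the formula above; the orthographic case boils down to sizing the frustum from the same bounding sphere. a rough plain-threejs sketch of the idea (not the actual drei code, names are illustrative):

// Rough idea only: fit an orthographic camera around a bounding sphere.
function frameOrthographic(camera, sphere, margin = 1.1) {
  const aspect = (camera.right - camera.left) / (camera.top - camera.bottom);
  const halfH = sphere.radius * margin;
  const halfW = halfH * aspect;

  camera.left = -halfW;
  camera.right = halfW;
  camera.top = halfH;
  camera.bottom = -halfH;

  // For an orthographic camera the distance doesn't affect the size on
  // screen; it only has to keep the sphere between the clip planes.
  camera.position.set(sphere.center.x, sphere.center.y, sphere.center.z + 2 * sphere.radius);
  camera.near = sphere.radius * 0.1;
  camera.far = sphere.radius * 4;
  camera.lookAt(sphere.center);
  camera.updateProjectionMatrix();
}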

That was helpful, thank you.

mesh.localToWorld(geometry.boundingSphere.center);
Should be:
mesh.localToWorld(geometry.boundingSphere.center.clone());
Otherwise localToWorld modifies the bounding sphere's center in place.
