On a side note, from an artistic point of view a perfect fit might require margins around the model. A model touching the border looks as jarring as text touching the edges of a page. Thus, my suggestion is to aim for an aesthetically pleasing fit – apart from looking better, it is also easier to calculate.
That’s the part that I tend to think should not work for some models:
In perspective projection, scaling an object N times will not make its image on the screen N times bigger (horizontally and vertically). Of course, I might be wrong … I’ll try to find a counter-example.
It was fairly easy. M is the initial model. 3M is the model scaled by 3. M+M+M is the image of M stacked 3 times (I used GIMP for this). This shows clearly that 3M ≠ M+M+M. Actually, the size of 3M also depends on where the center of the model is. The right-most image shows a side view of the model.
Explanation: because of both the perspective projection and the depth of the model, different parts of the model are visually scaled differently (the back of the model changes less than the front). Thus, when the model is small, its height is defined by its bright part; when the model is scaled up, its height is defined by its dark part.
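A tiny numeric sketch of this effect, using a hypothetical pinhole projection (a point at height y and depth z projects to y·f/z). Scale the model's front and back corners by 3 about the model's center and compare the projected heights – the numbers here (a model 2 units deep, centered 5 units from the camera) are made up for illustration:

```javascript
// Minimal pinhole projection: a point at height y and depth z (in front of
// the camera) projects to y * f / z on the image plane; f is focal length.
const project = (y, z, f = 1) => (y * f) / z;

// A model whose center sits 5 units from the camera, with the front face at
// depth 4 and the back face at depth 6 (so the model is 2 units deep).
const center = { y: 0, z: 5 };
const front = { y: 0.5, z: 4 };
const back = { y: 0.5, z: 6 };

// Scale a point by s about the model's center (the camera stays put).
const scaleAbout = (p, s) => ({
  y: center.y + s * (p.y - center.y),
  z: center.z + s * (p.z - center.z),
});

const s = 3;
const frontScaled = scaleAbout(front, s);
const backScaled = scaleAbout(back, s);

// The front corner's projected height grows 6x, the back corner's only
// 2.25x – a 3x scale is not a 3x bigger image, and front != back.
console.log(project(frontScaled.y, frontScaled.z) / project(front.y, front.z));
console.log(project(backScaled.y, backScaled.z) / project(back.y, back.z));
```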
Hmmm, it seems I didn’t consider all the side effects. I did it once (it might have been the model) and the results were good, but since the camera FOV looks strange with a blown-up model, I never tried again.
Three’s Frustum has a containsPoint method. I suppose one way of achieving this could be to scale the model up to a unit size that sits just outside the view, iterate through all attribute.position’s of the geometry, and check whether the view frustum contains all points of the model. If it does, the model is contained in the view; if not, it can be scaled down in small increments in a loop until all points are contained. This could of course be more process-intensive than other approaches, but in theory it could work well…
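A rough sketch of that loop, written with a pluggable `containsPoint` predicate so it stays self-contained. In three.js the predicate would be `frustum.containsPoint(...)` (with the frustum built via `Frustum.setFromProjectionMatrix`) and `points` would come from `geometry.attributes.position`, transformed to world space. The names and default step size here are my own invention:

```javascript
// Shrink a model in small steps until every vertex lies inside the view.
//   points        – array of {x, y, z} vertex positions (world space)
//   center        – point to scale about (e.g. the bounding-box center)
//   containsPoint – predicate: true if a point is inside the view frustum
//   step          – multiplicative shrink factor applied each iteration
function shrinkToFit(points, center, containsPoint, step = 0.95, maxIters = 200) {
  const scaled = (p, s) => ({
    x: center.x + s * (p.x - center.x),
    y: center.y + s * (p.y - center.y),
    z: center.z + s * (p.z - center.z),
  });
  let s = 1;
  for (let i = 0; i < maxIters; i++) {
    if (points.every((p) => containsPoint(scaled(p, s)))) return s;
    s *= step;
  }
  return s; // gave up after maxIters; best effort
}
```

With three.js you would pass something like `(p) => frustum.containsPoint(new Vector3(p.x, p.y, p.z))` as the predicate.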
Yes, the solution for the most general case appears to be inelegant, but if we can assume some restrictions (like a static mesh, a non-perfect fit, or zero depth) it becomes more feasible.
For static meshes my suggestion would be to scan all vertices just once: for each vertex, calculate* the maximal scaling that will keep it within the frustum, then pick the smallest of all those scale factors.
* I think that for a single dimensionless point the maximal scaling factor is directly computable. I’m not 100% sure, but I have a reasonable expectation that this is true.
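For the curious, here is a sketch of that direct computation under simplifying assumptions of my own choosing: everything is in camera space with +z as the view direction, the frustum is symmetric, the near/far planes are ignored, and the scaling is done about a center that is itself inside the frustum. Each side plane of the frustum then turns into a linear inequality in the scale factor s, so the maximal s is directly solvable per vertex:

```javascript
// Largest scale factor s such that vertex p, scaled about `center`, stays
// inside a symmetric frustum. Camera space, depth = positive z forward.
// tx, ty are tan(horizontal half-FOV) and tan(vertical half-FOV).
// Assumes `center` itself lies inside the frustum.
function maxScaleForVertex(p, center, tx, ty) {
  const d = { x: p.x - center.x, y: p.y - center.y, z: p.z - center.z };
  // The scaled vertex is q(s) = center + s*d. Each side plane is a linear
  // constraint a*s <= b on the scale factor:
  const planes = [
    [ d.x - tx * d.z, tx * center.z - center.x], // right:   q.x <=  tx*q.z
    [-d.x - tx * d.z, tx * center.z + center.x], // left:   -q.x <=  tx*q.z
    [ d.y - ty * d.z, ty * center.z - center.y], // top:     q.y <=  ty*q.z
    [-d.y - ty * d.z, ty * center.z + center.y], // bottom: -q.y <=  ty*q.z
  ];
  let sMax = Infinity;
  for (const [a, b] of planes) {
    if (a > 0) sMax = Math.min(sMax, b / a); // only a > 0 caps the scale
  }
  return sMax;
}

// The model's best scale is then the minimum over all vertices:
const fitScale = (points, center, tx, ty) =>
  Math.min(...points.map((p) => maxScaleForVertex(p, center, tx, ty)));
```

For example, with a 90° horizontal FOV (tx = 1) and a center 5 units deep, a vertex 2 units to the side of the center can be scaled by at most 2.5 before it touches the right plane.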
Ok guys, I am starting to understand more and more, but I still need help.
I am loading the model, putting it in a group, getting the size and scale, and applying them. However, I noticed that the model size is not exactly the same as in the online glTF viewer. I can’t use their code because my project requires the model to be in a group…
Below is a snippet of the code that I am using to scale the group; please let me know if this can be improved.
import { Box3, Vector3 } from 'three';

// This is where the model is loaded...
this.model.updateMatrixWorld(); // donmccurdy/three-gltf-viewer#330

// Measure the group's bounding box.
const box = new Box3().setFromObject(this.group);
const size = box.getSize(new Vector3()).length(); // diagonal of the box
const center = box.getCenter(new Vector3());
this.size = size;

// Re-center the model on the origin.
this.model.position.x += (this.model.position.x - center.x);
this.model.position.y += (this.model.position.y - center.y);
this.model.position.z += (this.model.position.z - center.z);

// Normalize the group so the bounding-box diagonal has length 1.
this.modelScale = 1 / size;
this.group.scale.set(this.modelScale, this.modelScale, this.modelScale);
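One thing worth noting: `box.getSize(...).length()` is the *diagonal* of the bounding box, which is one reason the result can differ from viewers that fit based on the camera FOV instead. A hedged sketch of the FOV-based calculation (pure pinhole-camera math; in three.js, `camera.fov` is the vertical FOV in degrees):

```javascript
// Distance at which an object of extent `size` exactly fills a camera with
// vertical field of view `fovDeg` (in degrees): size/2 = distance * tan(fov/2).
function fitDistance(fovDeg, size) {
  const halfFovRad = (fovDeg * Math.PI) / 360;
  return size / 2 / Math.tan(halfFovRad);
}

// Hypothetical usage with three.js, after centering the group as above:
//   const size = box.getSize(new Vector3());
//   const maxDim = Math.max(size.x, size.y, size.z);
//   camera.position.z = fitDistance(camera.fov, maxDim) * 1.1; // 10% margin
```

A narrower FOV needs a larger distance for the same object, so `fitDistance(60, h) > fitDistance(90, h)`.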