I am really struggling with this, I am a beginner and this stuff seems impossible.
I am loading a gltf model and I want to scale it so that it fits perfectly the renderer viewport and center it.
I do not want to modify the camera position or scale I want to scale and position the model.
Any help is appreciated.
I’m also curious whether someone will propose how to fit perfectly – technically this means pixel precision. Bounding boxes and bounding spheres will not give a perfect fit.
The following illustration shows an interesting case – the result depends on the distance and on the shape of the model (sometimes the blue part should be considered, sometimes the red part).
On a side note, from an artistic point of view a perfect fit might require margins around the model. A model touching the border looks as upsetting as text touching the edges of a page. Thus, my suggestion is to look for aesthetically perfect fitting – apart from looking better, it is also easier to calculate.
Drei has this: the Resize component, which scales anything you wrap to 1 unit, and from there you only need to scale to viewport.width or height.
You’ll find the code in the GitHub repo: GitHub - pmndrs/drei: 🥉 useful helpers for react-three-fiber
PS: it also has a Bounds component, which adjusts the camera instead – most likely the only complete open-source implementation of that on the web.
There’s also a Center component.
Why does it work?
Scaling a model changes what parts of the model are visible at its “border”.
It’s very naive, but it somewhat works. You normalise the model, though you do need to pick a constraint like width or height. Once that’s done, scaling it up to the viewport will match the screen.
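The normalise-then-scale step can be sketched with plain math. Note that `fitScale` and the plain-object inputs are my own illustration of the idea, not drei’s actual API:

```javascript
// Sketch of "normalise to 1 unit, then scale to viewport".
// fitScale is a hypothetical helper, not part of drei.

// Uniform scale that makes a box of size (x, y) fit a viewport of
// (width, height) world units, with an optional aesthetic margin.
function fitScale(boxSize, viewport, margin = 0.9) {
  const sx = viewport.width / boxSize.x;   // scale needed to match width
  const sy = viewport.height / boxSize.y;  // scale needed to match height
  return Math.min(sx, sy) * margin;        // the tighter constraint wins
}

// A 2x1 model in a 4x4 viewport: width is the binding constraint.
const s = fitScale({ x: 2, y: 1 }, { width: 4, height: 4 }, 1);
console.log(s); // 2 — the model becomes 4x2 and spans the full width
```

The `margin` default of 0.9 is the “aesthetic” padding mentioned earlier in the thread; set it to 1 for an edge-to-edge fit.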
I would prefer fitting the camera, because it avoids the problem you mention.
That’s the part that I tend to think should not work for some models:
In perspective projection scaling an object N times will not make its image on the screen N times bigger (horizontally and vertically). Of course, I might be wrong … I’ll try to find a counter-example.
It was fairly easy. M is the initial model. 3M is the model scaled by 3. M+M+M is the image of M stacked 3 times (I used Gimp for this). This clearly shows that 3M ≠ M+M+M. Actually, the size of 3M also depends on where the center of the model is. The right-most image shows a side view of the model.
Explanation. Because of both the perspective projection and the depth of the model, different parts of the model will be visually scaled differently (the back of the model will change less than the front). Thus, when the model is small, its height is defined by its bright part; when the model is scaled up, its height is defined by its dark part.
You can try the demo here: https://codepen.io/boytchev/pen/abPvdRV
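The same effect can be shown with a few numbers (the camera model and all the distances below are made up for illustration): for a pinhole camera, a point at height y and distance z projects to y/z. Scaling the model about its own center moves the front face closer, so the silhouette grows faster than the scale factor:

```javascript
// Why scaling a model 3x does NOT scale its image 3x under perspective.
// Toy pinhole camera with focal length 1; all numbers are illustrative.

// Projected half-height of a point at height y and distance z.
const project = (y, z) => y / z;

const center = 6;           // distance from camera to model center
const depth = 1, halfH = 1; // model extends ±depth in z, ±halfH in y

for (const s of [1, 3]) {
  const front = project(s * halfH, center - s * depth); // front corner
  const back  = project(s * halfH, center + s * depth); // back corner
  // on screen, the taller of the two silhouettes wins
  console.log(`scale ${s}: front ${front.toFixed(3)}, back ${back.toFixed(3)}`);
}
// scale 1: front 0.200, back 0.143  -> image governed by the front
// scale 3: front 1.000, back 0.333  -> 5x taller, not 3x
```

This also matches the note above that the result depends on where the model’s center is: scaling about a different pivot changes how fast the front face approaches the camera.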
Hmm, it seems I didn’t consider all the side effects. I did it once – it might have been the model – but the results were good. Since the camera FOV looks strange with a blown-up model, I never tried again.
Three’s frustum has a containsPoint method. I suppose one way of achieving this could be to scale the model up to a unit size that sits just outside the view, iterate through all attribute.position’s of the geometry, and then check if the view frustum contains all points of the model. If it does, the model is contained in the view; if not, it can be scaled down by small increments in a loop until it does contain all points. This could of course be more process-intensive than another approach, but in theory it could work well…
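A minimal sketch of that shrink-until-contained loop, written without three.js so it’s self-contained – the frustum test here is hand-rolled perspective math with made-up camera numbers; in real code you would build a `THREE.Frustum` from the camera’s projection matrix and call `frustum.containsPoint(v)` instead:

```javascript
// Sketch: scale down in small increments until every vertex is inside
// the view frustum. Camera values are illustrative, not from three.js.

// Symmetric frustum: camera at origin looking along +z.
const cam = { near: 0.1, far: 100, tanV: Math.tan(0.5), aspect: 1 };

function inFrustum(p) {
  if (p.z < cam.near || p.z > cam.far) return false;
  const halfH = p.z * cam.tanV;     // frustum half-height at depth z
  const halfW = halfH * cam.aspect; // frustum half-width at depth z
  return Math.abs(p.y) <= halfH && Math.abs(p.x) <= halfW;
}

// Shrink the model about `center` until all vertices pass the test.
function shrinkToFit(vertices, center, step = 0.95) {
  let s = 1;
  const scaled = p => ({
    x: center.x + s * (p.x - center.x),
    y: center.y + s * (p.y - center.y),
    z: center.z + s * (p.z - center.z),
  });
  while (!vertices.every(p => inFrustum(scaled(p)))) s *= step;
  return s;
}

// A tall model centered 10 units in front of the camera.
const verts = [{ x: 0, y: 12, z: 10 }, { x: 0, y: -12, z: 10 }];
console.log(shrinkToFit(verts, { x: 0, y: 0, z: 10 }));
```

The `step` factor trades precision for iterations, which is exactly the “small increments” cost mentioned above.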
What if there is a custom vertex shader that manipulates the vertices?
True, I suppose in its most basic form it would only account for static meshes, but it could be adapted to check for the greatest vertex offset and account for that value?
Yes, the solution for the most general case appears to be not elegant, but if we can assume some restrictions (like static mesh, or non-perfect fit, or 0 depth) it is more feasible.
For static meshes my suggestion would be to scan all vertices just once. For each vertex, calculate* the maximal scaling that will keep it within the frustum. Then pick the smallest of all these scale factors.
* I think that for a single dimensionless point the best scaling factor is directly computable. I’m not 100% sure, but I have a reasonable expectation that this is true.
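It does seem directly computable, at least for planes. A sketch of the reasoning (helper names and the toy plane set are my own; in three.js the real planes come from `THREE.Frustum`, where each `THREE.Plane` has a `normal` and a `constant`): a vertex p scaled about center c becomes q(s) = c + s·(p − c). For a frustum plane written as n·x + d ≥ 0 (inside is non-negative), q(s) stays inside while (n·c + d) + s·n·(p − c) ≥ 0, which bounds s directly whenever n·(p − c) < 0:

```javascript
// Closed-form maximal scale per vertex, then take the minimum over all
// vertices. Assumes the scaling center c itself is inside the frustum.

const dot = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });

function maxScaleForVertex(p, c, planes) {
  let s = Infinity;
  for (const { n, d } of planes) {
    const slope = dot(n, sub(p, c)); // how fast the vertex approaches the plane
    if (slope < 0) s = Math.min(s, -(dot(n, c) + d) / slope);
  }
  return s; // Infinity if no plane ever constrains this vertex
}

function maxScaleForModel(vertices, c, planes) {
  return Math.min(...vertices.map(p => maxScaleForVertex(p, c, planes)));
}

// Toy "frustum" of just two planes, y <= 5 and y >= -5,
// written as n·x + d >= 0.
const planes = [
  { n: { x: 0, y: -1, z: 0 }, d: 5 },
  { n: { x: 0, y: 1, z: 0 }, d: 5 },
];
const c = { x: 0, y: 0, z: 0 };
console.log(maxScaleForModel([{ x: 0, y: 2, z: 0 }], c, planes)); // 2.5
```

One pass over the vertices, no iteration – which matches the suggestion above, at least for static meshes under a linear (plane-based) frustum.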