glTF model: scale to viewport and center

Hi.

I am really struggling with this. I am a beginner, and this stuff seems impossible.

I am loading a glTF model, and I want to scale it so that it fits the renderer viewport perfectly and is centered.

I do not want to modify the camera position or scale; I want to scale and position the model.

Any help is appreciated.

I’m also curious whether someone will propose a way to fit perfectly; technically this means pixel-level precision. Bounding boxes and bounding spheres will not produce a perfect fit.

The following illustration shows an interesting case – the result depends on the distance and on the shape of the model (sometimes the blue part should be considered, sometimes the red part).

On a side note, from an artistic point of view, a perfect fit might require margins around the model. A model touching the border looks as upsetting as text touching the edges of a page. Thus, my suggestion is to look for an aesthetically perfect fit: apart from looking better, it is also easier to calculate.
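For illustration, a minimal sketch of such an aesthetic fit, assuming a PerspectiveCamera looking at an origin-centred model from a known distance; fitWithMargin and the 0.9 margin factor are hypothetical choices, not an established API:

import { Box3, Sphere } from 'three';

// Scale the model so its bounding sphere fills the smaller viewport
// dimension at the given distance, leaving a margin around it.
function fitWithMargin(model, camera, distance, margin = 0.9) {
  const sphere = new Box3()
    .setFromObject(model)
    .getBoundingSphere(new Sphere());

  // Visible half-height and half-width of the frustum at that distance.
  const halfHeight = Math.tan((camera.fov * Math.PI) / 360) * distance;
  const halfWidth = halfHeight * camera.aspect;

  // Fit the sphere into the smaller half-extent, shrunk by the margin.
  const scale = (Math.min(halfWidth, halfHeight) * margin) / sphere.radius;
  model.scale.multiplyScalar(scale);
}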


Drei has this: the Resize component, which scales anything you wrap to 1 unit; from there you only need to scale to viewport.width or height.

You can find the code in the GitHub repo pmndrs/drei (🥉 useful helpers for react-three-fiber).

PS: it also has a Bounds component, which scales the camera instead, most likely the only complete open-source implementation of that on the web.

There’s also a Center component.
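As a minimal sketch of how these compose (the model path and component layout here are hypothetical, not taken from the thread):

import { Canvas } from '@react-three/fiber';
import { Center, Resize, useGLTF } from '@react-three/drei';

function Model() {
  // Hypothetical asset path; replace with your own model.
  const { scene } = useGLTF('/model.glb');
  return <primitive object={scene} />;
}

export default function App() {
  return (
    <Canvas>
      <ambientLight />
      {/* Resize normalises the wrapped content to a 1-unit box,
          Center positions it around the origin. */}
      <Resize>
        <Center>
          <Model />
        </Center>
      </Resize>
    </Canvas>
  );
}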

Why does it work?

Scaling a model changes which parts of the model are visible at its “border”.

It’s very naive, but it somewhat works. You normalise the model, though you do need to pick a constraint like width or height. Once that’s done, scaling it up to the viewport will match the screen.
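In react-three-fiber the viewport is measured in world units at the camera; outside React the same idea looks roughly like this in plain three.js, where fitToViewport is a hypothetical helper assuming a PerspectiveCamera and a known camera-to-model distance:

import { Box3, Vector3 } from 'three';

// Normalise the model to a 1-unit box, then scale it to the
// visible height of the viewport at the model's distance.
function fitToViewport(model, camera, distance) {
  // 1. Normalise: constrain the largest axis to 1 unit.
  const size = new Box3().setFromObject(model).getSize(new Vector3());
  const maxAxis = Math.max(size.x, size.y, size.z);
  model.scale.multiplyScalar(1 / maxAxis);

  // 2. Visible height of the view frustum at that distance.
  const vFov = (camera.fov * Math.PI) / 180;
  const visibleHeight = 2 * Math.tan(vFov / 2) * distance;

  // 3. Scale the unit-sized model up to the visible height
  //    (use visibleHeight * camera.aspect to constrain by width).
  model.scale.multiplyScalar(visibleHeight);
}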

I would prefer fitting the camera, because it avoids the problem you mention.

That’s the part that I tend to think should not work for some models:

In perspective projection, scaling an object N times will not make its image on the screen N times bigger (horizontally and vertically). Of course, I might be wrong… I’ll try to find a counter-example.


Edit:

It was fairly easy. M is the initial model. 3M is the model scaled by 3. M+M+M is the image of M stacked 3 times (I used Gimp for this). This shows clearly that 3M ≠ M+M+M. Actually, the size of 3M also depends on where the center of the model is. The right-most image shows a side view of the model.

Explanation: because of both the perspective projection and the depth of the model, different parts of the model are visually scaled differently (the back of the model changes less than the front). Thus, when the model is small, its height is defined by its bright part; when the model is scaled up, its height is defined by its dark part.
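To make this concrete, here is a quick pinhole-camera check (focal length f, depth z measured away from the camera; these symbols are mine, not from the demo). A point at height y and depth z projects to screen height

y_s = f · y / z

Scaling the model by N about its centre, which sits at depth z_c, moves the point to height N·y and depth z_c + N·(z − z_c), so its new screen height is

y_s′ = f · N · y / (z_c + N·(z − z_c))

which equals N·y_s only when z = z_c, i.e. for a model with zero depth. Vertices in front of the centre grow faster on screen than vertices behind it, which is exactly why the bright and dark parts trade places.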

You can try the demo here: https://codepen.io/boytchev/pen/abPvdRV


Hmmm, it seems I didn’t consider all the side effects. I did it once, and it might have been the model, but the results were good. Since the camera FOV looks strange with a blown-up model, I never tried again.


Three’s Frustum has a containsPoint method. I suppose one way of achieving this could be to scale the model up to a unit size that sits just outside the view, iterate through all of the geometry’s position attributes, and check whether the view frustum contains every point of the model. If it does, the model is contained in the view; if not, it can be scaled down in small increments in a loop until all points are contained. This could be more process-intensive than other approaches, but in theory it could work well…
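A minimal sketch of that loop, assuming static meshes and world-space vertices; shrinkIntoView and the 0.98 step are hypothetical choices:

import { Frustum, Matrix4, Vector3 } from 'three';

// Shrink the model until every vertex lies inside the camera frustum.
function shrinkIntoView(model, camera, step = 0.98) {
  const frustum = new Frustum();
  const v = new Vector3();

  const allVerticesVisible = () => {
    camera.updateMatrixWorld();
    frustum.setFromProjectionMatrix(
      new Matrix4().multiplyMatrices(
        camera.projectionMatrix,
        camera.matrixWorldInverse
      )
    );
    let inside = true;
    model.traverse((child) => {
      if (!inside || !child.isMesh) return;
      const position = child.geometry.attributes.position;
      for (let i = 0; i < position.count; i++) {
        v.fromBufferAttribute(position, i).applyMatrix4(child.matrixWorld);
        if (!frustum.containsPoint(v)) {
          inside = false;
          return;
        }
      }
    });
    return inside;
  };

  // Cap the iterations so a model that can never fit does not hang.
  for (let i = 0; i < 200 && !allVerticesVisible(); i++) {
    model.scale.multiplyScalar(step);
    model.updateMatrixWorld(true);
  }
}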

What if there is a custom vertex shader that manipulates the vertices?

🤔 True, I suppose in its most basic form it would only account for static meshes, but it could be adapted to check for the greatest vertex offset and account for that value?

Yes, the solution for the most general case appears to be inelegant, but if we can assume some restrictions (like a static mesh, a non-perfect fit, or zero depth), it becomes more feasible.

For static meshes, my suggestion would be to scan all vertices just once. For each vertex, calculate* the maximal scaling that will keep it within the frustum. Then pick the smallest of all these scale factors.


* I think that for a single dimensionless point the best scaling factor is directly computable. I’m not 100% sure, but I have some reasonable expectation that this is true.
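It is indeed directly computable if the scaling happens about the world origin (so the model must be centred there first). Each frustum plane reads n·x + d ≥ 0 on the inside; a vertex p scaled to s·p stays inside while s·(n·p) + d ≥ 0, so whenever n·p < 0 that vertex caps the scale at s ≤ −d / (n·p). A sketch, with maxScaleInFrustum as a hypothetical helper:

import { Frustum, Matrix4, Vector3 } from 'three';

// Largest uniform scale (about the world origin) that keeps every
// vertex of the model inside the camera frustum.
function maxScaleInFrustum(model, camera) {
  camera.updateMatrixWorld();
  const frustum = new Frustum().setFromProjectionMatrix(
    new Matrix4().multiplyMatrices(
      camera.projectionMatrix,
      camera.matrixWorldInverse
    )
  );

  const v = new Vector3();
  let maxScale = Infinity;

  model.traverse((child) => {
    if (!child.isMesh) return;
    const position = child.geometry.attributes.position;
    for (let i = 0; i < position.count; i++) {
      v.fromBufferAttribute(position, i).applyMatrix4(child.matrixWorld);
      for (const plane of frustum.planes) {
        const np = plane.normal.dot(v);
        // Only planes the vertex moves toward can be violated.
        if (np < 0) maxScale = Math.min(maxScale, -plane.constant / np);
      }
    }
  });

  return maxScale;
}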


OK guys, I am starting to understand more and more, but I still need help.

I am loading the model, putting it in a group, getting the size and scale, and applying it. However, I noticed that the model size is not exactly the same as in the online glTF viewer. I can’t use their code because my project requires the model in a group…

Below is a snippet of the code that I am using to scale the group; please let me know if this can be improved.

import { Box3, Vector3 } from 'three';

// This is where the model is loaded...
this.group.add(this.model);
this.scene.add(this.group);

this.model.updateMatrixWorld(); // donmccurdy/three-gltf-viewer#330

// Measure the group, then center the model on the origin.
const box = new Box3().setFromObject(this.group);
const size = box.getSize(new Vector3()).length(); // bounding-box diagonal
const center = box.getCenter(new Vector3());
this.size = size;

// Move the model so its bounding-box center sits at the origin.
this.model.position.sub(center);

// Normalise the group so the bounding-box diagonal is 1 unit long.
this.modelScale = 1 / size;

this.defaultCamera.updateProjectionMatrix();

this.group.scale.set(this.modelScale, this.modelScale, this.modelScale);
Thank you!

This is a (really) long story…


Thank you. The issue here is that some models have huge scales and others tiny scales; I don’t know if this will work…

Hmm…

The renderer.getSize() does not exist; I tried using the viewport width and height, but the scale is huge!

You still used const scaleFactor = boundingBoxSize.divide(renderer.getSize());

I think you should test your code before posting it here, to avoid more confusion 🙂