How to get the size of the loaded gltf object in pixels

How do I get the size of a loaded glTF object in pixels? For example, the canvas size is 480x360 — how do I find out how large the object is on the canvas (in pixels)? The camera fov is 45, its z position is 20, and everything else is 0.
This gives a Vector3:

let bbox = new THREE.Box3().setFromObject(gltf.scene);
let measure = new THREE.Vector3();
let size = bbox.getSize(measure); // here you get the size in world units

But how do I convert that to pixels?
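With those numbers, the relation between world units and pixels follows from the standard perspective formula (a rough sketch, not three.js API; it assumes the camera looks straight at the object along -z and the object is centered on the view axis):

```javascript
// World-space height visible at a given distance from a perspective camera:
//   visibleHeight = 2 * distance * tan(fov / 2)   (fov is the vertical FOV)
const fov = 45;          // degrees, as in the question
const distance = 20;     // camera z position, object at the origin
const canvasHeight = 360;

const visibleHeight = 2 * distance * Math.tan((fov / 2) * Math.PI / 180);
// ≈ 16.57 world units fill the canvas vertically at z = 20
const pixelsPerUnit = canvasHeight / visibleHeight; // ≈ 21.73 px per world unit

// So a bounding box that is size.y world units tall covers roughly
// size.y * pixelsPerUnit pixels vertically.
console.log(visibleHeight.toFixed(2), pixelsPerUnit.toFixed(2));
```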


Depends - what for?

What does it depend on? I want the object's size on the canvas so I can create an image out of it. If I create an image from the whole canvas, it contains a lot of blank space; I want the image to be exactly the size of the object.

On the use-case - in your case you can use cameraControls.fitToBox. That ensures the model fits the viewport 100% vertically.

Then you can use Box3 to calculate the model’s width/height ratio - and scale canvas width appropriately.


You can project each vertex of your mesh(es), find the 2D bounding box (a Box2) in screen space, then unproject the box.
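A hedged sketch of that idea: with three.js you would call Vector3.project(camera) on the eight corners of the Box3 and take the min/max in screen space. The plain math below (names are mine; camera at the origin looking down -z, no rotation) shows what that projection amounts to:

```javascript
// Project world-space points to pixel coordinates for a perspective camera
// at the origin looking down -z (the core of what Vector3.project() does,
// minus the camera's world transform).
function projectToPixels(points, fovDeg, width, height) {
  const f = 1 / Math.tan((fovDeg / 2) * Math.PI / 180); // focal factor
  const aspect = width / height;
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const [x, y, z] of points) {
    // Normalized device coordinates in [-1, 1]; z must be negative
    // (i.e. the point is in front of the camera).
    const ndcX = (f / aspect) * x / -z;
    const ndcY = f * y / -z;
    // NDC -> pixels (y is flipped because screen y grows downward)
    const px = (ndcX + 1) / 2 * width;
    const py = (1 - ndcY) / 2 * height;
    minX = Math.min(minX, px); maxX = Math.max(maxX, px);
    minY = Math.min(minY, py); maxY = Math.max(maxY, py);
  }
  return { x: minX, y: minY, width: maxX - minX, height: maxY - minY };
}

// The eight corners of a 2x2x2 box centered 20 units in front of the camera:
const corners = [];
for (const x of [-1, 1]) for (const y of [-1, 1]) for (const z of [-21, -19])
  corners.push([x, y, z]);
const box = projectToPixels(corners, 45, 480, 360);
console.log(box); // screen-space bounding box, roughly 46x46 px
```

The resulting rectangle is exactly the crop region you'd want for the image.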


Can you explain further, or is there a code sample? I can't follow.

I cannot use another third-party package; I'm already using three.js with much difficulty, based on the solution provided in: How to import GLTFLoader in umd library - #2 by Mugen87

You can port just this part — it's not super lengthy or complex (it can be simplified even further if you ignore the OrthographicCamera part.)

Well, it's not easy for me to port this library; there are too many function calls and global variables set elsewhere. Is there a standard three.js way?

An alternative, naïve approach is to find the size without three.js: capture the image, then crop the empty space from the top, right, bottom and left. That's all. This approach is naïve because it uses more computational resources.
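The cropping step boils down to scanning the canvas's RGBA buffer — in the browser you'd get it from ctx.getImageData(0, 0, width, height).data — for the first and last non-transparent row and column. A sketch (function and variable names are my own), assuming a transparent background:

```javascript
// Find the bounding box of non-transparent pixels in a flat RGBA buffer:
// 4 bytes (r, g, b, a) per pixel, row-major order.
function cropBounds(data, width, height) {
  let minX = width, minY = height, maxX = -1, maxY = -1;
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const alpha = data[(y * width + x) * 4 + 3];
      if (alpha !== 0) { // non-empty pixel
        if (x < minX) minX = x;
        if (x > maxX) maxX = x;
        if (y < minY) minY = y;
        if (y > maxY) maxY = y;
      }
    }
  }
  if (maxX < 0) return null; // fully empty image
  return { x: minX, y: minY, width: maxX - minX + 1, height: maxY - minY + 1 };
}

// Tiny 4x3 test image with two opaque pixels at (1,1) and (2,2):
const w = 4, h = 3;
const data = new Uint8ClampedArray(w * h * 4);
data[(1 * w + 1) * 4 + 3] = 255;
data[(2 * w + 2) * 4 + 3] = 255;
console.log(cropBounds(data, w, h)); // → { x: 1, y: 1, width: 2, height: 2 }
```

The returned rectangle can then be passed to ctx.drawImage or canvas cropping to produce the tightly-fitted image.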

You can try it here (the capture function is at lines 63-115):

https://codepen.io/boytchev/full/MWzaaGQ



We're already using this approach in our Scratch implementation: https://youtu.be/py2_R-h9MV4,
but would love a more performant approach.