Is it possible to know at what distance of the camera from the cube the height of the perspective projection of the cube will be equal to the height of the screen?

We can project all the vertices of the cube into camera space. But at what distance from the camera will the projection of the top point be equal to 1, and the bottom -1?

Yes, it’s possible - I actually used a somewhat similar method to calculate at which distance from my globe I should place the camera so that the globe is in full view, i.e. it occupies the entire viewport. The key to such calculations is an appropriate version (according to the shape of the object) of the angular diameter formula, and integrating that correctly with the camera distance and its angle of view / field of view.

For example, in my case (a sphere), I set:

camera.position.set(distance / radius, 0, 0);
camera.fov = THREE.MathUtils.radToDeg(Math.asin(radius / distance)) * 2;

and had to use the asin() variant of the angular diameter formula, since for a sphere, the zones near the poles are not visible because of the sphere’s curvature. While you can probably safely use the atan() variant of the formula, you should also be aware of the fact that even though in your case the equivalent of the actual diameter is the length of a cube side, the distance is not necessarily the one from the camera to the center of the cube, but rather the one from the camera to the front side of the cube.
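
For reference, here's a minimal sketch of the two variants side by side (the radius and distance values are just placeholders, not the ones from my globe):

// Placeholder values, for illustration only
const radius = 1;      // sphere radius (or half of the relevant flat extent, e.g. half a cube side)
const distance = 5;    // camera distance (to the center for the sphere, to the front face for a flat extent)

// asin() variant - correct for a sphere, whose visible silhouette is the tangent circle
const fovSphere = 2 * THREE.MathUtils.radToDeg(Math.asin(radius / distance));

// atan() variant - fits a flat extent (e.g. a cube's front face) seen at that distance
const fovFlat = 2 * THREE.MathUtils.radToDeg(Math.atan(radius / distance));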

Obviously, I’m not sure what exactly 1 and -1 mean in your context, but usually you can adjust things according to your needs by multiplying such stuff by a factor. Naturally, you can deduce the distance instead of the .fov via simple math, since in Three.js its value is what's technically called the “angle of view”, given that it's expressed in degrees.

Thanks. Can you show the formula for a cube without changing the rotation of the camera and its fov, just for the distance (given that the camera can look at the cube at an angle)?

Ah, I see. It seems I misunderstood your objective - I thought you only wanted the initial (vertical) apparent size of the cube to be from y = -1 to y = 1 in camera space, but it seems you want to continuously adjust the camera distance (without altering its FOV or rotation) so that the cube vertically spans from y = -1 to y = 1 in camera space at all times…

If that’s the case, this simplifies the “formula” on one hand, and complicates it on the other hand. It simplifies it because for the same FOV, the relation between the apparent size and the distance is just plain inversely proportional, e.g. if you want the object to look half of its current size, you simply have to go to twice the current camera distance. It complicates it because you’d have to take into account the position of the cube, the rotation of the cube, and the rotation of the camera (assuming the camera always looks at the center of the cube, of course).

Considering that there would be a bit too many variables to take into account for a direct formula in such a case (even though the cube in itself is a fairly simple 3D shape), maybe a more rudimentary approach would be more effective. For example, since you said you could project all the vertices of the cube into camera space, why not iterate through the position attribute in the cube’s buffer geometry, project the corresponding Vector3-s into camera space, get their camera space minimum and maximum y components to calculate the current (vertical) apparent size in camera space, then multiply the current camera distance by the ratio between the current apparent size you just obtained and the desired apparent size (in your case, 1 - (-1) = 2)?
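
A rough sketch of that iteration could look like this (mesh, camera and the function name are placeholders; as discussed further down the thread, the result is an approximation once rotations and the cube's depth come into play, so it may need to be applied more than once):

// Sketch: project every vertex to normalized device coordinates (y spans -1..1
// on screen) and rescale the current camera distance by the measured span.
function fitDistanceByVertices(mesh, camera, desiredSpan = 2) {
  camera.updateMatrixWorld();
  mesh.updateMatrixWorld();

  const pos = mesh.geometry.getAttribute('position');
  const v = new THREE.Vector3();
  let minY = Infinity, maxY = -Infinity;

  for (let i = 0; i < pos.count; i++) {
    v.fromBufferAttribute(pos, i).applyMatrix4(mesh.matrixWorld).project(camera);
    minY = Math.min(minY, v.y);
    maxY = Math.max(maxY, v.y);
  }

  const currentSpan = maxY - minY;
  const currentDistance = camera.position.distanceTo(mesh.position);

  // same FOV: apparent size and distance are inversely proportional
  return currentDistance * (currentSpan / desiredSpan);
}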

Alternatively, you could probably use a proper combination of .computeBoundingBox() and Box3 to get the current apparent size (and make sure it’s in camera space eventually, and corresponding to the cube’s rotation and such) in order to avoid the iteration, then apply the camera distance multiplication based on the difference between the y components of the box’s .min and .max properties.
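
For what it's worth, a sketch of that Box3 route could look like the following (names are placeholders; as pointed out further down the thread, an axis-aligned bounding box overestimates a rotated cube, so this gives an upper bound of the extent rather than the exact value):

// Sketch: world-space AABB corners brought into camera (view) space.
function cameraSpaceHeightFromBox(mesh, camera) {
  camera.updateMatrixWorld();
  const box = new THREE.Box3().setFromObject(mesh);   // world-space AABB

  const corner = new THREE.Vector3();
  let minY = Infinity, maxY = -Infinity;

  for (let i = 0; i < 8; i++) {
    corner.set(
      i & 1 ? box.max.x : box.min.x,
      i & 2 ? box.max.y : box.min.y,
      i & 4 ? box.max.z : box.min.z
    ).applyMatrix4(camera.matrixWorldInverse);        // world -> camera space
    minY = Math.min(minY, corner.y);
    maxY = Math.max(maxY, corner.y);
  }

  return maxY - minY;   // vertical extent in camera space
}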

Obviously, there might be even simpler ways to do this since Three.js has a lot of methods and properties that can be used for just about anything, but I’m afraid that just like most folks using the library, I don’t have the needed extensive knowledge to be able to provide something close to a one liner that automagically gets you the expected result, so on that front maybe other experts here could cover that aspect.

Using a simple vertex projection is not suitable, because when the distance changes, the visible FOV and the proportions of the object change. So it would only work for an orthographic camera. It’s not a linear relationship, which is why I asked the question.

A good example that shows that the size of the face has not increased proportionally (the example is not entirely correct, but when the camera moves, a similar effect occurs).
https://improvephotography.com/wp-content/uploads/2015/09/focal-length-example1.gif

Another example:
https://i.ytimg.com/vi/qgF2mZTPkYs/maxresdefault.jpg

An object will visually fit into the canvas dimensions if it fits into the camera frustum (and the camera is set up to use the whole canvas for rendering).
For a perspective camera, the ratio of the half-height of the frustum to the distance is equal to the tangent of half the FOV angle:
h/d = tan(fov/2).
So in the case of a cube, for its front face to fit into the canvas the distance is:
d = box_height / (2 * tan(fov/2)).
Keep in mind FOV needs to be in radians.

I added 0.01 into the formulas so you can see a tiny bit more than the cube on each side.
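
Put into code, the formula would look something like this (assuming the cube is 1x1x1, centered at the origin, with the camera looking down the z axis; the exact way the 0.01 margin is applied in the fiddle may differ):

// d = box_height / (2 * tan(fov / 2)), with a tiny margin on each side
const fov = THREE.MathUtils.degToRad(camera.fov);    // .fov is in degrees, the formula needs radians
const boxHeight = 1;                                 // cube side, per the question
const margin = 0.01;                                 // a tiny bit of extra space on each side

// distance from the camera to the cube's front face
const distance = (boxHeight / 2 + margin) / Math.tan(fov / 2);

// the camera itself sits half a cube further back, measured from the cube's center
camera.position.set(0, 0, distance + boxHeight / 2);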

But what if the plane of the camera is not parallel to the plane of the side of the cube?

https://upload.wikimedia.org/wikipedia/commons/5/5f/Blender-mesh-cube.png

You’ll need to find the biggest size (horizontally and/or vertically) of the cube in its cross-section that’s parallel to the camera planes (and fit that into the frustum). The answer depends on how exactly you’re rotating the cube in 3D space: around which axes by what degree etc.

Technically, when the camera distance changes in Three.js, it’s the apparent size of the object (whose value can be calculated using the atan or asin formulas that I mentioned earlier) that changes, and not the camera’s .fov property (not even the value returned by its .getEffectiveFOV() method changes, at least not for me when using, say, ArcballControls or similar) - see the fiddle here, where their values are printed to the console (focus the canvas by clicking and press the P key to toggle animation).

As for the linear relationship, you’re wrong, and any physicist can tell you the same, considering that, as proved already, the (not visual, but the camera’s) FOV doesn’t change. But, since it’s about Three.js, you can test it for yourself by uncommenting the amesh.position.set(0, 0, - 1); line in the above fiddle, measuring the cube’s height in pixels on the screen, then pressing the D key to go to twice the camera distance and measuring again - the latter height will be precisely half of the former.

If you wonder why the position of the cube had to be changed to get the exact inverse relationship between distance and apparent size, it’s because the apparent size (and the distance as well) is, in this case, measured at the middle section of the cube, i.e. the one passing through the origin. This is also the reason why the apparent size value printed on the console is not exactly half when going to twice the distance in either case … but it will be half if the formula is adjusted to account for the front side size and not the size of the section passing through (0, 0, 0). In other words, it’s the shape of the cube (and, of course, whatever rotations are set on the cube and the camera) that’s affecting the original linear relationship, and not the Three.js camera’s .fov (which, as said, stays the same in this case, since you wanted it so as per one of your replies).
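
To make that concrete, here's a small numeric check (plain arithmetic with made-up values, not the ones from the fiddle):

// cube of side 1, camera 3 units away from its center
const side = 1;
const d = 3;

// projected half-height ratio measured at the section through the center vs. at the front face
const atCenter = (side / 2) / d;               // ~0.1667
const atFront  = (side / 2) / (d - side / 2);  // 0.2

// doubling the distance to the center halves the first ratio exactly, but not the second,
// which is why measuring at the front face "breaks" the plain inverse relationship
console.log(((side / 2) / (2 * d)) / atCenter);             // 0.5
console.log(((side / 2) / (2 * d - side / 2)) / atFront);   // ~0.4545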

And yeah, if the plane of the camera is not parallel to the plane of the side of the cube, then rotations will affect the result. This is why I suggested finding the maximum (or biggest) size of the cube by computing its bounding box or a Box3, to avoid going into deep-level math and trying to account for all kinds of rotations when computing the result.

The camera can be rotated in any way. Can you show the code that calculates the distance for a rotated perspective camera that would fit the cube perfectly into the height of the screen?

On the example of this cube with dimensions 1x1x1.
https://upload.wikimedia.org/wikipedia/commons/5/5f/Blender-mesh-cube.png

That is, I am looking for an analytical code / approach that will take into account perspective distortions and ensure that the object is visually fitted on the screen. Not just fitted in frustum, but fitted so that the top pixel is on the top border of the canvas, and the bottom pixel is on the bottom border.

I think there might be an analytical solution for a randomly rotated cube in your camera view but I don’t think it’s easy to find.

If I were to solve this problem I would go for a practical approach: render the cube at a distance where its bounding sphere fits into the frustum (render it with a flat color, so it’s just a hexagonal shape), then figure out how tall the shape is onscreen (by scanning image lines horizontally from the top and the bottom until you find a cube-colored pixel). From there you can find an analytical formula as to how much closer you need to get so it fits perfectly.
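
A rough sketch of the scanning part, assuming you render into an offscreen WebGLRenderTarget, the background is black and the cube is drawn in a flat, non-black color (the names and currentDistance are placeholders):

// Render the scene into a small offscreen target and read the pixels back.
const size = 256;
const target = new THREE.WebGLRenderTarget(size, size);
renderer.setRenderTarget(target);
renderer.render(scene, camera);
renderer.setRenderTarget(null);

const pixels = new Uint8Array(size * size * 4);
renderer.readRenderTargetPixels(target, 0, 0, size, size, pixels);

// Find the lowest and highest rows that contain any cube-colored pixel
// (rows come back bottom-up in WebGL).
let bottom = -1, top = -1;
for (let y = 0; y < size; y++) {
  for (let x = 0; x < size; x++) {
    const i = (y * size + x) * 4;
    if (pixels[i] || pixels[i + 1] || pixels[i + 2]) {
      if (bottom < 0) bottom = y;
      top = y;
    }
  }
}

const coveredFraction = (top - bottom + 1) / size;  // fraction of the canvas height the cube spans
// first-order estimate of how much closer to move (perspective shifts this slightly,
// so it may need a second pass)
const newDistance = currentDistance * coveredFraction;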

How? Especially considering perspective distortions.

There is nothing more mysterious about perspective distortions here than a tangent.

As a starting point, here is some simple math explaining how you can move a vertex inside the frustum towards the camera along the z-axis, so that it sits on the top edge of the canvas - that’s what was used in the fiddle above. You need to know either how far away the vertex is or its elevation.

For a cube, you have 8 vertices inside the frustum, so solving this problem is not easy, but the principle is the same. You need to be comfortable at this level of math/geometry to massage it further towards a particular solution.
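
As a sketch of that math in camera (view) space, where the camera looks down the negative z axis (the vertex value and variable names are placeholders):

// A vertex sits exactly on the top edge of the canvas when
// elevation = depth * tan(fov / 2).
const tanHalf = Math.tan(THREE.MathUtils.degToRad(camera.fov) / 2);

// `vertex` is assumed to already be in camera space, e.g. via
// vertex.applyMatrix4(mesh.matrixWorld).applyMatrix4(camera.matrixWorldInverse);
const elevation = vertex.y;        // height above the view axis
const depth = -vertex.z;           // distance in front of the camera (camera-space z is negative)

const depthOnTopEdge = elevation / tanHalf;   // depth at which the vertex touches the top edge
const deltaZ = depth - depthOnTopEdge;        // how far to move it towards the camera along z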

[frustum diagram]

Actually, there is none of either. Meaning that:

  1. Just adjusting the camera distance is not enough to achieve your objective; you also have to center the object in camera space, and this is the tricky part:

    • if you go the camera.lookAt() route and you use one of the example controls, it will alter the center of rotation as well
    • if you go the camera.position.set() route, this will change the camera rotation
    • if you go the object.position.set() route, it will change the position of the object as well as the camera rotation relative to the object

    In other words, you can’t have your cake and eat it too, in this case. You have to decide which of the variants is acceptable for you. The easiest is the lookAt() approach, obviously.

  2. There is no approach that will directly do what you want, unless some Three.js code guru can figure out the entire math to do this in “one shot”, because once you center the object in screen space by following one of the variants above, its bounds in screen space will also change, since you’re basically looking at it from a different “perspective”. Therefore they will have to be recomputed to ensure that getting to the proper distance works as expected. Changing the distance will again off-center the object, this time less, so you’d have to re-center it. Centering would again need a re-fit, and so on - it’s a cycle that has to continue until both the centering and the fitting yield the correct result.

For the record, using .computeBoundingBox() is useless in this case, since the said bounding box is an AABB (axis-aligned bounding box), not to mention its .min, .max or center are set assuming the same orientation, which doesn’t help at all when it comes to screen space - at least I didn’t find a way to do it.

That being said, your objective is still possible. You’ll have to:

  • project the (camera rotated) object’s vertices to screen space, using Vector3.project(camera), and get the X, Y, Z components of the min, max and center of that screen space projection, as well as the corresponding unprojected values (this step will be used each time you center and / or fit the object)
  • center the object using Vector3.unproject(camera) applied on the screen space projected center computed above, and then .lookAt() it with the camera
  • fit the object by getting the ratio between the screen projection’s Y (vertical) size and the desired vertical size that accounts for the camera rotation (essentially, the unprojected size of the object from the first step), calculate the distance of the camera to the origin plane perpendicular to its direction (you can do that by multiplying the camera distance to origin with the cosine of the angle between the camera’s direction and its negated position, so it’s a simple planar triangle problem), followed by using Object3D.translateZ() to move the camera on its direction by (originplanedistance - projectedmaxz) * projectionsizeratio - (originplanedistance - unprojectedmaxz)

Then, you’ll have to alternate between centering and fitting the object in screen space, until you get to the desired outcome (at which point, neither centering or fitting will affect the object since it will already be where it should). Obviously, this can be done automatically as well, using a do ... while and a proper condition, but it will easily self explain why you need a cycle of center and fit if you do those manually (say, via a key press for each).
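
For illustration, here's a simplified sketch of such a center-and-fit cycle; it rescales the camera distance directly instead of using the translateZ() formula from the list above, and all the names are placeholders rather than the exact code from the fiddle:

// Sketch: alternate between centering the camera on the projected bounds
// and rescaling its distance, a fixed number of times (or until convergence).
function centerAndFit(mesh, camera, iterations = 10, desiredSpan = 2) {
  const pos = mesh.geometry.getAttribute('position');
  const v = new THREE.Vector3();

  for (let k = 0; k < iterations; k++) {
    camera.updateMatrixWorld();
    mesh.updateMatrixWorld();

    // screen-space (NDC) bounds of the projected vertices
    const min = new THREE.Vector3(Infinity, Infinity, Infinity);
    const max = new THREE.Vector3(-Infinity, -Infinity, -Infinity);
    for (let i = 0; i < pos.count; i++) {
      v.fromBufferAttribute(pos, i).applyMatrix4(mesh.matrixWorld).project(camera);
      min.min(v);
      max.max(v);
    }

    // center: unproject the middle of those bounds and look at it
    const center = new THREE.Vector3().addVectors(min, max).multiplyScalar(0.5).unproject(camera);
    camera.lookAt(center);

    // fit: rescale the camera's distance to that point by the vertical span ratio
    const dir = new THREE.Vector3().subVectors(camera.position, center);
    camera.position.copy(center).addScaledVector(dir, (max.y - min.y) / desiredSpan);
  }
}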

Result: [screen capture omitted]

There are probably some gotchas along the way, maybe things can, in some places, be fixed or done better, but that’s the general idea for one way - and currently, the single way, apparently - of doing it.

Notes:

  1. Using the camera’s FOV and the atan() formula is not needed when fitting the object in this approach, since we essentially multiply the current camera distance by the ratio between the projected (current) and the unprojected (desired) vertical size, which once again proves that the apparent size of an object is inversely proportional to the distance to the object (so, a linear relationship) when the FOV doesn’t change
  2. You’d have to call camera.updateMatrixWorld(); before using vector3.project(camera); to project the vector to camera space.

To be able to view any object at any viewing angle, you have to calculate the camera frustum. If the object is within the frustum, then it will be visible for sure. As for viewing at all angles, I think it also depends on what material you are using for your geometry and the lighting you are using. There are some helpers in Three.js for cameras and lights; you can use them for better judgement. As for the frustum, you can calculate it by drawing a bounding box from the geometry and then use that to calculate the z parameter for your camera.position, or just go through the Frustum page in the Three.js documentation. Hope this helps.
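
As a minimal sketch of that frustum check (standard Three.js calls; mesh and camera are placeholders) - note it only tells you whether the object is inside the frustum, not that it exactly fills the canvas height, which is the harder part discussed above:

// Build the camera frustum and test the object's world-space bounding box against it.
camera.updateMatrixWorld();
const frustum = new THREE.Frustum().setFromProjectionMatrix(
  new THREE.Matrix4().multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse)
);

const box = new THREE.Box3().setFromObject(mesh);
const visible = frustum.intersectsBox(box);   // true if at least part of the object is inside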

Been busy with other stuff in the meantime, and the code had some flaws at the time of my previous reply which is why I didn’t post it, but here’s a reasonable prototype in this fiddle. Instructions are displayed on loading the fiddle, and are mentioned in my previous reply as well.

Enjoy, have fun, and if you care enough to reward the effort, marking this as a solution would be nice. Other than that, who knows, maybe I’ll use this approach in my project as well, to “reset” my globe to fit the view. Designing an automatic while loop that alternates between centering and fitting until the right values are reached is something I’ll leave to you.

Thanks. This is not an analytical solution and doesn’t answer my question, but it works.

I know what you mean: it isn’t some magic formula that you can apply and voila, things happen in a single shot. I already explained why, while this might be possible for a simple shape like a cube via some advanced math, it’s not possible in general: because looking at a different point with the camera also changes the bounds of the projected object in camera view. In other words, while you can get them for the current look at, your values will not match reality anymore, once the camera looks at things at a different angle.

Personally, I prefer things that work in every possible scenario, so I wrote a solution that will center and fit things irrespective of shape (can be a cube, a sphere, whatever). Even folks at NASA work on the basis of solutions that successively approximate things until the desired outcome is reached, and sometimes those are the only solutions that work. Technically, it’s a numerical and not analytical solution, but is just as valid.

So, if the answer doesn’t meet the high standards of expecting things to automagically happen just by asking, well, that’s life - feel free to wait for some better idea, but it may take a while, considering I was the only one answering this after more than a month and a half of zero activity on this. For now, like you said, while there is plenty of room for improvement, this approach works. And of course, it does also answer your original question, you only have to look at the log where everything, including the desired distance, is printed out for the doubters to take notice. :sunglasses:
