Bounding sphere projected height on screen

Maybe just decrease the radius depending on the distance to the camera, and maybe also on the FOV and the size of the canvas.
Distance 0 meters: radius 1 meter.
Distance 10 meters: radius 0.5 meters.

Hey Chaser_Code, can you elaborate? Sorry, I don't get it, but I'd really like to understand the technique you are suggesting, which seems very simple.

Do you mean we can pre-compute projected values for every distance (for a given object size, say 1 m, and a given FOV) and then interpolate between them? (sort of a standard-meter camera grid)

Yes, you got it right. That's precisely why - if you want to use an object, that is, and not the vectors themselves - you'd need a suitable representation of that box3 as an Object3D (so you can manipulate it properly).

That might be true, but only if you use the cube "corners". If you use the "middle" points of the cube's sides (which can be easily calculated, and represent the points where the sphere is tangent to, or "touches", the circumscribed cube), I believe you'd get the right values.

The only slight difference is that in a perspective projection you can never see the actual top and bottom of a sphere, because of its curvature (you'd have to be at an infinite distance from it to theoretically do that), so the circumscribed cube would be a bit larger than the topmost and bottommost points of the sphere that you can see. In other words, from a practical point of view, you'd have to apply some form of the angular diameter formula for spheres (i.e. the arcsin() version of the formula, instead of the arctan() one) to "shrink" the cube's top and bottom values and get the visible topmost and bottommost points of the sphere.

Alternatively, this could probably be solved from a mathematical point of view (i.e. without bothering to create the second, larger box3, or iterating through the sphere's vertices), just by using the first - or the initial - box3 to calculate the top and bottom of a circumscribed sphere, since the radius of the circumscribed sphere of a cube / box is sphereRadius = boxSideLength * (Math.sqrt(3) / 2), with boxSideLength = box3.max.y - box3.min.y. Applying a suited form of the angular diameter formula for spheres to lower that radius to the visible one, rotating the (side-middle-placed, as mentioned above) vectors according to the camera rotation, and then projecting them to get the screen-space values converted to pixels would more or less achieve the same, but without creating any other objects or performing intensive operations.
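The two formulas mentioned here can be sketched in a few lines of plain JavaScript (function names are mine, just for illustration; in a real scene the box extents would come from a `Box3`):

```javascript
// Radius of the sphere circumscribing a cube of side s is half its space
// diagonal: s * Math.sqrt(3) / 2. Assumes the Box3 is cube-shaped.
function circumscribedSphereRadius(boxMinY, boxMaxY) {
  const side = boxMaxY - boxMinY;
  return side * Math.sqrt(3) / 2;
}

// Angular diameter of a sphere of real diameter d seen from distance D,
// using the arcsin() version of the formula (exact for spheres).
function angularDiameter(d, D) {
  return 2 * Math.asin(d / (2 * D));
}
```

For example, a cube spanning y = 0 to y = 2 has a circumscribed sphere of radius `Math.sqrt(3)`, and a sphere of diameter 1 at distance 1 subtends an angle of `Math.PI / 3`.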

Naturally, feel free to correct me if I’m wrong on this. :wink:

I believe he refers to the fact that, given a known size of an object, a known distance from it, and a constant FOV, its apparent size at another distance would be inversely proportional to that distance. In other words, if you have an object whose apparent size is 6 m when seen from 10 m, its apparent size will be 3 m when seen from 20 m. Not sure if that helps in this case, since you're dealing with projections and rotations, but that's what it looked like.
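That inverse-proportionality rule fits in one line of JavaScript (the function name is mine, just for illustration):

```javascript
// At constant FOV, apparent size scales as 1 / distance:
// apparent(d2) = apparent(d1) * d1 / d2.
function apparentAt(knownApparent, knownDistance, newDistance) {
  return knownApparent * knownDistance / newDistance;
}

console.log(apparentAt(6, 10, 20)); // the example from the post: 3
```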


Yes, I was specifically talking about these:

The pink line would always be under the orange one; the effect is negligible if you are far away from the sphere, but up close it will be significant.


Indeed, you're right. I believe I covered that when talking about using the arcsin() version of the apparent diameter formula, to account for and adjust to the visible top and bottom points of the sphere, instead of the strictly mathematical middles of the sides. It seems I misunderstood what you were saying before, because if you're talking about what you illustrated in the picture (nice drawing skills, by the way), we are in complete agreement. :+1:

I have made a (visual) implementation of your solution (the first one, with the bigger circumscribed invisible cube that always lookAt()s the camera), and it seems to work pretty well:

As expected, an issue remains when the camera gets close:

I will try to fix this with the apparent diameter formula you pointed out, but I'm not sure I understand what δ stands for in the formula: I guess the FOV?

I’d like to be able to compute the missing “offset” my top/bottom blue spheres need to “touch” the projected surface.

But I’m not sure about how…

Don't hesitate to review/fork the pen if you want; I'm not so confident in my code (particularly when I zoom a lot, the computed height is kinda strange, and I don't know why…)

Alright, good work with the implementation - I'll take a look at it tomorrow if needed, since it's a bit late here now. In the meantime, regarding computing the said offsets (well, actually the apparent size, in terms of the distance between the top and bottom visible points on the sphere, since you can then easily figure out the rest): it's just simple right-triangle math. And yes, δ is camera.fov.

[I had a nice photoshop figure explaining things, but I accidentally closed it without saving it first (which is not something I normally do), so I’ll explain using your figure, although you’d have to imagine things or write the schematics on a piece of paper instead, for easier understanding.]

First, let's call the camera point C, the center of the sphere O, and the points where the tangents touch the sphere A and B (up and down, respectively), with the AB segment intersecting CO (of length D) at a point called X. What you need for the apparent length is the AB segment's length instead of d (which is basically R * 2, with R being the sphere radius), so if you know the length of AX, you multiply it by 2 and get AB.

Now, you can easily notice that the CAO and CBO triangles are right triangles, and so are the CAX and CBX ones, or the OAX and OBX ones. In a right triangle, a side can be expressed as either hypotenuse * sin(oppositeAngle) or hypotenuse * cos(adjacentAngle) of the said side. That's the reason for the δ formula in the apparent diameter of the sphere: in the CAO right triangle you have AO = CO * sin(ACO), aka d / 2 = D * sin(δ / 2), thus sin(δ / 2) = d / (2 * D), which after applying arcsin becomes δ / 2 = arcsin(d / (2 * D)). You may also notice that in this triangle the COA angle is 90 degrees minus the other acute angle (half of δ), so by extension the XOA angle has the same value, PI / 2 - δ / 2.

On the second right triangle of interest, i.e. OAX, given that we know the value of the hypotenuse (d / 2, or R) and the angle opposite the AX segment, aka the XOA angle (PI / 2 - δ / 2), it becomes clear that AX = d / 2 * sin(PI / 2 - δ / 2) = R * sin(PI / 2 - δ / 2), or even R * cos(δ / 2) if you find it easier.

Now you can multiply the above value by 2 and get the "length" of the apparent diameter, as 2 * R * sin(PI / 2 - δ / 2) = 2 * R * cos(δ / 2). In other words, you'll need to multiply the actual diameter d by either sin(PI / 2 - δ / 2) or cos(δ / 2) to get the visible diameter. :wink:
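The derivation above translates directly into code; a sketch, with function names of my own choosing (d is the real diameter of the bounding sphere, D the camera-to-center distance):

```javascript
// Visible diameter AB of a sphere of real diameter d at distance D.
function visibleDiameter(d, D) {
  const R = d / 2;
  const halfDelta = Math.asin(R / D); // δ / 2, half the apparent angular diameter
  return d * Math.cos(halfDelta);     // chord between the two tangent points
}

// The AB chord sits slightly in front of the sphere center, shifted toward
// the camera by OX = R * sin(δ / 2), which simplifies to R * R / D.
function chordOffsetTowardCamera(d, D) {
  const R = d / 2;
  return R * Math.sin(Math.asin(R / D));
}
```

For a sphere of diameter 2 seen from distance 2, δ / 2 = asin(1 / 2) = 30°, so the visible diameter is `Math.sqrt(3)` and the chord sits 0.5 units in front of the center.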

are you sure? because both numbers are < 1, and visible diameter is supposed to be larger, not smaller.

It's now perfectly clear to me that we are looking for the AB length (that was what I was missing, because the Wikipedia diagram was not exaggerated enough to see the difference with d).

I did my trigonometry following your awesome and detailed reasoning; I post the doodle here for reference :wink:

Now that my mind is clear about the formula, I'm going to apply it to my code; I'll let you know :wink:

Thank you very much Yin

Say, did you consider that even after you find A and B and plug them into your code, it is still going to be off :smiling_face_with_tear: because the sphere's shape on the screen is not a circle :pleading_face:

I mean, those are the right points to look for, but in order for them to be on the enclosing circle, they need to be aligned with the screen center. Otherwise your circle will intersect the sphere.

Ok, here is my forked previous codepen:

I have added 2 more red spheres to the scene: top2 and bottom2, respectively, for my A and B points.

Thanks to the previous calculations, I have expressed their coordinates vA and vB (in the lookAt-rotated cube basis) and transformed them into world coordinates to position them absolutely:

We can see the red spheres now “touch” the surface of the sphere:

From those 2 points, I have also calculated the h_apparent projected height, which is bigger than the h projected height.


BTW, I'm still having strange results (these are the console.log values) when zooming a lot, even if the red sphere is correctly positioned… I don't really know if this is correct/OK…

I now see your point, and indeed, when the sphere is not centered on the screen, it has an ellipse/potato shape:

and the red bottom sphere isn’t at the bottom of the sphere…

I guess we have no solution for this, except what you were initially suggesting: projecting all vertices to the screen and making a Box2 out of them, to get the projected width/height…


BTW, I was wrong when I supposed δ was the fov: it is actually the apparent angular diameter, i.e. the full ACB angle (twice the OCA angle).

Again, the solution would be to pick A and B in a way such that the three points are aligned :pensive: Just need to math this a little harder.

The visible diameter is always smaller than the actual diameter, for a sphere (because of the curvature, like we both noticed before), so it makes sense for the multiplication factor to be less than 1. What can indeed be larger than both of these is the projection (especially when the camera is close), but that is handled by Three.js via .project() so no need to worry on that front. :wink:

Can you make your drawing skills speak again? (sorry, I haven't understood the 3-point alignment thing)

But if there is a solution (other than projecting all vertices) I’d like to try harder, hopefully with your precious help which is really amazing. Thank you Yin and makc3d for this thread, it’s so instructive (to me at least and to others I hope)!

A plane that contains the camera, the sphere center, and the scene origin (the orbit-controls target, actually) is the one where you need to look for points A and B (it would look like a line on the screen, because the camera is in it).
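One way to sketch that plane construction, assuming the setup described here (the vector helpers and `tangentPoints` are mine; three.js `Vector3` provides the same operations):

```javascript
// Tiny vector helpers on [x, y, z] arrays.
const sub = (a, b) => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const add = (a, b) => [a[0] + b[0], a[1] + b[1], a[2] + b[2]];
const scale = (a, s) => [a[0] * s, a[1] * s, a[2] * s];
const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const len = (a) => Math.sqrt(dot(a, a));
const norm = (a) => scale(a, 1 / len(a));
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Tangent (silhouette) points A and B of a sphere (center O, radius R)
// seen from camera C, constrained to the plane spanned by C, O and a
// reference point "ref" (e.g. the orbit-controls target). Assumes ref is
// not collinear with C and O.
function tangentPoints(C, O, R, ref) {
  const toCam = sub(C, O);
  const D = len(toCam);
  const u = scale(toCam, 1 / D);              // unit vector O -> C
  const n = norm(cross(sub(ref, O), toCam));  // normal of the C-O-ref plane
  const v = cross(n, u);                      // in-plane direction, perpendicular to u
  const sinA = R / D;                         // sin(δ / 2)
  const cosA = Math.sqrt(1 - sinA * sinA);    // cos(δ / 2)
  const X = add(O, scale(u, R * sinA));       // point where AB crosses CO
  return [add(X, scale(v, R * cosA)), add(X, scale(v, -R * cosA))];
}
```

Each returned point T satisfies |OT| = R and CT ⊥ OT, i.e. they are genuine tangency points, and both lie in the camera/center/target plane as suggested.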

That might be because of your last two operations in the h and h_apparent computations. Performing such things on the vector3s, instead of on the plain .y component of the initial vector3, doesn't seem like the right choice, and neither does getting the .length(). In other words, you're looking for scalar values here, and the only component of interest is the projected Y one… :thinking:
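A minimal sketch of the scalar-Y idea, using plain math instead of three.js so it stands alone (it assumes a camera at the origin looking down -Z; in a real scene, `Vector3.project(camera)` yields the same NDC values for the general case):

```javascript
// NDC y of a world point for a camera at the origin looking down -Z:
// y_ndc = (y / -z) / tan(fov / 2), with fov the vertical FOV in degrees.
function ndcY(point, fovDeg) {
  const t = Math.tan((fovDeg * Math.PI / 180) / 2);
  return (point[1] / -point[2]) / t;
}

// Projected height in pixels: a scalar built from the two projected Y
// components only - no .length() on full vectors.
function projectedHeightPx(topPoint, bottomPoint, fovDeg, canvasHeight) {
  const dy = Math.abs(ndcY(topPoint, fovDeg) - ndcY(bottomPoint, fovDeg));
  return dy * canvasHeight / 2; // NDC spans 2 units over the canvas height
}
```

For instance, with a 90° FOV and a 600 px canvas, two points 2 units apart at depth 2 project to a height of 300 px.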

There is always a solution. Projecting all the vertices would probably work as well, but depending on their number (and the number of segments in the shape), it can be a bit intensive, and logically it's overkill to do that when all you need are just 2 points. A slightly related approach (with a different objective than yours, mainly to center and fit a cube on the screen) is illustrated in the fiddle from this post.

That indeed looks strange, not entirely sure what would be the cause. I’ll try to rewrite this in a fiddle and see what can be done about it. By the way, you want the little A and B spheres to be in the screen space or in 3D space?

My ultimate goal, and why I'm looking for that "projected height" of the bounding sphere, is that in the end I want to draw a 2D circle (so yes, screen space) that always encloses my bounding sphere.

More precisely, and to be more concrete, what I'm working on is this:

:point_up_2: Currently in the video, the red circles have a fixed (screen) size, but I'd like each 2D red circle to have the exact same size as the 3D bounding sphere of the offscreen bird it follows.

Now, if the bounding sphere is distorted (due to the perspective of the camera) and has the shape of a "potato", I'd like my 2D red circle to be sized to enclose that "potato".
→ And since this sphere can be distorted (which I didn't consider at first), I realize that I'm not looking for the projected height specifically, but rather for the maximum screen-projected width OR height of such a distorted sphere (to draw the enclosing circle of that distorted sphere).
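For the special case of a sphere centered on the view axis, the enclosing circle has a closed form, which may serve as a starting point (a sketch; the function name is mine, and off-axis the silhouette becomes an ellipse, so this value is only a lower bound there and would need a margin or the tangent-point approach discussed earlier):

```javascript
// Screen radius in pixels of a sphere (radius R, distance D along the view
// axis) for a vertical FOV in degrees: the silhouette circle has NDC radius
// tan(asin(R / D)) / tan(fov / 2) = (R / sqrt(D*D - R*R)) / tan(fov / 2).
function screenRadiusPx(R, D, fovDeg, canvasHeight) {
  const t = Math.tan((fovDeg * Math.PI / 180) / 2);
  const ndcRadius = (R / Math.sqrt(D * D - R * R)) / t;
  return ndcRadius * canvasHeight / 2;
}
```

Note the sqrt(D² - R²) term: it makes the silhouette larger than the naive R / D projection, which matches the observation above that things go strange when the camera gets close.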

Simple solution:

```javascript
if (distanceToCamera > 1000) {
  radius = 0.01;
} else {
  radius = radius - radius * (distanceToCamera / 1000);
}
```