Getting FOV/zoom changes from PerspectiveCamera after OrbitControls changes

I am still stuck on a question related to ones I previously asked, but at the time I was still groping in the dark and probably didn’t ask the right question. I have seen a bunch of similar questions here as well, often unanswered. So I hope I can ask the right question this time, and hopefully the answers may help others too.

So I am using PerspectiveCamera and OrbitControls, and in general I don’t understand how the two work together. How do the user’s interactions with the OrbitControls (rotate, zoom, etc.) influence the PerspectiveCamera? The camera.position does seem to change when the user pans with the OrbitControls, and so does the controls.target. But camera.fov and camera.zoom never seem to change when the user zooms in. So what changes in the camera when a user zooms in?

I have a small sample of detecting camera/control changes here:

The use-case I need to know this for is that I want to calculate a bounding box for the “current view” based on the near, far and fov, so I can dynamically load data depending on those.

Lol, it’s always funny: the moment you force yourself to write the question down, of course you have an epiphany. It took some time and mulling over, but now that I look at it once more, I realize that the only thing that probably changes in the camera is the position. Is that right?

Zooming slides the camera along the (camera.position, controls.target) vector and updates the camera.position accordingly. So you basically move the camera closer to and further away from the controls.target while keeping the orientation the same.
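If it helps to see that as code, here is a plain-JS sketch of that model (`dolly` is my own illustrative name, not the actual OrbitControls internals):

```javascript
// Illustrative sketch: zooming scales the offset from the target to the
// camera, sliding camera.position along the (camera.position, controls.target)
// line without touching fov or zoom.
function dolly(position, target, scale) {
  return {
    x: target.x + (position.x - target.x) * scale,
    y: target.y + (position.y - target.y) * scale,
    z: target.z + (position.z - target.z) * scale,
  };
}

// Zooming in halves the distance; the direction to the target is unchanged.
const closer = dolly({ x: 0, y: 0, z: 10 }, { x: 0, y: 0, z: 0 }, 0.5);
// closer is { x: 0, y: 0, z: 5 }
```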

Rotating the view with OrbitControls keeps the distance between the camera.position and the controls.target the same, but rotates the camera.position to a new location. Fortunately I don’t need to do anything with that, because the math of angles, azimuth, radians and that stuff makes my head hurt.

And panning moves the controls.target and the camera.position together, keeping the angles and distance the same, but just showing another part of the world.
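In the same sketch notation (again plain JS with a made-up `pan` helper, not an OrbitControls API), panning just shifts both points by the same offset:

```javascript
// Illustrative sketch: panning translates camera.position and controls.target
// by the same offset, so their distance and relative orientation stay the same.
function pan(position, target, offset) {
  const shift = (p) => ({ x: p.x + offset.x, y: p.y + offset.y, z: p.z + offset.z });
  return { position: shift(position), target: shift(target) };
}

// Pan 5 units along x: both points move, the view direction does not change.
const panned = pan({ x: 0, y: 0, z: 10 }, { x: 0, y: 0, z: 0 }, { x: 5, y: 0, z: 0 });
// panned.position is { x: 5, y: 0, z: 10 }, panned.target is { x: 5, y: 0, z: 0 }
```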

If this is right, I guess that with camera.fov, camera.position, and the distance between controls.target and camera.position I should be able to figure something out, which I will report back here once I do.

Unfortunately you can only get the camera position change, not fov or zoom; zooming with OrbitControls only changes the camera location.
Example code to get the camera position in x, y, or z:

// Get the camera's world position
camera.updateMatrixWorld(); // make sure matrixWorld is up to date
const vector = new THREE.Vector3();
vector.setFromMatrixPosition( camera.matrixWorld ); // extract the translation from the world matrix
// render(); // call this manually if the scene is static

I think it is possible to get the bounding box for whatever is visible with some decent math. As long as my assumption about how the camera and controls work together is correct, I think this is the math you need to do:
First you need to draw a line (line1) from camera.position to controls.target. Along that line you can mark the camera.near and camera.far points, which are distances along it from camera.position.
Then you need to draw a second line (line2) perpendicular to line1, but parallel to the x/z plane. Lastly you need to calculate the positions of point1 and point2 along line2, based on the camera.fov angle as seen from camera.position. If you then take the min(x,y,z) and max(x,y,z) of point1, point2 and camera.position, you end up with a bounding box for the area you are viewing.
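Here is a hedged plain-JS sketch of that construction (the helper names are mine, not three.js; it also widens line2 by the aspect ratio, and assumes fov is the vertical field of view in degrees, as three.js uses):

```javascript
// Minimal vector helpers.
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
const scale = (a, s) => ({ x: a.x * s, y: a.y * s, z: a.z * s });
const cross = (a, b) => ({
  x: a.y * b.z - a.z * b.y,
  y: a.z * b.x - a.x * b.z,
  z: a.x * b.y - a.y * b.x,
});
const norm = (a) => scale(a, 1 / Math.hypot(a.x, a.y, a.z));

// Axis-aligned bounding box of the view: line1 runs from position toward
// target; the half extents at the far plane come from fov and aspect.
// Caveat: breaks down when looking straight up/down (line2 is undefined).
function viewBounds(position, target, fovDeg, far, aspect) {
  const dir = norm(sub(target, position));           // line1 direction
  const farPoint = add(position, scale(dir, far));   // far point on line1
  const halfH = far * Math.tan((fovDeg / 2) * Math.PI / 180);
  const halfW = halfH * aspect;
  const right = norm(cross(dir, { x: 0, y: 1, z: 0 })); // line2: horizontal, perpendicular to line1
  const up = cross(right, dir);
  const pts = [position];
  for (const sx of [-1, 1])
    for (const sy of [-1, 1])
      pts.push(add(farPoint, add(scale(right, sx * halfW), scale(up, sy * halfH))));
  const min = { x: Infinity, y: Infinity, z: Infinity };
  const max = { x: -Infinity, y: -Infinity, z: -Infinity };
  for (const p of pts)
    for (const k of ['x', 'y', 'z']) {
      min[k] = Math.min(min[k], p[k]);
      max[k] = Math.max(max[k], p[k]);
    }
  return { min, max };
}
```

For example, a camera at the origin looking at (0, 0, -10) with a 90° fov and far = 10 yields a box roughly from (-10, -10, -10) to (10, 10, 0).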

I think this would be feasible with some math. For me though, I found a much easier solution: my space is already divided into 3D sectors anyway. So I am just going to take the distance between camera.position and controls.target, add that distance to all dimensions of the controls.target, and then see which sectors fall within that area and load data for those. Since my data comes from web requests, this will even allow me to cache individual sectors so I don’t have to redo the requests if data has already been loaded before.
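As a sketch of that sector approach (everything here is hypothetical and not from any library: the sector size, the key format, and the `fetchSector` stub):

```javascript
const SECTOR_SIZE = 100;  // assumed edge length of one cubic sector
const cache = new Map();  // "i,j,k" -> loaded sector data

// Integer sector indices covered by a cube of half-size `radius` around `center`.
function sectorsInRange(center, radius) {
  const lo = (v) => Math.floor((v - radius) / SECTOR_SIZE);
  const hi = (v) => Math.floor((v + radius) / SECTOR_SIZE);
  const keys = [];
  for (let i = lo(center.x); i <= hi(center.x); i++)
    for (let j = lo(center.y); j <= hi(center.y); j++)
      for (let k = lo(center.z); k <= hi(center.z); k++)
        keys.push(`${i},${j},${k}`);
  return keys;
}

// Load every sector in range, skipping ones already in the cache so
// web requests are never repeated for previously loaded data.
async function loadVisibleSectors(center, radius, fetchSector /* hypothetical loader */) {
  for (const key of sectorsInRange(center, radius)) {
    if (!cache.has(key)) cache.set(key, await fetchSector(key));
  }
}
```

With a radius of 50 around the origin and 100-unit sectors, this touches the 8 sectors adjacent to the origin corner.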