[SOLVED] How to limit pan in OrbitControls for OrthographicCamera so that object (texture image) is always in the scene

Hi,

I am using OrbitControls to control an orthographic camera. The scene has a 2D texture image.

I want to limit the pan of the camera when it reaches the edge of the image.
For example, when the camera is panned to the left, the image shifts to the right in the view window. Once the left edge of the image moves past the left edge of the visible window, left panning should stop (see the attached diagram).
[attached image: snapshot2]

Here and here the pan limit is hard coded, but in my case the limit depends on the zoom (and I assume also on the size of the image). For example, when the image is shifted all the way to the left,

  • when zoomed in, scope.target.x ≈ 100
  • when zoomed out, scope.target.x ≈ 800

How can I disable panning to the left when the left side of the image reaches the left edge of the visible window?

Thanks,
Avner

Crossposting:

Any chances to provide a live demo with your current progress? This makes it easier for the community to find a possible solution for you.

@Mugen87, please see the example here.
I added code suggestions from @Rabbid76 (see here).

The example runs in OrthographicCamera mode with rotation disabled.
By setting minPolarAngle and maxPolarAngle to Math.PI/2 (with rotation disabled), the camera position stays in its original z plane, e.g. (0, 0, 80) - good!
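For reference, locking the camera into a fixed plane with OrbitControls looks roughly like this (a configuration sketch; `controls` is assumed to be an OrbitControls instance bound to the orthographic camera, and enableRotate/minPolarAngle/maxPolarAngle are the real OrbitControls properties):

```javascript
// Disabling rotation and pinning the polar angle to PI/2 keeps the
// camera in its original z plane while still allowing pan and zoom.
controls.enableRotate = false;
controls.minPolarAngle = Math.PI / 2;
controls.maxPolarAngle = Math.PI / 2;
```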
The axis helper in the scene, shows the following coordinate system
[attached image: snapshot4_texPane_orthographicCamera_coordSystem]

bbox.min, bbox.max are fixed at (-50, -50, 50, 50) even when panning or zooming the camera. This is expected, because the object coordinates are fixed regardless of the camera position.
texCamera left, top, right, bottom are initialized to (-50, -50, 50, 50). The units are world coordinates.

When the camera is panned, I expect left, top, right, bottom to change, but they always show the same values (-50, -50, 50, 50).
Why is that?

[attached image: snapshot5_useCase3]

With some modifications to @Rabbid76's suggestions from here, I solved the problem.
Since the camera's left, top, right, and bottom values are in world units (see here) and are relative to the camera position, these values are also fixed.
For an OrthographicCamera, the calculation of the world coordinates of the borders of the camera frustum must factor in the zoom.
For example, the x world coordinate of the left border is computed with:

let leftBorderX = camera.position.x + (camera.left / camera.zoom);

In the working example here, the image always fills the view window.
The minimum zoom is set to 1, to prevent the image from being smaller than the view window.
If the zoom is 1, the image exactly covers the view window, and panning is disabled.
If the zoom is bigger than 1, panning is enabled as long as the image still covers the view window.
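Putting the pieces above together, here is a minimal sketch of the clamping logic. `clampPan`, `imgHalfW`, and `imgHalfH` are hypothetical names; the `camera` argument only needs the left/right/top/bottom/zoom fields of a THREE.OrthographicCamera, and the image is assumed to be centered at the world origin:

```javascript
// Clamp the orbit target so an image with half-width imgHalfW and
// half-height imgHalfH (world units, centered at the origin) always
// covers the orthographic view.
function clampPan(camera, target, imgHalfW, imgHalfH) {
  // Visible half-extents in world units; the frustum borders are
  // defined in camera space, so divide by zoom to get world sizes.
  const halfW = (camera.right - camera.left) / 2 / camera.zoom;
  const halfH = (camera.top - camera.bottom) / 2 / camera.zoom;
  // The target may wander only as far as the leftover margin.
  const maxX = Math.max(0, imgHalfW - halfW);
  const maxY = Math.max(0, imgHalfH - halfH);
  target.x = Math.min(maxX, Math.max(-maxX, target.x));
  target.y = Math.min(maxY, Math.max(-maxY, target.y));
  return target;
}
```

A natural place to call this is from the controls' `change` event, clamping both `controls.target` and the camera position by the same offset. At zoom 1 (image exactly covering the window) the margins are 0, so panning is effectively disabled, matching the behavior described above.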

Hi there, I see your working example is not working anymore… can you update the example? I have a similar issue with limiting panning in OrbitControls.

Here is the updated example: https://jsfiddle.net/pmf2uz58/

I have a similar issue, but with a perspective camera. I want to set this up so that when a box is clicked, the camera pans to the clicked object, then zooms into it (so that the box fits the camera). I don't want rotation; I simply want the camera to pan vertically or horizontally to the box that is clicked. The zoom should match the fov of the box. If the camera is already zoomed into a box, then the z position stays the same and only camera.x/y change. Essentially, the camera will appear to be in a mode of traversing the boxes. Getting the actual world position of the clicked box is what's escaping me for some reason. Your post was a little helpful, but I'm not sure I'm getting all the coordinates for my clicked box correctly. I have tried a few different approaches, but none have worked; some came close but would rotate the grid of boxes upon zooming.

I understand the frustum, fov, and distance, and have calculations for all of them, but I am having trouble making this work. I have most of this solution done but am failing to grasp something. I believe I need to translate the clicked object to world coordinates… any help would be appreciated.
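For the zoom-to-fit part of what is described above, the camera distance can be derived from the vertical fov. A sketch under assumed names (`fitDistance` is hypothetical; in three.js the clicked mesh's world position comes from `object.getWorldPosition(new THREE.Vector3())`, which accounts for any parent transforms):

```javascript
// Distance at which a camera-facing box of width boxW and height boxH
// (world units) exactly fills a perspective camera with vertical field
// of view fovDeg (degrees) and the given aspect ratio.
function fitDistance(boxW, boxH, fovDeg, aspect) {
  const vFov = (fovDeg * Math.PI) / 180;                   // vertical fov in radians
  const hFov = 2 * Math.atan(Math.tan(vFov / 2) * aspect); // derived horizontal fov
  const distForH = boxH / 2 / Math.tan(vFov / 2);          // distance to fit the height
  const distForW = boxW / 2 / Math.tan(hFov / 2);          // distance to fit the width
  return Math.max(distForH, distForW);                     // the farther one fits both
}
```

To get the "pan only, no rotation" behavior described here, one would keep camera.x/y equal to the box's world-space center and set the camera back from the box along the viewing axis by this distance, so the camera orientation never changes.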

Here is a demo

What am I missing?

Nothing still on this? Anyone? And before this gets assigned to a similar issue like https://github.com/mrdoob/three.js/pull/14526, please note the use case here is the type of view it is being used on. It's different.

The use case for 14526 is that it takes multiple objects into account, omits the direction of the object face being clicked on, and then increases the field of view to keep all selected objects in the view. That is a use case: used, perhaps, for selecting 3D objects in world space for a multi-select. Hugely useful for the industrial 3D industry, modeling, and CAD-style use cases, I'm sure, but most of us now make UI for the corporate world. The corporate world still uses grids and reads off of 2D surfaces.

So let's remove the idea of the frustum needing to be at a different orientation than the object being zoomed to. Let's, in fact, assume the frustum needs to be in exactly the same orientation as the object face being clicked on, because it is always going to be a flat box of content that needs to be in full view, whatever size "IT" (as in just one) is. The user will be navigating a flat plane of boxes. Those boxes may be seamless, or have buttons or text in them, but they always need to be in view. If a box is zoomed into view and the next box needs to be in view, the user should be able to navigate, at the current zoom, to the next box (pan). Again, the camera angle does not change, and zoom only changes to fit the object. This is a mode for traversing objects nested in a flat plane. It's called "tabbing" in the 2D interface world.

3D is useful in charts. It's useful in large-data and informational analytics, but if it's going to be useful alongside Material and libs like Angular, Kendo, jQuery, or plain vanilla JS, then it's got to model the current UI concepts of the day and then expand on the abilities of 3D. Some of us are trying to do just that.

A grid in 3D can hold a lot more data and transform that data in ways that just aren't possible in 2D. But being able to traverse data in a 3D grid, as a 2D flat surface, is a huge mindf**k (mindfork) for most corporate devs. The steam blows right out their ears and the lights shut off. We need WebGL and libs like Three in the corporate world now. Big data and expanding global markets are not making it easy for 2D UI nowadays. Everything from hierarchical org charts and forecasting grids to global data-point representations and regional charts is requiring expensive custom solutions. We need Three to recreate the controls we have now.

Using Three to create a globe of the earth, with zooming and selectable countries, was easy. Even adding libraries to map the lat and long of the countries was easy. But using the camera and controls to zoom to a flat surface and pan across its clickable objects is not only difficult, the community isn't even considering the use case of objects being in a flat grid structure. This would be used for tabbing to the next form object, or the next box, circle, or navigation item in the group or parent. We need a common navigation of readable items.

Being able to render a layout grid would be huge. Copying parts of Material lib controls would be huge and should be a project of Three.js right now. Now that ES6 versions of the Three modules exist, let's give the corporate community some much-needed attention here. WebGL is far superior to single-threaded browsers, and from hierarchical organization charts to global mapping to large data-point visualizations, hands down, Three is the best candidate for the job.

Just think about it for a second…

Three Material for Scalable Web Applications

Features -

  • Three Cubic Grid (each column can represent 6 sides of data; features compound sorting, compound filtering, linear table cell-data transforms, multi-column-multi-face filtering, data modeling)
  • Three Material Layout Grid (100+ columns, auto zooming, responsive to screen size and smart child scaling)
  • Three Material Form (input, dropdown, search box, multi-select list, drag-drop list, text box, radio buttons, check box, password)
  • Three Material SVG Buttons and Icons

I digress now, but the case hinges on the objects being viewed. Are they embedded in a flat 2D surface, like readable objects (text nodes, navigation blocks, buttons, etc.), or are they standalone 3D objects being observed with trackball-style navigation? If 'observational', then Observation Modes are (x, y, z, custom). If 'readable', then Readable Modes are (x, y, z, custom).

Could one's scene have a case where it needs trackball-style observational control AND readable controls? Yes. Assume a scene with a clickable 3D globe and grids of data layered on top of it. This is a common set of use cases in the industry, and transitioning between the navigation modes should be of great importance.