Orthographic Frustum Size Calculation

I’m using an OrthographicCamera and ArcballControls in my application. My frustum size is 100000 for the orthographic camera. I initialized the camera like this:

const frustumSize = 100000;
const aspect = width / height;
const camera = new THREE.OrthographicCamera(
  (frustumSize * aspect) / -2,
  (frustumSize * aspect) / 2,
  frustumSize / 2,
  frustumSize / -2,
  -frustumSize / 2,
  frustumSize,
);

My issue is: if I create points at (12000, 0, 0) and (-12000, 0, 0), the points disappear from the camera view while rotating the model. I think it’s because of the frustum size I defined at initialization.

My question is: how can I update the camera’s properties to view the model without this issue? I attached a video of the issue for reference.

The units you are using are really large in relation to WebGL’s 32-bit floats.

You don’t need to scale near/far by the same values as left and right.

You probably need to scale everything down by 100 or 1000… you’re likely hitting numerical limits.

In JS, floats are 64-bit doubles, so we have more leeway there, but once it gets onto your GPU everything is 32-bit floats (if you’re on a high-end GPU) and, worst case, 16-bit floats on the low end.

So, in conclusion…
const frustumSize = 100000
should be more like

const frustumSize = 100

(Remember these are not “pixel” units. It’s all 3D and virtual, so 1 or 100 can look the same… it all gets scaled up to the screen size at the end.)
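For illustration only, the smaller setup might look roughly like this (width, height and model stand in for whatever your app already has; the 1/1000 factor is just an assumed scale for a model with points at ±12000):

// Same camera pattern as before, just with smaller units (illustrative values).
const frustumSize = 100;
const aspect = width / height;
const camera = new THREE.OrthographicCamera(
  (frustumSize * aspect) / -2,
  (frustumSize * aspect) / 2,
  frustumSize / 2,
  frustumSize / -2,
  -frustumSize / 2, // near
  frustumSize       // far
);

// If the model itself is authored in huge units (points at +/-12000),
// scale it down by a matching factor so it stays inside the frustum:
model.scale.setScalar(1 / 1000); // 12000 -> 12 units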

Oh ok. Thanks for your response. Let me try that.

May I know when I should scale down the model? What are the maximum numerical limits?

It’s complicated, but… generally we like to pretend that 1 unit in world space is 1 meter.
A lot of the systems you will find are built with that range in mind.

However… meshes are stored in a numeric datatype with a limited range… so to get the best detail/precision in a 3D model, they are sometimes scaled up to a large size so that the full numeric range of their storage is used… so then you have to use mesh.scale to scale them down to what works in your scene.

For instance, a simple cube mesh in an optimized glTF might store its vertices as bytes… so each vertex would be in the -127 to +127 range, and you would need to scale it down if you want it to be one meter in your app. (I usually just eyeball this and try to keep these units in mind, unless I’m working on an actual CAD app or something.)
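A minimal sketch of that scale-down step, assuming mesh is a THREE.Mesh that has already been loaded:

// Measure the mesh in its stored units, then scale it to ~1 world unit across.
const box = new THREE.Box3().setFromObject(mesh);
const size = box.getSize(new THREE.Vector3());
const largestSide = Math.max(size.x, size.y, size.z); // e.g. ~254 for a +/-127 cube
mesh.scale.setScalar(1 / largestSide); // now roughly 1 unit (~1 meter) across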

Re: your question… floating point doesn’t have a simple min/max the way integers do. Instead, the larger a value gets, the less precise it gets…

so… 100000.1 in a 32-bit float might be identical to 100000.2,

but 10.1 and 10.2 will be distinct.
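If you want to see it concretely, Math.fround() in a browser console rounds a JS double to the nearest 32-bit float, which is roughly what the GPU ends up with:

// The rounding error grows with magnitude:
Math.fround(10.1);     // 10.100000381469727  (off by ~4e-7)
Math.fround(100000.1); // 100000.1015625      (off by ~1.6e-3)

// Near 2^24 the spacing between adjacent 32-bit floats reaches 2.0,
// so small fractional differences collapse entirely:
Math.fround(16777216.1) === Math.fround(16777216.2); // true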

I like to use scene sizes of 10000 max… so the difference between near/far being 10000… but it’s not a requirement. It’s just that if you do start going out of the numeric precision range, you’ll start to see things like different points collapsing into the same spot, or vertices jumping from one position to another.

ok. Thanks for your response :blush:


Out of curiosity:

is that really so? From my limited understanding, the 32 bits in a highp 32-bit float are “spent” like this: 1 sign bit, 8 exponent bits, 23 mantissa bits.

The 8 bits for the exponent allow for a value range of approx. ±3.4 × 10³⁸.

[image: IEEE 754 single-precision float layout — Source]

Shouldn’t that be plenty even for the occasional squaring of values without causing an “out of range” condition?

The OP, with his initial value of 10^6, wouldn’t even come close to exhausting the available value range, even during a squaring computation => 10^12 :thinking:

The further you get from 0, the less precision you have. There’s no “range” per se… we can have Infinity of course… but values beyond a certain magnitude will start to lose fractional precision.
But perhaps I am misunderstanding what you are articulating?
It’s also a little more complex than just 32-bit floats… since all the scene-graph transforms happen in 64-bit. So if your objects and camera are near each other, the resulting 32-bit float values are relative to the camera… so you may have enough precision to transform vertices, but if you need to get world space in the shader, you’re getting back a 32-bit float value.
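A tiny illustration of that camera-relative step, with placeholder numbers:

const objectX = 100000.1; // world-space position, held as a JS double
const cameraX = 100000.0; // camera position, also a double

Math.fround(objectX);           // 100000.1015625 -- the large absolute value loses detail
Math.fround(objectX - cameraX); // ~0.10000000149 -- the small camera-relative value survives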

Agreed. But you were initially pointing at the “size” of the value, not its precision.

We can leave it at that - didn’t mean to contaminate this thread.


all good. :smiley:

Dear manthrax (and everyone else)

I’m an enthusiastic user of three.js (so much so that I decided to teach it to my students a few years ago), but I would like to say that, in my humble opinion, it wasn’t the best decision to associate some three.js parameters with physical quantities such as 1 distance unit = 1 meter, or 1 light-intensity unit = 1 lumen, as you did in a three.js version whose number I can’t recall now.

Firstly, because there are different cultures that use different units of measurement (meters and their multiples and submultiples; inches, feet and miles; parsecs and light years; and so on…), and I am above suspicion, insofar as, being European, I use the International (metric) System of Units, which you have adopted.

Secondly, because the best unit to use depends on (the scale of) the program’s theme: if I am creating a quantum mechanics program, I want 1 distance unit = let’s say, 1 nanometer; but if I am developing a Universe program, I would rather prefer 1 distance unit = 1 light-year.

Last but not least, because this approach can pose several problems, such as compatibility with other computer graphics programs (e.g., 3D modeling programs, as already mentioned by other three.js users) or, even worse, the inclusion of anomalies such as insufficient precision of the depth buffer when the values defined for the zNear and zFar parameters are exaggerated (as seems to have happened in this case).

Personally, I’m of the opinion that you should have continued to use relative values when defining the parameters of some three.js classes (1 unit of distance = whatever the program authors want; units of light intensity varying in the range [0.0..1.0], etc.).

Best regards
JP

Thanks for your thought-provoking critique regarding Three.js’s way of interpreting parameter values.

Frankly, I never heard/read about anyone making any such claim. Do you happen to have a source for that?

Personally, I wouldn’t think that such claims make any sense at all, given that a Three.js based page can be viewed from a very broad range of physical devices, ranging from small mobile phones to 40+ inch screens. How could any calibration perceivably work?

To my knowledge, there doesn’t exist any standard for data exchange between “other Computer Graphics programs”. See this:

For “exaggerated” zNear/zFar parameters there exists the concept of a logarithmic depth buffer.
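In three.js that is an option on the renderer; a minimal sketch:

// Opt in when the near/far range is huge; it trades a bit of performance
// for much better depth precision across large scenes.
const renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });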

My impression is: the “science” on this hasn’t been settled yet :wink:

Dear vielzutun.ch

As I said, this is only my humble opinion; I am not relying on any sources. But please consider these two examples:

1 - If I want to keep the association 1 distance unit = 1 meter and I want to create a large-scale program (let’s say, a program that deals with the Universe as a whole), I must use very large numbers and lose precision, even when programming in languages such as JavaScript or TypeScript.
2 - The same applies to very small-scale programs (e.g. a quantum-mechanics one): if I want to represent distances such as the Planck length = 1.616199 × 10⁻³⁵ m and still respect the “rule” that 1 distance unit = 1 meter, I will probably have to deal with (lack of) precision issues.

Finally, you are right about the Z-buffer: its distances can be mapped on a logarithmic scale. But most people aren’t aware of that. They tend to define a very small value for zNear and a very large one for zFar, so that the whole scene fits in the camera’s viewing volume. Forcing programmers to stick to a rigid system of units will, again in my humble opinion, make their understanding of the way a 3D API works even more confusing.

Best regards
JP

People, nobody is preventing you from using any unit you like. However, if you use a different unit than the one in Three.js, you may need to scale the result accordingly. Three.js has no way to know what units you use.

Consider this example:

  • you make a simulation of planets orbiting a star
  • to use smaller values you decide to use 1 unit = 1 AU
  • unfortunately the force of gravity F = G\frac{M_1 M_2}{r^2} uses r in meters

What would you do?

  • would you say that the formula is not good because it does not work for your units?
  • or would you just convert between your units and the formula’s units? (see the sketch below)
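A minimal sketch of that second option, using rough placeholder masses:

// Scene convention: 1 unit = 1 AU. Physics convention: r in meters.
const G = 6.674e-11;            // m^3 kg^-1 s^-2
const AU_IN_METERS = 1.496e11;  // 1 astronomical unit

const sunMass = 1.989e30;       // kg
const earthMass = 5.972e24;     // kg
const distanceInSceneUnits = 1; // the planet sits 1 scene unit (= 1 AU) from the star

// Convert to the formula's units only where the physics needs it:
const r = distanceInSceneUnits * AU_IN_METERS;
const force = (G * sunMass * earthMass) / (r * r); // ≈ 3.5e22 N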

As for why Three.js switched units, the three most notable reasons are:

  • simplification of some internal calculations
  • allowing VR/AR/XR content where by definition 1 distance unit is fixed to 1 meter
  • compatibility with the most standard and universal set of units (i.e. SI)

Obviously, in a physics simulation you’ll have to adhere to a coherent set of units. But the OP was just talking about visualizations as such, and didn’t mention any physics relationships.


Here is one source:


Dear PavelBoytchev and everyone else

First of all, I would like to apologize for continuing to discuss such a general topic in this particular thread. We can create a separate thread if you find that adequate and useful.

Pavel, I agree with you, but if you are designing, let’s say, a program that deals with the Universe as a whole, you must do something that is not realistic at all: you must light the scene, i.e., the whole Universe, so that users can see it. But it doesn’t make any sense to define the intensity in candela of a light source that lights the whole Universe (or the solar system, for instance). Reasoning in the opposite direction, I think it doesn’t make sense either to define the intensity in candela of the light source that illuminates, let’s say, a DNA molecule.

Besides this, I think that it is easier for programmers to find the right value for the light intensity if they know that, in a particular API, this parameter is normalized, i.e., it ranges from 0.0 to 1.0. When three.js evolved from r154 to r155 (thanks for reminding me of the version number) I had to define new values for that parameter; and since it has no upper limit, I felt a bit lost in finding the appropriate ones.

Maybe three.js would be more universal if it did not adopt any measuring system. It would then be fully independent of the application context (large scale, small scale, human scale, whatsoever).

Finally, two more different (but related) topics:

1 - Pavel, you said that one can always scale the whole scene so that it fits our purposes, no matter what its real dimensions are. Could you (or anyone else) please tell me whether three.js automatically normalizes normal vectors in such situations? Or do programmers need to do it instead? Or is there some parameter, as in other APIs, that allows us to decide?
2 - I was consulting the three.js documentation and I realized that classes such as PointLight and SpotLight have a property named “power”. It allows programmers to define the “luminous power of the light measured in lumens (lm)”. I must say I’m no expert, but shouldn’t this property be named “flux” instead? If three.js developers really want to keep SI units, shouldn’t the intensity be expressed in candela, the flux in lumens, and the power in watts? And shouldn’t the relationship between flux and power depend on the nature of the light source (incandescent, fluorescent, LED)?

Best regards
JP

Everything in the core is agnostic towards units. Users could assume any units they wish, except for a few lights and color spaces that are dependent, because they are supposed to be dependent.

If you prefer other units for the lights, you could scale their inputs/outputs, or add a wrapper class to do this for you. This would be faster, shorter and easier than running this discussion.
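A minimal sketch of such a wrapper, assuming a PointLight and a made-up calibration constant for what a normalized 1.0 should map to:

import * as THREE from 'three';

class NormalizedPointLight extends THREE.PointLight {
  // maxCandela is an assumed calibration value: the intensity that "1.0" maps to.
  constructor(color, normalizedIntensity = 1, maxCandela = 800, distance = 0, decay = 2) {
    super(color, normalizedIntensity * maxCandela, distance, decay);
    this.maxCandela = maxCandela;
  }

  set normalizedIntensity(value) {
    this.intensity = value * this.maxCandela; // PointLight.intensity is the physical value
  }

  get normalizedIntensity() {
    return this.intensity / this.maxCandela;
  }
}

// Usage: behaves like a regular PointLight, but is driven with a 0..1 value.
const light = new NormalizedPointLight(0xffffff, 0.5);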

The length of normalized vectors (i.e. unit vectors) by definition is unitless 1. Unit vectors are invented and used only to simplify formulae and calculations.

Yes, this should really go into a topic of its own. I apologize to @zenhanu.
