Camera and floating point origin

I looked at the collada skinning loader example:

https://threejs.org/examples/#webgl_loader_collada_skinning

I recreated it in my standard three.js setup and it works. But I use the logarithmicDepthBuffer because I have very large and very small scales in my project. If the camera and the stormtrooper are both far away from the origin, it gets ugly.

But only in the direction of the distance from the origin; the camera and the trooper are just 20 units apart. I suspect this is because the x and y coordinates are still small, since I only moved the trooper in the z direction. The z coordinates of the trooper, however, which are very small in relation to the z distance, are no longer precisely resolved by the GPU. So the problem is the very distant reference point, which means I would need a floating origin: my camera must be the reference point and I would have to transfer all camera movements inversely to the objects. Does anyone have such an example?
Here is the github link to download my example:

If I make the stormtrooper 1000 times larger in blender and rescale it in threejs with

avatar.scale.setScalar(0.001);

then it looks clean even at the large z position. That means I have to make the model big enough so that its vertex coordinates, in relation to the distance from the reference point, can still be resolved correctly by the GPU. But why does it still look good after scaling it back down with

avatar.scale.setScalar(0.001);

even though I then have small values again? I don't understand that yet. Instead of scaling the model in Blender, I would have to do it in three.js. Is that possible with the model matrix?
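
I imagine something like this, which should be possible directly in three.js (an untested sketch; for a SkinnedMesh the skeleton and bind matrices would probably need the same treatment):

    // Untested idea: bake the 1000x enlargement into the geometry itself in three.js
    // instead of in Blender, then shrink the object back down via the model matrix.
    const bake = new THREE.Matrix4().makeScale(1000, 1000, 1000);
    avatar.geometry.applyMatrix4(bake); // vertex coordinates become 1000x larger
    avatar.scale.setScalar(0.001);      // the model matrix scales them back down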

Floating point values in GPUs are very limited and often cause confusion because of unexpected results with “large” values. What you experience is the sacrifice of precision in order to fit calculations between large and small values. The scaling trick worked, because it eliminated this discrepancy and the calculation is done between large and large values, and only after it is done the result is converted into small values. I mean, using scaling to resolve such issues is a matter of chance, because you do not know the exact internal sequence of calculations. In your case scaling worked fine, in other cases it will not work. The only solution, that I’m aware of, is to use small values and to ensure that all intermediate values in expressions are also small.
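
A quick way to see this precision loss in plain JavaScript, using Math.fround() to round values to the 32-bit precision a GPU typically works with:

    // 32-bit floats have roughly 7 significant decimal digits.
    // Math.fround() rounds a JS number to the nearest 32-bit float value.
    const cameraZ = 1000000;          // far away from the origin
    const vertexZ = cameraZ + 0.001;  // a point 0.001 units in front of it

    console.log(Math.fround(vertexZ) - Math.fround(cameraZ)); // 0 - the offset is gone
    console.log(Math.fround(1000.001) - Math.fround(1000));   // ~0.00098 - coarse, but it survives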

To answer your other question – instead of moving the stormtrooper and the camera together at a large distance, you can keep them near (0,0,0) and move the rest of the world in the opposite direction.

However, it is better to rethink the spatial organization of your scene. If you have the stormtrooper dancing on different planets, you do not need all of them in the scene at the same time. When he is on planet A, (0,0,0) could be at the center of A; when he is on planet B, (0,0,0) could be at the center of B. In this case, the distance between A and B does not matter at all.

Since planets themselves have very large radii (Earth: 6,371,000 m), the planet's origin itself is not a good reference point, because the geometry coordinates of the trooper would still be very small in relation to the radius.

I would then have to place the camera at the origin:

camera.position.set(0, 0, 0);

The camera could then still rotate as if it were moving, but it would no longer be allowed to translate, so the camera position is always (0, 0, 0)!

That means I have to transform all object coordinates into the camera system, which in turn means I always have to move everything else. For this, each object would need a Galilean transformation.
The best approach would probably be a base class that all objects inherit from; this base class would receive the velocity from the first-person controller so that every object adjusts its own position.
Hm… that sounds easy in itself, but maybe there are a dozen things I don't see yet that need to be considered. Has anyone done this before or knows of an example?
I will have to program that separately first.
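
Roughly what I have in mind, as nothing more than a sketch with made-up names:

    // Sketch only - FloatingOriginObject and applyCameraDelta are made-up names.
    // The first-person controller reports its intended displacement per frame and
    // every registered object shifts itself by the opposite amount, so the camera
    // itself can stay at (0, 0, 0).
    class FloatingOriginObject extends THREE.Object3D {
      applyCameraDelta(delta) {
        this.position.sub(delta); // move opposite to the camera's intended motion
      }
    }

    // in the controller's update loop:
    // const delta = velocity.clone().multiplyScalar(dt);
    // scene.traverse((obj) => {
    //   if (obj instanceof FloatingOriginObject) obj.applyCameraDelta(delta);
    // });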

Edit: If your world is too big, you have to split it into chunks and work with one chunk at a time. For example, if the planet is too big, cities could be the chunks, or even individual buildings. While a chunk is active, the origin of the coordinate system is stationary and does not move with the camera. When you go to another chunk (e.g. from the city you enter a spaceship), you remove the old chunk from memory and create the new one in its place. Then, while you travel through space, your chunk is the spaceship and the origin is fixed. When you land on a planet 3 million light years away, you delete the spaceship chunk and create the chunk of the new planet. The origin is still the same, and you do not care about the other chunks.
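
A rough outline of what I mean, where loadChunk() and disposeChunk() are just placeholders for your own loading and cleanup code:

    // Rough outline only - loadChunk()/disposeChunk() are placeholders.
    let activeChunk = null;

    function enterChunk(scene, chunkName) {
      if (activeChunk) {
        scene.remove(activeChunk);
        disposeChunk(activeChunk);        // free geometries/textures of the old chunk
      }
      activeChunk = loadChunk(chunkName); // build the new chunk around (0, 0, 0)
      scene.add(activeChunk);
    }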

I have this video about it. I find his tutorials very good, and that is exactly what it is about. But what I don't understand is why it is a floating point origin when the camera still has large coordinate values.

Here is the code link:

His world consists of many chunks. He talks about putting the origin in the camera and instead performing the movement on the objects in the opposite direction. I understand the theory. But I am not yet clear on why the camera in his code still has large values and still moves. There is something I haven't understood yet, because what he does clearly has an effect.
This is a very interesting topic. So far I have had little to do with changing the reference system. That will probably keep me busy for a while.


OK, I think I got it. Now I can ask a specific question.

I want to translate all the vertex coordinates of an object so that they are relative to the camera instead of the world coordinate origin. To do this, I just have to compute, for each vertex:

vertex position - camera position

So far, so good. That’s the easy math part.

The question:
How do I perform this operation on all vertices of an object?
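
The most direct (probably naive) approach I can think of would be to loop over the position attribute of the geometry, assuming the mesh itself sits at the world origin so that its vertex coordinates are world coordinates:

    // Shift every vertex of the geometry by -cameraPosition via the position attribute.
    const pos = mesh.geometry.attributes.position;
    for (let i = 0; i < pos.count; i++) {
      pos.setXYZ(
        i,
        pos.getX(i) - camera.position.x,
        pos.getY(i) - camera.position.y,
        pos.getZ(i) - camera.position.z
      );
    }
    pos.needsUpdate = true;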

Yes, the size of planets, even Earth, forces us to move the origin closer. It is better to always have the camera at the origin and transform the world in reverse when moving, using a single transform on top of everything.

“How do I perform this operation on all vertices of an object?”
I use the scene graph (in Unity/vrml/x3d) to do this for me: one transform on top of the world scene is moved in reverse to simulate the camera moving forward. The camera never moves from the origin.
I believe you can do the same in three.js: three.js manual.
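
In three.js terms I imagine it would look roughly like this (just a sketch of the idea, not tested):

    // The camera stays at the origin; a single group holds the whole world and is
    // moved in reverse to simulate camera movement.
    const worldGroup = new THREE.Group();
    scene.add(worldGroup);
    camera.position.set(0, 0, 0);

    // virtualCameraPosition is where the camera "really" is in world coordinates
    function updateFloatingOrigin(virtualCameraPosition) {
      worldGroup.position.copy(virtualCameraPosition).negate();
    }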

The camera is allowed to move in the three.js world, but you have to compensate for this by moving the meshes in opposite directions.

The video from Simon Dev which I shared a little higher up illustrates this pretty well; I would only be repeating here what he conveys more clearly.
What is not obvious from the video is that in the shader the camera really does always remain at the origin.
This is achieved by zeroing the camera translation (x, y, z) in the viewMatrix in the vertex shader:

    // Copy the rotation columns of the view matrix, but replace the translation
    // column with (0, 0, 0, 1) so that the camera effectively sits at the origin.
    mat4 terrainMatrix = mat4(
        viewMatrix[0],
        viewMatrix[1],
        viewMatrix[2],
        vec4(0.0, 0.0, 0.0, 1.0)
    );

    gl_Position = projectionMatrix * terrainMatrix * modelMatrix * vec4(position, 1.0);

Simon creates his own view matrix from the original one, in which the camera always remains at the origin. On the three.js side he subtracts the camera position from the position of the group that contains all his meshes. So he moves the camera in the three.js world and, with this movement, moves the meshes in the opposite direction. Since the meshes all change their position, the effect in the shader is that the world moves around the camera instead. The shifting of the meshes on the three.js side always ends up in the modelMatrix that the shader receives.
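
Put into code on the three.js side, the idea looks roughly like this (my own sketch, not Simon's actual code):

    // The camera still moves normally in the three.js world, but every frame the
    // group holding all meshes is offset by the negative camera position. Combined
    // with the zeroed translation in the custom view matrix above, the world
    // effectively moves around a camera that stays at the origin.
    function updateWorld(camera, meshGroup) {
      meshGroup.position.copy(camera.position).negate();
      // this offset ends up in each mesh's modelMatrix, which the shader receives
    }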

Since I solved this a while ago, I will close the topic. I don't like having open questions in the forum that no longer concern me.
