# Camera and floating point origin

I did it in my standard three.js setup and it works. But I use the logarithmicDepthBuffer because I have very large and very small scales in my project. If the camera and the stormtrooper are both far away from the origin, it gets ugly.

But only in the direction of the distance from the origin. Camera and trooper are just 20 units apart. I suspect the problem is that the x and y coordinates are still absolute, because I only moved the trooper in the z direction. The trooper's own z coordinates, which are very small in relation to the z distance, are no longer resolved precisely by the GPU. So the problem is the very distant reference point, and what I need is a floating origin: my camera must be the reference point, and I would have to transfer all camera movements inversely to the objects. Does anyone have such an example?

If I make the stormtrooper 1000 times larger in Blender and rescale it in three.js with

avatar.scale.setScalar(0.001);

then it looks clean at the large z position. That means I have to make the model big enough so that its vertex coordinates, in relation to the distance from the reference point, can still be resolved correctly by the GPU. But why then, after scaling with

avatar.scale.setScalar(0.001);

does it still look good although I have small values again? I don't understand that yet. Instead of scaling the model in Blender, I would have to do it in three.js. Is that possible with the model matrix?
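One way to do the Blender step in three.js itself is to bake the factor into the vertex data (three.js offers `BufferGeometry.scale()` for exactly this; it multiplies the position attribute in place) and then shrink the object back with `scale.setScalar()`. A minimal sketch of the baking loop, with a plain array standing in for the position attribute:

```javascript
// Bake a scale factor directly into the vertex coordinates, the way
// BufferGeometry.scale() does. `positions` stands in for the flat
// [x, y, z, x, y, z, ...] position attribute.
function bakeScale(positions, factor) {
  for (let i = 0; i < positions.length; i++) {
    positions[i] *= factor; // every component is scaled uniformly
  }
  return positions;
}

console.log(bakeScale([1, 2, 3], 1000)); // [1000, 2000, 3000]
```

In three.js this would look roughly like `avatar.traverse(o => { if (o.isMesh) o.geometry.scale(1000, 1000, 1000); })` followed by `avatar.scale.setScalar(0.001)`. Whether that actually helps depends on where in the pipeline the precision is lost.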

Floating point values in GPUs are very limited and often cause confusion because of unexpected results with “large” values. What you experience is the sacrifice of precision in order to fit calculations between large and small values. The scaling trick worked, because it eliminated this discrepancy and the calculation is done between large and large values, and only after it is done the result is converted into small values. I mean, using scaling to resolve such issues is a matter of chance, because you do not know the exact internal sequence of calculations. In your case scaling worked fine, in other cases it will not work. The only solution, that I’m aware of, is to use small values and to ensure that all intermediate values in expressions are also small.

To answer your other question – instead of moving the stormtrooper and the camera together at a large distance, you can keep them near (0,0,0) and move the rest of the world in the opposite direction.

However, it is better to rethink the spatial organization of your scene. If you have the stormtrooper dance on different planets … you do not need all of them in the scene at the same time. When he is on planet A, (0,0,0) could be at the center of A; when he is on planet B, (0,0,0) could be at the center of B. In this case, the distance between A and B does not matter at all.

Since planets themselves have very large radii (Earth: 6,371,000 m), the planet’s origin itself is not a good reference point either, because then the geometry coordinates of the trooper would still be very small in relation to the radius.

I would then have to place the camera at the origin

camera.position.set(0, 0, 0);

The camera can then rotate just as if it were moving, but it is no longer allowed to translate. So the camera position is always (0, 0, 0)!

That means I have to transform all object coordinates into the camera’s frame, so I always have to move everything else instead. For this, each object would need a Galilean transformation.
The best approach would be for all objects to inherit from a base class that receives the speed from the first-person controller, so that they all adjust their positions by themselves.
Hm… that sounds easy in itself. But maybe there are a dozen things I don’t see yet that need to be considered. Has anyone done this before or knows of an example?
I would have to program that separately first.

Edit: If your world is too big, you have to split it into chunks and work with one chunk at a time. For example, if the planet is too big, cities could be the chunks, or even the buildings. When a chunk is active, the origin of the coordinate system is stationary and it does not move with the camera. When you go to another chunk (e.g. from the city you enter a spaceship), then you remove the old chunk from memory and create the new one in place. Then, while you travel through space, your chunk is the spaceship and the origin is fixed. When you land on a planet 3 million light years away, you delete the spaceship chunk and create the chunk of the new planet. The origin is still the same, and you do not care about the other chunks.

I have this video about it. I find his tutorials very good, and that’s exactly what it’s about. But what I don’t understand is why it’s a floating origin when the camera still has large coordinate values.

His world consists of many chunks. He talks about putting the origin in the camera and instead performing the movement on the objects in the opposite direction. I understand the theory. But I don’t yet see why the camera in his code still has large values and still moves. There’s something I haven’t understood yet, because what he does clearly has an effect.
This is a very interesting topic. So far I have had little to do with changing the reference system. It will probably keep me busy for a while.


OK, I think I got it. Now I can ask a specific question.

I want to translate all the vertex coordinates of an object so that they are relative to the camera instead of the world origin. To do this, I just have to compute, for each vertex:

vertex position - camera position

So far, so good. That’s the easy math part.

The question:
How do I perform this operation on all vertices of an object?