I use a floating point origin to improve precision at large world coordinates. In itself it’s nothing complicated: at a fixed interval I re-position every mesh relative to the camera, so the camera effectively becomes the origin. This is what I do here:
Update(cameraPosition) {
    // Re-express the mesh's world-space origin relative to the camera,
    // so the coordinates that reach the GPU stay small.
    this.mesh.position.copy(this.params.origin);
    this.mesh.position.sub(cameraPosition);
}
For this to work, I also have to split the modelViewMatrix in the vertexShader, which is the product of two parts:
modelViewMatrix = viewMatrix * modelMatrix
Model Matrix: Contains transformations that position, rotate, and scale the mesh in the world coordinate system.
View Matrix: Contains transformations that represent camera position and orientation.
Since the mesh position is already camera-relative, the camera translation has to be stripped out of the viewMatrix, otherwise the offset would be applied twice. The result in the vertexShader then looks like this:
void main() {
    // Rebuild the viewMatrix with its translation column zeroed out:
    // the camera offset is already baked into the mesh position (see
    // Update above), so only the camera rotation may be applied here.
    mat4 chunkMatrix = mat4(
        viewMatrix[0],
        viewMatrix[1],
        viewMatrix[2],
        vec4(0.0, 0.0, 0.0, 1.0)
    );
    //gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    gl_Position = projectionMatrix * chunkMatrix * modelMatrix * vec4(position, 1.0);
}
This ensures that the modelMatrix remains untouched. What I haven’t yet understood is how to account for this transformation on the Three.js side, for the boundingSpheres. The geometries are essentially flat terrain sections, and each one has a boundingSphereCenter that I conveniently generate directly while building the terrain chunks (as the arithmetic mean of the almost equidistant vertices).
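For reference, the centers are computed roughly like this (a simplified sketch; the helper name and the interleaved position layout are just assumptions of this example):

import * as THREE from 'three';

// Arithmetic mean of the chunk vertices, as described above.
// Assumes positions is an xyz-interleaved Float32Array, e.g.
// geometry.attributes.position.array.
function computeBoundingSphereCenter(positions) {
    const center = new THREE.Vector3();
    for (let i = 0; i < positions.length; i += 3) {
        center.x += positions[i];
        center.y += positions[i + 1];
        center.z += positions[i + 2];
    }
    return center.multiplyScalar(3 / positions.length); // divide by vertex count
}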
The boundingSpheres work as expected without the floating point origin; now I would like to extend them to the floating point origin case. Has anyone done something like this before?
I would probably need an analogous transformation for the boundingSphereCenters.
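Something along these lines is roughly what I have in mind (a hypothetical sketch mirroring Update() above; it assumes the cached centers are stored in world space and that the culling test runs in the same camera-relative space the shader produces):

UpdateBoundingSphere(cameraPosition) {
    // Analogous to Update(): re-express the cached world-space center
    // relative to the camera before any culling test runs against it.
    const sphere = this.mesh.geometry.boundingSphere;
    sphere.center.copy(this.params.boundingSphereCenter);
    sphere.center.sub(cameraPosition);
}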