I’m working on a to-scale Solar System project that uses real data scraped from NASA’s website. Because of this, there are some massive scale differences when switching between planets and moons.
Each entity also comes with its own orbit line, and depending on the entity’s orbit path there can be some massive circles drawn using the new THREE.BufferGeometry().setFromPoints(points) method.
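For reference, each orbit line is built roughly like this (a minimal sketch; the radius and segment count are placeholders, not the real NASA-scaled values):

```js
import * as THREE from 'three';

// Sample the orbit as a ring of points around the parent body.
// orbitRadius is illustrative; the real values come from the scraped
// NASA data and can be many orders of magnitude larger.
function createOrbitLine(orbitRadius, segments = 512) {
  const points = [];
  for (let i = 0; i <= segments; i++) {
    const angle = (i / segments) * Math.PI * 2;
    points.push(new THREE.Vector3(
      Math.cos(angle) * orbitRadius,
      0,
      Math.sin(angle) * orbitRadius
    ));
  }
  const geometry = new THREE.BufferGeometry().setFromPoints(points);
  return new THREE.Line(geometry, new THREE.LineBasicMaterial({ color: 0xffffff }));
}
```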
The logarithmic depth buffer takes care of the z-fighting issues, but I’ve run into a recurring problem where zooming in on an entity with a large orbit circle causes the line to vibrate and flicker strangely.
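For context, the renderer is created with the logarithmic depth buffer enabled via the standard WebGLRenderer option, along the lines of:

```js
import * as THREE from 'three';

const renderer = new THREE.WebGLRenderer({
  antialias: true,
  logarithmicDepthBuffer: true
});
```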
I’ve created a CodePen demo of the issue, which can be viewed here.
Clicking the Sphere 5 and Sphere 6 buttons and dragging the camera around the spheres should reproduce the vibration.
I’ve searched for similar issues but couldn’t find anything. Has anyone seen something like this happen before, and does anyone have a possible solution?
It could be that you’re running out of precision in the shader code when you look at a very tiny piece of a system as large as yours. What you’re seeing is most likely numerical error in the calculated vertex positions.
I don’t think the problem here is THREE.js or JavaScript numeric precision. Ultimately, your numbers are uploaded to the GPU and processed in the shader, and shader precision (at least in WebGL) is quite low; that’s the bottleneck.
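To illustrate (a rough sketch using Math.fround to emulate the 32-bit floats a WebGL shader works with): at solar-system scales, a one-metre offset is smaller than the gap between adjacent 32-bit floats, so vertex positions snap between representable values as the camera moves, which shows up as jitter.

```js
// Math.fround rounds a JS double to the nearest 32-bit float,
// roughly what happens when vertex positions reach the GPU.
const orbitRadius = 1.5e11; // ~1 AU in metres (illustrative)

console.log(Math.fround(orbitRadius + 1) === Math.fround(orbitRadius)); // true
// At this magnitude the spacing between adjacent 32-bit floats is
// about 16384, so any detail smaller than that is lost entirely.
```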
A logarithmic Z-buffer is not a magic trick that increases overall numerical precision; it just counters the precision bias in the depth buffer, making it more “fair” for depth values near 1 at the expense of other values.
The only resolution, so far, is to use smaller numbers. This may require extremely large scenes to be split into chunks, each with its own coordinate system.
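As a rough illustration of the “smaller numbers” idea (not code from the demo above): keep the full-precision positions as ordinary 64-bit JavaScript numbers and, each frame, rebase everything so the current focus point sits at the origin before anything reaches the GPU. The names entities, truePosition and focusPosition here are hypothetical.

```js
// Only small, camera-relative offsets are ever handed to the GPU.
function rebaseScene(entities, focusPosition) {
  for (const entity of entities) {
    // entity.truePosition: full-precision position kept in JS (64-bit).
    // entity.mesh.position: what the GPU actually sees.
    entity.mesh.position.set(
      entity.truePosition.x - focusPosition.x,
      entity.truePosition.y - focusPosition.y,
      entity.truePosition.z - focusPosition.z
    );
  }
}
```

Orbit lines would need the same treatment: generate the circle points relative to the focused body rather than the solar-system origin, so the vertex coordinates themselves stay small.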
GPU floating-point numbers are short (only 24 or 32 bits) and have inherently limited precision, much less than JavaScript numbers, which are 64-bit floats. As @tfoller said above, this is not a limitation or bug in Three.js; it is simply how GPUs are designed in order to reach the performance they have today. Double-precision floating-point numbers (64 bits per number) are still not available in WebGL shaders.
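If you want to see what precision your GPU actually exposes to WebGL, you can query it with the standard getShaderPrecisionFormat call (the numbers reported vary by device):

```js
const gl = document.createElement('canvas').getContext('webgl');
if (gl) {
  const fmt = gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT);
  // fmt.precision is the mantissa bit count; 23 means full 32-bit floats.
  console.log(fmt.precision, fmt.rangeMin, fmt.rangeMax);
}
```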