I know this has been briefly discussed before, in the context of floating-point imprecision issues for Boolean operations in gkjohnson/three-bvh-csg.
It's not caused by three-bvh-csg itself. It is caused by the way floating-point numbers are stored on conventional digital computers (forget Quantum, etc!!!).
The idea is: if I know the min/max range of the floats used in my program, I might be able to scale them all up (by some power of 10), round or truncate the results to integers, and thereafter work in this integer world for my critical calculations. The issues to consider are (a) the numerical accuracy required and (b) whether the resulting integer range can be represented with the number of bits I have available for integers.
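For concreteness, here is a minimal TypeScript sketch of what I mean (the scale factor of 1e6, the function names, and the range assumptions are all just illustrative, not anyone's actual API):

```ts
// Minimal sketch of the scale-and-truncate idea (illustrative only).
// Assumes coordinates live in roughly +/- 1000 world units and that
// six decimal digits of resolution are enough for the critical math.

const SCALE = 1e6; // power of 10 -> micro-unit resolution at unit scale

// Convert a float coordinate to an integer "grid" coordinate.
function toIntCoord(x: number): number {
  return Math.round(x * SCALE);
}

// Convert back to a float after the integer-only calculations.
function toFloatCoord(n: number): number {
  return n / SCALE;
}

// (b) Range check: JavaScript numbers hold exact integers only up to
// Number.MAX_SAFE_INTEGER (2^53 - 1, about 9.007e15). With SCALE = 1e6
// that leaves roughly +/- 9e9 world units, but intermediate products
// (e.g. in cross products) shrink that budget and need checking too.
function fitsInSafeInteger(x: number): boolean {
  return Math.abs(toIntCoord(x)) <= Number.MAX_SAFE_INTEGER;
}

// Example: sums that are inexact in floats become exact on the grid.
console.log(toIntCoord(0.1) + toIntCoord(0.2) === toIntCoord(0.3)); // true
console.log(0.1 + 0.2 === 0.3);                                     // false
```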
Does anyone have experience doing something like this? I'd like to hear about it, please.
OLDMAN