Well, I thought about this before I found out that something similar already existed, so it couldn’t have been a scam. More likely it was an approach that looked like the next best thing at first sight and then turned out to be seriously suboptimal for other features; that can happen to anyone, and it doesn’t make the approach a scam, since it isn’t intentionally meant to deceive. Many attempts start well, with teasers about how great the idea is, and then unfortunately hit insurmountable obstacles along the way and cool down.
Fact is, aside from the problems of storing hypothetically infinite data and computing transformations, you have limited screen space and a limited number of colors when it comes to rendering on the screen, and volumes, not just surfaces, when it comes to replicating the real world. So it would make sense to follow the structure you’re trying to replicate instead of faking it via various methods, safe in the knowledge that, apart from computation and storage, the cost of rendering can never exceed the number of pixels in the monitor resolution. If that’s not realistic, I don’t know what is. Implementation problems (storage and computation in this case) always exist… until an optimal solution is found and they don’t exist anymore - so there’s no reason to stay stuck in a particular paradigm; an open mind is useful most of the time.
Thanks for the details and insight into the challenges that would come with using something similar to the cloud example, by the way. So far, adjusting the alpha of the ground-based sky based on distance from the planet looks promising; I just need to match colors and transparencies between it and the outside view so the transition looks seamless. I tried a couple of atmospheric shaders, but I feel a bit uncomfortable using hundreds of lines whose workings I barely understand, instead of my own shader code that achieves more or less the same in one or two dozen lines using gradients instead of complex physics approaches. There is also the issue that most of them seem focused on representing the sky as seen from Earth, rather than offering a solution that can easily be adapted into a spherical atmosphere. I’m not one to give up, though, so I’ll see what I can do instead.
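For what it’s worth, the distance-based alpha idea can be sketched in a handful of lines. This is just a minimal illustration of the fade itself (the function and parameter names are mine, not from any engine or your shader), using a smoothstep ramp so the sky dome fades out as the camera climbs through the atmosphere shell:

```javascript
// Illustrative sketch, not any particular engine's API: the ground-based
// sky's opacity goes from 1 at the surface to 0 at the top of the
// atmosphere, so it can hand off smoothly to the space/outside view.
function skyAlpha(cameraDistance, planetRadius, atmosphereRadius) {
  // Normalize altitude into [0, 1] across the atmosphere shell.
  const t = (cameraDistance - planetRadius) / (atmosphereRadius - planetRadius);
  const clamped = Math.min(Math.max(t, 0), 1);
  // Smoothstep gives a gentler transition than a linear ramp.
  const s = clamped * clamped * (3 - 2 * clamped);
  return 1 - s;
}

console.log(skyAlpha(100, 100, 110)); // at the surface: 1 (fully opaque)
console.log(skyAlpha(110, 100, 110)); // at the atmosphere edge: 0 (transparent)
console.log(skyAlpha(105, 100, 110)); // halfway up: 0.5
```

The same ramp works per-fragment in GLSL by feeding the camera-to-planet-center distance in as a uniform, which keeps the whole thing at gradient-level complexity rather than full scattering physics.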
Anyway, I’ll stop here with this; I don’t want to divert things too far off topic. Great to hear that you figured out ways to improve performance by morphing real-world structures on the fly. I guess that even though terrain complexity isn’t an issue here, the number of real-world structures you use, and doing things on the fly instead of statically, could pose another challenge as things evolve.
That being said, in my opinion you (probably unintentionally, while looking for ways to make things more efficient) just hit the jackpot… because that’s exactly how humans are able to visualize and describe multidimensional space (or anything else, for that matter): through on-the-fly association with already known forms, without having to “store” large amounts of data. For example, a human can instantly build a 3D apple knowing that it’s basically a “sphere” - actually a closed ball - “bulged” less downwards by some factor, from which you exclude a curved cone at the top and apply some random surface irregularities - so with only a couple of known properties / characteristics you have your shape. And the ball volume is easily described as the set of all points at distance less than or equal to r from x, without having to store any of those points or surfaces. Light usually accounts for surfaces only, but it can extend to the lack of light at interior points, similar to how occlusion works.
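To make the ball example concrete: the “set of all points at distance ≤ r from x” can be written directly as a membership test, where the test itself is the volume and nothing is stored per point. Here’s a small sketch (the apple deformation is purely illustrative, a stand-in for the squash-and-notch idea rather than a real apple model):

```javascript
// The closed ball of radius r around center x: all points whose distance
// from x is <= r. No points or surfaces are stored; the predicate IS the volume.
function insideBall(point, center, r) {
  const dx = point[0] - center[0];
  const dy = point[1] - center[1];
  const dz = point[2] - center[2];
  return dx * dx + dy * dy + dz * dz <= r * r;
}

// Illustrative apple-like tweak (made-up numbers): squash the ball along y
// and push points near the top axis outward before the ball test, which
// carves a notch where the stem would sit.
function insideApple(point, r) {
  const [x, y, z] = point;
  const squashedY = y * 1.2; // "bulged less" along one axis
  const notch = y > 0 ? Math.max(0, 0.3 * r - Math.hypot(x, z)) * 2 : 0;
  return insideBall([x, squashedY + notch, z], [0, 0, 0], r);
}

console.log(insideBall([0, 0, 0], [0, 0, 0], 1)); // true: center is inside
console.log(insideBall([2, 0, 0], [0, 0, 0], 1)); // false: outside radius 1
console.log(insideApple([0, 0.8, 0], 1));         // false: inside the notch
```

So the whole shape is a couple of characteristics (radius, squash factor, notch size) plus a test, exactly the kind of description you’re pointing at.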
Incidentally, as I mentioned before, this has little to do with geometric surfaces, faces, vertices, normals and such (which don’t even need to be stored or memorized each time, apart from the base “known” shapes), since these are all volumes and have a base “inside” configuration similarly described via a couple of characteristics. They also offer a potentially optimal way to describe “atomic” structures using just a small set of description data, plus the three transformations, instead of storing every point (or vertex, in the case of geometric approaches) in space. That’s what I meant in my previous replies: the current approaches are flawed in that regard.
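The “small set of description data plus the three transformations” point can be made concrete with a minimal sketch (all names here are hypothetical): each object stores only a base-shape id and its scale / position / rotation, while the heavy geometry for each base shape exists exactly once and is shared by every instance that references it.

```javascript
// Hypothetical sketch: base shapes are defined once; instances only carry
// an id plus three transformations, regardless of mesh detail.
const baseShapes = {
  sphere: { kind: 'sphere' }, // stands in for one shared, detailed geometry
  cube: { kind: 'cube' },
};

function makeInstance(base, position, rotation, scale) {
  // One id + 9 numbers per object, no matter how complex the base mesh is.
  return { base, position, rotation, scale };
}

const scene = [
  makeInstance('sphere', [0, 1, 0], [0, 0, 0], [1, 0.9, 1]), // a squashed sphere
  makeInstance('cube', [2, 0, 0], [0, Math.PI / 4, 0], [1, 1, 1]),
];

// Per-instance storage stays constant: 3 + 3 + 3 = 9 numbers plus the id.
const numbersPerInstance =
  scene[0].position.length + scene[0].rotation.length + scene[0].scale.length;
console.log(numbersPerInstance); // 9
```

This is essentially what Three.js’s InstancedMesh does with per-instance matrices, just spelled out as plain data.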
I would strongly suggest exploring this avenue further, because it’s efficient and can describe any shape whatsoever, since all shapes are just unions, intersections and exclusions of simpler known volumes. In Three.js terms, every shape can be described as a union of base convex hulls (e.g. spheres, cubes, pyramids, etc., which can easily be stored just once), and only those hulls’ transformations (scale, position and rotation) need to be accounted for, nothing else. Or, to put it another way, everything is just variations / combinations of instanced base forms. Of course, automatically decomposing a shape into convex hulls is a challenge, and you need a fast on-the-fly morphing algorithm, but that’s a “different” matter…
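As a final illustration of the union / intersection / exclusion idea: if each base volume is a membership predicate (as in the ball example above), then the CSG operations are just boolean combinators over those predicates. A small hedged sketch, with made-up shapes:

```javascript
// CSG via membership predicates: union / intersection / exclusion of volumes
// are plain boolean combinations of their inside-tests. Illustrative only.
const ball = (center, r) => (p) =>
  (p[0] - center[0]) ** 2 +
  (p[1] - center[1]) ** 2 +
  (p[2] - center[2]) ** 2 <= r * r;

const union = (a, b) => (p) => a(p) || b(p);
const intersect = (a, b) => (p) => a(p) && b(p);
const exclude = (a, b) => (p) => a(p) && !b(p);

// Example shape: two overlapping unit balls, minus a small ball at the origin.
const shape = exclude(
  union(ball([-0.5, 0, 0], 1), ball([0.5, 0, 0], 1)),
  ball([0, 0, 0], 0.25)
);

console.log(shape([1.2, 0, 0])); // true: inside the right-hand ball
console.log(shape([0, 0, 0]));   // false: carved out by the exclusion
console.log(shape([3, 0, 0]));   // false: outside everything
```

Each leaf is one stored description (center, radius, or more generally a base hull plus its transformations), and arbitrarily complex shapes fall out of composing them; the hard part, as you say, is decomposing an arbitrary mesh into such primitives automatically.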