Hello! First off, I’m not sure I’m asking this the right way. Please bear with me as I explain what I’m trying to do.
I’m looking to implement something similar to DZI (Deep Zoom Image), but for panoramas. My current solution creates a large number of sphere slices (by adjusting the phi and theta parameters) and textures each one with an image tile. This works for the most part, but I think it uses more memory than necessary since I’m creating so many separate sphere geometries.
To optimize, I’m changing it to use a single sphere geometry with a large initial white dummy texture, and then using copyTextureToTexture() to load tiles into it as needed.
Finally, my question… I’m stuck trying to figure out how to detect which part of the sphere geometry the camera is looking at, so I can update the corresponding tile. Any help is appreciated!
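To help frame the question, here’s a rough sketch of the kind of mapping I imagine I need (not working code — the tile counts and the helper name are placeholders, and it assumes the panorama is split into a simple equirectangular grid of tiles):

```javascript
// Hypothetical sketch: map a unit view direction to an equirectangular
// tile index. TILES_X / TILES_Y are placeholder grid dimensions.
const TILES_X = 8; // tiles around the equator (longitude)
const TILES_Y = 4; // tiles from pole to pole (latitude)

// dir is a unit-length view direction {x, y, z} with y up
// (e.g. what camera.getWorldDirection() would give in three.js).
function tileForDirection(dir) {
  // Longitude: angle around the y axis, remapped to [0, 1).
  const u = (Math.atan2(dir.z, dir.x) / (2 * Math.PI) + 0.5) % 1;
  // Latitude: angle down from the north pole, remapped to [0, 1].
  const v = Math.acos(Math.min(1, Math.max(-1, dir.y))) / Math.PI;
  return {
    col: Math.min(TILES_X - 1, Math.floor(u * TILES_X)),
    row: Math.min(TILES_Y - 1, Math.floor(v * TILES_Y)),
  };
}

console.log(tileForDirection({ x: 0, y: 0, z: -1 })); // → { col: 2, row: 2 }
```

The idea would be to call this once per frame with the camera’s forward direction and load the tile (plus maybe its neighbors) at that col/row. I’m not sure whether this phi/theta convention matches how SphereGeometry lays out its UVs, though, which is part of what I’m asking.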