I am wondering if three.js has any built-in methods to deal with this, and if not how it might best be implemented another way.
Basically, I have a globe (sphere geometry) with an earth texture applied to it, and I wish to increase the level of detail of the applied texture as the user zooms closer. However, the user will only be interested in zooming into a certain area of the globe (e.g. the red area in the photo), so as the user zooms in, only the texture of the red area will increase in resolution.
I am wondering, could I remove the faces in the red area from the sphere geometry, and replace them with an identical, but separate mesh to which I could dynamically apply the textures at different resolutions? If I were to do this, would the original texture still be applied correctly to the sphere with the hole in it? And also, would there be any artefacts around the border between the sphere with the hole in it, and the custom mesh?
The cost of drawing textures is (more or less) independent of their resolution (relative to draw size), so if all of them are loaded in advance you could have everything set up with different levels of detail from the start.
It comes down to draw calls and whether you can load all the needed textures in advance. I am building something like this myself right now, and the problem is not draw performance, but keeping only the needed tiles in memory.
If in your case it comes down to one ‘area of interest’, you can simply cut the sphere up into smaller and smaller segments with holes for the next level, texture them all, and render them all in one go.
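Cutting the sphere into segments can be done directly with the `phiStart`/`phiLength`/`thetaStart`/`thetaLength` arguments of `SphereGeometry`. A rough sketch of the conversion (the helper name and the example region are my own; it assumes the usual three.js convention of the polar angle measured from the north pole):

```javascript
// Hypothetical helper: convert a lat/lon rectangle (degrees) into the
// partial-sphere arguments that THREE.SphereGeometry accepts, so a
// separate patch mesh covers exactly the 'area of interest'.
function regionToSphereParams(latMin, latMax, lonMin, lonMax) {
  const d2r = Math.PI / 180;
  return {
    phiStart: (lonMin + 180) * d2r,    // azimuth, 0..2π
    phiLength: (lonMax - lonMin) * d2r,
    thetaStart: (90 - latMax) * d2r,   // polar angle from the north pole, 0..π
    thetaLength: (latMax - latMin) * d2r,
  };
}

// Usage sketch (region values are made up):
// const p = regionToSphereParams(30, 60, -10, 40);
// const patch = new THREE.SphereGeometry(radius, 64, 64,
//   p.phiStart, p.phiLength, p.thetaStart, p.thetaLength);
```

Because the patch shares the exact same parametrisation as the full sphere, its border vertices land on the same positions as the surrounding segments, which avoids visible seams as long as the segment counts line up.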
If you need to swap things around, there is an LOD (level of detail) system in three.js, but I am not sure it fits your case: https://threejs.org/examples/?q=lod#webgl_lod
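The idea behind `THREE.LOD` boils down to pairing each piece of content with a distance threshold and showing the level whose threshold best matches the camera distance. A minimal sketch of that selection logic (not the actual three.js source; the level objects here are plain data):

```javascript
// Each level pairs content with a minimum camera distance; the visible
// level is the one with the largest threshold not exceeding the distance.
function pickLevel(levels, cameraDistance) {
  // `levels` must be sorted ascending by their `distance` threshold.
  let current = levels[0];
  for (const level of levels) {
    if (cameraDistance >= level.distance) current = level;
    else break;
  }
  return current;
}

// With three.js itself this would look roughly like:
// const lod = new THREE.LOD();
// lod.addLevel(highDetailMesh, 0);
// lod.addLevel(lowDetailMesh, 200);
// scene.add(lod); // the renderer picks the visible level each frame
```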
The other (not so easy) part is creating the meshes with correct UV mapping coordinates and the fitting textures. But this can be done with most 3D packages and without the need for custom scripts…
Thanks, some helpful ideas here.
I think what I will do is create two meshes using Blender, and apply a high-resolution texture to the ‘area of interest’ from the get-go.
I’m very curious to know where you got that information. Could you please share?
If I read you right, you believe otherwise but have no proof at hand; the same goes for me.
A quick Google search did not bring up any hard evidence, and I am not very interested in writing benchmarks.
But I would argue that mipmaps are the reason that if a, let’s say, 8k×8k texture is rendered into 50×50 pixels it will not be noticeably slower than a 1k×1k texture rendered into the same 50×50 pixels.
There will for sure be a measurable difference at some point, as the GPU cache can only hold so much, but again, for most cases mipmaps take care of that.
Imagine a full 3D scene from some game: there are tons of textures, some up to 4k or 8k, on the screen at the same time. The LOD systems in place take care of loading reduced geometry in the distance, but you have no need to resize your textures because, again, mipmaps.
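To put a rough number on that: the GPU chooses a mip level from how many texels map onto each screen pixel, so the data actually sampled from a big texture drawn small is tiny. A back-of-the-envelope version (the real selection is computed per pixel from UV derivatives; this only captures the order of magnitude):

```javascript
// Approximate mip level the GPU would sample when a square texture of
// `textureSize` texels is drawn into `drawSizePx` screen pixels.
// Level 0 is the full-resolution image; each level halves the size.
function approxMipLevel(textureSize, drawSizePx) {
  return Math.max(0, Math.log2(textureSize / drawSizePx));
}

// An 8192-texel texture drawn at 50 px lands around mip level ~7.4,
// i.e. it is sampled from roughly 64x64 data -- about the same
// bandwidth as drawing a genuinely small texture.
```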
Aha, I see. So you’re talking about rendering top levels of mip pyramid and not large textures per se. I understand now, thanks. I was a bit shocked initially and thought maybe my understanding of how GPU memory and texture sampling works was severely outdated. No offense meant.