Hi all, I’m using an 8192×4096 px image as a texture on a glb sphere. When I zoom the camera in, I don’t see the detail I’d expect when zooming into a high-res image. Even at a normal viewing distance the texture appears less crisp and slightly blurry compared to the original image. It seems like three.js or R3F is sampling the pixels improperly, especially over Egypt, the US, and other regions. I tried an even higher-res image, but it produced the same result.
Please compare the two at Egypt (I mean in the demo vs. the source texture image).
I’ve been trying to debug this issue for the last month. I thought it might be OpenLayers’ fault, since I’m projecting the map onto a sphere via a CanvasTexture, but the answer is no: I can see OpenLayers’ canvas rendering a clear image. I just don’t understand what happens when I use it through R3F.
Can you please help me with this? I’ve tried almost everything over the last two weeks, but nothing seems to work.
Define “accurate”. You wanted it “crisp” - this, I believe, is as crisp as it gets.
You are probably comparing a zoomed in version of the image in some software (browser, Photoshop?) to a zoomed in version of the texture in THREE and they somehow don’t match. There is no way for me to replicate that.
You need to provide both images as screenshots side by side, at the same zoom level, otherwise it’s hard to understand what you are trying to achieve.
THREE.WebGLRenderer: Texture has been resized from (27000x13140) to (16384x7973).
It looks like the demo is using dynamically generated data from OpenLayers rather than the 8K texture attached above. The dynamically generated texture does not have the power-of-two dimensions recommended for WebGL, and must be resized (down to a lower resolution) in order to generate mipmaps for use in 3D scenes.
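For anyone wondering why that exact resize happens, here is a minimal, plain-JS sketch of the two constraints at play (16384 is a common GPU maximum texture size, and power-of-two dimensions are what WebGL 1 mipmapping requires; the dimensions are taken from the warning above):

```javascript
// Why 27000x13140 gets resized: three.js clamps textures to the GPU's
// maximum texture size (commonly 16384), and WebGL 1 mipmapping
// additionally requires power-of-two dimensions - which is why an
// 8192x4096 source is a safe choice.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

console.log(isPowerOfTwo(27000)); // false - gets resized
console.log(isPowerOfTwo(13140)); // false
console.log(isPowerOfTwo(8192));  // true  - safe width
console.log(isPowerOfTwo(4096));  // true  - safe height
```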
I would recommend simplifying the scope of the problem, using the static texture and not OpenLayers; none of us have experience with OpenLayers. It will be easier to compare results without additional dependencies.
@donmccurdy After our conversation on Discord, once I started using the 8192×4096 px image I didn’t get that error.
I have a way for you to replicate it on your end. This isn’t an OpenLayers issue but actually three.js itself. Just create a canvas with a width and height of 8192×4096 px, draw this image into the canvas as described in this article, and use the canvas element as a texture on the sphere. You’ll get the same result this thread describes. I tried it yesterday.
I’ll update my CodeSandbox demo for it shortly as well.
… I do not see a huge difference in terms of pixel resolution, other than some effect of the mipmaps. If you want to see pixel edges more sharply, try THREE.NearestFilter for the minFilter and magFilter settings.
Applications like Google Earth will have very complex systems for gradually streaming in prepared data from many textures. Perhaps some examples exist doing simpler versions of the same thing in three.js, but in terms of the results you can get from a single texture, I don’t see that there is a bug here. The colors are off, but none of the color management best practices are set up either, so I haven’t debugged that.
Result when just loading the texture directly (no CanvasTexture) and applying it to a SphereGeometry in the three.js editor. I don’t know if I’m seeing more detail but the colors are improved with correct color management settings.
@donmccurdy I actually forgot to upload the 8K image I was using locally; it’s been uploaded now.
One thing I wanted to know: is there another way to stack multiple images on top of one another, with transparency support, as textures on a three.js material? That way I could choose which texture to show / which country boundary to highlight based on scroll position or key strokes.
If I have to do this in Blender, I can use concentric UV sphere meshes, but that would push the glb size to 5-15 MB.
Oh, shaders - I’ve heard a lot of good things about them, but they need extensive practice. I did look into Cesium, but its docs weren’t as easy to get started with as three.js’s, and there’s no tutorial for it. There’s indeed Mapbox’s globe, which they designed to compete with Google Earth, but the problem with all of them was that I didn’t know whether they support a Blender-style camera like my project does (it’s still in progress), because I want to animate it like a satellite and an FPV drone. And so far, looking at their docs, I guess they don’t.
I did send you that article during our Discord discussion; that’s the exact method I’m using right now. But sure, I’ll look forward to implementing that sort of thing in the future, since right now I’m doing all of this alone.
The WebGL Globe demos were interesting; I’ll look into their GitHub code.