Equirectangular pano is more blurry than equivalent cube map version

I'm experimenting with 360 panoramas: one version uses six 1024 x 1024 images in a cube map, and another uses a single equirectangular image (4096 x 2048) created from the original six.

The cubemap version is crystal clear. The raw equirectangular image is crystal clear (link). The equirectangular pano is quite blurry (fiddle).

Is that just to be expected due to the loss of information as you move from six images to one, or is there something I can do to improve the quality?

I already tried some of the recommended texture settings:

texture.mapping = THREE.EquirectangularReflectionMapping;
texture.minFilter = texture.magFilter = THREE.LinearFilter;

but that didn’t make any difference as far as I can tell.

Equirectangular environment maps are internally converted to cube maps because only that format (and cubeUV) is supported by the shaders. The internal conversion process uses a cube render target with a fixed size computation:

This produces a cube map with 1024x1024 faces, which is the same resolution as your original cube map images.
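The fixed size computation described above can be sketched as follows. The helper name is mine, not three.js', but the relationship (cube face size = equirectangular height / 2) matches the numbers in this thread:

```javascript
// Sketch of the internal sizing described above (assumption: three.js
// derives the cube render target size from the equirectangular image
// height, so a 4096x2048 panorama yields 1024x1024 cube faces).
function internalCubeFaceSize(equirectHeight) {
  return equirectHeight / 2;
}

console.log(internalCubeFaceSize(2048)); // → 1024, for a 4096x2048 panorama
```

If you want more control over the resulting cube map, one workaround is to do the conversion yourself via `new THREE.WebGLCubeRenderTarget( size ).fromEquirectangularTexture( renderer, texture )` and pick the size explicitly, though a larger target can't add detail that isn't in the source image.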

Do you mind sharing a second fiddle that creates the background with these images? It would be interesting to see the difference between both fiddles.

BTW: I think the background in the fiddle already looks quite good.

Understood and thank you for the feedback.

I boiled down the cubemap version into a simple example (and discovered along the way that scene.background can take a texture loaded with THREE.CubeTextureLoader!) - here are the two fiddles for comparison:

Cube map 6 x image version (clear)

Equirectangular single image version (a bit blurry)

Flipping between the two in browser tabs shows, I think, that the single equirectangular version is a bit more blurry. Sadly, I have to use a single image in the application I have in mind (user-generated content), so reverting to the cube map option is not possible.

Hopefully something silly I am doing.

I’ve debugged this issue for a while but can’t find a place in the engine code that would explain the blur in the second fiddle.

Do you see a similar quality loss when performing the format conversion with other tools?

Evidently, that blur is introduced by the post-processing that happens during the conversion to a cube map.

One solution is to compensate for this blur by pre-enhancing the image, a technique commonly used in FM radio (pre-emphasis), Dolby noise reduction, etc., wherever loss of high-frequency content occurs at the end device.

Check this: I had the best results by upscaling the image 2x using Photoshop’s “Bicubic Sharper”, which produces a slightly sharper upscale than neutral bicubic (without making the file much larger, since the visual information is about the same): https://jsfiddle.net/dlllb/1n23pr64/

Or, use a higher quality / higher res source image that isn’t just marginally clear.

Thank you for taking the time to dig into this @Mugen87 - much appreciated. I’ll keep experimenting and post here if I find something.

Thank you @dllb - your version does indeed look a bit sharper.

The 360s will be captured inside my (C++) application by its consumers, so sadly modifying the resulting EQR image in Photoshop etc. isn’t a possibility. I will see if I can find a way to scale up the 6 source images myself before they are written - that seems like it would help.
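For reference, a quick sketch of the export sizing this implies. The helper is hypothetical; it just combines the 2:1 equirectangular aspect ratio with the face = height / 2 internal conversion mentioned earlier, so that the re-converted cube faces keep at least the source resolution:

```javascript
// Hypothetical helper: pick an export resolution for the equirectangular
// image so that three.js' internal cube conversion (face = height / 2)
// preserves the source cube-face resolution, with an optional oversample
// factor to leave headroom for the resampling loss.
function exportSizeFor(sourceFaceSize, oversample = 2) {
  const height = 2 * sourceFaceSize * oversample; // internal face = height / 2
  return { width: 2 * height, height };           // equirect is always 2:1
}

const size = exportSizeFor(1024); // 1024px source faces, 2x oversample
console.log(size.width, size.height); // → 8192 4096
```

With the defaults above, 1024px source faces would be exported as an 8192x4096 equirectangular image, which three.js would convert back into 2048px internal faces.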

Can you help me understand this line though:

Or, use a higher quality / higher res source image that isn’t just marginally clear.

The https://i.ibb.co/Vxsg9JH/eqr-test-img.jpg is just a simple test image I made locally in a graphics package, and AFAICT it is very clear.

You’re right, that rectangular image https://i.ibb.co/Vxsg9JH/eqr-test-img.jpg is indeed very clear, provided you use it as is, without post-processing. But the resulting equirectangular one https://i.ibb.co/jgrhWhg/eqr-vs-jpg-eqr.jpg has areas where the demands vary: lines appear thinner and more fragile in some areas than in others, and those will be the first affected by post-processing artifacts:

more tolerant visual information:

more fragile visual information (thinner lines):

Therefore, taking into account that this (fragile) visual information has to withstand a second post-processing pass, the image is only marginally capable of meeting this demand, hence the blurring.

You could definitely use a post-processing JS (or C++) library to get the equivalent of a quality “bicubic sharper” upscale; there are usually several algorithms available that you could experiment with.
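As a rough illustration of what such a pre-enhancement pass does, here is a minimal, hypothetical sharpening step in plain JS: a 3x3 Laplacian sharpen over a grayscale buffer. Real libraries offer better resamplers (bicubic, Lanczos) that combine the upscale and sharpening in one step; this only shows the idea of boosting high-frequency content before the lossy conversion:

```javascript
// Hypothetical sharpening sketch (not from the thread): applies a
// Laplacian-based sharpen to a grayscale image stored as a flat array,
// clamping reads at the borders. amount = 0 leaves the image unchanged.
function sharpen(pixels, width, height, amount = 1) {
  const out = new Float32Array(width * height);
  const get = (x, y) => {
    x = Math.min(Math.max(x, 0), width - 1);
    y = Math.min(Math.max(y, 0), height - 1);
    return pixels[y * width + x];
  };
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const center = get(x, y);
      // Laplacian of the 4-neighbourhood: large where there is detail,
      // zero in flat regions, so flat areas pass through untouched.
      const lap = 4 * center
        - get(x - 1, y) - get(x + 1, y)
        - get(x, y - 1) - get(x, y + 1);
      out[y * width + x] = center + amount * lap;
    }
  }
  return out;
}
```

Running this before the equirectangular image is written (or on the 6 source faces) would exaggerate edges the same way “Bicubic Sharper” does, at the cost of possible halo artifacts if `amount` is pushed too high.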

Got it - thank you for the clarification