Why does Three.js scale textures down to a lower POT instead of up to the next higher POT?

I understand that power-of-two textures are important for mipmapping. When I upload an image that is, for example, 800x800, the engine automatically resizes it to 512x512. That means I lose almost 300px of resolution on each axis! Why doesn't Three.js stretch the image up to the next higher POT, 1024x1024, instead? That way you don't lose any image data, you just stretch the existing pixels, and the mipmapping can still generate a 512x512 level.

The engine already creates a 2d canvas context to downsize the image, so it would be very simple to use a ceil calculation instead of a floor. I understand this decision was probably made a long time ago, but is there a reason the engine was made to downscale instead of upscale when resizing?
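For reference, here is the difference I mean. This is just a rough sketch, not the actual engine code; three.js does expose both variants as `THREE.MathUtils.floorPowerOfTwo` and `THREE.MathUtils.ceilPowerOfTwo`:

```js
// Rough sketch of the two policies (not the actual three.js internals).
function floorPowerOfTwo(value) {
  return Math.pow(2, Math.floor(Math.log(value) / Math.LN2));
}

function ceilPowerOfTwo(value) {
  return Math.pow(2, Math.ceil(Math.log(value) / Math.LN2));
}

console.log(floorPowerOfTwo(800)); // 512, what the engine does today
console.log(ceilPowerOfTwo(800));  // 1024, what I am proposing
```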

Textures with high resolutions can easily become a performance issue. Downscaling instead of upscaling is simply the more resource-efficient, and therefore safer, approach.

To be clear, it’s not necessarily an issue with 800x800 textures. But when a user wants to apply a 4500x4500 texture, the difference between 4096x4096 and 8192x8192 is quite noticeable.

4096 * 4096 * 4 (RGBA) = 67,108,864 bytes ≈ 67 MB
8192 * 8192 * 4 (RGBA) = 268,435,456 bytes ≈ 268 MB

And these numbers are without mipmaps.
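If you want to sanity-check those numbers yourself, including the roughly one-third overhead a full mip chain adds on top (1/4 + 1/16 + ... converges to 1/3):

```js
// Rough GPU memory estimate for an uncompressed RGBA8 texture.
function textureBytes(size, withMipmaps = true) {
  const base = size * size * 4; // 4 bytes per pixel (RGBA)
  // A full mip chain adds about one third on top of the base level.
  return withMipmaps ? Math.floor(base * 4 / 3) : base;
}

console.log(textureBytes(4096, false)); //  67,108,864 B ≈  67 MB
console.log(textureBytes(8192, false)); // 268,435,456 B ≈ 268 MB
console.log(textureBytes(8192));        // ≈ 358 MB with mipmaps
```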


Some previous discussion here:

I’d agree with that rationale: using power-of-two textures is best practice, and having three.js resize them on the fly hurts performance, with no benefit beyond easy prototyping. Downscaling also makes the mistake much more obvious. Silently inflating the texture's footprint in GPU memory is something many users would never notice.
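That said, anyone who really wants the upscale behavior can opt into it before handing the image to three.js. A minimal sketch, assuming `image` is an already-loaded HTMLImageElement:

```js
// Opt into "ceil" resizing yourself before creating the texture.
function upscaleToPOT(image) {
  const canvas = document.createElement('canvas');
  canvas.width = THREE.MathUtils.ceilPowerOfTwo(image.width);
  canvas.height = THREE.MathUtils.ceilPowerOfTwo(image.height);
  // Stretch the source pixels to fill the POT canvas; no data is cropped away.
  canvas.getContext('2d').drawImage(image, 0, 0, canvas.width, canvas.height);
  return canvas;
}

const texture = new THREE.Texture(upscaleToPOT(image));
texture.needsUpdate = true; // required so the renderer uploads the canvas
```

So the floor default doesn't really block anyone; it just makes the cheap path the default.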


Ah, that makes perfect sense. I knew there was some good reasoning behind that decision!