I understand that power-of-two textures are important for mipmapping. When I upload an image that is, for example, 800x800, the engine automatically resizes it to 512x512. That means I lose nearly 300px of resolution on each axis! Why doesn't Three.js stretch the image up to the next higher POT, 1024x1024, instead? Doing so wouldn't discard any image data: the existing pixels just get stretched, and mipmapping can still generate a 512x512 level from there.
The engine already creates a 2D canvas context to downsize the image, so it would seem very simple to use a ceil calculation instead of a floor. I understand this decision was probably made a long time ago, but is there a reason the engine was made to downscale rather than upscale when resizing?
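For context, the floor-vs-ceil difference I mean can be sketched with two small helpers. These are my own illustrative functions, not the engine's actual code (though I believe Three.js ships similar utilities in `THREE.MathUtils`, e.g. `floorPowerOfTwo` / `ceilPowerOfTwo`):

```javascript
// Round a texture dimension DOWN to the nearest power of two
// (what the engine does today, per my understanding).
function floorPowerOfTwo(value) {
  return Math.pow(2, Math.floor(Math.log2(value)));
}

// Round a texture dimension UP to the nearest power of two
// (the alternative I'm asking about).
function ceilPowerOfTwo(value) {
  return Math.pow(2, Math.ceil(Math.log2(value)));
}

console.log(floorPowerOfTwo(800)); // 512  -> downscale target, data is lost
console.log(ceilPowerOfTwo(800));  // 1024 -> upscale target, data is only stretched
```

The canvas resize itself would be unchanged; only the target size passed to `drawImage` would differ.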