I work on scenes for mobile display. I’m not an expert in three.js, and I find it difficult to know which limits to respect when preparing a scene for mobile (due to low GPU memory). Here are some questions that are hard to decide, because I don’t know how to calculate the impact:
Is it better to have more separate meshes in the scene with a low texture size for each of them, or fewer (merged) meshes with a higher texture size (to keep quality)?
Is there a way to calculate the memory used by my scene, and if so, which formula (for instance, number of meshes * number of textures * texture width * texture height * 4 bytes…)? I’m not sure…
So, what’s the best way to prepare a scene for mobile? In my case, I have a model with n meshes, 4 lights without shadows, an HDR environment map, and AO, diffuse, normal and metalness maps for each mesh.
Thanks a lot for your response. I found a lot of tips and tricks on Google, but I didn’t find one that clearly explains the memory impact, how to calculate it, and the best compromise between keeping meshes separate or merging them with a higher texture size to keep definition…
I’m afraid it’s not possible to answer this question since it depends on the use case. What you should bear in mind is that texture resolutions which work fine on desktop do not necessarily work on mobile platforms since memory and bandwidth are way more restricted.
Ideally, you do both. Merge meshes or use instanced rendering but also try to use compressed textures with potentially lower resolution.
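For the merging part, here is a minimal sketch of how that can look in three.js (assuming a recent release where the helper is exported as mergeGeometries; meshes and sharedMaterial stand in for your own objects, and every merged geometry must share one material and the same attribute layout):

```js
import * as THREE from 'three';
import { mergeGeometries } from 'three/addons/utils/BufferGeometryUtils.js';

// Collapse several meshes that share one material into a single draw call.
const geometries = meshes.map((mesh) => {
  const geometry = mesh.geometry.clone();
  geometry.applyMatrix4(mesh.matrixWorld); // bake each mesh's world transform into its vertices
  return geometry;
});
const merged = new THREE.Mesh(mergeGeometries(geometries), sharedMaterial);
scene.add(merged);
```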
I do not recommend manually computing such metrics, since the results are often wrong and thus misleading. It’s better to monitor memory allocation with the browser’s dev tools.
Again, that depends, and without seeing your app it’s not possible to tell. It’s best to keep draw calls to a minimum and to optimize textures as well as possible. Texture compression in particular is an underrated feature.
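For a quick in-app sanity check alongside the dev tools, three.js also exposes rough counters on the renderer. A sketch (renderer, scene and camera are whatever your app already uses; these are resource and draw-call counts, not bytes of VRAM):

```js
renderer.render(scene, camera);
console.log(renderer.info.memory.geometries); // geometries currently allocated
console.log(renderer.info.memory.textures);   // textures currently allocated
console.log(renderer.info.render.calls);      // draw calls in the last frame
console.log(renderer.info.render.triangles);  // triangles rendered in the last frame
```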
First, I want to thank you for your response. I’ve seen that you always try to respond, and this is an incredible help for people like me.
So, I already use glTF.
What do you mean by compressed textures? I believed that a JPG or PNG made no difference in memory, since it ends up as an array of width * height pixels. Could texture compression change that?
Secondly, the model I use (I can’t share it because it’s confidential) is a handbag. So, when you speak about instanced rendering, what do you mean? I think it only applies to duplicated meshes, or not?
When you load a JPG or PNG, the texture data have to be decoded before being uploaded to the GPU. After this process they are uncompressed in VRAM and thus allocate a lot of memory. When using compressed texture formats like S3TC, ASTC or PVRTC, the texture data are kept compressed in GPU memory and are decompressed on the fly when accessed by the shader. This happens in hardware, which makes the decompression extremely fast. The overall memory savings can be significant.
Using compressed textures in WebGL used to be complex, since you had to support many different texture compression formats to cover both desktop and mobile. However, this has become easier thanks to KTX2 and Basis.
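On the three.js side, wiring this up looks roughly like the following sketch (the addon import path and file names are placeholders and differ slightly between releases; renderer and scene are your existing objects):

```js
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { KTX2Loader } from 'three/addons/loaders/KTX2Loader.js';

// Set up Basis/KTX2 transcoding so glTF files with KTX2 textures can load.
const ktx2Loader = new KTX2Loader()
  .setTranscoderPath('/basis/') // placeholder path to the Basis transcoder files shipped with three.js
  .detectSupport(renderer);     // picks a GPU format supported by the current device

const gltfLoader = new GLTFLoader().setKTX2Loader(ktx2Loader);
gltfLoader.load('handbag.glb', (gltf) => scene.add(gltf.scene)); // placeholder file name
```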
If you render the same mesh multiple times with different transformations, then instanced rendering would be a great help to improve performance.
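A minimal sketch of what that looks like (only worthwhile when the same geometry/material pair really repeats, e.g. identical rivets or studs, not for unique parts of the bag; geometry, material and scene are placeholders):

```js
import * as THREE from 'three';

// Draw 100 copies of one geometry/material pair in a single draw call.
const count = 100;
const instancedMesh = new THREE.InstancedMesh(geometry, material, count);
const dummy = new THREE.Object3D();
for (let i = 0; i < count; i++) {
  dummy.position.set(Math.random() * 2 - 1, 0, Math.random() * 2 - 1);
  dummy.updateMatrix();
  instancedMesh.setMatrixAt(i, dummy.matrix); // per-instance transform
}
instancedMesh.instanceMatrix.needsUpdate = true;
scene.add(instancedMesh);
```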
I’ll look into texture compression. Is there a tool to easily create a KTX2 texture from a JPEG or PNG (perhaps an online converter)? Or can we generate this kind of texture directly from three.js, from the model displayed on a desktop?
Getting the compression options configured optimally is a bit harder with GPU compressed textures than with image compression formats like JPG, WebP, and so on. There will be more documentation on this soon, but for now I would recommend starting with something like this:
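(A sketch of such a command pair; the exact flags and slot pattern may differ between gltf-transform versions.)

```bash
# UASTC for normal maps (higher quality), ETC1S for all other textures (smaller)
gltf-transform uastc input.glb temp.glb --slots "normalTexture"
gltf-transform etc1s temp.glb output.glb
```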
This compresses normal maps with the Basis high-quality UASTC format, and other textures with the Basis standard-quality ETC1S format. The gltf-transform inspect input.glb command will show the size of each texture, in case you need to debug texture sizes further in this process.
Just to have an idea of what is possible, do you think we can set a limit to be sure that a model can be displayed on a mobile device?
For instance:
A model with 12 meshes
Each mesh has 4 textures of 2048x2048 pixels (roughness, metalness, diffuse and AO)
A total of 400,000 polygons
An HDR for the environment
4 lights
No shadow map
Do you think this can be displayed on a mobile device?
This is just an example, but the idea is to be able to define a limit like that, where we are sure the model can be displayed. Do you have an idea of these values, and some tricks to respect in order to stay within this limit?
The Oculus Quest guidelines are 50-100 draw calls per frame, 50,000-100,000 triangles or vertices per frame. I tend to use those as a rule of thumb for what can be rendered at 60-90fps on mobile devices, but it’s just a guess. If a lower framerate is OK for your use case, you can increase limits accordingly. For WebXR the framerate is especially important.
Each mesh has 4 textures of 2048x2048 pixels (roughness, metalness, diffuse and AO)
You can merge occlusion (red), roughness (green) and metalness (blue) into a single texture and reduce size. Also, KTX textures require 4-8x less GPU memory than PNG or JPEG, so that’s another useful option.
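A sketch of the packed-texture idea on the three.js side (file names are placeholders; three.js samples occlusion from the red channel, roughness from green and metalness from blue, so one image can feed all three slots):

```js
import * as THREE from 'three';

const loader = new THREE.TextureLoader();
const diffuse = loader.load('bag_diffuse.jpg');  // placeholder file names
diffuse.colorSpace = THREE.SRGBColorSpace;       // color data is sRGB (recent three.js API)
const orm = loader.load('bag_orm.jpg');          // R = occlusion, G = roughness, B = metalness

const material = new THREE.MeshStandardMaterial({
  map: diffuse,
  aoMap: orm,        // reads the red channel (older releases need a second UV set for aoMap)
  roughnessMap: orm, // reads the green channel
  metalnessMap: orm, // reads the blue channel
});
```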
I think the only “hard limit” is GPU memory / VRAM (most other limits just decrease framerate), and textures are the easiest way to blow that limit. One 2K PNG or JPEG texture is roughly 2048 * 2048 * 4 * 4/3 = 22MB, including mips, and VRAM available on a low-end mobile device might be 200-500MB.
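Applying that rough formula to the example above (12 meshes × 4 uncompressed 2K textures) shows why this matters; a small sketch of the arithmetic:

```js
// Rough VRAM estimate for uncompressed RGBA textures, including mipmaps (the 4/3 factor).
const bytesPerTexel = 4;                                     // RGBA, 1 byte per channel
const mipFactor = 4 / 3;                                     // a full mip chain adds about 1/3 on top
const perTexture = 2048 * 2048 * bytesPerTexel * mipFactor;  // ~22 MB per 2K texture
const total = 12 * 4 * perTexture;                           // 48 textures
console.log((total / (1024 * 1024)).toFixed(0) + ' MB');     // ≈ 1024 MB, far beyond a 200–500 MB budget
```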
Hi,
Is there a way to convert JPG or PNG to KTX textures online? In my case, I have a Tomcat server with Java servlets, and it would be perfect if there were a way to convert JPG or PNG to KTX with a servlet or something like that.