The problem is that the scene starts to lag a lot when interacting with it. I want to solve this problem without merging meshes (the usual way to reduce draw calls), because those individual meshes need to move/drag independently.
I want to know which parameters smooth rendering depends on.
What I know is that reducing the number of draw calls boosts GPU performance. Similarly, what other parameters can I tune to make sure my scene lags as little as possible? Sorry if I don’t make much sense, I am a beginner here.
Definitely look at the poly counts on the models. A lot of people grab models that were made for 3D printing and are really high-poly; those are not really suitable for any kind of real-time use. I have found that for most models I can use the Decimate modifier in Blender to reduce them to 10% or even 5% of their original poly count, with almost no visible loss of quality.
Also, look at the size of your textures. If they are huge, like 2048×2048, see if you can get away with resizing them to 1024×1024, 512×512, etc. The memory and resources used by a texture grow quadratically with its resolution: each doubling of the dimensions quadruples the memory.
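To make the savings concrete, here is a rough back-of-envelope sketch. The numbers assume an uncompressed RGBA8 texture with mipmaps (the common default in three.js); actual GPU allocation can differ by driver:

```javascript
// Approximate GPU memory for an uncompressed RGBA8 texture.
// A full mipmap chain adds roughly one third on top of the base level.
function textureBytes(size) {
  return Math.round(size * size * 4 * (4 / 3));
}

console.log(textureBytes(2048)); // ~22 MB
console.log(textureBytes(1024)); // ~5.6 MB
console.log(textureBytes(512));  // ~1.4 MB
```

Halving the resolution cuts the memory to a quarter, which is why stepping 2048 → 1024 → 512 pays off so quickly.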
You might give Blender a try since it is free, or look online for something like this: https://3dless.com. You should know, though, that some tools don’t deal with UV coordinates and will mess them up. I’m not aware of a free tool that handles retopology automatically and reliably, but if the poly count is very high there won’t be a really noticeable difference with a regular poly reduction, as long as it handles UVs too.
I recommend exporting as glTF with separate texture and buffer files. I don’t recommend using JPEG over PNG; I do recommend reducing texture size as much as is reasonable and using the lowest level of compression you can - this will reduce decoding time.
My guess is you have too many materials, switching between materials causes the “lag” - make sure there’s only 1 material, even if textures are different.
If you can re-UV your scene to use a single texture, it would reduce the number of texture switches down to 0; this will net you a nice performance boost. Generally speaking, if you have 115 textures in your scene, that’s too many, at least for WebGL.
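Re-UVing to a single texture usually means packing the individual textures into one atlas (in Blender, or by drawing them into a canvas) and then rescaling each mesh’s UVs into its tile. The UV math for a simple n×n grid atlas looks like this (the tile coordinates are illustrative assumptions, not an existing three.js API):

```javascript
// Map a UV coordinate from [0,1]² into one tile of an n×n atlas grid.
// tileX / tileY select the tile's column and row (0-based, from the
// bottom-left, matching WebGL's UV origin).
function remapUV(u, v, tileX, tileY, n) {
  return [(tileX + u) / n, (tileY + v) / n];
}

// A mesh assigned to tile (1, 0) of a 2×2 atlas:
remapUV(0.5, 0.5, 1, 0, 2); // → [0.75, 0.25]
```

In practice you would apply this to every value in a geometry’s `uv` buffer attribute after assigning each object a tile.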
About geometry - I wouldn’t stress too much over it; this poly count is entirely reasonable for PC, though for mobile it’s too high. If you can re-UV your scene to a single texture, you can also join all geometry, this will reduce the number of draw calls to 1 - that’s about the best you can do overall.
Currently I am using 2048×2048 textures; I am planning to reduce them to 1024×1024 or 512×512. I have seen that this decreases the texture size quite significantly.
using lowest level of compression you can, this will reduce decoding time.
I am using pngquant on top of that: `pngquant -s 9 -f --ext .png --quality 70-95 image.png`
with the given parameters; the default for `-s` is 3 and I am using 9 here (note that `-s` is pngquant’s speed setting, so higher values trade quality for speed rather than compressing harder).
make sure there’s only 1 material, even if textures are different.
Thanks for this
if you can re-uv your scene to use a single texture - it would reduce number of texture switches down to 0, this will net you a nice performance boost
How can you do that in three.js? I have not found any API that merges textures into one single texture.
you can also join all geometry, this will reduce number of draw calls to 1
I can’t do this, as I have to make sure that each one of the objects in the scene moves/rotates independently.
The last point also brings me to another question: I have been using `BufferGeometryUtils.mergeBufferGeometries(geoArray, false)` to merge geometries for individual objects in the scene - is it possible to merge textures in three.js?
Heya, it’s a bit confusing: if you use multiple materials but they are configured sufficiently similarly, under the hood you end up with only one “Program” (a linked vertex+fragment shader pair), so the GL context doesn’t need to switch programs, only load different uniforms for rendering. I should have been clearer on that, sorry.
Sure, so JPEG is a lossy compression format; it compresses less well than PNG (lossless) in a lot of cases, and it typically doesn’t compress further using standard compression techniques like zip. So what you get is a drop in quality - you get some typical JPEG compression artifacts - but you don’t actually win much space, if any.
I would say one thing for JPEG - it’s super fast to decode, due to its block-compression nature. Still, if you use PNG with a low compression factor, you will see relatively fast decoding times too.
No, that is what I assumed you meant. Just wanted to clarify, because I was a little surprised. I would expect that texture switching multiple times per frame would be much more expensive than having multiple materials. Do you have any kind of benchmark to back this up?
JPEG is a lossy compression format, it compresses less well than PNG (lossless) in a lot of cases, and it typically doesn’t compress further using standard compression techniques like zip
Hmm, that really depends on the image. Very low-frequency images (without much detail, large patches of single colors), tend to compress better in PNG, but other images tend to show much higher compression levels in JPG. Zip compression is not something I’ve considered before, but I’m inclined to think that the level of extra compression you get by zipping PNGs is usually less than you would get by using JPG and no zip compression.
In any case, not saying you’re wrong here, and I’m sure you’ve tested this well for your game. But you are definitely going against the conventional wisdom here.