Yeah, sure. You’re right, texture switching is the larger of the two problems; I’m quite convinced of that too. Having many materials still incurs a penalty, though, since you are switching to a different shader program for each one. There is also the unpleasant matter of compile and link time.
I’m a bit confused, so please bear with me, I’ll try to lay out what’s in my head.
HTTP compression. Most servers are configured to use gzip compression; even this very page you are viewing is sent to the browser compressed, provided you are not using some weird and obscure browser. This means that any assets you send can be further compressed over the wire.
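For illustration, here’s roughly what enabling that looks like on a Node/Express server (this assumes the `express` and `compression` npm packages; nginx and Apache have equivalent one-line settings):

```js
const express = require('express');
const compression = require('compression'); // gzip middleware

const app = express();
app.use(compression()); // compress every compressible response on the fly
app.use(express.static('public')); // textures, models, etc.
app.listen(8080);
```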
JPEG introduces artifacts into your image.
My main argument is not so much about the size. My argument is this: you will be using the exact same amount of RAM on the GPU to display the texture, whether it was delivered as a JPEG or a PNG.
So, my advice is to pay the extra traffic for the better-quality image, as in terms of runtime performance there will be no difference.
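To make that concrete, here’s a rough sketch of the arithmetic (RGBA8 is assumed, and the 4/3 mipmap factor is an approximation):

```js
// A JPEG and a PNG of the same image both decode to the same uncompressed
// bitmap before upload, so GPU memory depends only on dimensions and
// channel count, never on the file format on disk.
function gpuTextureBytes(width, height, bytesPerPixel = 4, mipmaps = true) {
  const base = width * height * bytesPerPixel;
  return mipmaps ? Math.floor((base * 4) / 3) : base; // mip chain adds ~1/3
}

console.log(gpuTextureBytes(1024, 1024)); // ~5.6 MB, whether JPEG or PNG
```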
I’m not trying to say you’re “wrong”, @looeee. I just think that your advice to use JPEG has no bearing on actual runtime performance. I believe that if you wanted to compress your textures to the absolute minimum, JPEG is not a bad fit, as long as you’re willing to sacrifice some quality for it. We are just arguing for different outcomes, you and I.
I’ve tried compressing flower.png further using a couple of techniques (gzip, zip, bzip2), and the smallest I’ve managed to get it is still >46 KB. So, for this flower texture, you can expect about an 80% reduction in the amount of data you transmit over the network, without reducing visual quality, by choosing JPG over PNG.
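For reference, a quick way to reproduce that check in Node.js (assuming the file is named flower.png, as above):

```js
// How much further does gzip shrink the PNG? PNG data is already
// deflate-compressed internally, so the gain over the wire is small.
const { gzipSync } = require('zlib');
const { readFileSync } = require('fs');

const png = readFileSync('flower.png');
const gz = gzipSync(png);
console.log(`raw: ${png.length} bytes, gzipped: ${gz.length} bytes`);
```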
You’re quite right. I’m talking about overall best practices for textures, which include download time, texture decode time, and runtime performance. Decode time is probably tiny for both formats, and runtime performance will be identical. Download time is then the key in deciding which format to use, and in the case of this flower texture at least, JPG seems to be the better choice overall.
I find JPEG artifacts quite unpleasant, especially around UV seams, even minor ones. Since it’s only the storage format, and the actual GPU data will be a plain bitmap either way, I prefer not to take that quality gamble.
By all means - this is just my preference. Your statement about being able to get good quality from quite high JPEG compression is entirely true, and I admire JPEG for it. If, say, DXT were supported uniformly across all devices on WebGL, I would use it without a doubt. As it stands, those savings in space do not translate to the GPU, so I prefer lossless compression with the associated trade-offs.
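For anyone curious, you can probe for DXT (S3TC) support at runtime; the `WEBGL_compressed_texture_s3tc` extension name is real, the rest is a minimal sketch:

```js
// Check whether the current browser/GPU exposes S3TC (DXT) textures.
const gl = document.createElement('canvas').getContext('webgl');
const s3tc =
  gl.getExtension('WEBGL_compressed_texture_s3tc') ||
  gl.getExtension('WEBKIT_WEBGL_compressed_texture_s3tc');
console.log(s3tc ? 'DXT supported' : 'DXT not supported');
```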
Do all of these objects need to display at the same time?
Maybe not? Because the “world” doesn’t make much sense as it is - it’s a house, with a furniture warehouse’s worth of pieces out in the open, on the lawn around the house.
Could it perhaps be that you don’t have to show all of these furniture pieces at the same time?
If not, then a sort of “culling” could be your friend here. If draw calls are a bottleneck, you can cut them drastically by showing only a few meshes at a time.
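A minimal sketch of one such approach, distance-based culling (it assumes you already have a `camera` and an array `furniture` of meshes; the threshold is made up):

```js
const CULL_DISTANCE = 50; // hypothetical, tune to your world's scale

// Hide anything far from the camera; an invisible mesh costs no draw call.
function cullFurniture() {
  for (const mesh of furniture) {
    mesh.visible = mesh.position.distanceTo(camera.position) < CULL_DISTANCE;
  }
}
// Call cullFurniture() once per frame, before renderer.render().
```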
If you do need to show all the furniture around the house for some reason, then you can ask: am I always going to see this from afar, from a bird’s-eye view?
If so, then maybe the poly count is too high - you’re rendering too many triangles that all end up in one pixel, wasting cycles.
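If that turns out to be the case, three.js’s built-in THREE.LOD can swap in cheaper meshes with distance. A rough sketch (the geometry variables are hypothetical):

```js
// Show the detailed mesh up close and a decimated one from afar.
const lod = new THREE.LOD();
lod.addLevel(new THREE.Mesh(highPolyGeometry, material), 0);  // 0+ units away
lod.addLevel(new THREE.Mesh(lowPolyGeometry, material), 30);  // 30+ units away
scene.add(lod);
// WebGLRenderer updates the LOD selection automatically each frame.
```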
Texture lookups and materials? Oh my!
All this kind of stuff brings us to:
For the most part, you need to learn graphics programming. The question you posted is super broad, so really the only answer is to do a lot of research and move from newb to amateur to proficient, and so on, until you become a pro. This doesn’t even have to mean that you hold all the knowledge in your head - just know where to look and what to look for. In time, you’ll be able to dig deeper and understand more.
Here I’d say it’s a combination of things. First you can override everything with MeshBasicMaterial to rule out lighting calculations and texture lookups. Then you can try replacing geometries (or nodes with geometries) to see if it’s a draw call bottleneck. Then you can fiddle with (auto) updating, to see if it’s a CPU bottleneck and whether you’re doing stuff you don’t have to. Then you can decimate the meshes and see if that helps.
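The first step is one line in three.js, via scene.overrideMaterial (a real feature; the rest of the snippet is just a sketch):

```js
// Force every mesh to render flat and unlit; if the frame rate jumps,
// lighting and texture lookups in the shaders were the bottleneck.
scene.overrideMaterial = new THREE.MeshBasicMaterial({ color: 0x808080 });

// Set it back to null to restore the original materials:
// scene.overrideMaterial = null;
```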
Do you know if three.js has any optimization to account for this? I think it didn’t, but I also think some work was done recently. I think it rebinds buffers a lot when it doesn’t have to.
Sorry, I don’t remember it too well. I think it’s pretty good at caching the programs, but probably not so great at binding data, like you said. I’m not really so deep into the low-level shader management.