I would just like to know how you calibrate the performance of your three.js projects for the average GPU and CPU of your website users. I have an old GPU and it’s hard to have an opinion about that. I would especially like to know for projects without a loading screen.
By the way, do you think it will one day be possible to make websites entirely in three.js without a loading screen?
It works the other way around. At first you don’t care about performance: you’re in your creative phase and you do whatever you want. Once you have a clear vision of the final scene, you start the optimization phase, testing and optimizing on your target audience’s devices (high-end/low-end, PC, mobile, emerging markets…). Often you’ll have to compromise between aesthetics and performance; that’s up to you.
If you don’t have any assets (images, 3D models, data…) and only use procedurally generated objects, then yes; otherwise you’ll have to load those assets.
I always aim for 60 fps. Cartoons traditionally used 24 fps, but these days I think that is too low. Also, I think it is important to try to maintain a steady fps. You can try something simple like creating a rotating cube, adding a bunch of delays to vary the speed, and seeing what you think is acceptable. Many of the three.js examples include a frame rate counter (the stats panel) that you can reuse.
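If it helps, here is a minimal sketch of that rotating-cube test with the stats panel used in the official examples; it assumes the standard import map for `three` and `three/addons`:

```js
import * as THREE from 'three';
import Stats from 'three/addons/libs/stats.module.js';

// Basic scene with a single procedurally generated cube - nothing to load.
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshNormalMaterial()
);
scene.add(cube);

const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// The fps counter featured in many of the examples.
const stats = new Stats();
document.body.appendChild(stats.dom);

renderer.setAnimationLoop(() => {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
  stats.update(); // refresh the frame-rate readout each frame
});
```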
But, as Fennec said, you should start your project and see what happens and then optimize. You may find that some types of animations don’t require as high a frame rate to look acceptable.
If you are sure that you are going to have fps issues, it might be worthwhile to focus on learning how to use three.js WebGPU, rather than WebGL. Although three.js WebGPU is still in development, it is getting better with every revision and should be able to perform tasks faster by making better use of your GPU.
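If you want to try it, a minimal sketch might look like the following; it assumes a recent three.js release that ships the `three/webgpu` entry point, and the renderer can fall back to WebGL2 when WebGPU isn’t available:

```js
import * as THREE from 'three/webgpu';

// WebGPURenderer uses WebGPU when available, otherwise its WebGL2 backend.
const renderer = new THREE.WebGPURenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;

// setAnimationLoop waits for the renderer's async initialization before rendering.
renderer.setAnimationLoop(() => renderer.render(scene, camera));
```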
For me it’s a more complicated topic, not just a matter of fps.
For example, let’s say your app renders at around 30 fps. Is that because you are explicitly capping the frame rate to reduce the load, or because the computer (CPU + GPU) simply can’t keep up?
In the first case it can be ‘fine’ (many games are capped at 30 fps on some systems; thinking of you, Zelda BOTW on the Switch…). In the second case it means one of the system’s computing resources is maxed out. Usually, if the GPU is maxed out, the frame rate is fairly consistent, as the operations are pretty much the same each frame. But if the CPU is maxed out, you will get all kinds of stuttering/jankiness in your animations, and the frame rate probably won’t be that steady, since the browser/OS is using that CPU core for many other things (garbage collection from time to time, microtasks, etc.).
Many systems can fire requestAnimationFrame (RAF) at 120 or 144 fps, so it’s easy to go too computationally heavy.
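One common way to do an explicit cap (the first case above) is to throttle the RAF loop yourself; a rough sketch, with 30 fps as an arbitrary target:

```js
const targetFPS = 30;                  // explicit cap, e.g. to reduce load on weak machines
const frameInterval = 1000 / targetFPS;
let lastFrameTime = 0;

function loop(time) {
  requestAnimationFrame(loop);

  // Skip this tick if not enough time has passed since the last rendered frame.
  const elapsed = time - lastFrameTime;
  if (elapsed < frameInterval) return;

  // Carry the remainder over so the cap stays accurate on 120/144 Hz displays.
  lastFrameTime = time - (elapsed % frameInterval);

  // Do your per-frame work here, e.g.:
  // cube.rotation.y += 0.02;
  // renderer.render(scene, camera);
}

requestAnimationFrame(loop);
```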
So what I personally try to do is NEVER max out the client’s resources. You can try to detect the client system’s capabilities with utilities like detect-gpu, and then adapt your app to it.
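For example, something along these lines; the tier thresholds and quality presets below are just placeholder assumptions to adapt to your own scene:

```js
import * as THREE from 'three';
import { getGPUTier } from 'detect-gpu';

// Benchmarks the client GPU against a lookup table; tier ranges from 0 (very slow) to 3 (fast).
const gpuTier = await getGPUTier();

// Hypothetical quality presets derived from the detected tier.
const quality = {
  pixelRatio: gpuTier.tier >= 2 ? Math.min(devicePixelRatio, 2) : 1,
  shadows: gpuTier.tier >= 2,
  antialias: gpuTier.tier >= 1,
};

const renderer = new THREE.WebGLRenderer({ antialias: quality.antialias });
renderer.setPixelRatio(quality.pixelRatio);
renderer.shadowMap.enabled = quality.shadows;
```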
As for GPUs, in my case I have to target the broadest spectrum possible: from shoddy laptops with integrated GPUs (those seem to be very popular in some of the best worldwide companies, lol) to fast APUs like the Macs with M-series chips, or nice modern dedicated GPUs.
Thankfully I don’t target mobile so that makes my life a bit easier.
The apps I work on are consumed by paying customers (SaaS), so in no way can I let any of them have a bad performance experience.