One scene on one computer slows down differently in different browsers

Hello, friends. I have a three.js/WebGL scene performance issue. On the same laptop, the same scene performs differently in different browsers: for example, it does not slow down in Firefox but slows down in Chrome. On some other devices it is the opposite — it slows down in Firefox but works well in Chrome.
At the same time, the scene code is optimized as much as possible and all the usual performance tips are followed.
What can you say? If you have encountered similar cases, please write about them.

I experience exactly the same problem in one of my projects.
It has basically been finished for two years and has been on ice, because on my laptop it slows down only in Chrome, while on my smartphone it slows down only in Firefox. On my friend's high-end PC it slows down only in Chrome. On my work PC it does not slow down at all. I am talking about a difference of 60 fps vs. 3-10 fps.
It uses tons of precalculated tween animations and cubes to form words, and has real-time lighting.
I know that it is heavy on the computer, but it should at least be consistent.


I also had the same problem: on my newest PC any three.js model would be slow, while on older ones it would run fine. It turned out that at some point in the past I had disabled the “use hardware acceleration” option in my Chrome browser (Settings → System → Use hardware acceleration when available). After re-enabling it, everything works fine.
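If you suspect a browser has silently fallen back to software rendering (which is what disabling hardware acceleration causes), you can query the unmasked renderer string and check it against known software rasterizers. A hedged sketch — the `WEBGL_debug_renderer_info` extension and the exact renderer strings (SwiftShader, llvmpipe, etc.) vary by browser and platform, and the substring list below is illustrative, not exhaustive:

```javascript
// Heuristic: does a WebGL renderer string look like a software rasterizer?
// These substrings cover common software fallbacks (Chrome's SwiftShader,
// Mesa's llvmpipe, Windows' basic render driver).
function looksLikeSoftwareRenderer(rendererString) {
  return /swiftshader|llvmpipe|software|microsoft basic render/i.test(rendererString);
}

// Browser-only usage sketch (requires a canvas and WebGL support):
function getUnmaskedRenderer() {
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return null;
  // WEBGL_debug_renderer_info is widely supported but not guaranteed.
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  return ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : null;
}
```

If `looksLikeSoftwareRenderer(getUnmaskedRenderer())` is true, hardware acceleration is likely off (or blocklisted) in that browser.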


This was it, thanks a lot, man! I don't remember why I deactivated it, but it happens a lot: you set things up and forget about them years later.

Welcome to the joys of 3d development! Everyone has a different CPU/GPU combination.

It’s sometimes funny to see someone spend six months optimizing one little corner of an application, only to find out that it doesn’t work the same on an iPhone 6, and basically give up.

Learn to use the Chrome profiler to identify bottlenecks in your javascript and rendering code.

And learn to make peace with the fact that some people won’t be able to run your stuff… and learn to communicate this to people higher up who don’t understand that some things are literally Not Possible in a reasonable time.

Some of the main bottlenecks are:

Drawcall count: The usable number of drawcalls can vary from ~100 to ~10,000, depending on the CPU/GPU configuration.

Texture bound: You’re using large textures on your models. Some better GPUs have zero problem with this; some smaller GPUs literally die.

Overdraw/fillrate bound: You’re drawing many, many things on top of each other, or into the same small area of the screen. three.js attempts to sort opaque objects by depth and render them front to back, so things rendered first have a chance of occluding things rendered after them, eliminating some of the shader work that would otherwise be done. If something you are doing (instancing, for instance) circumvents this depth sort, you may become fillrate bound.

Shader bound: This can be a subset of overdraw bound: you may not be doing much overdraw, but the shader/material combo you’re using is very expensive.

Vertex/transform bound: You’re trying to push more than a million triangles, even though everything else is optimal. This works fine on most modern GPUs but falls over on some lower-end hardware.

JavaScript bound: You’re doing too much JavaScript per frame. Fast CPUs have less of an issue with this, but most CPUs WILL fall over at some point, because you are also sharing the CPU with every other crappy JavaScript application the user is running in other tabs.

These are just a few of the things that you sometimes have little to no control over in terms of what the end user's hardware can handle. There isn’t really a “minimum spec” you can target. You can only try to make reasonable decisions/tradeoffs so that you run well on a certain subset of hardware that you select. There will always be someone who thinks your app is “broken” because it won’t run on their Raspberry Pi, or the weird homemade phone they built out of old TV parts.

You eventually learn some rules of thumb that help though:

Drawcalls: Try to keep this around 1000 max-ish. When possible, disable matrixAutoUpdate on meshes that aren’t expected to move a lot. This can save a lot of CPU time and increase your drawcall headroom.

Textures: Try to avoid texture sizes larger than 2048x2048, or be prepared to have smaller fallback textures, or a way to resize them at runtime, if you have to cover a large swath of hardware configs. There is an actual max texture size supported by your GPU that you can query, but even then there are other limits, like how much GPU memory is available at a given time.
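As a sketch of the runtime-resize idea: in the browser you can query the hard limit with `gl.getParameter(gl.MAX_TEXTURE_SIZE)`, then clamp texture dimensions to your own budget while preserving the aspect ratio. The helper name and the 2048 budget below are illustrative, not from the thread:

```javascript
// Clamp texture dimensions to a budget, preserving aspect ratio.
// 'budget' would typically be something like
// Math.min(2048, gl.getParameter(gl.MAX_TEXTURE_SIZE)).
function capTextureSize(width, height, budget) {
  const largest = Math.max(width, height);
  if (largest <= budget) return { width, height };
  const scale = budget / largest;
  return {
    width: Math.max(1, Math.round(width * scale)),
    height: Math.max(1, Math.round(height * scale)),
  };
}
```

You would then redraw the source image at the capped size on a canvas before uploading it as a texture.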

Overdraw bound: Avoid disabling depthTest/depthWrite. When you have to use transparency, try to use alpha-tested transparency (alphaTest: 0.5 or similar); this lets the GPU potentially avoid some blending.
Things like additive blending look great, but often rely on lots of overdraw to accumulate the final output you want.
Avoid drawing lots of full-screen quads or effects that cover large areas of the screen.

Shader bound: Avoid more complex materials when you can: MeshStandardMaterial is faster than MeshPhysicalMaterial, and MeshBasicMaterial is the fastest of them all. Custom shaders can be even faster/simpler, at the cost of some features.

Vertex/transform bound:
OPTIMIZE your glTF models. Use tools like gltf-transform and meshopt to really crush your models. These tools optimize many aspects of your models:

  • They use reduced-precision formats for vertices. Cutting the vertex size in half can almost double the number of triangles you can render, as can encoding them as bytes or other smaller data types.
  • They merge geometry and flatten complex hierarchies, reducing drawcalls and CPU transform work.
  • They can apply texture compression, via KTX2 or similar. These compressed texture formats are smaller on disk and faster to download, and often faster to render, since some formats stay compressed on the GPU, meaning reduced memory pressure and faster access times, even though decoding is more complex. Modern GPUs may benefit more; older GPUs may end up even slower. Tradeoffs, again.
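The gltf-transform CLI covers most of the optimization steps described above. A hedged sketch of typical usage — the package name is real, but exact commands and flags vary by CLI version, so check `gltf-transform --help` before relying on these:

```shell
# Install the CLI (assumes Node.js/npm are available)
npm install --global @gltf-transform/cli

# One-shot optimization pass: dedup, join/flatten hierarchies, weld,
# prune unused data, and more
gltf-transform optimize input.glb output.glb

# Or apply individual steps, e.g. meshopt geometry compression
gltf-transform meshopt input.glb compressed.glb
```

Compare file size and drawcall/triangle counts before and after to see what each step buys you on your particular models.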

All of these are just rules of thumb, and you can often push one of them higher at the expense of the others, depending on your needs. So don’t take these as absolutes, just a general guide of things to consider when confronted with poor performance.

This is only scratching the surface of the factors involved in making something that performs well, but over time you get a sense of what works and what doesn’t, and you learn how to better communicate these limitations to the stakeholders.

My main suggestion is learn to use the debugger, and various debugging tools. You will literally destroy your brain trying to speculate about what COULD be going wrong… only to find that you’re nowhere close to what is actually going on.
I’ve been doing 3d for >30 years and I’m still only about 50/50 in guessing what the bottlenecks are when looking at an application from the outside. Profiling and testing is where the rubber meets the road.


I had this problem when using delta from THREE.Clock as the speed multiplier for my animations;
I removed delta and used a fixed value,

using this to maintain 30 FPS, or to skip updates when the tab is throttled:

var deltafps = 0;
var timestep = 1 / 30; // target update rate: 30 steps per second
function animate() {
    requestAnimationFrame(animate);
    deltafps += clock.getDelta();
    // Run fixed-size update steps until we've caught up with real time.
    while (deltafps >= timestep) {
        if (deltafps < 0.3) { // anti-throttling: skip huge deltas (background tab)
            //CODE — advance animations by `timestep` here
        }
        deltafps -= timestep;
    }
    renderer.render(scene, camera);
}
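The accumulator logic in the snippet above can be pulled out into a pure helper, which makes it easy to unit-test in isolation. The function name is my own; the fixed timestep and the 0.3 s throttle cutoff mirror the snippet:

```javascript
// Given the accumulated elapsed time, a fixed timestep, and a throttle
// cutoff, return how many update steps to run and the leftover time.
// Steps are skipped (but time is still drained) while the accumulator is
// at or above the cutoff, e.g. after the tab was throttled in the background.
function stepAccumulator(accumulated, timestep, cutoff) {
  let steps = 0;
  while (accumulated >= timestep) {
    if (accumulated < cutoff) steps += 1;
    accumulated -= timestep;
  }
  return { steps, accumulated };
}
```

Each frame you would call this with `clock.getDelta()` added to the previous leftover, run `steps` animation updates of `timestep` seconds each, and carry `accumulated` into the next frame.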

In the last days of 2024, on Chrome 131.0.6778.205 (Official Build) (arm64), I saw that when my laptop's battery dipped to 9% and it was not connected to power, my scene was throttled down to 30 FPS.

Once I plugged the power cable in, the same scene shot up to 120 FPS in the same version of Chrome.