Changing pixelRatio based on FPS: good or bad idea?

So, using the logic from stats.js, I’ve managed to read the FPS value every second in the render loop

and

Running the site on older/weaker phones, clearcoat material performance is around 27-30 FPS at max pixelRatio, but if the ratio is set to 1, the FPS is stable at around 57 and it performs flawlessly.
A newer 1080p phone handled a pixelRatio of 3 with no problem.

Is automating pixelRatio to change according to FPS a good idea?

The flaws I encountered are:

  • FPS drops when something is changed in the renderer (e.g. adding a new model), which might interfere with the benchmark
  • random black frames when switching pixel ratios
  • loop flaw: when FPS is below 30, the pixel ratio is set to 1; FPS then rises above 55, which triggers the condition to set the pixel ratio back to window.devicePixelRatio; this again causes the low-FPS condition, and it’s stuck in a loop

I’m pretty sure model-viewer does something like this, but not sure of the details of how they compute the target pixel ratio.

I would recommend against it. It’s hard to know the exact reason why FPS is low. Dropping resolution (which is what pixelRatio would achieve) helps when you are “fill-rate” bound, that is, when the pixel shader is the bottleneck. If your scene is vertex-heavy, it wouldn’t help as much; if your scene has a ton of objects and you’re bound by the CPU’s API cost, dropping resolution would do absolutely nothing.

Ultimately, to drop the resolution in a smart way - you would have to do so with the provision that you expect FPS to improve, so you’d need to do something like this:

if recently reduced resolution {
    if FPS has improved {
        record improvement
    } else {
        restore old resolution
        lock resolution reduction attempts for some time
    }
}

if (FPS is below `target`) and (resolution reduction is not locked) {
    reduce resolution
}

I would suggest dropping resolution in small increments, and collecting at least 3 frames to check whether the FPS is actually higher or not; better, collect 20 or more. This is to make sure you are not affected by statistical anomalies.
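The pseudocode above could be sketched as a plain JavaScript state machine. Everything here is my own naming, not a three.js API; `applyRatio` is a stand-in for whatever applies the value, e.g. `(r) => renderer.setPixelRatio(r)`, and all the numeric defaults are placeholders to tune:

```javascript
// Trial-and-revert pixel ratio controller (a sketch, not a drop-in solution).
// Feed it one FPS sample per frame; it averages over a window, tries a
// reduction when FPS is low, and reverts + locks if the reduction didn't help.
function createRatioController({
  targetFps = 50,
  windowSize = 20,        // frames to average before judging
  step = 0.25,            // how much to reduce the ratio per attempt
  minRatio = 0.5,
  maxRatio = 2,
  lockFrames = 300,       // how long to suspend attempts after a failed trial
  applyRatio = () => {},  // e.g. (r) => renderer.setPixelRatio(r)
} = {}) {
  let ratio = maxRatio;
  let samples = [];
  let trial = null;       // { previousRatio, previousFps } while a trial runs
  let lockedFor = 0;

  return function onFrame(fps) {
    samples.push(fps);
    if (lockedFor > 0) lockedFor--;
    if (samples.length < windowSize) return ratio;

    const avg = samples.reduce((a, b) => a + b, 0) / samples.length;
    samples = [];

    if (trial) {
      if (avg > trial.previousFps + 1) {
        trial = null;                  // the reduction helped: keep it
      } else {
        ratio = trial.previousRatio;   // no improvement: revert and lock
        applyRatio(ratio);
        trial = null;
        lockedFor = lockFrames;
      }
    } else if (avg < targetFps && lockedFor === 0 && ratio - step >= minRatio) {
      trial = { previousRatio: ratio, previousFps: avg };
      ratio -= step;
      applyRatio(ratio);
    }
    return ratio;
  };
}
```

In the render loop you would call `onFrame(currentFps)` once per frame; the lock prevents exactly the "stuck in a loop" behaviour from the original post, because a reduction that didn't improve FPS is undone and not retried for a while.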

Also keep in mind that changing resolution is not always free, that is - you might be using a number of render targets that would require resizing, which in turn triggers memory re-allocation.

Another approach is to run a small benchmark before your application starts to figure out the right performance parameters, like rendering a full-screen quad 10 times in a row and checking how long that takes. Based on that, you could adjust the resolution.


@donmccurdy is this it? Not sure what’s happening here.

@Usnul
I’m using the last 5 frames / 5 seconds as the interval between updates; there are some black frames when the switch happens, and sometimes the updates keep looping.

In terms of vertex count, I’m making the models myself and will aim to keep them under 1-2 MB, and I’m trying to leverage the PBR textures more instead of high-poly geometry.

But if I can simulate the worst case with a benchmark, by showing a plane covering the entire screen like during a loading scene, that would be great. How can I do that?

Current flawed logic:

function adjustPixelRatio(fps) {
    if (fps > 60) { // note: on a 60 Hz display this branch will rarely, if ever, fire
        if (renderer.getPixelRatio() !== window.devicePixelRatio) {
            renderer.setPixelRatio(window.devicePixelRatio)
        }

    } else if (fps < 30 && fps > 50) { // BUG: impossible condition, can never be true
        if (renderer.getPixelRatio() !== window.devicePixelRatio / 2) {
            renderer.setPixelRatio(window.devicePixelRatio / 2)
        }

    } else if (fps < 30) {
        if (renderer.getPixelRatio() !== 0.8) {
            renderer.setPixelRatio(0.8)
        }

    }
}
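For comparison, here is a corrected sketch of that branching. The threshold and step values are placeholders to tune; the key fixes are removing the impossible `fps < 30 && fps > 50` condition and leaving a dead zone between the step-up and step-down thresholds, so the ratio doesn’t flip back and forth on every sample:

```javascript
// Hysteresis thresholds: only raise the ratio when FPS is comfortably high,
// only lower it when FPS is clearly low, and do nothing in between.
function pickPixelRatio(fps, currentRatio, maxRatio = window.devicePixelRatio) {
  if (fps >= 55 && currentRatio < maxRatio) {
    return Math.min(currentRatio + 0.25, maxRatio); // step up gently
  }
  if (fps < 30 && currentRatio > 1) {
    return Math.max(currentRatio - 0.5, 1);         // step down faster
  }
  return currentRatio; // 30-55 FPS: dead zone, keep the current ratio
}

// In the render loop, roughly:
//   renderer.setPixelRatio(pickPixelRatio(fps, renderer.getPixelRatio()));
```

This alone can still oscillate if a single step up tanks the FPS; the revert-and-lock approach suggested earlier in the thread is the more robust fix.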

On the site, the mug model has clearcoat, so it performs the worst.

I don’t have code handy, but in a nutshell: you create a plane geometry (it will be centered on 0), then create a mesh with a material that somewhat represents the materials you’ll be using. Then render that using an OrthographicCamera for a fixed number of frames. Time the execution of that loop, et voilà, you have a number that can serve as a basis for your performance metric. Say 10 frames render in 100 ms (100 ms / 10 = 10 ms per frame); you can then expect the average FPS to be at most 100 frames per second.

For a code snippet on how to render a full-screen quad there are a lot of examples out there, three.js examples have such code when it comes to post-processing.
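A rough sketch of that benchmark. The timing helper is generic plain JavaScript; the three.js side is shown as comments since it assumes an existing `renderer` and should use a material that matches your actual scene (the clearcoat material below is just an example):

```javascript
// Generic timing helper: run `renderOnce` a fixed number of times and
// return the average milliseconds per frame.
function timeFrames(renderOnce, frames = 10, now = () => performance.now()) {
  const start = now();
  for (let i = 0; i < frames; i++) renderOnce();
  return (now() - start) / frames;
}

// Sketch of the three.js side (assumes THREE and a WebGLRenderer exist):
//
//   const scene = new THREE.Scene();
//   const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
//   const quad = new THREE.Mesh(
//     new THREE.PlaneGeometry(2, 2), // fills the ortho frustum edge to edge
//     new THREE.MeshPhysicalMaterial({ clearcoat: 1 }) // match your real materials
//   );
//   scene.add(quad);
//   renderer.render(scene, camera); // warm-up frame: shader compilation is slow
//   const msPerFrame = timeFrames(() => renderer.render(scene, camera), 10);
//   // e.g. 10 ms per frame -> at most ~100 FPS; pick a preset accordingly.
```

Discarding the first (warm-up) frame matters, because shader compilation on the first render can dwarf the steady-state cost you are actually trying to measure.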


That as well as the method that updates scaleFactor, it looks like:


Is this the correct way to calculate time per frame, and should I use performance.now or clock.getElapsedTime()?

In the animate loop


const startTime = clock.getElapsedTime()

// do render stuff

const endTime = clock.getElapsedTime()

const timePerFrame = endTime - startTime
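For reference: three.js’s `Clock` is itself backed by `performance.now()` where available, so the main difference is units (seconds vs milliseconds). A bigger caveat is that timing only the render call measures CPU submission time; WebGL work is asynchronous, so GPU-bound frames won’t show up there. Measuring the time *between* frames, as stats.js does, is usually more representative. A sketch of that (all names here are my own):

```javascript
// Rolling FPS estimate from frame timestamps (a sketch).
// Call tick() once per rendered frame; it returns the average FPS over the
// last `windowSize` frames, or null until enough frames have been seen.
function createFpsMeter(windowSize = 20, now = () => performance.now()) {
  const times = [];
  return function tick() {
    times.push(now());
    if (times.length <= windowSize) return null;
    times.shift();
    const elapsedMs = times[times.length - 1] - times[0];
    return ((times.length - 1) * 1000) / elapsedMs;
  };
}
```

Usage: create the meter once, call `const fps = meter()` at the top of the animate loop, and ignore `null` results until the window fills up.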


I did this before in an app for the same reason, measuring performance over a longer window as suggested. However, doing this continuously won’t be very helpful or provide a good experience if your app’s demands fluctuate heavily, leading to constant changes from crisp to blurry.

You need to take these measurements at the beginning, ideally with an average-cost setup or benchmark. Extrapolating the results to the highest-cost scenario isn’t really reliable, since various things can cause anywhere from no difference to very heavy performance differences depending on hardware and drivers.

In my case the demands didn’t change much, so I measure at the beginning and snap to a resolution that helps. Additionally, I also check the available GPU and browser info, aside from some bare specs; for instance, you can check whether software rendering is going on, which with Intel graphics has been the major enemy.

Finally, you can also simply offer the user an option to switch manually, or do both: attempt to detect bad performance, and if it’s clearly very bad, switch automatically first (as the UI would become hard to use); otherwise, if it’s unclear or borderline, suggest switching to a faster setting. I did this for the Intel case, as in many cases the performance even with a simple scene was horrific on seriously old office machines, while on some it was okay-ish.


I would suggest using detect-gpu instead of adapting to FPS while the app is running. This gives you more flexibility, since you can run it before loading the app, and for low-performance GPUs you can reduce the quality of models, shaders, textures, and so on, rather than just reducing the resolution.

The main caveat is that the library may not detect recent GPUs well, especially if you stop updating your app to new versions. For example, soon after M1 chips were released it detected these as low performance. This has been fixed in the latest version but could happen again with new GPUs in the future. On the other hand, I don’t think any approach will be perfect and this is likely to be more accurate and flexible than custom FPS detecting code.
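A sketch of how the detected tier could drive quality settings. The `getGPUTier()` import and call reflect detect-gpu’s documented API; the preset table and `presetForTier` name are entirely made up for illustration:

```javascript
// Map a detect-gpu tier (0-3) to a hypothetical quality preset.
const PRESETS = {
  0: { pixelRatio: 1,   textureSize: 512,  clearcoat: false },
  1: { pixelRatio: 1,   textureSize: 1024, clearcoat: false },
  2: { pixelRatio: 1.5, textureSize: 1024, clearcoat: true },
  3: { pixelRatio: 2,   textureSize: 2048, clearcoat: true },
};

function presetForTier(tier) {
  return PRESETS[tier] ?? PRESETS[0]; // unknown tier: fall back to lowest
}

// Usage with detect-gpu (async):
//
//   import { getGPUTier } from 'detect-gpu';
//   const { tier } = await getGPUTier();
//   const preset = presetForTier(tier);
//   renderer.setPixelRatio(Math.min(preset.pixelRatio, window.devicePixelRatio));
```

Falling back to the lowest preset on an unrecognized tier is a deliberate choice: as noted above, new GPUs may be misdetected, and a too-conservative default degrades gracefully while a too-aggressive one doesn’t.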


Personally, I am cautious of relying on GPU info to set performance options. There are many other considerations, such as the current workload of the machine and the resolution, as well as variations among cards using the same GPU chip, such as:

  • cards that have lower/higher clock speed
  • varied amount of RAM
  • different RAM technology (faster/slower)
  • the PCIe interface available on the motherboard, as well as what the CPU supports

There’s also the browser’s WebGL implementation; Google’s ANGLE is one of them, and even that runs differently depending on the OS.

I mean, it’s better than nothing for sure, but you might be off by a lot.


If there is room for it in your application, just make a little options screen (optionally with a visible FPS counter) and let the user decide. Relying solely on render scale isn’t enough; disabling some post-processing effects is also a viable option to offer the user.


I’m from an Android-dominated market, so detecting the GPU will not be ideal.

On my mom’s $90 phone, the FPS is a solid 55-60 until a model with clearcoat is visible; then it drops to 18-20, and reducing pixelRatio to 1 fixes this.

On my dad’s $200 Samsung the FPS drop is still there but a lot smaller, and again reducing pixelRatio fixes it.

And on my $400 phone there’s absolutely no problem.

So here’s my plan:

While the assets download, show a benchmarking loading screen with a clearcoat-and-transmission plane that covers the entire screen/camera frustum, and measure FPS to determine a graphics preset. Based on this I can set the pixel ratio and load lower-quality 512x512 textures instead of 1K or 2K, or in the worst case use MeshBasicMaterial instead of standard/physical and use a baked texture/lightmap instead of the PBR textures.

To make things simpler, there will be graphics presets just like a game: low, medium, high, ultra.
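The benchmark-to-preset step can be a simple threshold function (cutoff values below are placeholders to tune per device class; the name `presetFromBenchmark` is my own):

```javascript
// Pick a preset name from the FPS measured during the loading-screen benchmark.
function presetFromBenchmark(fps) {
  if (fps >= 55) return 'ultra';
  if (fps >= 45) return 'high';
  if (fps >= 30) return 'medium';
  return 'low'; // e.g. pixelRatio 1, 512px textures, MeshBasicMaterial + lightmaps
}
```

Each preset name would then index into a table of concrete settings (pixel ratio, texture size, material type), so there is a single place to adjust when testing on real devices.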

And yeah, I’ll give the user options to set this manually, but for the sake of a good user experience, I’m setting up the site in such a way that modern hardware from any performance range can have a pleasant experience.
Also, on weaker devices, when the FPS drops the entire phone’s UI slows down, so I’d like to avoid that.

Does detect-gpu return the same result for all the phones? I thought it would give more fine-grained results :thinking:

It does work, but it’s not completely reliable:

The weakest phone showed 19 FPS with tier 1.
My phone with a 90 Hz screen showed 59 FPS with tier 2,
and my laptop with a 120 Hz screen showed 60 FPS with tier 3.
Apple desktop devices all show up as “Apple GPU” without any FPS and with tier 1 (I guess it’s a privacy or Metal issue);
Apple phones are 60 FPS and tier 3.

But my phone runs three.js better than my laptop, as the laptop has dual GPUs and the browser is using the weaker integrated one.


Interesting, thanks for taking the time to test all that. I did think the Apple issue was resolved in the latest detect-gpu release, but maybe not? :thinking: