Changing pixelRatio based on fps, good or bad idea?

so using logic from “stats.js” i’ve managed to read the fps value every second in the render loop

and

running the site on older/weaker phones, fps with the clearcoat material is around 27-30 at max pixelRatio, but if the ratio is set to 1 the fps is stable at around 57 and it performs flawlessly
A newer 1080p phone was handling a pixel ratio of 3 with no problem

is automating pixelRatio to change according to fps a good idea?

the flaws i encountered are

  • fps drops when something changes in the renderer (e.g. adding a new model), which might interfere with the benchmark
  • random black frames when switching pixel ratios
  • loop flaw: when fps is below 30 the pixel ratio is set to 1, then fps rises above 55, which triggers the condition to set the pixel ratio back to window.devicePixelRatio; that causes the low-fps condition again, so it gets stuck in a loop
1 Like

I’m pretty sure model-viewer does something like this, but not sure of the details of how they compute the target pixel ratio.

I would recommend against it. It’s hard to know the exact reason why FPS is low. Dropping resolution (which is what pixelRatio would achieve) helps if you’re “fill-rate” bound, that is, if the pixel shader is the bottleneck. If your scene is vertex-heavy it wouldn’t help as much, and if your scene has a ton of objects and you’re bound by the CPU’s API cost, dropping resolution would do absolutely nothing.

Ultimately, to drop the resolution in a smart way - you would have to do so with the provision that you expect FPS to improve, so you’d need to do something like this:

if recently reduced resolution {
   if FPS has improved {
      record improvement
   } else {
      restore old resolution
      lock resolution reduction attempts for some time
   }
}

if (FPS is below `target`) and (resolution reduction is not locked) {
   reduce resolution
}

I would suggest dropping resolution in small increments, and collecting at least 3 frames to check whether the FPS is actually higher or not; better to collect 20 or more. This is to make sure you are not affected by statistical anomalies.
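Very roughly, putting those pieces together might look like this (a sketch only, not library code — `renderer` is your WebGLRenderer, and the thresholds, step size and sample counts are made-up numbers):

const TARGET_FPS = 50;       // what we consider "good enough"
const SAMPLE_FRAMES = 20;    // frames collected before making a decision
const LOCK_DURATION = 5000;  // ms to block further reductions after a failed attempt
const STEP = 0.25;           // drop resolution in small increments

let samples = [];
let lastReduction = null;    // { previousRatio, fpsBefore } while a reduction is being evaluated
let lockedUntil = 0;

function onFrame(deltaMs) {
  samples.push(1000 / deltaMs);
  if (samples.length < SAMPLE_FRAMES) return;

  const fps = samples.reduce((a, b) => a + b, 0) / samples.length;
  samples = [];

  if (lastReduction) {
    if (fps > lastReduction.fpsBefore) {
      lastReduction = null;                                // improvement: keep the new ratio
    } else {
      renderer.setPixelRatio(lastReduction.previousRatio); // no improvement: restore
      lockedUntil = performance.now() + LOCK_DURATION;     // and lock attempts for a while
      lastReduction = null;
    }
    return;
  }

  if (fps < TARGET_FPS && performance.now() >= lockedUntil) {
    const current = renderer.getPixelRatio();
    const next = Math.max(0.5, current - STEP);
    if (next < current) {
      lastReduction = { previousRatio: current, fpsBefore: fps };
      renderer.setPixelRatio(next);
    }
  }
}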

Also keep in mind that changing resolution is not always free, that is - you might be using a number of render targets that would require resizing, which in turn triggers memory re-allocation.

Another approach is to run a small benchmark before your application starts to figure out the right performance parameters. Like rendering a full-screen quad 10 times in a row and checking how long that takes. Based on that - you could adjust resolution.

5 Likes

@donmccurdy is this it? Not sure what’s happening here

@Usnul
i’m using the last 5 frames / 5 seconds as the interval between updates; there are some black frames when the switch happens and sometimes the updates keep looping

In terms of vertex count, i’m making the models myself & will aim to keep them under 1-2mb, and i’m trying to leverage PBR textures more instead of high-poly geometry

but if i can simulate the worst case with a benchmark, by showing a plane covering the entire screen like during a loading scene, that would be great. how can i do that?

current flawed logic

function adjustPixelRatio(fps) {
    if (fps > 60) {
        if (renderer.getPixelRatio() !== window.devicePixelRatio) {
            renderer.setPixelRatio(window.devicePixelRatio)
        }

    } else if (fps < 30 && fps > 50) {
        // note: fps can't be below 30 and above 50 at the same time,
        // so this branch never runs
        if (renderer.getPixelRatio() !== window.devicePixelRatio / 2) {
            renderer.setPixelRatio(window.devicePixelRatio / 2)
        }

    } else if (fps < 30) {
        if (renderer.getPixelRatio() !== 0.8) {
            renderer.setPixelRatio(0.8)
        }

    }
    // nothing prevents bouncing: dropping the ratio pushes fps above 60,
    // which raises the ratio again, which drops fps, and so on
}

on the site, the mug model has clearcoat so it performs the worst

I don’t have code handy, but in a nutshell, you create a plane geometry, it will be centered on 0, then create a mesh with a material that somewhat represents the materials you’ll be using. Then render that using an OrthographicCamera for a fixed number of frames. Time the execution of that loop - et voilà, you have a number that can serve as a basis for your performance metric. Say 10 frames render in 100ms - you can expect average FPS to be at most 100 frames per second (100ms / 10 = 10ms per frame).

For a code snippet on how to render a full-screen quad there are a lot of examples out there, three.js examples have such code when it comes to post-processing.
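A minimal sketch of such a warm-up benchmark (purely illustrative — it assumes you already have a `renderer`, and the clearcoat material is just a stand-in for whatever roughly matches your scene):

import * as THREE from 'three';

function benchmarkMsPerFrame(renderer, frames = 10) {
  const scene = new THREE.Scene();
  // full-screen quad: ortho camera spanning -1..1 with a 2x2 plane at z = 0
  const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, -1, 1);
  const material = new THREE.MeshPhysicalMaterial({ clearcoat: 1 }); // stand-in material
  scene.add(new THREE.Mesh(new THREE.PlaneGeometry(2, 2), material));

  const light = new THREE.DirectionalLight();
  light.position.set(0, 0, 1);
  scene.add(light);

  const start = performance.now();
  for (let i = 0; i < frames; i++) {
    renderer.render(scene, camera);
  }
  // gl.finish() tries to wait for the GPU; a pixel readback is more reliable,
  // but this keeps the sketch short
  renderer.getContext().finish();

  return (performance.now() - start) / frames; // e.g. ~10ms/frame -> at most ~100 FPS
}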

1 Like

That as well as the method that updates scaleFactor, it looks like:

1 Like

is this the correct way to calculate time per frame?
& should i use ‘performance.now’ or clock.getElapsedTime?

In the animate loop


const startTime = clock.getElapsedTime()

// do render stuff

const endTime = clock.getElapsedTime()

// note: this measures only the time spent inside the render call (in seconds),
// not the full frame-to-frame time
const timePerFrame = endTime - startTime
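For reference, the same idea with performance.now looks roughly like this (a sketch; three.js’s Clock uses performance.now under the hood where available, so either should give similar numbers — what matters more is whether you time just the render call or the whole frame-to-frame interval):

let lastFrameStart = performance.now();

function animate() {
  requestAnimationFrame(animate);

  const frameStart = performance.now();
  const frameToFrameMs = frameStart - lastFrameStart; // full frame time, browser work included
  lastFrameStart = frameStart;

  renderer.render(scene, camera); // "do render stuff"

  const renderMs = performance.now() - frameStart; // only the (CPU-side) render call
}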


I did this before in an app for the same reason, and like suggested, by measuring performance for a little longer. However, doing this continuously won’t be too helpful or provide a good experience if your app’s demands fluctuate heavily, leading to changes from crisp to blurry.

You need to take these measurements at the beginning, ideally with an average-cost setup or benchmark, or by extrapolating the results to the highest-cost scenario, which isn’t really reliable, since various things can cause anywhere from no difference to absolutely heavy performance differences depending on hardware and drivers.

In my case the demands didn’t change much, so i measure at the beginning and snap to a resolution that helps. Additionally i also check the GPU and browser info that is available, aside from some bare specs; for instance you can check if there is software rendering going on with Intel graphics, which has been the major enemy.
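One way to do that kind of check (a sketch, assuming you already have a three.js `renderer`; the renderer strings vary wildly between drivers, so treat the matching as a heuristic):

function getRendererString(gl) {
  // WEBGL_debug_renderer_info exposes the unmasked GPU/driver string in most browsers
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  return ext
    ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
    : gl.getParameter(gl.RENDERER);
}

const gl = renderer.getContext();
const rendererString = getRendererString(gl);
// e.g. "SwiftShader" or "llvmpipe" usually indicates software rendering
const isSoftware = /swiftshader|llvmpipe|software/i.test(rendererString);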

Finally you can also simply offer an option for the user to switch manually, or do both: attempt to detect bad performance, and if it’s clearly very bad, switch automatically first, as the ui would become hard to use; otherwise, if it’s unclear, suggest switching to a faster setting. I did this for the Intel case, as in many cases the performance even with a simple scene was horrific on seriously old office machines, while on some it was ok-ish.

2 Likes

I would suggest using Detect-GPU instead of adapting to FPS while the app is running. This gives you more flexibility since you can run it before loading the app and for low performance GPUs you can reduce the quality of models, shaders, textures, and so on, rather than just reducing the resolution.

The main caveat is that the library may not detect recent GPUs well, especially if you stop updating your app to new versions. For example, soon after M1 chips were released it detected these as low performance. This has been fixed in the latest version but could happen again with new GPUs in the future. On the other hand, I don’t think any approach will be perfect and this is likely to be more accurate and flexible than custom FPS detecting code.
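For reference, basic usage looks something like this (the tier-to-settings mapping below is made up for illustration):

import { getGPUTier } from 'detect-gpu';

const gpuTier = await getGPUTier();
// gpuTier is roughly of the shape { tier: 0-3, isMobile, fps, gpu }

// hypothetical mapping of tier to pixel ratio
const pixelRatio =
  gpuTier.tier >= 3 ? window.devicePixelRatio :
  gpuTier.tier === 2 ? Math.min(window.devicePixelRatio, 2) :
  1;

renderer.setPixelRatio(pixelRatio);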

2 Likes

Personally I am cautious of relying on GPU info to set performance options. There are many other considerations, such as the current workload of the machine, the resolution, as well as variations within the same GPU chip, such as:

  • cards that have lower/higher clock speed
  • varied amount of RAM
  • different RAM technology (faster/slower)
  • PCIe interface available on the motherboard as well as what the CPU supports

There’s also the browser’s WebGL implementation - Google’s ANGLE is one of them - and even that runs differently depending on the OS.

I mean, it’s better than nothing for sure, but you might be off by a lot.

7 Likes

If there is room for it in your application, just make a little options screen (optionally with a visible fps counter) and let the user decide. Relying solely on render scale isn’t enough; disabling some post-processing is also a viable option to expose to the user.

1 Like

I’m from an android-dominated market, so detecting the gpu will not be ideal

on my mom’s 90$ phone, the fps is a solid 55-60
until a model with clearcoat is visible, then it goes to 18-20; reducing pixelRatio to 1 fixes this

on my dad’s 200$ samsung the fps drop is still there but a lot smaller & again reducing pixelRatio fixes this

and on my 400$ phone there’s absolutely no problem

so here’s my plan :

While the assets download, show a benchmarking loading screen with a plane with clearcoat and transmission enabled which covers the entire screen/camera frustum, and measure fps to determine a graphics preset. Based on this i can set the pixel ratio and load lower-quality 512x512 textures instead of 1k or 2k, or in the worst case use MeshBasicMaterial instead of standard/physical and use a bakedTexture/lightMap instead of the PBR textures

to make things simpler, there will be graphics presets just like in a game: low, medium, high, ultra

and yeah i’ll give the user the option to set this manually, but for the sake of a good user experience i’m setting up the site in such a way that modern hardware from any performance range can have a pleasant experience.
also, on weaker devices when the fps drops the entire phone’s ui slows down, so i would like to avoid that
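A sketch of how presets like that could be wired up (every concrete number here is a placeholder, and `measuredFps` is whatever the benchmark loading screen reports):

const PRESETS = {
  low:    { pixelRatio: 1,                                      textureSize: 512,  pbr: false },
  medium: { pixelRatio: Math.min(window.devicePixelRatio, 1.5), textureSize: 1024, pbr: true  },
  high:   { pixelRatio: Math.min(window.devicePixelRatio, 2),   textureSize: 1024, pbr: true  },
  ultra:  { pixelRatio: window.devicePixelRatio,                textureSize: 2048, pbr: true  },
};

function pickPreset(fps) {
  if (fps < 25) return 'low';
  if (fps < 40) return 'medium';
  if (fps < 55) return 'high';
  return 'ultra';
}

const preset = PRESETS[pickPreset(measuredFps)];
renderer.setPixelRatio(preset.pixelRatio);
// then load 512/1k/2k textures and choose MeshBasicMaterial vs MeshPhysicalMaterial
// based on preset.textureSize / preset.pbr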

Does detect-gpu return the same result for all the phones? I thought it would give more fine-grained results :thinking:

It does work, but it’s not completely reliable

weakest phone showed 19fps with tier 1
my 90hz screen phone showed 59fps with tier 2
and my laptop with 120hz screen showed 60fps with tier 3
and apple desktop devices all show up as Apple GPU without any fps and tier 1 (i guess it’s a privacy or metal issue)
apple phones are 60fps and tier 3

but my phone runs three.js better than my laptop, as the laptop has dual gpus and the browser is using the weaker integrated gpu

1 Like

Interesting, thanks for taking the time to test all that. I did think the Apple issue was resolved in the latest detect-gpu release, but maybe not? :thinking:

react-three/drei has a performance monitor which can be used for this task

Answering from the end-users perspective (that is, ignoring actually good technical answers in here and assuming it actually helps performance), it depends on the use-case.

From a video gamer’s perspective, absolutely under no circumstances would I tolerate this. I’d much rather have 27FPS than have my visibility suddenly smear. It’d make me Alt+F4 and never return. Mind you, this is regardless of the actual game genre. This is related to why motion blur is such a controversial topic. If I’m stuck at 15FPS, then it’s either my responsibility as the gamer to get a better machine, or if I have good specs for this era, then it’s the dev’s responsibility to figure out why performance is poor.

From the graphics editing application perspective it might(?) make sense. Blender has a good use-case - if you want to see a cycles render without doing an actual render, you get something as bad as a smear in the form of the RGB noise it generates. It’s acceptable because 1) I can turn it off when done, and 2) it’s better than waiting 5 hours for a full render just to realize I messed up a small detail.

What some games do is allow the player to lower values as a graphics option. Call it “Render Resolution” or something, and offer options as percentages. If they need it, it’s there. For people sensitive to smear, it’s not forced.

It’s an interesting discussion to be had. Frame rate versus graphical fidelity versus clarity.

It appears that in recent years we’re headed for frame rate and fidelity at the cost of clarity, if DLSS, XeSS and FSR are any indication.

Most modern games come with DLSS on by default, running upscaling at about 66% internal resolution.

Plenty of engines have been using dynamic resolution as well, that is - when we detect that a frame takes more than, say 16ms, we say

“hey, we’re missing 60FPS frame timings here, let’s drop resolution until we’re back on budget”

The matter of blurring / smearing is also kind of complicated. I remember when FSR 1 came out, a lot of people were championing it in the gaming sphere, even going as far as saying that “it’s better than DLSS”, when in truth it was just a spatial upscaler with a Lanczos (sharpening) filter on top.

Yet, for many people being given a blurry image that was passed through a sharpening filter seems to be acceptable.

Allow me to demonstrate, here’s a picture

Let’s scale it down by 50%

If we just stretch it out again, we get this

However, if we apply a bicubic filter (like Lanczos) we get this instead

And now the sharpening filter

And side-by-side for comparison (there’s a vertical line through the middle). Left is the original and right is the “upscaled” version

And so - most people don’t seem to care :woman_shrugging:


Beyond this, we do actually have better options that integrate temporally as well. The whole point of models like DLSS is to, pardon the pun, “blur” the line between native resolution and upscaling.


Then there’s user preference. Some people swear by the frame rate; they will turn down all quality settings in the game for the sake of that FPS. Some will go the opposite direction and turn on as many quality settings as possible and crank them way up as long as the experience is still “playable”.

Most, however, don’t seem to care either way; as long as the game runs “well enough” and looks “good enough”, they will not even think about this trade-off. And so, the trade-off is generally made by the developer.

When it comes to console and mobile platforms, users often don’t even get a choice; they are stuck with the trade-off that the developer made.


Personally I’m undecided; I can see the value of high FPS, and I can see the value of visual fidelity. As a developer, if you trade the way your game looks for extra FPS it’s almost always a bad trade, because graphics help sell games, and so developers typically go in that direction and rely on upscalers and dynamic resolution to bridge the performance gap.

As for my own choices - I’m currently working on pretty much this exact tech, because as things stand - a lot of users on lower-end devices are excluded from using my software. At that point it’s a no-brainer, the user who can’t run your game doesn’t have any choices or preferences.

If the game runs, but sacrifices some aspects of quality - at least the user can decide if they want to accept it or move on. That said - I do think that upscalers that rely on temporal accumulation, like Unreal’s TAAU and TSR as well as DLSS & Co., are a good thing. They get a bad rep because oftentimes the developer will make a bad product, lie to the customer about the minimum spec and just blame the upscaler for not being able to produce a crisp 4k image from a postage stamp.

Indeed, even temporal accumulation fails miserably when the framerate is too low to begin with, because there’s too much change between frames (too long of a delay). When a pixel has moved from the last frame to the current frame by a couple of pixels, temporal reprojection will be close to perfect and you will get an amazing result, but if that pixel has moved by 50 or even 100 pixels you’re going to get a proportionately worse result. You’ll get more disocclusion events as well, where there’s simply no temporal history to rely on.

So it all comes back to this: you need the software to run relatively well to begin with.

2 Likes

Especially now that we have WebGPU, it becomes important to optimize your models. For example, if a Blender model has 100 parts, it will generate a glb with 100 meshes. My understanding is that the GPU is called separately for each mesh - one draw call per mesh. So, before exporting as a glb, you need to merge the model into as few meshes as possible. At least, that is what my experience indicates.

I was having a problem with the frame rate dropping from 60 to 40 fps when flying over a group of model ships. When I looked, I found that, on one of the ships, each of the several guns had several parts - probably 50 parts in total. I joined them all into a single part, and the frame rate went back to 60fps.

To be fair, this is applicable to WebGL as well.
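If you end up doing the merge at runtime rather than in Blender, three.js ships a helper for it. A sketch (assuming a `group` whose meshes all share one `sharedMaterial`; in recent three.js versions the function is mergeGeometries, older releases call it mergeBufferGeometries):

import * as THREE from 'three';
import * as BufferGeometryUtils from 'three/addons/utils/BufferGeometryUtils.js';

// collect geometries in world space (they must have matching attribute sets)
group.updateWorldMatrix(true, true);
const geometries = [];
group.traverse((child) => {
  if (child.isMesh) {
    const geometry = child.geometry.clone();
    geometry.applyMatrix4(child.matrixWorld);
    geometries.push(geometry);
  }
});

// one merged geometry -> one mesh -> one draw call for this material
const merged = BufferGeometryUtils.mergeGeometries(geometries);
const mergedMesh = new THREE.Mesh(merged, sharedMaterial);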

1 Like