It’s an interesting discussion to be had: frame rate versus graphical fidelity versus clarity.
It seems that in recent years we’re heading for frame rate and fidelity at the cost of clarity, if DLSS, XeSS and FSR are any indication.
Many modern games ship with DLSS on by default, upscaling from roughly 66% internal resolution (per axis).
Plenty of engines have been using dynamic resolution as well. That is, when we detect that a frame takes more than, say, 16ms, we say:
“hey, we’re missing 60 FPS frame timings here, let’s drop the resolution until we’re back on budget”
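To make that concrete, here’s a minimal sketch of what such a dynamic-resolution controller might look like. All the names and constants here are illustrative (a 60 FPS / ~16.6ms budget, a 50% resolution floor), not taken from any particular engine:

```python
# Hypothetical dynamic-resolution controller, called once per frame.
# Constants are illustrative assumptions, not from any real engine.
TARGET_MS = 1000.0 / 60.0        # ~16.6 ms budget for a 60 FPS target
MIN_SCALE, MAX_SCALE = 0.5, 1.0  # never render below 50% resolution
STEP = 0.05

def adjust_resolution_scale(scale, last_frame_ms):
    """Drop internal resolution when over budget, creep back up when under."""
    if last_frame_ms > TARGET_MS:
        scale -= STEP        # over budget: render fewer pixels next frame
    elif last_frame_ms < TARGET_MS * 0.85:
        scale += STEP / 2    # comfortably under budget: recover quality slowly
    return min(MAX_SCALE, max(MIN_SCALE, scale))
```

The asymmetry (drop fast, recover slowly) is a common choice: a missed frame is immediately visible as a stutter, while a few frames at slightly lower resolution usually aren’t.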
The matter of blurring / smearing is also kind of complicated. I remember when FSR 1 came out, a lot of people were championing it in the gaming sphere, even going as far as saying that “it’s better than DLSS”, when in truth it was just a spatial upscaler (Lanczos-based) with a sharpening filter on top.
Yet, for many people, being given a blurry image that was passed through a sharpening filter seems to be acceptable.
Allow me to demonstrate. Here’s a picture.
Let’s scale it down by 50%.

If we just stretch it back out, we get this.
However, if we apply a bicubic filter (similar to Lanczos), we get this instead.
And now the sharpening filter.
And side-by-side for comparison (there’s a vertical line through the middle). Left is the original and right is the “upscaled” version.
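The same pipeline can be shown on a toy 1-D signal, which makes the numbers easy to follow. This is a deliberately simplified sketch - real upscalers work on 2-D images with Lanczos-style kernels, whereas here it’s a box downscale, linear upscale and a basic unsharp mask:

```python
def downscale_2x(samples):
    """Average neighbouring pairs (a box filter), halving resolution."""
    return [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples), 2)]

def upscale_2x_linear(samples):
    """Smoother reconstruction: insert midpoints between samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out += [a, (a + b) / 2]
    out += [samples[-1]] * 2
    return out

def sharpen(samples, amount=0.5):
    """Unsharp mask: boost each sample against its local average."""
    out = [samples[0]]
    for i in range(1, len(samples) - 1):
        local_avg = (samples[i - 1] + samples[i + 1]) / 2
        out.append(samples[i] + amount * (samples[i] - local_avg))
    out.append(samples[-1])
    return out

signal = [0, 0, 0, 0, 10, 10, 10, 10]  # a hard "edge"
small  = downscale_2x(signal)          # [0, 0, 10, 10] - the edge survives...
blurry = upscale_2x_linear(small)      # [0, 0, 0, 5, 10, 10, 10, 10] - ...but softened
crisp  = sharpen(blurry)               # overshoots to -1.25 / 11.25 at the edge
```

The sharpened result *looks* punchier, but notice what happened: the edge got steepened by overshooting on both sides (the classic sharpening “halo”). No lost detail was actually recovered.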
And so - most people don’t seem to care.
Beyond this, we do actually have better options that integrate temporal information as well. The whole point of models like DLSS is, pardon the pun, to “blur” the line between native resolution and upscaled output.
Then there’s user preference. Some people swear by frame rate; they will turn down all quality settings in the game for the sake of that FPS. Others will go the opposite direction and turn on as many quality settings as possible, cranking them way up as long as the experience is still “playable”.
Most, however, don’t seem to care either way: as long as the game runs “well enough” and looks “good enough”, they will not even think about this trade-off. And so, the trade-off is generally made by the developer.
When it comes to console and mobile platforms, users often don’t even get a choice; they are stuck with whatever trade-off the developer made.
Personally, I’m undecided: I can see the value of high FPS, and I can see the value of visual fidelity. As a developer, trading the way your game looks for extra FPS is almost always a bad trade, because graphics help sell games - so developers typically go in that direction and rely on upscalers and dynamic resolution to bridge the performance gap.
As for my own choices: I’m currently working on pretty much this exact tech, because as things stand, a lot of users on lower-end devices are excluded from using my software. At that point it’s a no-brainer - the user who can’t run your game doesn’t have any choices or preferences.
If the game runs but sacrifices some aspects of quality, at least the user can decide whether to accept it or move on. That said, I do think that upscalers that rely on temporal accumulation, like Unreal’s TAAU and TSR as well as DLSS & co., are a good thing. They get a bad rep because oftentimes the developer will make a bad product, lie to the customer about the minimum spec, and just blame the upscaler for not being able to produce a crisp 4K image from a postage stamp.
Indeed, even temporal accumulation fails miserably when the frame rate is too low to begin with, because there’s too much change between frames (too long a delay). When a pixel has moved by a couple of pixels from the last frame to the current one, temporal reprojection will be close to perfect and you will get an amazing result; but if that pixel has moved by 50 or even 100 pixels, you’re going to get a proportionately worse result. You’ll also get more disocclusion events, where there’s simply no temporal history to rely on.
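The core of that accumulation step can be sketched in a few lines. This is a hedged, single-pixel illustration only - real implementations reproject using motion vectors and use far more sophisticated history validation; the motion threshold and blend weight here are made-up values:

```python
def accumulate(history, current, motion_px, max_motion_px=8.0, blend=0.1):
    """Blend reprojected history with the current frame's sample.

    Falls back to the current sample alone when the pixel moved too far,
    treating it like a disocclusion: the history is no longer trustworthy.
    (Threshold and blend weight are illustrative assumptions.)
    """
    if motion_px > max_motion_px:
        return current  # history rejected: shows up as noise/aliasing
    # Exponential moving average: mostly history, a little new information
    return (1.0 - blend) * history + blend * current
```

You can see the failure mode directly in the structure: small motion keeps feeding the stable, accumulated history forward, while large motion (i.e. low frame rate, fast camera movement) keeps hitting the rejection path, and the “temporal” upscaler degrades into a noisy spatial one.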
So it all comes back to this: you need the software to run relatively well to begin with.