Been working on extending Shade’s TAA (Temporal Anti-Aliasing) implementation to also perform upscaling, and wanted to share some thoughts.
Ground truth (TAA at 100% resolution)
Most modern upscalers, such as Intel’s XeSS, AMD’s FSR, and NVIDIA’s DLSS, upscale by relatively modest amounts.
| Upscaler | Quality Scale Factor | Performance Scale Factor |
|---|---|---|
| NVIDIA DLSS | 1.5x | 2.0x |
| AMD FSR | 1.5x | 2.0x |
| Intel XeSS (1.3+) | 1.7x | 2.3x |
To set a challenge, let’s go for 10x upscaling.
10% resolution without upscaling (108 x 108 pixels)
Here’s what we get if we just apply TAA to our low-res target:
Clearly, AA is important, but it’s still a low-resolution image.
Finally, if we add a jitter-aware upscale filter to it, we get this:
Typically, FSR and XeSS apply a sharpening filter as well. I don’t know about DLSS specifically, but I’ve seen a lot of engines force a sharpening filter on top of DLSS separately as a post-process.
Why? Well, because there is some blurriness.
Now, looking at the image above it’s easy to think that it’s no better than TAA + blur, but it’s actually more detailed. Let me prove it to you.
This
resolves into this
where we can see hints of the true octagon shape starting to show
Or take details on the door, base TAA
upscale
ground truth
Now onto a more sensible 1.5x upscale
And again if we zoom in on the target, here’s the upscale
And ground truth
Conceptually the upscaler is quite simple. We have the resolution at which we do the main render, which I call the internal resolution, and the resolution at which we output to the screen, which for me is the output resolution.
The upscaler works by weighting low-resolution samples more heavily when their jitter brings them into alignment with the output pixel. I use a fairly simple 3x3 Gaussian filter.
The only major change is that if you do TAA only, you don’t need to worry about jitter at all; it works in your favor. When we want to upscale, we need to take jitter into account.
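Here’s a rough sketch of that jitter-aware weighting in Python (the names, the sigma value, and the exact neighbourhood handling are my illustration, not Shade’s actual code):

```python
import math

def gaussian_weight(dx, dy, sigma=0.47):
    """Gaussian falloff over distance, measured in internal-resolution pixels."""
    return math.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))

def sample_weights(out_x, out_y, scale, jitter, sigma=0.47):
    """Normalized weights of the 3x3 internal-res neighbourhood for one output pixel.

    out_x/out_y: output pixel indices; scale: output/internal resolution ratio;
    jitter: this frame's subpixel offset in internal-res pixels.
    """
    # Output pixel center expressed in internal-resolution coordinates.
    ix = (out_x + 0.5) / scale
    iy = (out_y + 0.5) / scale
    cx, cy = int(ix), int(iy)
    weights = {}
    for ny in range(cy - 1, cy + 2):
        for nx in range(cx - 1, cx + 2):
            # Jitter shifts where the sample was actually taken, so samples
            # landing near the output pixel center get more weight.
            sx = nx + 0.5 + jitter[0]
            sy = ny + 0.5 + jitter[1]
            weights[(nx, ny)] = gaussian_weight(sx - ix, sy - iy, sigma)
    total = sum(weights.values())
    return {k: w / total for k, w in weights.items()}
```

With a TAA-style 1x scale and zero jitter the center sample dominates; as the jitter sweeps around, weight shifts toward whichever samples land closest to the output pixel center.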
The other complication is history rectification: my temporal filter is based on clipping history against the incoming neighbourhood color AABB, using variance as a guide, so I relax the variance bounds. If you use a simple clamp, you’d be doing something very similar.
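In pseudocode, the rectification amounts to something like this (shown as the clamp variant, since a clamp behaves very similarly to the clip; `gamma` and `relax` are illustrative parameter names, not from the engine):

```python
def rectify_history(history, mean, stddev, gamma=1.0, relax=1.0):
    """Clamp history into the bounds [mean - g*sigma, mean + g*sigma].

    `gamma` scales the variance-derived bounds; `relax` > 1 widens them
    further when upscaling. Operates per color channel.
    """
    out = []
    for h, m, s in zip(history, mean, stddev):
        half = gamma * relax * s
        out.append(min(max(h, m - half), m + half))
    return tuple(out)
```

The actual filter clips the history sample against the AABB rather than clamping each channel independently, but the relaxation knob works the same way in both variants.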
One more point which may not be obvious at first glance: you need to lengthen your jitter sequence depending on the upscaling factor, otherwise your upscaler will not be able to collect enough samples to work with.
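One common way to do this is a Halton(2,3) jitter sequence whose length grows with the squared upscale factor, since each output pixel needs roughly scale² distinct subpixel positions to be covered (the base length of 8 is a typical TAA default, assumed here rather than taken from Shade):

```python
def halton(index, base):
    """Radical-inverse Halton sample in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_sequence(upscale, base_len=8):
    """Subpixel jitter offsets in (-0.5, 0.5), sized to the upscale factor."""
    n = max(base_len, round(base_len * upscale * upscale))
    return [(halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5) for i in range(n)]
```

At 1x this is the usual 8-sample TAA sequence; at a 2x upscale it stretches to 32 samples, so the accumulation eventually sees every subpixel region of each output pixel.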
What’s the use?
The use of an upscaler from the engineer’s perspective is twofold:
- Provide more headroom during rasterization. That is, make the rendering cheaper, letting you cram in more post-processing and higher-fidelity shaders.
- Dynamic resolution.
The second is a derivative of the first, but for me it was the driver to implement temporal upscaling in the first place.
Shade is a high-end renderer, it wasn’t designed to run well on low-end GPUs, but a lot of people have those low-end GPUs, for various reasons.
I’d like the engine to scale down to that hardware as gracefully as possible, and dynamic resolution seems like the way to achieve that. In a nutshell: track the FPS, and if it’s too low, increase the upscale factor until the frame rate recovers.
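A minimal version of that feedback loop might look like this (the thresholds and step size are made up for illustration; a real controller would average frame times and add hysteresis to avoid oscillating):

```python
def adjust_scale(scale, frame_ms, target_ms=16.7,
                 min_scale=0.1, max_scale=1.0, step=0.05):
    """Nudge the internal-resolution scale toward the frame-time budget."""
    if frame_ms > target_ms * 1.05:      # over budget: render fewer pixels
        scale -= step
    elif frame_ms < target_ms * 0.85:    # comfortably under: restore quality
        scale += step
    return min(max_scale, max(min_scale, scale))
```

Called once per frame, this walks the resolution down until the frame-time target is met, then creeps back up when there’s headroom.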
I happen to have an RDNA2 iGPU on my machine, and it runs Shade poorly. For a specific test scene it was running at 16 FPS; granted, the engine was not designed for this. Dropping the main view resolution down to 0.6 (a 1.6666x upscale) brings the FPS up to 24, which is somewhat usable.
I plan to propagate the upscaling support into the shadows as well, as they currently take up the biggest chunk of GPU time on low-end.















