The "Death" of Native Resolution: How AMD's FSR 2.0 and Intel's XeSS Will Drive PC Gaming Forward
It has taken several months, but we are finally seeing GPU prices fall. While video cards are still selling above MSRP, the prices are not as ludicrous as they once were. But for the value-minded folks, even if prices return to MSRP, some GPUs will still be overpriced. AMD's RX 6500 XT is one egregious example. No, seriously, how is my ancient RX 580 trading blows with it?
The good news is that there are ways to get more performance without buying a new GPU at rip-off prices. Back in June 2021, AMD launched its spatial upscaler, FidelityFX Super Resolution (FSR). Compared to traditional linear upscaling, FSR 1.0 provides better image quality for extremely little overhead. It was easy for developers to implement because it does not require temporal data, training algorithms, or dedicated hardware the way Nvidia's DLSS does. Best of all, it is platform-agnostic, so it can be used on Nvidia cards, too. (Nvidia eventually came up with its own solution, Nvidia Image Scaling, which works similarly to FSR.)
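For reference, the "traditional linear upscaling" mentioned above is essentially the bilinear filter sketched below: every output pixel is a weighted average of its four nearest input pixels. This is a generic illustration of the baseline, not AMD's algorithm; FSR 1.0 consumes the same single finished frame but replaces the plain averaging with an edge-adaptive upscaling pass plus a sharpening pass.

```python
def bilinear_upscale(img, out_w, out_h):
    """Plain bilinear upscaling of a 2D grayscale image (list of rows).
    Each output pixel is a weighted average of the 4 nearest input pixels."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for y in range(out_h):
        # Map the output pixel center back into input coordinates (clamped to the edges).
        fy = min(in_h - 1.0, max(0.0, (y + 0.5) * in_h / out_h - 0.5))
        y0 = int(fy)
        y1 = min(in_h - 1, y0 + 1)
        wy = fy - y0
        row = []
        for x in range(out_w):
            fx = min(in_w - 1.0, max(0.0, (x + 0.5) * in_w / out_w - 0.5))
            x0 = int(fx)
            x1 = min(in_w - 1, x0 + 1)
            wx = fx - x0
            top = img[y0][x0] * (1 - wx) + img[y0][x1] * wx
            bottom = img[y1][x0] * (1 - wx) + img[y1][x1] * wx
            row.append(top * (1 - wy) + bottom * wy)
        out.append(row)
    return out

# A 2x2 checkerboard blown up to 4x4: hard edges become soft gradients,
# which is exactly the blur that an edge-aware spatial upscaler tries to avoid.
for row in bilinear_upscale([[0.0, 1.0], [1.0, 0.0]], 4, 4):
    print([round(v, 2) for v in row])
```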
That said, spatial upscaling has its limitations: no matter how good the algorithm is, image quality will always be worse than native resolution. It's not so noticeable if you use Ultra Quality at 4K or 1440p, but it gets ugly when you go down to Balanced or Performance mode. FSR also does not play well with resolutions of 1080p and below because at that point, the amount of information to upscale from is minuscule. On a 1280 x 800 device like the Steam Deck, it may not be that apparent thanks to the small screen, but it's a different story if you use FSR on a 24-inch 1080p monitor.
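To put numbers on that: FSR 1.0's quality modes render at a fixed fraction of the output resolution (per AMD's documentation, Ultra Quality scales each axis by 1.3x, Quality by 1.5x, Balanced by 1.7x, and Performance by 2.0x). The quick calculation below, with function names of my own choosing, shows how few source pixels are left once the output resolution drops to 1080p or the Steam Deck's 1280 x 800.

```python
# FSR 1.0 per-axis scale factors, as documented by AMD.
FSR1_MODES = {"Ultra Quality": 1.3, "Quality": 1.5, "Balanced": 1.7, "Performance": 2.0}

def render_resolution(out_w, out_h, scale):
    """Resolution the game actually renders at before FSR upscales it."""
    return round(out_w / scale), round(out_h / scale)

for out_w, out_h, label in [(3840, 2160, "4K"), (1920, 1080, "1080p"), (1280, 800, "Steam Deck")]:
    for mode, scale in FSR1_MODES.items():
        rw, rh = render_resolution(out_w, out_h, scale)
        share = 100 / (scale * scale)  # % of the output pixels that actually get rendered
        print(f"{label:10} {mode:13} -> {rw} x {rh} ({share:.0f}% of the output pixels)")
```

At 4K, Performance mode still renders a full 1920 x 1080 image; at 1080p, the same mode works from a 960 x 540 frame, which is why the results fall apart so quickly at lower output resolutions.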
This is where AMD's FSR 2.0 and Intel's XeSS come in. Both are temporal upscalers like DLSS, but they offer key advantages that DLSS lacks. FSR 2.0 and XeSS will be open-source and platform-agnostic, meaning you can use either upscaling solution on any supported GPU, even a competitor's. XeSS will be accelerated on Intel's upcoming Arc GPUs via their XMX units, but other hardware can still run it, albeit with more overhead, through the DP4a fallback path. FSR 2.0 has the additional benefit of not requiring machine learning, so all developers need to do is integrate it into their game engines.
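The practical difference shows up in what a game has to hand over every frame. The sketch below is a deliberate simplification (the exact requirements differ between FSR 2.0, XeSS, and DLSS and across SDK versions, and the names here are mine, not any vendor's API), but the gist is that temporal upscalers need the engine to supply motion vectors, depth, and camera jitter information, while a spatial upscaler only needs the finished color image — which is why FSR 1.0 could even be injected from outside the game.

```python
# Rough per-frame inputs from the engine, heavily simplified -- exact
# requirements differ between FSR 2.0, XeSS, and DLSS and across SDK versions.
REQUIRED_INPUTS = {
    "spatial (FSR 1.0 / NIS)":            {"color"},
    "temporal (FSR 2.0 / XeSS / DLSS 2)": {"color", "depth", "motion_vectors", "jitter_offsets"},
}

def integration_cost(kind: str) -> list[str]:
    """Everything the engine must wire up beyond the finished color frame."""
    return sorted(REQUIRED_INPUTS[kind] - {"color"})

for kind in REQUIRED_INPUTS:
    extras = integration_cost(kind)
    print(f"{kind}: {', '.join(extras) if extras else 'just the color frame'}")

# Temporal upscalers also accumulate a history of previous frames internally,
# which is where both their quality advantage and their ghosting risk come from.
```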
Back in August 2021, I wrote about whether XeSS could "kill" DLSS. I concluded that it largely depends on how well it is implemented. Until recently, Intel had only shown examples in rather static environments, and the results looked very promising. However, those demos did not quell my concerns about ghosting, a common weakness of temporal upscaling. This is why DLSS 1.0 was criticized for its vaseline-like appearance when it first launched.
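For context on why ghosting happens at all: a temporal upscaler reprojects last frame's result into the current frame, and when that history is stale (an object has moved, so the old sample no longer matches) it leaves a trail unless it gets rejected. A common mitigation in TAA-style pipelines is to clamp the history color to the range of the current frame's local neighborhood. The snippet below is a generic illustration of that idea, not how XeSS, DLSS, or FSR 2.0 specifically handle it.

```python
def clamp_history(history_rgb, neighborhood_rgb):
    """Clamp a reprojected history color to the min/max box of the current
    frame's 3x3 neighborhood. A stale history sample (e.g., a moving object
    that has left the pixel) gets pulled toward current-frame colors, which
    is what keeps trails and ghosts from persisting frame after frame."""
    lo = tuple(min(c[i] for c in neighborhood_rgb) for i in range(3))
    hi = tuple(max(c[i] for c in neighborhood_rgb) for i in range(3))
    return tuple(min(max(history_rgb[i], lo[i]), hi[i]) for i in range(3))

# Example: a bright red "ghost" left over a now-grey background gets clamped away.
neighborhood = [(0.5, 0.5, 0.5)] * 9              # current frame: flat grey patch
print(clamp_history((1.0, 0.1, 0.1), neighborhood))  # -> (0.5, 0.5, 0.5)
```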
Just yesterday, Intel showed how XeSS runs in a game called DOLMEN. While my eyes are not exactly trained to detect ghosting, I did not see much smearing or ghosting during the more active scenes. Granted, I would have liked Intel to show off XeSS in a more "typical" gameplay scenario where the player is dealing with multiple enemies instead of just one. Regardless, the results look impressive.
As for FSR 2.0, the still images definitely look far better than 1.0's. Comparing their respective Performance modes, version 2.0 blows 1.0 out of the water. You can see for yourself in the 4K Deathloop screenshots AMD provided. One obvious improvement I could spot was that the cables did not have the staircase effect under 2.0. Textures also looked crisper and less garbled compared to 1.0. Faraway fine lines were preserved under 2.0, whereas they would disappear under 1.0. AMD did show how FSR 2.0 runs in motion, and I did not spot any ghosting, but it should be noted that the player was walking through a quiet setting.
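The reason a temporal upscaler can recover thin cables and faraway lines that a spatial one loses is accumulation: the camera is nudged by a different sub-pixel offset every frame (temporal upscalers typically use a low-discrepancy pattern such as a Halton sequence for this), so over several frames the algorithm effectively sees detail that falls between the render-resolution pixels. Here is a minimal sketch of that jitter pattern; the helper names are mine, for illustration only.

```python
def halton(index: int, base: int) -> float:
    """Radical-inverse (Halton) sequence value in [0, 1)."""
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

def jitter_offset(frame_index: int) -> tuple[float, float]:
    """Sub-pixel camera offset in [-0.5, 0.5) for a given frame, using the
    Halton(2, 3) pattern commonly used for temporal anti-aliasing/upscaling."""
    return (halton(frame_index + 1, 2) - 0.5, halton(frame_index + 1, 3) - 0.5)

# Each frame samples the scene at a slightly different sub-pixel position.
for i in range(4):
    print(jitter_offset(i))
```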
The jury is still out on how good FSR 2.0 and XeSS will be, but they are at least on the right track. The stills look extremely impressive, and the limited in-motion demonstrations do not exhibit much ghosting. As of now, DLSS is the only temporal upscaling solution of its kind, and only those with RTX cards can use it. Competition in this area is badly needed, and if AMD and Intel hit their solutions out of the park, it will dramatically change the PC gaming landscape. Older cards such as Polaris and the GeForce 10- and 16-series can last a few more years before riding into the sunset. Developers will have a significantly larger userbase who can enjoy the benefits of temporal upscaling, which may lead to indirect effects such as more word of mouth and better sales. It's a win-win situation for everyone, especially while GPUs are still grossly overpriced.
I recently learned how to use FSR on Linux (with Lutris and Proton), so I'm happy that FSR is getting an upgrade. Does it need to be implemented on a game-by-game basis, though? FSR 1.0 can be used even in games that don't implement it.