Originally published on Publish0x.
On August 19, Intel held its Architecture Day 2021 presentation. The roughly two-hour event featured the Alder Lake architecture on the CPU side and the newly named Arc architecture on the GPU side. Overall, the presentation was very solid and I found the products to be worth getting excited about. It was definitely among Intel's better showings, unlike past events where it would oddly mention its competition more often than its own products or speak in word salads.
What particularly caught my eye was Intel revealing its own upscaling solution to compete with Nvidia's DLSS and AMD's FSR: XeSS. Not only will we get a (much needed) third competitor in the GPU market, but we may get an upscaling solution that adopts the strengths of both DLSS and FSR. The XeSS segment of Intel's Architecture Day 2021 presentation lasted about six minutes, and there is plenty to talk about.
The XeSS portion starts at 5:30.
Judging by Intel's diagram of how XeSS generally works, it is more similar to Nvidia's DLSS than to AMD's FSR. XeSS reconstructs an image based on motion vectors and past frames. In other words, it uses a temporal method just like DLSS. Not to mention, Intel even hired an engineer who worked on DLSS to help build its own solution.
The resemblance is uncanny.
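To make the temporal idea concrete, here is a minimal Python sketch of how this style of reconstruction works in principle: reproject the previous high-resolution output along the motion vectors, then blend in the freshly rendered low-resolution frame. All the names and the fixed blend factor are illustrative assumptions; XeSS and DLSS replace this naive blend with a trained neural network.

```python
import numpy as np

def temporal_upscale(curr_lowres, prev_output, motion_vectors, blend=0.9):
    """Toy temporal reconstruction, NOT Intel's actual algorithm.
    Assumes grayscale frames and an integer upscale factor."""
    h, w = prev_output.shape
    scale = h // curr_lowres.shape[0]

    # Naive upsample of the freshly rendered low-res frame.
    upsampled = np.kron(curr_lowres, np.ones((scale, scale)))

    # Reproject: fetch where each output pixel sat on the previous frame,
    # using the per-pixel motion vectors supplied by the game engine.
    ys, xs = np.indices((h, w))
    src_y = np.clip(ys - motion_vectors[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion_vectors[..., 1], 0, w - 1).astype(int)
    history = prev_output[src_y, src_x]

    # Accumulate: the history buffer carries detail gathered from many
    # past (jittered) samples, which is where the extra data comes from.
    return blend * history + (1 - blend) * upsampled
```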
AMD's FSR, on the other hand, is not temporal but strictly spatial. It runs an upscaling and sharpening pass after rasterization and lighting, but before post-processing and UI elements. The end result looks much better than traditional bilinear upscaling. Because it also does not require any machine learning, FSR is very easy for developers to implement. That said, because it is upscaling with less data than a temporal solution, some details will inevitably get lost. This becomes really noticeable at lower target resolutions like 1080p. DLSS, however, can upscale to 1080p with quality comparable to the native image because it is working with more data from past frames.
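For contrast, here is an equally rough Python sketch of the spatial approach: a single upscale pass followed by a sharpening pass, with no history and no motion vectors. FSR's real EASU and RCAS shaders are far more sophisticated; this only illustrates why a single-frame technique is so easy to drop into a renderer.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def spatial_upscale(frame, scale=2.0):
    """Toy spatial upscaler in the spirit of FSR's two passes,
    NOT AMD's actual algorithm. Single frame in, single frame out."""
    # Pass 1: upscale (plain bilinear here; FSR uses an
    # edge-adaptive spatial upsampling filter instead).
    upscaled = zoom(frame, scale, order=1)

    # Pass 2: sharpen to recover some of the contrast the
    # upscaling filter smeared away.
    kernel = np.array([[ 0, -1,  0],
                       [-1,  5, -1],
                       [ 0, -1,  0]], dtype=float)
    return np.clip(convolve(upscaled, kernel), 0.0, 1.0)
```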
Based on the short demo running on Unreal Engine, XeSS was able to show image quality comparable to a native 4K image from just a 1080p target (i.e., the demo renders at 1080p and XeSS upscales it to 4K). Intel claimed that framerates can improve by up to 2x, though it did not disclose how close or how often XeSS can hit that theoretical maximum.
I can glean how much of a boost XeSS can offer relative to traditional 4K, but for goodness' sake, offer some sort of scale on the axis.
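Some back-of-the-envelope arithmetic puts that "up to 2x" claim in context: rendering at 1080p shades only a quarter of the pixels of native 4K, so the theoretical shading saving is 4x. Presumably the rest is eaten by work that still runs at full cost, such as geometry, post-processing, and the XeSS pass itself.

```python
# Shaded-pixel arithmetic behind the 1080p -> 4K demo.
native_4k = 3840 * 2160      # 8,294,400 pixels per frame
target_1080p = 1920 * 1080   # 2,073,600 pixels per frame

print(native_4k / target_1080p)  # 4.0 -> only 1/4 of the pixels get shaded
```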
Another caveat worth mentioning is that the demo was rather calm, i.e. there wasn't much action and movement going on. One of DLSS's biggest weaknesses is ghosting. Ghosting is where remnants of previous frames stay superimposed over the current one, creating a smudgy, blurry look. Since DLSS relies on previous frames for data, it's no surprise that some ghosting will occur. Because XeSS uses the same method, it will likely share the same weakness. XeSS is extremely impressive with still images and inactive scenes, I'll give Intel that, but how will it fare with something more hectic?
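A tiny numeric example shows why any history-based upscaler is prone to this. Suppose an object moves away and the motion vectors miss the disocclusion, so a stale bright pixel keeps getting blended into a now-dark region (the 0.9 blend factor is just an assumed value for illustration):

```python
# Why temporal accumulation ghosts: a stale history value decays
# slowly instead of vanishing, leaving a visible smear for frames.
history, blend = 1.0, 0.9   # bright pixel left behind by a moving object
for frame in range(1, 6):
    history = blend * history + (1 - blend) * 0.0  # true value is now dark
    print(f"frame {frame}: residual ghost = {history:.2f}")
# After 5 frames, ~59% of the stale value still lingers on screen.
```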
While the jury is still out on how well XeSS will perform relative to FSR and DLSS under similar workloads, I'm really happy that Intel is following AMD in making XeSS available to all GPUs. Considering that Intel will effectively start out at 0% marketshare in the discrete GPU market, making XeSS available to everyone is a good move. However, I think Intel is also being clever by going with a hybrid approach, leveraging the proprietary Matrix Engines (XMX) in its Arc GPUs to reduce XeSS's overhead. Intel said that once XeSS matures, the technology will go open source. It would definitely be interesting to see an fshack-like implementation of XeSS on Linux, similar to what Georg 'DadSchoorse' Lehmann accomplished with FSR.
Not only can you use FSR on Steam, but also on Lutris (0.5.9 or newer) for your non-Steam games.
So can Intel's XeSS kill Nvidia's DLSS? Technically, anything is possible, especially since XeSS is doing the same things as DLSS. However, it greatly depends on how well Intel executes the technology. XeSS looks fantastic in inactive scenes, but knowing the inherent weaknesses of temporal upscaling, will the image still look great when there's a lot of chaos? If Intel can get it to work well on action-packed games like Doom Eternal from the get-go, then Nvidia will definitely feel the pressure. Going with a vendor-agnostic approach was also a smart idea, as developers will be more willing to implement technology that is accessible on all GPUs, even integrated ones.
Intel still has a long way to go, but so far, it is ticking the right boxes.