Death Stranding – Native 4K vs NVIDIA DLSS 2 vs AMD FSR 2.0 vs Intel XeSS benchmarks & comparison screenshots

Kojima Productions yesterday released a patch that adds support for both AMD FSR 2.0 and Intel XeSS to Death Stranding. This makes Death Stranding the first game to support all three major PC upscaling techniques (DLSS, FSR and XeSS). And, as you might have guessed, we’ve decided to benchmark and compare them.

For these benchmarks and comparison screenshots, we used an Intel Core i9-9900K with 16GB of DDR4 at 3800MHz and NVIDIA’s RTX 3080. We also used Windows 10 64-bit and the GeForce 517.48 driver. For our comparisons, we used the Quality Mode of all the upscaling techniques (so no, we did not use XeSS’s Ultra Quality Mode).
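For reference, the Quality preset of all three techniques commonly renders internally at 1/1.5 of the output resolution per axis (the exact ratio can vary by implementation and version, so treat this as a rough sketch of the arithmetic):

```python
# Rough render-resolution arithmetic for the Quality presets.
# Assumes the common 1.5x-per-axis scale factor used by DLSS 2,
# FSR 2.0 and XeSS 1.x in Quality mode; exact ratios can vary.

OUTPUT_W, OUTPUT_H = 3840, 2160  # native 4K target
SCALE = 1.5                      # Quality preset: 1.5x upscale per axis

render_w = round(OUTPUT_W / SCALE)  # 2560
render_h = round(OUTPUT_H / SCALE)  # 1440

native_pixels = OUTPUT_W * OUTPUT_H
render_pixels = render_w * render_h

print(f"Internal render resolution: {render_w}x{render_h}")
print(f"Pixels shaded vs native 4K: {render_pixels / native_pixels:.0%}")
# -> 2560x1440, roughly 44% of the native pixel count
```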

The NVIDIA GeForce RTX 3080 can run Death Stranding at over 60fps in native 4K on Ultra settings. However, as you can see in the following video, the game suffers from major aliasing issues in 4K/TAA. In this scene, NVIDIA’s DLSS 2 offered the best visuals and performance. In terms of visual quality, Intel’s XeSS came second, and AMD’s FSR 2.0 third.

Death Stranding - Native 4K vs NVIDIA DLSS 2 Quality vs AMD FSR 2.0 Quality vs Intel XeSS Quality

Now while AMD FSR 2.0 looks a bit sharper, it cannot compete with Intel’s XeSS. To better illustrate this, we’ve zoomed in on the following comparison and circled the points of interest. As you can see, Intel XeSS does a better job at eliminating aliasing.

[Zoomed comparison: Intel XeSS (better AA) vs AMD FSR 2.0 (worse AA)]

Below you can also find some comparison screenshots between DLSS 2 Quality (left), AMD FSR 2.0 (middle) and Intel XeSS (right). In all of our comparisons, DLSS 2 Quality looks best (even better than native 4K, as demonstrated in the video). Intel’s XeSS looks softer than FSR 2.0, but it does a better job at reconstructing the image.

[Comparison screenshots, sets 1-4: DLSS 2 Quality (left), AMD FSR 2.0 Quality (middle), Intel XeSS Quality (right)]

Performance-wise, DLSS 2 Quality is the big winner here. Thanks to NVIDIA’s AI upscaling technique, our RTX 3080 was able to come close to a 120fps experience. DLSS 2 Quality is followed by FSR 2.0, and then by XeSS. It’s also worth noting that all of these upscaling techniques offer better performance than native 4K.

All in all, DLSS 2 Quality remains the best upscaling technique, as it offers the best image quality and performance. Intel’s XeSS, on the other hand, offers better image quality than FSR 2.0 but is less performant than AMD’s solution.

[Chart: Death Stranding upscaling techniques, 4K benchmarks]

31 thoughts on “Death Stranding – Native 4K vs NVIDIA DLSS 2 vs AMD FSR 2.0 vs Intel XeSS benchmarks & comparison screenshots”

    1. That’s what I’ve come to expect, but in this example DLSS looks ever so slightly blurrier in those screens. Probably just the sharpness tuned up slightly more on FSR, though.

  1. It’s great to have so much competition in this space.

    Each solution will probably have hardware and games it works better on than the others. It would be great if we could get to the point where all three engines are available in every game and we could just choose the one that works best for the game in question.

    1. I wouldn’t say “so much”. AMD only started making waves last year, and XeSS just came out this year. It’s the first time NVIDIA has actually had to compete against someone in the graphics sector since they bought 3dfx (makers of the Voodoo cards) exactly 20 years ago and annihilated any sort of competition.

      1. Not really, once a cult is formed it’s hard to change people’s minds, and Nvidia has formed a cult. I was a long-term Nvidia user since 2002, but I recently tried an AMD GPU and it works just fine. Heck, I prefer AMD’s software over the Nvidia Control Panel. There are lots of stereotypes surrounding AMD cards, like “many games crash” and “AMD GPUs don’t last very long”, blah blah. Nvidia only has the upper hand because they excel at marketing.

        1. Nvidia also excels on the software side, where they offer additional value through various graphics enhancements like PhysX, GameWorks and DLSS. For now they also have better ray-tracing support and in-house game streaming from PC to TV without the necessity of additional hardware or software. Not everything is just marketing.

          1. All of the features you mention I find hard to call excelling; they were more like feature gating when first introduced. Even when something like DLSS could actually run on their own previous-gen GPUs (look at XeSS), they chose to gate the feature to their next gen of GPUs, which makes them look bad even in front of their own customers. Now they’re trying to gate DLSS 3, even though their previous-gen GPUs also have the hardware necessary to run it (the optical flow something); that’s no excuse, in my opinion, for not giving their customers the option when it actually could work. And let’s not get into G-Sync (adaptive sync) and ray tracing, which they finally made available across all ranges of hardware.

        2. But what about DLSS and HDR technology being better? At the end of the day, we just want what’s best out there. I mean, that’s what a true gamer wants, right?

          Personally I don’t care about any cult or marketing strategies. I care about the outcome, the end result.

  2. Yeah, DLSS is the bee’s knees. “Better than native” is hard to believe and people still deny it, but here you go. Granted, that’s not the case in all games, but by and large it’s comparable or basically even on the whole, while being much more performant to boot. Love it!

  3. AMD really need to get their software in order.

    AMD currently cannot compete with Nvidia and Intel on H.264 encoding (NVENC), AV1, or DLSS 2.0 / XeSS.

    AMD treats GPU software as an afterthought: buggy drivers, buggy encoding, buggy streaming.

    AMD also does a horrible job supporting developers, as Tom’s Hardware pointed out when they reviewed AMD’s GPU encoder last July.

    The problem with AMD’s AMF encoder update is that no streaming platform has offered support, including OBS, despite the update being out for nearly four months. It’s unclear why no one has implemented support for the update yet, but AMD does have a history of not broadly supporting developers in implementing its encoder SDKs. This might be the reason why support is taking so long to implement.

    1. RDNA2 encoder is now competitive in H.264.
      Recording in H.265 on all AMD GPUs that have the capability has always been competitive.
      Remote play with Steam Remote Play, Parsec or AMD Link (I think there’s even a Moonlight version for AMD) using HEVC is outstanding.
      AV1 decode is present on RDNA2.
      AV1 encode will be present on RDNA3.
      These games aren’t using FSR 2.1.1 against Nvidia’s best DLSS version or XeSS, so it’s not a 1:1 comparison.
      FSR 2.x is VASTLY faster across ANY GPU vs XeSS.

      AMD is doing plenty fine.

        1. Or maybe you are too fragile to be online.

          I disliked Nvidia’s DLSS 3.0 because it adds latency.
          I disliked Intel’s lack of native DX9 support.

          I don’t hold back because it’s AMD. Their encoding and streaming support still sucks, OBS support still sucks, and their FSR is still behind DLSS 2.0.

          AMD’s GPUs are price-competitive with Nvidia’s when it comes to FPS. Yet the 15 most-used GPUs on Steam are all Nvidia. There is a reason for this: AMD’s driver instability and lack of features.

          1. Fake frames don’t give ANY responsiveness advantage. Literally the exact OPPOSITE in fact.

            (DLSS 3 + Reflex is only able to get the input latency to, at best, MATCH that of the pre-DLSS 3 framerate! Aka if your pre-DLSS 3 framerate is a mere 30fps, then 100fps with DLSS 3 + Reflex will still have a native-30fps level of latency/input lag. Ergo DLSS 3 has absolutely ZERO competitive benefits! Or god forbid you try and use DLSS 3 WITHOUT Reflex? Then you’ll have EVEN WORSE than native-30fps levels of input lag/responsiveness!)

            They’re also covered in endemic AI rendering artifacts (like those made by art bots like DALL-E), at least in the footage Nvidia has shown so far (Spider-Man Remastered w/ DLSS 3 looks absolutely TERRIBLE in frame-by-frame close-up).
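A rough back-of-the-envelope of the latency argument above, under a deliberately simplified model (real pipelines with render queues, Reflex and display timing are more complicated): interpolation needs both neighbouring real frames before it can present the generated one, so each real frame is held back roughly one native frame time.

```python
# Simplified, hypothetical model of why frame generation does not
# reduce input latency: an interpolated frame needs BOTH of its
# neighbouring real frames, so the current real frame is delayed
# until the next one exists. Numbers are illustrative only.

def frame_time_ms(fps: float) -> float:
    return 1000.0 / fps

native_fps = 30.0
native_ft = frame_time_ms(native_fps)   # ~33.3 ms between real frames

presented_fps = native_fps * 2          # frame generation doubles PRESENTED fps

# Input is still sampled per real frame, and interpolation adds
# roughly one native frame time of hold-back before presenting.
latency_native = native_ft              # crude baseline
latency_framegen = native_ft * 2        # baseline + hold-back

print(f"native {native_fps:.0f}fps: ~{latency_native:.0f} ms baseline")
print(f"framegen {presented_fps:.0f}fps: ~{latency_framegen:.0f} ms (smoother, not snappier)")
```

Which is roughly the commenter’s point: Reflex can claw some of that hold-back back, but at best you land near the pre-frame-generation latency.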

          2. Congrats on trying to look objective. Most people cannot tell the difference between FSR and DLSS. I agree Nvidia’s solution produces better results; however, it’s more anti-consumer/proprietary tech from them. Everything is in a closed-loop system. I personally have never really had issues with AMD drivers. Nvidia has had just as many problems, and people like myself who have used both on and off through the years know this. You exude fanboyism in every post in favor of Nvidia.

          3. There are lots of stereotypes surrounding AMD cards, like “they crash in many games” and “their cards don’t last very long”, blah blah. I’ve used Nvidia GPUs since 2002, but I’m not brand-loyal and haven’t joined some cult like the one these Nvidia fanboys have formed, who keep spreading misinformation about AMD cards. I recently tried AMD GPUs and they work just fine. Heck, I find AMD’s software miles better than the Nvidia Control Panel. Also, AMD’s sharpening filter is much better than Nvidia’s.

    2. There is nothing to get in order about FSR 2.0.
      It just doesn’t use exclusive hardware capabilities, so it can only match DLSS 2 if AMD ever chooses to enable some hardware-exclusive features, à la DLSS 3; those would then only work on the latest RDNA, while everyone else would still get FSR 2.0. It would also need time for the AI to learn, as Nvidia already has a big head start. Not to worry though, this will become like G-Sync and FreeSync in a few years.

      1. This is nonsense. FSR 2.1 is ridiculously competitive with the latest versions of DLSS. They each win & lose at different things; hence why, even in the same game, which one looks better can literally vary frame by frame.

        Also, DLSS 2 doesn’t actually use the Tensor Cores. They ditched using them after the DLSS 1 disaster (which WAS actually Tensor-accelerated). Post-DLSS 2, not supporting DLSS on older GPU architectures is a deliberate financial choice by Nvidia, NOT a hardware limitation, regardless of whatever BS they try to sell you. (This changes for DLSS 3 though, which is back to being Tensor-accelerated.)

        1. I understand that with each software update they can come closer, but it can’t offer as much performance and image quality as an AI-accelerated, hardware-exclusive solution. It never could. With FSR 2.1 being so close to DLSS 2, I wonder how far they could get if they began using hardware-exclusive features as well.
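For readers following this sub-thread: DLSS 2, FSR 2 and XeSS are all temporal upscalers at heart. They jitter the camera each frame, reproject the previous high-resolution output along motion vectors, and blend it with the new low-resolution frame; the “AI” in DLSS/XeSS chiefly replaces the hand-tuned blend and history-rejection heuristics. A toy, illustrative sketch of that shared core (not any vendor’s actual algorithm; all names here are made up):

```python
import numpy as np

def temporal_upscale_step(history, low_res_frame, motion_vectors,
                          upscale=2, alpha=0.1):
    """One toy iteration of temporal upscaling: reproject last frame's
    high-res output along motion vectors, then blend in the newly
    rendered low-res samples. Illustrative only - real upscalers (and
    the networks in DLSS/XeSS) also handle disocclusion, history
    rejection and sub-pixel jitter accumulation."""
    # Naive nearest-neighbour upsample of the current low-res frame.
    current = np.kron(low_res_frame, np.ones((upscale, upscale, 1)))

    # Reproject history: fetch each output pixel from where it was
    # last frame (nearest-neighbour lookup for simplicity).
    h, w, _ = history.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - motion_vectors[..., 1].astype(int), 0, h - 1)
    src_x = np.clip(xs - motion_vectors[..., 0].astype(int), 0, w - 1)
    reprojected = history[src_y, src_x]

    # Exponential blend: mostly history (temporal stability), a little
    # new data - detail accumulates over many jittered frames.
    return (1 - alpha) * reprojected + alpha * current
```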

    3. OBS has already added B-frame support for AMD RX 6000, you idiot… Maybe quote ACTUALLY recent sources?

      AMD’s VCE encoder on the RX 6000 has been fixed on AMD’s end for like 6+ months now and is EVERY bit competitive with the latest NVENC engine! Google “AMD encoder B frames” and then kindly take your fanboy butt some place else where they also prefer fanboy lunacy to hard facts & logic. It’s 110% not AMD’s fault it took OBS like 3 months to update their dang software with the fixes.

    4. When it comes to FSR, you and this article are full of crap. The article says DLSS > XeSS > FSR but has no evidence for that other than the first zoomed-in image about anti-aliasing. The video doesn’t show any other noticeable IQ difference, and the four sets of three images at the end don’t show any FSR loss to XeSS; in fact, the last set clearly shows that FSR is virtually identical to DLSS while XeSS is more different, so if DLSS is the gold standard, that’s an FSR win against XeSS.

      Let’s make another benchmark here: comparing all the upscaling-technique benchmarks, with Digital Foundry’s videos in first place and amateurs posting on Reddit or Twitter in second place, this comes in third place, because of the zero-effort approach and the statements not supported by evidence.

      1. “DLSS is better because it uses AI” — basically this entire article in a nutshell. In fact, in the second set of pictures, that “Void if tampered” yellow label is clearly more visible and better resolved with FSR 2 compared to DLSS 2.

  4. No wonder XeSS is more demanding than FSR, since it’s actually reconstructing the image with AI instead of merely applying a sharpen filter like AMD’s solution.

    1. I’m probably wrong on this, but the AI isn’t explicitly run on the hardware we have at home; it’s the model produced by training that actually runs on our hardware, using dedicated tensor cores to accelerate its ‘comparison and picking’ process. Hence why XeSS can still be used on other hardware that supports the DP4a instruction on the GPU, as a slower alternative to acceleration via dedicated hardware. Correct me if I’m wrong.
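That matches my understanding: XeSS ships a network trained offline, and on GPUs without Intel’s XMX units the inference kernels fall back to DP4a, a packed 4-wide int8 dot-product-with-accumulate instruction. A sketch of what a single DP4a operation computes (pure-Python illustration, not an actual API):

```python
def dp4a(a_packed: int, b_packed: int, acc: int) -> int:
    """What the DP4a GPU instruction computes: treat two 32-bit words
    as four packed signed 8-bit lanes, multiply lane-wise, and add all
    four products into a 32-bit accumulator. int8-quantised neural nets
    lean on this on GPUs without dedicated matrix (tensor/XMX) units."""
    def lanes(word: int):
        out = []
        for i in range(4):
            b = (word >> (8 * i)) & 0xFF
            out.append(b - 256 if b > 127 else b)  # sign-extend int8
        return out
    return acc + sum(x * y for x, y in zip(lanes(a_packed), lanes(b_packed)))

# Example: dot product of (1, -2, 3, 4) and (5, 6, -7, 8)
# = 5 - 12 - 21 + 32 = 4
a = (1 & 0xFF) | ((-2 & 0xFF) << 8) | ((3 & 0xFF) << 16) | ((4 & 0xFF) << 24)
b = (5 & 0xFF) | ((6 & 0xFF) << 8) | ((-7 & 0xFF) << 16) | ((8 & 0xFF) << 24)
print(dp4a(a, b, 0))  # -> 4
```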

  5. I would expect XeSS to be less performant than FSR as it’s doing a lot more, hence the superior image produced. It will be good to see how Intel with XeSS competes with Nvidia and DLSS going forward.

  6. Crap lazy test is crap. FSR 2.1 has been widely available for a while now and is STUPID easy to mod into existing FSR 2/DLSS 2 titles. Either test the ACTUAL latest tech available from each company or don’t test them at all.

    (It’s totally fine, even preferable, to include the OG FSR 2.0 in the testing, but leaving out FSR 2.1 ENTIRELY is absolutely freaking absurd.)
