Nine new games will support NVIDIA’s DLSS tech, including Darksiders 3 and OVERKILL’s The Walking Dead

NVIDIA has announced that nine new games will support DLSS, the upcoming super-sampling technique that will be exclusive to its RTX series. DLSS stands for Deep Learning Super Sampling, and it will use the Tensor Cores and AI in order to super-sample the image to a higher resolution.

By using DLSS, PC gamers will be able to get visuals almost as smooth as those found in native 4K, while also taking a smaller performance hit. We don’t know yet whether DLSS will be exclusive to 4K resolutions or whether PC gamers will also be able to use it at 1440p or 1080p.

The eight new games announced so far that will support DLSS are: Darksiders 3, Deliver Us The Moon: Fortuna, Hellblade: Senua’s Sacrifice, KINETIK, Outpost Zero, OVERKILL’s The Walking Dead, SCUM and Stormdivers.

Both the NVIDIA GeForce RTX 2080 Ti and RTX 2080 will release on September 20th, and we expect the first reviews to hit the Internet on September 19th.

Stay tuned for more!

UPDATE:

Fear the Wolves will be the ninth game to support DLSS.

12 thoughts on “Nine new games will support NVIDIA’s DLSS tech, including Darksiders 3 and OVERKILL’s The Walking Dead”

  1. 6 days and counting. Benchmarks will tell the true story of what Turing brings over Pascal, at least in current and past games. No more leaks. No more Nvidia claims. I’m ready for some in-depth reviews and benches from respectable tech sites.

    1. DLSS is sort of deep-learning anti-aliasing.

      It’s essentially a neural network that finds jagged edges and performs high-quality anti-aliasing by determining the best color for each pixel, then applies that color to create smooth edges and improve overall image quality.

      As per Nvidia, this new DLSS feature offers the highest-quality anti-aliasing with fewer artifacts than other forms of AA.

      Nvidia runs each game on its supercomputer at extremely high resolutions, and the AI compares that data to standard-resolution renders, trying to figure out what the image should look like using BOTH of these data sets.

      Once the AI has figured this out, the result is saved as a driver profile, so the Tensor cores on the Turing GPU can run that code and give you the same quality with slightly better performance, according to Nvidia; IMO, that’s because CUDA isn’t calculating the anti-aliasing anymore.

      In theory (though I’m not fully sure about this):

      I think DLSS requires a training set of full-resolution frames of the aliased images, rendered at ONE sample per pixel, to act as a baseline for training.

      Another set of full-resolution frames, rendered with at least 64 samples per pixel, then acts as the reference that DLSS aims to reproduce.

      At runtime, a FULL-resolution frame with motion vectors is required in addition to the input image to be anti-aliased.

      The motion vectors are used to “temporally” transform the previously anti-aliased frame so it aligns with the current frame, enabling the algorithm to use temporal info.
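      To make that concrete, here’s a minimal training sketch in PyTorch. Everything in it (the tiny conv net, the L1 loss, the tensor shapes) is my own guess based on the description above, not NVIDIA’s actual pipeline:

      ```python
      import torch
      import torch.nn as nn

      # Hypothetical DLSS-style training setup: a small conv net learns to map
      # cheap 1-sample-per-pixel aliased frames to 64-spp reference frames.
      class AANet(nn.Module):
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1),
              )

          def forward(self, x):
              # Predict a residual on top of the aliased input frame.
              return x + self.net(x)

      model = AANet()
      opt = torch.optim.Adam(model.parameters(), lr=1e-4)

      # Stand-ins for the real training data: aliased 1-spp frames (input)
      # and matching 64-spp renders of the same frames (reference target).
      frames_1spp = torch.rand(8, 3, 128, 128)
      frames_64spp = torch.rand(8, 3, 128, 128)

      for step in range(100):
          loss = nn.functional.l1_loss(model(frames_1spp), frames_64spp)
          opt.zero_grad()
          loss.backward()
          opt.step()
      ```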

      IMO, DLSS doesn’t run at native res, it just upscales.

      Basically, the Tensor cores use that as a basis to “super sample” the lower-resolution render, with the GPU rendering the game at 1080p/1440p before it’s sent out to the monitor at 4K.

      But there is more to this, as ground-truth images are also needed.
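      If that guess is right, the runtime step might look roughly like the sketch below. The reprojection, channel layout and stand-in model are all my assumptions about how such an upscaler could work, not how the Tensor cores actually do it:

      ```python
      import torch
      import torch.nn.functional as F

      def reproject(prev_output, motion_vectors):
          # Warp last frame's anti-aliased output so it lines up with the
          # current frame. motion_vectors: (N, H, W, 2) in [-1, 1] grid coords.
          n, _, h, w = prev_output.shape
          ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                                  torch.linspace(-1, 1, w), indexing="ij")
          grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
          return F.grid_sample(prev_output, grid - motion_vectors,
                               align_corners=True)

      def dlss_like_step(model, low_res, prev_output, motion_vectors, out_size):
          history = reproject(prev_output, motion_vectors)
          # Naive upscale of the low-res render; the network then refines it
          # using the temporally aligned history as extra input channels.
          up = F.interpolate(low_res, size=out_size, mode="bilinear",
                             align_corners=False)
          return model(torch.cat([up, history], dim=1))

      model = torch.nn.Conv2d(6, 3, 3, padding=1)  # stand-in for a trained net
      low = torch.rand(1, 3, 360, 640)    # stand-in for the 1440p render
      prev = torch.rand(1, 3, 720, 1280)  # stand-in for the last 4K output
      mv = torch.zeros(1, 720, 1280, 2)   # zero motion = static scene
      out = dlss_like_step(model, low, prev, mv, out_size=(720, 1280))
      ```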

      1. This is what my best guess has been…

        No other explanation allows such a feature to “improve performance” unless the game is rendered at a lower res and the AI upscales it.

    2. Nobody knows 100%, not even Nvidia it seems, lol. It should be a filter much like SSAA (Super-Sampling Anti-Aliasing), but only for a certain set of textures, which will be downloaded periodically, maybe with Nvidia drivers or perhaps standalone. And it works by taking a much, much smaller texture and super-sampling it, to avoid jaggies and misalignment of anti-aliased frames.

  2. “momentum continues”, yeah, that’s what we thought about Nvidia’s previous AA tech. They supported it with 2-3 games and then dropped it when nobody wanted to adopt it. Either it gets adopted across the board, or don’t bother with it.

    Last thing I want is good tech being squandered and talked up but never really used all that much. Either they make a massive push with this solution, or they should just drop it and wait for someone else to make the big push for a new AA solution.

    1. Nvidia has pushed a lot of AA tech. FXAA is one of them, and that one was adopted very widely; some games even dropped MSAA completely in favor of FXAA. The issue is the type of AA that needs very close integration into the game engine, like TXAA. Plus, many people didn’t like the “blurring” and the very heavy performance hit of TXAA, so it never really got popular even among gamers. DLSS should do the exact opposite: instead of costing performance like applying AA does, it should increase performance significantly. And there’s no need for game integration; it will all be done at the driver level. The only problem is that you’re going to need that “trained” profile instead of enabling it on the fly like FXAA.
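      For anyone curious, here’s a heavily simplified sketch of the post-process idea behind FXAA-style AA. This is only the core concept (luma-contrast detection plus neighbour blending), nothing like the real shader:

      ```python
      import numpy as np

      # Toy FXAA-style post-process pass (illustration only): find pixels with
      # high local luma contrast and blend them with their neighbours.
      def simple_post_aa(img, threshold=0.1):
          # img: (H, W, 3) float array in [0, 1]
          luma = img @ np.array([0.299, 0.587, 0.114])
          pad = np.pad(luma, 1, mode="edge")
          # Local contrast against the 4-neighbourhood.
          contrast = np.max(np.stack([
              np.abs(pad[1:-1, 1:-1] - pad[:-2, 1:-1]),   # up
              np.abs(pad[1:-1, 1:-1] - pad[2:, 1:-1]),    # down
              np.abs(pad[1:-1, 1:-1] - pad[1:-1, :-2]),   # left
              np.abs(pad[1:-1, 1:-1] - pad[1:-1, 2:]),    # right
          ]), axis=0)
          # A 3x3 box blur stands in for FXAA's directional edge filtering.
          padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
          h, w = luma.shape
          blurred = sum(padded[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)) / 9.0
          edge = (contrast > threshold)[..., None]
          return np.where(edge, blurred, img)

      frame = np.random.rand(720, 1280, 3)  # stand-in for a rendered frame
      smoothed = simple_post_aa(frame)
      ```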

      1. While there is no need for game integration, there is a need for the “training”, as you mentioned, which only happens when developers send their code to NVIDIA to create these game-specific profiles. I personally think this is some pretty slick technology, but I’m still leaning towards the expectation that a majority of developers won’t even go through the trouble of sending their code to NVIDIA.

        There does seem to be a pretty healthy number of games supporting it before the first driver even releases, though, so maybe that’s a good sign that developers are excited to go all in on something that could potentially double their game’s performance at 4K without any extra optimization on their end.

        1. Yes, the majority of game developers probably won’t care. But this is Nvidia; they’ll be the ones seeking out developers instead of waiting for developers to come to them. That’s how CUDA ended up being so successful in the professional world.

  3. “and will be using the Tensor Cores and the AI in order to super sample the resolution to higher values”
    That’s not how super sampling works…
    Super sampling is rendering at a higher resolution, then scaling down. Not starting lower and going higher.
    This is more of a post-process, or possibly a synchronous one, judging from benchmarks and how the hardware is set up.
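    To illustrate the distinction with made-up numbers: classic super sampling renders high and averages down, whereas DLSS (as described above) would start low and infer the missing detail going up. A minimal sketch of the classic direction:

    ```python
    import numpy as np

    # Classic SSAA direction: render at 2x in each axis, then average each
    # 2x2 block down to the target resolution.
    def downsample_2x(hi_res):
        h, w, c = hi_res.shape  # hi_res: (2H, 2W, 3)
        return hi_res.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

    rendered_hi = np.random.rand(2160, 3840, 3)  # stand-in for a 4K render
    output_1080p = downsample_2x(rendered_hi)    # SSAA-style 1080p output
    ```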
