
NVIDIA officially announces and details DLSS 2.1, brings ultra performance mode for 8K gaming

A few days ago, DLSS 2.1 appeared in Wolfenstein: Youngblood’s Steam database. And during a recent Reddit AMA, NVIDIA officially revealed this new version of DLSS. Yes, DLSS 2.1 is real, and here are its official details, straight from NVIDIA.

According to NVIDIA, DLSS 2.1 brings a new ultra performance mode for 8K gaming. NVIDIA claims that DLSS 2.1 will deliver 8K gaming on the GeForce RTX 3090 via a new 9x scaling option, meaning the game renders only one ninth of the output pixel count (for 8K, that is 2560×1440 upscaled to 7680×4320). And, as you may have guessed, Wolfenstein: Youngblood was one of the games showcased running on the RTX 3090.

Furthermore, DLSS 2.1 will bring VR support. As such, developers can now finally use DLSS tech in their VR games, something that was not possible with the previous versions of DLSS.

Lastly, DLSS 2.1 brings dynamic resolution support.

“The input buffer can change dimensions from frame to frame while the output size remains fixed. If the rendering engine supports dynamic resolution, DLSS can be used to perform the required upscale to the display resolution.”
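
To make that concrete, here is a minimal C++ sketch of how a dynamic-resolution loop feeds a fixed-output upscaler. This is a conceptual illustration only: `pickRenderResolution` and its heuristic are invented for this example, and the actual DLSS/NGX SDK calls (which also consume depth, motion vectors, and per-frame camera jitter) are not shown. The point is simply that the input dimensions may change every frame while the output dimensions never do.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstdio>

struct Resolution { uint32_t width, height; };

// The output size is fixed to the display resolution and never changes.
constexpr Resolution kOutputRes = {3840, 2160};

// Invented dynamic-resolution heuristic: scale this frame's render size
// against the frame budget, clamped so we never drop below half size
// per axis (the range a real upscaler supports may differ).
Resolution pickRenderResolution(float lastGpuMs, float budgetMs) {
    float scale = std::clamp(budgetMs / std::max(lastGpuMs, 0.001f),
                             0.5f, 1.0f);
    return {static_cast<uint32_t>(kOutputRes.width * scale),
            static_cast<uint32_t>(kOutputRes.height * scale)};
}

int main() {
    // Simulate three frames under increasing GPU load: the input buffer
    // dimensions shrink, while the upscale target stays fixed.
    const float gpuTimesMs[] = {12.0f, 22.0f, 30.0f};
    for (float t : gpuTimesMs) {
        Resolution in = pickRenderResolution(t, 16.7f); // 60 fps budget
        std::printf("render %ux%u -> upscale to %ux%u\n",
                    in.width, in.height, kOutputRes.width, kOutputRes.height);
    }
}
```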

67 thoughts on “NVIDIA officially announces and details DLSS 2.1, brings ultra performance mode for 8K gaming”

    1. The scale is locked, if I’m not mistaken, to 1/2 or 1/4 depending on the mode in DLSS 2… so no matter what you say the end resolution is, you get upscaled from either 1/2 or 1/4 of that.

      Now it’s possible to change the upscale ratio dynamically, i.e. the scaled output won’t change even if the baseline does (see the sketch below).
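
      For reference, a rough sketch of those fixed ratios alongside the new one, using the per-axis scale factors NVIDIA has published for the DLSS 2.x modes (Quality ≈ 2/3, Balanced ≈ 0.58, Performance 1/2, Ultra Performance 1/3, i.e. roughly 1/2.25, 1/3, 1/4, and 1/9 of the output pixel count). The identifiers below are made up for illustration, not an actual API:

      ```cpp
      #include <cstdint>
      #include <cstdio>

      // Per-axis input scale for each DLSS 2.x mode, per NVIDIA's
      // published figures. Mode names are real; this enum is not an API.
      enum class DlssMode { Quality, Balanced, Performance, UltraPerformance };

      float perAxisScale(DlssMode mode) {
          switch (mode) {
              case DlssMode::Quality:          return 2.0f / 3.0f; // ~1/2.25 of the pixels
              case DlssMode::Balanced:         return 0.58f;       // ~1/3 of the pixels
              case DlssMode::Performance:      return 0.5f;        //  1/4 of the pixels
              case DlssMode::UltraPerformance: return 1.0f / 3.0f; //  1/9 (new in 2.1)
          }
          return 1.0f;
      }

      int main() {
          // 8K output in Ultra Performance mode renders at 2560x1440.
          const uint32_t outW = 7680, outH = 4320;
          const float s = perAxisScale(DlssMode::UltraPerformance);
          std::printf("input: %ux%u\n",
                      static_cast<uint32_t>(outW * s),
                      static_cast<uint32_t>(outH * s));
      }
      ```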

    1. The article states that DLSS now works with VR, and of course, if you can DLSS to 8K, you can certainly do it at 4K with better performance.

        1. Yes, it would, but hopefully going forward most games start using DLSS, including VR games, seeing as it now works with VR.

      1. It can run games upscaled to 8K (up to 9x upscaling to make it work, according to the article and the benchmarks they’ve shown). It’s not native. Not even close. So for 4K, it isn’t doing much; 4K is likely finally viable in most games, though.

        1. Forza runs at native 8K/60. I know it’s Forza, and my 3770K + 1080 can pretty much max it out at 4K/60. But still impressive. Plus, I wouldn’t mind being able to up the resolution modifier in games a bit to further reduce aliasing at 4K.

          1. Yeah, Forza runs great.
            You’re getting aliasing at 4K? What’s up with that?
            I’m at 1440p and generally it’s crystal clear, although I’ve come across some exceptions.

          2. I’m playing on a 65″ 4K Samsung TV. I don’t use in-game AA. There is definitely aliasing at 1440p, and even at 4K you can still see hints of aliased edges. At 6K or higher, that should be pretty much gone.

            Also, play something like Project CARS 3: aliasing land. Ouch. Especially compared to PC2 (which is a far better game).

    2. I do, just from the impressive power standpoint. It’s nice that the whole 3000-series range can pretty much max any game at 4K/60. That’s the selling point; the 8K/60 is icing on the 3090 cake that I will gladly eat.

    3. Dude, 8K is 4x the pixels of 4K (exactly 2x on each axis), which means you can do perfect downsampling without even really downsampling. F*king zero aliasing and incredible detail and sharpness (see the sketch below).
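
      The arithmetic holds up: 7680×4320 is exactly twice 3840×2160 on each axis, so every 4K pixel maps to a whole 2×2 block of 8K pixels, and a plain box filter gives clean 4x ordered-grid supersampling with no resampling artifacts. A minimal single-channel sketch of that downsample:

      ```cpp
      #include <cstdint>
      #include <vector>

      // 2x2 box-filter downsample: every output pixel is the plain
      // average of a whole 2x2 input block -- effectively 4x ordered-grid
      // SSAA when the input is 8K and the output 4K. Single channel
      // (grayscale) for brevity.
      std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src,
                                        int srcW, int srcH) {
          std::vector<uint8_t> dst((srcW / 2) * (srcH / 2));
          for (int y = 0; y < srcH / 2; ++y)
              for (int x = 0; x < srcW / 2; ++x) {
                  int sum = src[(2 * y)     * srcW + 2 * x]
                          + src[(2 * y)     * srcW + 2 * x + 1]
                          + src[(2 * y + 1) * srcW + 2 * x]
                          + src[(2 * y + 1) * srcW + 2 * x + 1];
                  dst[y * (srcW / 2) + x] = static_cast<uint8_t>(sum / 4);
              }
          return dst;
      }
      ```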

      1. Did NVIDIA make a DLSS implementation that works as high-quality AA at native rendering resolution? I remember them talking about something like that, but I haven’t seen it used yet. That sounds like a better use of resources than upscaling to 8K with it and then down to 4K.

        1. 8K is about the limit of human sight. I’m sure you could enable SMAA with ReShade and eliminate any remaining aliasing, or just sit further away.

          1. Depends on the size of the screen and how far away you’re watching from; basically, how much a pixel takes up in your field of vision. On small monitors such resolutions don’t matter (perhaps a little if you had your nose glued to the screen), but they would matter if you were close to a huge screen. See the sketch below.
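
            That intuition can be put into numbers as pixels per degree (PPD): how many pixels fall within one degree of your field of view for a given screen width and viewing distance. Around 60 PPD is the usual rule-of-thumb threshold for 20/20 vision. A small sketch (the 65-inch-TV width below is an approximation):

            ```cpp
            #include <cmath>
            #include <cstdio>

            // Pixels per degree of visual angle for a flat screen viewed
            // head-on: horizontal pixels divided by the horizontal FOV.
            double pixelsPerDegree(double horizontalPixels, double screenWidthM,
                                   double viewingDistanceM) {
                const double kPi = 3.14159265358979323846;
                double fovDeg = 2.0 * std::atan(screenWidthM / (2.0 * viewingDistanceM))
                                * 180.0 / kPi;
                return horizontalPixels / fovDeg;
            }

            int main() {
                // Same 4K panel (a 65" 16:9 TV is ~1.43 m wide), two distances:
                // up close the pixels are coarse, further back they pass "retina".
                std::printf("4K at 1.5 m: %.0f PPD\n", pixelsPerDegree(3840, 1.43, 1.5));
                std::printf("4K at 3.0 m: %.0f PPD\n", pixelsPerDegree(3840, 1.43, 3.0));
            }
            ```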

    4. Yeah, 8K (even 4K, unless you have a massive screen) is a pointless waste of resources. Talk first about high frame rates, 120 Hz+, then start talking about resolution.

  1. According to Digital Foundry, DLSS 2.0 gives as good, if not better, picture quality than native resolution. So this should be a huge boon to 4K gaming at 60 fps and above.

  2. Say what you want about DLSS; DLSS 2 is, IMO, a huge thing. Most AAA titles use TAA as the main AA method: it produces good AA for a low resource cost, but it introduces quite a lot of blur. That’s where DLSS 2+ (quality mode) comes in. Even though it upscales the image, the end result is often better than native resolution, because DLSS doesn’t have that darn blur that TAA introduces. So it’s both better and faster; what’s not to like?

    1. If I’m not mistaken, DLSS 2 only looked better than native in two games, right? Control and Wolfenstein.
      Every other game came out a bit blurry.
      It’s weird how upscaling is suddenly a stand-out feature.

      1. Pretty much all DLSS 2 titles I’ve checked that feature TAA had better clarity with DLSS 2 due to “TAA blur busting” thus far. Death Stranding is another example.
        https://uploads.disquscdn.com/images/f08f344819d1ec680a01aafa845605db87a6083f552126b93600407e17790e37.png

        Look at the screenshot, from a frigging YouTube video that already lost quite a bit of fidelity to compression. Don’t bother with the highlighted area… look at the light pole and stonework to the right, or the parallax mapping on the brick wall to the left. It’s day and night, IMO.

        There aren’t that many titles featuring DLSS 2 yet, but considering it’s rather new, that’s not really surprising.

        1. There’s motion blur in this game, and the character is clearly running, so I would assume the bricks and light pole are just blurry because of motion blur in that shot. Look at his left arm, too.

          1. Might be the case for this shot, but it still doesn’t explain the clearly noticeable better clarity in-game. Motion blur is the first thing that always gets set to off; it’s only there to mask crap frame rates anyway.

          2. Motion blur is a matter of taste, in my opinion. There’s good and bad motion blur; it has its place outside of just hiding bad frame rates.
            Gamers’ religious preaching that it’s bad and should be disabled is sad and off-putting.

          3. There are good and bad implementations, yeah, but above 100 fps it’s of no real use. Good motion blur takes frame times into account to adjust its strength, but it’s still nowhere near the presentation of a good frame rate on a speedy screen.

          4. 100+ fps still doesn’t remove the gaps between frames completely, which is especially noticeable in fast motion.
            And I agree that they should make the camera shutter speed for motion blur dependent on frame times, but developers don’t bother making that a reality and set a static value for a 30 or 60 fps target, I guess.

      2. This is because the upscaling algorithm is intelligent and trained on much higher-resolution data than native. Upscaling without machine learning shouldn’t give a better result than native.

        If the upscaling algorithm is trained on 16K-resolution data, as NVIDIA claims, it is certainly possible that the algorithm has learned how to make a lower resolution (4K) look like something higher-res (8K), which can result in the upscaled image looking better than native.

      1. Enable proper full RT (not just RT-enhanced raster) and it will crawl down to a slideshow even with its insane specs; you will want DLSS even on that card on occasion, no doubt (I would actually enable it just to get rid of the TAA blur, provided the DLSS implementation doesn’t lack necessities like proper motion-vector handling).

        The crazier the devs get with RT, the more you will want to have it 🙂

        Perhaps a 3090 in SLI… hmm. J/K. A shame mGPU scaling is generally low currently, and I doubt the new NVLink will be speedy enough, or games coded for mGPU, for that to be viable again outside a few titles anytime soon. (Perhaps with the chiplet designs that are rumored for next-gen GPUs, they could feature a tile-based rendering system to save chiplet-to-chiplet bandwidth, and that could likely also be applied to mGPU rendering; thus making “make mGPU great again” a thing 🙂)

    2. I’m excited about these cards. Though AMD’s software FidelityFX upscaling in Monster Hunter World looks amazing: I don’t get any weird artifacts when moving around, and it looks pretty damn close to actual 4K. All this tech is going to make gaming just that much better. Hopefully, with consoles finally getting a taste of PCMR power, that means better optimization for PC games.

    3. It’s almost not in their best interest, if you think about it :)) Really, I don’t see a reason to buy a top-end card if I can buy a cheap 2060 and use DLSS 2.0/2.1 to achieve 4K, better graphics, or a better frame rate.

      1. Yeah, it’s a great value proposition. I think the main idea was to put that extra fps towards enabling heavy effects like RT, but the additional fps can be used for whatever the gamer prefers!

        1. Ray tracing is the future. VR gaming is the future. We need more power right now to drive it. VR demands much higher frame rates for an excellent experience; the lower the latency, the better.

          1. I bet every dev just wants time to jump forward 10-15 years so they won’t have to fake and bake raster anymore and can instead spend that time and effort making better games.

        1. I want to see a bunch of mixed benchmarks, not just mostly RT-enhanced raster games. After that, it’s very likely I could end up with one of those. I also hope AMD releases info about their new lineup at least before Cyberpunk 2077; if they haven’t, I’ll bite on one of the new NVIDIA cards.

          1. I’m waiting for AMD, plus PCIe 3 vs. PCIe 4 scaling tests, so I don’t get stuck with a great CPU bottlenecked by the bus. I’m on a 9900K now, so there’s no rush to swap that out.

      2. They already need to “sponsor” games to get support, and once they pay one developer, no one does it out of goodwill, because… the other guys got paid.
        I have no idea why people think Wolfenstein/Control weren’t paid… they’ve been patching these versions into games years out of development.

        1. DLSS 1 was crap (it almost made me wonder if someone had smeared oil on my screen) and a b-t-c-h for devs to add, since it required driver support with a network trained specifically for that game, besides the additional coding; the new DLSS 2 is faster to add.

          I think we will see a ramp-up in adoption speed now that DLSS 2 has been out a few months; many devs are gamers at heart and embrace good new tech.

          1. >many devs are gamers at heart.

            Tbh, I heard the same thing about ray tracing: how it just turns on and it’s a developer’s dream. No one touched ray tracing who didn’t ask for a check. Hell, they had to commission someone just to do Quake II RTX, and it’s open source.

          2. If 0.01% have the hardware to drive full RT, how can you get the money from the missing 99.99%? That’s the problem: the mainstream has to adopt something enough for it to become financially viable.

            Like all tech with two sides of the coin, it becomes a chicken-and-egg situation: people don’t want to spend on hardware that doesn’t have software… and the software devs don’t want to spend on something few can run, meaning low sales.

            Devs want FULL RT; that means they can skip a lot of the painful steps like the faking and baking of raster. Today they still have to do the raster faking and baking, and then add RT on top.

          3. Definitely. My main problem is people thinking that devs who go back and rework existing lighting etc., implementing ray tracing as an option for the *few*, are doing so of their own free will and not being paid by NVIDIA.

            People think that consoles [with very limited ray tracing] will change everything. They won’t. Developers are going to limit or skip ray tracing in multiplatform releases, and even if they do have it, they aren’t going to support DLSS to make a souped-up version for NVIDIA without a check… because everyone else got one.

    4. DLSS is garbage; it has to be added to games by the developers. AMD has better solutions that don’t have to be programmed into the game.

    5. When motion vectors are missing for some effects, those effects will look off, as they either get overdrawn if small or can get “smeared” out when mid-sized or large.

      And yeah, DLSS has a field day thanks to TAA blur, and that will likely continue, as proper AA costs way more resources.

    6. What’s not to like? Proprietary tech that requires developer time to implement, resulting in lack of use; e.g. PhysX.

      Now let’s see how NVIDIA implements MS’s ML to do basically the same thing. That is what we will see more use of on PC as games are developed for consoles.

      1. So you’re comparing basically sending a motion-vector map to an API, plus some minor changes, vs. coding against an entire physics API with dubious support, meaning you likely have to use two different physics APIs/engines to remain performant on all brands. What a comparison. (See the sketch below for roughly what such an integration has to supply.)

        Even though I don’t like proprietary tech, I doubt there will be a lack of DLSS titles, as it’s not that hard to implement. How MS’s ML turns out… time will tell. DLSS 2 has several released titles already and more on the way; zero for ML thus far, if I’m not mistaken?
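
        For a sense of scale, here is roughly the per-frame data a temporal upscaler integration has to supply from the engine. This is an illustrative sketch only, not the actual NGX API; the type and field names are invented:

        ```cpp
        // Illustrative only -- not the real DLSS/NGX interface.
        struct UpscalerFrameInputs {
            const void* color;          // low-res lit scene color
            const void* depth;          // low-res depth buffer
            const void* motionVectors;  // per-pixel screen-space motion
            float jitterX, jitterY;     // this frame's sub-pixel camera jitter
            bool  reset;                // camera cut/teleport: discard history
        };
        ```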

    7. Hoping DLSS 2.1 is able to bring back the missing raindrops. They said there are a lot of optimizations in 2.1, so it’s possible.

  3. I went from a 2.5K monitor to just HD and I couldn’t be happier; 1080p looks great. I don’t really care about or want 4K or 8K for gaming; it’s a useless gimmick. I would use DLSS tech to sell cheaper (hardware) cards, but then nobody would buy the other $600 cards, no? That’s why the 3060, 3050, and 3040 were MIA. Waiting for the real cards / AMD.

  4. That’s why 27″ 1440p 144Hz is still the holy grail of PC gaming. Anything bigger and you’ve got to move your head and eyes too much, and any higher res is a waste of resources.

    1. Yeah, when the RTX 4070 is out and it’s easy, maybe moving up to 4K/144Hz will be worth it. But no one’s eyes are getting any better than about 20/20, so resolution has diminishing returns at this point. There is much more in graphics to focus on now, like frame rate, ray tracing for lighting, AI, etc.

      1. Yeah, that’s a good point about temporal AA; its days are numbered. I’ll buy into 4K when 120 Hz or higher monitors are affordable, along with GPUs that can sustain those frames. Until then, 1440p/144Hz is still ideal for gaming to me, and DLSS should take care of the rest in terms of image quality and AA problems.
