NVIDIA has just lifted the embargo on some of its RTX 50 series slides, and we can finally share some interesting comparisons between DLSS 4's Transformer Model and Basic Temporal Upscalers. So, let's take a look at the visual benefits of the new DLSS 4 model.
It's crucial to note that DLSS 4 will be supported on all RTX GPUs, meaning these image quality improvements will be available to all RTX gamers. The only DLSS 4 feature that will be exclusive to the RTX 50 series is Multi Frame Generation.
PC gamers will be able to switch between DLSS 4's CNN and Transformer Models at will via the NVIDIA App. On January 30th, NVIDIA will release the latest version of its app, which will let you choose between the CNN and Transformer Models in all games that currently support DLSS. As such, you will no longer have to manually install DLL files to use the latest version of DLSS.
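For readers who have been doing this by hand, here is a rough sketch of the manual workflow the app override replaces. The paths and folder names below are made-up examples, and the script is only illustrative; always back up the game's original DLL first.

```python
# Hypothetical example of the old manual method: dropping a newer
# nvngx_dlss.dll into a game's install folder. Paths are placeholders.
import shutil
from pathlib import Path

new_dll = Path(r"C:\Downloads\nvngx_dlss.dll")   # newer DLSS DLL you downloaded
game_dir = Path(r"C:\Games\SomeGame")            # the game's install folder (example)
target = game_dir / "nvngx_dlss.dll"

if target.exists():
    # Keep a backup of the DLL the game shipped with.
    shutil.copy2(target, target.with_suffix(".bak"))
shutil.copy2(new_dll, target)
print(f"Replaced {target} with {new_dll}")
```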
NVIDIA told us that the Transformer Model will offer better visuals, with less shimmering and ghosting, than the CNN Model. However, the Transformer Model will not be as fast as the CNN Model, which is something you should keep in mind.
So, with that out of the way, let's take a look at the following comparison screenshots. In these examples, we can see how DLSS 4's Transformer Model can improve visual quality over Basic Temporal Upscalers. Note that these comparisons are not between the older CNN Model and the Transformer Model.
Funny thing is that DLSS already looked great. So, the fact that it will now look even better is something that will please a lot of gamers. We've seen instances in which 4K DLSS Quality looked better than native 4K. And now, with the Transformer Model, things will get even better for DLSS.
And there you have it. It will be really interesting to see how much better DLSS 4 will look with this new model. It will also be interesting to see whether Intel and AMD will be able to catch up with it. AMD is already working on FSR 4.0, which looked better than FSR 3.0. But will it be able to compete with DLSS 4's Transformer Model? Only time will tell.
Stay tuned for more!

John is the founder and Editor-in-Chief at DSOGaming. He is a PC gaming fan and strongly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on "The Evolution of PC Graphics Cards."
Contact: Email

wowww, it's muchhh better
But they're not comparing it to DLSS 3? Just a "basic temporal upscaler"?
This is the equivalent of monitor manufacturers always comparing a modern resolution to the blurry image of the old one.
In other words – pure marketing speak.
Very soon we will find out the real quality of DLSS 4. I'm sure all they said about neural shading and DLSS 4 is true. They have been so honest in recent years.
They can't, because there's nothing noticeable there, just fps numbers. It's just like when they invented margarine out of butter remains: they had to sell it somehow, they couldn't just call it leftovers. That's what DLSS 4 is. Leftovers of a great upscaling tech.
Margarine isn't made from dairy products; it is mainly made with corn oil, though it used to be made from animal fats, not milk itself.
and what is butter my dude? where do u think butter comes from? ANIMAL FAT! Margarine is the by product. If you're googling sht then read it properly.
No one likes a smart*ss
Hey-jus are you a dumb f*x! There is no as you say "ANIMAL FAT!" by product in margarine. It's made from vegetable oil. Holy kittens you are dumb! Grow up and learn you lil'boy…
idiot
Listen kid, i get it u have a hard-on for me, u;ve made that clear from your previous comments but this is getting pretty old and pretty fast, you're pathetic at this point.
Go read a book u fkng dche nuzzle. You're the 90% of this idi*tic population. Dumb c*nt!
Seriously i told u again ang again, do not f'ing breed. If you do, cause most of you do, f*k your fking children hard u dmb idiot along with ur dmb wife.
Your choices will be exactly what you are, trash. I will be blocking you after this so dont even bother, idiot.
Hush lil'boy … Go and take a nap in your onesie like your mama put on ya. No one can really help your stupidity at this point. Maybe when you wake up, you'll be less of a cretin than you are now. "Go read a book" as you say lolz the irony…
Good old "fake it to stay on top" NVIDIA.
This is awful, they don't even consider native anymore lol…
Jensen Huang said several years ago that native resolution wasn't the future. DLSS was the future. Nvidia has been pushing DLSS ever since and they will continue to do so. It's getting more and more expensive to go to smaller nodes so software is the big push for performance increases. Personally I prefer native but I seem to be in the minority on that.
I wish he'd accept my fake cash for his GPUs, I swear it looks just like the real thing from afar.
" However, the Transformer Model will not be as fast as the CNN Model. "
So how much slower are we talking? How much framerate will you sacrifice?
Is it worth switching to the Transformer Model if the difference is so small and the cost so high?
Then it's no wonder Nvidia needs its Frame Gen enhanced with the next updates, let alone Frame Gen x3/x4 giving more artifacts..
5% slower at most
DF talked about it. Obviously they don't know for sure yet, but based on the briefing with Nvidia they predicted a slight dip on the 40 series and earlier, and next to no dip (or no dip at all) on the 50 series due to the additional tensor cores just powering through it. We'll see, but I bet it's a negligible difference across the board.
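To put that kind of overhead into perspective, here is a quick back-of-the-envelope sketch of how a fixed per-frame upscaler cost translates into fps. The millisecond figures are made-up placeholders, not measured numbers for the Transformer Model.

```python
# Back-of-the-envelope math: a fixed per-frame overhead (in ms) hurts high
# framerates more than low ones. All numbers below are hypothetical.
def fps_with_overhead(base_fps: float, extra_ms: float) -> float:
    frame_ms = 1000.0 / base_fps           # current frame time in milliseconds
    return 1000.0 / (frame_ms + extra_ms)  # new framerate with the added cost

for base in (60, 120, 240):
    for extra in (0.25, 0.5, 1.0):         # assumed extra cost of the heavier model
        print(f"{base:>3} fps + {extra} ms -> {fps_with_overhead(base, extra):5.1f} fps")
```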
It will be slower than native… lol. An upscaler that's slower than native, imagine that… Only Nvidia can say such BS and still sell out.
Well, truth is – AMtrash is on life support with their worthless POS trash GPUs!
Soon it's game over for them, and good riddance to garbage!
Intel will do great though, and bring the competition we need!
https://uploads.disquscdn.com/images/72502db37b2a492e25295c723b2e2743bdc192788eefad98c1ce75990a2791f8.png
We lost the technology xD
We had it. It just slipped thru our fingers.
Hitman Codename 47 did this in 2000
Planar reflections made sense in old games, but that's no longer the case because you need to render the same scene twice.
This technique had the limitation that it could only be used in a single small room. In Uncharted and TLOU1, some small rooms used planar reflections, but most mirrors still use SSR to conserve resources.
Also note that planar reflections cannot be used for diffuse reflections.
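For anyone curious what "rendering the scene twice" actually involves, here is a minimal sketch of the classic planar-reflection setup: mirror the camera across the reflection plane, render the scene again from that mirrored camera into a texture, then sample that texture on the mirror surface. Only the camera-mirroring math is shown, with made-up example values; the two render passes themselves are engine-specific.

```python
# Sketch of planar reflections: reflect the camera across the mirror plane.
# Values are illustrative; a real engine would also build a mirrored view
# matrix and an oblique clip plane so geometry behind the mirror is culled.
import numpy as np

def reflect_point(p, plane_point, plane_normal):
    """Reflect a 3D point across a plane given by a point on it and its normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return p - 2.0 * np.dot(p - plane_point, n) * n

def reflect_direction(d, plane_normal):
    """Reflect a direction vector across the plane's normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return d - 2.0 * np.dot(d, n) * n

cam_pos = np.array([0.0, 1.7, 3.0])       # player camera in front of the mirror
cam_fwd = np.array([0.0, 0.0, -1.0])      # looking towards the mirror
mirror_point = np.array([0.0, 0.0, 0.0])  # a wall mirror lying in the z = 0 plane
mirror_normal = np.array([0.0, 0.0, 1.0])

mirror_cam_pos = reflect_point(cam_pos, mirror_point, mirror_normal)
mirror_cam_fwd = reflect_direction(cam_fwd, mirror_normal)

# Second render pass: draw the scene from (mirror_cam_pos, mirror_cam_fwd)
# into a texture, then map that texture onto the mirror quad.
print(mirror_cam_pos, mirror_cam_fwd)  # mirrored position sits behind the plane
```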
Whatever technique is being used, devs should have gotten better at the old tech they already utilised, and it should have been no problem by now, just like real-time lighting or shadows. We just lost it; rather than moving towards a solution that solved the problem, we create more problems as the solution.
Rendering path-traced mirror reflections is more resource-intensive than rendering the same scene twice. You're basically just rendering the same room, and you can lower the details on the reflection render and achieve better results, because the PT denoiser can't keep up. Real-time specular/metallic/mirror materials with low roughness are performance killers, and as you can see in Saga Anderson 2, it looks like sh*t. https://uploads.disquscdn.com/images/0a6c20be3dee32b0173475a22f74e3f9a2e1461bc1193cee6e353e3cc8084bb8.png
This is what I'm thinking; the lack of optimisation is worrying.
I'd rather they focus on brute force and try to work out how to get MCM working on GPUs for gaming.
Maybe next gen or the one after they move to MCM?
I think the 5090 will be MCM, but the real problem is power usage. It's getting insane already to push all of those cores. A node shrink generally improves efficiency, so that fewer watts are needed for the same number of cores, but it's getting more and more expensive to move to smaller and smaller nodes.
There is a limit to how much power a mainstream gamer is comfortable with using, and so MCM is one way forward, but it has its own power usage considerations.
The real breakthrough will come when silicon is replaced with another material or combination of materials. When will that be? Who knows but scientists and engineers are working on the research.
The RTX 5090 actually uses a monolithic die, as also evident from the card's PCB. No MCM here either.
But the interesting part to note is that this Blackwell GB202 GPU uses an unorthodox three-piece/layer printed circuit board (PCB): a 3-layered modular PCB design with a total of 34 power delivery/VRM phases, since the backside of the board has four power delivery phases as well.
For context, the RTX 4090 Founders Edition card only had a 23-phase VRM.
https://uploads.disquscdn.com/images/89c9dd681a2d4d09fac1905080bcf1bf64b005c21f23e61425cf58af07fc08a8.jpg
Another problem is that as node size shrinks, heat density increases, which is why no one is using 3 nm or 2 nm for high-performance GPUs; it's just too hard to get the heat away, so you get bad hot spots which cause the chip to fail.
The 5090 is monolithic. The only chip Nvidia will be making that is sort of MCM is the B200, which is essentially two B100 chips with a high-performance 10 TB/s interface that is way too expensive for low-margin consumer devices. You are talking about a chip on a device that costs $70,000 a pop (and is selling out as fast as they can make them).
MCM causes you to lose performance compared to a monolithic die. The intent of MCM is not to increase performance but to lower manufacturing costs. That's why the 7000 series GPUs needed about 10% more transistors to match the Nvidia 4000 series, which is why AMD couldn't compete with the 4090.
So, in other words, things start to go back to how they used to be.
Looks like they nailed some of the issues with DLSS/AI scalers in general; shame it will mostly be abused as an excuse to skimp even further on optimization, or to upscale fugly models no one wants on the screen to begin with.
I don't mean this as sarcasm or a slight, but can anyone explain how this statement is at all possible?
It's simple: some implementations of TAA suck and thus work worse than DLSS, especially if they are using TAAU, like The Witcher 3 Next Gen.
If an ML model is trained on 16K data, it can reconstruct a frame from, say, 1080p up to 4K with an understanding of how the frame would look at 16K.
Therefore, it's possible for 4K DLSS to look better than 4K native.
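To make the idea concrete, here is a toy, purely illustrative sketch (nothing like NVIDIA's actual network): a learned 2x upscaler built from a couple of convolutions and a sub-pixel shuffle. The real model is temporal, also consumes motion vectors and depth, and is trained against very high-resolution reference frames, which is what lets it add plausible detail a simple spatial filter can't.

```python
# Toy learned upscaler (illustrative only): maps a 1080p frame to 4K via
# sub-pixel convolution. Untrained here; the point is the structure, not quality.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self, scale: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        # Rearranges the extra channels into a (scale x scale) finer pixel grid.
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.features(x))

frame_1080p = torch.rand(1, 3, 1080, 1920)  # stand-in for a rendered 1080p frame
frame_4k = ToyUpscaler()(frame_1080p)
print(frame_4k.shape)                       # torch.Size([1, 3, 2160, 3840])
```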
Interesting.
They should compare DLSS 4 to DLSS 3 though; it would be fairer and would show any possible increase.