NVIDIA has just lifted the review embargo for the NVIDIA GeForce RTX 5080. And, since the green team has provided us with a review sample, it’s time to benchmark it in the most graphically demanding games that are available on PC.
For these benchmarks, we used an AMD Ryzen 9 7950X3D with 32GB of DDR5 at 6000MHz. We also used Windows 10 64-bit and the GeForce 572.02 driver. Moreover, we've disabled the second CCD on our 7950X3D, which is the ideal configuration for gaming on this particular CPU.
Before continuing, here are the full specs of the RTX 5080. This GPU has 10752 NVIDIA CUDA Cores, uses a 256-bit memory interface and comes with 16GB of GDDR7. It has 3x DisplayPort 2.1b with UHBR20 and 1x HDMI 2.1b connectors. Plus, it supports Dolby Vision for HDR.
Let’s start with our Native 4K benchmarks. For these tests, we’ve used the most demanding games you will find on PC. As you will see, we’ve also replaced Starfield with Final Fantasy 7 Rebirth. Starfield was CPU-bound even at 4K, so it made no sense to include it in our graphs.
Without DLSS, the NVIDIA RTX 5080 struggles to run non-RT games at 4K. Overall, the NVIDIA RTX 5080 is 20% slower than the RTX 4090 and 41% slower than the RTX 5090.
Sadly, we don’t have an RTX 4080 to compare with the RTX 5080, so we can’t verify NVIDIA’s performance claims over the RTX 4080. For that, we highly recommend reading other reviews. The good news, though, is that the NVIDIA RTX 5080 is priced similarly to the RTX 4080. So, even if the RTX 5080 is only 5% faster than the RTX 4080, it still offers better value. It may not set the world on fire, but it’s the better buy.
But what about DLSS 4? Well, now that’s a subject we need to talk about. Let’s start with the 4K benchmarks and the framerate benefits you’ll get with DLSS 4 X4 (using Super Resolution Performance Mode).
Sadly, there are some issues with DLSS 4 that can be more easily noticed on the RTX 5080 than on the RTX 5090. For example, Alan Wake 2 had noticeable input latency issues. Not only that, but it also suffered from the same visual artifacts we’ve experienced with the RTX 5090. Yes, you’ll get a smoother performance at 4K (than without using DLSS 4 Super Resolution or Frame Gen). However, this isn’t a free performance boost. It’s more like movement smoothing.
Cyberpunk 2077 also suffered from some visual artifacts, especially when it came to in-game icons and quest markers. They were quite noticeable on my end. These artifacts are similar to those I had in Dragon Age: The Veilguard when running it on the RTX 5090 at 8K with DLSS 4 X4. Basically, the base framerate (before using Frame Gen) is so low that MFG has trouble calculating the location of some icons or HUD elements. So, during quick movements, you’ll get stuttery in-game icons.
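A rough way to picture the HUD problem: frame generation builds intermediate frames from two rendered ones, but a screen-space quest marker can snap to a new position between those frames. The sketch below is an illustrative toy, not NVIDIA's actual MFG algorithm; a naive interpolator would draw the marker somewhere in between, which is exactly what shows up as icon stutter.

```python
# Simplified sketch of why generated frames can misplace HUD icons.
# This is NOT NVIDIA's MFG algorithm, just an illustration of the failure mode.

def interpolate(pos_a, pos_b, t):
    """Linearly interpolate a 2D screen-space position between two frames."""
    return tuple(a + (b - a) * t for a, b in zip(pos_a, pos_b))

# Two consecutive rendered frames during a quick camera pan:
# the quest marker snaps from one side of the screen to the other.
marker_frame_n  = (120, 300)    # pixels, frame N
marker_frame_n1 = (1800, 310)   # pixels, frame N+1

# MFG X4 inserts three generated frames between the rendered ones.
for i in range(1, 4):
    print(interpolate(marker_frame_n, marker_frame_n1, i / 4))
# The generated frames show the marker sweeping across the screen,
# even though the game actually snapped it instantly -> visible icon judder.
```

The lower the base framerate, the larger these per-frame jumps become, which is why the artifacts are more noticeable on the RTX 5080 than on the RTX 5090.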
In short, the RTX 5080 exposes the issues of MFG. When you have a high base framerate, you’ll get a great gaming experience with MFG. Dragon Age: The Veilguard was a perfect example of this on the RTX 5080. But when your framerate is around 35-45FPS, you’ll get noticeable visual artifacts and, in some games, extra input lag. Thankfully, NVIDIA will release Reflex 2, which might be able to reduce the input latency. Sadly, we didn’t have access to it, so I can’t comment on it. As for the visual artifacts, I really hope the green team will fix them via future drivers/versions. There is a lot of potential here. Right now, though, MFG is not as ideal as it was on the RTX 5090. Which… well… makes perfect sense, as the base framerate on the RTX 5090 was higher.
Another thing to note is that MFG mostly targets high refresh rate monitors. If you have a 4K/120Hz monitor, MFG is not something you’d want to use. In these situations, DLSS 4 FG X2 should be enough for gaming without exceeding the refresh rate of your monitor. So, if you don’t have a 240Hz or 360Hz monitor, MFG is a nothing-burger.
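The arithmetic behind that recommendation is simple: the displayed framerate is roughly the base framerate multiplied by the generation factor, and anything above your monitor's refresh rate is wasted. A back-of-the-envelope sketch (the framerates are illustrative numbers, not measurements):

```python
# Back-of-the-envelope: which frame-gen factor fits a given monitor?
# Displayed framerate ~= base framerate x generation factor (X2/X3/X4).

def best_fg_factor(base_fps, refresh_hz):
    """Return the highest factor whose output stays at or under the refresh rate."""
    for factor in (4, 3, 2, 1):
        if base_fps * factor <= refresh_hz:
            return factor
    return 1

print(best_fg_factor(60, 120))   # 4K/120Hz with a 60FPS base -> X2 is plenty
print(best_fg_factor(60, 240))   # 240Hz monitor -> X4 actually has headroom
```

With a 60FPS base on a 120Hz panel, X2 already saturates the display; X3 and X4 only pay off on 240Hz+ monitors.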
NVIDIA has been trying to push MFG as a “free performance upgrade” but we all know that’s not the case. If it were, it would not introduce extra input lag or visual artifacts when used at 30FPS. It’s a cool tech, but it’s not a free performance boost, and it currently suffers from the issues I’ve described above. Once NVIDIA manages to iron them out, MFG will feel and look better, and it will be easier to recommend (for anything other than an RTX 5090).
So, who is this GPU for? If you have an RTX 4080, you should not consider upgrading to an RTX 5080; the performance difference between them is not that big. For those still on an RTX 30 series GPU, the RTX 5080 is a “turbo-boosted” RTX 4080 Super with MFG. For the same price, you’ll get a faster GPU with new features. It’s not a bad deal. The RTX 5080 is not a bad GPU; it just doesn’t feel like a next-gen GPU. It doesn’t have that “wow” factor. So, if you are expecting or hoping for something truly amazing, you’ll be disappointed!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”
Overpriced refresh!
The gap between the RTX 5080 and 5090 is huge. The RTX 5090 has comparable performance to the RTX 5080 in SLI, or a 3090 in 3-Way SLI. The only good thing is that the price difference is equally big. I think Nvidia might release an RTX 5080 Ti with more VRAM in the future, maybe even on a smaller node.
I'm not planning to upgrade my RTX 4080S unless Nvidia can build a 2x faster GPU at a similar price point and power consumption. The RTX 5090 is twice as fast, but I can't accept the $2,000 price tag and 575-600 watts of power consumption.
The RTX5080 is 12% faster than the RTX4080S, but it seems you can also add another 12% with OC.
https://uploads.disquscdn.com/images/7489757287a835c2e53f7883a6fb19d0e8a1ec2301cb6434c2aa44ec0f2646ec.png
BTW. John can you tell me how much fps 4090 and 5080 have in Black Myth Wukong with the same settings?
https://uploads.disquscdn.com/images/1222075220f44c0d5857418a0e0b40b8ef29d4b46dffbe362e7a58df0ddef1bf.jpg
This card has a very high chance of becoming DOA.
It's going to sell out immediately ……
You have commented faster than me, my friend. Immediately after writing that, I went on to watch Hardware Unboxed and realized this will be a sell-out anyway. The price level is the same, the performance is a bit better and there is no AMD threat on the horizon. Easy win inbound.
wtf is nvidia doing?
Trying to scr*w over their customers. I hope they don't get away with it but we will see if this 5080 sells a lot anyway. If it does and AMD doesn't even compete with the 5080 then expect the same treatment from Nvidia on the 6080 🙁
Considering the benchmarks, I expect about the same demand as the 4080, meaning very low. They will EOL the card as soon as possible and push a 5080 Super.
I think they will go with a 5080 Ti first that uses about 20% more shader units and cores and will give you a 15-20% increase in power
I have doubts. Nvidia stated that the 5080 die is already maxed out; the only things they can do are increase the memory, beef up the power delivery and tune the clocks.
What exactly did you want them to do? When you are stuck on the same node, the ONLY way to increase performance is to add more shader units and up the core counts. They could have just wimped out like AMD and not released anything in the high end at all this generation.
We aren't going to see much change for the next few years until they figure out AI rendering, because node shrinks just aren't much of an option now, so the ONLY way forward is a major overhaul in rendering techniques.
I'd rather they work on getting MCM on GPUs working like it does now for CPUs.
Nothing. Because AMD is doing nothing either.
This is just funny at this point. I can't wait for the 5060 8GB to be 2% faster than the 4060, which was slower than the 3060 😉
It's literally a rebranding of 4080 Super!
It reminds me of long ago with the GTX 480 and GTX 580. The 580 was a refresh with a few more cores and faster clocks, but this 5080 is supposed to be a generational leap. It's a fail imo.
The GTX 580 was a welcome refresh, because the GTX 480 had a serious flaw: leaky transistors. Because of these leaky transistors, NVIDIA had to disable the affected cores in the chip and increase the voltage at the same time. These changes made the GTX 480 perform worse than it should have. There were samples with 512 fully functional cores, but normal GTX 480s only had 480 working cores. Nvidia fixed the leaky transistor problems and re-released the GTX 480 as the GTX 580.
The 5080 is, however, definitely a different architecture from the 4080. Blackwell adds new features on top of the Ada architecture (read the Blackwell whitepaper), and once developers start using these new features, the gap between the 4080 and 5080 will widen dramatically, just as it did when Black Myth: Wukong finally used the OMM engine in Ada Lovelace (in this game my RTX 4080S is 2x faster than the RTX 3090). The Blackwell architecture is also more power efficient: the RTX 5080 can operate at 3200MHz drawing 320W (1.05 volts), while by comparison the RTX 4080 at a max OC of 3000MHz draws 350W at 1.1 volts. The RTX 5080 also has much higher memory bandwidth. When OC'ed, it's as fast as the RTX 4090 in some games.
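The clock/voltage figures quoted above can be sanity-checked with the standard dynamic-power approximation P ∝ f·V². This is a rough model that ignores leakage and per-architecture capacitance differences, so treat it as a sketch, not a measurement:

```python
# Rough sanity check of the efficiency claim using P ~ f * V^2
# (dynamic switching power only; ignores leakage and capacitance differences).

def relative_dynamic_power(freq_mhz, volts):
    """Relative dynamic switching power, up to a constant capacitance factor."""
    return freq_mhz * volts ** 2

blackwell = relative_dynamic_power(3200, 1.05)  # RTX 5080 figures quoted above
ada       = relative_dynamic_power(3000, 1.10)  # RTX 4080 max-OC figures quoted above

print(f"ratio: {blackwell / ada:.2f}")
# ~0.97: about 6.7% higher clock for slightly LESS switching power per cycle,
# directionally consistent with the quoted 320W vs 350W draw.
```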
The RTX 5080 offers a smaller generational leap than expected, but it's still clearly better than the RTX 4080S. Watch Hardware Unboxed's review of the MSI Vanguard SOC RTX 5080: when OC'ed, it's as fast as the 4090 in some games.
https://uploads.disquscdn.com/images/a84740ea382a7addcc0e95d7744e1a65adc018e1b0619d6f7cf70c88d8dc6200.jpg
FYI, we use a different scene for our Black Myth: Wukong benchmarks. We don't use the in-game benchmark tool (we are using a more demanding area).
Once I find some free time, I'll test the 5080 and the 5090 with those settings with the benchmark tool.
OK, thanks John. My 4080S is OC'ed to 59TF and 820GB/s memory bandwidth, so it will be interesting to see if the OC is enough to close the gap with the stock RTX 5080, which also has 59TF. I'm expecting the stock RTX 5080 to be a little bit faster (newer architecture, and 17% higher memory bandwidth), but by how much exactly?
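For reference, TFLOPS figures like the 59TF in this thread follow from the usual FP32 estimate of 2 FLOPs (one FMA) per CUDA core per clock. The core count is from the review's spec list; the clock below is an assumed boost figure chosen to land near the quoted number, not an official spec:

```python
# FP32 TFLOPS estimate: cores x 2 FLOPs per clock (FMA) x clock in GHz.
def fp32_tflops(cuda_cores, clock_ghz):
    return cuda_cores * 2 * clock_ghz / 1000

# RTX 5080: 10752 CUDA cores (from the review's spec list).
# ~2.74 GHz is an ASSUMED boost clock that lands near the quoted 59 TF.
print(f"{fp32_tflops(10752, 2.74):.1f} TFLOPS")
```

As the thread notes, equal TFLOPS does not mean equal performance; architecture and memory bandwidth move the needle too.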
edit - I watched Framechasers' review and the RTX 5080 is better than I thought. They definitely improved the architecture. The RTX 5080 is able to sustain 3250MHz with ease at a lower voltage compared to Ada, and with such an OC the 5080 isn't that far behind the RTX 4090.
I doubt we'll see a node change… 3nm is for higher efficiency, not higher power. As node size shrinks, heat density increases, so chips run at much lower wattages. That's great for mobile devices that run on batteries but terrible for high-current devices like GPUs. Even liquid metal can't get the heat away from the chips fast enough when they are run at high wattages.
There are other ways, and Nvidia can well afford to develop new ways to scale up performance – for instance MCM solutions; it's not like we didn't have multi-GPU cards in the past. And with today's interconnects, GPU<->GPU connectivity is less of an issue than it was back then (that was the main reason it died down).
So for a monolithic design we are at the end of the line (unless some major breakthrough happens). Hope some new player comes in and gives a fresh breath to the milking industry, because it won't change otherwise. Kinda like when AMD became the x86 king (again; it was kind of amusing to see the Athlons, for instance).
https://uploads.disquscdn.com/images/98635cc89dcd9b3bff3dcff3457565a02acd5116c015e706b094dc95698a6229.jpg
It's funny that all of the people that were defending this 5080 and Nvidia and saying that it would be great have disappeared now. Well, at least the idiots have shut up about it now.
After all the benchmark reviews, I will defend this card even more. This will sell out as long as AMD decides to mess this up once again.
btw the idiots comment wasn't directed at you or anyone here. It was some of the people on Videocardz being complete idiots backing up Nvidia.
The 5080 is one of the biggest disappointments in years. Compare the uplift of the 4080 over the 3080, and the gap between the 4080 and 4090. This time the gap is two times more cores from the 5080 to the 5090.
The 5080 will probably sell very well anyway. It doesn't look like AMD will have anything to compete with it 🙁
I am not talking about that, mate. I too back up Nvidia a lot, but it is getting tiresome, especially in this generation after the infamous RTX 20 series. After DeepSeek, I am also questioning the efficiency of these BS cards. Just look at the CUDA core count + BW uplifts. I think it is 70-something percent, and we are getting a 32% uplift. Some people are saying same process node, but I don't think so. AMD has been trying to contest with G6 chips for almost an eternity now, whereas Nvidia is revving things up with G6X (AMD doesn't even have access to it yet!) and now G7. But where is everything else?
It's the same RTX 2080 story all over again, back in 2018.
Except that at that launch it had additional, quite darn heavy features (the RTX hardware logic) that ate up silicon real estate, rather than just a price increase for basically the same thing.
True, but it didn't "perform" in RT either.
Informative article, thanks John.
Once again we are screwed. Damn, this is getting annoying to say the least!
With no generational node change to speak of, these results were all predictable just based on the shader unit and core count differences between the 4080, 5080, 4090 and 5090.
Definitely a skip gen if you are on high end 40 series already.
Long live Pascal.
At this point there is no generational leap; it's just brute forcing across the board with some gimmick techs. You'll pay more to gain a bit more by having more TDP, and the industry is already conditioned by its reliance on frame gen and upscaling. It's gonna backfire soon and progress will slow down even further.
Of course there is no big generational leap; the chips are made on the same node.
Better get used to it, because we've pretty much hit the limit of node shrinks for GPUs. The best way forward to gain performance will be creating a new paradigm for generating frames that is faster than current traditional methods (wrongly called "rasterization," since all graphics, including ray tracing, are rasterized).
The 5080 is a joke: a minuscule upgrade along with a hefty price. It's in reality a 5070 rebranded to fool people. Nvidia should be ashamed of themselves! But I feel sorry for poor Jensen, who only has 100 billion dollars on the pile; he needs all your money!
Nvidia's probably waiting for the launch of AMD's new cards, nerfing the results on purpose, and using this time to say they updated their drivers to counter-attack after the release. Does it make sense to have all this additional hardware that's larger in every way, only to be 20% faster when it comes to the 4090/5090? Whereas the 3090 and 4090 are 80% apart in performance using the same 384-bit bus width and slightly faster GDDR memory?
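For context on the bus-width point: memory bandwidth is just the bus width (in bytes) times the per-pin data rate, so the same 384-bit bus scales only with memory speed. The data rates below are the commonly published figures for these cards, stated here as assumptions:

```python
# Memory bandwidth = (bus width in bits / 8 bytes) x per-pin data rate in Gbps.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

# Data rates are the commonly published figures, used here as assumptions.
print(bandwidth_gbs(384, 19.5))  # RTX 3090, GDDR6X -> 936.0 GB/s
print(bandwidth_gbs(384, 21.0))  # RTX 4090, GDDR6X -> 1008.0 GB/s
```

That is, the 3090-to-4090 leap came almost entirely from the node and architecture, not from memory bandwidth, which only rose by about 8%.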
I think I am going to bring my mighty old 8400 GS back from its retirement.
This has to be a typo; it should likely have said 4070, with that paltry raster uplift. Heck, it's getting closer to the margin of error!
So the stack presented thus far…
5090 = 4090 Ti
5080 = 4070… seems to be all watered down and priced higher (after the 40 series price fix of the '80 to look cheaper this time around)
Nvidia and AMD – proud killers of PC gaming!
Australian pricing just released, Nvidia can shove the entire series.
If you are on anything under the 30 series then go for it, but anything newer, forget it.
I feel bad for you guys in Australia. I know you have to account for the exchange rate on the dollar but even after that you guys get overcharged on hardware 🙁
Yeah, it's getting pretty bad with everything down here. I went from being an enthusiast getting the high-end GPU every gen to now skipping a gen. That started around the 20 series.
We don't have large places like Micro Center where you can get some good deals or bundles.
It seems it is having loads of PCIe 5.0 signal issues as well; tons of peeps are having issues with these 5080 cards.