Earlier this month, HardwareUnboxed reported some interesting findings about NVIDIA’s GPUs. According to HUB, NVIDIA’s GPUs suffer from major driver overhead in CPU-intensive games under the DirectX 12 API. And today, HUB has tested more games in order to showcase these issues.
According to HardwareUnboxed, this driver overhead is mainly due to some optimizations that NVIDIA has done for its GPUs. These optimizations worked wonderfully in DX11 games, in which we’ve seen AMD’s driver falling behind. This is something we’ve criticized AMD for. However, the tides have turned with the arrival of DX12.
In CPU-intensive DX12 games, NVIDIA’s driver overhead has a negative performance impact. In fact, the NVIDIA RTX 3080 can be slower than the AMD Radeon RX 5600 XT when paired with really old CPUs.
To showcase its findings, HUB used Watch Dogs Legion, Shadow of the Tomb Raider, Horizon Zero Dawn, Death Stranding, Rainbow Six Siege, Cyberpunk 2077 and Assassin’s Creed Valhalla. In the games that significantly stressed the CPU, the AMD Radeon RX 5600 XT and RX 6800 XT were noticeably faster than the NVIDIA RTX 3080.
What this basically means is that owners of really, really old CPUs should avoid NVIDIA’s GPUs when playing DX12 games. The situation changes, however, when playing DX11 games. In DX11 games, NVIDIA’s GPUs are noticeably faster than AMD’s GPUs.
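As a rough way to picture the bottleneck, here is a toy model of a frame: the CPU side (game logic plus driver work) and the GPU each need some time per frame, and whichever side is slower sets the frame rate. All millisecond figures below are invented purely for illustration; `max_fps` is a sketch, not a real benchmarking tool.

```python
# Toy model: a frame is ready only when both the CPU side (game + driver)
# and the GPU side have finished, so the slower of the two sets the pace.
def max_fps(cpu_ms, driver_ms, gpu_ms):
    frame_ms = max(cpu_ms + driver_ms, gpu_ms)
    return 1000.0 / frame_ms

# Fast CPU: the GPU is the bottleneck, so the driver cost is hidden.
print(max_fps(cpu_ms=4.0, driver_ms=2.0, gpu_ms=10.0))   # -> 100.0

# Slow CPU: the same driver cost now lands on the critical path.
print(max_fps(cpu_ms=14.0, driver_ms=4.0, gpu_ms=10.0))  # -> ~55.6
```

In this model a faster GPU cannot raise the second number; only reducing the CPU-side work (including driver overhead) can, which is why a cheaper card can tie or even beat a faster one on an old CPU.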
Still, these are really interesting findings and it will be interesting to see whether NVIDIA will make any adjustments to its drivers (and architecture) in future GPUs!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC graphics cards.”
If devs were smart, and not subsidized by Nvidia, they would just switch to Vulkan. Nvidia is no different; planned obsolescence tied to the manufacturer’s warranty period is pretty standard, and even easier to implement with software-related products.
lol Vulkan is from the Khronos Group, and the President of the Khronos Group is a Vice President at Nvidia. Why would Nvidia subsidize people not to use Vulkan when they are actively involved in its development?
Neil Trevett
In July 2005 he became vice president of mobile ecosystem at Nvidia where he is responsible for enabling and encouraging visual computing applications on non-PC platforms, including mobile phones.
Just because one guy holds positions in both doesn’t mean Nvidia is actively involved in Vulkan development.
Nvidia themselves are probably more interested in game developers using Vulkan rather than DX12. Just look at the mess they made with OpenGL before. If they can pull off a similar feat, AMD and Intel will be at an absolute disadvantage when using Vulkan.
Fascinating findings.
Vulkan is the best API. And in Vulkan, AMD is the best…
I mean, technically yes, but I think it’s dependent more so on the genre of game. For instance, if you’re playing an RTS, that’s traditionally going to be very CPU-heavy; even if you increase your resolution and throw more work at the GPU, you’re wasting the same amount of CPU on the driver overhead, and that’s the problem. nVidia needs to come out and address this ASAP because in no world should an RX 480/580 be performing better than a 3000-Series Ampere card in any game.
Throwing more CPU power or a higher resolution at this should not be considered anything more than applying makeup to a pig.
” nVidia needs to come out and address this ASAP because in no world should an RX 480/580 be performing better than a 3000-Series Ampere card in any game.”
Agree with this. It’s unacceptable when a more expensive GPU gets beaten by a cheaper one from a few generations before. Yes, it may limit the GPU’s potential when paired with an older CPU, but losing to a cheaper GPU from a few generations back indicates that something is wrong.
it was a design decision for the architecture. Will Nvidia remedy it? Maybe yes, maybe not, because the tests were done in a non-ideal situation: pairing one of the highest-end GPUs with an older mid-range CPU and limiting the settings to 1080p medium just to purposely create the bottleneck.
and AMD also has its own issues in other areas. Like, have you ever seen a GTX 1650 Super / GTX 1060 3GB end up beating an AMD RX 5600 XT at 4K resolution?
https://tpucdn.com/review/watch-dogs-legion-benchmark-test-performance-analysis/images/performance-3840-2160.png
indeed, why don’t people put out an article on this. lol
I got 9400F and 3060ti for my 1080p gaming as well.
It is what it is. If people don’t like it, then they can always change their GPU. It’s like forcing Nvidia Maxwell to be better at async compute even though it doesn’t have the hardware for it. There are probably things that did not go as Nvidia expected.
When I say some things did not go as Nvidia expected, it was not about DX11 vs DX12 games in the future. As you said, Nvidia themselves are promoting DX12. But they still stick with their design decision and do not remedy the issue they have when the CPU is fully choked.
Take this as an example: why did AMD decide to invest heavily in HBM to replace GDDR5 instead of pushing GDDR tech forward into GDDR6? (It was AMD that developed GDDR4 and 5, and yet after GDDR5 it was Nvidia that worked with memory makers on GDDR5X, GDDR6 and finally GDDR6X.) That’s because AMD expected HBM to become cheaper and much easier to produce as time went by. In the end things did not go as AMD planned, so with RDNA, AMD moved back to GDDR memory instead of using HBM from top to bottom.
Nvidia expected that by now the majority of users would be on 1440p or 4K. And AMD was quite aggressive with its Ryzen releases before. If AMD had kept the same pace as before, we might see a processor such as the 5800X at sub-$300 instead of $450. But instead we see AMD pushing its 6-core, 12-thread offering towards $300. If this goes as Nvidia hopes, then there is less concern about Nvidia’s performance when the CPU is choked, because powerful hardware to brute-force past the issue can easily be obtained.
It is the same issue as the VRAM capacity in my earlier post. There are reasons why AMD adds more VRAM to its cards: unlike for Nvidia, a lack of VRAM can be more fatal to their performance. And AMD should have been aware of this issue for several generations already. So rather than fixing their memory issue, they just add more memory to their cards.
“they needed to warn users about the performance degradation.”
marketing 101: don’t talk about your product’s weaknesses. They don’t need to lie about it, but for the most part they won’t talk about it IF users haven’t found out about it. To me it’s simple: if you don’t like it, then don’t buy it; buy an alternative product. For those thinking this issue is worth taking to court, bring it on. I don’t think anyone will really object to that other than the company themselves.
“also, vram isnt fatal at all in 1440p. only 4k. i never exceeded above 8gb yet. you know usage vs allocation right?”
yeah, I was giving the example at 4K, but that’s not the whole point. I suggest you look at the whole performance test to see what happens with AMD cards in the Watch Dogs Legion test by TPU. The performance of the 5600 XT is fine up to 1440p, but why did it fall behind the likes of the GTX 1650 4GB, GTX 1060 3GB and RX 580/590 8GB at 4K? How did weaker cards, with less VRAM to boot, beat it? Another example is in AMD’s own sponsored game:
https://www.youtube.com/watch?v=693NBRwk8z4
then read back my original reply to the comment above. It was an Nvidia design decision. Will they fix it? Maybe yes, maybe not. But since it has already been happening for years, it might be something Nvidia has no intention of fixing, similar to the VRAM issue on AMD’s side that they did not want to fix. My point is mostly that AMD’s and Nvidia’s initial design decisions can have their own issues if things do not go the way they expected. That’s all there is to it. No point being bitter about it.
Why do you assume it’s a design limitation and not just a driver bug or poor optimization? All it takes is one thread being overloaded doing something trivial for an issue like this to occur. If you search around a little you can find decade-plus-old threads with people complaining about this very issue, like here for example: https://www.nvidia.com/en-us/geforce/forums/game-ready-drivers/13/115131/nvidia-drivers-cpu-overhead-something-to-chew-upon/
>nvidia moved from a hardware scheduler to a software based one
>you know people on forums arent always saying the truth?
of course, but in this case the issue is the same and it’s clearly been proven. A bottleneck is a bottleneck regardless, and they should try to alleviate it without requiring the end user to throw more cores at it.
>one core being overloaded, so unoptimized garbage games?
firstly, I didn’t say a core, I said a thread. More specifically, I was referring to the display driver, where if one thread is being worked too hard it can hang or overload the entire render pipeline.
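A small sketch of that idea (plain Python threads and queues, nothing driver-specific; the stage timings are made up): a pipeline of worker threads can only move as fast as its slowest stage, so one overworked thread caps everything downstream.

```python
import queue
import threading
import time

def stage(inbox, outbox, work_ms):
    """Pull items, do simulated per-item work, pass them on; None shuts the stage down."""
    while True:
        item = inbox.get()
        if item is None:
            outbox.put(None)
            break
        time.sleep(work_ms / 1000.0)  # simulated work for this stage
        outbox.put(item)

q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
# A fast stage feeding a slow one: the slow stage sets the throughput.
threading.Thread(target=stage, args=(q1, q2, 1), daemon=True).start()
threading.Thread(target=stage, args=(q2, q3, 10), daemon=True).start()

start = time.time()
for i in range(20):
    q1.put(i)
q1.put(None)  # sentinel propagates through the pipeline

results = []
while (item := q3.get()) is not None:
    results.append(item)
elapsed = time.time() - start
# ~20 items * ~10 ms in the slow stage: roughly 0.2 s in total,
# no matter how fast the first stage is.
print(len(results), round(elapsed, 2))
```

Making the first stage ten times faster changes nothing here; only speeding up the slow stage raises throughput, which is the analogy for one overloaded driver thread.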
>eliminated mostly from 1440p
I’m not sure about that. Higher resolution results in lower FPS, but that does not change the CPU utilization of the driver itself, from what I saw. I think more in-depth research will be needed to get a better idea of what is going on behind the scenes, though.
I disagree. Playing CP2077 with everything maxed using a 3080 / 3770k at 1440p was fantastic. AMD just can’t come close to that next gen performance.
what is your cpu ??
3770k
have you tried it with an AMD GPU before switching to the 3080? A 5600/5700/5800 or 6700/6800 maybe?
anyone stupid enough to put a 3080 with a 3770k can’t be reasoned with.
Nothing is wrong with doing this as long as he plans on upgrading his CPU soon.
Sometimes we PC gamers go a bit too far with this stuff, but I have to admit I’m more into CPUs anyway. I tend to upgrade just for the fun of it, as I love tweaking things, mainly memory lol.
I guess I shouldn’t upgrade from a GTX 1070 to a 3080 with my ancient i5-2500k LOL
Waiting for next gen AMD anyways. I can wait a bit more, plus GPU’s might finally be in the stores by then. (MAYBE LOL)
why wait, just get a new cpu now
Idk, PCIe 5.0, DDR5 and other things. It will be a crazy huge jump. I like those, and the end of the year is not too far off now.
The year just started and it’s April now LOL. That’s how fast time moves in Covidland.
Buy what you can afford, either way buying a better GPU will enable you to run games at higher quality settings, and then later you can upgrade mobo, memory and CPU, which isn’t exactly cheap these days. With my GTX 770 and 3770k, I couldn’t run Ark Survival Evolved at all, not even on the lowest settings (15fps), now with a RTX 2070 I can max out the settings, all on Ultra and get over 60fps with the same I7 3770k CPU. So it was definitely worth buying a better high end graphics card.
Look in the mirror. You’re talking complete nonsense.
AMD GPUs can’t compete in next gen performance, RT + DLSS.
you are a freaking fool putting a 3080 with that old pos of a cpu. even my much slower 1080 ti was held back by an oced 4770k in many if not most games. hell in Watch Dogs 2 I literally doubled the framerate in cpu intensive areas going from oced 4770k to 9900k even at 1440p. ffs your cpu cant even maintain 60 fps in some games you dimwit.
I get 95% + GPU utilisation due to RT and DLSS. My old 1080Ti FTW3 can’t come close.
You’re not the first to suggest the better approach would be to buy a lower-tier GPU, upgrade the CPU, and then upgrade the GPU again.
Sharp by name but moronic in nature 😉
Aye, the 3770k showed how old and slow it had become years ago. It will severely bottleneck any game that leans on the CPU along with the GPU. That CPU, along with the usual combo of DDR3 @ 1600, has aged quite a bit. One example is the now-old FA4: the 3770k would cripple the performance no matter what GPU you were using. Going with a 3080 is such a waste.
Yes, max settings, RT Psycho and DLSS Quality with a target of 1440p/60Hz. I end up with 45-60 FPS, which I find very playable. From memory, DLSS adds ~2ms to frame time.
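For reference, the frame-time arithmetic behind numbers like these is just a unit conversion (nothing here is measured; the 2 ms figure is the one quoted from memory above):

```python
# Frame time in milliseconds is simply 1000 / fps.
def frame_time_ms(fps):
    return 1000.0 / fps

print(frame_time_ms(60))  # ~16.67 ms per frame at 60 fps
print(frame_time_ms(45))  # ~22.22 ms per frame at 45 fps

# A fixed ~2 ms cost added to a 60 fps frame budget:
print(1000.0 / (frame_time_ms(60) + 2.0))  # ~53.6 fps
```

A fixed cost of a couple of milliseconds costs proportionally more frames the higher the frame rate already is, which is why such overheads are usually quoted in milliseconds rather than FPS.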
I’ve been meaning to upgrade the CPU for years. It’s been fine for gaming with the previous 980Ti/1080Ti and more than enough for programming. I’ve been waiting for the bug fixes on the 5950X boards, as I use a lot of USB ports, but now with supply and pricing I may just hold on until DDR5 is available.
Same here, I have no problems running the latest games. Obviously a faster CPU would run them quicker, but they are also a lot more expensive and money doesn’t grow on trees. I’ll upgrade eventually, but for the time being every game I’ve tried runs fine.
“What this basically means is that owners of really, really old CPUs should avoid NVIDIA’s GPUs when playing DX12 games. The situation changes, however, when playing DX11 games. In DX11 games, NVIDIA’s GPUs are noticeably faster than AMD’s GPUs.”
*** alright, for older cpus = DX11 rules the way…… ^^
if you have a really, really old CPU, you would most likely be playing older games which run on DX11, since newer games requiring DX12 will probably struggle on the older CPU anyway
DX12 games don’t struggle on my i7 3770k, but then I’m not obsessed with having 200fps minimum. 60fps with vsync is sufficient, as I can’t stand screen tearing and only have a 60Hz 1080p 24″ monitor.
I was planning to get a 3000-series card with my old a*s CPU, but I don’t think I will.
Buy what you can afford; a better GPU will make a big difference. I’m still using an i7 3770k, bought an RTX 2070 over a year ago and don’t regret it. PC gamers who brag about owning high-end CPUs and GPUs are obsessed with high framerates. 60fps min is more than playable.
The myth that AMD has terrible drivers is finally over.
https://uploads.disquscdn.com/images/a0f9eda45269c6cf558f85e48d2e2efc17f8260b73722b13b524a3b79e0413dd.jpg
My only question then is: what is considered old?
I have an i7 8700k, is that old?
old
Old as dirt.
Really old CPUs? WTF does that even mean? How old?
A faster CPU is always going to run better, but then they are also a lot more expensive. I have a i7 3770k with a RTX 2070 and can still max out most games. I also didn’t just buy this GFX card for DX12 games, as I have lots of DX9, DX10, DX11 games still to play and I also like retro gaming. I will upgrade my CPU eventually when the prices have come down. Whenever that might be!
I’ve just heard senile Joe is threatening the UK with tariffs of up to 25% on a host of UK exports in retaliation for a UK tax on tech firms. Capitalist greed, don’t you just love it.