EIDOS Montreal and NIXXES have added DX12 support to the main build of Deus Ex: Mankind Divided. As such, we decided to test the game and see whether this latest DX12 build is able to match the performance of the DX11 build on NVIDIA’s hardware.
As we’ve already said, the benchmark tool of Deus Ex: Mankind Divided is not representative of the in-game performance. Therefore, we decided to test a number of scenes from the game’s city hubs.
Unfortunately, DX12 still performs poorly on NVIDIA’s hardware. As we can see below, Deus Ex: Mankind Divided was unable to take full advantage of our GTX 980 Ti in DX12. In DX11, however, the game was running smoothly.
As a result of the underwhelming DX12 performance, the game was unable to offer a constant 60fps experience. Therefore – and once again – we strongly suggest avoiding the game’s DX12 mode if you are gaming on NVIDIA’s hardware.
For our tests, we used an Intel i7 4930K (turbo boosted at 4.2GHz) with 8GB RAM, NVIDIA’s GTX 980 Ti, Windows 10 64-bit and the latest WHQL version of the GeForce drivers.
DX12 images are on the left, whereas DX11 images are on the right.

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC graphics cards.”
Contact: Email






Another game with tacked-on DX12 underperforming vs DX11.
Nothing to see here folks. We’ve seen this countless times and things won’t change.
Still interesting that, with the exception of Rise of the Tomb Raider, it only seems to affect Nvidia.
No, that’s because John only tested it on NVIDIA.
Let’s wait for someone to test it on AMD and see how it goes.
I know, but still, if you look at DX12 results you’ll see AMD improving and Nvidia, at best (with the 1060), staying the same (which could imply that Nvidia is starting to gimp their older cards?). The only games where this didn’t happen were Rise of the TR (although the gap shrank with the latest patch if I recall correctly, and even there Nvidia’s performance in DX12 was worse than in DX11) and Ashes, where Nvidia saw small gains. I don’t know what’s going on there, but it’s still interesting.
I don’t think it implies NVIDIA are gimping older cards, as they don’t do that. All that it says to me is that some developers are porting their console AMD optimisations over to PC and calling it DirectX 12.
Also, AMD’s DX11 drivers are woefully inefficient, so simply moving to DX12, which allows all CPU cores to send draw calls to the GPU, will make a massive difference on its own. Hell, AMD’s DX11 driver is so poor that high end NVIDIA cards like the 980 Ti can be pushed to 60fps with an i3 CPU, whereas AMD becomes CPU bottlenecked and requires at least an i5 even for AMD GPUs far lower down the performance curve than the 980 Ti. So simply removing that one issue with DX12 is going to make it look like AMD is getting bigger performance gains, as will porting AMD console optimisations over to PC and calling it DX12.
Gears of War 4 is running really well on both brands of hardware under DX12
“I don’t think it implies NVIDIA are gimping older cards as they don’t do that” It was just speculation on my part, but it’s not like they have not done (or tried to do) that before (I think it was the 970 outperforming the 780, something that shouldn’t happen). I won’t argue about AMD’s drivers having a bigger overhead compared to Nvidia’s, and that hurting their performance in DX11… but that doesn’t explain some recent cases, like Far Cry Primal and The Division where performance is basically the same on both brands.
Yes, if you look at Far Cry Primal, that’s a game that simply loves memory bandwidth, so the more you have the better it will run. However, play Far Cry Primal or The Division with an i3 CPU and see what happens: typically you can get away with a lower end CPU on NVIDIA than you can on AMD. Of course this is hardly noticed, because whenever a website benchmarks a new GPU they always start by saying “We have used a powerful CPU to remove any CPU bottlenecks”, which is fine and all, but you usually pair powerful CPUs with high end GPUs, not mid-range ones.
Maxwell is simply a more modern GPU, so I don’t see too much wrong with Maxwell outperforming Kepler. I mean, even the 1070 outperforms the 980 Ti despite similar compute performance, even in older games that existed before the 1070 launched, yet the 980 Ti isn’t running those older games any slower than before. GCN is the same: modern games run better on the newest GCN 4.0 architecture than on 3.0 even with similar compute performance.
Kepler suffers from a GPGPU-oriented design. The way I heard it, while Nvidia was able to increase their double precision utilization, they suffered from under-utilization in single precision, which gaming applications rely upon. Hence with Maxwell they reduced the SM to 128 cores (SMM) from 192 (SMX). And Nvidia did not stop there: within that 128-core SMM, Nvidia divided the SM into four clusters. All of this was done to increase GPU utilization. AnandTech, in their GM200 review (I think it was the Titan X), said that Maxwell is a pure gaming architecture.
Then here we are with Pascal. We see that the GP100 SM arrangement is very different from what we saw with Maxwell, and yet why do GP102/104/106 resemble the Maxwell SMM arrangement? Most likely to avoid repeating the problem they had with Kepler. Nvidia has now decided to go with two different designs for their GPUs: one geared towards pure compute applications (GP100) and the other for gaming (GP102 and the rest).
In the case of FC Primal and The Division, AMD is able to catch up most likely because of the consoles. With their hardware in the two main consoles, many game engines are tweaked to console hardware, because most game development focuses on consoles first and is then scaled up to PC. But if you look at raw power alone, you will see that Nvidia is more efficient in DX11. Look at Fury X vs 980 Ti for example: that is 8.6 TFLOPs vs 5.6 TFLOPs, and yet why in many cases does the Fury X not significantly outpace the 980 Ti?
“In the case of FC Primal and The Division, AMD is able to catch up most likely because of the consoles. With their hardware in the two main consoles, many game engines are tweaked to console hardware, because most game development focuses on consoles first and is then scaled up to PC.”
It is true that consoles come first but, by that logic, AMD would be able to catch up with, if not outright surpass, Nvidia’s performance in almost every game. This has not been the case, and while GameWorks is largely to blame in most of those cases, there are vendor agnostic games that show a significant difference between the two brands (like Doom on OpenGL for Nvidia, or Call of Duty Black Ops 3 for AMD).
As for the Fury X having more TFLOPs than the 980 Ti, as far as I understand, that just refers to the amount of data it can transmit, nothing else. In fact, the Fury X fares better at 4K because it’s not bottlenecked.
That’s why it’s already quite a feat that Maxwell is able to keep up with AMD’s GCN. AMD never hid the advantage they have with consoles. Back in 2012 they said Radeon was the right way to go because they dominate the console hardware. Then, in regards to FL12_1, they said it is not an important feature to have because the console hardware does not have it, implying that game developers will not bother with it. Even Polaris does not support FL12_1. That’s how confident AMD is in their console win and how it will influence game development on PC. Even with the upcoming Shader Model 6, I heard more features based on the GCN architecture will be incorporated into DX12. But because of the consoles, AMD also did not dare to make big changes to their current architecture on PC. Polaris is just another GCN iteration. If they made big changes to the architecture they might lose the “similar architecture to console” advantage.
And TFLOPs does refer to a card’s raw performance in general, but how effectively the GPU can turn that raw power into real performance is a different story. In the Fury X and 980 Ti case it is clear that the 980 Ti is more effective at converting its raw performance into actual performance.
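For reference, the TFLOPs figure is just shader cores x clock x 2 (a fused multiply-add counts as two floating point operations), so it measures raw throughput and says nothing about how well the chip keeps those cores fed. A quick back-of-the-envelope check of the two numbers quoted above (the helper name and the reference clocks are my own assumptions):

```cpp
#include <cstdio>

// Theoretical FP32 throughput in TFLOPs: shader cores * clock (GHz) * 2 ops per FMA.
double tflops(int cores, double clock_ghz) { return cores * clock_ghz * 2.0 / 1000.0; }

int main() {
    std::printf("Fury X : %.1f TFLOPs\n", tflops(4096, 1.05)); // ~8.6 at the 1050MHz reference clock
    std::printf("980 Ti : %.1f TFLOPs\n", tflops(2816, 1.00)); // ~5.6 at the 1000MHz base clock
    return 0;
}
```

Whether those theoretical numbers turn into frames is exactly the utilization question being discussed here.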
“DX12 still performs poorly on NVIDIA’s hardware”
Nothing new. Probably this game uses a lot of asynchronous compute shaders, which are only properly supported on AMD Radeon. Nvidia must try to create better drivers.
Drivers can’t fix what the hardware is missing. AMD will always have an advantage with DX12/Async until NV update their GPU architecture. Pascal (1xxx) is nothing more than a Maxwell (9xx) die shrink with higher clocks. Maxwell and now Pascal do not have Async hardware support.
But NV does perform much better with multi-GPU DX12 support in the latest DE:MD preview build branch, as I showed in benchmarks in the comments of the last DE:MD article, though it currently has a weird frame bug (mGPU DX12 support has only just appeared in early beta).
NV hardware will benefit most from DX12 for people with multiple GPUs, IF devs implement it well, because they have full control, whereas DX11 SLI/Crossfire is purely a driver hack and isn’t something game devs implement.
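That “full control” is quite literal: under DX12 the engine, not the driver, has to find and drive every GPU itself. A minimal sketch of just the first step, enumerating the adapters through DXGI (everything after this, creating a device per adapter and deciding how to split the frame, is the part devs have to implement well):

```cpp
// Minimal sketch: list every GPU an explicit multi-adapter DX12 path could use.
// Build as a Windows console app and link against dxgi.lib. Error handling is minimal.
#include <dxgi.h>
#include <wrl/client.h>
#include <cwchar>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory1> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        // A real engine would now create an ID3D12Device per adapter and decide how to
        // distribute the frame (AFR, split frame, dedicated post-processing GPU, etc.).
        std::wprintf(L"Adapter %u: %ls (%llu MB dedicated VRAM)\n", i, desc.Description,
                     static_cast<unsigned long long>(desc.DedicatedVideoMemory / (1024 * 1024)));
    }
    return 0;
}
```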
You know that DX12 is more than just async compute, right? Also, going low level allows devs to tune their code for a specific architecture. Even if Nvidia hardware did not have async, devs should be able to extract more performance because of the low level access. Instead of Nvidia creating better drivers, it is the developers who should do better at extracting more performance from Nvidia hardware, because most optimization is now in the developers’ hands.
Oh no, a DX12 game not running well on Nvidia… well, that seems to be the pattern, except for GoW 4.
One could think that Epic Games actually know how to do a proper PC version (when they take their time).
Epic Games did not develop Gears of War 4. The Coalition did 😉
Rip my stupidity lol.
meh… the PS4 version is the same, another bad port for PC gaming LOL
I have gotten vastly better performance in DX12 with my MSI notebook with a desktop GTX 980 (6800 i7, 32 GB DDR4, Windows 10) at 3440 x 1440 resolution
Yep, R9 390 and i5 4590, DX11 I get mid to low 50s in the large hub spaces on a mix of low-mid settings, DX12 I get perfect 60fps on the very-high preset :). Shame the load times take a massive hit…
It really needs to be stated again that DX12/Vulkan (DX12 in particular) are low level APIs that put a good part of the optimization work that NV/AMD used to do onto the game developers, who were already struggling to get games out of the door on time in a reasonable state. These types of low level APIs take 5-10 years, and often multiple iterations (DX13 etc.), for game developers to even begin to optimize properly.
Right now you’ve got the more top-tier developers in essentially experimental mode. It’s also no secret that both APIs were heavily “influenced” by Mantle; the popular comparison is async compute, which even second-gen Maxwell has but implements differently from AMD, and the current API implementations at best result in little performance gain for NVidia because of this. I suppose NVidia focused more on DX11 and banked on developers sticking with the higher level APIs due to their time constraints.
TL;DR: DX12/Vulkan in games is going to be “disappointing” or slightly above average at best for a while.
There are only three games right now with official Vulkan support, and all three run very well.
From Valve?
Ohhh right, forgot about that.
Last time I posted, some guy told me that Vulkan isn’t doing any good in the games which support it. I didn’t answer him, because some people around here get so butthurt when it comes to DX12 that they even lie about Vulkan performance. You can check: there are only three games on PC with official Vulkan support and all three run just fine, so you can’t say Vulkan is running like crap the way DX12 is. Don’t come at me with “it’s OK that DX12 isn’t running well, because Vulkan isn’t doing any good either”, because that is a lie.
Funny how Nvidia doesn’t have performance problems in Gears of War: Ultimate Edition, Rise of the Tomb Raider, Forza Motorsport 6: Apex, Halo 5: Forge, Forza Horizon 3 or Gears of War 4, but only in AMD sponsored games. Still, AMD fans keep trying to claim that AMD sponsored games run fine on both AMD and Nvidia.
Nvidia has problems in every single game! There is basically no low level API game where Nvidia gains any significant amount of performance.
Nvidia was right when they said Mantle brings no benefit; they just forgot to mention “for their architectures”.
They excel in DX11 and suck in DX12. AMD has it the opposite way.
BTW, AMD’s GPUOpen uses the MIT licence, so developers can read, share, redistribute or even modify it, so there is no way NVidia could be hurt by it the way AMD was hurt by GameWorks.
I have no problem at all with Nvidia! Mantle was created by AMD… for AMD’s architecture… so do you really think this would advantage Nvidia? Come on!
But we don’t care, Nvidia doesn’t need DX12 to win in almost every game! AMD can’t write great drivers… so with DX12 it puts that in the hands of the developers… and like +Akirascreaming said, “Where is that directx12 magic we were promised”. We’re still waiting!
NVidia can handle roughly twice as many draw calls as AMD under DX11.
DX12 can handle 8-16x more draw calls at the same latency!
So unless Nvidia somehow gets 7-8x faster under DX11, they will need DX12 too!
Or of course you can always get a CPU with 800% single-thread IPC performance^^ (joke)
Also, remember that when DX10/11 came out it actually ran slower than DX9 for a long time.
It will need some time, but if done right you can easily get 200% min FPS, and Mantle in Thief showed it is achievable, though not for Nvidia at the moment, mostly because all games also have to be DX11 compatible and they cannot suddenly use 7 times more draw calls when you turn DX12 on!
A game obviously has to be built specifically for that amount of draw calls, and then it couldn’t be run in DX11 at all!
Also, the maximum number of draw calls is not something we could even run on today’s HW anyway; the HW would be the limiting factor (not the API), and of course they made this API for future use too.
DX12/Vulkan is much more about removing API limitations so the API no longer holds the HW back. That was the promise and that’s what we got; not sure what you were promised though…
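For what it’s worth, that draw call scaling comes from the submission model itself: under DX12 every CPU core can record its own command list and the whole batch is handed to the GPU in one go, which DX11’s mostly single-threaded immediate context never allowed. A rough sketch of the pattern, with pipeline state, barriers and the actual draws omitted and the function name (RecordAndSubmitParallel) made up for illustration:

```cpp
// Rough sketch of multithreaded command-list recording in D3D12.
// Error handling and actual draw recording are omitted for brevity.
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

void RecordAndSubmitParallel(ID3D12Device* device, ID3D12CommandQueue* queue, unsigned workerCount)
{
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(workerCount);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workerCount);

    // One allocator and one command list per worker; they are not safe to share across threads.
    for (unsigned i = 0; i < workerCount; ++i)
    {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT, IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
    }

    // Each thread records its own slice of the scene's draw calls independently.
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < workerCount; ++i)
    {
        workers.emplace_back([&, i]
        {
            // ... SetPipelineState / IASetVertexBuffers / DrawInstanced calls would go here ...
            lists[i]->Close();
        });
    }
    for (auto& t : workers) t.join();

    // Everything the workers recorded is submitted to the GPU in a single call.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```

Under DX11 the equivalent work would mostly have to funnel through one thread’s immediate context, which is why the CPU side becomes the bottleneck long before the GPU does.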
I know about draw calls, I “play” with the 3DMark DX12 test for fun. Nvidia can use DX12 to handle more draw calls too! (You can see the result.) And Thief ran well on my Nvidia system too. 🙂
But removing this limitation is not the only new thing DX12 gives us. (And yes, the draw call increase is a step in the right direction.)
The only thing I don’t like about DX12 and Mantle (dead now / a big part of DX12) is that optimisation is back in the hands of game developers.
This reminds me of 3dfx and Glide a long time ago.
So now we have two different architectures (like always), from AMD and Nvidia, and game developers have to support both. Before, Nvidia and AMD did this at the driver level. Now developers have to invest time and money to optimize for each architecture.
But like you wrote, DX12 is new, like DX10/DX11 long ago; this will take time to become more optimized and performant.
So wait and see!
https://uploads.disquscdn.com/images/c7b71e59bb974dbd8df97fe4d6d56303c1410e81ee2a14735085df3379698f88.png
Wrong, Doom was a massive increase.
Massive? I thought massive was for Radeons, but what do I know.
The point is Nvidia never pushed low level APIs; they do not need them as much as AMD, who made GCN for the low level access consoles have, and it is not suited to a highly abstracted API such as DX11 and lower. Obviously AMD will see bigger increases, which basically puts their GPUs in the position where they should have stood all along. AMD couldn’t even make two separate architectures, one for highly abstracted APIs and one for consoles and low level APIs on PC.
This has nothing to do with Nvidia, it is just the logical result. And the more developers get to grips with DX12, the better the results for Radeons will be.
Remember Thief with Mantle: Radeons were no match for Nvidia unless Mantle was used, especially on mid-range CPUs. GCN just does not scale well on DX11, and never will.
And that’s why AMD, instead of making a different architecture (which would not be suited for consoles) or better drivers (which wouldn’t help them much anyway), brought a low level console-like API to PC.
You are wrong again, it’s actually worse for Nvidia than that lol, and this is coming from someone who is thinking about ordering a 1080 this weekend. I can barely afford it, but with PayPal and paying for it over 6 months I could easily pay $180 a month until it’s paid off. But ramble ramble, I love to go on and on.
Anyways, Nvidia was pushing for DX12 before AMD even was, don’t you remember it being on Nvidia’s own site? Hell, at that time AMD was the one saying DX12 was never going to happen. Then when the performance numbers came out, Nvidia went quiet, very quiet, about DX12.
Vulkan, BTW, did see a nice boost on Nvidia. Also, a new API that promises more performance should be better for all hardware, like Vulkan is. I personally think it’s the devs’ fault; they still need to learn more about DirectX 12 and the programming aspect of it. PLUS it’s Nvidia’s fault too: they need to put async compute into their hardware 100% and cut their losses.
There is no such thing as an “architecture built for low level APIs”, and there is no such thing as an “architecture not suited for high level APIs” either. AMD’s problem in DX11 was utilization. They created the ACEs to solve the issue (hence even GCN 1.0 has them), but the problem is those ACEs are not visible to the DX11 API; they need async compute to be part of the API for the ACEs to function. That’s why AMD created Mantle: they wanted to steer MS in the direction they wanted.
Nvidia has fewer issues with utilization. That’s why their 5.6 TFLOPs 980 Ti is able to catch up to the Fury X easily despite the Fury X having 8.6 TFLOPs of raw performance. Their architecture has issues with async because their hardware is not built for it.
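To make the terminology concrete: at the API level, “async compute” just means submitting work to an extra compute-only queue next to the graphics queue, and whether that work actually overlaps is then up to the GPU’s scheduler (the ACEs on GCN). A minimal sketch, assuming an already created ID3D12Device and leaving out all the dispatches and fences (the function name is made up):

```cpp
// Minimal sketch of creating a graphics queue plus an async compute queue in D3D12.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void CreateGraphicsAndComputeQueues(ID3D12Device* device,
                                    ComPtr<ID3D12CommandQueue>& graphicsQueue,
                                    ComPtr<ID3D12CommandQueue>& computeQueue)
{
    // Regular "direct" queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&graphicsQueue));

    // Dedicated compute queue: work submitted here is allowed to overlap with the
    // graphics queue. How much actually overlaps depends on the GPU's scheduling
    // hardware, which is where GCN's ACEs come into the discussion above.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```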
Yeah, Gears of War 4 performance on Nvidia is simply amazing. AMD, not so much.
I’m running GoW4 at very high settings on my R9 390 at a locked 60fps with no in-fight stutters, I don’t know what you’re talking about
Adored TV told me AMD is not stacking up to Nvidia. So I figured I would take his word for it.
I don’t understand why they even bothered with DX12 and didn’t use Vulkan instead, if they were going to bring this game to Linux. What a gigantic waste of time.
Agreed, DOOM is a prime example of what Vulkan can do.
Good, AMD has the upper hand. Hello AMD, could you PLEASE capitalize on that and gain some market share. NV is alone on top.
Where is that directx12 magic we were promised
I will only update my 8.1 when I see that magic
😀
Nvidia doesn’t have async compute, so there is no magic… lol
If you strictly need async compute to gain more performance with a low level API, then you should just forget about going low level.
That is still some way off. Basically, current graphics engines are not designed for DX12, and developers need to drop DX11 support to really make use of DX12.
This explanation from a game developer makes it all clearer:
“The reason why we don’t see bigger performance boosts in DX11->DX12 ports is that most engine code bases are designed around DX11 (DX9 roots). If you simply implement DX12 behind an existing graphics API wrapper, you will basically end up emulating the DX11 driver. That’s not going to bring magical performance boosts. You need to completely rewrite your whole resource management pipeline, including data streaming, loading, memory management, dynamic resources, etc. You need to implement logic to group draws in bundles (and cache them) instead of submitting lots of tiny new draw calls every frame. You need to use ExecuteIndirect when drawing lots of bulk objects with similar state (such as shadow maps). And you need to use bindless resources to reduce draw call counts (bindless makes all resources available without binding -> resource changes don’t need draw call boundaries). None of the current DX12 PC games use bindless resources. The reason is that most engines are so tightly programmed around the old DX9/10/11 binding model. Changing that would require a major rewrite to the engine code base.
The biggest problem is: Major rewrite to support DX12 perfectly (bindless and all) would make the code base incompatible with the old DX9/DX11 binding and resource management model. Dropping DX11 support is not an option for most developers, as Windows 7 is still so widely used. Vulkan (supporting Windows 7) might be the black horse, allowing developers to move faster to these radically new rendering engine designs. As soon as we get fully featured robust HLSL -> SPIR-V compiler (it is bound to happen), there is nothing preventing studios in dropping the legacy DX11 support. That’s when we see big changes. IMHO Microsoft made a big mistake when they decided to limit DX12 to Windows 10. Many developers would have already dropped DX11 (in favor of DX12) if DX12 supported Windows 7.
– Sebastian Aaltonen, RedLynx Lead Programmer”
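To illustrate one of the points in that quote, this is roughly what the ExecuteIndirect path for “lots of bulk objects with similar state” looks like: the per-object draw arguments live in a GPU buffer and one call submits all of them, instead of the CPU issuing thousands of tiny draws. A rough sketch with hypothetical helper names, assuming the argument buffer is built and uploaded elsewhere:

```cpp
// Rough sketch of the ExecuteIndirect pattern for bulk draws in D3D12.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandSignature> CreateDrawSignature(ID3D12Device* device)
{
    // Each entry in the argument buffer is one plain (non-indexed) draw.
    D3D12_INDIRECT_ARGUMENT_DESC arg = {};
    arg.Type = D3D12_INDIRECT_ARGUMENT_TYPE_DRAW;

    D3D12_COMMAND_SIGNATURE_DESC desc = {};
    desc.ByteStride       = sizeof(D3D12_DRAW_ARGUMENTS);
    desc.NumArgumentDescs = 1;
    desc.pArgumentDescs   = &arg;

    ComPtr<ID3D12CommandSignature> signature;
    device->CreateCommandSignature(&desc, nullptr, IID_PPV_ARGS(&signature));
    return signature;
}

void DrawBulkObjects(ID3D12GraphicsCommandList* cmdList,
                     ID3D12CommandSignature*    signature,
                     ID3D12Resource*            argumentBuffer, // array of D3D12_DRAW_ARGUMENTS on the GPU
                     UINT                       objectCount)
{
    // One call replaces objectCount individual DrawInstanced calls (e.g. for shadow maps).
    cmdList->ExecuteIndirect(signature, objectCount, argumentBuffer, 0, nullptr, 0);
}
```

As the quote says, going further than this (bindless resources, rebuilt streaming and memory management) is where current engines would need a major rewrite.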
gg
And on MS games the CPU gains are delegated to their new copy protection.
This is starting to get really annoying, can you stop crying? Every time a DX12 game comes out you post 10 articles crying about Nvidia’s poor performance. This is going to happen again and again; deal with it and stop crying.
Because DX12 promised to offer better performance regardless of what GPU is being used under the hood. In this case, if developers are not confident they can get better performance than what the GPU driver team can do, they probably should not try their luck with DX12, though sometimes I also want to blame the sponsors that force devs to use DX12 even when they are not ready for it, like Remedy with Quantum Break.
Good thing the DX12 version looks exactly the same as the DX11 version then. Another game Win10 shills can’t use to push their agenda.
I guess you need to stop using a last-gen card; on my 1080 Hybrid, performance is identical between DX11 and DX12. No boost, but no drop either.
they look exactly the same
Runs poorly on the GTX 980 Ti because Nvidia’s DX12 multithreading & async have never been any good on Maxwell.
Watch every new card in the benches and you will see how obsolete Maxwell has become; well, Pascal is almost the same as Maxwell (“Paxwell”). Same same, but not the same. 🙂
Runs well on R9 Radeons, the RX 4xx series and the new GTX Pascal cards.
Bring it on!
As usual, Nvidia doesn’t have async compute and won’t for a while, if ever. DX12 async compute = win.
Your card is not DX12 friendly, man; it is your problem John, not DX12’s.
Hardware is never the problem. Do you understand what a low level API is? With a low level API you tweak your software to the specific architecture, not the other way around (tweaking the hardware to suit the software).
These cards’ architecture was heavily designed around DX11, and that’s why there is no gain on old GeForce cards.
You need to understand what a low level API is. There is no such thing as an architecture built for a low level API or a high level API, so there is no such thing as an architecture specifically built for DX11 or DX12. With a low level API you tweak your code to match the architecture’s strengths in order to maximize its performance potential. It is all up to the developers whether they are willing to optimize for every compatible DX12 architecture out there or not. One of AMD’s advantages is that their hardware is used in the consoles, so developers are more familiar with it. But then again, if you look closely, GCN 1.0 most often did not benefit much from DX12. And there is also the matter of exceeding the driver team’s optimizations: some developers readily admit that just matching the DX11 optimizations is already a very challenging task, let alone exceeding them.