
The Division 2 runs significantly faster in DX12, over 20fps increase in CPU/RAM-limited scenarios

Back in January, Ubisoft’s Technical Director Calle Lejdfors claimed that The Division 2 would run faster in DX12 than in DX11. And since the French company has provided us with a review code for this new looter shooter, we’ve decided to test our NVIDIA RTX 2080Ti and see whether those claims were true or not.

For this benchmark test between DirectX 11 and DirectX 12, we used our Intel i7 4930K (overclocked to 4.2GHz) with 16GB of DDR3 RAM at 2133MHz, NVIDIA’s RTX 2080Ti, Windows 10 64-bit and the GeForce 419.35 driver.

The Division 2 comes with a built-in benchmark that tests four areas/regions. Some of these areas are GPU-bound, while others stress the CPU and RAM more. In our opinion, this benchmark tool is also representative of the in-game performance.

At first glance, the performance difference between DX11 and DX12 is not that big. In DX11 we had an average of 87fps on Ultra settings at 2560×1440. Our GPU was used at 91% and our CPU was used at 64% (so no, we were not CPU limited as some would have guessed). In DX12, we had an average of 94fps and our GPU and CPU were used at 93% and 65%, respectively.

However, while the difference in average framerates is not that big, there is a huge difference in minimum framerates. Towards the end of the benchmark there is a big firefight in which DX11 reaches its limit.

In that particular scene, our system was able to deliver 59fps in DX11 and… wait for it… 80fps in DX12. Yeap, we are talking about a 20fps performance difference between these two APIs.

We’ve said it before and we’ll say it again: DX11 has been limiting a lot of PC configurations, especially due to its increased driver overhead. Moreover, this driver overhead is even higher on AMD’s hardware, so we strongly suggest that AMD owners use DX12 instead of DX11.

As we’ve said numerous times, owners of newer PC systems can mitigate these DX11 driver overhead issues by using RAM modules clocked at higher frequencies. Yes, you can use more powerful hardware in order to brute-force your way around these DX11 API issues, but that shouldn’t be necessary when newer and more efficient APIs, like DX12 and Vulkan, can run the very same games faster on the very same hardware.

So kudos to Massive for proving us right and for offering a DX12 mode that runs faster than DX11 in its latest title. Now imagine how much faster games like Assassin’s Creed Odyssey could run in DX12 (especially if they weren’t also burdened by DRM solutions like VMProtect and Denuvo).

58 thoughts on “The Division 2 runs significantly faster in DX12, over 20fps increase in CPU/RAM-limited scenarios”

  1. Good for Ubisoft that The Division 2 uses DX12, that it runs faster than in DX11, and that it runs well.

    Too bad that for some reason this sequel just feels too similar to the last game. Even some of the problems from the first game still happen in this one. The sequel also feels somehow more rushed and lacks some of the charm that the first game has.
    Just not enough new features or gimmicks to breathe new life into it.

    Also, the radar and GPS in this game are outright horrible. They spasm out way more than in The Crew 1 and 2, which had a relatively good GPS, though it bugged out really often on PC.

  2. I wanted to love this sequel, but I’ll stick with the original for now. Maybe try Anthem, though that looks rough, too. Division 2 looks fine, it’s just… too much more of the same thing.

  3. Great, can’t wait to actually play the game in a few days and get into that sweet sweet endgame that I’ve been hearing so much about

    1. I’m debating whether to get it or not. Been having a good time with Anthem, but after 40 hours I’m getting “bored” due to the lack of activities to do. Also, the loot isn’t where it should be. On the other hand there’s Division 2 coming out, and the gameplay isn’t as much fun as Anthem’s, BUT the content seems to be there to keep me occupied for 50+ hours. Still, it’s $80 CAD for me. Ugh, games got expensive.

      1. See, I bought The Division 2 because I really love the way the game is shaping up in terms of content. I saw a Reddit post about the endgame content for TD2 and it seems to be an extreme amount of content for a game like this, and I really like what I’m seeing. I find the gameplay to be adequate for TD2; I didn’t like Anthem though. Although none of my friends actually got Anthem, we have all purchased TD2 and will probably start a clan at some point and have some real fun.

        It sucks that the game is $80 CAD for you; maybe you should try getting it off GMG or something like that at a reduced price? That’s what I did. Got the game for $52, which is a steal if the game supposedly has 30-40 hours of base content plus another 20 hours of endgame stuff to do.

  4. “Assassin’s Creed Odyssey could actually run in DX12 (especially if they weren’t also plagued by all their DRMs like VMProtect and Denuvo)”

    So we’ve finally resolved to accept this as fact even though there has yet to be any proof that the past two AC games have high CPU usage as a result of DRM?

    1. Nah, the AC games had the same performance profiles/issues before Denuvo. Their engine just can’t handle the complex geometry/AI numbers they’re trying to push.

    2. I guess so.

      With Denuvo, it really is case by case and comes down to how the developer implements it.

      For some reason, every time a single game on this site is tested, the commenters start screeching as if it applies to all games with Denuvo. Many games with Denuvo have been found not to suffer a performance hit, and some games with Denuvo *have* been hit.

      Like I said, it seems to be case by case.

      1. And it’s never been proven in the case of Assassin’s Creed to be a culprit.

        So at this point, it’s still an unknown, not an unproven axiom that should be referenced casually in articles.

  5. DX12 could always do this; it’s just that Async Compute got in the way and the discussion turned into a shouting match about how NVIDIA didn’t support Async Compute in hardware, or had “no proper DX12 support”. You don’t need “full” DX12 support to get the benefits; you only support what you think is beneficial to your GPU architecture or your way of thinking.

    1. Not just async compute. DX12 makes things more complicated on the GPU side. That’s why, even with proper async compute hardware, DX12 did not guarantee better performance for AMD.

  6. Interesting how Resident Evil 2 and DMC 5 seem to run better in DX11 on both Radeon and NVIDIA. I wonder if most DX11 implementations are just not good enough, or badly made, for AMD.

  7. Yeah but… then you have to deal with games that look blurry as shlt because you can’t use ReShade’s sharpen filter in DX12. NVIDIA’s game filter is an alternative, but the sharpen effect it provides looks terrible.

    1. This is probably the worst argument against DX12 (or just newer APIs in general) that I’ve ever heard.

      1. For a person who never uses sharpening and is used to blurry games or sub-par anti-aliasing, it’s probably a terrible argument.

          1. Inferior to what exactly? Downsampling is the highest quality anti-aliasing solution available.

            ReShade is a crappy Instagram filter.
            Tell Crosire to implement it for DX12 instead of complaining that a new API makes it hard to have sharp games (which is completely dependent on the game, btw, and has nothing to do with DX12).

          2. I wouldn’t say nothing; we already know that FXAA, for example, looks better in DX12 (I remember seeing a comparison a long time ago) and some AA implementations can only be done in the low-level APIs. As I remember, Far Cry Primal on consoles had its own special AA implementation.

          3. Source? Far Cry Primal could have used a custom AA solution for reasons other than the API (their own rendering pipeline on console). PS4 uses OpenGL after all. I don’t see why an API would change an AA method such as FXAA which operates in post process.

          4. PS4 doesn’t use OpenGL, it uses GNM and GNMX. Also, I can’t find where it says FXAA looks better in DX12, but I remember it. Consoles used HRAA in Far Cry Primal.

          5. Yes, PS4 does use GNM/GNMX but if you look at what that API actually looks like, it’s very similar to OpenGL. PS3 essentially used OpenGL as the API. GNM/GNMX is an evolved form of the PS3 API.

            Looking into HRAA, it seems it is a post-process AA with a temporal filter. Very similar to SMAA T2x used in Crysis 3 on PC, or TXAA offered by Nvidia. I doubt this has to do with the API.

          6. Well no, because OpenGL is a high-level API, and consoles use both a low-level API and a high-level API. Mantle had to be created on the PC to get that lower-level access.

          7. On the anti-aliasing, yes, the API does have a say. Mantle, for example, has extensions that expose more capabilities for anti-aliasing, like programmable sample patterns.

          8. I didn’t say that API has no say in general. I said in the specific case of HRAA, it has little to do with the API.

          9. Yes you were right about the sharpening option having nothing to do with the API version.

          10. Inferior in the sense that 4K on a non-native 4K monitor isn’t worth the framerate sacrifice compared to native 1440p with temporal anti-aliasing and a sharpening filter. I agree that ReShade is a crappy Instagram filter in general, with the exception of sharpening, which makes every game look a lot better and which I personally can’t live without.

            So why would I choose 4K at 60 fps or less for an image quality that only looks 10 to 20% better than native 1440p + TAA at 120 fps?

          11. Inferior to what exactly? Downsampling is the highest quality anti-aliasing in existence.

            ReShade is a sh**ty post-process filter.

  8. Please buy a new CPU. You’re running a nearly 6 year old processor with an $1100 video card. Of course you have severe CPU limitations that most other people with that card will not have.

    DX12 was designed to help with that CPU bottleneck specifically, which is why you get such extreme differences between the APIs here.

    Maybe overclock this processor further and show us that? You should surely be able to get at least 4.6GHz from it.

    I know you are trying to get newer hardware this year, maybe look into some of these tests again to demonstrate the difference if you eventually pick up a Zen+ in the summer or something.

    1. Intel processors haven’t changed that much in the last 6 years, honestly. It is fine.
      You DO NOT need a high end processor for gaming, unless the game runs like hot garbage.
      If you just want to go look at big numbers, go to a benchmarking site. This isn’t it.

      1. This particular game engine with this card is without doubt, WITHOUT DOUBT seriously constrained by this CPU.

        I guarantee it. I guarantee that if you put a better CPU in there, the difference between the APIs will drop and the framerates will rocket upwards. The results will be way different. It’s not a good test; it’s an unrealistic scenario.

        So in short this talk of DX11 v DX12 in this scenario is a poor comparison. All it is illustrating is an extreme example of DX12’s CPU optimisation because the CPU is not very fast.

        An example literally nobody with this expensive high end card will see because they won’t be pairing it with a 4th gen Intel processor. They will be pairing it with something probably 20 percent faster, or more!

        I am well within my rights to point this out. That the comparison is not very good.

        1. I honestly don’t think this is an unreasonable test. Each of the last three CPUs I’ve owned went through a few generations of GPUs.

          I would imagine that many people still have 3rd or 4th gen Intel CPUs. One of my mates still has a 3570K, for example. He wants to upgrade it soon. So a 4930K is a reasonable CPU to use in my opinion.

          First I got a Core 2 Duo, which was originally paired with a 9600GT, and that was later replaced with a GTX 260.

          My second CPU was a 3570K, and I got a GTX 660 to go with it. I also bought a second-hand R9 280X and then later a 290X to pair with it. I then bought an R9 Fury and ran that with my 3570K for a while until I upgraded my processor to a 5820K. I then purchased a GTX 1080 as I got bored waiting for Vega, and have recently purchased a second-hand 1080Ti which is now paired with my 5820K.

          So as you can see, over a period of ten years I’ve gone through 8 GPUs and only 3 CPUs. Granted, the purchases of the 1080Ti and 290X were within the same generation, as I’ve always looked out for second-hand bargains, so I’d say over that period four or five GPUs wouldn’t be unreasonable for some people.

          Also, the 4930K is Ivy Bridge-E and not Haswell, so it’s an even older architecture.

          1. I am sure plenty of people have older CPUs, but they definitely won’t be pairing them with a $1100 video card such as this!

            Secondly, what has happened in the past does not mean it will carry into the future.

            Current games, including this one, are built primarily for consoles, and consoles have feeble, low-end mobile CPU performance. So CPU demands are rather low, and this has not presented much of a problem even when trading up GPUs and keeping an older CPU.

            However next gen consoles are likely to have at least an 8 core Ryzen, much much more powerful. Desktop level performance. It’s safe to say future games will also be better threaded, and much more CPU performance will also be required on PC because of this.

            Therefore in the very near future of PC gaming quad core processors are likely to be overwhelmed, just as dual cores were 5-6 years ago. Dual cores are now bottom, bottom end. Not even minimum specs for many titles.

            Did you pair many new graphics cards with dual core processors after 2012?

            No, you did not. Likewise:

            Very, very few people will be pairing a quad core CPU like this with a future GPU as fast as a 2080Ti. The scenario where lots of people keep a quad core and put it with a 2080Ti level of GPU won’t be happening. It’ll be at least 3 years before 2080Ti performance is mainstream (e.g. $300), and by then the last of the fastest quad cores (the 7700K) will be 5+ years old.

            It’ll be at least 6 cores for gamers from this point on. Mainstream processors such as the i5 8400 and Ryzen 2600 are already 6 cores. Gamers will all be buying at least 6 cores this year, as they did last year to be fair. Nobody has bought a 4 core gaming machine with a view to keeping it long term since early 2017.

            The past does not always equal the future. Consoles dictate minimum CPU requirements, and new consoles are about to get an enormous CPU upgrade, ensuring a step change on PC.

          2. The 3970X is a hexacore CPU, so your point about quad cores doesn’t relate to my point. And anyway, there will be many people still using a 6700K or 7700K for the next few years.

        2. WITHOUT DOUBT – except for the part where it’s averaging 65% usage. The article doesn’t mention any stutters or sharp drops, which would happen if the CPU hit 100% usage – which would, in turn, be the defining sign of a bottleneck / need for an upgrade.
          Some first gen Intel processors are still fine for a lot of games, save for edge-case scenarios like BF5, which pummels a lot of processors.
          Your statement that one should get at least 4.6GHz from that CPU demonstrates a bit of a lack of understanding about how overclocking works – every chip is unique, and it’s entirely plausible this chip was not a “win” in the silicon lottery.
          The notion of upgrading every generation, or every other generation, is just garbage from Intel. I’ve seen plenty of older chipsets handle things fine – my buddy’s i7-2600K is driving 1440p 144Hz without a problem in conjunction with a 1070Ti.

          1. Firstly – Just because a game does not hit 100 percent CPU usage does not mean it won’t run (A LOT!) faster on an inherently faster CPU. That’s not how it works; you obviously don’t know anything about this subject.

            A game might not fully load 8 threads of a 2012 AMD FX 8350, but it can still run waaaaaaaay faster on a 2017 12-thread 8700K. In fact, hardly any games fully load 8 threads of anything, but they will still run faster on a faster CPU…

            Secondly, I know how overclocking works. I also know how a 4930K overclocks; if it’s on a decent board with decent cooling, it should be safely capable of those speeds.

            Unless it’s a terrible, terrible example it should easily do better than 4.2 GHz. Easily.

        1. It’s true, at least in WoW ;).
          1070ti
          8086k
          16gb ram
          1440p screen

          Sadly, in most games it does nothing and serves only as a marketing tool for Windows 10. But when it’s done well, like in WoW, it really is a huge improvement.

  9. For PvP players this game is utter trash, don’t buy it. PvE is good though, you’re gonna enjoy it if you didn’t play TD1!

  10. A question to people who are well-versed in DX:
    Is it really DX12’s merit, or simply some DX11 screwup? I thought only DX12-exclusive games could make good use of DX12.

    1. DX12 makes better use of the CPU in certain workloads and is able to issue more draw calls. Games in DX11 have to be programmed to use fewer draw calls, as more draw calls require more CPU time to process, and if the CPU power isn’t there or the draw call limit is hit, then draw calls get culled.

      Basically, DX12 uses the CPU more efficiently because it’s closer to the metal, as the saying goes, and if you can use the CPU more efficiently the GPU benefits, as it gets the processed data faster. More draw calls means more information on screen that has to be processed by the CPU: increased geometry or LOD, for example. (See the sketch after the comments for what this multithreaded command recording looks like.)

    2. It is a bit complicated, really. We see good performance in pure DX12 games because there is no DX11 version to compare against. DX12 excels at eliminating CPU bottlenecks, but at the same time it might create more complicated issues on the GPU side. Hence, when both APIs can be compared, we see inconsistent performance in DX12, because DX12 doesn’t make everything faster.

  11. Imagine if all of Ubi’s other games ran faster using DX12 and weren’t bogged down by “you’re a pirate, gimme money anyway” DRM.
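For anyone curious about the draw-call discussion in comment #10 above, below is a minimal sketch of the D3D12 mechanism being described: command lists recorded on several CPU threads and submitted together, whereas DX11’s immediate context funnels every draw call through a single thread and the driver. This is only an illustration; it assumes a C++ project against the Windows 10 SDK with d3d12.lib linked, and the worker count, empty command lists and omitted pipeline setup are placeholders rather than anything taken from the game.

```cpp
// Sketch: multithreaded command-list recording in D3D12 (Windows 10 SDK, MSVC).
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1; // no DX12-capable GPU/driver

    // A single direct queue that will receive all recorded command lists.
    D3D12_COMMAND_QUEUE_DESC queueDesc = {}; // defaults to a direct queue
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&queueDesc, IID_PPV_ARGS(&queue));

    // One allocator and one command list per worker thread. This is what lets
    // DX12 spread draw-call recording across CPU cores, unlike DX11's single
    // immediate context.
    const int kWorkers = 4; // illustrative worker count
    std::vector<ComPtr<ID3D12CommandAllocator>>    allocators(kWorkers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(kWorkers);
    for (int i = 0; i < kWorkers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocators[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocators[i].Get(), nullptr,
                                  IID_PPV_ARGS(&lists[i]));
    }

    // Record in parallel. A real renderer would set a PSO, root signature and
    // vertex buffers and issue DrawInstanced() calls here; this sketch only
    // closes empty (but valid) lists so it runs without a window or swap chain.
    std::vector<std::thread> workers;
    for (int i = 0; i < kWorkers; ++i)
        workers.emplace_back([&lists, i] { lists[i]->Close(); });
    for (auto& w : workers) w.join();

    // Submit every recorded list in one call from the main thread.
    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
    return 0;
}
```

In a real renderer each worker would bind its pipeline state, root signature and resources and record actual draw calls before closing its list; the point of the pattern is simply that the expensive per-draw CPU work is spread across cores instead of being serialized as in DX11.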
