
Rise of the Tomb Raider – New patch adds DX12 multi-GPU & Async Compute support

Crystal Dynamics has released a new update for the PC version of Rise of the Tomb Raider that adds DX12 multi-GPU support, as well as DirectX 12 Asynchronous Compute on AMD GCN 1.1 and NVIDIA Pascal-based GPUs, for improved GPU performance. This patch will be auto-downloaded from Steam, and you can read its complete changelog below.

Rise of the Tomb Raider – Patch #7 Changelog:

  • Adds DirectX 12 Multi-GPU support. Many of you have requested this, so DirectX 12 now supports NVIDIA SLI and AMD CrossFireX configurations.
  • The benefits of DirectX 12 are still the same, but now you are less likely to be GPU bottlenecked and can reach higher framerates due to the improved CPU utilization DirectX 12 offers.
  • Adds utilization of DirectX 12 Asynchronous Compute, on AMD GCN 1.1 GPUs and NVIDIA Pascal-based GPUs, for improved GPU performance.
  • On the latest Windows 10 version, V-sync can now be disabled (Windows Store version), and behavior of disabled V-sync has been improved (Steam version).
  • Improvements to stereoscopic 3D rendering, including fixes for NVIDIA Surround and AMD Eyefinity in combination with stereoscopic 3D.
  • Improved default settings for certain integrated GPUs.
  • Removes the Voidhammer Shotgun for users that do not have Cold Darkness Awakens DLC or have not yet unlocked it by rescuing prisoners in Cold Darkness Awakens.
  • A fix for a save issue where some game state could get lost in very rare circumstances.
  • This patch force-disables the Steam Overlay when using DirectX 12. This is due to stability issues with the Steam Overlay in combination with DirectX 12.
  • A fix for crashes with VXAO enabled on NVIDIA GTX1080 cards.
  • A variety of other smaller optimizations, bug-fixes, and tweaks.
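
For readers wondering what “Asynchronous Compute” actually means at the API level: under DirectX 12 an engine can create a dedicated compute queue alongside its graphics queue, so compute work (lighting, particle simulation, post-processing) can overlap rendering on GPUs whose schedulers allow it. The sketch below is purely illustrative and hypothetical, not code from the game; it only shows the queue and fence setup a D3D12 title would typically use.

    // Minimal, hypothetical D3D12 sketch of "async compute": a separate compute
    // queue whose work can overlap the graphics queue on capable GPUs.
    // Assumes `device` was created earlier with D3D12CreateDevice().
    #include <d3d12.h>
    #include <wrl/client.h>
    using Microsoft::WRL::ComPtr;

    void CreateAsyncComputeQueues(ID3D12Device* device,
                                  ComPtr<ID3D12CommandQueue>& gfxQueue,
                                  ComPtr<ID3D12CommandQueue>& computeQueue,
                                  ComPtr<ID3D12Fence>& fence)
    {
        // Graphics (direct) queue: renders the frame.
        D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
        gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
        device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

        // Dedicated compute queue: work submitted here may execute concurrently
        // with graphics on GPUs that schedule the two queues independently.
        D3D12_COMMAND_QUEUE_DESC computeDesc = {};
        computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
        device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

        // A fence lets the graphics queue wait for compute results only when it
        // actually consumes them, instead of serialising the two queues.
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));
        // Later: computeQueue->Signal(fence.Get(), value);
        //        gfxQueue->Wait(fence.Get(), value);
    }

Whether the overlap translates into real gains depends on how the GPU schedules the two queues, which is presumably why the patch notes restrict the feature to GCN 1.1 and Pascal-based GPUs.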

91 thoughts on “Rise of the Tomb Raider – New patch adds DX12 multi-GPU & Async Compute support”

    1. “as well as utilization of DirectX 12 Asynchronous Compute, on AMD GCN 1.1 GPUs and NVIDIA Pascal-based GPUs, for improved GPU performance”?

      1. I asked the question *after* reading that info.
        Posting that info again does not answer my question.

      2. Hello, this must be a very sad day for you. Full DX12 with async in another game. Sad day for all DX11 fans 🙂

        1. Have you heard, btw?

          Dead Rising 4 isn’t a Play Anywhere title! 😮

          I look forward to seeing whether or not it gets cut from your hilarious list now…….. ^^

          1. Who cares? I only care about DX12, and all I want is the end of support for old DX11. Nothing else matters. As long as Dead Rising 4 is “DX12 only” I’m happy with this title. DX12 FTW!

          2. Also, according to id Software developers, DX12 for Xbox One & DX12 for Microsoft Windows are two separate things (even though they share a lot of coding, of course) 😮

    2. Because NVIDIA lied and Maxwell doesn’t support Async, despite them claiming it just needed to be enabled driver-side and then never releasing such a driver.

      Maxwell is useless when it comes to DX12 and Paxwell ain’t that much better.

        1. No, there’s no source for this info because he is trolling. NVIDIA spent 2 billion US dollars researching and inventing Pascal, which is more than AMD is worth as a company, and it shows in the results. For example, Pascal’s 16nm architecture can hit 2300 MHz while being super power efficient. Meanwhile AMD’s 14nm architecture is struggling to match 28nm Maxwell clock speeds. Not only that, but AMD’s new 14nm 480 GPUs are barely, and I mean barely, any more power efficient than NVIDIA’s aged 28nm architecture.

          So, long story short, they are trolling, and the results show that to be the case. NVIDIA would have pulled apart the first AMD GCN GPUs that launched over 4 years ago and figured out exactly how they worked, including the asynchronous compute engines, so the fact that these clowns think asynchronous compute is some secret weapon that has caught NVIDIA with their pants down is adorable. I mean, AMD made a name for themselves by pulling apart other people’s CPUs and cloning them before they got an official x86 license. Once an architecture has gone on sale to the general public it’s no longer a secret, as both NVIDIA and AMD pull apart each other’s GPUs to see how they work.

          Long story short, NVIDIA has known about asynchronous compute for years, and only an idiot would argue that Pascal is rehashed Maxwell. If any architecture could be criticised for not being a major upgrade on previous architectures, it would be AMD’s 14nm GCN 4.0. I mean, look at how NVIDIA has worked to make virtual reality 3x more efficient, to the point where the 1070 totally trounces the Titan X at VR or multi-monitor despite being close in 2D gaming.

          I like to think of myself as unbiased; I actually own AMD and NVIDIA GPUs right now. However, I disagree with the nonsense some AMD zealots are spouting.

          1. The R9 370/370X uses first-gen GCN from 4 years ago; most of the 200 series had first-gen GCN as well, just like the 7000 series. Recycled GPUs.

          1. Really…. you AMD fanbois are pathetic. All of your comments simply reek of jealousy. Plain and simple.

          2. The AMD fanboys are pathetic, case in point when someone decides to call the Pascal architecture rehashed Maxwell, seeing as 16nm Pascal is able to hit clock speeds of 2300 MHz while being super power efficient. Meanwhile AMD’s 14nm architecture is struggling to match the clock speeds of 28nm Maxwell, and power efficiency is barely improved over NVIDIA’s 28nm Maxwell. NVIDIA spent 2 billion researching and developing Pascal, which is more than AMD is worth as a company, and the results above speak for themselves, yet AMD fanboys are branding Pascal a Maxwell rehash?

            Also, these clowns don’t realise that NVIDIA would have put the first GCN GPUs, which launched now over 4 years ago, under the microscope and figured out EXACTLY how they work, including the asynchronous compute engines, yet they still keep blabbing about how NVIDIA has been beaten by AMD’s super secret invention, asynchronous compute. LOL, nonsense. It’s like they don’t understand that once an architecture hits the market the rival firms pull it apart and understand exactly how it works. I mean, how do they think AMD was able to clone Intel’s CPUs all those years before becoming an official x86 licensee?

  1. 1080p Ultra settings other than Textures on High, SMAA, HBAO+, running 970s in SLI with an i7 4770K kept at stock for the test. Seeing around an 18% performance boost in this patch with DX12 on over DX11 🙂

          1. 1440p+ forever! 1080p in 2016?! I hate 1080p since I bought a 1440p monitor 3 weeks ago.

          2. Forgot to add /s as in SARCASM. ;P (Thought the 24 fps bit was enough.)

            Are you content with your new LCD? (Brand, model?)

          3. I bought a Dell P2416D (24” 1440p 60 Hz IPS). I had an LG 22MP55 (21.5” 1080p 60 Hz IPS) until then. My next upgrade will be when 4K monitor prices drop to 300 euros or close to it, but this will take a lot of years. PC performance has decreased as a result of it: while on 1080p I could run almost everything at max settings at 60 fps, now I can still run all games at max settings (except Quantum Break), but some of them now run at even 30 fps. However, graphics and image quality are much better, and that’s what matters. PC is a 2500K, 12 GB DDR3 1600 MHz, GTX 970 G1 Gaming 4GB.

          4. I’m no expert, but it looks like you’ve chosen your shiny new LCD (quite a heavy piece of HW, btw) quite well according to ratings etc.
            Nice setup, which should last for a few years if NVIDIA isn’t going to intentionally cripple the performance of the 9xx series via drivers even more to boost the sales of their 1xxx series.

          5. Of course I chose well. I read a lot of reviews of many 1440p monitors before I made my choice. As everyone told me, and from what I read in reviews, this is the best 24” 1440p monitor.

          6. Shhhh, don’t tell anyone or “the man” will come for you. It’s too dangerous to have people with such special abilities running around normies causing public disturbances.

          1. And I’ve had a 9999K for 10 years, and a 281763762K for 20 years. Who cares, when it’s not a realistic resolution to run on today’s hardware?

            4k will maybe be a thing when we are up to the GTX 1280

            The GTX 1080 can’t even max out everything at 1080p on some games while maintaining a solid 60. 4k? please.

          2. The problem with the 1080 is not performance but price and availability. Also, you can’t really consider demanding GameWorks effects, because NVIDIA made them to slow everything down.
            The 1080 is a pretty decent card for 4K, actually. It is always a balance between FPS stability and visual quality/resolution etc., as you will always have limited performance. For that reason it is always smart to use well-optimized effects, and games for that matter, because if a game is badly optimized, hardware never really helps you; there is only so much hardware can do. It is better not to support games that aren’t well optimized than to just buy better HW!

          3. “Also you can’t really consider demanding GameWorks effects, because NVIDIA made them to slow everything down”

            Do you realise that these effects are advanced graphics enhancements? They influence performance by their very nature, and that’s logical, because graphics features are not performance-free. Please stop trolling about GameWorks.

          4. Years of developing effects that are optimised for all platforms so consoles can run them now. Look at SSAO in Tomb Raider: it took them a long time to even make that run on consoles, they made it essentially free with async, and it’s still nowhere near NVIDIA’s HBAO+, never mind VXAO.

            Where is AMD’s HDAO now? It was in 2 titles; TressFX was in one title, used by one dev. Yet again AMD can’t support its software, it just gives it away for others to do the work, just like Mantle.

        1. It is not irrelevant, it is the future; we just need Vega/1080 Ti for it. The question is, do you really need it? Pay a thousand dollars for such a card just to be able to run games at a similar speed to what you can get on a 480 in HD, for 200 USD?
          That’s the question, really!

          1. 1440p+ forever! 1080p in 2016?! I hate 1080p since I bought a 1440p monitor 3 weeks ago. I also saw games running at 4K at the shop where I bought the 1440p monitor (on high-end PCs, of course) and I realized that 4K and 1440p don’t have that much of a difference, so I reckoned I’d get a 1440p monitor, like I did. You will have much better graphics than 1080p but without the very high requirements of 4K.

      1. The whole point of a low-level API is reducing CPU overhead, and at 1080p your CPU is working a lot harder than it is at 4K.

          1. Yes the hell it is. That is why when they test CPUs in gaming, they test at lower resolutions.

          2. CPU usage per frame does not increase at lower resolutions; performance merely becomes more CPU dependent, since you get much higher framerates, which allow for clearer measurements, and the CPU has to send more frames to the GPU.

          3. Yes it does: you become CPU bound the lower the resolution goes, and at 4K CPU usage goes down because you are more GPU bound. I’ve never seen my CPU cores max out at 4K; however, they do at 1080p.

          4. You are clueless. I can run a game at 1280×720 and the CPU usage is higher than at 1920×1080.

          5. It’s to do with framerate, right? The higher the framerate, the higher the CPU usage.

          6. Nope, higher resolutions do not decrease the CPU workload per frame, nor does lowering the resolution increase it.

            Tests are done at lower resolutions/settings because at higher resolutions the % differences in framerate tend to be small, almost within the margin of error.

          7. lol yes it does… Maybe you should PC game since right now you sound like you don’t
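
Both sides of the exchange above are describing the same effect. A rough, hypothetical model (illustrative numbers only, not measurements from this game): per-frame CPU cost is roughly constant, per-frame GPU cost scales with pixel count, and whichever side is slower sets the framerate.

    // Toy model of CPU- vs GPU-bound framerates (hypothetical numbers only).
    #include <algorithm>
    #include <cstdio>

    int main() {
        const double cpuMsPerFrame = 7.0;  // game logic + draw submission; roughly resolution-independent
        const double gpuMsAt1080p  = 6.0;  // GPU render cost for one 1080p frame

        const struct { const char* name; double pixelScale; } res[] = {
            {"1080p", 1.00},   // 1920x1080 baseline
            {"1440p", 1.78},   // ~1.78x the pixels of 1080p
            {"4K",    4.00},   // 4x the pixels of 1080p
        };

        for (const auto& r : res) {
            const double gpuMs   = gpuMsAt1080p * r.pixelScale;     // GPU cost grows with pixel count
            const double frameMs = std::max(cpuMsPerFrame, gpuMs);  // the slower side sets the frame time
            const double fps     = 1000.0 / frameMs;
            const double cpuBusy = 100.0 * cpuMsPerFrame / frameMs; // share of each frame the CPU is working
            std::printf("%-5s %6.1f fps, %s-bound, CPU busy %5.1f%%\n",
                        r.name, fps, gpuMs > cpuMsPerFrame ? "GPU" : "CPU", cpuBusy);
        }
        return 0;
    }

With these made-up numbers the CPU cost per frame never changes, yet at 1080p the CPU is busy almost the whole frame because it has to prepare far more frames per second, while at 4K it idles most of the time. That is why CPU reviews benchmark at low resolutions, and why a low-overhead API like DX12 helps most when the GPU is not the limit.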

      2. 1440p is the best for today’s gaming PC. I bought a 1440p monitor 3 weeks ago (i5 2500K, GTX 970, 12 GB RAM) and now I really regret that I kept 1080p so long! Because while performance of the PC has somewhat decreased, all games look much better. So I advise all of you who have a PC similar to mine but for whatever reason are still on 1080p: go to 1440p. I bought the affordable 1440p monitor from Dell, the Dell P2416D, for 290 euros!!

      1. If you had a clue about flyby benchmarks, then you would know that the minimum is just the transition and lasts a fraction of a second. In the actual game that does not happen.

    1. HBAO+ kills AMD cards, and what’s funny is I have seen YouTube reviewers use it in DX12 tests with the RX 480 to give NVIDIA cards the advantage in DX12, if there is any… lol. Everyone is starting to see the NVIDIA bias from the YouTube reviewers I have seen.

      1. This game is one good example of DX12 and shows very weird results; it seems like another title killed by GameWorks. Though it’s nice that TressFX hair got so much better since the previous version: in the original TR, TressFX killed performance, and here you lose almost no performance when you turn it on. Awesome. Let’s hope there will be some proper async compute support, because until now the DX12 version was the crappiest version of any game ever released.

        1. Give the game async compute support and it doesn’t matter if it is needed or not, right? 🙂

          “in the original TR, TressFX killed performance, and here you lose almost no performance when you turn it on”

          Does the TressFX version in this TR support the same feature set as the previous game, or is it simpler with several features missing? Because it is not possible to simulate hair and fur with no or minimal performance impact.

        2. It is not killed by GameWorks, for crying out loud. Will you please stop the anti-GameWorks propaganda, please?

          1. AMD is killed by GameWorks, and HBAO+ is all GameWorks. Also, tell that to Project CARS: you couldn’t even turn off NVIDIA PhysX… lol

          2. There is only CPU PhysX in Project CARS. So what do you want to turn off? It is like you demanding the option to turn off physics in any game. Stop spreading BS.

          3. No, he is NOT wrong. I have looked into Project CARS and there’s no GPU PhysX; it’s the same PhysX that both the PS4 and XB1 manage to run without issue in the console versions of the game, and the same CPU PhysX that the 360 and PS3 managed to run without issue in games like BioShock Infinite. I think the same CPU PhysX is also used in The Witcher 3 without issue. Please stop spreading fairy tales about nasty NVIDIA.

          4. There is no GPU-based PhysX in Project CARS. If you’re saying this, you clearly don’t know how PhysX works.

      2. I know a lot of people with AMD GPUs and they tell me HBAO+ does not destroy performance.

      1. You want to know what I would like to see? How Pascal performs in DX12 compared to DX11. That’s still my biggest concern as to whether to get the 1060 or the 480.

        1. Look at any NVIDIA mainstream card in the last 5 years and they all lost performance over time. AMD cards, on the other hand, gained. The 460/560/760 etc. were faster when released, and after 1-2 years they were much slower than the Radeon competition.
          It is because NVIDIA only ever benefits the newest architecture in their drivers/GameWorks, and also because of less memory (6 < 8).
          So the real question is for how long you want that card. If for more than 1.5 years, then I would take AMD for sure. AMD also promotes the independence and democratic nature of the PC as a platform, which NVIDIA really does not.
          The difference between AMD and NVIDIA is like the difference between Sanders and Trump 🙂

          1. What you are saying about NVIDIA GPUs losing performance over time is utter nonsense and propaganda. You say this because Maxwell improved compute performance by a massive amount and AMD improved their drivers; it has nothing to do with Kepler losing performance. Why don’t you tell people how the Fury X runs like junk? Even a GTX 970 can match it in old games like Crysis.

        2. Get the 1060, basically. Unless you want to do a multi-GPU config (which I strongly advise against), the 1060 is the option that makes more sense.

          1. So far, based on their hype, it is. But I have to wait and see if it’s true they are giving 980 performance for $250. That IS a great deal, unless it’s in VR only, where the 480 gets quite close according to some benchmarks. Again, I’ll have to wait and see.

  2. Finally DX12 is on par with or even better than DX11 on my PC (980 Ti, 16 GB DDR3 2400 MHz, i7 3770K @ 4.2 GHz).
    I see around a 1-2 fps increase in overall framerate, but the maximum framerate shot up from 131 in DX11 to 142 in DX12.

  3. I just ran the ingame benchmark in DX11 and DX12 modes to compare.

    GTX 1080 @ 2025/5000
    i7 3770K @ 4.4 GHz
    8GB DDR3

    1080p resolution and using SMAA and the “Very High” graphics preset unchanged.

    DX11
    Mountain peak: 154.21 fps (min: 75.66, max: 231.88)
    Syria: 109.13 fps (min: 30.19, max: 149.37)
    Geothermal Valley: 107.40 fps (min: 11.58, max: 162.99)

    Overall: 124.41 fps

    DX12
    Mountain peak: 162.34 fps (min: 80.83, max: 209.89)
    Syria: 124.97 fps (min: 62.87, max: 157.62)
    Geothermal Valley: 124.07 fps (min: 15.99, max: 162.14)

    Overall: 137.83 fps

    1. AMD Fury X – great improvement in minimum fps (Overclock 3D Results):

      1080p (minimum fps ):
      DX11: 51 fps
      DX12: 62 fps (+20% performance)

      1440p (minimum fps)
      DX11: 27 fps
      DX12: 42 fps (+55% performance)

      4K (minimum fps)
      DX11: 12 fps
      DX12: 19 fps (+50% performance)

      Can’t wait for more DX12 games. DX12 FTW!

      1. Yeah, I was hoping NVIDIA was not stupid enough to do it since the RX 480 is out… I guess they are too full of themselves now. AMD really needs to step up its marketing. Market share needs to shift toward AMD or we will continue to see NVIDIA pulling BS stunts.

  4. Doesn’t mean AMD are right. AMD have played the victim many times; they are just appealing to feelings rather than facts, because there are none from AMD other than blame.

    1. Async compute is why AMD gets a big boost in DX12 titles, and the fact that async compute wasn’t in Tomb Raider shows I was right that it was just a patched DX11 title.

      You will see much different results now that it has async compute support.

      1. The simple fact is that Pascal can run Async Compute; Maxwell can’t, because it is broken there. I’m running Patch 7 with Async Compute enabled on my GTX 1070, and it runs lovely and smooth.

          1. AMD will get better performance with async, that is old news. Pascal is able to run it now, just with not much if any gain, because it’s preemption; NVIDIA need to work with devs to get it tuned, just like AMD do.

  5. Again, you seem to not understand basic things. No, Pascal doesn’t support Async Compute at the SM/CU level like AMD does; it does it via preemption. The point is that Pascal can use Async Compute now, whereas on Maxwell it’s broken.

    “Adds utilization of DirectX 12 Asynchronous Compute, on AMD GCN 1.1 GPUs and NVIDIA Pascal-based GPUs, for improved GPU performance.”
