Rebellion’s Strange Brigade has just been released on the PC and we are happy to report that it is the first game supporting both the DirectX 12 and Vulkan APIs. As such, and perhaps for the first time, we are able to benchmark these two APIs side by side and see how NVIDIA’s and AMD’s hardware perform with each of them.
For our tests, we used an Intel i7 4930K (overclocked to 4.2GHz) with 8GB of RAM, AMD’s Radeon RX Vega 64, NVIDIA’s GTX 980 Ti, Windows 10 64-bit and the latest GeForce and Radeon Software drivers.
In contrast to what we usually see in DX11 games, Strange Brigade ran silky smooth – and way faster – on AMD’s hardware. This further supports our claim that AMD’s DX11 drivers are currently poorly optimized and come with significant CPU overhead (at least compared to the ones that NVIDIA is offering for its graphics cards).
We’ve tested two resolutions: 1280×720 and 1920×1080. The game comes with a built-in benchmark, and for both scenarios we set our graphics settings to Ultra. In both resolutions, the AMD Radeon RX Vega 64 was able to beat the NVIDIA GeForce GTX 980 Ti by around 30fps.
The AMD Radeon RX Vega 64 did not favour any particular API in Strange Brigade, as DX12 and Vulkan performed almost identically (DX12 was slightly faster). On the other hand, our NVIDIA GeForce GTX 980 Ti clearly favoured DX12 over Vulkan. At 720p, DX12 was faster by around 20fps, and at 1080p Microsoft’s API was faster by around 10fps.
Our full PC Performance Analysis will go live later today or tomorrow, so stay tuned for more!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC graphics cards.”

M$ wins again. GG.
amd sponsored game to boot
AMD sponsored 1/10 games, Nvidia sponsored 9/10 games
not even close to that number.. good try to troll though
Who is trolling? Me? Please give us the true numbers.
We will have to go way back, and don’t count a game using HBAO+ as an Nvidia game, for example, or a game using PureHair as an AMD game… only those games that were actually paid for by Nvidia and AMD.
Yeah like all games sponsored by AMD run correctly, don’t make me laugh you fkin braindead.
Care to provide a counterexample?
Any chance you know of a list of AMD sponsored games? The list on their site (featured games) is woefully incomplete and only goes back a couple years.
Tomb Raider 2013 is a counterexample.
That’s why it sucks visually..
Lol, this retrd here has to understand he better stay on wccftech where others like him comment.
Good stuff, but you guys need to test those cards on something newer from nvidia too…
And some 0.1% and 1% lows too
Yeah agreed, make tests a bit more in depth and with more stats than now.
It’s really all about minimums, not highs, and here, DX12 is clearly winning.
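For readers wondering how those 1% and 0.1% lows are usually derived from a frame-time log, here is a minimal C++ sketch. It assumes one common convention – averaging the slowest N% of frame times and reporting that as FPS – though benchmark tools differ slightly in how they define it, and the frame-time values below are purely hypothetical.

#include <algorithm>
#include <cstdio>
#include <functional>
#include <vector>

// One common convention: average the slowest N% of frame times and report that as FPS.
double lowFps(std::vector<double> frameTimesMs, double percent) {
    std::sort(frameTimesMs.begin(), frameTimesMs.end(), std::greater<double>()); // slowest first
    size_t n = std::max<size_t>(1, static_cast<size_t>(frameTimesMs.size() * percent / 100.0));
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) sum += frameTimesMs[i];
    return 1000.0 / (sum / n); // average of the slowest frames, converted to FPS
}

int main() {
    // Hypothetical frame-time capture in milliseconds (e.g. exported from an OSD/logging tool).
    std::vector<double> log = {8.3, 8.5, 8.4, 9.0, 8.6, 25.0, 8.4, 8.7, 8.5, 12.0};
    std::printf("Average FPS: %.1f\n", lowFps(log, 100.0));
    std::printf("1%% low:      %.1f FPS\n", lowFps(log, 1.0));
    std::printf("0.1%% low:    %.1f FPS\n", lowFps(log, 0.1));
    return 0;
}

With a real capture of thousands of frames, the 1% and 0.1% figures expose exactly the stutter that average FPS hides.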
It’s a native API integrated into the operating system – it will always be faster than Vulkan
Now we need optional DX12 support in Bethesda games like Doom, Rage and Wolfenstein, so everyone can choose between Vulkan and DX12
DX12 having a portion implemented within the OS has nothing to do with the performance difference between the APIs; if that were the case, AMD cards would show the same FPS gap between the two as Nvidia cards do. Given how AMD cards fare, the differences seem to come down to how the GPU drivers add support for each API. This makes sense, given that AMD is on record as supporting both APIs through the same driver layer for their hardware, and given that the difference is about 3fps for AMD, it falls perfectly within the margin of error between two implementations that perform the same. Nvidia’s large disparity in performance comes from the methods they used in their implementation.
Nvidia beating AMD in DX11 games, but only at low resolutions, was a known thing long before Vega. The Fury was exactly the same – the Fury X was hopeless at 1080p and *had* to run at 1440p or higher to be any good.
And it’s not just AMD’s drivers. Yes, AMD’s DX11 driver has poor CPU efficiency, but that is far from the only reason that high-end AMD GPUs suck at low resolution. The main reason is that Nvidia and AMD GPUs have very different raw fillrate efficiency curves. Nvidia GPUs usually have much higher texel bandwidth, but become bottlenecked sooner. The lower the number of pixels per frame, the more efficient Nvidia can fill it. The higher the number of pixels per frame, the more efficiently AMD can fill it. This is a highly simplified explanation, obviously.
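To put rough numbers on the “pixels per frame” point, here is a small back-of-envelope C++ sketch that just computes how many pixels each resolution asks the GPU to fill per second at an assumed frame rate; the target FPS figure is purely illustrative, and real fill cost also depends on overdraw, shading and memory bandwidth.

#include <cstdio>

int main() {
    struct Res { const char* name; double w, h; };
    const Res resolutions[] = { {"720p", 1280, 720}, {"1080p", 1920, 1080}, {"1440p", 2560, 1440} };
    const double targetFps = 100.0; // illustrative target only

    for (const Res& r : resolutions) {
        double pixelsPerFrame = r.w * r.h;
        std::printf("%s: %5.2f Mpixels/frame -> %7.1f Mpixels/s at %.0f FPS\n",
                    r.name, pixelsPerFrame / 1e6, pixelsPerFrame * targetFps / 1e6, targetFps);
    }
    return 0;
}

1080p has 2.25x the pixels of 720p, which is why a GPU whose fill efficiency holds up at high pixel counts can look relatively stronger as the resolution climbs.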
So what does the CPU have to do with this, if the results are better on newer CPUs? They mitigate the driver bottleneck, because the end-to-end memory latency and peripheral comms bandwidth are better, and so on.
So why hasn’t AMD solved the driver problem after all these years but did solve it on DX12/Vulkan drivers? Pretty simple, DX11 is a huge monolithic library. DX12 is a heavily reduced (from DX11) library that is more like a translation layer than an abstraction layer. The amount of work required to optimise DX11 was probably a loss leader. The amount of work required by a *GPU vendor* to optimise efficiency for DX12 is significantly less. At a very basic level, the move from DX11 to DX12 shifts a huge amount of the burden from the driver developer to the engine developer, which also explains the slow uptake of DX12.
DX12/Vulkan/Metal were all driven by pressure from big studios (DICE for example) to have lower level interfaces that they could have more control over the hardware and more ability to optimise.
DX12 in particular (not so much Vulkan, and definitely not Metal, which is apparently really nice to work with) is not a particularly great thing for small studios that don’t have the resources to acquire people with the necessary skillset to work with very low-level interfaces.
By Metal you mean Mantle?
Or perhaps he’s referring to Apple’s newish ‘Metal’ graphics API
Also, don’t forget about maybe the most important thing about Radeon in DX12/Vulkan:
the hardware scheduler for processing the GFX pipeline.
ATI Radeon is built for future gaming (DX12/Vulkan, and also good for DX11.1 or 11.2).
And the statement that Radeon has DX11 CPU overhead is wrong.
A good example is BF1 or BF4 using Frostbite in DX11.1.
It’s all about the software vs. hardware scheduler…
Seems to also have Vulkan multi-GPU. I noticed both my GTX 1080 Tis loading up and performing really well at 7680×1440 – 110+ FPS with everything on minus AA.
Only issue is MSI Afterburner OSD is flickering, but the game itself is perfectly fine.
Can anyone else confirm this is true?
Is SLI even worth it? Isn’t the scaling lousy?
I don’t believe he’s talking about sli. Vulkan’s (and DX12’s) multi-gpu functionalities are separate from that.
Well, SLI-style rendering is still common with DX12 and Vulkan, just now known as explicit multi-GPU (similar to, but not the same as, SLI – requiring two of the same GPUs is the only common trait) or heterogeneous multi-GPU (what we know from Ashes of the Singularity, leveraging all the GPUs in the system as one monolithic beast).
But yeah i really mean multi-gpu 🙂
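For anyone curious what the explicit multi-GPU path looks like on the application side, here is a minimal C++ sketch that enumerates Vulkan 1.1 physical device groups, the mechanism through which linked GPUs are exposed. It assumes a Vulkan 1.1 capable loader and driver are installed and the program is linked against the Vulkan loader; whether two cards actually appear as one group depends on the driver settings.

#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Request a Vulkan 1.1 instance; device groups were promoted to core in 1.1.
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_1;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "No Vulkan 1.1 instance available\n");
        return 1;
    }

    // GPUs the driver exposes as linked show up in one group with physicalDeviceCount > 1.
    uint32_t groupCount = 0;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
    std::vector<VkPhysicalDeviceGroupProperties> groups(groupCount);
    for (auto& g : groups) g.sType = VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES;
    vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());

    for (uint32_t i = 0; i < groupCount; ++i) {
        std::printf("Device group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
        for (uint32_t j = 0; j < groups[i].physicalDeviceCount; ++j) {
            VkPhysicalDeviceProperties props;
            vkGetPhysicalDeviceProperties(groups[i].physicalDevices[j], &props);
            std::printf("  %s\n", props.deviceName);
        }
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}

A game that wants to drive both cards then creates one logical device spanning the whole group and distributes work across it explicitly, rather than relying on driver-managed alternate-frame rendering.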
Definitely worth it if you have an extreme monitor setup like mine with triple 1440p @ 144Hz (7680×1440). Not so much if you’re on 1080p-1440p @ 60Hz. Scaling is pretty good, from 60-70 FPS single to 110-130 FPS SLI in Vulkan with everything on ultra, tessellation on and AA off. Rebellion is one of the better devs that know how to optimize their games out of the gate, along with the Far Cry devs.
You guys clearly need a Pascal GPU and a newer CPU.