YouTube’s ‘club4ghz’ has released two videos showcasing the performance difference between CRYENGINE V and CRYENGINE 3, as well as the difference between CRYENGINE V’s DX11 and DX12 modes. As we can see, CRYENGINE V significantly outperforms CRYENGINE 3.
For this comparison, club4ghz used the built-in Woodland Tech Demo, and with SVOTI enabled, CRYENGINE V runs around 20fps faster than CRYENGINE 3.
This is great news for game developers interested in Crytek’s engine, as simply moving to the latest version of CRYENGINE brings a substantial performance boost.
However, we do have to note that we were underwhelmed by the performance of DX12. As showcased below, the Woodland Tech Demo runs worse in DX12 than in DX11.
For what it’s worth, club4ghz used an NVIDIA GTX 780 Ti, so we initially wondered whether this was specific to NVIDIA’s GPUs. According to reports, however, CRYENGINE V’s DX12 mode runs worse on AMD’s GPUs as well.

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers SNES to be one of the best consoles. Still, the PC platform won him over consoles. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC Graphics Cards.”
It’s still fairly new, so it will take time to mature. I thought one of the big benefits of having low-level access was to benefit those with weaker CPU core performance. I would like to see some benchmarks of DX11 vs DX12 on some slightly dated AMD CPUs to see if there are any performance improvements.
Also, I remember reading an article on Digital Foundry a while back that brought up the possibility of DX12 allowing multi-GPU configurations to share memory instead of cloning it. That’s something I would really like to hear more about; the possibility of doubling your VRAM just from an API update would be the most dramatic performance increase imo.
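For the curious, this refers to DX12’s “explicit multi-adapter” model, where the application – not the driver – enumerates every GPU and manages each one’s memory separately instead of having resources mirrored across cards. Here is a minimal C++ sketch of just the enumeration step, assuming the standard D3D12/DXGI headers; the helper name is illustrative, not from any engine:

```cpp
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

// Illustrative helper (not from any engine): create one D3D12 device per
// hardware adapter so the app can manage each GPU's memory explicitly.
std::vector<ComPtr<ID3D12Device>> CreateAllDevices()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;                       // skip WARP/software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            // Each device owns its own heaps: the application decides
            // which resources live on which GPU, instead of the driver
            // silently cloning all of VRAM as in DX11 AFR setups.
            devices.push_back(device);
        }
    }
    return devices;
}
```

Whether pooled memory actually materializes is then entirely up to the application: with separate devices in hand, it can place different resources on different GPUs rather than duplicating them.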
Yeah, it’s still early days mate. Just give it time until games start using DX12 from scratch. Then we will see big improvements, or so we can hope 🙂
All good news…let’s wait for actual implementation in games and then enjoy those lovely extra frames or “performance boost”. 😉
PS: Off topic, but I’m playing Mad Max (got it 2 days ago) and have to say it is one of the best-looking open-world games, and it performs like a dream. Loving the game so far.
Same… but it has to be the fact that there is nothing to load in the desert lol
It looks so good in 4k 🙂
What is SVOTI?
Sparse Voxel Octree Total (Global) Illumination. Basically, it is a form of voxel-based global illumination that creates highly realistic lighting, light bouncing and large-scale ambient shadowing.
Footnote to Prithvi’s post: alternatively known as SVOGI to Unreal 4 users.
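To make that explanation a bit more concrete, here is a heavily simplified, CPU-side C++ sketch of the core idea behind voxel cone tracing – the technique family SVOTI/SVOGI belongs to. The VoxelGrid type and its placeholder data are purely illustrative, not CRYENGINE code:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3   { float x, y, z; };
struct Sample { Vec3 radiance; float opacity; };

// Illustrative stand-in for a prefiltered voxel octree: 'footprint'
// would normally select a mip level, so wider cone sections read
// coarser (more heavily averaged) voxels.
struct VoxelGrid {
    Sample fetch(Vec3 /*pos*/, float /*footprint*/) const {
        return {{0.2f, 0.2f, 0.2f}, 0.1f};   // placeholder radiance/opacity
    }
};

// March one cone through the grid, blending samples front-to-back
// until the cone is fully occluded.
Vec3 traceCone(const VoxelGrid& grid, Vec3 origin, Vec3 dir,
               float halfAngle, float maxDist)
{
    Vec3 light{0.0f, 0.0f, 0.0f};
    float occlusion = 0.0f;
    float t = 0.1f;                          // start just off the surface
    while (t < maxDist && occlusion < 1.0f) {
        float footprint = 2.0f * t * std::tan(halfAngle); // cone width at t
        Vec3 p{origin.x + dir.x * t,
               origin.y + dir.y * t,
               origin.z + dir.z * t};
        Sample s = grid.fetch(p, footprint);
        float w = (1.0f - occlusion) * s.opacity;  // front-to-back blend
        light.x += s.radiance.x * w;
        light.y += s.radiance.y * w;
        light.z += s.radiance.z * w;
        occlusion += w;
        t += std::max(footprint, 0.01f);     // step grows with cone width
    }
    return light;
}
```

Real implementations run this on the GPU and trace several such cones per shaded point, summing the results to get diffuse bounce lighting; the accumulated occlusion term is what produces the large-scale ambient shadowing mentioned above.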
Is it just me? CE3 + SVOTI looks better to me. CE5 has a higher framerate, sure… but it also has less detail. More misty.
Yeah, I thought it looked blurrier too.
Yup..Yup!
This is exactly what I noticed in the first 10 seconds of watching the ‘Woodland’ demo… the DX12 version is considerably less sharp/more blurry…
First thing I noticed. The blur in 5 is very noticeable
I downloaded CE5. For some reason, no matter what I change, the mouse controls don’t work when you start the in-engine game simulation.
DX12 needs a lot of work; you can’t just put DX12 patches out or have basic support and get big gains. AMD know this, but they are plugging DX12 purely to market their cards. Meanwhile, NVIDIA are sitting back while DX11 is doing well, pushing some DX11.3 features that are in DX12. Again, AMD are pushing tech too early; they do it all the time and it just doesn’t work out for them.
DX12 is not some magic, as many like to believe, that will solve DX11’s problems. Btw, CRYENGINE 5, huh? I thought Crytek decided to stop giving numbers to their engine after CRYENGINE 3 and would simply call it CRYENGINE.
15+ additional FPS is pretty magical. Especially in a shoddy demo.
And not every card is able to benefit from the extra performance, even among Radeons (some actually see reduced performance in DX12). In fact, stability-wise, DX11 is better than DX12.
DX12 still has a way to go before it’s properly documented; right now, those working with it are practically experimenting. On a per-title basis, DX12 in Ashes of the Singularity has been pretty stable, so it depends on the implementation.
Regarding not every card benefiting from it, that goes back to being per-title as well. Fable Legends showed fine performance across the board but utilized very little async rendering support, if any. It all depends on how the devs gear their software to use the hardware in the wild. In that respect you’re correct to some degree: “it’s not fairy dust”. Although I’d argue it sure as hell is fairy dust; it just depends on how they sprinkle it. *Insert fairy dust meme here*
The major problem will be the “variety” of hardware available on PC itself. A Quantum Break dev already admitted that while increasing performance on the CPU side is much easier with DX12, on the GPU side even matching DX11 performance is no easy task. If they want everyone to benefit from DX12, they need to tune their performance for every architecture, or even every card, capable of running DX12 out there. The only question is whether they are willing to do it. Some devs, like Oxide, have already mentioned that they will not tune their game down to the architecture-specific level because of time constraints, and will instead optimize in a way they think will benefit most modern GPUs, a.k.a. generic optimization.
And that’s where NVIDIA and AMD come into play. They’re going to need to create tools or something in order to make sure all their cards get the granularity they deserve.
Even when comparing Oxide’s AotS in DX11 vs DX12, the benefits come from more than just CPU scaling. Although they did a generic optimization, it still provided much better performance in DX12 than in DX11.
Regarding “but on the GPU side even matching DX11 performance is no easy task” – maybe if it’s being ported from DX11 to DX12, as the renderer has to be rewritten, which can cause chaos throughout the code depending on how it was originally designed.
The ones who mentioned that were the Quantum Break devs, and the game they’re building is no port. It’s probably the first true DX12 title built from the ground up – even more so than AotS.
NVIDIA and AMD can create as many tools as they want, but they will be useless if devs don’t have the desire to optimize for every architecture out there. Even on consoles it takes years for developers to understand the architecture well enough to optimize their games better – and that’s with only one hardware config.
If the tools make optimization easier across the myriad of products, they’d be the right tools for the job. Nearly endless possibilities with software. It all depends on how much time and money they’re willing to spend on it.
AMD and NVIDIA spend a lot of money on tools like this. And both actually work hard so we don’t just get simple ports with 720p texture assets and a 30fps lock. Now it is up to devs to pick them up. Will they be willing to do what it takes to optimize for every architecture out there (including future ones)? They were the ones asking for low-level access on PC, so they should do their best, or even spend extra resources, for optimal optimization.
How many years before a company codes a fully compliant DX12 engine from scratch? 3 years? 4? 5?
Because those Unreal, CRYENGINE, etc. engines are pretty cool… but they were made many years ago, and a decade later there are all those ‘layers’ of legacy code, necessary to make the games run with really old graphics cards, old CPUs and their old instruction sets, etc.
I am sure that if a new CRYENGINE were coded from scratch with DX12 ONLY in mind – no legacy crap, coded exclusively to work with CPUs like the i7 4770K or above, graphics cards like the R9 290 or equivalent with 4GB of video RAM, 16GB+ of DDR4, and fast SSD drives – games would run much faster and look much better than with current versions.
Also, how many times will a developer go, ‘OK, we can code this super effect with DX12… but it will require more time and knowledge… and only a few high-end rigs will see it… so let’s rather use some DX11 techniques; it will be ugly, but that’s fine’…
Though realistically, Crytek want their game engine to work for everyone, not just for people with high-end hardware. And I don’t think there is such a thing as a game engine created with one specific API in mind; which API to use (DX9, DX10, DX11, OpenGL or Vulkan) is up to the game developer, not the game engine. So in Crytek’s case, where they intend to license their engine to others, supporting more than one API is better, because different people will need different tools. And even if a game were built from scratch, from the ground up, for DX12, we would still see issues. Remember, PC is not console. Some of these devs understand there are many configs and more than one GPU architecture out there, but they still refuse to optimize for each of them because it is too time-consuming.
So excited. Seriously though, many will be pissed, because for some reason software running like garbage = best graphics.