Epic Games has added a new PSO precaching mechanism to Unreal Engine 5.1 to reduce PSO-related hitching in DX12 titles. This new mechanism will attempt to reduce shader compilation stutters in future DX12 games (provided these games support it).
Going into more detail, the new PSO precaching mechanism skips drawing objects whose PSOs aren’t ready yet. The system aims to have each PSO ready in time for drawing, but it can never guarantee this. When a PSO is late, the engine can now skip drawing the object instead of stalling the frame while compilation finishes. This will ultimately reduce shader compilation stutters.
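The skip-instead-of-stall idea can be sketched in plain C++. This is a minimal, hypothetical sketch using `std::async` as a stand-in for a background shader compile worker; none of the class or function names below come from Unreal Engine:

```cpp
#include <chrono>
#include <future>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in for a compiled pipeline state object.
struct PSO { std::string name; };

// Hypothetical cache: compiles PSOs on worker threads and lets the renderer
// skip a draw instead of blocking when compilation hasn't finished.
class PSOCache {
public:
    // Kick off compilation for a key (no-op if already pending or ready).
    void Precache(const std::string& key) {
        if (pending_.count(key) || ready_.count(key)) return;
        pending_[key] = std::async(std::launch::async, [key] {
            return PSO{key};  // stand-in for an expensive shader compile
        });
    }

    // Non-blocking lookup: returns nullptr if the PSO isn't compiled yet.
    const PSO* TryGet(const std::string& key) {
        auto it = pending_.find(key);
        if (it != pending_.end() &&
            it->second.wait_for(std::chrono::seconds(0)) ==
                std::future_status::ready) {
            ready_[key] = it->second.get();
            pending_.erase(it);
        }
        auto r = ready_.find(key);
        return r != ready_.end() ? &r->second : nullptr;
    }

private:
    std::unordered_map<std::string, std::future<PSO>> pending_;
    std::unordered_map<std::string, PSO> ready_;
};

// Draw loop: objects whose PSO is late are skipped this frame, not stalled on.
std::vector<std::string> DrawFrame(PSOCache& cache,
                                   const std::vector<std::string>& objects) {
    std::vector<std::string> drawn;
    for (const auto& obj : objects)
        if (cache.TryGet(obj)) drawn.push_back(obj);  // skip if not ready
    return drawn;
}
```

The key design point is that `TryGet` never waits: a missing PSO costs one frame of pop-in for that object rather than a multi-millisecond stall for the whole frame.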
Furthermore, Epic has reduced the number of PSOs to precache. Not only that, but the team has improved the old (manual) PSO cache system. As such, developers can use it alongside the new precaching mechanism.
You can find more details about the new PSO precaching mechanism here. Also note that these improvements are exclusive to Unreal Engine 5.1 and up. In other words, older games will not benefit from it.
Anyway, this is great news so let’s hope that developers will take advantage of it in their upcoming games!

John is the founder and Editor-in-Chief at DSOGaming. He is a PC gaming fan and strongly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles ever. Still, the PC platform won him over. That was mainly due to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC graphics cards.”
All Epic had to do was use Async Shader Compile… But they did not, since Nvidia could not do async correctly until the RTX 2000 series. You know, the tech AMD has had since the HD 7000 series back in 2011…
Async Shader Compile has been around for years and can be seen used in emulation programs, making games run smoothly without the stutters you would normally see while building a shader cache.
You are really confused. There is no such thing as Async Shader Compile. It’s Async COMPUTE. That is, running the graphics pipeline at the same time as the compute pipeline (compute shaders).
This has nothing to do with shader compilation, which is a CPU operation. Creating PSOs in general is a CPU operation and should be done on worker threads separate from the rendering thread.
Seems like you all don’t know that it’s just a term that gets used. And it does work and can work. Gears of War 5 had an Async Compute option, and it was amazing.
I was giving an example of Async Shader Compile with emulation because it’s huge in terms of reducing stuttering to next to nothing.
But have you tried async compile in programs such as yuzu? It’s a night and day difference. The same thing can be done with shaders in today’s games as well, and it would eliminate the shader stutter that is seen more and more in today’s PC games.
That’s not what “async” means.
It’s just a term used for Async Compute and it works well in Emulation programs 😀
Async Compute is not async compiling of shaders though, lol.
Async compile in CEMU for BOTW worked on my GTX 1060, so what are you on about it only being supported on the RTX 2000 series and later?
“so what are you on about” Nvidia did not fully support async compute on a hardware level until the RTX 2000 series.
I’d like to know why they just skip rendering the object. This is going to cause it to just pop in when its specialized PSO is ready. Instead, while you’re building the optimal PSO for that object in a worker thread, use a more generic PSO that is quick to build/compile, then when the “real” one is ready, switch over to it.
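The fallback idea from the comment above can be sketched in C++ as well. This is a hypothetical illustration (the names `MaterialPSO`, `Select`, and the "generic" placeholder are invented, not from any real engine): draw with a quick-to-build generic PSO while the specialized one compiles on a worker thread, then switch over once it is ready.

```cpp
#include <chrono>
#include <future>
#include <string>

// Stand-in for a compiled pipeline state object.
struct PSO { std::string name; };

// Hypothetical per-material wrapper: holds a cheap generic PSO and swaps in
// the specialized PSO once its background compile completes.
class MaterialPSO {
public:
    explicit MaterialPSO(std::string key)
        : generic_{"generic"},  // cheap fallback, always available
          specialized_(std::async(std::launch::async, [key] {
              return PSO{key + "/specialized"};  // stand-in for a slow compile
          })) {}

    // Returns the specialized PSO if compiled, otherwise the generic fallback.
    // Never blocks: wait_for(0) only polls the future's status.
    const PSO& Select() {
        if (!have_specialized_ &&
            specialized_.wait_for(std::chrono::seconds(0)) ==
                std::future_status::ready) {
            generic_ = specialized_.get();  // switch over permanently
            have_specialized_ = true;
        }
        return generic_;
    }

private:
    PSO generic_;
    std::future<PSO> specialized_;
    bool have_specialized_ = false;
};
```

The trade-off versus skipping the draw is that the object is always visible, at the cost of briefly rendering with a less optimal (and possibly visually simpler) pipeline.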
Great work by the team!
Huh? So the “solution” is to not render the object and produce artefacts instead?
Hmm, yeah, they “reduce” them but don’t get rid of them, so most likely we will still get random stutters in the latest games.
John, the improved PSO caching in UE 5.1 has been known for quite a while. Close to a year now maybe. The link in your article mentions UE 5.2 even, not 5.1.
How about making some wrapper for UE4 > UE5 and forcing that change globally for all current UE4 games?
An engine upgrade requires a ton of work. You can’t just make a wrapper that does it. Old assets (maps, models, shaders, etc) aren’t necessarily going to work and all need to be converted into newer formats. It’s basically like doing a remaster of the game, without the work of updating the textures to higher resolution.
It might be possible for a very dedicated modder to make something for a single game that altered how shader compilation was handled, however I don’t recall ever hearing about anyone doing this.
Vulkan has solved this problem in a more elegant way via a new extension (VK_EXT_graphics_pipeline_library), and it’s already implemented in DXVK & Valve’s Source 2 engine.
Does this method work only for new games, or can I just drag and drop DXVK onto older UE4 games on Windows?
Why would you want your games to run worse?
DXVK isn’t a magic pill that makes all of your games better (regardless of what some Linux fanboys who don’t even use Windows like to claim). Technically DXVK is a translation layer that converts DirectX API calls into Vulkan API calls. This adds extra processing overhead, and aside from a few broken games that were poorly designed and QA tested (like the remaster of GTA IV) most games actually run better on Windows if you don’t use DXVK.
I’ve tried a lot of versions of DXVK in various games, and it has never made them magically perform better. When you do this on Windows you end up with more stutters and your FPS will bounce up and down more than if you weren’t using DXVK.
There’s also the problem that some anti-cheats will react badly to attempting to tamper with the render API like this, and you could get yourself kicked from online matches, or potentially even banned from multiplayer games for using DXVK. That’s of course assuming that the anti-cheat even allows the game to launch when using DXVK (some won’t).
I know DXVK is a wrapper, but I was curious whether the shader compilation stutter could be fixed by using it.
DXVK has its own shader cache, and that may help in some games that have issues with shader compilation (it helped Elden Ring when it first released due to issues with shader compilation, however I think they eventually fixed that). Overall I tend to see more stuttering in games when using DXVK than if I were not to use it (that applies not only to D3D12 games, but also D3D11 games as well).
As a rule of thumb, Vulkan works best in games that are CPU bound, because it reduces the driver overhead on the CPU. However, if you are already GPU bound then it tends to overload the GPU, causing worse 1% and 0.1% lows, which is where the stuttering comes from.
Of course it’s even more of a problem if you are using a wrapper to convert DX to Vulkan.
That’s a bit of a problem, because the only games I have that have both D3D12 and Vulkan all have fairly low CPU usage regardless of the API they are using.
I guess I could test that by changing CPU affinity for the game’s process, and prevent it from having enough CPU resources to create an artificial bottleneck.
That being said, DXVK doesn’t magically change a game’s render API, it just translates a game’s DirectX API calls into Vulkan API calls. As far as the game knows, it’s still trying to use whatever D3D API it was programmed to use, and I really haven’t seen any evidence that DXVK can help with performance of the average game on Windows in any way.
The only games in which DXVK improved performance significantly for me are mostly old DX9 games such as Prototype, GTA IV, both Fallout 3 & New Vegas, the old Skyrim Legendary Edition, and the original Alan Wake, plus only one DX11 game, Saints Row The Third Remastered. Other than that, every DX11 game I tested with DXVK runs worse compared to native DX11.
I actually have Fallout 3, Fallout New Vegas, Skyrim LE and Skyrim SE, and Saints Row The Third Remastered. I’ll have to try them and see if there’s any improvement with DXVK. If there is, then that would be the first time I’ve ever seen an improvement with it at all (granted I don’t have GTA IV or Elden Ring and it’s well known DXVK helped with both of those).
Don’t use DXVK in Skyrim Special Edition, only in Legendary Edition, because Special Edition already has DX11 and it runs worse on DXVK. Some old DX9 titles suffer from FPS drops due to the single-threaded nature and limitations of the DX9 API; Vulkan makes them multithreaded, hence improving FPS. Make sure to check Fallout New Vegas in Vault 21. Vault 21 is known to be a massive FPS drop area in New Vegas, but when using Vulkan it runs at a straight 60 FPS.
DXVK doesn’t change the number of render threads a game uses. If a D3D9 game is using a single render thread, then it will continue to use a single render thread even when DXVK is translating its D3D9 API calls to Vulkan.
I’ve tried DXVK in Dragon Age Origins and Half Life 2, and they both perform worse with it than they do without it.
Yeah, in Dragon Age Origins it reduces performance, but Dragon Age Origins already runs perfectly fine and even Half Life 2 runs awesome, so DXVK is not needed in either of those games. Try DXVK in the games I mentioned above and believe me, it’s a free magical FPS boost, especially in GTA IV.
I already tried it in Saints Row The Third Remastered, and the game seems to have an FPS cap so I’ll have to try again later with SpecialK to see if I can bypass it.
One thing I did notice is that DXVK broke some effects in that game.
Which GPU are you using? The 25 FPS cap issue only happens in cutscenes. In native DX11 the game has micro stutters, but DXVK fixes them.
RTX 3070 Ti. The stuttering seems worse with DXVK 2.1 (or at least just as bad).
The game is capped at 60 FPS by default (but not very well, as the FPS will run anywhere from 60-65 FPS while playing). SpecialK seems to be able to override FPS caps in the game, so I think I had been using it previously to resolve issues. I don’t tend to use SpecialK anymore due to incompatibility with ReShade, and when the developer of SpecialK introduces compatibility issues he prefers to blame other tools for the problems, so I don’t bother reporting issues anymore.
I think I downloaded a copy of the last 64-bit SpecialK DLL that didn’t have compatibility issues with ReShade, so I may dig that up and try Saints Row The Third Remastered again, just note that SpecialK probably isn’t going to fix the issues with effects in the game when using DXVK.
I don’t think AMD GPU drivers support that extension. Not sure if Intel does either. I know that NVIDIA does, but I don’t know if it works with every series of GPU, so that limits the actual number of systems that can utilize it.
As much as I may give game developers/publishers flak for bad game design decisions, I feel I should point out that game developers need to take into account the technologies that will be available on every GPU that the game is officially supposed to be able to run on, and they can’t implement anything that won’t work with GPUs that meet the system requirements. They also often avoid technologies that would prevent the game from running on CPUs and GPUs a few generations older than their minimum requirements. There are always plenty of people who will try to run a game on hardware below the system requirements, assuming raw horsepower was the only determining factor and that if they don’t mind low FPS they can still run the game on very old hardware, and then they get upset when their PC can’t run the game at all.
I’ve seen people get upset because a new game didn’t support old enough versions of SSE to run on their 8+ year old CPU (as in they couldn’t launch the game at all) and the game developers had to recompile that game’s EXE file to add support for older versions of SSE for these people, and I’ve seen someone try to run a game whose system requirements called for a CPU with AVX2 on a CPU without AVX2 support, and the game actually ran (although poorly, due to the age of the CPU).
Not that I want to defend Epic Games, but a major concern for them here is making their engine work on the greatest number of systems possible without game devs needing to worry about hardware compatibility. That’s why they are trying to come up with better systems for handling this than just one Vulkan extension that may not work everywhere, especially since Vulkan doesn’t perform as well on Windows as DirectX 12 does and most game developers aren’t going to want to waste their time on it (and it’s a huge waste of time when they have to make a DirectX 12 version for Xbox anyway, and then redoing their game in Vulkan for PC means extra work and an inconsistent experience from Xbox to PC).
yes AMD does support it. https://www.phoronix.com/news/RADV-Starts-GPL-Extension
That’s not AMD’s driver. Only people running Linux are going to be using unofficial and unsupported drivers like that.
Yes, AMD does support it. Phoronix did an article about it in August 2022.