Microsoft has announced DirectX Raytracing 1.2, also known as DXR 1.2, the next version of its raytracing API. DXR 1.2 will come with two major features that aim to significantly improve performance, so let's take a closer look at them.
The two new features of DXR 1.2 are Opacity Micromaps (OMM) and Shader Execution Reordering (SER).
Opacity Micromaps aim to significantly optimize alpha-tested geometry. According to MS, they can deliver up to a 2.3x performance improvement in path-traced games. By letting the GPU resolve opacity data during ray traversal instead of invoking shaders for it, OMM reduces shader invocations and greatly improves rendering efficiency without compromising visual quality.
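To illustrate the idea (this is a conceptual model, not the actual DXR 1.2 API), here is a minimal C++ sketch. An opacity micromap stores a coarse opacity state per micro-triangle alongside the geometry, so traversal can resolve most alpha-tested hits as opaque or transparent on its own; only micro-triangles marked "unknown" still fall back to the any-hit shader. All names below are hypothetical.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical 2-bit opacity states, mirroring the concept behind OMM.
enum class MicroTriState : uint8_t { Transparent, Opaque, UnknownTransparent, UnknownOpaque };

// Conceptual model of what an opacity micromap enables: resolve most
// alpha-tested hits from precomputed opacity data and call the expensive
// any-hit shader (the alpha test) only for "unknown" micro-triangles.
bool ResolveHit(const std::vector<MicroTriState>& omm,
                size_t microTriIndex,
                const std::function<bool()>& runAnyHitShader) {
    switch (omm[microTriIndex]) {
        case MicroTriState::Opaque:      return true;   // accept hit, no shader call
        case MicroTriState::Transparent: return false;  // ignore hit, no shader call
        default:                         return runAnyHitShader(); // rare fallback
    }
}
```

In practice, most micro-triangles of foliage, fences, and similar alpha-tested geometry can be baked as fully opaque or fully transparent, which is where the claimed reduction in shader invocations comes from.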
On the other hand, Shader Execution Reordering offers a major leap forward in rendering performance. In some scenarios, it can make DXR up to 2x faster than its previous version. SER achieves this by intelligently regrouping shader work on the GPU, reducing divergence and improving efficiency. The result is higher frame rates, something that will make raytraced titles smoother and more immersive than ever.
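Again, as a rough conceptual model rather than the actual API (in DXR 1.2, SER is driven from the shader side), the core idea is simple: instead of shading rays in whatever order they finish traversal, the GPU is allowed to regroup them by the shader they need, so neighboring threads run the same code and stay coherent. A toy C++ sketch, with all names hypothetical:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical per-ray result after traversal.
struct RayHit {
    uint32_t rayIndex;   // which ray produced this hit
    uint32_t shaderId;   // which closest-hit shader/material it needs
};

// Conceptual model of Shader Execution Reordering: regroup the pending
// hits by shader ID before shading, so adjacent work items execute the
// same shader. On real hardware this reordering happens on the GPU;
// here it is just a stable sort for illustration.
void ReorderForShading(std::vector<RayHit>& hits) {
    std::stable_sort(hits.begin(), hits.end(),
                     [](const RayHit& a, const RayHit& b) {
                         return a.shaderId < b.shaderId;
                     });
}
```

Without reordering, a group of rays that hit many different materials ends up stepping through every one of those shaders; after reordering, most groups run a single shader, which is where the divergence savings come from.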
As MS pointed out, NVIDIA has committed driver support across its GeForce RTX GPUs. MS is also actively working with other hardware vendors, including AMD, Intel, and Qualcomm, to ensure widespread adoption.
Now while this is cool, we won’t see a game that supports DXR 1.2 anytime soon. So, don’t hold your breath for something spectacular in 2025 (or even 2026). Still, as MS stated, DXR 1.2 will pave the way for even more path-traced games on PC.
Stay tuned for more!

John is the founder and Editor in Chief at DSOGaming. He is a PC gaming fan and highly supports the modding and indie communities. Before creating DSOGaming, John worked on numerous gaming websites. While he is a die-hard PC gamer, his gaming roots can be found on consoles. John loved – and still does – the 16-bit consoles, and considers the SNES to be one of the best consoles. Still, the PC platform won him over, mainly thanks to 3DFX and its iconic dedicated 3D accelerator graphics card, the Voodoo 2. John has also written a higher degree thesis on “The Evolution of PC graphics cards.”
Remember when people on this site claimed ray tracing was a gimmick, like Nvidia's PhysX, back in 2018-19? God, the takes on this site are so bad.
Didn't they claim DLSS is dead?
People say dumb things all the time, mostly because they're missing information, perspective, or experience; sometimes because they're young, don't have the interest or time to keep up with every last bit of news and analysis, or are heavily biased toward the wrong side for all of those reasons. Some just don't care enough to go for a deeper understanding of some topics. Don't bother going against that, and focus the talk on more worthwhile points.
Never underestimate the power of Stupid …….
You can lead a person to knowledge but you can't make them think
Amen to that Buddy Joe!
I have to say I have been pretty disappointed in you lately, you seem to have lost your touch at getting people riled up …….
Well Joe, there are a few mostly sad reasons for that which I'm not getting into, but I'm passing the torch to young talented minds such as yourself.
Dude ….. I turned 66 a few days ago ……
Thanks for the compliment
ShatMD dumbnatics.
Now they have shifted the narrative to "PT socks bollz"; it will start to matter with UDNA.
Seething AMD cultists said that. The dumbest kind of consumer slave that ever existed. "Muh I support the slightly smaller company, AMD is the little people's company, AMD is just like us!" They have seethed ever since raytracing or any other Nvidia innovation was announced. When people realized that raytracing is the real deal, AMD cultists went as far as to say AMD originally invented it. Mindbroken morons, most of them leftists btw. The rest are poor fanboys who cry and rage over $50-100 price differences between the brands.
>the answer to amd drones is to be an nvidia drone
imbecile
You're wrong, Nvidia owners are normal; they just play games. But the AMD cult seethes at every opportunity and attacks journalists and everyone who dares to like Nvidia products. You are mindbroken morons, cultists.
32-bit PhysX isn't supported on RTX 50 series GPUs, and from what Gamers Nexus said in their video, modern PhysX isn't GPU-accelerated anymore. Considering that these old games will no longer be playable with the same level of fidelity they once had (or, in the case of Assassin's Creed IV: Black Flag, no longer playable at all), I have to wonder how the PhysX of the day could still be considered anything more than a gimmick.
Raytracing is for the developers, and reports of it making in-game lighting look better are very much exaggerated.
Well, I think your first mistake is comparing RT to PhysX, a tech whose list of supporting titles you can count on barely two hands. RT is a fundamental shift in rendering, more akin to the shift to "3D Accelerator Required" that occurred in the back half of the '90s. I wouldn't be surprised if the proliferation of RT lighting in games also brings with it an increase in physical objects and interactivity in games over the next decade.
An increase in physical objects? You mean ones you can kick/push around a gameplay environment, like in Bethesda's old titles?
"Now while this is cool, we won’t see a game that supports DXR 1.2 anytime soon."
I'd be surprised if we don't see a CP2077 / Witcher 3 update.
I'd be surprised if we do. Remember, CDPR have switched to Unreal Engine. I wouldn't expect them to keep pouring the time of their few graphics programmers into a performance patch for old games.
Yes, I believe they have, yet we have still received updates since their announcement.
Witcher 3 hasn't been worked on in over a year except for adding some extensions for Redkit. It's been pretty much abandoned in its current subpar state.
"hasn't been worked on in over a year" followed by "except"… hahaha
It’s easier to play with new tech when you have a project to play with.
There’s still hope…
https://www.youtube.com/watch?v=XRy5bpNcFrk
Oh, interestingly, according to PCGamer (of all sources!):
" Our understanding is that Nvidia RTX GPUs, at least back to the RTX 30-series, already support both of those features in hardware. SER support appears to have been added with the RTX 30-series, while OMM was there from the very beginning with RTX 20-series GPUs.
In other words, the claimed performance benefits Microsoft is talking about are already present in most Nvidia RTX GPUs. That is likely why Microsoft says, "Nvidia has committed driver support across GeForce RTX GPUs," but that it is, "actively working with other hardware vendors, including AMD, Intel, and Qualcomm, to ensure widespread adoption." In other words, RTX GPUs are already good to go. "
That's the entire problem: Nvidia is so far ahead of everyone else in RT and neural networks that we have to wait for the others to play catch-up before these technologies can see widespread use in DX12 and Vulkan.
It's always been the same: the slowest mainstream hardware determines which features get used in games. Usually conslows, with their slow 6-7 year cycles (and they rarely use latest-gen features even when they do get around to an upgrade, since it's costly), along with the potato PCs. That's what holds developers back.
PC Gamer is wrong, as usual:
RTX 40 series is where SER support started, not 30.
Also, NVIDIA already implemented both of these "new" DXR extensions within their Vulkan driver.
In fact, the recently released RTX Remix 1.0 is already able to take advantage of these via its custom fork of DXVK.
You can check out every available option here (which is a lot):
https://uploads.disquscdn.com/images/f3a549cb7feab2574b1503a6f5322721ca3216731310164e8976ac6ea12144e6.png
"significantly improve performance"
Heard all this before and got the opposite. Even the shills in the comments are low energy compared to 10 years ago.
Indeed, I'll believe it when I see it.
Games already use OMM and the difference is huge. This technology is the reason why AMD lags so far behind NVIDIA in PT games.
Kinda like the hardware sites that think a mere 10% gen-on-gen improvement is amazing, mind-blowing and whatnot… Easy to spot the paid advertisers vs the truthful ones who are reviewing the hardware for what it is.
So yeah, time will tell which way this goes, whether it's just hollow marketing words or indeed the truth.
OMM is already implemented in Ada / Blackwell GPUs and this technology indeed DRASTICALLY improves performance in PT games like Black Myth Wukong, Alan Wake 2, and Indiana Jones.
https://uploads.disquscdn.com/images/0fd1969cd570807808653063d0e001a5359fad1913f6580455fa0a7ff64764e2.jpg
sounds like you are coping
DirectX and Unreal Engine have become complete garbage thanks to years of corporate bull crap, DEI, simping for China, and hiring cheap labor.
Oh well, for now we're missing some more solid context and info: when this might arrive in games, how quickly and seamlessly Nvidia, studios, or UE will integrate it, and some hard benchmarks. Also, possibly some cons, and whether it will actually be used or end up abandoned or superseded, or if there's a roadmap for later. Eagerly awaiting.
So FPS with raytracing will go from 30 FPS to 35 FPS? Maybe 40 FPS? Big deal. Tell me when we get to playable framerates of 120+ FPS at native resolution.
Nvidia already uses this technology and there's a huge 2-3x performance boost compared to Ampere GPUs.
I don't want to play raster games, because SSR artefacts or flat lighting are impossible to ignore. On my RTX4080S I can play PT games at 110-170 fps (1440p DLSSQ + FGx2) and that's good enough for me. As for standard RT games, I don't even need AI (DLSS / FG) to play at a high refresh rate (some RT games run at well over 120fps even at 4K native), but if I can't max out my monitor's refresh rate I like to use FGx2, because this technology works so well. I can't tell the difference between real 170fps and generated 170fps. I measured only a 1-4ms difference in Cyberpunk between FG on and off, and that's placebo territory. DLSS FG makes aiming easier (unlike LSFG or FSR FG) and the picture quality during motion is much sharper.
I haven't seen RTX 40 series or RTX 50 series doing 2-3 times the FPS of RTX 30 series in raytracing benchmarks. NVIDIA has to cheat with fake frames on newer generations. They aren't talking about 2-3 times the FPS, they're talking about specific calculations being 2-3 times faster, for a small improvement in FPS.
Have a look at these screenshots.
https://uploads.disquscdn.com/images/ae2bfbd25289825ab4e81428c00bde1a1096751991c2c1b28dec0e40939c65c7.jpg
https://uploads.disquscdn.com/images/0fd1969cd570807808653063d0e001a5359fad1913f6580455fa0a7ff64764e2.jpg
The developers of Black Myth Wukong released a video about opacity micromaps and explained that this is the reason why Ada Lovelace offers much better performance compared to Ampere in their game. If a PT game has a lot of foliage (Black Myth, Alan Wake 2, Indiana Jones), OMM drastically reduces the number of shader interactions with the rays.
Shader execution reordering also makes a big difference. I tested this feature in Half Life 2 RTX and when I turned it off the framerate dropped from 100fps to 65fps.
Microsoft is trying to standardise these two features, as they make a drastic difference in PT / RT games.
AI-generated frames can be considered fake, but I do not care. DLSS FG is an extremely useful feature and that's what counts. When you configure DLSS FG correctly (Vsync on + RivaTuner "reflex" limiter instead of the default "async"), it's very difficult to tell the difference between real and generated framerate. I even tried playing at 30fps (15fps base) just for fun. Without FG aiming was extremely difficult, but with FG I was able to aim without much problem. I'm playing the most demanding PT games at 110-170fps thanks to DLSS FG and I'm having a blast.
I prefer benchmarks from people who run standardized controlled testing to make sure everything gets the same workload. No fake frames to make it seem like RTX 40 and RTX 50 series are better than they really are, just pure in-engine rendering. I'm talking about media outlets like Gamers Nexus in case that wasn't obvious.
Now, the RTX 4090 had a pretty big uplift in rasterization over the RTX 3090, and while its raytracing performance could be around double in some games (I didn't look at all of them to refresh my memory, and I don't remember it ever being that impressive at launch), I seriously doubt that these two improvements were responsible for the majority of that uplift. There were hardware-level improvements as well (a new generation of more powerful RT cores along with faster VRAM), and NVIDIA also pushed as much wattage as possible through the GPUs to gain extra performance.
Will this new DirectX update help? More than likely. Will it be a spectacular increase in FPS approaching 2x what was possible before? I very much doubt it. If I'm wrong then I'll be pleasantly surprised, but I'll still hate raytracing until they get it running at the same FPS as rasterized graphics and eliminate the need for temporal filtering. I just spent several hours last night tweaking the Lumen config in RoboCop: Rogue City to get rid of the ghosting in the thrice-damned Lumen Global Illumination (which can't be turned off BTW). I absolutely despise this technology and the way it's implemented, and that isn't going to change anytime soon.
I played Robocop Rogue City a few days ago. UE5 games that use Lumen generally have ghosting issues, but I did not notice this issue with Robocop that much. I did, however, notice Lumen "boiling" (unstable lighting / reflections) when I turned on DLSS Quality. The problem is, RT quality (even software-based Lumen) is tied to internal resolution, and as you lower the internal resolution you also lower the amount of rays, and DLSS SR can't help with that (see the small calculation after the screenshots below). Ray Reconstruction can address this problem, but only if hardware Lumen is used.
But on my PC this game runs very well even with DLAA (transformer model, forced J preset); I had 90-110fps at 1440p. With these settings Lumen "boiling" was minimal and didn't bother me at all, while dynamic GI made a huge difference. I often thought the graphics in this game looked photorealistic. 90-110fps was already good enough for me, but I also turned on DLSS FGx2 and had a locked 160fps 99% of the time. The game was even more responsive and smooth with these settings.
https://i.ibb.co/21zxr7gZ/Robo-Cop-Win64-Shipping-2025-03-20-18-58-44-848.jpg
https://i.ibb.co/nNBZjBgg/Robo-Cop-Win64-Shipping-2025-03-20-18-39-51-409.jpg
https://i.ibb.co/23yH16VB/Robo-Cop-Win64-Shipping-2025-03-20-14-24-15-374.jpg
https://i.ibb.co/DPW4t4nB/Robo-Cop-Win64-Shipping-2025-03-12-17-19-14-621.jpg
https://i.ibb.co/G459Mw8v/Robo-Cop-Win64-Shipping-2025-03-13-02-27-37-449.jpg
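For what it's worth, the earlier point about RT quality being tied to internal resolution is mostly pixel arithmetic: if roughly one primary ray (or sample) is cast per internal pixel, then DLSS Quality at 1440p traces from about a 1706x960 buffer instead of 2560x1440, which is only around 44% as many rays. A small illustrative calculation, assuming the usual 2/3 DLSS Quality scale factor (actual engines distribute rays differently):

```cpp
#include <cstdio>

// Rough illustration: with one primary ray (or sample) per internal pixel,
// lowering the internal resolution cuts the ray budget by the square of the
// per-axis scale factor. DLSS Quality uses a 2/3 per-axis scale.
int main() {
    const long long outW = 2560, outH = 1440;
    const long long inW = outW * 2 / 3;   // 1706 internal pixels wide
    const long long inH = outH * 2 / 3;   // 960 internal pixels tall
    const long long nativeRays   = outW * outH;
    const long long internalRays = inW * inH;
    std::printf("native: %lld rays, DLSS Quality: %lld rays (%.0f%% of native)\n",
                nativeRays, internalRays, 100.0 * internalRays / nativeRays);
    return 0;
}
```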
The DX update alone will not help. The GPU must support these features at the hardware level.
I'm sure OMM is the reason why Black Myth Wukong runs so much better on the RTX 40 series, because the developers said so. My RTX4080S isn't 2x faster than an RTX3090 in standard RT games, but in Black Myth Wukong I get 2x performance scaling even without FG.
Here's a screenshot from the same YouTube video (Terra Ware: RTX 4000 generational uplift over RTX 3000), but with Lumen instead of PT, and the difference is nowhere near as big compared to the previous comparison that used PT.
https://uploads.disquscdn.com/images/a901c4a083b7beb8f659f9794873e98d0ad244215327ea9c97029eaeb40648f7.jpg
Black Myth Wukong runs much better on the RTX 40 series. The game is still very demanding, but it's possible to get playable results with some tweaks. I get around 110-120fps at 1440p DLSSQ + FGx2 with very high RT and 130-150fps with medium PT.
Medium PT
https://uploads.disquscdn.com/images/ad27f8608bf821a93ed09fe48c9673807183bdeb0f8c118c833b464584cb688c.jpg
Lumen
https://uploads.disquscdn.com/images/90bb1fd8959485cd681964f166cf1a2f52eba27a24b66cd6d6d921e4b10d4aa8.jpg
Lumen is only 12% faster (134fps vs 150fps).
You said that FG makes the RTX 40/50 seem better than they really are, but the thing is, people cannot use this feature on the RTX 30 / 20 series. I can use DLSS FGx2 in Black Myth Wukong on my RTX4080S to take the smoothness and gaming experience to another level, but on the RTX 30 / 20 series this option is grayed out. On an RTX 3090 I would have to use FSR FG or Lossless Scaling FG to play the most demanding RT games at over 100fps, but these software-based frame generators don't work nearly as well and I wouldn't want to use them. I'm using DLSS FG because it improves my gaming experience. Motion clarity is much better with DLSS FG and I can aim even more easily compared to the base framerate, whereas software-based frame generators negatively affect my gaming experience (noticeable input lag even at a high base framerate, judder, noticeable artefacts). What's interesting is that DLSS FG in Black Myth Wukong reduces latency (by around 12-20ms), because this game isn't using Reflex by default, while DLSS FG forces it on.
Didn't John say that NVIDIA had committed to hardware support for this DirectX update for all RTX cards? That would include RTX 20 and RTX 30 series, would it not?
Regardless, the performance benefits wouldn't be double the FPS. I'm pretty sure they were talking about 2x to 3x in specific calculations, not average FPS.
Also, you're ignoring the other improvements to RTX 40 series over RTX 30 series that allow the RTX 4080 to perform so much better than the RTX 3090 in Black Myth Wukong. The odds of opacity micromaps being the only reason for the performance difference are incredibly low. I'm sure the RTX 4080 wouldn't be quite as good if it wasn't for the opacity micromaps, but it's only part of the reason for the better performance.
Regardless we'll see how helpful it is when developers update their games with support for these new DirectX features, at least in games where developers actually used the DirectX raytracing API. I think some early raytracing games only used NVIDIA's RTX API since the DirectX API didn't exist at the time.
Cyberpunk already uses shader execution reordering, but just through the NVIDIA driver.
Most of these technologies are already in NVIDIA drivers. They've been using Shader Execution Reordering since the 40 series.
OMM has been supported since the 30 series.
Glad it's an industry standard now at least, so AMD GPUs can use it as well.