Just a few days ago, AMD officially announced its RDNA 2-based flagship Big Navi graphics cards: the Radeon RX 6900 XT, RX 6800 XT, and RX 6800. Even though the RDNA 2 live event showcased benchmarks that favored all three cards over the competition, none of those benchmarks included ray tracing metrics.
A lot of gamers have been waiting to hear about AMD’s approach to ray tracing. AdoredTV recently published a statement from AMD in which the manufacturer confirmed that it will support ray tracing in all games that do not rely on proprietary APIs and extensions, such as NVIDIA’s DirectX RTX and Vulkan RTX extensions.
AMD confirmed that it will only support the industry standards, namely Microsoft’s DXR API and Vulkan’s ray tracing API.
“AMD will support all ray tracing titles using industry-based standards, including the Microsoft DXR API and the upcoming Vulkan raytracing API. Games making use of proprietary raytracing APIs and extensions will not be supported.” — AMD marketing.
Both the Microsoft DXR and Vulkan ray tracing API industry standards are slowly gaining popularity, and the launch of AMD’s RX 6000-series GPUs will also push game developers away from NVIDIA’s in-house, proprietary implementations. For what it’s worth, Intel also plans to support DXR with its Xe-HPG discrete GPU lineup.
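To make the “industry standard” point concrete: on the DirectX side, a game (or any D3D12 application) discovers ray tracing support through the same vendor-agnostic capability query on AMD, NVIDIA, or Intel hardware. The following is a minimal, hypothetical C++ sketch of that query (error handling and adapter selection trimmed for brevity; link against d3d12.lib):

```cpp
// Minimal sketch: querying DXR support through the standard D3D12 API.
// Illustrative only; a real application would enumerate adapters and
// handle failures properly.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device5> device;
    // Create a device on the default adapter (AMD, NVIDIA, or Intel).
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // OPTIONS5 carries the DXR capability tier, regardless of vendor.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                &options5, sizeof(options5));

    if (options5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0)
        std::printf("DXR supported (tier enum value %d)\n",
                    static_cast<int>(options5.RaytracingTier));
    else
        std::printf("DXR not supported\n");
    return 0;
}
```

A title that sticks to this path should, in principle, run its ray tracing effects on any DXR-capable GPU, which is exactly the class of games AMD says it will support.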
In practice, this means existing DXR titles, such as Control, are supported, as are upcoming PC titles that use the official Vulkan ray tracing API. Titles such as Wolfenstein: Youngblood and Quake II RTX, however, will not be supported, because both use NVIDIA’s proprietary Vulkan RTX extensions rather than the cross-vendor standard. Most of those games can still run with ray tracing disabled and still offer a satisfying visual experience.
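The difference is visible in the extension names a game requests from the Vulkan driver. Below is a minimal, hypothetical C++ sketch (instance and device creation boilerplate omitted; assumes Vulkan headers recent enough to define the KHR ray tracing extension) that checks whether a GPU exposes the cross-vendor VK_KHR_ray_tracing_pipeline extension versus NVIDIA’s proprietary VK_NV_ray_tracing extension that Quake II RTX and Wolfenstein: Youngblood were built on:

```cpp
// Minimal sketch: telling the standard Vulkan ray tracing extension apart
// from NVIDIA's proprietary one. Setup boilerplate is omitted for brevity.
#include <vulkan/vulkan.h>
#include <cstring>
#include <cstdio>
#include <vector>

// Returns true if `name` appears in the physical device's extension list.
bool HasExtension(VkPhysicalDevice gpu, const char* name)
{
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(gpu, nullptr, &count, exts.data());
    for (const auto& e : exts)
        if (std::strcmp(e.extensionName, name) == 0)
            return true;
    return false;
}

void ReportRayTracingSupport(VkPhysicalDevice gpu)
{
    // The cross-vendor standard AMD has committed to supporting.
    if (HasExtension(gpu, VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME))
        std::printf("Standard Vulkan ray tracing pipeline available\n");

    // The NVIDIA-only extension used by titles such as Quake II RTX;
    // not expected to be present on RDNA 2 hardware.
    if (HasExtension(gpu, VK_NV_RAY_TRACING_EXTENSION_NAME))
        std::printf("Proprietary NV ray tracing extension available\n");
}
```

Games coded against the NV extension would need a KHR code path added before they could light up on Radeon hardware.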
In case you didn’t know, the AMD Radeon RX 6000 series also features Ray Accelerators. An official slide elaborates on the hardware component that Radeon RX 6000 Series GPUs leverage for ray tracing – the Ray Accelerator (RA). Each Compute Unit carries one Ray Accelerator.
These RA units are responsible for the hardware acceleration of ray tracing in games. The RX 6900 XT features 80 RAs, the RX 6800 XT 72, and the RX 6800 60. The same Ray Accelerators can be found in the RDNA 2-based next-gen gaming consoles.
“New to the AMD RDNA 2 compute unit is the implementation of a high-performance ray tracing acceleration architecture known as the Ray Accelerator,” a description reads. “The Ray Accelerator is specialized hardware that handles the intersection of rays providing an order of magnitude increase in intersection performance compared to a software implementation.” via AMD.
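To give a rough sense of what that specialized hardware offloads: ray/box intersection is the inner loop of BVH traversal, and in software it looks something like the textbook “slab test” sketched below. This is purely illustrative of the math involved, not AMD’s actual hardware logic; the point of the Ray Accelerator is to run this class of test in fixed-function silicon instead of shader code:

```cpp
// Illustrative only: a software ray/AABB "slab" intersection test, the kind
// of per-node work BVH traversal performs constantly. A Ray Accelerator
// handles such intersections in hardware (this is a generic textbook
// formulation, not AMD's implementation).
#include <algorithm>

struct Vec3 { float x, y, z; };

// Returns true if the ray origin + t * dir, with t in [tMin, tMax], hits the
// axis-aligned box. invDir holds 1/dir per component, precomputed per ray.
bool RayIntersectsAABB(const Vec3& origin, const Vec3& invDir,
                       const Vec3& boxMin, const Vec3& boxMax,
                       float tMin, float tMax)
{
    const float o[3]   = { origin.x, origin.y, origin.z };
    const float inv[3] = { invDir.x, invDir.y, invDir.z };
    const float lo[3]  = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3]  = { boxMax.x, boxMax.y, boxMax.z };

    for (int axis = 0; axis < 3; ++axis) {
        float t0 = (lo[axis] - o[axis]) * inv[axis];
        float t1 = (hi[axis] - o[axis]) * inv[axis];
        if (t0 > t1) std::swap(t0, t1);  // handle negative ray directions
        tMin = std::max(tMin, t0);
        tMax = std::min(tMax, t1);
        if (tMin > tMax) return false;   // slab intervals don't overlap: miss
    }
    return true;                          // all three slab intervals overlap
}
```

Per AMD’s claim, doing this in dedicated hardware is roughly an order of magnitude faster than running the equivalent shader code.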
AMD will launch the Radeon RX 6800 and RX 6800 XT graphics cards on November 18th. The flagship AMD Radeon RX 6900 XT will follow on December 8th, priced at $999 USD.
Thanks, OC3D.
Stay tuned for more!
Hello, my name is Nick Richardson. I’ve been an avid PC and tech fan since the good old days of the RIVA TNT2 and the 3dfx Interactive “Voodoo” gaming cards. I mostly play first-person shooters, and I’m a die-hard fan of the FPS genre, going back to the good old Doom and Wolfenstein days.
Music has always been my passion and my roots, but I started gaming casually when I was young on NVIDIA’s GeForce3 series of cards. I’m by no means a hardcore gamer, though; I just love everything related to PCs, games, and technology in general. I’ve been involved with many indie metal bands worldwide and have helped them promote their albums to record labels. I’m a very broad-minded, down-to-earth guy. Music is my inner expression and soul.


Meanwhile, RTX cards support ray tracing in EVERY game with ray tracing support, even boosting the performance of games with software-based ray tracing.
And so does AMD, imo? Software-based, that is…
That’s true right now. But basically starting next week, everything is going to be coded for AMD first, as their hardware is now in the modern consoles. NVIDIA’s stuff, as usual, is going to become an afterthought at best. Proprietary stuff never wins.
Both AMD’s and NVIDIA’s ray tracing implementations are proprietary (hence they each have a different way of doing it), but both implementations are compliant with MS DXR. There’s a reason NVIDIA aimed to be fully DXR-compliant with its RTX implementation from the very beginning: to avoid that “coded for AMD first” situation in the future.
We’ve heard that before. The “AMD next-gen console advantage” rarely comes to fruition. If anything, it drives NVIDIA to dominate PC games while AMD takes a nap, expecting someone else to do the job.
Proprietary stuff drives new standards. In NVIDIA’s case, it usually outperforms, or holds an advantage over, those standards during the transition timeframe, which can take years. There’s plenty of win in proprietary.
This time it will be even harder, because only the Xbox supports RDNA 2, and the PS5 uses the older RDNA 1. Features like hardware ML, VRS, Mesh Shaders, and Sampler Feedback are available only on PC and Xbox. This is not good news for gamers.
Both are based on RDNA 2, you dumbo.
No, both AMD and MS confirmed that the PS5 doesn’t support:
– hardware ML (CUs in RDNA 2 support 4-bit ops; in the PS5, only 16-bit ops)
– mesh shaders (the PS5 instead uses Primitive Shaders from GCN)
– sampler feedback
– VRS
Why are you talking about things you don’t understand? Next time use Google and check. Did you even watch the AMD RDNA 2 presentation?
Features might differ, but it’s still based on RDNA 2, dumbo.
Again, no, it’s not. Even Sony says it’s not RDNA 2… dumbo 😀
Not full-featured RDNA 2, but still based on it.
No it’s NOT. READ about it. It’s stated as RDNA 1.5…
And to avoid backlash or possible lawsuits from nit-picky dumbos, they’re saying it’s between RDNA 1 and 2, even though it’s based on RDNA 2.
CU units are the core element of every AMD GPU – the shaders.
RDNA 1 CUs support 32-bit and 16-bit operations (2 per clock).
RDNA 2 CUs support 32-bit, 16-bit (2 per clock), and 4-bit (8 per clock).
Now let’s check how this looks on all platforms:
PC 5xxx cards – CUs support 32-bit, 16-bit
PC 6xxx cards – CUs support 32-bit, 16-bit, 4-bit
PS5 – CUs support 32-bit, 16-bit
Xbox Series X|S – CUs support 32-bit, 16-bit, 4-bit
Yeah, sounds like RDNA 2 with fewer features.
RDNA 2 without the shader units from RDNA 2…
This is why Sony doesn’t support 4-bit operations (for ML/DLSS-style upscaling), VRS, Mesh Shaders, or Sampler Feedback, and instead supports the old Primitive Shaders from GCN (AMD Vega), which aren’t available on RDNA 2 cards.
Sony uses a mix of RDNA 1 and older GCN – hardware-supported BC with the PS4 and some old Primitive Shaders from Vega.
RDNA 2 it is then.
Mixed with GCN (shader units with Primitive Shaders). Performance will land between the Radeon VII (GCN + Primitive Shaders) and first-gen RDNA.
Sounds like lies; a mix of GCN and RDNA 1 can’t be RDNA 1.5, it should be RDNA 0.5.
Sounds like someone is jealous that the PS5, based on RDNA 2 with fewer features, will win this console battle over the full-featured RDNA 2 Xbox.
Sony confirmed that the GPU will support the whole instruction set from GCN, because PlayStation uses hardware-level backward compatibility. That instruction set was supported by the Radeon 5700 cards, which handled both RDNA and GCN instructions. The new Radeon 6xxx cards don’t support GCN instructions (which is why they will never be used in a PS5 or PS5 Pro).
Check “Road to PS5” by Mark Cerny on hardware BC with the PS4. It’s very different from the software BC used by MS on PC and Xbox.
RDNA 2 with fewer features, but to be sure not to trigger some butthurt, they called it RDNA 1.5.
Because that term was used by lead Sony GPU engineer Rosario Leonardi, who said in a private message to a “Grid” game developer that the PS5 is RDNA 1 and doesn’t support 4-bit operations. That developer published the whole conversation on Reddit as proof a few days later.
RDNA2
https://uploads.disquscdn.com/images/56bb9571b99b5d7be94ee660c8aecc201ec797e5ab1fc89420aa0427f3944f21.png
RDNA without:
– the CU units from RDNA 2 (4-bit support); Sony uses the CUs from the Radeon 5700
– VRS
– Mesh Shaders
– Sampler Feedback
I think the PS5 will be 20% slower than a Radeon 5700. Let’s wait for benchmarks.
CU units are the core element of every AMD GPU. Let’s check the official AMD description of the architectures:
RDNA 1 CUs support 32-bit and 16-bit operations (2 per clock).
RDNA 2 CUs support 32-bit, 16-bit (2 per clock), and 4-bit (8 per clock).
Now let’s check how this looks on all platforms:
PC 5xxx cards – CUs support 32-bit, 16-bit
PC 6xxx cards – CUs support 32-bit, 16-bit, 4-bit
PS5 – CUs support 32-bit, 16-bit
Xbox Series X|S – CUs support 32-bit, 16-bit, 4-bit
Do you see the difference?
Yes, I see. RDNA 2 with fewer features.
Yes, just like a mix between the Radeon VII (GCN for backward compatibility) and the Radeon 5700.
The PS5 GPU is a standard Radeon 5700 with 36 CUs – but slower, because of the power constraints of the APU and lower memory bandwidth (only a 256-bit bus shared between the CPU and GPU).
RDNA 2 as stated by AMD.
Yes, a Radeon 5700 with less memory bandwidth (256-bit shared by the CPU and GPU), no VRS, no Mesh Shaders, and older CU units from Radeon Vega (Primitive Shaders).
No, both AMD and MS confirmed that the PS5 doesn’t support:
– hardware ML (CUs in RDNA 2 support 4-bit ops; in the PS5, only 16-bit ops)
– mesh shaders (the PS5 instead uses Primitive Shaders from GCN)
– sampler feedback
– VRS
Why are you talking about things you don’t understand? Next time use Google and check.
The PS5 does not… It’s more like RDNA 1.5; it’s not full-on 2.
This is a complete lie lmao…
Yeah, that’s why AMD has market dominance, as current-gen consoles are AMD-based… /s
Your sarcasm doesn’t even make sense.
Performance isn’t boosted in software-only ray tracing titles. Crysis has Vulkan hardware acceleration.
Which boosts performance over the software-based implementation.
The reviews are coming very very soon. We shall know soon enough.
RT is amazing for developers and terrible for consumers. Yes, it is the future, but today? Still in its infancy, and not worth the headache.
Would love to see how the same game, e.g. Control, performs on an AMD GPU. But I doubt AMD can beat NVIDIA in ray tracing. They can match the perf, though.
Because ray tracing hardware is still not powerful enough for a full RTX implementation. It will take 3-4 more years for RTX to become mainstream, though. Yeah, I know the next-gen consoles also have RDNA 2 graphics, but how powerful those are remains to be seen.
RTX becomes mainstream next week with the consoles. It will be extremely limited, but we will have it.
The consoles will both usher in RTX and also limit it at the same time. Fun how that works.
Honestly speaking, I wouldn’t call that mainstream, though, because ray tracing hardware is still not powerful enough for a full RTX implementation.
By mainstream, I meant that every GPU, be it low-end, mid-range, or high-end, must give a playable frame rate at all resolutions, at least on mid-to-high settings, if not ultra. Also, we don’t know yet how powerful the PS5 and XSX/S are.
Mainstream just means available to most, or common. Quality is really a different thing.
We’re a long way away from getting RTX at 4K, let alone on every GPU.
We actually basically do know how fast they are. DF did some good breakdowns. The PS5 is roughly 2080 territory or even better. The SX is above that.
A 2080 with RDNA 1? Sorry, this is not possible. The PS5 doesn’t even support mesh shaders, VRS, or sampler feedback.
RDNA 1 with a 2.2 GHz clock speed, yeah okay lol.
This is wrong.
XSX is stock 2080 territory, best case.
PS5 is 2070S territory.
“The Gears 5 benchmark runs at 43 fps on XSX, and 44 fps on an RTX 2080.”
The Xbox Series version uses ray-traced global illumination; the PC version uses lower detail settings until an update planned for later this year.
The gap between a 2070S and a 2080 is much larger than that in actual games. Maybe the PS5 is closer to a regular 2070; it doesn’t matter.
Xbox is on par with a stock 2080, that right there is not up for debate.
You clearly don’t know wtf you’re talking about if you think a Zen 2 CPU is going to bottleneck a game at 60 fps or lower. The XSX does a locked 120 fps in Gears 5 multiplayer, which is almost completely CPU-bound.
Turing TFLOPS are worth more in game performance, as shown by the XSX and 2080 comparison…