We have some new technical data on Navi 2X graphics processors based on the new RDNA 2 architecture. Reddit user stblr has discovered the potential specifications for the Radeon RX 6000 series in Apple’s macOS Big Sur 11 beta code, and they look promising.
The user reverse-engineered this information from drivers present in macOS, and it is likely accurate as well as legitimate, though you should still exercise caution with any leaked data. Some of the specs were already confirmed by AMD in the ROCm 3.8 software update. We now have data on the Compute Unit (CU) counts, boost clocks, and power target values, extracted from the macOS 11 beta, which carries firmware for the AMD Radeon RX 6000 series of cards.
As per rumors, AMD might initially be preparing three Navi 2X variants: Navi 21, Navi 22, and Navi 23. Navi 21 will likely house the RX 6900 XT flagship GPU, which AMD has teased before, whereas the Navi 22 chip should end up in the Radeon RX 6700 XT or RX 6800 XT series of cards, the successor to the RX 5700 XT GPU. The Navi 21 SKU has been codenamed ‘Sienna Cichlid’, whereas the ‘Navy Flounder’ codename could refer to Navi 22.
The higher the Navi suffix, the lower the GPU will be positioned in the hierarchy, which means Navi 22/23 will most likely end up in mainstream and entry-level GPUs.
First up is the Navi 21 GPU (most likely the RX 6900 XT), which will sport 80 CUs. Assuming each CU still carries 64 stream processors (SPs) on RDNA 2, and the CU-to-SP ratio remains the same, we are looking at a total of 5,120 stream processors. According to the leaker, there are two different dies and variants: the Navi 21A silicon has a boost clock of up to 2,050 MHz, while the Navi 21B silicon has a 2,200 MHz boost clock. The power limit varies from 220W to 238W. With a boost clock of 2,200 MHz, the Navi 21B will deliver a shader performance of 22.5 TFLOPs of single-precision compute.
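As a sanity check on that figure, peak FP32 throughput is simply stream processors × 2 ops per clock (a fused multiply-add counts as two) × clock speed. A quick sketch using the leaked 80 CU / 64 SP / 2,200 MHz numbers:

```python
# Peak FP32 throughput = SPs x 2 ops/clock (FMA) x boost clock.
def fp32_tflops(compute_units: int, sp_per_cu: int, boost_mhz: int) -> float:
    sps = compute_units * sp_per_cu
    return sps * 2 * boost_mhz * 1e6 / 1e12

# Navi 21B: 80 CUs x 64 SPs/CU at a 2,200 MHz boost clock.
print(round(fp32_tflops(80, 64, 2200), 1))  # → 22.5
```

The same formula puts the slower Navi 21A die (2,050 MHz) at roughly 21 TFLOPs.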
For comparison, the Ampere GeForce RTX 3080 offers a peak single-precision (FP32) performance of 29.8 TFLOPs, which is 32.4% higher than Navi 21. But Navi 21 does pull ahead of the GeForce RTX 3070, which offers 20.4 TFLOPs. If this card comes in a 16 GB VRAM flavor and is also priced correctly by AMD, then it could be an absolute winner and a strong GPU contender.
The mid-tier Navi 22 GPU (codenamed Navy Flounder), on the other hand, might arrive with 40 CUs, amounting to a total of 2,560 stream processors, the same CU count as found on the Navi 10 GPU. That makes Navi 22 a direct successor to Navi 10.
Navi 22 has an impressive boost clock of 2,500 MHz within a 170W power limit. The Radeon RX 5700 XT, which is based on Navi 10, comes with 2,560 SPs at a boost clock of 1,905 MHz, for a maximum single-precision performance of 9.75 TFLOPs. Navi 22 delivers 12.8 TFLOPs of FP32 shader compute performance, a roughly 31% improvement.
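The same back-of-the-envelope math shows the uplift; the SP count and Navi 22 clock are from the leak, while 1,905 MHz is the RX 5700 XT's official boost clock:

```python
# Peak FP32 TFLOPs = stream processors x 2 ops/clock (FMA) x clock.
def fp32_tflops(sps: int, boost_mhz: int) -> float:
    return sps * 2 * boost_mhz * 1e6 / 1e12

navi22 = fp32_tflops(2560, 2500)  # leaked Navi 22 boost clock
navi10 = fp32_tflops(2560, 1905)  # RX 5700 XT (Navi 10)

print(f"{navi22:.1f} vs {navi10:.2f} TFLOPs, +{(navi22 / navi10 - 1) * 100:.0f}%")
# → 12.8 vs 9.75 TFLOPs, +31%
```

Since the SP counts are identical, the entire gain here comes from the higher clock.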
Lastly, Navi 23 (codenamed Dimgrey Cavefish) would be the entry-level RDNA 2 silicon. The Navi 23 die may end up with 32 CUs, amounting to 2,048 stream processors. Unfortunately, the clock speeds and power limits for Navi 23 weren’t mentioned in Apple’s latest firmware.
Surprisingly, the first RDNA 3-based next-gen Navi 31 GPU is also listed in the macOS Big Sur 11 beta firmware. The GPU appears to feature the same CU count as Navi 21, meaning we are looking at a refresh under a new GPU microarchitecture; the Navi 31 GPU would thus also feature 5,120 stream processors. We expect to see the Navi 31 silicon used in the Radeon RX 7000 series, and also in future Radeon Pro SKUs for Apple Macs.
Right now we don’t know whether Navi 31 will be a proper replacement or just a refresh of Navi 21 though. Here is the method to get this firmware data, courtesy of Reddit user stblr:
- Install macOS 11 beta (you can also use a VM).
- Get this file: /System/Library/Extensions/AMDRadeonX6000HWServices.kext/Contents/PlugIns/AMDRadeonX6000HWLibs.kext/Contents/MacOS/AMDRadeonX6000HWLibs.
- Use a reverse-engineering tool like radare2 to find the offset to the firmware (look for _discovery_v2_navi21).
- Use a tool like dd to extract it. Example: dd skip=47252400 count=1344 if=AMDRadeonX6000HWLibs of=navi21_discovery.bin bs=1.
- Get the relevant values using the definitions from here. There is also a tool that does that automatically.
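For those not comfortable with dd, the extraction step can be sketched in a few lines of Python; the offset and length come from the leaker’s example above and will differ between macOS builds:

```python
# Carve a byte range out of a binary file, equivalent to
# dd skip=OFFSET count=LENGTH bs=1.
def carve(src: str, dst: str, offset: int, length: int) -> None:
    with open(src, "rb") as f:
        f.seek(offset)          # jump to the firmware offset
        blob = f.read(length)   # read exactly `length` bytes
    with open(dst, "wb") as out:
        out.write(blob)

# Offset/length taken from the leaker's dd example (build-specific):
# carve("AMDRadeonX6000HWLibs", "navi21_discovery.bin", 47252400, 1344)
```

The call at the bottom is left commented out because it requires the binary pulled from the macOS 11 beta in the steps above.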
We have also seen pictures of AMD’s Radeon RX 6000 RDNA 2 GPU before, showing triple-fan and dual-fan designs. The triple-fan variant would most probably launch as the RX 6900 series, whereas the dual-fan model might house the RX 6800 or 6700 series family. The dual-fan version also features two 8-pin PCIe power connectors. It will be interesting to see whether the new RDNA 2 cards will be able to compete with NVIDIA’s high-end RTX 3090/3080, or their mid-tier brother, the RTX 3070.
AMD will officially reveal these new GPUs on October 28th. We expect the RX 6000 series to officially debut in November. Stay tuned for more!
Hello, my name is Nick Richardson. I’ve been an avid PC and tech fan since the good old days of the RIVA TNT2 and 3dfx Interactive “Voodoo” gaming cards. I mostly love playing first-person shooters, and I’m a die-hard fan of the FPS genre, going back to the good old Doom and Wolfenstein days.
Music has always been my passion and my roots, but I started gaming “casually” when I was young on Nvidia’s GeForce3 series of cards. I’m by no means a hardcore gamer, but I just love stuff related to PCs, games, and technology in general. I’ve been involved with many indie metal bands worldwide and have helped them promote their albums to record labels. I’m a very broad-minded, down-to-earth guy. Music is my inner expression, and soul.

Looks nice. Now don’t F up the pricing.
80 CUs, and along with them come 80 driver issues at launch… Radeon will never be a good option until AMD fixes the damned drivers.
AMD’s Lisa Su is not AMD’s Raja Koduri!! Jensen is afraid of AMD (Lisa Su), he knows what she did to Intel!! And 80 CUs is f###ing monstrous!!
she has the same degrees that he has
but she is focusing on the CPU division
I think Raja has a different point of view than Lisa. He devoted himself to ATi and (later) AMD, gave them all he could, and reached his limits. Lisa focused more on CPUs because of their profitability, and it’s the better strategy overall: if you make excellent CPUs, you’ll sell your GPUs more easily. Lisa has set traps for Jensen: 1st = RX 6000 series (Big Navi), 2nd = consoles (PS5, Xbox Series X). He’ll fall into one of them!
But there actually aren’t any more big driver issues… I don’t know why people are still saying that; I know no one who has issues now.
Because Nvidia’s cards are working so perfectly right now, right?
If I have to choose between major hardware issues and the driver issues I experienced with my 5700 XT, guess which one I’m taking.
Well, there is a fallacy here. The RTX 3080 theoretically has 29.7 TFLOPs with 8,704 CUDA cores. But each SM can only do either 128 FP32 operations or 64 FP32 plus 64 INT32 operations, not 128 FP32 operations and 64 INT32 operations together, which would actually be hilarious.
So in practice the real TFLOPs figure is half that amount, since not all operations can run together: the 4,352 CUDA cores on the RTX 3080 deliver around 15 TFLOPs with 128 FP32 operations, or 15 TFLOPs with 64 FP32 and 64 INT32 operations. The same goes for the RTX 3090. But theoretically, if all operations could work together, you would get 29.7 and 36 TFLOPs.
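For the record, both numbers in this comment can be reproduced from Ampere’s published layout: the RTX 3080 has 68 SMs, each with 128 FP32 lanes (64 dedicated FP32 plus 64 shared FP32/INT32), at an official boost clock of 1,710 MHz:

```python
# RTX 3080: 68 SMs x 128 FP32 lanes at a 1,710 MHz boost clock.
SMS, LANES, BOOST_HZ = 68, 128, 1_710e6

peak = SMS * LANES * 2 * BOOST_HZ / 1e12           # all lanes on FP32, 2 ops/clock (FMA)
mixed = SMS * (LANES // 2) * 2 * BOOST_HZ / 1e12   # half the lanes busy with INT32

print(f"{peak:.1f} / {mixed:.1f} TFLOPs")  # → 29.8 / 14.9 TFLOPs
```

Real games issue a mix of FP32 and INT32, so sustained throughput lands somewhere between those two figures.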
So the RX 6900 XT with around 22.5 TFLOPs (it can do 64 FP32 and 64 INT32) is quite monstrous and powerful, and for sure overpowers both top Nvidia cards in rasterization, though maybe not in ray tracing, although I heard that DXR 1.1 is also quite good.
Maybe they can even push the clock speeds higher, to 2.5 GHz, to get around 25.6 TFLOPs, which would put the RX 6900 XT far ahead of the RTX 3080 and RTX 3090.
If these specs are real we will really get Nvidia-killers for sure.
I’m not very tech-savvy, but from what I know TFLOPs don’t really tell you how well a card will perform in games.
Take, for instance, the Vega 64 with 12.6 TFLOPs and the 5700 XT with 9.7 TFLOPs. Judging from the TFLOP numbers, the Vega 64 should outperform the 5700 XT, but the 5700 XT is actually noticeably faster in all games.
Well, my friend, not everybody can be as intellectually gifted as you.
“…you can compare TFLOPS, though they only are rough estimates in raw compute rather than final framerate expectation due to different engines handling an architecture differently.”
So you actually agree with what I said in my comment. You can’t estimate from TFLOP numbers how GPUs will perform in games, especially if the GPUs are powered by different architectures.
Yes and no. It depends on how the architecture is built. GCN was built on a wave64 base with 4 SIMD16 vector units (which together formed one execution unit), so each wave needed multiple cycles to issue. With this kind of architecture, short-lived operations all took longer, which resulted in inefficiency in gaming workloads. So GCN had high TFLOPs but was way more inefficient than Pascal and Nvidia’s other architectures.
RDNA is built on a wave32 base with 2 SIMD32 vector units per CU, so it can issue a wave every clock and handles short-lived operations far better than GCN could. And you find plenty of short-lived operations in games, because you need to render a lot of stuff in a short time. So with RDNA, AMD chose, similar to Nvidia, a slimmer GPU design that actually gets the “theoretical” TFLOPs onto the street.
Moreover, GCN was built on 14 nm/7 nm process nodes, while RDNA is built entirely on TSMC’s 7 nm(+); this plays an important role too. RDNA 1 has nearly the same IPC as Turing, and RDNA 2 will have nearly the same IPC as Ampere, because it can be scaled much more efficiently, latencies have been reduced, and so on.
So TFLOPs are really just a theoretical value that gives people an idea of how much raw power a GPU has, but it never tells you whether the GPU can actually put those TFLOPs onto the street efficiently and quickly.
Well, not really, but it is a kind of marketing gag, and not a well-chosen one for those who know about GPU architectures.
Nvidia just redesigned the SMs, adding another set of FP32 units to them. But they didn’t increase the number of SMs on the GPU, so an RTX 3080 still has 4,352 CUDA cores in that sense, not 8,704. It is more of a marketing gag, and they sell it as a unique product, because no one else has it. But you don’t actually need it for rendering stuff in games, because there are no practical situations where just 128 FP32 operations are done.
So Nvidia is deceiving us all a little, but it is not really a scam.
Fingers crossed!
Hey, AMD. I would LOVE to replace my 1080 with one of your new cards, instead of having to get another Nvidia. Give me a reason to do so. That’s all I ask.
Make a great, comparable card, for a comparable price to Nvidia, and I will happily join the AMD GPU family for the next few years. It’s all up to you.
Poor b*stard is going to have to deal with constant crashes because of sh**ty drivers. It doesn’t matter if the card is great; the drivers are what run it.
Better to deal with crashes because of s***ty drivers than s***ty hardware, seeing as the 30x0 cards constantly crash to desktop due to a crap capacitor implementation.
At least drivers can be fixed. Nice try though… I guess?
I have a 5700 XT and never had a crash. But last time I checked, the new 30xx cards are having crashes and black screens.
Damn, seriously? Did you buy it at launch? I had major and minor issues for 4-6 months after release.
Lol and what is Nvidia doing currently? Crashing because of HARDWARE level issues.
I can’t wait to see what they have.
They can NOT afford another launch with terrible driver issues. I bought my 5700 XT pretty much at launch and had major and minor issues for at least 4-6 MONTHS.