
AMD’s next-gen RDNA 2-based big ‘Navi 21’ GPU variants have been leaked, along with the RDNA 1 ‘Navi 10 refresh’ lineup

Written by Metal Messiah

New “Navi 21” GPU device IDs have surfaced on the HardwareLeaks website, which belongs to “rogame” (the well-known hardware leaker on Twitter), giving us some idea of AMD’s plans for its next-gen GPU lineup.

According to “rogame”, AMD is planning several Navi 21 GPU variants powered by the RDNA 2 architecture, alongside a Navi 10 “refresh” lineup based on the older RDNA 1 silicon. As per this leak and other rumors, AMD plans to release a next-generation RDNA 2-based “Big Navi” 2X lineup, as well as RDNA 1 parts offered as a refresh of the existing Radeon RX 5000 series cards.

With the Navi 2X GPU lineup, AMD’s primary focus is on next-gen “enthusiast” and “flagship” offerings to tackle NVIDIA’s existing flagship models such as the GeForce RTX 2080 Ti and the GeForce RTX 2080 SUPER. These might also challenge NVIDIA’s next-gen “Ampere” gaming GPU lineup.

For the high-end AMD Navi 21 gaming cards, four variants are listed. These are going to power high-end enthusiast GPUs; the information is derived from the leaked PCI device IDs.

The four SKUs are: Navi 21 XTX (0x731F:D0), Navi 21 XT (0x731F:D1), Navi 21 XL (0x731F:D3), and Navi 21 XLE (0x731F:DF). Each of these Navi 21 SKUs might replace an existing card in the Radeon RX 5000 series family, though they could target an entirely different performance segment, most probably high-end.
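For illustration, here is a minimal sketch in Python of how these leaked device-ID/revision pairs map to SKU names. The table contents come straight from the leak; the lookup helper itself is hypothetical, not anything from AMD’s drivers.

    # Lookup table built from the leaked PCI IDs: all four SKUs share
    # device ID 0x731F and differ only in the revision byte.
    NAVI21_SKUS = {
        (0x731F, 0xD0): "Navi 21 XTX",
        (0x731F, 0xD1): "Navi 21 XT",
        (0x731F, 0xD3): "Navi 21 XL",
        (0x731F, 0xDF): "Navi 21 XLE",
    }

    def sku_name(device_id: int, revision: int) -> str:
        # Return the leaked SKU name for a device/revision pair, if known.
        return NAVI21_SKUS.get((device_id, revision), "unknown")

    print(sku_name(0x731F, 0xD1))  # -> Navi 21 XT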

The list mentions several cut-down variants of Navi 21, which could mean that AMD is trying to reuse these chips as much as possible. So we could see a new Radeon RX 6600 SKU, or whatever name AMD decides on, featuring the same silicon.

Apart from these gaming graphics cards, there are also two “Radeon Pro” variants based on the Navi 21 RDNA 2 design: Navi 21 Pro-XT (0x731F:10) and Navi 21 Pro-XL (0x731F:12). These might replace existing Pro cards such as the W5700X/W5700 Pro series.

AMD is also preparing at least four variants for next-generation Apple products, and several PCI IDs are listed for the AMD Navi 22 and Navi 23 GPUs as well, as leaked by “rogame”.

Finally, we have the AMD Navi 10 RDNA 1-based gaming refresh lineup. There are at least three Navi 10 Refresh SKUs in the works: Navi 10 XT+ (731F:E1), Navi 10 XM+ (731F:E3), and Navi 10 XTE+ (731F:E7).

According to this leak, the Navi 10 XT+ should replace the Navi 10 XT that powers the current RX 5700 XT, the Navi 10 XM+ should replace the Navi 10 XM that powers the current 5600M, and finally the Navi 10 XTE+ will be a new replacement for the SKU that currently powers the RX 5600 XT.

These will mostly target the mainstream GPU market segment, unlike Big Navi 21, which is aimed at the high-end and enthusiast segments. We can expect these refreshed parts to come in at a much lower price point than the existing lineup of Radeon RX 5000 series cards.

AMD RDNA 2 Navi 21, Navi 22, Navi 23 GPU Device IDs For Radeon RX and Radeon Pro Graphics Cards. (Image Credits: Rogame).

41 thoughts on “AMD’s next-gen RDNA 2-based big ‘Navi 21’ GPU variants have been leaked, along with the RDNA 1 ‘Navi 10 refresh’ lineup”

    1. But what if it’s not BS, man? As a PC gamer, I’m excited if this sh*t is real… and when it comes to PC (and it WILL), it will be the bomb.

      It’s at least nice to think that there is room to grow in that department.

  1. What’s up with all the hype surrounding the PS5’s new SSD? I haven’t paid much attention to any of the tech articles published on this news though.

    Sony’s president Kenichiro Yoshida released the hype engines on the PlayStation 5 at Sony’s recent corporate marketing strategy meeting, where he said that the PS5 will be “100 times faster” than the PS4, lol.

    1. Apparently the throughput of the drive is the hype. John Carmack chimed in on it the other day, stating it can load GPU-formatted data directly into the GPU, giving massive bandwidth for streaming graphics data.

      Sounds great on paper, but I’ll still wait to see it in general use. It’s all hype until it can be proven in everyday use.

      1. I’m skeptical as to how much of a difference it’s actually going to make in practice. And most games are tri-platform, so only PS5 exclusives can be built around the technology.

          1. I’m not an expert on any of this stuff, but my guess is that on PC you could simply cache more resources in system memory (with 32GB probably becoming the norm on gaming PCs within a year). Plus, aren’t the new NVIDIA/AMD cards going to have the ability to leverage system RAM and SSD space directly?

        1. This is not PS5 exclusive.

          It’s now hyped because… money, and the Sony & Epic deal??? Dunno myself.

          What is certain is that it will be present in UE5, and that UE5 will run, better or worse, on all supported platforms; on PC, of course, better.

          My take on this is that they use the faster I/O to do direct storage -> GPU transfers of data that is probably already preformatted to match the GPU’s expected layout, so they can bypass the CPU and main memory. This will result in faster performance even though the GPU itself is not much faster than the previous gen. I think this will be, or already is, available in DX12 and Vulkan.
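          As a rough illustration of that idea (a sketch only, in Python; none of these names correspond to a real graphics API, they are all hypothetical), the difference between the two paths looks something like this:

              # Purely illustrative sketch of the two I/O paths; every name
              # here is hypothetical, not a real graphics API.
              class Gpu:
                  def upload(self, data: bytes) -> None:
                      print(f"uploaded {len(data)} bytes to VRAM over PCIe")

                  def stream_from_storage(self, path: str) -> None:
                      print(f"streamed {path} from SSD straight into VRAM")

              def read_from_disk(path: str) -> bytes:
                  return b"\x00" * 1024  # stand-in for a read into system RAM

              def cpu_decode(data: bytes) -> bytes:
                  return data  # stand-in for the CPU repacking into GPU layout

              def traditional_load(path: str, gpu: Gpu) -> None:
                  data = read_from_disk(path)  # disk -> system RAM
                  decoded = cpu_decode(data)   # CPU reformats the data
                  gpu.upload(decoded)          # system RAM -> VRAM

              def direct_load(path: str, gpu: Gpu) -> None:
                  # Asset already stored in the GPU's expected layout, so it
                  # streams into VRAM, bypassing the CPU + system RAM hop.
                  gpu.stream_from_storage(path)

              traditional_load("asset.bin", Gpu())
              direct_load("asset.bin", Gpu())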

          And here is the kicker: a laptop with an RTX 2080 and an SSD can apparently outperform the PS5 in that UE5 demo. Some more info:

          pcgamer.com/unreal-engine-5-tech-demo-pc-performance/

          PS. Yes, this is big, especially for 3D artists. Having the ability to visualise the end result in real time, even at “low quality”, is a huge time saver and makes the workflow so much faster and easier. For games, definitely; that live preview I mentioned above is nothing more than a real-time engine, and some 3D apps already do it, e.g. Blender with Eevee (which is, IMO, the best preview engine currently available in 3D modelling software).

    2. You should stop saying “soybox”; it makes you look like a silly fanboy, and you seem better than that.

  2. AMD’s GPU division is a joke. They are two years late to the high-end market, and instead of releasing a proper RDNA 2 card, they are refreshing their existing overpriced RDNA 1 cards. AMD is not competitive in the GPU space; they are charging the same money as NVIDIA, and NVIDIA is getting most of the sales.
    And we plebs get stupid cards like the 1650/1650 SUPER and 5500 XT with a 128-bit bus and 4GB of VRAM. The GPU market is so depressing that a GTX 1660 SUPER looks like a good deal compared to other cards.

  3. I just want to know if anyone is going to give us more than an 8GB card in the mid-range ($300-$400).

      1. I’m a little concerned that new games will see a bump in VRAM requirements after the new consoles come out, kind of like how, a year or so after the PS4/One released, 2GB cards on PC suddenly started having stuttering issues. My 1070 will be over 4 years old when these cards come out, and it seems weird that the VRAM would get no bump at all after all that time.

        1. Yeah, the VRAM requirements are surely going to increase for some AAA games, if not all.

          But I still think 6GB of VRAM is the sweet spot for most of today’s AAA games, though having extra VRAM never hurts, since it gives us more headroom to crank up the in-game graphics settings along with the screen resolution.

          All of this really depends on the game’s engine and the game being played, I think. I’m not sure whether the majority of future PC titles are going to be more VRAM-hungry.

          But let’s see what the next-gen consoles have in store for us.

  4. Honestly speaking, I have my doubts. When it comes to high-end “enthusiast” offerings, I’m pretty sure NVIDIA has a monster GPU up its sleeve to counter any model AMD offers.

    Though if AMD can prove me wrong, I would be more than happy to give them my money. But right now, I’m skeptical.

    Also, you made a very valid point regarding AMD’s new offering “crushing” the 2080 Ti, a beast of a GPU. Chances are slim, and even if they do crush it, AMD is too late to the party, imho.

    “Crush” seems like a far cry. It’s more like AMD’s new high-end GPU finally matches the RTX 2080 Ti’s performance in games, lol. 😛

      1. But people still buy them. HBU once made a comparison of NVIDIA vs AMD GPU sales, and some people pointed out they were surprised that for every three 5700s sold, one 2080 Ti was sold. He thought the ratio might be more like 10:1 or even bigger, but in reality it might be much smaller, like 3:1.

    1. GUARANTEE THIS TOO!!! The 1080 Ti came out 3 years ago. Guarantee they have some blazing amazing GPUs just sitting there waiting for the competition to show up, while gouging everyone’s wallets at the same time.

    2. I just wonder what Big Navi aka RDNA 2 will be about, since isn’t RDNA 1 kind of a hybrid of GCN and the new arch?

      As for ray tracing, AMD could get the upper hand, since they will keep things open as usual. And the biggest thing is the Unreal Engine 5. If it can run that well on AMD-based consoles, then that tells me the once pro-NVIDIA engine will now side more with AMD…

      What I am getting at is that ray tracing + Unreal 5 could be a big deal for AMD over the next few years, much like the RE Engine is.

      1. I think RDNA 2 is a totally refined and overhauled arch from AMD, since it will also have hardware-level support for ray tracing.

        But this is all based on rumors. We really don’t know if RDNA 2 is just a refresh of RDNA 1 or a totally new architecture. But AMD really needs to step up their game if they want to remain competitive.

        I have a very bad gut feeling that the next-gen Ampere lineup and its RTX cards are going to be far more powerful than Turing, and they might even decimate AMD’s offerings.

        But no doubt, Ampere RTX is going to be a graphics powerhouse.

        1. I don’t think Ampere will be that amazing. NVIDIA follows their business model: baby steps each arch, not going all out. If anything it will be like how they treated GameWorks, with Pascal > Maxwell > Kepler dealing with tessellation. If anything, Ampere will have better raster than Turing and slightly better RT, but nothing mind-blowing.

          I mean, NVIDIA is a company that took full advantage of being first over AMD with a DX spec for a change, and we all saw that with the “starting price” of $999, LMAO, for a 2080 Ti.

          And if Big Navi is the full RDNA arch with no GCN involved, I think we will see pretty amazing things from AMD, as long as they can have launch drivers work like they should.

          1. Hey, YES, I think the RDNA 2 arch won’t have any GCN features. Wasn’t RDNA 1 the same, a completely NEW architecture from AMD?

            I think Vega was the last to use GCN, if I’m not mistaken. According to statements from AMD, RDNA 2 will be a “refresh” of the RDNA 1 architecture.

            Additional features confirmed by AMD include real-time, hardware-accelerated ray tracing and variable rate shading.

          2. Yeah, I believe so, but it’s very awkward how its driver support came into play. Hopefully this time around AMD can stamp out any driver issues they had, and also give better low-level API performance with RDNA 2, since the 5700 XT and below were not as amazing next to previous GCN as I thought they would be.

  5. BTW, dual-GPU setups are sort of “dead”. I know NV still offers SLI, but not many games “scale” well on such a setup.

    SLI is kind of dead, at least to my knowledge. Implementing SLI/CrossFire requires a lot of coding, as well as resources and time. The game developers need to make sure that the game’s engine is going to scale well.

    Okay. But apart from this, NVLink might take the place of SLI in the near future, mostly in the DX12 API, in my opinion. But this is not 100% confirmed yet. These two technologies are similar, but they still differ in many aspects.

    I just hope NVLink brings something new to the table in the future, especially in DX12 multi-GPU mode.

    You guys must be aware that NVIDIA introduced a new interface called NVLink with the consumer Turing GPUs, replacing the old SLI. Obviously, it’s the same kind of multi-GPU bridge that can be used for gaming, but the interface has many times the bandwidth of an SLI connection.

    Since NVLink can be used for direct memory access between cards, rather than going through the PCIe slots (which created a huge bottleneck with SLI), I think NVLink might be the future, if we go by NVIDIA’s theory.

    Though not many games might be able to reap the full benefits of NVLink, because the same thing happened with SLI. SLI bridges mostly had a bandwidth of roughly 1GB/s (normal bridge) or 2GB/s (HB bridge).

    NVLink on Turing cards can do 25GB/s in each direction (50GB/s total) on the RTX 2080, while according to NVIDIA the RTX 2080 Ti’s link offers 50GB/s each way and 100GB/s in total. But all of this will only help if games take advantage of this new multi-GPU feature, provided the game developers actually implement it.
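    To put those bridge numbers in perspective, here is a quick back-of-the-envelope sketch in Python (the 4K framebuffer size is just an assumed example, and the bandwidth figures are the rough ones quoted above):

        # Rough transfer time for moving one 4K RGBA framebuffer
        # (3840 x 2160 x 4 bytes, about 33 MB) across each bridge.
        FRAME_BYTES = 3840 * 2160 * 4

        bridges_gbps = {  # approximate one-way bandwidth in GB/s
            "SLI (standard)": 1,
            "SLI HB": 2,
            "NVLink (RTX 2080)": 25,
            "NVLink (RTX 2080 Ti)": 50,
        }

        for name, gbps in bridges_gbps.items():
            ms = FRAME_BYTES / (gbps * 1e9) * 1e3
            print(f"{name}: {ms:.2f} ms per frame")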

    IMO, the main advantage of NVLink is that it might help with the peer-to-peer interface and VRAM stacking, because the GPUs are essentially much closer together now, which also brings the latency of a GPU-to-GPU transfer way down. So unlike SLI, where a transfer’s latency had to go through PCIe as well as memory, NVLink behaves in a different manner.

    We can think of it as an app that looks at one GPU, then looks at another GPU and does something else at the same time. So it seems NVLink will be the future when it comes to multi-GPU setups, but sadly only in the high-end market segment, as other Turing and Ampere cards will lack NVLink support.

    But again, like I said before, all of this will depend on how well the game’s engine benefits from a future multi-GPU setup. Also, for NVLink to help with VRAM stacking, the two GPUs should support split-frame rendering (SFR), unlike the alternate-frame rendering (AFR) mode mostly used in SLI, in which each GPU used its own frame buffer/VRAM that never got added together/stacked.

    In theory:

    In AFR, each GPU renders every other frame (alternating between odd and even frames).
    In SFR, each GPU renders half of every frame (a top/bottom or other planar split); see the sketch below.
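    As a toy illustration of the difference (a sketch only, in Python; real drivers split work at a much lower level than this), here is how work would be divided between two GPUs under each mode:

        # Toy illustration of AFR vs SFR work division between two GPUs.
        frames = range(6)  # frame indices 0..5

        # AFR: whole frames alternate between the GPUs (even/odd).
        afr = {f: f"GPU{f % 2}" for f in frames}

        # SFR: every frame is split, e.g. top half on GPU0, bottom on GPU1.
        sfr = {f: {"top": "GPU0", "bottom": "GPU1"} for f in frames}

        print("AFR:", afr)  # each GPU renders every other full frame
        print("SFR:", sfr)  # both GPUs contribute to every frame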

    So I think NVLink should also help with VRAM stacking, though we need to see how fully this gets implemented in most games, in either DX12 or Vulkan mode. Apart from this, even the price of an NVLink bridge is kind of high, so this can be a very expensive multi-GPU setup, and not many gamers might be able to afford it. Can’t comment on NVLink’s performance though.

    Just my 2 cents! Sorry to go a bit off topic, btw.

    1. Even with all that bandwidth, NVLink is still subpar tech, because data transfer over that bridge still takes several microseconds, which introduces a lot of problems with resource management. Until they sync the cards through PCIe, multi-GPU gaming is dead in the water, simply because devs refuse to mess with it due to either budget or time constraints.

      I’m saying this as someone who had multi-GPU setups from 2007 until the 1080 Ti; when I saw the lack of support, I sold one, and I won’t be returning until I see commitment from both GPU manufacturers and developers.

    2. Well said. I’m so freaking tired of ALWAYS getting this overhyped BS when a new console gen is coming. And right now these console fanboys (Sony’s being the worst) are getting dumber by the day. They just swallow every sh*t the usual marketing hype propaganda puts out!

      Sad to see that there are so many people out there that don’t have a f*cking clue… LOL!

      Oh, and yeah, about that UE5 demo: a laptop 2080 (roughly equal to a 2060S or so) was running it better than the weak AMD POS that will be in it. I mean, come on, it’s not even funny… LOL!

      At least MS put slightly better hardware in theirs, and they will probably be able to do some RT.

      Anyway, I will buy a PS5 though, just for some of Sony’s exclusives, as I always do. But in the future I guess I won’t have to buy these underpowered crap toyboxes, as I’m sure even Sony will release all their games on PC as well. There are rumours that Bloodborne is on the way. Damn, that makes me happy, as it is one of the few PS games that’s totally awesome. And playing it the way it should be played, at high fps, will be fantastic!

      Anyway, I can’t wait for NVIDIA Ampere. My newly bought LG C9 OLED, which is on its way to me right now, will need that HDMI 2.1 Ampere 😀

      I’m sure even the 3060 will beat the living sh*t out of both the Xbox and PS5 together and sh*t them out 😀

    3. To my knowledge, the one that allows for VRAM stacking was NVLink itself. For traditional multi-GPU, AFR will always be preferable to SFR due to scaling, and from there GPU makers tried to address the shortcomings of AFR over the years, like introducing hardware-based solutions (sorry, I forget the exact names) to minimize microstutter and help games produce much better frametimes. With Mantle, AMD tried to reintroduce the use of SFR, but as seen in games like Civilization: Beyond Earth, both the game developer and AMD still had no solution for SFR’s terrible scaling.

      If GPU makers still intend to push multi-GPU, they need a new multi-GPU tech that is neither AFR nor SFR and that works much better with the new generation of game engines. The weakness of AFR is that the tech does not work with many modern rendering methods. So far we have heard NVIDIA is working on brand new multi-GPU tech to be used in tandem with MCM-type GPUs; whether it will benefit regular multi-GPU as well is not known at this point. But all the effort is useless if game engine developers never build their games to work well with a multi-GPU ecosystem. The funny thing is we saw heavy work on game engines to take advantage of multi-core CPUs (even before DX12/Vulkan was a thing), but the same effort never happened with GPUs.

    4. Multi-GPU will come back, in my opinion, but not in the way we are used to with separate cards.
      I think after Big Navi and Ampere we might start hearing more about MCM for GPUs. Die shrinks are getting harder and more costly; the obvious solution is to do what they are doing with CPUs and put more on one package. All of them, including Intel, have been working on MCM for many years. I remember reading articles a few years back about it and the difficulties of getting it to work, mostly on the software side of things.
      Honestly, I can’t wait. I had SLI for several builds, and it’s only with this latest 2080 Ti that I held back from going SLI again, mainly because the price here in Australia was so ridiculous just to buy one card.

      1. I’ve kind of been wondering if we’ll ever see “add-on” cards like the PhysX stuff back in the day. Like a secondary card just for ray-tracing acceleration. Heck, if they just let you use a second card to add more ray-tracing horsepower to your primary card, literally nothing else, people would probably buy a second card just for that.

  6. Dude, console gaming was never about crushing the PC or graphical leaps. Console gaming is about quality gaming on a budget.
