According to a reliable source over at Igor’s Lab, Nvidia is planning to release at least three different high-end variants in the RTX 3000-series Ampere lineup: the RTX 3080, RTX 3080 (Ti/Super), and RTX 3090 (Ti/Super). Some rumored specs had also been shared earlier by Twitter user kopite7kimi. All three variants are based on the same PG132 board/PCB and are different bins of the same “GA102” chip.
Igor’s Lab has once again highlighted that the flagship RTX 3090 GPU will feature a Total Graphics Power (TGP) of 350 Watts, and has now shared a complete breakdown of the estimated power consumption and losses of this flagship Ampere card.
According to Igor, the RTX 3090 (or whatever name NVIDIA goes for) will have a TGP of 350 Watts and a GPU-core TDP of 230 Watts. This is a huge increase in power consumption, and it looks like Nvidia is trying to squeeze every ounce of performance from the GA102 chip and the PCB. Shifting to the 7nm fabrication node should actually help lower the overall power consumption of Ampere cards, but it looks like Nvidia might trade efficiency for brute GPU horsepower.
That’s a “total graphics power” of 350 Watts, and if we compare this value with the previous-gen flagship RTX 2080 Ti, which had a TGP of 260 Watts on some of the custom variants, we are looking at a noticeably higher overall power draw.
Anyway, let us see the full breakdown and dissection of the TGP. The RTX 3090’s GA102 GPU core alone will draw roughly 230 Watts, with the 24 GB of memory drawing around 60 Watts. The MOSFETs, inductors, and capacitors will consume a further 30-34 Watts, and 7 Watts goes to the fan. We can expect PCB losses to be in the range of 15 Watts, and the input section/AUX consumption should hover around 4 Watts. That adds up to a total graphics power of roughly 350 Watts, and this is for the reference Founders Edition SKU. Expect custom AIB cards to go even higher, depending on the cooling and PCB.
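As a quick sanity check, here is a minimal sketch that simply sums the leaked per-component estimates quoted above; the figures are the rumored values, not official specifications:

```python
# Sum of the rumored RTX 3090 power-budget components (values in watts).
# These are the leaked estimates quoted above, not official figures.
tgp_breakdown_watts = {
    "GA102 GPU core": 230,
    "24 GB GDDR6X memory": 60,
    "MOSFETs / inductors / caps": 32,  # the leak quotes a 30-34 W range
    "Fan": 7,
    "PCB losses": 15,
    "Input section / AUX": 4,
}

total = sum(tgp_breakdown_watts.values())
print(f"Estimated TGP: ~{total} W")  # ~348 W, i.e. roughly the rumored 350 W
```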
Igor’s Lab claims all three GeForce Ampere Founders Edition cards will be based on the same PG132 circuit board, with the main differences being the VRAM and board power. With a die size of around 700 mm² and a transistor count of 45 billion, the GA102 is indeed a very big chip, which might also require some serious cooling, at least for the high-end flagship models.
The flagship RTX 3090 SKU will end up with 24 GB of double-sided GDDR6X VRAM on a 384-bit bus, and will have a 350W TGP, as per the above speculation. We don’t know for sure whether the RTX 3090 will be a direct replacement for the TITAN GPU. Sure, this GPU supports NVLink and has double the VRAM, but that doesn’t confirm whether Nvidia plans to drop the TITAN nomenclature. Then again, the current-gen TITAN already carries the RTX branding.
Hello, my name is Nick Richardson. I’ve been an avid PC and tech fan since the good old days of the RIVA TNT2 and 3dfx Interactive “Voodoo” gaming cards. I mostly play first-person shooters, and I’m a die-hard fan of the FPS genre, going back to the good old Doom and Wolfenstein days.
MUSIC has always been my passion and my roots, but I started gaming “casually” when I was young on Nvidia’s GeForce3 series of cards. I’m by no means a hardcore gamer, but I just love stuff related to PCs, games, and technology in general. I’ve been involved with many indie metal bands worldwide and have helped them promote their albums to record labels. I’m a very broad-minded, down-to-earth guy. MUSIC is my inner expression, and soul.

If it’s above $800 they can stick it where the sun doesn’t shine.
Not sure which GPU you are referring to, but if it’s the 3080 it may be around $700, while the 3080 Ti will probably be at least $1,200. If the 3090 is the replacement for the RTX Titan then probably $2,500.
Most likely the 3090 will appeal to people who need a card for both work and gaming.
The 3080 (non-Ti) should be faster than the 2080 Ti, and it will only appeal to gamers using 4K monitors or 120Hz/144Hz monitors, and that’s assuming they have the CPU power to feed the GPU with that many draw calls.
I expect the 3060 will be what most people will buy and it will most likely be more than enough for 1080p or 1440p gamers. Maybe a 3070 for 1440p gamers but we have reached a point where GPUs are incredibly fast.
$2,500 seems reasonable for a professional enterprise GPU. Still a bit high imo.
Even $1,200 is too much for a card, to be honest. I guess most of us will need to stick with the lower-end models like the 3050 or 3060.
The vast majority of gamers buy entry-level up to midrange GPUs, so I agree. According to the Steam Hardware Survey, less than 1% are running a 2080 or 2080 Ti.
The majority of gamers are running 1080p and only 2% are running 4K. 4K monitors and TVs have really come down in price, but they are still rarely used because they need at least a $700 GPU to drive them.
Yeah, 1080p is not going anywhere. It’s here to stay for a long time, until the market gets saturated with 2K and 4K screens.
According to the Steam Hardware Survey 1080p is the most used resolution with 63.5% reporting they use 1080p. The second place is 1366 x 768 with 10.9% reporting that resolution but that’s a standard laptop resolution.
Stay tuned.
3 more juicy and important tech articles are going to go live soon, lol. John is a bit busy since it’s Sunday, hence the delay in publishing them.
This card will have great support for DXR 1.1
The first generation of RTX cards was created just as a proof of concept for DXR. When it was designed, DXR was only an idea in the heads of Nvidia and MS; there was no library and no games. MS released DXR 1.0 at GDC 2018, and a few months later Nvidia released the RTX cards. Developers started making games and sent feedback.
Based on that early feedback, MS updated DXR to 1.1 in 2020. Technologies from the Nvidia cards like DXR, DirectML (DLSS), VRS, and Mesh Shaders were added as a core part of DX12 Ultimate and are also used by AMD on PC and Xbox (the RDNA variant for the PS5 doesn’t use those features and instead uses Primitive Shaders).
I believe that Ampere will be the first GPU designed for DX12 Ultimate in its final form, not some early draft like in 2018. It is designed and optimized for a real library and real games, so DXR performance should be a lot better.
If someone wants to read directly from the source about DXR and Sampler Feedback, then you should go to the Twitter account of James Stanard. He works on Graphics Optimization R&D and as an Engine Architect on the DirectX team. He is the best source for information about DX12 Ultimate and the new features of compatible hardware on PC (Ampere, RDNA2) and Xbox. On his Twitter he talks a lot with game developers and users, trying to explain how the new DX12 Ultimate works (both software and hardware).
twitter.com/JamesStanard/with_replies
Insane amount of cores tho
That’s a huge power increase for the board, for sure. I wonder whether the lower-end and mid-range mainstream cards are also going to feature this high a TDP/TGP?
But these cards are going to be an AMD-killer for sure. I doubt AMD can even manage to come close to their perf, even with next-gen RDNA 2 cards.
According to the GPU database at TPU, the GA102 will be a little smaller than the TU102 used in the Turing Titan, but it will have 45 billion transistors versus 18.6 billion in the RTX Titan GPU. The GA102 will have 7552 CUDA cores versus the Turing Titan chip’s 4608 CUDA cores.
Interestingly the number of Tensor cores is down from the Turing Titan but the number of RT cores will increase with the Ampere Titan.
TITAN is actually a luxury product though. I’m thinking the RTX 3090 will replace the Titan nomenclature from previous-gen.
Good TPU finding though. I haven’t properly checked and compared the specs of these new RTX cards. But 7552 CUDA cores means a monster of a GPU.
3090 = Titan
this sounds right. At this point why call it “titan”. Just slap a number on it and all good 😉
You see, actually NVIDIA’s *90 series nomenclature has historically been reserved only for dual-chip graphics cards.
But since the Titan Z, which launched back in 2014, NVIDIA hasn’t made any dual-chip card. The flagship spot has since been taken by the Ti-series cards such as the current RTX 2080 Ti. The Ti offerings are a step above the standard non-Ti offerings, powered by a better GPU config, more memory, and better performance.
But the number 9 seems to be the indicator of ultra-enthusiast products these days. We have seen Ryzen 9 and the Threadripper 3990X, and the Core i9 series from Intel.
NVIDIA could drop the Ti branding in favor of bringing the “*90” series nomenclature back to its cards, and there would still be room for NVIDIA to release a faster Ti variant of the RTX 3080 without resorting to a lengthy name such as the ‘SUPER’ series.
Yeah, I remember having a 690 and feeling overpowered, but man, SLI and Crossfire support keeps going downhill, and the way it’s going I don’t feel like dual GPUs are the solution nowadays. It’s better if they use the *90 for the ultra-enthusiast tier instead of a potential dual GPU/PCB. I don’t like the name Titan anyway 🙂
True. SLI and CFX solutions don’t matter too much these days. Dual-GPU setup is sort of “dead”. I know NV still offers us SLI, but not many Games “scale” well on such a setup.
SLI is kind of dead, at least to my knowledge. Implementing SLI/CFX requires a lot of coding, as well as resources and time. The game developers need to make sure the game’s engine is going to scale well.
Okay. But apart from this, NVLINK might take the place of SLI in the near future, mostly under the DX12 API, in my opinion. But this is not 100% confirmed yet. The two technologies are broadly similar, but they still differ in many aspects.
I just hope NVLINK brings something new to the table in the coming future, especially in DX12 multi-gpu mode.
You guys must be aware that Nvidia introduced a new interface called NVLink with the consumer Turing GPUs, replacing the old SLI. Obviously, it’s the same multi-GPU bridge concept which can be used for gaming, but the interface has many times the bandwidth of an SLI connection.
Since NVLink can be used for direct memory access between cards, rather than going through the PCIe slots (which was creating a huge bottleneck with SLI), I think NVLink might be the future, if we go by Nvidia’s theory.
Though not many games might be able to reap the full benefits of NVLink, because the same thing happened with SLI. SLI bridges mostly had a bandwidth of about 1 GB/s (normal bridge) and 2 GB/s (for the HB bridge), as a rough estimate.
NVLink on Turing cards can do 25 GB/s one way, or 50 GB/s in total; on the top cards with two links, Nvidia quotes 50 GB/s one way and 100 GB/s in total. But all of this will only help if games actually take advantage of this new multi-GPU feature, provided the game developers implement it.
IMO, the main advantage of NVLink is that it might help with the peer-to-peer interface and VRAM stacking, because the GPUs are essentially much more tightly coupled now, which also brings the latency of a GPU-to-GPU transfer way down. So unlike SLI, where a transfer had to go through PCIe as well as memory, NVLink behaves in a different manner.
We can think of it as an app that works on one GPU and, at the same time, does something else on the other GPU. So it seems NVLink will be the future when it comes to multi-GPU setups, but sadly ONLY in the high-end market segment, as the other Turing and Ampere cards will lack NVLink support.
But again, like I said before, all of this will depend on how well the game’s engine benefits from a future multi-GPU setup. Also, assuming NVLink is to help with VRAM stacking, the two GPUs should support Split Frame Rendering (SFR), unlike the Alternate Frame Rendering (AFR) mode used mostly with SLI, in which each GPU used its own frame buffer/VRAM and the memory never got added/stacked.
In theory:
In AFR, each GPU renders every other frame (alternating odd and even frames).
In SFR, each GPU renders half of every frame (a top/bottom or similar plane division).
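If it helps, here is a purely illustrative sketch of how the two modes divide work between two GPUs; the function names and the simple 50/50 split are hypothetical, since the driver and engine decide this internally:

```python
# Purely illustrative sketch of how AFR and SFR divide work between two GPUs.
# The function names and the fixed 50/50 split are hypothetical examples.

def afr_assignment(frame_index: int) -> int:
    """Alternate Frame Rendering: whole frames alternate between GPU 0 and GPU 1."""
    return frame_index % 2  # even frames -> GPU 0, odd frames -> GPU 1

def sfr_assignment(scanline: int, frame_height: int) -> int:
    """Split Frame Rendering: each frame is split, e.g. top half on GPU 0, bottom half on GPU 1."""
    return 0 if scanline < frame_height // 2 else 1

if __name__ == "__main__":
    print([afr_assignment(f) for f in range(6)])                    # [0, 1, 0, 1, 0, 1]
    print([sfr_assignment(y, 1080) for y in (0, 539, 540, 1079)])   # [0, 0, 1, 1]
```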
So I think NVLink should also help with VRAM stacking, though we need to see how this gets fully implemented in most games, in either DX12 or Vulkan mode. Apart from this, even the price of an NVLink bridge is kind of high, so this can be a very expensive multi-GPU setup, and not many gamers might be able to afford it. Can’t comment on NVLink performance though.
Just my 2 cents ! Sorry to go a bit off topic, btw.
The tensor cores in Ampere are most likely more complex and capable than the original tensor cores used in Volta and Turing. Looking at Jensen’s GTC presentation, I suspect the tensor cores end up being much bigger (hence using more die space), but once properly used they can propel certain workloads by several fold. If Nvidia can apply that to gaming workloads, not just AI workloads, that will be interesting, because the principle behind their A100 FP64 acceleration is like this: traditional FP64 performance is around 9.2 TFLOPS, but once accelerated it can go as high as 20 TFLOPS. And that is not an AI-related workload at all (no AI stuff, from training to inference, uses FP64).
24 GB, that’s Titan level, right?
Yes the Turing Titan card came with 24 GB VRAM. That’s a lot more than any gamer will need even at 4K with max settings. The 2080 Ti came with 11 GB VRAM. That’s why I think the GA102 will be the replacement for the Turing Titan.
According to the TPU database the same chip will be used in the 3080 Ti, but it will come with 12 GB of VRAM, and it looks like it’s going to be cut down, with fewer CUDA cores. They are saying that the 3080 Ti will launch at $999.
The Titans are really for people who use their card for work and gaming.
What are PCB losses, exactly?
GTX 480
I will never buy a top-end card again…
Wow, Nvidia is really pulling out all the stops to make sure they don’t lose the performance crown to RDNA 2.0.
Sadly, I expect Nvidia to stay on top.
The saddest thing would be if AMD doesn’t even intend to go all out. Nvidia hears some good stuff about AMD and prepares accordingly. It happened with Polaris. It happened with Vega.
So, if I want to use an i7-10700 (non-K) + 16 GB DDR4 + 3 SATA3 HDDs + 2 USB3 HDDs + 2 DVD-RW drives, but I want an RTX 3080, the PSU must be at least a Corsair/Seasonic 750 Watt…??
Even 650 W is enough, but 750 W is much better. My advice: build Ryzen, not Intel… it’s garbage.
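For what it’s worth, here is a rough back-of-the-envelope sketch for that build; every per-component wattage below is a ballpark assumption on my part, not a measured value, so treat the result as a sanity check only:

```python
# Rough back-of-the-envelope PSU estimate for the build described above.
# All per-component wattages are ballpark assumptions, not measured values.
estimated_draw_watts = {
    "i7-10700 (non-K, under load)": 125,   # can spike higher during short boost periods
    "RTX 3080 (rumored TGP)": 320,
    "3x SATA HDD": 3 * 10,
    "2x USB3 HDD": 2 * 10,
    "2x DVD-RW": 2 * 25,
    "Motherboard, RAM, fans, misc": 60,
}

total = sum(estimated_draw_watts.values())
headroom = 1.25  # keep the PSU well under full load for efficiency and longevity
print(f"Estimated peak draw: ~{total} W")                     # ~605 W
print(f"Suggested PSU size: ~{round(total * headroom, -1):.0f} W")  # ~760 W, so 750 W is a comfortable fit
```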