NVIDIA GeForce RTX 4090 feature

Here’s what’s happened to the 12VHPWR power cable of our NVIDIA RTX 4090 after two years of continuous work

After the stories that surfaced online about melting 12VHPWR power cables on the RTX 5090 and the RTX 4090, I decided to check the one we've been using these past two years with our RTX 4090. So, what has happened to that power cable? Has it melted? Let's find out.

Before continuing, I want to make it crystal clear that we've been using the official power cable for the RTX 4090 for over two years. We haven't removed it when testing other GPUs; for benchmarking the RTX 3080, for instance, we used that card's own power cable. The same applies to the RTX 5080 and the RTX 5090. Moreover, the power cable is still working, and we haven't encountered any stability issues.

Here are the full specs of our main PC system.

AMD Ryzen 9 7950X3D
Corsair CPU Cooler H170i iCUE Elite
G.Skill Trident Z5 Neo RGB 32GB DDR5 RAM at 6000MHz
NVIDIA GeForce RTX 4090 Founders Edition
Corsair PSU HX Series 1200W
Corsair 7000X iCUE RGB TG Full Tower
Gigabyte Motherboard X670E AORUS MASTER
Windows 10 64-bit

So, here are some photos of the power cable. As you can see, it shows signs of degradation. The plastic around one connector has slightly melted. So yes, the melting issues are real.

[Photos 1-4: NVIDIA RTX 4090 power cable melting issues]

Since some of you may have trouble seeing the melting issue, here are some additional closer shots. You can clearly see it here. If I had to guess, the plastic melted and then "broke".

[Photos 5-7: NVIDIA RTX 4090 power cable melting issues, close-ups]

As I said, the power cable and the RTX 4090 GPU have been working, and still work, without issues. And yes, when I installed the cable, I made sure it was plugged in properly. Moreover, the melted connector is not as bad as some you might have seen online. Still, we can all agree that there are things that need to be improved here.

For science, I also tried the power cable with the RTX 5090. I did not experience any crashes or stability issues. The RTX 5090 worked fine, and it ran games at full speed without any throttling issues or crashes.

So, at least for now, even though the plastic around the connector has melted, the power cable works. And in case you're wondering: no, during all these years of benchmarks, I have never removed the RTX 4090's power cable to use it with any other GPU. For the other GPUs (RTX 3080, RTX 5080 and RTX 5090), I'm using their own official cables. I don't know what may happen in the next two or three years; I'll keep using this power cable whenever we test the RTX 4090. So, I guess we'll have to wait and see.

Right now, the power cables of the RTX 5080 and the RTX 5090 are completely fine, without any signs of degradation. Then again, we've only used these GPUs for a month. So, things might change in one or two years.

We've already informed NVIDIA about our findings. Plus, NVIDIA is investigating the melted RTX 5090 cable reports that surfaced earlier this month.

Finally, here is an interesting video about the evolution of the NVIDIA 12VHPWR connector. I don’t know if this is 100% legit. However, it falls in line with what we’ve all seen so far. Der8auer also had an interesting video about this. So, make sure to watch both of them.

Stay tuned for more!

How Nvidia made the 12VHPWR connector even worse.

12VHPWR on RTX 5090 is Extremely Concerning

46 thoughts on “Here’s what’s happened to the 12VHPWR power cable of our NVIDIA RTX 4090 after two years of continuous work”

  1. "see this baby?"

    -Taps 1080 Ti

    "they dont make them like this anymore"

    "Pfft mine is faster"

    "and it wont be in a few years"

  2. All owners of a 4090 knew this.
    The first thing to do is to replace that bad adapter.
    Major fire hazard! Isn't the same one included with the 5090?

  3. I put a 1000 watt PSU in my system with the adapter. I moved it into a new system recently and it looked fine. I had a 5080, but it wouldn't load the display; I tried different displays and both the HDMI and DisplayPort outputs. Took it back and got a 4090 prebuilt system because I'm not impressed with the new gen cards. I might build a 9070 system.

  4. Yeah I was thinking of buying a PSU with a 16 pin power connector for my upcoming ITX build but I think I might do the same thing as you if they're going to be this bad this consistently.

    1. The issue isn't with the FE design specifically; it's Nvidia's core power delivery design, which affects all cards from the 40 and 50 series. The problem is they treat the entire connector as a single 12V input, so the GPU has no way to detect or correct for an unbalanced load (too much current over one pin). Asus added per-pin power monitoring to the 4090 Matrix and 50 series Astral cards, but they still can't do load balancing due to Nvidia's power delivery design. The 30 series, despite using the same power connector, treated it as 3 separate power inputs with 2 pairs of pins each and did load balancing – thus there have been no reports of failures with those cards, even the power-guzzling 3090 Ti.

      Now that said, unless the user really screws up, 80 series cards and lower should be fine even if the load is unbalanced since there shouldn't be enough current on a single pin to generate enough heat to cause any damage.
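The per-pin monitoring idea described in the comment above can be sketched in a few lines. This is a toy illustration only, not how any card actually implements it (real cards sense current in hardware, via shunt resistors); the 9.5A figure is the commonly cited 12VHPWR per-pin rating, and the sample currents are made up.

```python
# Toy sketch of per-pin current monitoring (illustrative only; real cards
# do this with shunt resistors in hardware, not in software like this).

PIN_LIMIT_A = 9.5  # commonly cited per-pin rating of the 12VHPWR connector


def flag_overloaded_pins(pin_currents, limit=PIN_LIMIT_A):
    """Return the indices of 12 V pins whose current exceeds the limit."""
    return [i for i, amps in enumerate(pin_currents) if amps > limit]


# Balanced load: 600 W / 12 V spread evenly across six pins (~8.33 A each).
print(flag_overloaded_pins([8.33] * 6))  # -> []

# Unbalanced load like the ones measured on camera: one pin carries 22 A.
print(flag_overloaded_pins([22.0, 2.0, 2.5, 8.0, 8.0, 7.5]))  # -> [0]
```

A card with per-pin sensing can at least warn or throttle when one pin runs hot; as the commenter notes, without load balancing in the power delivery design it still cannot redistribute the current.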

  5. I bought a Gigabyte Gaming OC 4090 at launch and used the adapter that came with it. I recently looked at it when I pulled it out to sell the card and there were no issues whatsoever. Looked like new. Sucks if this happens to people though.

  6. Whoever at PCI-SIG thought this design was a "good idea" needs to be ejected from the PCI-SIG conference completely. Yes, AMD and Intel clearly had input on this, but their reluctance to use 12VHPWR/12V-2X6 in either implementation, outside of a niche few designs from OEMs like ASRock, speaks volumes about their faith in this design and specification for power delivery.

    The next problem is the Redditors who stick up for Nvidia like they are obligated to. These people need to be called out for what is obviously complete bullshit, lies, manipulation, and silencing of criticism towards what is an obvious problem that has perpetuated over two versions of the same design, the second meant to fix, the original, which in turn, clearly, hasn't fixed the problem. The amount of denial from both Nvidia and their fanbois is absurd. Nvidia clearly has lied to their community and their investors both on this issue. They've also clearly lied to Quality Check and Assessment authorities to determine if an electronic device is safe for usage. And obviously, they've lied to their investors to say the "RTX 5000 series is problem free". PCI-SIG has lied about the specifications of this cable also. Clearly this isn't a cable designed for 12V, 23A, 600W power. They can say it is, but clearly it's not. If 23A is melting this cable like butter in a skillet, then it's either improperly gauged, or the fault tolerance should be set far lower. Home A/C Units with heavy duty 240v power lines use only about 16A. This is almost 75% more and using a 0.8 gauge wire with minimal thermal insulation.

    What needs to happen here? A lot, obviously! These cards need to be pulled off shelves immediately and sent back to be destroyed, and new units refitted with 3x or 4x 6+2-pin standard PCIE power lines before they are sold to the public. Nvidia needs to issue a full apology and refund to every customer. PCI-SIG should immediately pull and ban the usage of 12VHPWR/12V-2X6 cables in ATX 3.0 or later power supplies, and power supply manufacturers need to accept this and pull the units off shelves and either cap the connection off or remove it entirely.

    But will they do this? Hell no. They sunk too much money into this and the customer is going to have to pay for it either with broken and damaged equipment, and they won't stop until a government alphabet soup letter agency steps in, makes legal threats, customers invoke some level of class action lawsuit, and enough people stand up and say enough is enough. Investors also need to re-evaluate the stock price of Nvidia, because clearly, they think they're invincible and can ride out this storm with money. If their stock dropped by half, things would change because it would bite them in the wallet hard. And it needs to happen.

    1. I believe Intel designed the connector and they've been using it in datacenter cards for a while now, they just haven't had a reason to use it for gaming cards yet since they've only been making low end models so far.

  7. Was your PSU a Corsair Professional Series 2022 HX1200i? I switched to an FSP Hydro Ti because, on load, the 12V monitoring from the GPU went down to 11.5V with an extension (3x 8-pin to 12VHPWR). I could have used the adapter from the card, but I wanted a native 12VHPWR cable.

  8. Insanely power-hungry GPU + tiny trash connector? What could go wrong?

    I am surprised no one has modded a 4090/5090 with an IEC320 C14 connector! How hard can that be?

  9. NVIDIA: Don't you guys have thermal cameras to watch the temp of your GPU cables to make sure your house doesn't burn down?
    Yeez.
    Better make the 6xxx series card 1k more expensive to fix your skill issue.

  10. Actually, the melting connector is NOT the problem; it is a symptom of the real problem, which is a load balancing issue. 600W @ 12V = 50 amps / 6 wires = 8.3 amps per wire. If you watch der8auer's video, you can see him measuring 22 amps on one wire while another carries only a couple of amps.

    22 amps not only exceeds the rating of the connector pins; if it's standard 18 ga. wire, it also exceeds the rating of the wire itself, in which case the wire heats up and the plug melts, because you are running 264 watts through one pin that is rated for 100 watts.

    I've seen this happen with the old 8-pin Molex connector when a cheap power supply uses only 8 wires to feed two 8-pin plugs (piggyback). The same thing can happen here: if the wire gauge can't handle twice the normal current, it gets hot and melts the Molex connectors, or possibly even the wire itself if it uses a cheap jacket with a low melting point.

    1. The simplest way to deal with the connector is to increase all dimensions, including pin diameter, by 50%. That would change the footprint from 20mm x 8mm to 30mm x 12mm. You'd still get the advantage of a single connector (board space), only slightly larger and still considerably smaller than three 8-pin Molex connectors. The downside is you'd also have to change the socket on the power supply side, or it would become the weak point. That's currently an Intel ATX 3.0 standard, and the connector on the Nvidia side is both a PCI-SIG standard and part of the ATX 3.0 standard.

      However, you are still going to have problems if you don't fix the load balancing issue and have a single wire trying to carry 20+ amps.
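The arithmetic quoted in the comment above is easy to verify. A minimal sketch (the helper names are mine, purely illustrative):

```python
# Quick check of the per-wire numbers quoted above (illustrative helpers,
# not from any real monitoring tool).


def per_wire_current(total_watts, volts, wires):
    """Current per 12 V wire, assuming a perfectly balanced load."""
    return total_watts / volts / wires


def pin_power(current_amps, volts):
    """Power carried through a single wire/pin at a given current."""
    return current_amps * volts


balanced = per_wire_current(600, 12, 6)  # 600 W / 12 V = 50 A over 6 wires
print(f"{balanced:.2f} A per wire")      # -> 8.33 A per wire

hot = pin_power(22, 12)                  # the 22 A measured on one wire
print(f"{hot:.0f} W through one pin")    # -> 264 W through one pin
```

The balanced case is well within the connector's per-pin rating; the unbalanced case pushes roughly 2.5x the balanced power through a single pin, which is where the heat comes from.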

  11. I would be more concerned if I saw that the connector I am using is in the process of melting… you know, fire being associated with speed should be a graphics effect, not a case of "my 4090/5090 is so fast that it is now burning my 12+4 pin connector".

    Anyway, the worst part here is that this is nVidia. They will never admit they made a mistake. So they won't go back to 8-pin or use two 12+4 pins, because that would be admitting they were wrong when they told you this is futureproof. Hell, they did it in the past too: when the 4080 sold badly, they made fewer units to keep the 1200USD MSRP rather than just lowering the price, and waited until they could release the 4080 Super; rather than admitting a mistake, they brought a new product to the market that improved the value. Or how they sold Apple defective GPUs that caused a lot of warranty claims, then dumped all the blame on Apple, destroying that relationship rather than admitting they messed up. And Apple isn't some small customer; that was a lot of money that then went exclusively to AMD. Hence why Apple will never work with nVidia again, even if AMD or their own chips have nothing remotely close to nVidia's. Instead, nVidia will wait for enough generations to pass so the futureproof promise is "real", then they will consider a new connector so they can say they weren't wrong, they just made something better.

  12. You may think that 400W/500W of power is a lot to handle, but in reality it's not. The power supply side wasn't harmed in any way, but delivering power from 4 parallel outputs into one single socket with a shared ground wire would result in exactly this kind of unnecessary outcome. They could have gone the classic route and delivered it through 4 distinct sockets with their own chokes and fuses to the PWM controller. This is not only a design flaw; it is sheer stupidity to shove in this connector to look sleek/cool while delivering the same amount of power to the PCB. Or maybe it's planned obsolescence all along.

  13. @John Papadopoulos, can I suggest you replace that adapter cable with a Thermal Grizzly Wireview. I have one on my 4080.

  15. This has to be a problem with the Founders Edition.

    My 4090 is an Asus TUF with 0 issues; I just replaced it with a 5090, and the cable was perfect.

    And my Gigabyte 5090 also has no problems; the cable is cool and nice.

  16. I have an RTX 4090 as well. My cable has no issues. My cable never gets hot. I don't overclock my RTX 4090. Did you ever?
    That being said, I run my RTX 4090 for hours and hours in crazy intense games. And we know how Unreal Engine 5 performs, yes? The engine makes things work hard when they really don't need to.

    I also use my GPU for 4k video editing.

    Your experience doesn't prove anything. My experience doesn't really prove anything either. You're talking about using a product over 2 years later.

    We all hope that our products last a long time. But most of these GPUs only have a two or three year warranty. I always purchase an extended warranty because things usually happen after the warranties run out.

    I certainly don't want my RTX 4090 to die. That would be devastating financially. But the way I use my GPU it wouldn't surprise me if it didn't last five, six, or seven years. But I do take care of it so I hope it does.

    One of the main problems with the cord issue is that we don't really know how each of the people who own the gpus used those gpus.

    In my personal experience I have no problems at all but I also don't overclock my GPU. I don't mod my GPU. I don't mess with the drivers outside of installing them. Etc. Any one of those things can alter the outcome of the performance of a product or longevity of it.

    The other thing that factors in here is the cables themselves and how the third party company put their GPU together to begin with.

    Build quality matters. Cable build quality matters.

    Nvidia doesn't put the third party gpus together. Asus, msi, zotac, etc. does.

    If I am wrong about that then correct me. As far as I know, Nvidia just gives them specs and the GPU die or PCB. But the third party has to put it (the final product GPU card) all together and make it work for the consumer. And that includes cables.

    People said there were a lot of problems with the RTX 2080 Ti. I still have mine and it's running strong. I use it to this day.

    People said the RTX 3090 had problems. I still have mine and I use it to this day and it's running strong. Better than ever actually.

    Social media can blow things way out of proportion because somebody can get a story out to a lot of eyes and ears. But when it all comes down to it what is the actual percentage of people who had problems with melting cables?

    From what I heard through the grapevine the 4090s sold well over 400,000 units. Back in the day it was acceptable to have about a 5% failure rate for whatever reasons. So that would be 20,000 gpus. Yes?

    I haven't heard any figures that come even close to a small percentage of that.

    I probably won't read any of the replies here because I never get any kind of alerts through Google for these articles. But that's my take. Have a discussion about it. Cheers !!!

  17. On a side note: the more expensive the card, the bigger the chance it melts. Usually trash-tier hardware has that issue, but for nVidia it's the reverse, as the more expensive cards draw more power through that connector. I would not be surprised if we see more 5090 failures besides the PCIe 5.0 issues plenty of peeps seem to have. (And yeah, I've got a 4090 and have been using that card for week-long raytracing renders (for anims), i.e. loads very close to 100% thanks to the pretty darn good inner workings of the iray API, and still no burned/damaged pins last I checked.)

  18. Official Nvidia statement: "User error, better luck next time, thanks for your money though!"
    A multi-billion-dollar tech giant that doesn't know what they're doing. Your multi-thousand-dollar PC could fry, your house could burn down; it doesn't instill confidence whatsoever.
